The OpenAI Java SDK provides convenient access to the OpenAI REST API from applications written in Java.
The REST API documentation can be found on platform.openai.com. Javadocs are available on javadoc.io.
implementation("com.openai:openai-java:1.5.1")
<dependency> <groupId>com.openai</groupId> <artifactId>openai-java</artifactId> <version>1.5.1</version> </dependency>
This library requires Java 8 or later.
See the `openai-java-example` directory for complete and runnable examples.
The primary API for interacting with OpenAI models is the Responses API. You can generate text from the model with the code below.
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.models.ChatModel;
import com.openai.models.responses.Response;
import com.openai.models.responses.ResponseCreateParams;

// Configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID` and `OPENAI_PROJECT_ID` environment variables
OpenAIClient client = OpenAIOkHttpClient.fromEnv();

ResponseCreateParams params = ResponseCreateParams.builder()
    .input("Say this is a test")
    .model(ChatModel.GPT_4_1)
    .build();
Response response = client.responses().create(params);
```
The previous standard (supported indefinitely) for generating text is the Chat Completions API. You can use that API to generate text from the model with the code below.
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletion;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

// Configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID`, `OPENAI_PROJECT_ID` and `OPENAI_BASE_URL` environment variables
OpenAIClient client = OpenAIOkHttpClient.fromEnv();

ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
    .addUserMessage("Say this is a test")
    .model(ChatModel.GPT_4_1)
    .build();
ChatCompletion chatCompletion = client.chat().completions().create(params);
```
Configure the client using environment variables:
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;

// Configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID`, `OPENAI_PROJECT_ID` and `OPENAI_BASE_URL` environment variables
OpenAIClient client = OpenAIOkHttpClient.fromEnv();
```
Or manually:
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;

OpenAIClient client = OpenAIOkHttpClient.builder()
    .apiKey("My API Key")
    .build();
```
Or using a combination of the two approaches:
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;

OpenAIClient client = OpenAIOkHttpClient.builder()
    // Configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID`, `OPENAI_PROJECT_ID` and `OPENAI_BASE_URL` environment variables
    .fromEnv()
    .apiKey("My API Key")
    .build();
```
See this table for the available options:
| Setter         | Environment variable | Required | Default value                 |
| -------------- | -------------------- | -------- | ----------------------------- |
| `apiKey`       | `OPENAI_API_KEY`     | true     | -                             |
| `organization` | `OPENAI_ORG_ID`      | false    | -                             |
| `project`      | `OPENAI_PROJECT_ID`  | false    | -                             |
| `baseUrl`      | `OPENAI_BASE_URL`    | true     | `"https://api.openai.com/v1"` |
> [!TIP]
> Don't create more than one client in the same application. Each client has a connection pool and thread pools, which are more efficient to share between requests.
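As a minimal sketch of that advice (the holder class and its name are illustrative, not part of the SDK), one common pattern is to construct the client once and reference it everywhere:

```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;

// Hypothetical holder: constructs the client once at startup and shares it application-wide.
public final class OpenAIClientHolder {
    public static final OpenAIClient CLIENT = OpenAIOkHttpClient.fromEnv();

    private OpenAIClientHolder() {}
}
```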
To send a request to the OpenAI API, build an instance of some `Params` class and pass it to the corresponding client method. When the response is received, it will be deserialized into an instance of a Java class.

For example, `client.chat().completions().create(...)` should be called with an instance of `ChatCompletionCreateParams`, and it will return an instance of `ChatCompletion`.
Each class in the SDK has an associated builder or factory method for constructing it.
Each class is immutable once constructed. If the class has an associated builder, then it has a `toBuilder()` method, which can be used to convert it back to a builder for making a modified copy. Because each class is immutable, builder modification will never affect already built class instances.
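For example, a small sketch of making a modified copy with `toBuilder()`, reusing the `ChatCompletionCreateParams` builder shown earlier:

```java
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

ChatCompletionCreateParams original = ChatCompletionCreateParams.builder()
    .addUserMessage("Say this is a test")
    .model(ChatModel.GPT_4_1)
    .build();

// `toBuilder()` copies the immutable instance into a fresh builder;
// building from it leaves `original` unchanged.
ChatCompletionCreateParams modified = original.toBuilder()
    .addUserMessage("And say this is another test")
    .build();
```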
The default client is synchronous. To switch to asynchronous execution, call the `async()` method:
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletion;
import com.openai.models.chat.completions.ChatCompletionCreateParams;
import java.util.concurrent.CompletableFuture;

// Configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID`, `OPENAI_PROJECT_ID` and `OPENAI_BASE_URL` environment variables
OpenAIClient client = OpenAIOkHttpClient.fromEnv();

ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
    .addUserMessage("Say this is a test")
    .model(ChatModel.GPT_4_1)
    .build();
CompletableFuture<ChatCompletion> chatCompletion = client.async().chat().completions().create(params);
```
Or create an asynchronous client from the beginning:
```java
import com.openai.client.OpenAIClientAsync;
import com.openai.client.okhttp.OpenAIOkHttpClientAsync;
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletion;
import com.openai.models.chat.completions.ChatCompletionCreateParams;
import java.util.concurrent.CompletableFuture;

// Configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID`, `OPENAI_PROJECT_ID` and `OPENAI_BASE_URL` environment variables
OpenAIClientAsync client = OpenAIOkHttpClientAsync.fromEnv();

ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
    .addUserMessage("Say this is a test")
    .model(ChatModel.GPT_4_1)
    .build();
CompletableFuture<ChatCompletion> chatCompletion = client.chat().completions().create(params);
```
The asynchronous client supports the same options as the synchronous one, except most methods return `CompletableFuture`s.
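Consuming the returned future is plain JDK `CompletableFuture` usage, not an SDK-specific API. Continuing the snippet above:

```java
// Process the result asynchronously; `join()` blocks only if the
// result is needed before the current thread can continue.
chatCompletion
    .thenAccept(completion -> completion.choices().forEach(System.out::println))
    .join();
```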
The SDK defines methods that return response "chunk" streams, where each chunk can be individually processed as soon as it arrives instead of waiting on the full response. Streaming methods generally correspond to SSE or JSONL responses.
Some of these methods may have streaming and non-streaming variants, but a streaming method will always have a `Streaming` suffix in its name, even if it doesn't have a non-streaming variant.
These streaming methods return `StreamResponse` for synchronous clients:
```java
import com.openai.core.http.StreamResponse;
import com.openai.models.chat.completions.ChatCompletionChunk;

try (StreamResponse<ChatCompletionChunk> streamResponse =
        client.chat().completions().createStreaming(params)) {
    streamResponse.stream().forEach(chunk -> {
        System.out.println(chunk);
    });
    System.out.println("No more chunks!");
}
```
Or `AsyncStreamResponse` for asynchronous clients:
```java
import com.openai.core.http.AsyncStreamResponse;
import com.openai.models.chat.completions.ChatCompletionChunk;
import java.util.Optional;

client.async().chat().completions().createStreaming(params).subscribe(chunk -> {
    System.out.println(chunk);
});

// If you need to handle errors or completion of the stream
client.async().chat().completions().createStreaming(params).subscribe(new AsyncStreamResponse.Handler<>() {
    @Override
    public void onNext(ChatCompletionChunk chunk) {
        System.out.println(chunk);
    }

    @Override
    public void onComplete(Optional<Throwable> error) {
        if (error.isPresent()) {
            System.out.println("Something went wrong!");
            throw new RuntimeException(error.get());
        } else {
            System.out.println("No more chunks!");
        }
    }
});

// Or use futures
client.async().chat().completions().createStreaming(params)
    .subscribe(chunk -> {
        System.out.println(chunk);
    })
    .onCompleteFuture()
    .whenComplete((unused, error) -> {
        if (error != null) {
            System.out.println("Something went wrong!");
            throw new RuntimeException(error);
        } else {
            System.out.println("No more chunks!");
        }
    });
```
Async streaming uses a dedicated per-client cached thread pool `Executor` to stream without blocking the current thread. This default is suitable for most purposes.

To use a different `Executor`, configure the subscription using the `executor` parameter:
```java
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

Executor executor = Executors.newFixedThreadPool(4);
client.async().chat().completions().createStreaming(params).subscribe(
    chunk -> System.out.println(chunk), executor
);
```
Or configure the client globally using the `streamHandlerExecutor` method:
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import java.util.concurrent.Executors;

OpenAIClient client = OpenAIOkHttpClient.builder()
    .fromEnv()
    .streamHandlerExecutor(Executors.newFixedThreadPool(4))
    .build();
```
The SDK provides conveniences for streamed chat completions. A `ChatCompletionAccumulator` can record the stream of chat completion chunks in the response as they are processed and accumulate a `ChatCompletion` object similar to the one that would have been returned by the non-streaming API.
For a synchronous response, add a `Stream.peek()` call to the stream pipeline to accumulate each chunk:
```java
import com.openai.core.http.StreamResponse;
import com.openai.helpers.ChatCompletionAccumulator;
import com.openai.models.chat.completions.ChatCompletion;
import com.openai.models.chat.completions.ChatCompletionChunk;

ChatCompletionAccumulator chatCompletionAccumulator = ChatCompletionAccumulator.create();

try (StreamResponse<ChatCompletionChunk> streamResponse =
        client.chat().completions().createStreaming(createParams)) {
    streamResponse.stream()
        .peek(chatCompletionAccumulator::accumulate)
        .flatMap(completion -> completion.choices().stream())
        .flatMap(choice -> choice.delta().content().stream())
        .forEach(System.out::print);
}

ChatCompletion chatCompletion = chatCompletionAccumulator.chatCompletion();
```
For an asynchronous response, add the `ChatCompletionAccumulator` to the `subscribe()` call:
```java
import com.openai.helpers.ChatCompletionAccumulator;
import com.openai.models.chat.completions.ChatCompletion;

ChatCompletionAccumulator chatCompletionAccumulator = ChatCompletionAccumulator.create();

client.chat()
    .completions()
    .createStreaming(createParams)
    .subscribe(chunk -> chatCompletionAccumulator.accumulate(chunk).choices().stream()
        .flatMap(choice -> choice.delta().content().stream())
        .forEach(System.out::print))
    .onCompleteFuture()
    .join();

ChatCompletion chatCompletion = chatCompletionAccumulator.chatCompletion();
```
The SDK defines methods that accept files.
To upload a file, pass a `Path`:
```java
import com.openai.models.files.FileCreateParams;
import com.openai.models.files.FileObject;
import com.openai.models.files.FilePurpose;
import java.nio.file.Paths;

FileCreateParams params = FileCreateParams.builder()
    .purpose(FilePurpose.FINE_TUNE)
    .file(Paths.get("input.jsonl"))
    .build();
FileObject fileObject = client.files().create(params);
```
Or an arbitrary `InputStream`:
```java
import com.openai.models.files.FileCreateParams;
import com.openai.models.files.FileObject;
import com.openai.models.files.FilePurpose;
import java.net.URL;

FileCreateParams params = FileCreateParams.builder()
    .purpose(FilePurpose.FINE_TUNE)
    .file(new URL("https://example.com/input.jsonl").openStream())
    .build();
FileObject fileObject = client.files().create(params);
```
Or a `byte[]` array:
```java
import com.openai.models.files.FileCreateParams;
import com.openai.models.files.FileObject;
import com.openai.models.files.FilePurpose;

FileCreateParams params = FileCreateParams.builder()
    .purpose(FilePurpose.FINE_TUNE)
    .file("content".getBytes())
    .build();
FileObject fileObject = client.files().create(params);
```
Note that when passing a non-`Path` value, its filename is unknown, so it will not be included in the request. To manually set a filename, pass a `MultipartField`:
```java
import com.openai.core.MultipartField;
import com.openai.models.files.FileCreateParams;
import com.openai.models.files.FileObject;
import com.openai.models.files.FilePurpose;
import java.io.InputStream;
import java.net.URL;

FileCreateParams params = FileCreateParams.builder()
    .purpose(FilePurpose.FINE_TUNE)
    .file(MultipartField.<InputStream>builder()
        .value(new URL("https://example.com/input.jsonl").openStream())
        .filename("input.jsonl")
        .build())
    .build();
FileObject fileObject = client.files().create(params);
```
The SDK defines methods that return binary responses, which are used for API responses that shouldn't necessarily be parsed, like non-JSON data.
These methods return `HttpResponse`:
```java
import com.openai.core.http.HttpResponse;
import com.openai.models.files.FileContentParams;

FileContentParams params = FileContentParams.builder()
    .fileId("file_id")
    .build();
HttpResponse response = client.files().content(params);
```
To save the response content to a file, use the `Files.copy(...)` method:
```java
import com.openai.core.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

try (HttpResponse response = client.files().content(params)) {
    Files.copy(
        response.body(),
        Paths.get(path),
        StandardCopyOption.REPLACE_EXISTING
    );
} catch (Exception e) {
    System.out.println("Something went wrong!");
    throw new RuntimeException(e);
}
```
Or transfer the response content to any `OutputStream`:
```java
import com.openai.core.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Paths;

try (HttpResponse response = client.files().content(params)) {
    response.body().transferTo(Files.newOutputStream(Paths.get(path)));
} catch (Exception e) {
    System.out.println("Something went wrong!");
    throw new RuntimeException(e);
}
```
The SDK defines methods that deserialize responses into instances of Java classes. However, these methods don't provide access to the response headers, status code, or the raw response body.
To access this data, prefix any HTTP method call on a client or service with `withRawResponse()`:
```java
import com.openai.core.http.Headers;
import com.openai.core.http.HttpResponseFor;
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletion;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
    .addUserMessage("Say this is a test")
    .model(ChatModel.GPT_4_1)
    .build();
HttpResponseFor<ChatCompletion> chatCompletion =
    client.chat().completions().withRawResponse().create(params);

int statusCode = chatCompletion.statusCode();
Headers headers = chatCompletion.headers();
```
You can still deserialize the response into an instance of a Java class if needed:
```java
import com.openai.models.chat.completions.ChatCompletion;

ChatCompletion parsedChatCompletion = chatCompletion.parse();
```
For more information on debugging requests, see the API docs.
When using raw responses, you can access the `x-request-id` response header using the `requestId()` method:
```java
import com.openai.core.http.HttpResponseFor;
import com.openai.models.chat.completions.ChatCompletion;
import java.util.Optional;

HttpResponseFor<ChatCompletion> chatCompletion =
    client.chat().completions().withRawResponse().create(params);

Optional<String> requestId = chatCompletion.requestId();
```
This can be used to quickly log failing requests and report them back to OpenAI.
The SDK throws custom unchecked exception types:

- `OpenAIServiceException`: Base class for HTTP errors. See this table for which exception subclass is thrown for each HTTP status code:

  | Status | Exception                       |
  | ------ | ------------------------------- |
  | 400    | `BadRequestException`           |
  | 401    | `UnauthorizedException`         |
  | 403    | `PermissionDeniedException`     |
  | 404    | `NotFoundException`             |
  | 422    | `UnprocessableEntityException`  |
  | 429    | `RateLimitException`            |
  | 5xx    | `InternalServerException`       |
  | others | `UnexpectedStatusCodeException` |

  `SseException` is thrown for errors encountered during SSE streaming after a successful initial HTTP response.

- `OpenAIIoException`: I/O networking errors.
- `OpenAIInvalidDataException`: Failure to interpret successfully parsed data. For example, when accessing a property that's supposed to be required, but the API unexpectedly omitted it from the response.
- `OpenAIException`: Base class for all exceptions. Most errors will result in one of the previously mentioned ones, but completely generic errors may be thrown using the base class.
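As a sketch of handling this hierarchy (this assumes the exception classes live in the `com.openai.errors` package and that `OpenAIServiceException` exposes a `statusCode()` accessor; check the Javadocs for the exact API):

```java
import com.openai.errors.OpenAIException;
import com.openai.errors.OpenAIIoException;
import com.openai.errors.OpenAIServiceException;

try {
    client.chat().completions().create(params);
} catch (OpenAIServiceException e) {
    // HTTP error from the API; the concrete subclass depends on the status code
    System.err.println("API error, status " + e.statusCode());
} catch (OpenAIIoException e) {
    // Networking problem; often worth retrying
    System.err.println("I/O error: " + e.getMessage());
} catch (OpenAIException e) {
    // Catch-all for any other SDK error
    System.err.println("Unexpected error: " + e.getMessage());
}
```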
For methods that return a paginated list of results, this library provides convenient ways to access the results either one page at a time or item-by-item across all pages.
To iterate through all results across all pages, you can use `autoPager`, which automatically handles fetching more pages for you:
```java
import com.openai.models.finetuning.jobs.FineTuningJob;
import com.openai.models.finetuning.jobs.JobListPage;

// As an Iterable:
JobListPage page = client.fineTuning().jobs().list(params);
for (FineTuningJob job : page.autoPager()) {
    System.out.println(job);
}

// As a Stream:
client.fineTuning().jobs().list(params).autoPager().stream()
    .limit(50)
    .forEach(job -> System.out.println(job));
```

With the asynchronous client:

```java
// Using forEach, which returns CompletableFuture<Void>:
asyncClient.fineTuning().jobs().list(params).autoPager()
    .forEach(job -> System.out.println(job), executor);
```
If none of the above helpers meet your needs, you can also manually request pages one-by-one. A page of results has a `data()` method to fetch the list of objects, as well as top-level `response` and other methods to fetch top-level data about the page. It also has `hasNextPage`, `getNextPage`, and `getNextPageParams` methods to help with pagination.
```java
import com.openai.models.finetuning.jobs.FineTuningJob;
import com.openai.models.finetuning.jobs.JobListPage;

JobListPage page = client.fineTuning().jobs().list(params);
while (page != null) {
    for (FineTuningJob job : page.data()) {
        System.out.println(job);
    }

    page = page.getNextPage().orElse(null);
}
```
The SDK uses the standard OkHttp logging interceptor.
Enable logging by setting the `OPENAI_LOG` environment variable to `info`:
```sh
$ export OPENAI_LOG=info
```
Or to `debug` for more verbose logging:
```sh
$ export OPENAI_LOG=debug
```
The SDK depends on Jackson for JSON serialization/deserialization. It is compatible with version 2.13.4 or higher, but depends on version 2.18.2 by default.
The SDK throws an exception if it detects an incompatible Jackson version at runtime (e.g. if the default version was overridden in your Maven or Gradle config).
If the SDK threw an exception, but you're certain the version is compatible, then disable the version check using the `checkJacksonVersionCompatibility` method on `OpenAIOkHttpClient` or `OpenAIOkHttpClientAsync`.
> [!CAUTION]
> We make no guarantee that the SDK works correctly when the Jackson version check is disabled.
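A minimal sketch of disabling the check, assuming the `checkJacksonVersionCompatibility` setter takes a boolean:

```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;

OpenAIClient client = OpenAIOkHttpClient.builder()
    .fromEnv()
    // Skip the runtime Jackson version check; only do this after verifying
    // that the Jackson version on your classpath is actually compatible.
    .checkJacksonVersionCompatibility(false)
    .build();
```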
To use this library with Azure OpenAI, use the same OpenAI client builder but with the Azure-specific configuration.
```java
OpenAIClient client = OpenAIOkHttpClient.builder()
    // Gets the API key and endpoint from the `AZURE_OPENAI_KEY` and `OPENAI_BASE_URL` environment variables, respectively
    .fromEnv()
    // Set the Azure Entra ID
    .credential(BearerTokenCredential.create(AuthenticationUtil.getBearerTokenSupplier(
        new DefaultAzureCredentialBuilder().build(), "https://cognitiveservices.azure.com/.default")))
    .build();
```
See the complete Azure OpenAI example in the `openai-java-example` directory. The other examples in the directory also work with Azure as long as the client is configured to use it.
The SDK automatically retries 2 times by default, with a short exponential backoff.
Only the following error types are retried:
- Connection errors (for example, due to a network connectivity problem)
- 408 Request Timeout
- 409 Conflict
- 429 Rate Limit
- 5xx Internal
The API may also explicitly instruct the SDK to retry or not retry a response.
To set a custom number of retries, configure the client using the `maxRetries` method:
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;

OpenAIClient client = OpenAIOkHttpClient.builder()
    .fromEnv()
    .maxRetries(4)
    .build();
```
Requests time out after 10 minutes by default.
To set a custom timeout, configure the method call using the `timeout` method:
```java
import com.openai.core.RequestOptions;
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletion;
import com.openai.models.chat.completions.ChatCompletionCreateParams;
import java.time.Duration;

ChatCompletion chatCompletion = client.chat().completions().create(
    params, RequestOptions.builder().timeout(Duration.ofSeconds(30)).build()
);
```
Or configure the default for all method calls at the client level:
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import java.time.Duration;

OpenAIClient client = OpenAIOkHttpClient.builder()
    .fromEnv()
    .timeout(Duration.ofSeconds(30))
    .build();
```
To route requests through a proxy, configure the client using the `proxy` method:
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import java.net.InetSocketAddress;
import java.net.Proxy;

OpenAIClient client = OpenAIOkHttpClient.builder()
    .fromEnv()
    // `InetSocketAddress` expects a hostname, not a URL
    .proxy(new Proxy(
        Proxy.Type.HTTP, new InetSocketAddress("example.com", 8080)
    ))
    .build();
```
The SDK consists of three artifacts:

- `openai-java-core`
  - Contains core SDK logic
  - Does not depend on OkHttp
  - Exposes `OpenAIClient`, `OpenAIClientAsync`, `OpenAIClientImpl`, and `OpenAIClientAsyncImpl`, all of which can work with any HTTP client
- `openai-java-client-okhttp`
  - Depends on OkHttp
  - Exposes `OpenAIOkHttpClient` and `OpenAIOkHttpClientAsync`, which provide a way to construct `OpenAIClientImpl` and `OpenAIClientAsyncImpl`, respectively, using OkHttp
- `openai-java`
  - Depends on and exposes the APIs of both `openai-java-core` and `openai-java-client-okhttp`
  - Does not have its own logic
This structure allows replacing the SDK's default HTTP client without pulling in unnecessary dependencies.
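For example, a consumer bringing its own HTTP client might declare only the core artifact in Gradle (the version shown is the one used earlier in this README):

```kotlin
// Core SDK only; no OkHttp transitive dependency
implementation("com.openai:openai-java-core:1.5.1")
```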
Customized `OkHttpClient`
> [!TIP]
> Try the available network options before replacing the default client.
To use a customized `OkHttpClient`:

1. Replace your `openai-java` dependency with `openai-java-core`
2. Copy `openai-java-client-okhttp`'s `OkHttpClient` class into your code and customize it
3. Construct `OpenAIClientImpl` or `OpenAIClientAsyncImpl`, similarly to `OpenAIOkHttpClient` or `OpenAIOkHttpClientAsync`, using your customized client
To use a completely custom HTTP client:

1. Replace your `openai-java` dependency with `openai-java-core`
2. Write a class that implements the `HttpClient` interface
3. Construct `OpenAIClientImpl` or `OpenAIClientAsyncImpl`, similarly to `OpenAIOkHttpClient` or `OpenAIOkHttpClientAsync`, using your new client class
The SDK is typed for convenient usage of the documented API. However, it also supports working with undocumented or not yet supported parts of the API.
To set undocumented parameters, call the `putAdditionalHeader`, `putAdditionalQueryParam`, or `putAdditionalBodyProperty` methods on any `Params` class:
```java
import com.openai.core.JsonValue;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
    .putAdditionalHeader("Secret-Header", "42")
    .putAdditionalQueryParam("secret_query_param", "42")
    .putAdditionalBodyProperty("secretProperty", JsonValue.from("42"))
    .build();
```
These can be accessed on the built object later using the `_additionalHeaders()`, `_additionalQueryParams()`, and `_additionalBodyProperties()` methods.
To set undocumented parameters on nested headers, query params, or body classes, call the `putAdditionalProperty` method on the nested class:
```java
import com.openai.core.JsonValue;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
    .responseFormat(ChatCompletionCreateParams.ResponseFormat.builder()
        .putAdditionalProperty("secretProperty", JsonValue.from("42"))
        .build())
    .build();
```
These properties can be accessed on the nested built object later using the `_additionalProperties()` method.
To set a documented parameter or property to an undocumented or not yet supported value, pass a `JsonValue` object to its setter:
```java
import com.openai.core.JsonValue;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
    .addUserMessage("Say this is a test")
    .model(JsonValue.from(42))
    .build();
```
The most straightforward way to create a `JsonValue` is using its `from(...)` method:
```java
import com.openai.core.JsonValue;
import java.util.List;
import java.util.Map;

// Create primitive JSON values
JsonValue nullValue = JsonValue.from(null);
JsonValue booleanValue = JsonValue.from(true);
JsonValue numberValue = JsonValue.from(42);
JsonValue stringValue = JsonValue.from("Hello World!");

// Create a JSON array value equivalent to `["Hello", "World"]`
JsonValue arrayValue = JsonValue.from(List.of(
    "Hello", "World"
));

// Create a JSON object value equivalent to `{ "a": 1, "b": 2 }`
JsonValue objectValue = JsonValue.from(Map.of(
    "a", 1,
    "b", 2
));

// Create an arbitrarily nested JSON equivalent to:
// {
//   "a": [1, 2],
//   "b": [3, 4]
// }
JsonValue complexValue = JsonValue.from(Map.of(
    "a", List.of(
        1, 2
    ),
    "b", List.of(
        3, 4
    )
));
```
Normally a `Builder` class's `build` method will throw `IllegalStateException` if any required parameter or property is unset.

To forcibly omit a required parameter or property, pass `JsonMissing`:
```java
import com.openai.core.JsonMissing;
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
    .model(ChatModel.GPT_4_1)
    .messages(JsonMissing.of())
    .build();
```
To access undocumented response properties, call the `_additionalProperties()` method:
```java
import com.openai.core.JsonValue;
import java.util.Map;

Map<String, JsonValue> additionalProperties =
    client.chat().completions().create(params)._additionalProperties();
JsonValue secretPropertyValue = additionalProperties.get("secretProperty");

String result = secretPropertyValue.accept(new JsonValue.Visitor<>() {
    @Override
    public String visitNull() {
        return "It's null!";
    }

    @Override
    public String visitBoolean(boolean value) {
        return "It's a boolean!";
    }

    @Override
    public String visitNumber(Number value) {
        return "It's a number!";
    }

    // Other methods include `visitMissing`, `visitString`, `visitArray`, and `visitObject`
    //
    // The default implementation of each unimplemented method delegates to `visitDefault`,
    // which throws by default, but can also be overridden
});
```
To access a property's raw JSON value, which may be undocumented, call its `_`-prefixed method:
```java
import com.openai.core.JsonField;
import com.openai.models.chat.completions.ChatCompletionMessageParam;
import java.util.List;
import java.util.Optional;

JsonField<List<ChatCompletionMessageParam>> messages =
    client.chat().completions().create(params)._messages();

if (messages.isMissing()) {
    // The property is absent from the JSON response
} else if (messages.isNull()) {
    // The property was set to literal null
} else {
    // Check if value was provided as a string
    // Other methods include `asNumber()`, `asBoolean()`, etc.
    Optional<String> jsonString = messages.asString();

    // Try to deserialize into a custom type
    MyClass myObject = messages.asUnknown().orElseThrow().convert(MyClass.class);
}
```
In rare cases, the API may return a response that doesn't match the expected type. For example, the SDK may expect a property to contain a `String`, but the API could return something else.
By default, the SDK will not throw an exception in this case. It will throw `OpenAIInvalidDataException` only if you directly access the property.
If you would prefer to check that the response is completely well-typed upfront, then either call `validate()`:
```java
import com.openai.models.chat.completions.ChatCompletion;

ChatCompletion chatCompletion = client.chat().completions().create(params).validate();
```
Or configure the method call to validate the response using the `responseValidation` method:
```java
import com.openai.core.RequestOptions;
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletion;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

ChatCompletion chatCompletion = client.chat().completions().create(
    params, RequestOptions.builder().responseValidation(true).build()
);
```
Or configure the default for all method calls at the client level:
```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;

OpenAIClient client = OpenAIOkHttpClient.builder()
    .fromEnv()
    .responseValidation(true)
    .build();
```
Java `enum` classes are not trivially forwards compatible. Using them in the SDK could cause runtime exceptions if the API is updated to respond with a new enum value.
Using `JsonField<T>` enables a few features:
- Allowing usage of undocumented API functionality
- Lazily validating the API response against the expected shape
- Representing absent vs explicitly null values
Why don't you use `data` classes?

It is not backwards compatible to add new fields to a data class, and we don't want to introduce a breaking change every time we add a field to a class.
Checked exceptions are widely considered a mistake in the Java programming language. In fact, they were omitted from Kotlin for this reason.
Checked exceptions:
- Are verbose to handle
- Encourage error handling at the wrong level of abstraction, where nothing can be done about the error
- Are tedious to propagate due to the function coloring problem
- Don't play well with lambdas (also due to the function coloring problem; see the sketch after this list)
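To illustrate the lambda friction with plain JDK code (`MyParser.parse` is a hypothetical method declaring `throws IOException`):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

List<String> inputs = Arrays.asList("a", "b");

List<String> results = inputs.stream()
    .map(input -> {
        try {
            // `Function.apply` declares no checked exceptions, so a method
            // declaring `throws IOException` cannot be passed to `Stream.map`
            // directly; every call site needs this wrapping boilerplate.
            return MyParser.parse(input);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    })
    .collect(Collectors.toList());
```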
This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:
- Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals.)
- Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an issue with questions, bugs, or suggestions.