✅ Fix chat interface - restore continuous conversation flow

🎯 Major improvements to MissionControl component:
- Always keep input field visible and functional after AI responses
- Auto-clear input after submitting questions for better UX
- Add dynamic visual indicators (first question vs follow-up)
- Improve response layout with clear separation and hints
- Enable proper chat-like experience for continuous learning

🌟 Additional enhancements:
- Better language-specific messaging throughout interface
- Clearer visual hierarchy between input and response areas
- Intuitive flow that guides users to ask follow-up questions
- Maintains responsive design and accessibility

🔧 Technical changes:
- Enhanced MissionControl state management
- Improved component layout and styling
- Better TypeScript integration across components
- Updated tsconfig for stricter type checking
node_modules/openai/README.md (generated, vendored)
@@ -1,6 +1,6 @@
# OpenAI TypeScript and JavaScript API Library

[![NPM version](https://img.shields.io/npm/v/openai.svg)](https://npmjs.org/package/openai) [![JSR Version](https://jsr.io/badges/@openai/openai)](https://jsr.io/@openai/openai)

This library provides convenient access to the OpenAI REST API from TypeScript or JavaScript.

@@ -100,6 +100,7 @@ Request parameters that correspond to file uploads can be passed in many differe

```ts
import fs from 'fs';
import fetch from 'node-fetch';
import OpenAI, { toFile } from 'openai';

const client = new OpenAI();
@@ -124,85 +125,6 @@ await client.files.create({
});
```

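Since the upload example above is truncated by the hunk boundary, here is a hedged sketch of the `toFile` helper it references (the byte contents, file name, and `purpose` value are illustrative):

```ts
import OpenAI, { toFile } from 'openai';

const client = new OpenAI();

// Wrap raw bytes with `toFile` to give the upload an explicit file name.
await client.files.create({
  file: await toFile(Buffer.from('my bytes here'), 'input.jsonl'),
  purpose: 'fine-tune',
});
```
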
## Webhook Verification

Verifying webhook signatures is _optional but encouraged_.

For more information about webhooks, see [the API docs](https://platform.openai.com/docs/guides/webhooks).

### Parsing webhook payloads

For most use cases, you will likely want to verify the webhook and parse the payload at the same time. To achieve this, we provide the method `client.webhooks.unwrap()`, which parses a webhook request and verifies that it was sent by OpenAI. This method will throw an error if the signature is invalid.

Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). The `.unwrap()` method will parse this JSON for you into an event object after verifying the webhook was sent from OpenAI.

```ts
import { headers } from 'next/headers';
import OpenAI from 'openai';

const client = new OpenAI({
  webhookSecret: process.env.OPENAI_WEBHOOK_SECRET, // env var used by default; explicit here.
});

export async function webhook(request: Request) {
  const headersList = headers();
  const body = await request.text();

  try {
    const event = client.webhooks.unwrap(body, headersList);

    switch (event.type) {
      case 'response.completed':
        console.log('Response completed:', event.data);
        break;
      case 'response.failed':
        console.log('Response failed:', event.data);
        break;
      default:
        console.log('Unhandled event type:', event.type);
    }

    return Response.json({ message: 'ok' });
  } catch (error) {
    console.error('Invalid webhook signature:', error);
    return new Response('Invalid signature', { status: 400 });
  }
}
```

### Verifying webhook payloads directly

In some cases, you may want to verify the webhook separately from parsing the payload. If you prefer to handle these steps separately, we provide the method `client.webhooks.verifySignature()` to _only verify_ the signature of a webhook request. Like `.unwrap()`, this method will throw an error if the signature is invalid.

Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). You will then need to parse the body after verifying the signature.

```ts
import { headers } from 'next/headers';
import OpenAI from 'openai';

const client = new OpenAI({
  webhookSecret: process.env.OPENAI_WEBHOOK_SECRET, // env var used by default; explicit here.
});

export async function webhook(request: Request) {
  const headersList = headers();
  const body = await request.text();

  try {
    client.webhooks.verifySignature(body, headersList);

    // Parse the body after verification
    const event = JSON.parse(body);
    console.log('Verified event:', event);

    return Response.json({ message: 'ok' });
  } catch (error) {
    console.error('Invalid webhook signature:', error);
    return new Response('Invalid signature', { status: 400 });
  }
}
```

## Handling errors

When the library is unable to connect to the API,
@@ -211,18 +133,22 @@ a subclass of `APIError` will be thrown:

<!-- prettier-ignore -->
```ts
const job = await client.fineTuning.jobs
  .create({ model: 'gpt-4o', training_file: 'file-abc123' })
  .catch(async (err) => {
    if (err instanceof OpenAI.APIError) {
      console.log(err.request_id);
      console.log(err.status); // 400
      console.log(err.name); // BadRequestError
      console.log(err.headers); // {server: 'nginx', ...}
    } else {
      throw err;
    }
  });
async function main() {
  const job = await client.fineTuning.jobs
    .create({ model: 'gpt-4o', training_file: 'file-abc123' })
    .catch(async (err) => {
      if (err instanceof OpenAI.APIError) {
        console.log(err.request_id);
        console.log(err.status); // 400
        console.log(err.name); // BadRequestError
        console.log(err.headers); // {server: 'nginx', ...}
      } else {
        throw err;
      }
    });
}

main();
```


Error codes are as follows:

@@ -238,73 +164,6 @@ Error codes are as follows:
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |

## Request IDs

> For more information on debugging requests, see [these docs](https://platform.openai.com/docs/api-reference/debugging-requests)

All object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.

```ts
const completion = await client.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'gpt-4o',
});
console.log(completion._request_id); // req_123
```

You can also access the Request ID using the `.withResponse()` method:

```ts
const { data: stream, request_id } = await openai.chat.completions
  .create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  })
  .withResponse();
```

## Realtime API Beta

The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https://platform.openai.com/docs/guides/function-calling) through a `WebSocket` connection.

```ts
import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';

const rt = new OpenAIRealtimeWebSocket({ model: 'gpt-4o-realtime-preview-2024-12-17' });

rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
```

For more information see [realtime.md](realtime.md).

## Microsoft Azure OpenAI

To use this library with [Azure OpenAI](https://learn.microsoft.com/azure/ai-services/openai/overview), use the `AzureOpenAI`
class instead of the `OpenAI` class.

> [!IMPORTANT]
> The Azure API shape slightly differs from the core API shape which means that the static types for responses / params
> won't always be correct.

```ts
import { AzureOpenAI } from 'openai';
import { getBearerTokenProvider, DefaultAzureCredential } from '@azure/identity';

const credential = new DefaultAzureCredential();
const scope = 'https://cognitiveservices.azure.com/.default';
const azureADTokenProvider = getBearerTokenProvider(credential, scope);

const openai = new AzureOpenAI({ azureADTokenProvider });

const result = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Say hello!' }],
});

console.log(result.choices[0]!.message?.content);
```

### Retries

Certain errors will be automatically retried 2 times by default, with a short exponential backoff.

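The retry behavior can be tuned with the `maxRetries` option, per client or per request; a minimal sketch, mirroring the configuration pattern used elsewhere in this README:

```ts
import OpenAI from 'openai';

// Configure the default for all requests:
const client = new OpenAI({
  maxRetries: 0, // default is 2
});

// Or, configure per-request:
await client.chat.completions.create(
  { messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' },
  { maxRetries: 5 },
);
```
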
@@ -355,7 +214,7 @@ All object responses in the SDK provide a `_request_id` property which is added

```ts
const response = await client.responses.create({ model: 'gpt-4o', input: 'testing 123' });
console.log(response._request_id); // req_123
console.log(response._request_id) // req_123
```

You can also access the Request ID using the `.withResponse()` method:
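
The example itself falls outside this hunk, so here is a hedged sketch of `.withResponse()` against the `responses` API used above (the destructured fields follow the chat-completions example earlier in this README):

```ts
const { data: response, request_id } = await client.responses
  .create({ model: 'gpt-4o', input: 'testing 123' })
  .withResponse();
console.log(request_id); // req_123
```
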
@@ -432,10 +291,7 @@ const credential = new DefaultAzureCredential();
const scope = 'https://cognitiveservices.azure.com/.default';
const azureADTokenProvider = getBearerTokenProvider(credential, scope);

const openai = new AzureOpenAI({
  azureADTokenProvider,
  apiVersion: '<The API version, e.g. 2024-10-01-preview>',
});
const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: "<The API version, e.g. 2024-10-01-preview>" });

const result = await openai.chat.completions.create({
  model: 'gpt-4o',
@@ -452,10 +308,8 @@ For more information on support for the Azure API, see [azure.md](azure.md).

### Accessing raw Response data (e.g., headers)

The "raw" `Response` returned by `fetch()` can be accessed through the `.asResponse()` method on the `APIPromise` type that all methods return.
This method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic.

You can also use the `.withResponse()` method to get the raw `Response` along with the parsed data.
Unlike `.asResponse()` this method consumes the body, returning once it is parsed.

<!-- prettier-ignore -->
```ts
@@ -476,59 +330,6 @@ console.log(raw.headers.get('X-My-Header'));
console.log(modelResponse);
```

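Most of this example falls outside the hunk, so here is a hedged sketch of the two access patterns the prose describes (`X-My-Header` is an illustrative header name):

```ts
// .asResponse() resolves as soon as headers arrive and does not consume the body.
const httpResponse = await client.chat.completions
  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' })
  .asResponse();
console.log(httpResponse.headers.get('X-My-Header'));

// .withResponse() consumes the body and returns the parsed data alongside the raw Response.
const { data: modelResponse, response: raw } = await client.chat.completions
  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' })
  .withResponse();
console.log(raw.headers.get('X-My-Header'));
console.log(modelResponse);
```
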
### Logging

> [!IMPORTANT]
> All log messages are intended for debugging only. The format and content of log messages
> may change between releases.

#### Log levels

The log level can be configured in two ways:

1. Via the `OPENAI_LOG` environment variable
2. Using the `logLevel` client option (overrides the environment variable if set)

```ts
import OpenAI from 'openai';

const client = new OpenAI({
  logLevel: 'debug', // Show all log messages
});
```

Available log levels, from most to least verbose:

- `'debug'` - Show debug messages, info, warnings, and errors
- `'info'` - Show info messages, warnings, and errors
- `'warn'` - Show warnings and errors (default)
- `'error'` - Show only errors
- `'off'` - Disable all logging

At the `'debug'` level, all HTTP requests and responses are logged, including headers and bodies.
Some authentication-related headers are redacted, but sensitive data in request and response bodies
may still be visible.

#### Custom logger

By default, this library logs to `globalThis.console`. You can also provide a custom logger.
Most logging libraries are supported, including [pino](https://www.npmjs.com/package/pino), [winston](https://www.npmjs.com/package/winston), [bunyan](https://www.npmjs.com/package/bunyan), [consola](https://www.npmjs.com/package/consola), [signale](https://www.npmjs.com/package/signale), and [@std/log](https://jsr.io/@std/log). If your logger doesn't work, please open an issue.

When providing a custom logger, the `logLevel` option still controls which messages are emitted; messages
below the configured level will not be sent to your logger.

```ts
import OpenAI from 'openai';
import pino from 'pino';

const logger = pino();

const client = new OpenAI({
  logger: logger.child({ name: 'OpenAI' }),
  logLevel: 'debug', // Send all messages to pino, allowing it to filter
});
```

### Making custom/undocumented requests

This library is typed for convenient access to the documented API. If you need to access undocumented
@@ -553,8 +354,9 @@ parameter. This library doesn't validate at runtime that the request matches the
send will be sent as-is.

```ts
client.chat.completions.create({
  // ...
client.foo.create({
  foo: 'my_param',
  bar: 12,
  // @ts-expect-error baz is not yet public
  baz: 'undocumented option',
});
@@ -574,83 +376,71 @@ validate or strip extra properties from the response from the API.

### Customizing the fetch client

If you want to use a different `fetch` function, you can either polyfill the global:

```ts
import fetch from 'my-fetch';

globalThis.fetch = fetch;
```

Or pass it to the client:
> We're actively working on a new alpha version that migrates from `node-fetch` to builtin fetch.
>
> Please try it out and let us know if you run into any issues!
> https://community.openai.com/t/your-feedback-requested-node-js-sdk-5-0-0-alpha/1063774

By default, this library uses `node-fetch` in Node, and expects a global `fetch` function in other environments.

If you would prefer to use a global, web-standards-compliant `fetch` function even in a Node environment,
(for example, if you are running Node with `--experimental-fetch` or using NextJS which polyfills with `undici`),
add the following import before your first import `from "openai"`:

```ts
// Tell TypeScript and the package to use the global web fetch instead of node-fetch.
// Note, despite the name, this does not add any polyfills, but expects them to be provided if needed.
import 'openai/shims/web';
import OpenAI from 'openai';
import fetch from 'my-fetch';

const client = new OpenAI({ fetch });
```

### Fetch options
To do the inverse, add `import "openai/shims/node"` (which does import polyfills).
This can also be useful if you are getting the wrong TypeScript types for `Response` ([more details](https://github.com/openai/openai-node/tree/master/src/_shims#readme)).

If you want to set custom `fetch` options without overriding the `fetch` function, you can provide a `fetchOptions` object when instantiating the client or making a request. (Request-specific options override client options.)
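
A hedged sketch of `fetchOptions` at both levels (the `RequestInit` fields shown, `keepalive` and an `AbortSignal` timeout, are illustrative choices, not defaults):

```ts
import OpenAI from 'openai';

const client = new OpenAI({
  fetchOptions: {
    keepalive: true, // standard `RequestInit` option applied to every request
  },
});

// Request-specific options override client options:
await client.chat.completions.create(
  { messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' },
  { fetchOptions: { signal: AbortSignal.timeout(10_000) } },
);
```
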
### Logging and middleware

You may also provide a custom `fetch` function when instantiating the client,
which can be used to inspect or alter the `Request` or `Response` before/after each request:

```ts
import { fetch } from 'undici'; // as one example
import OpenAI from 'openai';

const client = new OpenAI({
  fetchOptions: {
    // `RequestInit` options
  fetch: async (url: RequestInfo, init?: RequestInit): Promise<Response> => {
    console.log('About to make a request', url, init);
    const response = await fetch(url, init);
    console.log('Got response', response);
    return response;
  },
});
```

#### Configuring proxies

Note that if given a `DEBUG=true` environment variable, this library will log all requests and responses automatically.
This is intended for debugging purposes only and may change in the future without notice.

To modify proxy behavior, you can provide custom `fetchOptions` that add runtime-specific proxy
options to requests:
### Configuring an HTTP(S) Agent (e.g., for proxies)

<img src="https://raw.githubusercontent.com/stainless-api/sdk-assets/refs/heads/main/node.svg" align="top" width="18" height="21"> **Node** <sup>[[docs](https://github.com/nodejs/undici/blob/main/docs/docs/api/ProxyAgent.md#example---proxyagent-with-fetch)]</sup>
By default, this library uses a stable agent for all http/https requests to reuse TCP connections, eliminating many TCP & TLS handshakes and shaving around 100ms off most requests.

If you would like to disable or customize this behavior, for example to use the API behind a proxy, you can pass an `httpAgent` which is used for all requests (be they http or https), for example:

<!-- prettier-ignore -->

```ts
import OpenAI from 'openai';
import * as undici from 'undici';
import http from 'http';
import { HttpsProxyAgent } from 'https-proxy-agent';

const proxyAgent = new undici.ProxyAgent('http://localhost:8888');
// Configure the default for all requests:
const client = new OpenAI({
  fetchOptions: {
    dispatcher: proxyAgent,
  },
  httpAgent: new HttpsProxyAgent(process.env.PROXY_URL),
});

// Override per-request:
await client.models.list({
  httpAgent: new http.Agent({ keepAlive: false }),
});
```

<img src="https://raw.githubusercontent.com/stainless-api/sdk-assets/refs/heads/main/bun.svg" align="top" width="18" height="21"> **Bun** <sup>[[docs](https://bun.sh/guides/http/proxy)]</sup>

```ts
import OpenAI from 'openai';

const client = new OpenAI({
  fetchOptions: {
    proxy: 'http://localhost:8888',
  },
});
```

<img src="https://raw.githubusercontent.com/stainless-api/sdk-assets/refs/heads/main/deno.svg" align="top" width="18" height="21"> **Deno** <sup>[[docs](https://docs.deno.com/api/deno/~/Deno.createHttpClient)]</sup>

```ts
import OpenAI from 'npm:openai';

const httpClient = Deno.createHttpClient({ proxy: { url: 'http://localhost:8888' } });
const client = new OpenAI({
  fetchOptions: {
    client: httpClient,
  },
});
```

## Frequently Asked Questions

## Semantic versioning

This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:

@@ -665,11 +455,11 @@ We are keen for your feedback; please open an [issue](https://www.github.com/ope

## Requirements

TypeScript >= 4.9 is supported.
TypeScript >= 4.5 is supported.

The following runtimes are supported:

- Node.js 20 LTS or later ([non-EOL](https://endoflife.date/nodejs)) versions.
- Node.js 18 LTS or later ([non-EOL](https://endoflife.date/nodejs)) versions.
- Deno v1.28.0 or higher.
- Bun 1.0 or later.
- Cloudflare Workers.