Commit 8e78028

feat (ai/core): add onFinish callback to streamText. (#1697)

1 parent: d1a978b

10 files changed: +1187 −25 lines
Lines changed: 5 additions & 0 deletions

```diff
@@ -0,0 +1,5 @@
+---
+'ai': patch
+---
+
+feat (ai/core): add onFinish callback to streamText
```

.changeset/old-hotels-attack.md
Lines changed: 5 additions & 0 deletions

```diff
@@ -0,0 +1,5 @@
+---
+'ai': patch
+---
+
+feat (ai/core): add text, toolCalls, and toolResults promises to StreamTextResult (matching the generateText result API with async methods)
```

.changeset/sour-drinks-judge.md
Lines changed: 5 additions & 0 deletions

```diff
@@ -0,0 +1,5 @@
+---
+'@ai-sdk/provider': patch
+---
+
+feat (ai/provider): add "unknown" finish reason (for models that don't provide a finish reason)
```
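The new `"unknown"` finish reason gives provider adapters a defined fallback when an upstream API omits the stop reason entirely. A hypothetical sketch of such a normalization helper — the union type and the raw provider strings below are illustrative assumptions, not the SDK's exact definitions:

```typescript
// Illustrative finish-reason vocabulary; 'unknown' is the new fallback
// for providers that do not report one. (Not the SDK's exact union type.)
type FinishReason =
  | 'stop'
  | 'length'
  | 'content-filter'
  | 'tool-calls'
  | 'error'
  | 'unknown';

// Hypothetical helper: map a provider's raw finish reason onto the
// shared vocabulary, falling back to 'unknown' when it is missing
// or unrecognized.
function mapFinishReason(raw: string | null | undefined): FinishReason {
  switch (raw) {
    case 'stop':
      return 'stop';
    case 'length':
    case 'max_tokens':
      return 'length';
    case 'tool_calls':
      return 'tool-calls';
    case 'content_filter':
      return 'content-filter';
    default:
      return 'unknown';
  }
}
```

Routing every unhandled value through the `default` branch is what keeps downstream consumers from ever seeing an undefined finish reason.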

content/docs/03-ai-sdk-core/05-generating-text.mdx
Lines changed: 46 additions & 3 deletions

````diff
@@ -3,11 +3,20 @@ title: Generating Text
 description: Learn how to generate text with the Vercel AI SDK.
 ---
 
-# Generating Text
+# Generating and Streaming Text
 
 Large language models (LLMs) can generate text in response to a prompt, which can contain instructions and information to process.
 For example, you can ask a model to come up with a recipe, draft an email, or summarize a document.
 
+The Vercel AI SDK Core provides two functions to generate text and stream it from LLMs:
+
+- [`generateText`](#generatetext): Generates text for a given prompt and model.
+- [`streamText`](#streamtext): Streams text from a given prompt and model.
+
+Advanced LLM features such as [tool calling](./tools-and-tool-calling) and [structured data generation](./generating-structured-data) are built on top of text generation.
+
+## `generateText`
+
 You can generate text using the [`generateText`](/docs/reference/ai-sdk-core/generate-text) function. This function is ideal for non-interactive use cases where you need to write text (e.g. drafting email or summarizing web pages) and for agents that use tools.
 
 ```tsx
@@ -32,7 +41,7 @@ const { text } = await generateText({
 });
 ```
 
-## Streaming Text
+## `streamText`
 
 Depending on your model and prompt, it can take a large language model (LLM) up to a minute to finish generating its response. This delay can be unacceptable for interactive use cases such as chatbots or real-time applications, where users expect immediate responses.
 
@@ -73,4 +82,38 @@ while (true) {
 }
 ```
 
-Advanced LLM features such as [tool calling](./tools-and-tool-calling) and [structured data generation](./generating-structured-data) are built on top of text generation.
+### `onFinish` callback
+
+When using `streamText`, you can provide an `onFinish` callback that is triggered when the model finishes generating the response and all tool executions.
+
+```tsx
+import { streamText } from 'ai';
+
+const result = await streamText({
+  model,
+  prompt: 'Invent a new holiday and describe its traditions.',
+  onFinish({ text, toolCalls, toolResults, finishReason, usage }) {
+    // your own logic, e.g. for saving the chat history or recording usage
+  },
+});
+```
+
+### Result helper functions
+
+The result object of `streamText` contains several helper functions to make the integration into [AI SDK UI](/docs/ai-sdk-ui) easier:
+
+- `result.toAIStream()`: Creates an AI stream object (with tool calls etc.) that can be used with `StreamingTextResponse()` and `StreamData`.
+- `result.toAIStreamResponse()`: Creates an AI stream response (with tool calls etc.).
+- `result.toTextStreamResponse()`: Creates a simple text stream response.
+- `result.pipeTextStreamToResponse()`: Writes text delta output to a Node.js response-like object.
+- `result.pipeAIStreamToResponse()`: Writes AI stream delta output to a Node.js response-like object.
+
+### Result promises
+
+The result object of `streamText` contains several promises that resolve when all required data is available:
+
+- `result.text`: The generated text.
+- `result.toolCalls`: The tool calls made during text generation.
+- `result.toolResults`: The tool results from the tool calls.
+- `result.finishReason`: The reason the model finished generating text.
+- `result.usage`: The usage of the model during text generation.
````
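The "result promises" added in this commit can be understood as deferred values: each resolves only once the stream has been fully consumed. A self-contained sketch of that pattern — a mock, not the SDK's actual implementation; `MockStreamResult` and its chunk-driven stream are illustrative only:

```typescript
// Mock illustration of the "result promises" pattern: a streaming
// result object exposes the final text as a promise that resolves
// when the underlying stream has been fully consumed.
class MockStreamResult {
  text: Promise<string>;
  private resolveText!: (t: string) => void;

  constructor(private chunks: string[]) {
    // Create the promise up front; it is settled when streaming ends.
    this.text = new Promise(resolve => (this.resolveText = resolve));
  }

  // Async iterator over text deltas; resolves the `text` promise
  // with the concatenated output once the last chunk is yielded.
  async *textStream(): AsyncGenerator<string> {
    let full = '';
    for (const chunk of this.chunks) {
      full += chunk;
      yield chunk;
    }
    this.resolveText(full);
  }
}
```

The design choice this mirrors: consumers that stream deltas to the UI and consumers that only need the final value (e.g. for logging) can share one result object, since awaiting `text` simply waits for the stream to drain.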

content/docs/05-ai-sdk-ui/15-storing-messages.mdx
Lines changed: 11 additions & 14 deletions

````diff
@@ -6,33 +6,30 @@ description: Welcome to the Vercel AI SDK documentation!
 # Storing Messages
 
 The ability to store message history is essential for chatbot use cases.
-
-The Vercel AI SDK simplifies the process of storing chat history through the `toAIStream` method, which is designed to manage stream life cycles with ease. This method supports various lifecycle callbacks such as `onFinal`.
+The Vercel AI SDK simplifies the process of storing chat history through the `onFinish` callback of the `streamText` function.
 
 ## Implementing Persistent Chat History
 
-To implement persistent chat storage, you can utilize the `onFinal` callback on the `toAIStream` method. This callback is triggered upon the completion of the model's response, making it an ideal place to handle the storage of each interaction.
+To implement persistent chat storage, you can utilize the `onFinish` callback on the `streamText` function.
+This callback is triggered upon the completion of the model's response and all tool executions,
+making it an ideal place to handle the storage of each interaction.
 
-```tsx highlight="14-18"
+```tsx highlight="10-13"
 'use server';
 
-import { Message, StreamingTextResponse, streamText } from 'ai';
+import { CoreMessage, streamText } from 'ai';
 import { openai } from '@ai-sdk/openai';
 
-export async function continueConversation(messages: Message[]) {
-  'use server';
-
+export async function continueConversation(messages: CoreMessage[]) {
   const result = await streamText({
     model: openai('gpt-4-turbo'),
     messages,
-  });
-
-  const stream = result.toAIStream({
-    async onFinal(completion) {
-      await saveChat(completion);
+    async onFinish({ text, toolCalls, toolResults, finishReason, usage }) {
+      // implement your own storage logic:
+      await saveChat({ text, toolCalls, toolResults });
     },
   });
 
-  return new StreamingTextResponse(stream);
+  return result.toAIStreamResponse();
 }
 ```
````
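The persistence flow in this diff can be sketched without the SDK: a mock `streamText`-style runner that invokes an `onFinish`-style callback exactly once after generation completes, plus an in-memory `saveChat`. All names here are illustrative stand-ins, not the SDK's API:

```typescript
// Illustrative in-memory chat store standing in for a real database.
const chatHistory: { text: string }[] = [];

async function saveChat(entry: { text: string }): Promise<void> {
  chatHistory.push(entry);
}

// Mock of the contract relevant to persistence: run the generation,
// then await the onFinish callback exactly once with the final result,
// so storage completes before the function returns.
async function mockStreamText(options: {
  prompt: string;
  onFinish: (event: { text: string }) => void | Promise<void>;
}): Promise<{ text: string }> {
  const text = `echo: ${options.prompt}`; // stand-in for model output
  await options.onFinish({ text });
  return { text };
}
```

Usage mirrors the docs change above: the caller passes storage logic as `onFinish` and still receives the result for the response, so persistence and streaming stay decoupled.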
