content/docs/03-ai-sdk-core/05-generating-text.mdx (46 additions, 3 deletions)
@@ -3,11 +3,20 @@ title: Generating Text
 description: Learn how to generate text with the Vercel AI SDK.
 ---

-# Generating Text
+# Generating and Streaming Text

 Large language models (LLMs) can generate text in response to a prompt, which can contain instructions and information to process.
 For example, you can ask a model to come up with a recipe, draft an email, or summarize a document.

+The Vercel AI SDK Core provides two functions to generate text and stream it from LLMs:
+
+- [`generateText`](#generatetext): Generates text for a given prompt and model.
+- [`streamText`](#streamtext): Streams text from a given prompt and model.
+
+Advanced LLM features such as [tool calling](./tools-and-tool-calling) and [structured data generation](./generating-structured-data) are built on top of text generation.
+
+## `generateText`
+
 You can generate text using the [`generateText`](/docs/reference/ai-sdk-core/generate-text) function. This function is ideal for non-interactive use cases where you need to write text (e.g. drafting an email or summarizing web pages) and for agents that use tools.
 Depending on your model and prompt, it can take a large language model (LLM) up to a minute to finish generating its response. This delay can be unacceptable for interactive use cases such as chatbots or real-time applications, where users expect immediate responses.
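The distinction the hunk above draws between `generateText` and `streamText` can be illustrated with a self-contained sketch. This is a toy stand-in model, not the AI SDK implementation: a generate-style call resolves once with the complete text, while a stream-style call yields chunks as the model produces them.

```typescript
// Toy illustration of the two call styles (NOT the 'ai' package itself).
type ToyModel = (prompt: string) => string[];

// A stand-in model that "produces" a fixed response in three chunks.
const toyModel: ToyModel = () => ['Pancake ', 'Day ', 'recipe'];

// generate-style: wait for every chunk, return the full text at once.
async function generate(model: ToyModel, prompt: string): Promise<string> {
  return model(prompt).join('');
}

// stream-style: hand chunks to the caller as soon as they exist.
async function* stream(model: ToyModel, prompt: string): AsyncGenerator<string> {
  for (const chunk of model(prompt)) {
    yield chunk;
  }
}

async function main() {
  const full = await generate(toyModel, 'recipe');
  console.log(full); // the complete response, delivered in one piece

  let assembled = '';
  for await (const chunk of stream(toyModel, 'recipe')) {
    assembled += chunk; // arrives incrementally; a UI could render each piece
  }
  console.log(assembled === full); // both paths end with the same final text
}

main();
```

With a real model the streaming path matters because the first chunks arrive long before the full response is finished, which is exactly the interactivity gap the surrounding text describes.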
@@ -73,4 +82,38 @@ while (true) {
 }
 ```

-Advanced LLM features such as [tool calling](./tools-and-tool-calling) and [structured data generation](./generating-structured-data) are built on top of text generation.
+### `onFinish` callback
+
+When using `streamText`, you can provide an `onFinish` callback that is triggered when the model finishes generating the response and all tool executions.
+
+```tsx
+import { streamText } from 'ai';
+
+const result = await streamText({
+  model,
+  prompt: 'Invent a new holiday and describe its traditions.',
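The diff's `tsx` snippet is truncated by the scrape before the callback itself appears. The lifecycle it describes can still be sketched self-contained; the sketch below is a simplified stand-in, not the `'ai'` package, and the `{ text }` event shape is an assumption modeled on the prose above:

```typescript
// Minimal sketch of the onFinish lifecycle: the callback fires only after
// the entire stream has been emitted, receiving the final assembled text.
interface StreamOptions {
  chunks: string[];                          // stand-in for model output
  onFinish?: (event: { text: string }) => void; // assumed event shape
}

function streamTextSketch(options: StreamOptions) {
  async function* textStream() {
    let text = '';
    for (const chunk of options.chunks) {
      text += chunk;
      yield chunk;
    }
    // Everything has been emitted: now, and only now, run the callback.
    options.onFinish?.({ text });
  }
  return { textStream: textStream() };
}

async function main() {
  let finished = '';
  const result = streamTextSketch({
    chunks: ['A new ', 'holiday!'],
    onFinish: ({ text }) => { finished = text; },
  });

  let rendered = '';
  for await (const chunk of result.textStream) {
    rendered += chunk; // consume the stream chunk by chunk
  }
  console.log(rendered);  // "A new holiday!"
  console.log(finished);  // onFinish saw the same complete text
}

main();
```

The design point is ordering: because the callback runs after the last chunk, it is a safe place for side effects (logging, persistence) that need the complete response.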
content/docs/05-ai-sdk-ui/15-storing-messages.mdx (11 additions, 14 deletions)
@@ -6,33 +6,30 @@ description: Welcome to the Vercel AI SDK documentation!
 # Storing Messages

 The ability to store message history is essential for chatbot use cases.
-
-The Vercel AI SDK simplifies the process of storing chat history through the `toAIStream` method, which is designed to manage stream life cycles with ease. This method supports various lifecycle callbacks such as `onFinal`.
+The Vercel AI SDK simplifies the process of storing chat history through the `onFinish` callback of the `streamText` function.

 ## Implementing Persistent Chat History

-To implement persistent chat storage, you can utilize the `onFinal` callback on the `toAIStream` method. This callback is triggered upon the completion of the model's response, making it an ideal place to handle the storage of each interaction.
+To implement persistent chat storage, you can utilize the `onFinish` callback on the `streamText` function.
+This callback is triggered upon the completion of the model's response and all tool executions,
+making it an ideal place to handle the storage of each interaction.
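A storage sketch for the pattern this hunk describes, with an in-memory map standing in for a database and an `onFinish`-style callback doing the write. The names (`saveChat`, `chatTurn`, the `{ text }` event) are illustrative assumptions, not the SDK's API:

```typescript
// Persist each completed exchange from an onFinish-style callback.
interface ChatMessage { role: 'user' | 'assistant'; content: string }

const chatStore = new Map<string, ChatMessage[]>(); // chatId -> history

function saveChat(chatId: string, messages: ChatMessage[]) {
  const history = chatStore.get(chatId) ?? [];
  chatStore.set(chatId, [...history, ...messages]);
}

// Simulate one chat turn: the user prompt goes in, a stand-in "model"
// responds, and the callback persists both sides of the exchange.
async function chatTurn(chatId: string, prompt: string) {
  const responseText = `Echo: ${prompt}`; // stand-in for the model response

  const onFinish = ({ text }: { text: string }) => {
    saveChat(chatId, [
      { role: 'user', content: prompt },
      { role: 'assistant', content: text },
    ]);
  };

  onFinish({ text: responseText }); // fires once the response is complete
}

async function main() {
  await chatTurn('chat-1', 'Hello');
  await chatTurn('chat-1', 'How are you?');
  console.log(chatStore.get('chat-1')?.length); // 4 stored messages
}

main();
```

Because the callback runs only after the response (and any tool executions) completes, each write captures a full user/assistant pair, so the stored history never contains half-finished responses.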