It also records an `ai.stream.firstChunk` event when the first chunk of the stream is received.

### streamObject function

`streamObject` records 2 types of spans and 1 type of event:

- `ai.streamObject` (span): the full length of the streamObject call. It contains 1 or more `ai.streamObject.doStream` spans.
  It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:

  - `operation.name`: `ai.streamObject` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.streamObject"`
  - `ai.prompt`: the prompt that was used when calling `streamObject`
  - `ai.response.object`: the object that was generated (stringified JSON)
  - `ai.settings.mode`: the object generation mode, e.g. `json`
  - `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`

- `ai.streamObject.doStream` (span): a provider doStream call.
  This span contains an `ai.stream.firstChunk` event.
  It contains the [call LLM span information](#call-llm-span-information) and the following attributes:

  - `operation.name`: `ai.streamObject.doStream` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.streamObject.doStream"`
  - `ai.prompt.format`: the format of the prompt
  - `ai.response.object`: the object that was generated (stringified JSON)
  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk
  - `ai.response.finishReason`: the reason why the generation finished

- `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.

  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk
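The `ai.response.msToFirstChunk` timing above can be sketched as a stand-alone measurement; this is an illustrative example only, not the SDK's internal code (the `fakeStream` generator and `measureMsToFirstChunk` helper are invented names):

```typescript
// Sketch: derive a time-to-first-chunk value like ai.response.msToFirstChunk
// by recording the elapsed time between starting to consume a stream and
// receiving its first chunk. All names here are hypothetical.
async function* fakeStream(): AsyncGenerator<string> {
  yield 'chunk-1';
  yield 'chunk-2';
}

async function measureMsToFirstChunk(
  stream: AsyncGenerator<string>,
): Promise<{ firstChunk: string | undefined; msToFirstChunk: number }> {
  const start = Date.now();
  const { value } = await stream.next(); // wait for the first chunk only
  return { firstChunk: value, msToFirstChunk: Date.now() - start };
}
```

A real instrumentation would attach this value to the `ai.stream.firstChunk` event rather than return it.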
### embed function

`embed` records 2 types of spans:

- `ai.embed` (span): the full length of the embed call. It contains 1 `ai.embed.doEmbed` span.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embed"`
  - `ai.value`: the value that was passed into the `embed` function
  - `ai.embedding`: a JSON-stringified embedding

- `ai.embed.doEmbed` (span): a provider doEmbed call.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embed.doEmbed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embed.doEmbed"`
  - `ai.values`: the values that were passed into the provider (array)
### embedMany function

`embedMany` records 2 types of spans:

- `ai.embedMany` (span): the full length of the embedMany call. It contains 1 or more `ai.embedMany.doEmbed` spans.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embedMany` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embedMany"`
  - `ai.values`: the values that were passed into the `embedMany` function
  - `ai.embeddings`: an array of JSON-stringified embeddings

- `ai.embedMany.doEmbed` (span): a provider doEmbed call.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embedMany.doEmbed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embedMany.doEmbed"`
  - `ai.values`: the values that were sent to the provider
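A minimal sketch of how the `ai.values` and `ai.embeddings` attributes could be populated; this is hypothetical illustration code, not the SDK's implementation, assuming per-element JSON stringification as the attribute descriptions state:

```typescript
// Sketch (hypothetical helper, not an SDK export): build embedding span
// attributes where each value and each embedding vector is individually
// JSON-stringified, matching "an array of JSON-stringified embeddings".
function embedManyAttributes(
  values: string[],
  embeddings: number[][],
): Record<string, string[]> {
  return {
    'ai.values': values.map((v) => JSON.stringify(v)),
    'ai.embeddings': embeddings.map((e) => JSON.stringify(e)),
  };
}
```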
- `ai.telemetry.metadata.*`: the metadata that was passed in through `telemetry.metadata`
- `ai.usage.completionTokens`: the number of completion tokens that were used
- `ai.usage.promptTokens`: the number of prompt tokens that were used
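The `ai.telemetry.metadata.*` pattern means each metadata entry becomes its own prefixed span attribute. A minimal sketch of that flattening (the `metadataToAttributes` helper is an invented name, not an SDK export):

```typescript
// Sketch: flatten telemetry.metadata entries into prefixed span attributes,
// matching the ai.telemetry.metadata.* naming above. Hypothetical helper.
function metadataToAttributes(
  metadata: Record<string, string | number | boolean>,
): Record<string, string | number | boolean> {
  return Object.fromEntries(
    Object.entries(metadata).map(([key, value]) => [
      `ai.telemetry.metadata.${key}`,
      value,
    ]),
  );
}
```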
### Call LLM span information

Spans that correspond to individual LLM calls (`ai.generateText.doGenerate`, `ai.streamText.doStream`, `ai.generateObject.doGenerate`, `ai.streamObject.doStream`) contain the [basic LLM span information](#basic-llm-span-information) and the following attributes:

- `ai.response.model`: the model that was used to generate the response. This can be different from the model that was requested if the provider supports aliases.
- `ai.response.id`: the id of the response. Uses the ID from the provider when available.
- `ai.response.timestamp`: the timestamp of the response. Uses the timestamp from the provider when available.
- [Semantic Conventions for GenAI operations](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/):
  - `gen_ai.system`: the provider that was used
  - `gen_ai.request.model`: the model that was requested
  - `gen_ai.request.top_p`: the topP parameter value that was set
  - `gen_ai.request.stop_sequences`: the stop sequences
  - `gen_ai.response.finish_reasons`: the finish reasons that were returned by the provider
  - `gen_ai.response.model`: the model that was used to generate the response. This can be different from the model that was requested if the provider supports aliases.
  - `gen_ai.response.id`: the id of the response. Uses the ID from the provider when available.
  - `gen_ai.usage.input_tokens`: the number of prompt tokens that were used
  - `gen_ai.usage.output_tokens`: the number of completion tokens that were used
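The same token counts are exposed under both the SDK's own `ai.usage.*` keys and the OpenTelemetry `gen_ai.usage.*` semantic-convention keys. A sketch of that correspondence (the `usageAttributes` helper is an invented name for illustration only):

```typescript
// Sketch: the same usage numbers appear under both the SDK's ai.usage.*
// keys and the OpenTelemetry gen_ai.usage.* semantic conventions.
// usageAttributes is hypothetical, not an SDK export.
function usageAttributes(
  promptTokens: number,
  completionTokens: number,
): Record<string, number> {
  return {
    'ai.usage.promptTokens': promptTokens,
    'ai.usage.completionTokens': completionTokens,
    'gen_ai.usage.input_tokens': promptTokens,
    'gen_ai.usage.output_tokens': completionTokens,
  };
}
```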
`content/docs/07-reference/ai-sdk-core/01-generate-text.mdx` (46 additions, 10 deletions)
The `rawResponse` property (type `RawResponse`, description 'Optional raw response data.') becomes `response` (type `Response`, description 'Response metadata.'), gains `id`, `model`, and `timestamp` parameters, and the `headers` description changes to 'Optional response headers.':

    {
      name: 'response',
      type: 'Response',
      optional: true,
      description: 'Response metadata.',
      properties: [
        {
          type: 'Response',
          parameters: [
            {
              name: 'id',
              type: 'string',
              description:
                'The response identifier. The AI SDK uses the ID from the provider response when available, and generates an ID otherwise.',
            },
            {
              name: 'model',
              type: 'string',
              description:
                'The model that was used to generate the response. The AI SDK uses the response model from the provider response when available, and the model from the function call otherwise.',
            },
            {
              name: 'timestamp',
              type: 'Date',
              description:
                'The timestamp of the response. The AI SDK uses the response timestamp from the provider response when available, and creates a timestamp otherwise.',
            },
            {
              name: 'headers',
              optional: true,
              type: 'Record<string, string>',
              description: 'Optional response headers.',
            },
          ],
        },
      ],
    },
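The renamed property describes a small metadata shape. A self-contained sketch of that shape as a TypeScript interface, with field names and semantics taken from the descriptions above (the `ResponseMetadata` and `describeResponse` names are invented for illustration):

```typescript
// Sketch of the response metadata shape introduced by this change.
// ResponseMetadata and describeResponse are illustrative names only.
interface ResponseMetadata {
  id: string; // provider response ID when available, generated otherwise
  model: string; // response model when available, requested model otherwise
  timestamp: Date; // provider timestamp when available, created otherwise
  headers?: Record<string, string>; // optional response headers
}

function describeResponse(r: ResponseMetadata): string {
  return `${r.model} (id ${r.id}) at ${r.timestamp.toISOString()}`;
}
```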
The identical `rawResponse` to `response` replacement (with the same `id`, `model`, `timestamp`, and `headers` parameters) is applied to the second occurrence in this file, in the hunk starting at line 552.