Text generation
With the OpenAI API, you can use a large language model to generate text from a prompt, as you might using ChatGPT. Models can generate almost any kind of text response—like code, mathematical equations, structured JSON data, or human-like prose.
Here's a simple example using the Responses API, our recommended API for all new projects.
```javascript
import OpenAI from "openai";
const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-5.2",
  input: "Write a one-sentence bedtime story about a unicorn."
});

console.log(response.output_text);
```

An array of content generated by the model is in the `output` property of the response. In this simple example, we have just one output which looks like this:
```json
[
  {
    "id": "msg_67b73f697ba4819183a15cc17d011509",
    "type": "message",
    "role": "assistant",
    "content": [
      {
        "type": "output_text",
        "text": "Under the soft glow of the moon, Luna the unicorn danced through fields of twinkling stardust, leaving trails of dreams for every child asleep.",
        "annotations": []
      }
    ]
  }
]
```

The `output` array often has more than one item in it! It can contain tool calls, data about reasoning tokens generated by reasoning models, and other items. It is not safe to assume that the model's text output is present at `output[0].content[0].text`.
Some of our official SDKs include an output_text property on model responses for convenience, which aggregates all text outputs from the model into a single string. This may be useful as a shortcut to access text output from the model.
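If your SDK doesn't expose that helper, you can aggregate the text yourself. Here's a minimal sketch based on the response shape shown above; it assumes `response` is the object returned by `client.responses.create`:

```javascript
// Walk every output item, keep only message content of type
// "output_text", and join the text pieces into a single string.
// This avoids assuming the text lives at output[0].content[0].text.
const text = response.output
  .filter((item) => item.type === "message")
  .flatMap((item) => item.content)
  .filter((part) => part.type === "output_text")
  .map((part) => part.text)
  .join("");

console.log(text);
```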
In addition to plain text, you can also have the model return structured data in JSON format—this feature is called Structured Outputs.
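As a quick sketch of what that looks like with the Responses API's `text.format` parameter (the `calendar_event` schema here is purely illustrative, and the exact parameter shape may vary by SDK version, so check the Structured Outputs guide for specifics):

```javascript
import OpenAI from "openai";
const client = new OpenAI();

// Illustrative example: extract a calendar event from free-form text.
const response = await client.responses.create({
  model: "gpt-5",
  input: "Alice and Bob are meeting for lunch on Friday.",
  text: {
    format: {
      type: "json_schema",
      name: "calendar_event",
      schema: {
        type: "object",
        properties: {
          title: { type: "string" },
          day: { type: "string" },
          attendees: { type: "array", items: { type: "string" } },
        },
        required: ["title", "day", "attendees"],
        additionalProperties: false,
      },
      strict: true,
    },
  },
});

// The model's output conforms to the schema above.
console.log(JSON.parse(response.output_text));
```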
Prompt engineering
Prompt engineering is the process of writing effective instructions for a model, such that it consistently generates content that meets your requirements.
Because the content generated from a model is non-deterministic, prompting to get your desired output is a mix of art and science. However, you can apply techniques and best practices to get good results consistently.
Some prompt engineering techniques work with every model, like using message roles. But different models might need to be prompted differently to produce the best results. Even different snapshots of models within the same family could produce different results. So as you build more complex applications, we strongly recommend:
- Pinning your production applications to specific model snapshots (like `gpt-5-2025-08-07`, for example) to ensure consistent behavior, as shown in the snippet below
- Building evals that measure the behavior of your prompts so you can monitor prompt performance as you iterate, or when you change and upgrade model versions
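Pinning a snapshot looks like any other request; only the model identifier changes:

```javascript
import OpenAI from "openai";
const client = new OpenAI();

// Pin to a dated snapshot rather than a floating alias like "gpt-5",
// so model behavior stays stable until you deliberately upgrade.
const response = await client.responses.create({
  model: "gpt-5-2025-08-07",
  input: "Write a one-sentence bedtime story about a unicorn.",
});

console.log(response.output_text);
```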
Now, let's examine some tools and techniques available to you to construct prompts.
Choosing models and APIs
OpenAI has many different models and several APIs to choose from. Reasoning models, like o3 and GPT-5, behave differently from chat models and respond better to different prompts. One important note is that reasoning models perform better and demonstrate higher intelligence when used with the Responses API.
If you're building any text generation app, we recommend using the Responses API over the older Chat Completions API. And if you're using a reasoning model, it's especially useful to migrate to Responses.
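As a rough illustration of the difference, here is the same request in both APIs. This is a sketch of the basic shape, not a complete migration guide:

```javascript
import OpenAI from "openai";
const client = new OpenAI();

// Older Chat Completions style: a list of messages in, a choice out.
const completion = await client.chat.completions.create({
  model: "gpt-5",
  messages: [{ role: "user", content: "Say hello." }],
});
console.log(completion.choices[0].message.content);

// Responses API style: a single input in, aggregated text out.
const response = await client.responses.create({
  model: "gpt-5",
  input: "Say hello.",
});
console.log(response.output_text);
```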
Message roles and instruction following
You can provide instructions to the model with differing levels of authority using the instructions API parameter along with message roles.
The instructions parameter gives the model high-level instructions on how it should behave while generating a response, including tone, goals, and examples of correct responses. Any instructions provided this way will take priority over a prompt in the input parameter.
```javascript
import OpenAI from "openai";
const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-5",
  reasoning: { effort: "low" },
  instructions: "Talk like a pirate.",
  input: "Are semicolons optional in JavaScript?",
});

console.log(response.output_text);
```

The example above is roughly equivalent to using the following input messages in the `input` array:
```javascript
import OpenAI from "openai";
const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-5",
  reasoning: { effort: "low" },
  input: [
    {
      role: "developer",
      content: "Talk like a pirate."
    },
    {
      role: "user",
      content: "Are semicolons optional in JavaScript?",
    },
  ],
});

console.log(response.output_text);
```

Note that the `instructions` parameter only applies to the current response generation request. If you are managing conversation state with the `previous_response_id` parameter, the instructions used on previous turns will not be present in the context.
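For example, here's a sketch of a two-turn exchange where the instructions are re-sent on the second request, since they don't carry over automatically:

```javascript
import OpenAI from "openai";
const client = new OpenAI();

// First turn: instructions apply to this request only.
const first = await client.responses.create({
  model: "gpt-5",
  instructions: "Talk like a pirate.",
  input: "Are semicolons optional in JavaScript?",
});

// Second turn: chain the conversation with previous_response_id, and
// re-send the instructions, which are not carried over from turn one.
const second = await client.responses.create({
  model: "gpt-5",
  previous_response_id: first.id,
  instructions: "Talk like a pirate.",
  input: "What about trailing commas?",
});

console.log(second.output_text);
```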
The OpenAI model spec describes how our models give different levels of priority to messages with different roles.
| developer | user | assistant |
|---|---|---|
| Instructions provided by the application developer, prioritized ahead of user messages. | Instructions provided by an end user, prioritized behind developer messages. | Messages generated by the model have the `assistant` role. |
A multi-turn conversation may consist of several messages of these types, along with other content types provided by both you and the model. Learn more about managing conversation state here.
You could think about `developer` and `user` messages like a function and its arguments in a programming language.

- `developer` messages provide the system's rules and business logic, like a function definition.
- `user` messages provide inputs and configuration to which the `developer` message instructions are applied, like arguments to a function.
Reusable prompts
In the OpenAI dashboard, you can develop reusable prompts that you can use in API requests, rather than specifying the content of prompts in code. This way, you can more easily build and evaluate your prompts, and deploy improved versions of your prompts without changing your integration code.
Here's how it works:
- Create a reusable prompt in the dashboard with placeholders like `{{customer_name}}`.
- Use the prompt in your API request with the `prompt` parameter. The prompt parameter object has three properties you can configure:
  - `id` — Unique identifier of your prompt, found in the dashboard
  - `version` — A specific version of your prompt (defaults to the "current" version as specified in the dashboard)
  - `variables` — A map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input message types like `input_image` or `input_file`. See the full API reference.
```javascript
import OpenAI from "openai";
const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-5",
  prompt: {
    id: "pmpt_abc123",
    version: "2",
    variables: {
      customer_name: "Jane Doe",
      product: "40oz juice box"
    }
  }
});

console.log(response.output_text);
```
You can also pass file inputs as prompt variables:

```javascript
import fs from "fs";
import OpenAI from "openai";
const client = new OpenAI();

// Upload a PDF we will reference in the prompt variables
const file = await client.files.create({
  file: fs.createReadStream("draconomicon.pdf"),
  purpose: "user_data",
});

const response = await client.responses.create({
  model: "gpt-5",
  prompt: {
    id: "pmpt_abc123",
    variables: {
      topic: "Dragons",
      reference_pdf: {
        type: "input_file",
        file_id: file.id,
      },
    },
  },
});

console.log(response.output_text);
```

Next steps
Now that you know the basics of text inputs and outputs, you might want to check out one of these resources next.
- Use the Playground to develop and iterate on prompts.
- Ensure JSON data emitted from a model conforms to a JSON schema.
- Check out all the options for text generation in the API reference.