Tools extend what agents can do, letting them fetch real-time data, execute code, query external databases, and take actions in the world.

Under the hood, tools are callable functions with well-defined inputs and outputs that get passed to a chat model. The model decides when to invoke a tool based on the conversation context, and what input arguments to provide.
For details on how models handle tool calls, see Tool calling.
The simplest way to create a tool is with the @tool decorator. By default, the function's docstring becomes the tool's description, which helps the model understand when to use it:
from langchain.tools import tool

@tool
def search_database(query: str, limit: int = 10) -> str:
    """Search the customer database for records matching the query.

    Args:
        query: Search terms to look for
        limit: Maximum number of results to return
    """
    return f"Found {limit} results for '{query}'"
Type hints are required as they define the tool’s input schema. The docstring should be informative and concise to help the model understand the tool’s purpose.
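If you want to check what the model will see, you can inspect the decorated tool directly; name, description, and args are standard attributes on LangChain tools, and the outputs shown in the comments are illustrative:

print(search_database.name)         # search_database
print(search_database.description)  # the docstring text
print(search_database.args)         # {'query': {...}, 'limit': {'default': 10, ...}}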
Server-side tool use: Some chat models feature built-in tools (web search, code interpreters) that are executed server-side. See Server-side tool use for details.
Override the auto-generated tool description for clearer model guidance:
@tool("calculator", description="Performs arithmetic calculations. Use this for any math problems.")def calc(expression: str) -> str: """Evaluate mathematical expressions.""" return str(eval(expression))
Tools are most powerful when they can access runtime information like conversation history, user data, and persistent memory. This section covers how to access and update this information from within your tools.

Tools can access runtime information through the ToolRuntime parameter, which provides:
| Component | Description | Use case |
| --- | --- | --- |
| State | Short-term memory: mutable data that exists for the current conversation (messages, counters, custom fields) | Reading conversation history; updating custom state fields |
| Context | Immutable configuration data passed at invocation time | User IDs, session details, application-specific settings |
| Store | Persistent storage that survives across conversations | Long-term memory such as saved user information |
| Stream writer | A callable for emitting custom updates while a tool executes | Progress feedback during long-running operations |
State represents short-term memory that exists for the duration of a conversation. It includes the message history and any custom fields you define in your graph state.
Add runtime: ToolRuntime to your tool signature to access state. This parameter is automatically injected and hidden from the LLM; it won't appear in the tool's schema.
Tools can access the current conversation state using runtime.state:
from langchain.tools import tool, ToolRuntime
from langchain.messages import HumanMessage

@tool
def get_last_user_message(runtime: ToolRuntime) -> str:
    """Get the most recent message from the user."""
    messages = runtime.state["messages"]
    # Find the last human message
    for message in reversed(messages):
        if isinstance(message, HumanMessage):
            return message.content
    return "No user messages found"

# Access custom state fields
@tool
def get_user_preference(
    pref_name: str,
    runtime: ToolRuntime,
) -> str:
    """Get a user preference value."""
    preferences = runtime.state.get("user_preferences", {})
    return preferences.get(pref_name, "Not set")
The runtime parameter is hidden from the model. For the example above, the model only sees pref_name in the tool schema.
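You can confirm this by inspecting the generated schema (the printed output here is illustrative):

print(get_user_preference.args)
# {'pref_name': {...}}  -- runtime does not appear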
Use Command to update the agent’s state. This is useful for tools that need to update custom state fields:
from langgraph.types import Command
from langchain.tools import tool

@tool
def set_user_name(new_name: str) -> Command:
    """Set the user's name in the conversation state."""
    return Command(update={"user_name": new_name})
When tools update state variables, consider defining a reducer for those fields. Since LLMs can call multiple tools in parallel, a reducer determines how to resolve conflicts when the same state field is updated by concurrent tool calls.
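As a minimal sketch, assuming a custom user_preferences field like the one read earlier, a state schema with a reducer might look like this (merge_dicts is a hypothetical helper):

from typing import Annotated

from langgraph.graph import MessagesState

def merge_dicts(left: dict, right: dict) -> dict:
    # Hypothetical reducer: merge concurrent updates, later values win
    return {**left, **right}

class CustomState(MessagesState):
    # Without a reducer, parallel tool calls that both update
    # user_preferences would conflict
    user_preferences: Annotated[dict, merge_dicts]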
Context provides immutable configuration data that is passed at invocation time. Use it for user IDs, session details, or application-specific settings that shouldn't change during a conversation.

Access context through runtime.context:
from dataclasses import dataclass

from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain.tools import tool, ToolRuntime

USER_DATABASE = {
    "user123": {
        "name": "Alice Johnson",
        "account_type": "Premium",
        "balance": 5000,
        "email": "[email protected]"
    },
    "user456": {
        "name": "Bob Smith",
        "account_type": "Standard",
        "balance": 1200,
        "email": "[email protected]"
    }
}

@dataclass
class UserContext:
    user_id: str

@tool
def get_account_info(runtime: ToolRuntime[UserContext]) -> str:
    """Get the current user's account information."""
    user_id = runtime.context.user_id
    if user_id in USER_DATABASE:
        user = USER_DATABASE[user_id]
        return f"Account holder: {user['name']}\nType: {user['account_type']}\nBalance: ${user['balance']}"
    return "User not found"

model = ChatOpenAI(model="gpt-4.1")
agent = create_agent(
    model,
    tools=[get_account_info],
    context_schema=UserContext,
    system_prompt="You are a financial assistant."
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's my current balance?"}]},
    context=UserContext(user_id="user123")
)
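invoke returns the final graph state, so the assistant's reply is the last message in the result (output shown is illustrative):

print(result["messages"][-1].content)
# "Your current balance is $5,000."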
The BaseStore provides persistent storage that survives across conversations. Unlike state (short-term memory), data saved to the store remains available in future sessions.

Access the store through runtime.store. The store uses a namespace/key pattern to organize data:
For production deployments, use a persistent store implementation like PostgresStore instead of InMemoryStore. See the memory documentation for setup details.
from typing import Any

from langgraph.store.memory import InMemoryStore
from langchain.agents import create_agent
from langchain.tools import tool, ToolRuntime

# Access memory
@tool
def get_user_info(user_id: str, runtime: ToolRuntime) -> str:
    """Look up user info."""
    store = runtime.store
    user_info = store.get(("users",), user_id)
    return str(user_info.value) if user_info else "Unknown user"

# Update memory
@tool
def save_user_info(user_id: str, user_info: dict[str, Any], runtime: ToolRuntime) -> str:
    """Save user info."""
    store = runtime.store
    store.put(("users",), user_id, user_info)
    return "Successfully saved user info."

store = InMemoryStore()

# Assumes a chat model `model`, e.g. the one created in the previous example
agent = create_agent(
    model,
    tools=[get_user_info, save_user_info],
    store=store
)

# First session: save user info
agent.invoke({
    "messages": [{"role": "user", "content": "Save the following user: userid: abc123, name: Foo, age: 25, email: [email protected]"}]
})

# Second session: get user info
agent.invoke({
    "messages": [{"role": "user", "content": "Get user info for user with id 'abc123'"}]
})
# Here is the user info for user with ID "abc123":
# - Name: Foo
# - Age: 25
# - Email: [email protected]
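A rough production sketch with a Postgres-backed store instead (assumes the langgraph-checkpoint-postgres package is installed; the connection string is a placeholder):

from langgraph.store.postgres import PostgresStore

with PostgresStore.from_conn_string("postgresql://user:pass@localhost:5432/mydb") as store:
    store.setup()  # creates the required tables on first run
    agent = create_agent(
        model,
        tools=[get_user_info, save_user_info],
        store=store,
    )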
Stream real-time updates from tools during execution. This is useful for providing progress feedback to users during long-running operations.

Use runtime.stream_writer to emit custom updates:
from langchain.tools import tool, ToolRuntime

@tool
def get_weather(city: str, runtime: ToolRuntime) -> str:
    """Get weather for a given city."""
    writer = runtime.stream_writer

    # Stream custom updates as the tool executes
    writer(f"Looking up data for city: {city}")
    writer(f"Acquired data for city: {city}")

    return f"It's always sunny in {city}!"
If you use runtime.stream_writer inside your tool, the tool must be invoked within a LangGraph execution context. See Streaming for more details.
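As a sketch of the consuming side, assuming an agent created with create_agent and the get_weather tool, streaming in "custom" mode yields the writer's updates:

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "What's the weather in Boston?"}]},
    stream_mode="custom",
):
    print(chunk)
# Looking up data for city: Boston
# Acquired data for city: Boston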
ToolNode is a prebuilt node that executes tools in LangGraph workflows. It handles parallel tool execution, error handling, and state injection automatically.
For custom workflows where you need fine-grained control over tool execution patterns, use ToolNode instead of create_agent. It’s the building block that powers agent tool execution.
from langchain.tools import tool
from langgraph.prebuilt import ToolNode
from langgraph.graph import StateGraph, MessagesState, START, END

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    # eval is unsafe on untrusted input; fine for a demo only
    return str(eval(expression))

# Create the ToolNode with your tools
tool_node = ToolNode([search, calculator])

# Use in a graph
builder = StateGraph(MessagesState)
builder.add_node("tools", tool_node)
# ... add other nodes and edges
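One common way to finish the wiring, sketched here with LangGraph's prebuilt tools_condition (assumes a chat model `model` that supports tool calling):

from langgraph.prebuilt import tools_condition

def call_model(state: MessagesState):
    # Bind the same tools so the model can emit tool calls
    response = model.bind_tools([search, calculator]).invoke(state["messages"])
    return {"messages": [response]}

builder.add_node("model", call_model)
builder.add_edge(START, "model")
# Routes to "tools" when the last message contains tool calls, else to END
builder.add_conditional_edges("model", tools_condition)
builder.add_edge("tools", "model")
graph = builder.compile()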
Tools can access the current graph state through ToolRuntime:
from langchain.tools import tool, ToolRuntime
from langgraph.prebuilt import ToolNode

@tool
def get_message_count(runtime: ToolRuntime) -> str:
    """Get the number of messages in the conversation."""
    messages = runtime.state["messages"]
    return f"There are {len(messages)} messages."

tool_node = ToolNode([get_message_count])
For more details on accessing state, context, and long-term memory from tools, see Access context.
LangChain provides a large collection of prebuilt tools and toolkits for common tasks like web search, code interpretation, database access, and more. These ready-to-use tools can be integrated into your agents without writing custom code.

See the tools and toolkits integration page for a complete list of available tools organized by category.
Some chat models feature built-in tools that are executed server-side by the model provider. These include capabilities like web search and code interpreters that don't require you to define or host the tool logic.

Refer to the individual chat model integration pages and the tool calling documentation for details on enabling and using these built-in tools.