Looking for the JS/TS version? Check out LangChain.js.
To help you ship LangChain apps to production faster, check out LangSmith. LangSmith is a unified developer platform for building, testing, and monitoring LLM applications.
pip install langchain
LangChain is the easiest way to start building agents and applications powered by LLMs. In under 10 lines of code, you can connect to OpenAI, Anthropic, Google, and more. LangChain provides a pre-built agent architecture and model integrations, so you can get started quickly and incorporate LLMs into your agents and applications seamlessly.
We recommend you use LangChain if you want to quickly build agents and autonomous applications. Use LangGraph, our low-level agent orchestration framework and runtime, when you have more advanced needs that require a combination of deterministic and agentic workflows, heavy customization, and carefully controlled latency.
LangChain agents are built on top of LangGraph to provide durable execution, streaming, human-in-the-loop support, persistence, and more. (You do not need to know LangGraph for basic LangChain agent usage.)
For full documentation, see the API reference. For conceptual guides, tutorials, and examples on using LangChain, see the LangChain Docs. You can also chat with the docs using Chat LangChain.
See our Releases and Versioning policies.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the Contributing Guide.
Base class for structured output errors.
Raised when the model returns multiple structured output tool calls but only one is expected.
Raised when structured output tool call arguments fail to parse according to the schema.
Use a tool calling strategy for model responses.
Use the model provider's native structured output method.
Information for tracking structured output tool metadata.
Information for tracking native structured output metadata.
Automatically select the best strategy for structured output.
Protocol describing a context editing strategy.
Configuration for clearing tool outputs when token limits are exceeded.
Automatically prune tool results to manage context size.
Summarizes conversation history when token limits are approached.
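The trigger behind such a middleware can be sketched in plain Python. This is an illustrative sketch, not the library's implementation: the names (`approx_tokens`, `maybe_summarize`, `keep_last`) are hypothetical, and the real middleware would have an LLM write the summary rather than a placeholder.

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def maybe_summarize(messages: list[str], max_tokens: int, keep_last: int = 2) -> list[str]:
    """Collapse all but the most recent messages once the token budget is exceeded."""
    total = sum(approx_tokens(m) for m in messages)
    if total <= max_tokens or len(messages) <= keep_last:
        return messages
    older, recent = messages[:-keep_last], messages[-keep_last:]
    # In a real middleware an LLM produces the summary; here it is stubbed out.
    summary = f"[summary of {len(older)} earlier messages]"
    return [summary] + recent

history = ["long message " * 50, "another long message " * 50, "latest question"]
print(maybe_summarize(history, max_tokens=100))
```

Keeping the most recent messages verbatim preserves the immediate context the model needs, while everything older is compressed into one entry.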
Agent state extension for tracking shell session resources.
Structured result from command execution.
Persistent shell session that supports sequential command execution.
Middleware that registers a persistent shell tool for agents.
Represents an action with a name and args.
Represents an action request with a name, args, and description.
Policy for reviewing a HITL request.
Request for human feedback on a sequence of actions requested by a model.
Response when a human approves the action.
Response when a human edits the action.
Response when a human rejects the action.
Response payload for a HITLRequest.
Configuration for an action requiring human-in-the-loop review.
Human-in-the-loop middleware.
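How the three response types interact with a pending action can be sketched as follows. This is a hedged illustration, not the middleware's actual API: `ActionRequest`, `Approve`, `Edit`, `Reject`, and `resolve` are hypothetical names standing in for the request/response flow described above.

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    name: str
    args: dict

@dataclass
class Approve:
    pass

@dataclass
class Edit:
    args: dict

@dataclass
class Reject:
    reason: str

def resolve(request, decision):
    """Return the action to execute, or None if the human rejected it."""
    if isinstance(decision, Approve):
        return request          # run the action unchanged
    if isinstance(decision, Edit):
        return ActionRequest(request.name, decision.args)  # run with revised args
    if isinstance(decision, Reject):
        return None             # skip the action entirely
    raise TypeError(f"unknown decision: {decision!r}")

req = ActionRequest("delete_file", {"path": "/tmp/scratch.txt"})
print(resolve(req, Edit({"path": "/tmp/other.txt"})))
```

The key design point is that an edit produces a new action rather than mutating the original, so the audit trail of what the model requested stays intact.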
Middleware that automatically retries failed tool calls with configurable backoff.
Provides Glob and Grep search over filesystem files.
Model request information for the agent.
Response from model execution including messages and optional structured output.
Model response with an optional 'Command' from 'wrap_model_call' middleware.
Annotation used to mark state attributes as omitted from input or output schemas.
State schema for the agent.
Base middleware class for an agent.
Configuration contract for persistent shell sessions.
Run the shell directly on the host process.
Launch the shell through the Codex CLI sandbox.
Run the shell inside a dedicated Docker container.
Detect and handle Personally Identifiable Information (PII) in conversations.
Uses an LLM to select relevant tools before calling the main model.
State schema for ToolCallLimitMiddleware.
Exception raised when tool call limits are exceeded.
Tracks tool call counts and enforces limits during agent execution.
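The counting logic such a limiter needs is small; here is a minimal sketch under assumed names (`ToolCallLimiter`, `record`, and the limit parameters are illustrative, not the library's API): a global counter plus a per-tool counter, with an exception once either limit is exceeded.

```python
class ToolCallLimitExceeded(Exception):
    """Raised once a configured tool call limit is exceeded."""

class ToolCallLimiter:
    def __init__(self, global_limit: int, per_tool_limit: int):
        self.global_limit = global_limit
        self.per_tool_limit = per_tool_limit
        self.total = 0
        self.per_tool: dict[str, int] = {}

    def record(self, tool_name: str) -> None:
        """Count one call and raise if either limit is exceeded."""
        self.total += 1
        self.per_tool[tool_name] = self.per_tool.get(tool_name, 0) + 1
        if self.total > self.global_limit:
            raise ToolCallLimitExceeded(f"global limit {self.global_limit} exceeded")
        if self.per_tool[tool_name] > self.per_tool_limit:
            raise ToolCallLimitExceeded(f"{tool_name} limit {self.per_tool_limit} exceeded")
```

A middleware would call `record` before each tool execution, turning a runaway tool loop into a clean, catchable error.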
A single todo item with content and status.
State schema for the todo middleware.
Input schema for the write_todos tool.
Middleware that provides todo list management capabilities to agents.
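The data model behind a todo middleware can be sketched like this (a hypothetical shape, not the library's schema; `TodoItem`, `write_todos`, and the status values are illustrative): each item pairs content with a status, and the write tool replaces the whole list so the agent rewrites its plan atomically.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class TodoItem:
    content: str
    status: Literal["pending", "in_progress", "completed"]

def write_todos(state: dict, todos: list[TodoItem]) -> dict:
    """Replace the todo list in agent state and return the new state."""
    return {**state, "todos": todos}

state = write_todos({}, [TodoItem("draft outline", "in_progress"),
                         TodoItem("write tests", "pending")])
print([t.content for t in state["todos"]])
```

Replacing the full list rather than patching individual items keeps the tool's input schema simple and avoids partial-update conflicts.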
Represents an individual match of sensitive data.
Raised when configured to block on detected sensitive values.
Configuration for handling a single PII type.
Resolved redaction rule ready for execution.
State schema for ModelCallLimitMiddleware.
Exception raised when model call limits are exceeded.
Tracks model call counts and enforces limits.
Automatic fallback to alternative models on errors.
Middleware that automatically retries failed model calls with configurable backoff.
Emulates specified tools using an LLM instead of executing them.
Initialize a chat model from any supported provider using a unified interface.
Initialize an embedding model from a model name and optional provider.
Creates an agent graph that calls tools in a loop until a stopping condition is met.
Validate retry parameters.
Check if an exception should trigger a retry.
Calculate delay for a retry attempt with exponential backoff and optional jitter.
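A common formula for this delay calculation can be sketched as follows. This is a generic sketch of exponential backoff with full jitter, not the library's exact implementation; the parameter names are illustrative.

```python
import random

def retry_delay(attempt: int, base: float = 1.0, factor: float = 2.0,
                max_delay: float = 30.0, jitter: bool = True) -> float:
    """Delay in seconds before retry `attempt` (0-indexed)."""
    delay = min(base * factor ** attempt, max_delay)  # exponential growth, capped
    # Full jitter: draw uniformly from [0, delay] to avoid thundering herds.
    return random.uniform(0.0, delay) if jitter else delay

print([retry_delay(a, jitter=False) for a in range(6)])
# Without jitter: 1.0, 2.0, 4.0, 8.0, 16.0, 30.0 (capped)
```

Jitter matters when many callers retry at once: without it, every client hits the failing service again at the same instant.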
Fast file pattern matching tool that works with any codebase size.
Fast content search tool that works with any codebase size.
Decorator to configure hook behavior in middleware methods.
Decorator used to dynamically create a middleware with the before_model hook.
Decorator used to dynamically create a middleware with the after_model hook.
Decorator used to dynamically create a middleware with the before_agent hook.
Decorator used to dynamically create a middleware with the after_agent hook.
Decorator used to dynamically generate system prompts for the model.
Create middleware with wrap_model_call hook from a function.
Create middleware with wrap_tool_call hook from a function.
Create and manage a structured task list for your current work session.
Detect email addresses in content.
Detect credit card numbers in content using Luhn validation.
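The Luhn checksum referenced above is standard and can be shown in plain Python. The detector shape here is a sketch (the regex and function names are illustrative, not the library's): digit runs that fail the checksum are discarded as false positives.

```python
import re

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def detect_credit_cards(content: str) -> list[str]:
    """Find 13-19 digit runs (spaces/dashes allowed) that pass Luhn."""
    candidates = re.findall(r"\b(?:\d[ -]?){13,19}\b", content)
    hits = []
    for c in candidates:
        digits = re.sub(r"[ -]", "", c)
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits

print(detect_credit_cards("card: 4111 1111 1111 1111, not: 1234 5678 9012 3456"))
```

The checksum step is what separates this from naive regex matching: most random 16-digit strings fail Luhn, so false positives drop sharply.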
Detect IPv4 or IPv6 addresses in content.
Detect MAC addresses in content.
Detect URLs in content using regex and stdlib validation.
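The regex-plus-stdlib combination can be sketched as a two-stage filter (an illustrative sketch, not the library's pattern): a broad regex finds candidates, then `urllib.parse` confirms each has a scheme and host, discarding regex false positives.

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://[^\s<>\"']+")

def detect_urls(content: str) -> list[str]:
    """Return candidates that parse with an http(s) scheme and a host."""
    hits = []
    for candidate in URL_RE.findall(content):
        parsed = urlparse(candidate)
        if parsed.scheme in ("http", "https") and parsed.netloc:
            hits.append(candidate)
    return hits

print(detect_urls("see https://example.com/docs and http://localhost:8080/"))
```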
Apply the configured strategy to matches within content.
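One way to apply a redaction strategy to detected spans, sketched with hypothetical names (`Match`, `redact`, and the placeholder format are illustrative): replacements are applied from right to left so earlier offsets remain valid as the string changes length.

```python
from dataclasses import dataclass

@dataclass
class Match:
    start: int
    end: int
    pii_type: str

def redact(content: str, matches: list[Match]) -> str:
    """Replace each matched span with a typed redaction placeholder."""
    # Right-to-left order keeps earlier (start, end) offsets valid.
    for m in sorted(matches, key=lambda m: m.start, reverse=True):
        content = content[:m.start] + f"[REDACTED_{m.pii_type.upper()}]" + content[m.end:]
    return content

print(redact("mail me at bob@example.com", [Match(11, 26, "email")]))
```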
Return a callable detector for the given configuration.
Main entrypoint into LangChain.
Message and message content types.
Entrypoint to using chat models in LangChain.
Factory functions for chat models.
Embedding models.
Factory functions for embeddings.
Base abstraction and in-memory implementation of rate limiters.
Tools.
Utility module kept for backwards-compatible imports.
Entrypoint to building Agents with LangChain.
Agent factory for creating agents with middleware support.
Types for setting agent response formats.
Entrypoint to using middleware plugins with Agents.
Context editing middleware.
Summarization middleware.
Middleware that exposes a persistent shell tool to agents.
Human-in-the-loop middleware.
Tool retry middleware for agents.
File search middleware for Anthropic text editor and memory tools.
Types for middleware and agents.
PII detection and handling middleware for agents.
LLM-based tool selector middleware.
Tool call limit middleware for agents.
Planning and task management middleware for agents.
Call tracking middleware for agents.
Model fallback middleware for agents.
Model retry middleware for agents.
Tool emulator middleware for testing.
Union type for all supported response format strategies.
Union type for context size specifications.
Type for specifying which exceptions to retry on.
Type for specifying failure handling behavior.
TypeAlias for model call handler return value.