Install dependencies
Install the following packages to follow along:
Set up API keys
Get an API key from any supported model provider (for example, OpenAI or Google Gemini). Set the API keys, for example:
- OpenAI
- Google Gemini
- Claude (Anthropic)
- OpenRouter
- Fireworks
- Baseten
- Ollama
- Azure
- AWS Bedrock
- HuggingFace
- Other
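For example, an OpenAI or Google Gemini key is typically exported as an environment variable (the variable name depends on the provider; `OPENAI_API_KEY` and `GOOGLE_API_KEY` are the conventional names for these two):

```shell
# Replace the placeholder with your real key.
export OPENAI_API_KEY="sk-..."

# Or, for Google Gemini:
export GOOGLE_API_KEY="..."
```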
Build a basic agent
Start by creating a simple agent that can answer questions and call tools. The agent in this example uses the chosen language model, a basic weather function as a tool (one that always reports "It's always sunny in San Francisco!"), and a simple prompt to guide its behavior:
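The original code sample is not reproduced here, but the shape of such an agent can be sketched without any framework. In this sketch every name (`get_weather`, `run_agent`, the hard-coded routing) is illustrative, not part of any library; a real agent would let the model decide when to call the tool:

```python
# Minimal, framework-agnostic sketch of a tool-calling agent.
# A real agent would pass the tool's schema to the language model
# and let the model decide whether to call it.

def get_weather(city: str) -> str:
    """Basic weather 'tool' -- always reports the same forecast."""
    return f"It's always sunny in {city}!"

TOOLS = {"get_weather": get_weather}

def run_agent(user_input: str) -> str:
    # Hard-coded routing keeps the sketch self-contained; the model
    # would normally make this decision.
    if "weather" in user_input.lower():
        tool_result = TOOLS["get_weather"]("San Francisco")
        return f"The forecast: {tool_result}"
    return "I can only answer weather questions in this sketch."

print(run_agent("What's the weather in San Francisco?"))
```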
Build a real-world agent
In the following example, you will build a research agent that can answer questions about text files. Along the way you will explore the following concepts:
- Detailed system prompts for better agent behavior
- Tools that integrate with external data
- Model configuration for consistent responses
- Conversational memory for chat-like interactions
- Deep Agents for built-in features
- Testing your agent
Define the system prompt
The system prompt defines your agent’s role and behavior. Keep it specific and actionable:
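A research agent's prompt might look something like this (the wording below is illustrative, not the exact prompt from the original example):

```python
# An explicit role, concrete rules, and a fallback behavior make the
# agent's responses more predictable.
SYSTEM_PROMPT = """You are a research assistant.

Your job:
- Answer questions using the documents the user provides.
- Point to the part of the document that supports each claim.
- If the documents do not contain the answer, say so plainly.

Keep answers concise and factual."""

print(SYSTEM_PROMPT.splitlines()[0])
```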
Create tools
Tools let a model interact with external systems by calling functions you define.
Tools can depend on runtime context and also interact with agent memory. This example uses a tool to load a document from a given URL:
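A sketch of such a tool, using only the standard library (the function name and the character cap are illustrative; how you register it as a tool depends on your LangChain version):

```python
import urllib.request

def fetch_text_from_url(url: str, max_chars: int = 50_000) -> str:
    """Load a document from a URL and return its text content.

    max_chars caps the result so a very large page cannot overflow
    the model's context window (the limit here is an arbitrary example).
    """
    with urllib.request.urlopen(url) as response:
        raw = response.read()
    return raw.decode("utf-8", errors="replace")[:max_chars]

# Works with any scheme urllib understands, including data: URLs:
print(fetch_text_from_url("data:,Hello%20world"))  # Hello world
```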
Configure your model
Set up your language model with the right parameters for your use case. For example:
Depending on the model and provider you choose, initialization parameters may vary; refer to the provider's reference pages for details.
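A typical set of settings looks like the following sketch. The model name and every value here are examples only, and the exact parameter names accepted at initialization differ between provider integrations:

```python
# Illustrative model settings for a research agent. A low temperature
# favors consistent, repeatable answers over creative variation.
MODEL_CONFIG = {
    "model": "gpt-4o-mini",  # example model name
    "temperature": 0.1,      # low temperature -> more consistent responses
    "max_tokens": 1024,      # cap on response length
    "timeout": 30,           # seconds before giving up on a request
}

print(MODEL_CONFIG["model"])
```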
Add memory
Add memory to your agent to maintain state across interactions. This allows
the agent to remember previous conversations and context.
In production, use a persistent checkpointer that saves message history to a database.
See Add and manage memory for more details.
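The idea behind a checkpointer can be sketched in plain Python: message history is stored per conversation thread, keyed by a thread ID. The class below is illustrative only; a production checkpointer would write to a database instead of process memory:

```python
from collections import defaultdict

class InMemoryHistory:
    """Toy checkpointer: keeps a message list per conversation thread."""

    def __init__(self):
        self._threads: dict[str, list[dict]] = defaultdict(list)

    def append(self, thread_id: str, role: str, content: str) -> None:
        self._threads[thread_id].append({"role": role, "content": content})

    def messages(self, thread_id: str) -> list[dict]:
        # Return a copy so callers cannot mutate stored history.
        return list(self._threads[thread_id])

history = InMemoryHistory()
history.append("thread-1", "user", "My name is Ada.")
history.append("thread-1", "assistant", "Nice to meet you, Ada!")
print(len(history.messages("thread-1")))  # 2
```

Because history is keyed by thread ID, two users (or two conversations) never see each other's context.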
Create and run the agent
Now assemble your agent with all the components and run it. There are two different frameworks for creating agents: LangChain agents and deep agents.
Both LangChain and deep agents provide you with fine-grained control over tools, memory, and more.
The main difference is that deep agents come with a range of commonly useful capabilities already built in, such as planning, file system tools, and subagents. Use deep agents when you want maximum capability with minimal setup; choose LangChain agents when you need fine-grained control. Let's try both:
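The original tabbed code samples are not reproduced here, but the assembly step itself (system prompt + tools + per-thread memory wired around a model call) can be sketched without either framework. Every name below is illustrative, and `scripted_model` stands in for a real LLM call:

```python
def scripted_model(messages: list[dict]) -> str:
    # Stand-in for an LLM: reports how much context it received.
    return f"(model saw {len(messages)} messages)"

class Agent:
    """Toy assembly of the pieces built in the previous sections."""

    def __init__(self, system_prompt: str, tools: dict):
        self.system_prompt = system_prompt
        self.tools = tools
        self.memory: dict[str, list[dict]] = {}

    def invoke(self, thread_id: str, user_input: str) -> str:
        # Seed each new thread with the system prompt.
        history = self.memory.setdefault(
            thread_id, [{"role": "system", "content": self.system_prompt}]
        )
        history.append({"role": "user", "content": user_input})
        reply = scripted_model(history)
        history.append({"role": "assistant", "content": reply})
        return reply

agent = Agent("You are a research assistant.", tools={})
print(agent.invoke("t1", "Hello"))         # (model saw 2 messages)
print(agent.invoke("t1", "Still there?"))  # (model saw 4 messages)
```

The growing message count on the second call shows the memory doing its job: the same thread accumulates context across invocations.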
Review the results
The results will differ based on the model and the execution. If you look at the output on both tabs, you will notice that the LangChain agent provides answers, but they are estimates; the agent lacks the tools to answer this question. You may also get errors that the prompt is too long. The deep agent, on the other hand, can:
- Plan its approach using the built-in `write_todos` tool to break down the research task.
- Load the file by calling the `fetch_text_from_url` tool to gather information.
- Manage context by using file system tools (`grep` and `read_file`).
- Spawn subagents as needed to delegate complex subtasks to specialists.
Trace agent calls
Most interesting applications you build with LangChain make many calls to LLMs. As these applications become more complex, it is important to be able to inspect exactly what is going on inside your agent. The best way to do this is with LangSmith. Sign up for a LangSmith account and set the following environment variables to start logging traces:
Next steps
You now have agents that can:
- Understand context and remember conversations
- Use tools intelligently
- Provide structured responses in a consistent format
- Handle user-specific information through context
- Maintain conversation state across interactions
- Plan, research, and synthesize (deep agents only)
To keep going, explore these topics:
- LangChain agents: Add and manage memory, deploy to production
- Deep Agents: Customization options, persistent memory, deploy to production

