Open Deep Research: A Configurable Open-Source Deep Research Agent

Summary

Open Deep Research is a fully open-source, configurable agent designed for deep research applications. It supports various model providers, search tools, and Model Context Protocol (MCP) servers, offering performance comparable to other popular deep research agents. Developed by LangChain, it leverages LangGraph for robust agent orchestration and provides extensive customization options.

Repository Info

Updated on May 15, 2026
View on GitHub

Introduction

Open Deep Research is a powerful, open-source deep research agent developed by LangChain. It performs comprehensive research and works with a wide range of model providers, search tools, and Model Context Protocol (MCP) servers. The project stands out for its configurability and for performance on par with many leading deep research agents, as demonstrated on the Deep Research Bench leaderboard. The agent is designed to be simple to use while offering deep customization, making it a versatile tool for automated research tasks.

Installation

To get started with Open Deep Research, follow these steps:

  1. Clone the repository and activate a virtual environment:

    git clone https://github.com/langchain-ai/open_deep_research.git
    cd open_deep_research
    uv venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    
  2. Install dependencies:

    uv sync
    # or
    uv pip install -r pyproject.toml
    
  3. Set up your .env file:

    cp .env.example .env
    
  4. Launch the agent with the LangGraph server locally:

    uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph dev --allow-blocking
    

    This command starts a local LangGraph server and opens the LangGraph Studio UI in your browser, providing access to the agent's interface.
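Step 3 above copies `.env.example` into place; you then fill in credentials for the providers you plan to use. A minimal sketch of what the file might contain, assuming the default OpenAI models and Tavily search (the exact variable names are defined in `.env.example`, so treat these as illustrative):

    # API keys for the default model provider and search tool (illustrative names;
    # consult .env.example in the repository for the authoritative list)
    OPENAI_API_KEY=sk-...
    TAVILY_API_KEY=tvly-...

    # Optional: enable LangSmith tracing for debugging agent runs
    LANGSMITH_API_KEY=lsv2-...

Only the keys for the providers you actually configure need to be set.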

Examples

Once the LangGraph server is running, you can interact with the agent through the LangGraph Studio UI. Simply ask a question in the messages input field and click Submit. The agent's behavior can be extensively customized through its configurations, accessible via the "Manage Assistants" tab in the Studio UI.

Key configuration areas include:

  • LLM Selection: Open Deep Research supports a wide range of LLM providers. Different models can be assigned for specific tasks such as summarization, research, compression, and final report generation. For example, openai:gpt-4.1-mini might be used for summarization, while openai:gpt-4.1 handles research and report writing. Models must support structured outputs and tool calling.
  • Search API: By default, the agent uses the Tavily search API, but it is compatible with various search tools and offers full MCP compatibility for native web search with Anthropic and OpenAI.
  • Other Settings: The configuration.py file contains numerous other settings to fine-tune the agent's behavior, all accessible through the LangGraph Studio UI.
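To make the per-task model assignment concrete, here is a hedged Python sketch of the kind of configuration the Studio UI exposes. The field names are illustrative, not the exact attributes of `configuration.py`; the `provider:model` identifier format is taken from the examples above.

```python
# Hypothetical per-task model assignments mirroring the configuration areas
# described above. Field names are assumptions; see configuration.py in the
# repository for the authoritative schema.
research_config = {
    "summarization_model": "openai:gpt-4.1-mini",  # lightweight model for summarizing sources
    "research_model": "openai:gpt-4.1",            # main model driving the research loop
    "compression_model": "openai:gpt-4.1",         # condenses gathered findings
    "final_report_model": "openai:gpt-4.1",        # writes the final report
    "search_api": "tavily",                        # default search backend
}

def provider_of(model_id: str) -> str:
    """Split a 'provider:model' identifier into its provider part."""
    return model_id.split(":", 1)[0]

# Every model in this sketch comes from the same provider:
providers = {provider_of(m) for k, m in research_config.items() if k.endswith("_model")}
print(providers)
```

Keeping the cheap model on summarization while reserving the stronger model for research and report writing is the cost/quality trade-off the example in the bullet above describes.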

Why Use It

Open Deep Research offers several compelling reasons for its adoption:

  • High Performance: It achieves competitive results on the Deep Research Bench leaderboard, ensuring high-quality research output.
  • Extensive Configurability: Users can easily swap out LLM providers, search tools, and other parameters to tailor the agent to specific research needs and available resources.
  • Fully Open Source: The project's open-source nature allows for transparency, community contributions, and deep customization beyond the default configurations.
  • Flexible Deployment: Beyond local LangGraph Studio, it can be deployed to the LangGraph Platform or integrated into the Open Agent Platform (OAP), enabling non-technical users to configure and utilize the agent.
  • Educational Resources: LangChain provides a free course on building open deep research with LangGraph, making it easier for developers to understand and extend the agent.

Links