agents deep research

Basic Information

This repository implements an agentic deep research assistant built on the OpenAI Agents SDK. It provides two complementary modes: IterativeResearcher, which runs a continuous research loop on a single topic, and DeepResearcher, which produces structured, multi-section reports by running parallel IterativeResearcher instances, one per section. The package uses a multi-agent architecture whose components include a Knowledge Gap Agent, a Tool Selector, specialized Tool Agents (a web searcher and a website crawler by default), and a Writer Agent that synthesizes findings into a final report. It can be used as a Python module or via a command line interface, and it is designed to be extended with custom tools and agents. The system supports any model provider that follows the OpenAI API spec and includes optional trace monitoring of agent interactions.
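The research loop described above (identify a knowledge gap, select a tool, run it, then synthesize) can be sketched in plain Python. Every name below is an illustrative stand-in for the corresponding agent, not the package's actual API:

```python
# Toy sketch of the iterative research loop; all functions are
# illustrative stand-ins, not the real deep_researcher API.

def find_knowledge_gap(topic, findings):
    """Stand-in for the Knowledge Gap Agent: return the next open question, or None."""
    if not findings:
        return f"What are the key facts about {topic}?"
    return None  # no remaining gaps in this toy example

def select_tool(gap):
    """Stand-in for the Tool Selector: pick a tool agent for the gap."""
    return "web_search"

def run_tool(tool, gap):
    """Stand-in for a Tool Agent (web search or website crawler)."""
    return f"[{tool}] findings for: {gap}"

def write_report(topic, findings):
    """Stand-in for the Writer Agent: synthesize findings into a report."""
    return f"Report on {topic}:\n" + "\n".join(findings)

def iterative_research(topic, max_iterations=5):
    findings = []
    for _ in range(max_iterations):
        gap = find_knowledge_gap(topic, findings)
        if gap is None:
            break  # research complete: no knowledge gaps remain
        tool = select_tool(gap)
        findings.append(run_tool(tool, gap))
    return write_report(topic, findings)

print(iterative_research("quantum computing"))
```

The real system replaces each stand-in with an LLM-backed agent, but the control flow (loop until no gap remains or a limit such as max iterations is hit) follows this shape.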

App Details

Features
- IterativeResearcher and DeepResearcher workflows for short- and long-form research.
- An LLMConfig abstraction to swap and configure models at runtime.
- Built-in agent types: Knowledge Gap Agent, Tool Selector, Tool Agents (Web Search, Website Crawler), and a Writer Agent.
- Parallel section research, tool calling, and concurrent web page ingestion.
- Command line and Python module usage, with example outputs.
- Serper integration for Google searches by default; other search providers and model endpoints compatible with the OpenAI API spec can be substituted.
- Optional OpenAI trace monitoring.
- Installation via pip, with a sample .env configuration.
- Extensible, with instructions for adding custom tool agents.
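Given the Serper default and OpenAI-spec model support noted above, the .env configuration plausibly holds API keys like the following. This is a hypothetical sketch; the variable names are assumptions, not the repository's documented keys:

```shell
# Hypothetical .env sketch -- variable names are illustrative assumptions.
OPENAI_API_KEY=sk-placeholder   # key for a model provider following the OpenAI API spec
SERPER_API_KEY=placeholder      # key for Serper, the default Google search provider
```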
Use Cases
Automates and accelerates long-form research by iteratively identifying knowledge gaps, selecting appropriate tools, executing searches and site crawls, and synthesizing the results into coherent sections and final reports. DeepResearcher composes an outline and runs parallel researchers for scalable multi-section output, while IterativeResearcher suits shorter reports and lower API usage. Runtime LLM configuration and CLI options let users choose providers and set constraints such as maximum iterations, time, and output length. Trace monitoring helps observe agent interactions and aids debugging. The repo documents limitations, such as rate limits and the difficulty of enforcing an exact output length, and provides extension points for custom tools and local models.
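The DeepResearcher pattern of running one researcher per outline section in parallel maps naturally onto asyncio. A minimal sketch, with illustrative names rather than the package's real API:

```python
# Sketch of DeepResearcher-style parallelism: one researcher per outline
# section, run concurrently. All names are illustrative assumptions.
import asyncio

async def research_section(title: str) -> str:
    """Stand-in for one IterativeResearcher run on a single section."""
    await asyncio.sleep(0)  # placeholder for real search/crawl work
    return f"## {title}\n(findings for {title})"

async def deep_research(topic: str, outline: list[str]) -> str:
    # Launch all section researchers concurrently and gather their output.
    sections = await asyncio.gather(*(research_section(t) for t in outline))
    return f"# {topic}\n\n" + "\n\n".join(sections)

report = asyncio.run(deep_research("Solar energy", ["History", "Technology", "Economics"]))
print(report)
```

Because each section is researched independently, total wall-clock time is bounded by the slowest section rather than the sum of all sections, which is what makes the multi-section mode scale.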
