Basic Information

Chidori is an open-source orchestrator, runtime, and IDE for building durable AI agents and long-running workflows. It provides a reactive runtime, implemented in Rust, that executes user code written in Python or JavaScript and brokers interactions with LLMs and external services. The project focuses on giving developers tools to understand and control agent execution: runs can be paused, resumed, and preserved as state, and alternate execution paths can be explored. Typical use involves authoring agents as single-file or multi-file projects, running the provided visual debugger entrypoint chidori-debugger, and configuring an LLM backend such as a local LiteLLM proxy or OpenAI directly. The repository supplies examples demonstrating inter-language execution and LLM-driven tasks, and the project deliberately avoids introducing a new domain-specific language, integrating with familiar programming patterns instead.
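
As a concrete reference for the proxy setup mentioned above, the following is a minimal sketch that sends a test request to a local LiteLLM proxy through the standard OpenAI Python client. The port, model alias, and placeholder API key are assumptions that depend on how the proxy was started; the snippet only sanity-checks the OpenAI-compatible endpoint that agent tooling such as Chidori would then be pointed at.

    # Minimal sketch: verify a local LiteLLM proxy is reachable before pointing
    # agent tooling at it. The base_url, model alias, and dummy API key are
    # assumptions about how the proxy was launched.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:4000",  # assumed local LiteLLM proxy address
        api_key="sk-placeholder",          # local proxies often accept any key
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model alias configured in the proxy
        messages=[{"role": "user", "content": "Reply with the single word: ok"}],
    )
    print(response.choices[0].message.content)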

App Details

Features
Chidori bundles a Rust runtime with first-class support for executing Python and JavaScript code, plus a visual debugging UI called chidori-debugger. It records full inputs and outputs for observability, caches agent behavior, resumes agents from partially executed states, and enables branching and time-travel debugging to revert to earlier points or explore alternative execution paths. The system exposes an execution graph both for monitoring and for generating code during evaluation with LLMs. By default it includes a simple local vector database for development and integration points for LLM proxies, and the project documents plans for containerized nodes, additional vector database support, multiple LLM sources, and more secure code interpreter environments.
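
The execution graph and time-travel features described above can be pictured as a tree of recorded steps, where each node captures the inputs, outputs, and resulting state of one step, and branching means attaching a new child to an earlier node. The sketch below is a simplified, standalone illustration of that idea in plain Python; it is not Chidori's actual data model or API.

    # Conceptual sketch of an execution graph: each node records one step, and
    # any earlier node can be revisited and branched from. This mirrors the idea
    # of time-travel debugging but is not Chidori's real implementation.
    from dataclasses import dataclass, field
    from typing import Any, Optional

    @dataclass
    class ExecutionNode:
        inputs: dict[str, Any]
        outputs: dict[str, Any]
        state: dict[str, Any]
        parent: Optional["ExecutionNode"] = None
        children: list["ExecutionNode"] = field(default_factory=list)

        def step(self, inputs: dict[str, Any], run) -> "ExecutionNode":
            # Run one step against a copy of the current state and record it as
            # a child, so this node can be revisited or branched from later.
            new_state = dict(self.state)
            outputs = run(inputs, new_state)
            child = ExecutionNode(inputs, outputs, new_state, parent=self)
            self.children.append(child)
            return child

    def increment(inputs, state):
        state["count"] += inputs["by"]
        return {"count": state["count"]}

    root = ExecutionNode(inputs={}, outputs={}, state={"count": 0})
    a = root.step({"by": 1}, increment)   # one path: count == 1
    b = root.step({"by": 10}, increment)  # branch from the same root: count == 10
    print(a.state["count"], b.state["count"], root.state["count"])  # 1 10 0
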
Use Cases
Chidori reduces accidental complexity when building AI-driven applications by giving developers deterministic control over agent execution, state, and debugging. The observability features let teams trace exactly how inputs produced outputs, improving debugging and incident analysis. Pause-and-resume and state snapshots let human-in-the-loop workflows and iterative development continue without rerunning entire processes. Branching and time travel make it possible to explore alternative outcomes and revert errors, which aids robust error handling and system resilience. Support for Python and JavaScript in the same runtime and tooling for code generation during evaluation speed up prototyping of complex agent behaviors and integrations with LLMs and external APIs.
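
To make the pause-and-resume use case concrete, the sketch below shows a generic checkpointing pattern in plain Python: a workflow writes its intermediate state to disk at a human-review point, and a later run resumes from that file instead of recomputing the earlier step. This only illustrates the pattern that Chidori's runtime is described as automating; the file name and step functions are illustrative and do not use Chidori's APIs.

    # Generic pause-and-resume sketch: persist intermediate results at a
    # human-review checkpoint, then resume later without rerunning step one.
    # The checkpoint file and step functions are hypothetical examples.
    import json
    from pathlib import Path

    CHECKPOINT = Path("workflow_checkpoint.json")

    def step_one() -> dict:
        # Imagine an expensive LLM call or data-gathering step here.
        return {"draft": "summary of collected data"}

    def step_two(state: dict) -> str:
        return state["draft"].upper()

    if CHECKPOINT.exists():
        # Resume: load the state captured before the pause.
        state = json.loads(CHECKPOINT.read_text())
    else:
        # First run: do the expensive work, then pause for human review.
        state = step_one()
        CHECKPOINT.write_text(json.dumps(state))
        raise SystemExit("Paused for review; edit the checkpoint file and rerun.")

    print(step_two(state))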
