Basic Information

Burr is a developer-focused framework for building stateful decision-making applications such as chatbots, agents, simulations, and other LLM-driven or non-LLM workflows. It lets you express application logic as an explicit state machine built from simple Python functions and actions. The project includes a dependency-free core library for defining actions and transitions, an open-source telemetry UI for real-time monitoring and tracing of executions, and pluggable persisters to save and load application state. Burr is intended to make it easier to manage complex state, capture execution traces, and integrate with existing model vendors and storage systems. The repository ships examples (chatbots, RAG, an adventure game, an email assistant, a simulation, hyperparameter tuning) and developer tooling so engineers can prototype, observe, and run stateful AI applications locally or on their own infrastructure.
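The core idea above, application logic expressed as an explicit state machine built from plain Python functions, can be sketched in a few lines. This is an illustration of the pattern only, not Burr's actual API; the action names and transition table below are hypothetical:

```python
# Illustrative sketch of the explicit state-machine pattern (not Burr's API).
# Actions are plain functions from a state dict to a new state dict;
# TRANSITIONS names which action runs next. All names here are hypothetical.

def ask(state):
    return {**state, "question": "What is 2 + 2?"}

def answer(state):
    return {**state, "reply": "4", "done": True}

ACTIONS = {"ask": ask, "answer": answer}
TRANSITIONS = {"ask": "answer", "answer": None}  # a linear two-step machine

def run(start="ask", state=None):
    state = state or {}
    current = start
    while current is not None:
        state = ACTIONS[current](state)
        current = TRANSITIONS[current]
    return state
```

Calling `run()` walks the machine from `ask` to `answer` and returns the final state dict; real applications would branch on state instead of following a fixed linear path.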

App Details

Features
- A low-abstraction Python API that models applications as explicit state machines, using action decorators and an ApplicationBuilder.
- An open-source telemetry UI that visualizes execution traces and real-time state changes, with example demo apps to explore.
- Pluggable persistence integrations for memory and state storage, so application snapshots can be saved and restored.
- Vendor-agnostic integrations, so LLMs and external tools can be used interchangeably.
- Bundled examples covering chatbots, RAG, games, email assistants, simulations, and ML tuning to demonstrate common patterns.
- A streamlined install-and-run experience: a pip install plus a single command to launch the UI.
- Framework-agnostic by design, integrating with other developer libraries and observability tooling.
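The pluggable-persistence idea can be sketched with a minimal JSON-file persister. This is a hypothetical interface assuming state is a JSON-serializable dict, not Burr's real persister API:

```python
import json
from pathlib import Path

# Minimal sketch of a state persister: saves and reloads application state
# snapshots keyed by an application id. Hypothetical interface, shown only
# to illustrate the save/restore pattern.

class JSONFilePersister:
    def __init__(self, directory: str):
        self.dir = Path(directory)
        self.dir.mkdir(parents=True, exist_ok=True)

    def save(self, app_id: str, state: dict) -> None:
        # One snapshot file per application id.
        (self.dir / f"{app_id}.json").write_text(json.dumps(state))

    def load(self, app_id: str):
        path = self.dir / f"{app_id}.json"
        return json.loads(path.read_text()) if path.exists() else None
```

Swapping the backend (a database, a key-value store) behind the same save/load interface is what makes persistence "pluggable": the application code never changes.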
Use Cases
Burr helps teams build, debug, and operate complex decision logic by making state explicit and observable. The UI and tracing features let developers inspect state snapshots and replay executions, making it easier to reproduce and diagnose issues. Pluggable persisters and integrations enable persistent memory and reuse of existing storage or telemetry systems. The simple action/transition model reduces boilerplate for LLM orchestration while remaining vendor-agnostic, speeding up prototype-to-production cycles. Bundled examples and built-in demos accelerate onboarding. The framework supports non-LLM use cases as well, so simulations and ML workflows can reuse the same stateful orchestration patterns. Users report faster development, cleaner implementations, and improved team adoption when migrating from more monolithic libraries.
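The inspect-and-replay workflow can be sketched as recording a snapshot of state after every action so a run can be re-examined step by step afterwards. This is a plain-Python illustration of the idea with hypothetical actions; Burr's tracking and UI handle this automatically:

```python
import copy

# Record (action_name, state_snapshot) after every step so the run can be
# inspected later. The actions here are hypothetical placeholders.

def greet(state):
    return {**state, "greeting": "hello"}

def count(state):
    return {**state, "n": state.get("n", 0) + 1}

def run_traced(actions, state=None):
    state = state or {}
    trace = []  # one entry per executed action
    for act in actions:
        state = act(state)
        trace.append((act.__name__, copy.deepcopy(state)))
    return state, trace

final, trace = run_traced([greet, count, count])
```

Walking `trace` shows exactly what each step produced, which is the raw material for the kind of replay-and-debug workflow described above.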
