Basic Information

Burr is a developer-focused Python framework for building stateful decision-making applications such as chatbots, agents, simulations, and other workflows that need explicit state management. It expresses an application as a state machine composed of simple Python actions and provides a minimal, dependency-free core API for declaring what each action reads, what it writes, and how control transitions between actions. Burr is aimed at projects that use LLMs but is framework-agnostic and works for non-LLM use cases as well. The repository includes a telemetry UI for live tracing and inspection, example applications (a chatbot, RAG-based chat, an adventure game, an email assistant, a simulation, and ML tuning), and pluggable persisters for saving and restoring application state. Installation is via pip, and the package ships with a demo UI server and example code for getting started quickly. Documentation and walkthroughs help developers adopt the state-machine approach.
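The core idea, an application as a state machine of plain Python actions over explicit state, can be sketched in a few lines. This is an illustrative stdlib-only sketch of the pattern, not Burr's actual API; the action names, transition table, and `run` helper are all hypothetical.

```python
# Conceptual sketch (not Burr's API): an application expressed as a state
# machine of plain Python actions, each reading and writing explicit state.

def greet(state):
    # Reads "name", writes "greeting".
    return {**state, "greeting": f"Hello, {state['name']}!"}

def count(state):
    # Reads and writes "turns".
    return {**state, "turns": state.get("turns", 0) + 1}

ACTIONS = {"greet": greet, "count": count}

# Transition table: current action -> next action (None halts the machine).
TRANSITIONS = {"greet": "count", "count": None}

def run(entrypoint, state):
    """Run actions from the entrypoint until a terminal transition."""
    current = entrypoint
    while current is not None:
        state = ACTIONS[current](state)
        current = TRANSITIONS[current]
    return state

final = run("greet", {"name": "Ada"})
# final == {"name": "Ada", "greeting": "Hello, Ada!", "turns": 1}
```

Because every state change flows through an action's return value, the entire run can be traced, replayed, or persisted step by step, which is what makes the state-machine framing useful for debugging and durable memory.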


App Details

Features
Burr provides a low-abstraction Python library for defining stateful actions and transitions as an explicit state machine. It includes an open-source telemetry UI for monitoring, tracing, and debugging executions in real time, along with demo applications that illustrate usage. The framework offers pluggable persisters for saving and loading application state, integrations with common tooling, and examples covering both LLM and non-LLM use cases. The core API centers on simple action decorators that declare reads and writes, an ApplicationBuilder that assembles actions and transitions, and runtime tracing that the UI surfaces. Burr is framework-agnostic, ships on PyPI, includes example integrations with Streamlit, and interoperates with libraries such as Hamilton. The README and docs include quick-start commands, examples, and a demo server for exploration.
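The decorator-plus-builder shape described above can be mimicked with plain Python. This is a hedged sketch of the pattern only; the `action` decorator and `ApplicationBuilder` below are simplified stand-ins with hypothetical signatures, not Burr's verbatim API.

```python
from functools import wraps

def action(reads, writes):
    """Illustrative decorator (hypothetical signature): records which
    state keys an action reads and writes, so tooling can inspect them."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(state):
            return fn(state)
        wrapper.reads, wrapper.writes = reads, writes
        return wrapper
    return decorate

@action(reads=["count"], writes=["count"])
def increment(state):
    return {**state, "count": state["count"] + 1}

class ApplicationBuilder:
    """Minimal builder mimicking the chained style the README describes."""
    def __init__(self):
        self.actions, self.transitions, self.entry = {}, {}, None
    def with_actions(self, **actions):
        self.actions.update(actions)
        return self
    def with_transitions(self, *pairs):
        # Each pair is (from_action, to_action); None means halt.
        self.transitions.update(dict(pairs))
        return self
    def with_entrypoint(self, name):
        self.entry = name
        return self
    def build(self):
        return self

app = (
    ApplicationBuilder()
    .with_actions(increment=increment)
    .with_transitions(("increment", None))
    .with_entrypoint("increment")
    .build()
)

state = app.actions[app.entry]({"count": 0})
# state["count"] == 1; increment.reads == ["count"]
```

Declaring reads and writes up front is what lets a telemetry UI show, per step, exactly which parts of state each action touched.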
Use Cases
Burr helps developers manage complex application state and decision logic by modeling behavior as an explicit state machine, which simplifies reasoning about control flow, debugging, replay, and persistence. The telemetry UI lets teams observe runs and inspect state changes in real time, aiding troubleshooting and iteration. Pluggable persisters provide durable memory and checkpoints, so applications can persist conversation history, simulation state, or training metadata. Burr’s simple action primitives reduce the boilerplate of wiring LLM calls, human input, or other services into a cohesive workflow while remaining agnostic to which model or vendor you use. Example projects and demos accelerate learning and prototyping, and roadmap items and integrations aim to ease deployment and scaling, making Burr useful for building reproducible, inspectable, and modular AI-driven applications.
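The checkpoint-and-resume idea behind pluggable persisters can be sketched with a JSON file. This is an illustrative stand-in, not Burr's persister interface; the `JSONPersister` class, its `save`/`load` methods, and the `"chat-1"` application id are all hypothetical.

```python
import json
import os
import tempfile

class JSONPersister:
    """Illustrative persister (hypothetical interface, not Burr's API):
    checkpoints application state to a JSON file, keyed by application id,
    so a run can be restored later."""

    def __init__(self, path):
        self.path = path

    def save(self, app_id, state):
        # Keep one latest checkpoint per application id.
        data = self._load_all()
        data[app_id] = state
        with open(self.path, "w") as f:
            json.dump(data, f)

    def load(self, app_id):
        # Returns None when no checkpoint exists for this id.
        return self._load_all().get(app_id)

    def _load_all(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "state.json")
persister = JSONPersister(path)
persister.save("chat-1", {"history": ["hi"], "turns": 1})
restored = persister.load("chat-1")
# restored == {"history": ["hi"], "turns": 1}
```

A real persister would target a database rather than a flat file, but the contract is the same: serialize state at a checkpoint, key it by application id, and reload it to resume the state machine where it left off.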
