Features
- Stateful agent runtime with persistent core and archival memory, designed for transparent long-term memory management and reasoning.
- Model-agnostic integration with multiple LLM backends and embedding providers, with examples referencing OpenAI, Anthropic, vLLM, and Ollama.
- REST API plus Python and TypeScript SDKs for programmatic control and automation.
- Agent Development Environment (ADE): a graphical UI for creating, testing, debugging, and observing agents.
- CLI tools for quick agent creation and interaction.
- Docker image and pip install paths, with configuration via environment variables and database backends.
- Database support guidance that emphasizes PostgreSQL for migrations and persistence, with SQLite as the default under a pip install.
- Open-source governance and contribution guidance, with example tools and memory operations shown in CLI snippets.
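The core/archival split above can be sketched minimally in Python. Everything here (the `AgentMemory` class and its method names) is a hypothetical illustration, not Letta's actual SDK: the point is that a small, editable "core" memory travels with every prompt, while an unbounded "archival" store is searched on demand.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these names are hypothetical, not Letta's API.
@dataclass
class AgentMemory:
    # Core memory: compact, editable blocks included in every prompt.
    core: dict = field(default_factory=lambda: {"persona": "", "human": ""})
    # Archival memory: unbounded store queried on demand.
    archival: list = field(default_factory=list)

    def core_replace(self, block: str, old: str, new: str) -> None:
        """Edit a core block in place (the agent would call this as a tool)."""
        self.core[block] = self.core[block].replace(old, new)

    def archival_insert(self, text: str) -> None:
        self.archival.append(text)

    def archival_search(self, query: str) -> list:
        """Naive substring search; a real store would use embeddings."""
        return [t for t in self.archival if query.lower() in t.lower()]

mem = AgentMemory()
mem.core["human"] = "Name: unknown"
mem.core_replace("human", "unknown", "Ada")       # agent learns the user's name
mem.archival_insert("Ada prefers PostgreSQL for persistence.")
print(mem.core["human"])                          # Name: Ada
print(mem.archival_search("postgres"))
```

Persisting both stores to a database is what lets an agent keep its identity across sessions instead of resetting with each context window.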
Use Cases
Letta helps developers and teams build, deploy, and operate LLM agents that maintain persistent identity and memory across sessions. It runs agents as services behind a server runtime that exposes a REST API and SDKs, so they can be integrated into applications or developer workflows. The ADE offers a visual way to test, observe, and debug agent behavior while keeping agent data local when self-hosted. Docker and CLI options speed up deployment and experimentation, and database-backed persistence enables long-term state and controlled migrations when using PostgreSQL. Model-agnostic connectors allow switching or combining LLM providers. Overall, Letta reduces the engineering overhead of building production-ready, stateful conversational or task agents, and provides tools for observability and iteration.
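The two deployment paths mentioned above might look roughly like this. Command names follow the project's documented install routes, but the image tag, port number, and environment-variable names shown here are assumptions to verify against the current Letta docs.

```shell
# Path 1: pip install (defaults to SQLite-backed persistence)
pip install letta
letta server      # start the runtime and expose the REST API locally
letta run         # or create and chat with an agent from the CLI

# Path 2: Docker (recommended with PostgreSQL for persistence/migrations)
# The 8283 port and OPENAI_API_KEY variable are assumptions; check the docs.
docker run -p 8283:8283 \
  -e OPENAI_API_KEY="sk-..." \
  letta/letta:latest
```

Once the server is running, the REST API and the Python/TypeScript SDKs talk to the same runtime, so agents created from the CLI or ADE remain accessible programmatically.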