Basic Information

Letta is an open source framework for building and running stateful LLM agents with transparent long-term memory and advanced reasoning. It provides a self-hosted server runtime that persists agents and their memories to a database so agents can live indefinitely. The project is white-box and model-agnostic, allowing connection to multiple LLM backends and embedding providers. Letta includes a REST API, Python and TypeScript SDKs, a command line interface for creating and chatting with agents, and an Agent Development Environment (ADE) graphical interface for creating, deploying and observing agents. The repository provides Docker and pip installation options and guidance for using PostgreSQL or the SQLite fallback. Letta was previously named MemGPT and is distributed under an open source license.
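To make the pieces above concrete, here is a minimal sketch of self-hosted usage through the Python SDK: it connects to a locally running server and creates an agent whose state and memory blocks the server persists. The package name letta_client, the Letta client class, the agents.create call, the default port 8283, and the model/embedding handles are assumptions drawn from Letta's SDK conventions rather than from this description, so verify them against the current documentation.

    # Minimal sketch under the assumptions stated above: create a persistent
    # agent on a self-hosted Letta server via the (assumed) Python SDK.
    from letta_client import Letta

    # Point the client at a locally running, self-hosted server.
    client = Letta(base_url="http://localhost:8283")

    # The server persists the agent and its memory blocks in its database,
    # so the agent outlives this script and can be reused in later sessions.
    agent = client.agents.create(
        model="openai/gpt-4o-mini",                 # illustrative model handle
        embedding="openai/text-embedding-3-small",  # illustrative embedding handle
        memory_blocks=[
            {"label": "human", "value": "The user's name is Sam."},
            {"label": "persona", "value": "I am a concise, helpful assistant."},
        ],
    )
    print(f"Created agent {agent.id}")  # store this ID to reach the agent later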

App Details

Features
Stateful agent runtime with persistent core and archival memory, designed for transparent long-term memory management and reasoning. Model-agnostic integration with multiple LLM backends and embedding providers, with examples referencing OpenAI, Anthropic, vLLM and Ollama. REST API plus Python and TypeScript SDKs for programmatic control and automation. Agent Development Environment (ADE) graphical UI for creating, testing, debugging and observing agents. CLI tools for quick agent creation and interaction. Docker image and pip install paths, with configuration via environment variables and selectable database backends. Database guidance that recommends PostgreSQL for persistence and migrations, with SQLite as the default for pip installs. Open-source governance and contribution guidance, with example tools and memory operations demonstrated in CLI snippets.
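As a rough illustration of the REST/SDK surface listed above, the sketch below sends a message to an existing agent through the Python SDK and prints the returned message stream. As before, the letta_client package, the agents.messages.create method, and the default local port are assumptions rather than details taken from this page.

    # Minimal sketch (assumed letta-client SDK, local server on port 8283):
    # send one user message to an existing agent and inspect what comes back.
    from letta_client import Letta

    client = Letta(base_url="http://localhost:8283")

    agent_id = "agent-..."  # placeholder: ID of an agent already on this server

    response = client.agents.messages.create(
        agent_id=agent_id,
        messages=[{"role": "user", "content": "What do you remember about me?"}],
    )

    # The response stream typically interleaves reasoning, tool-call, and
    # assistant messages; printing everything makes the agent's steps visible.
    for message in response.messages:
        print(message)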
Use Cases
Letta helps developers and teams build, deploy and operate LLM agents that maintain persistent identity and memory across sessions. It simplifies running agents as services behind a server runtime that exposes a REST API and SDKs, enabling integration into applications or developer workflows. The ADE offers a visual way to test, observe and debug agent behavior while keeping agent data local when self-hosted. Docker and CLI options speed up deployment and experimentation. Database-backed persistence enables long-term state and controlled migrations when using PostgreSQL. Model-agnostic connectors allow switching or combining LLM providers. Overall, Letta reduces engineering overhead for creating production-ready, stateful conversational or task agents and provides tools for observability and iteration.
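A hedged sketch of what that application integration might look like: a helper that, given only a stored agent ID, messages a long-lived agent in a later session and collects the assistant's reply. The message_type and content attributes used to filter replies are assumptions about the SDK's response objects and may differ in the current release.

    # Hedged sketch: application-side helper for chatting with a long-lived
    # agent across sessions, identified only by its stored ID.
    from letta_client import Letta


    def ask_agent(agent_id: str, text: str,
                  base_url: str = "http://localhost:8283") -> str:
        """Send one user message and return any assistant reply text."""
        client = Letta(base_url=base_url)
        response = client.agents.messages.create(
            agent_id=agent_id,
            messages=[{"role": "user", "content": text}],
        )
        # Assumption: assistant replies carry message_type "assistant_message"
        # and a text `content` field; other message types are skipped here.
        replies = [
            str(getattr(m, "content", ""))
            for m in response.messages
            if getattr(m, "message_type", "") == "assistant_message"
        ]
        return "\n".join(replies)


    if __name__ == "__main__":
        print(ask_agent("agent-...", "Summarize what we discussed last week."))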
