Basic Information

Graphiti is an open-source developer framework for building temporally-aware knowledge graphs tailored to AI agents and dynamic applications. It continuously ingests user interactions, structured and unstructured enterprise data, and external information into a queryable graph so agents can perform stateful reasoning, memory, and context-aware retrieval. The project powers Zep's context engineering platform and provides examples and tools for initializing indices, adding episodes, and executing hybrid searches. It targets developers who need real-time updates and precise point-in-time queries rather than batch RAG workflows. The repository includes core libraries, an MCP server for Model Context Protocol integrations, a FastAPI REST service, example quickstarts, and documentation for configuring database backends and multiple LLM/embedder providers. Installation is via pip and supports Neo4j or FalkorDB backends.
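The pip installation mentioned above can be sketched as follows; the `graphiti-core` package name matches the project, but treat any extras and prerequisites as assumptions to verify against the README:

```shell
# Core library; a running Neo4j or FalkorDB instance and an LLM provider
# key (e.g. OPENAI_API_KEY) are needed before ingesting data.
pip install graphiti-core

# Optional extras for alternative backends/providers exist; check the
# README for exact names before relying on them, e.g.:
# pip install "graphiti-core[falkordb]"
```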

App Details

Features
Graphiti emphasizes real-time incremental updates and a bi-temporal data model that records both when an event occurred and when it was ingested, enabling accurate historical queries. It offers a hybrid retrieval stack combining semantic embeddings, BM25 keyword search, and graph traversal, with optional graph-distance reranking. Developers can define custom entity and edge schemas using Pydantic models. Supported backends include Neo4j and FalkorDB, and supported LLM and embedding providers include OpenAI, Azure OpenAI, Google Gemini, Anthropic, Groq, and local Ollama models. The repo also contains an MCP server for agent integrations, a FastAPI-based REST service, telemetry with opt-out controls, concurrency tuning via the SEMAPHORE_LIMIT environment variable, and quickstart examples and guides.
Use Cases
Graphiti helps teams build agents that need evolving context and reliable temporal reasoning by removing the need for costly batch recomputation. Its incremental ingestion and hybrid search reduce query latency and improve relevance for changing datasets, making it suitable for agent memory, task automation, and applications that require accurate point-in-time answers. Integration with common LLM providers and local model options supports privacy and cost-control tradeoffs. The MCP server and REST API make it straightforward to connect assistants and services. Telemetry and configurable performance settings aid deployment and troubleshooting. Example projects, documentation, and schema customization accelerate developer adoption in production and research settings.