Basic Information

Memary is an open source memory layer designed to emulate human memory for AI agents and to integrate persistent memory and knowledge graph capabilities into agent workflows. It provides a memory module, knowledge graph integration, and a routing agent implementation so developers can capture, store, and query agent experiences and facts. The project is built to work with locally hosted models (via Ollama) or remote models such as gpt-3.5-turbo and can write agent outputs back into a graph store. It includes a Streamlit demo application, configuration guidance for credentials and graph databases, and support for multi-graph setups to manage separate agents or users. The repository targets developers who want to augment agents with structured, time-aware memory and contextual retrieval mechanisms rather than a standalone end-user chatbot.

App Details

Features
- Persistent memory stream that records entities and timestamps to capture the breadth of a user's exposure.
- Entity knowledge store that tracks frequency and recency to model depth of knowledge and rank relevant entities.
- Knowledge graph integration using graph databases (the README mentions FalkorDB and Neo4j) and LlamaIndex for populating nodes from documents.
- Routing/ReAct agent implementation with default tools including search, vision (LLaVA), and location utilities.
- New context window that combines agent responses, the most relevant entities, and summarized chat history to avoid token overflow.
- Multi-graph support to create and switch between agent-specific graphs.
- Tooling to add and remove custom tools, write final responses back into the knowledge graph, and optionally compress and summarize memory.
Use Cases
Memary helps developers add human-like memory to agents so responses become more personalized, consistent, and context-aware over time. Automatic memory generation captures interactions without requiring extensive code changes, enabling timeline analysis and theme extraction as user interests evolve. The entity store ranks topics by frequency and recency to surface what a user knows well versus what they have merely encountered, which helps tailor follow-up responses and cut irrelevant context. Graph-backed recursive and multi-hop retrieval narrows searches to relevant subgraphs, reducing latency and improving answer relevance. Multi-graph features allow separate agent memories per user or persona. The included demo, model configuration options, and simple APIs for custom tools make it practical to integrate memary into existing agent stacks.
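The multi-hop retrieval idea, narrowing a search to a subgraph reachable from seed entities, can be sketched as a bounded breadth-first traversal. The adjacency-dict representation and function name below are assumptions for illustration; memary's actual retrieval runs against a graph database rather than an in-memory dict:

```python
# Illustrative multi-hop retrieval: expand outward from seed entities,
# at most `max_hops` edges deep, to collect a relevant subgraph.
# The plain-dict graph and function name are hypothetical stand-ins
# for queries against a real graph store such as Neo4j or FalkorDB.
from collections import deque


def multi_hop_retrieve(
    graph: dict[str, list[str]],
    seeds: list[str],
    max_hops: int = 2,
) -> set[str]:
    """Return all nodes within max_hops edges of any seed node (BFS)."""
    visited = set(seeds)
    frontier = deque((seed, 0) for seed in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # do not expand past the hop limit
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return visited
```

Bounding the hop count is the key trade-off: a small `max_hops` keeps the retrieved subgraph (and therefore prompt size and latency) small, while a larger value pulls in more distant but potentially relevant context.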