Basic Information

This repository provides a template implementation of a Model Context Protocol (MCP) server integrated with Mem0, giving AI agents persistent long-term memory backed by semantic indexing and vector storage. It demonstrates how to implement MCP tools that save memories, retrieve all stored memories, and run semantic searches, and it follows MCP server best practices so it can serve as a reference or starting point for your own implementation. The project includes example configuration for running with SSE or stdio transports, instructions for Docker and local Python execution, and guidance for connecting to PostgreSQL/Supabase for vector storage and to common LLM providers. Use it to learn MCP server structure, to extend with additional tools, or to provide memory capabilities to MCP-compatible clients.
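
As a quick orientation, the core pattern might look like the following minimal sketch. It assumes the FastMCP API from the official mcp Python SDK and the mem0 client library; the server name, tool body, and user_id default are illustrative placeholders, not the repository's exact code.

```python
from mcp.server.fastmcp import FastMCP
from mem0 import Memory  # assumes the mem0ai package

mcp = FastMCP("mem0-mcp")  # server name is an assumption
memory = Memory()          # picks up LLM/embedder/DB settings from its config

@mcp.tool()
def save_memory(text: str) -> str:
    """Store a piece of text as a semantically indexed long-term memory."""
    memory.add(text, user_id="default")  # user_id is an illustrative default
    return f"Saved memory: {text[:80]}"

if __name__ == "__main__":
    # TRANSPORT could select "sse" instead, per the configuration notes below
    mcp.run(transport="stdio")
```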

App Details

Features
The server exposes three core memory-management tools: save_memory to store semantically indexed content, get_all_memories to retrieve every stored entry, and search_memories to find relevant memories via semantic search. It supports both SSE and stdio transports and can be run via uv, Python, or Docker. Configuration is environment-driven, with variables for transport, host, port, LLM provider and API keys, embedding model, and the PostgreSQL/Supabase database URL. The codebase provides hooks for adding custom @mcp.tool(), @mcp.resource(), and @mcp.prompt() methods, a lifespan function for dependency setup, and a utils.py for helpers. Integration examples for MCP clients such as Claude Desktop, Windsurf, and n8n are included.
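
The environment-driven setup and lifespan hook described above could be wired roughly as follows. This is a sketch under assumptions: the Supabase vector-store config keys, the dict-shaped lifespan context, and the tool body are taken from the description rather than from the repository's actual code, and should be checked against the Mem0 and MCP SDK documentation.

```python
import os
from contextlib import asynccontextmanager
from mcp.server.fastmcp import Context, FastMCP
from mem0 import Memory

@asynccontextmanager
async def lifespan(server: FastMCP):
    """Build the Mem0 client once at startup and share it across tool calls."""
    config = {
        "llm": {"provider": os.environ.get("LLM_PROVIDER", "openai")},
        "vector_store": {
            "provider": "supabase",  # assumption: Supabase/pgvector backend
            "config": {"connection_string": os.environ["DATABASE_URL"]},
        },
    }
    yield {"memory": Memory.from_config(config)}

mcp = FastMCP("mem0-mcp", lifespan=lifespan)

@mcp.tool()
async def search_memories(query: str, ctx: Context) -> str:
    """Find relevant memories via semantic search."""
    memory = ctx.request_context.lifespan_context["memory"]
    return str(memory.search(query, user_id="default"))

if __name__ == "__main__":
    mcp.run(transport=os.environ.get("TRANSPORT", "stdio"))
```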
Use Cases
This project helps developers add robust persistent memory to AI agents by providing a working MCP server example that integrates semantic embeddings and vector storage. It saves development time by offering ready-made save, list, and search memory operations, Docker and stdio/SSE deployment patterns, and example MCP client configurations. It clarifies environment and provider setup for OpenAI, OpenRouter, or Ollama and shows how to connect to a PostgreSQL/Supabase vector store. The template is also intended as a portable example that can be extended with new tools, prompts, and resources, so teams can rapidly build MCP-compatible memory services or instruct coding assistants to reproduce the same structure and patterns.
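
To illustrate the client side, the sketch below drives such a server over stdio using the client helpers from the official mcp Python SDK; the command, args, and tool arguments are placeholders standing in for a real Claude Desktop or Windsurf configuration entry.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a subprocess; command/args are placeholders
    params = StdioServerParameters(command="python", args=["main.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "save_memory", {"text": "The user prefers concise answers."}
            )
            print(result)

asyncio.run(main())
```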
