Basic Information

Archon is a command center for AI coding assistants: it runs a Model Context Protocol (MCP) server that lets multiple AI clients share the same knowledge, context, and task management. It centralizes project documentation, crawled websites, uploaded PDFs, and code examples into a searchable knowledge base, and exposes that context to MCP-compatible assistants such as Claude Code, Cursor, and Kiro. The repository provides the full stack to run the service locally or in containers: a web UI, an API server, an MCP endpoint, an agent host, and a Supabase/Postgres database with vector storage. The goal is to improve AI-driven coding workflows by giving assistants consistent, up-to-date context, tools for RAG queries and task orchestration, and real-time collaboration between users and agent clients.
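
Since the MCP endpoint is the integration point for assistants, a minimal sketch of connecting a client to a locally running Archon MCP service with the official MCP Python SDK might look like the following. The localhost URL and port are assumptions about a default local deployment, not values documented here.

```python
# Minimal sketch: connect an MCP client to a locally running Archon MCP
# service over SSE and list the tools it exposes. The URL and port are
# assumptions about a default local setup; adjust to your deployment.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    async with sse_client("http://localhost:8051/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the RAG and project-operation tools the server exposes.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description or "")


asyncio.run(main())
```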

App Details

Features

Archon implements knowledge management features including smart web crawling, sitemap-aware crawling, document upload with intelligent chunking, automatic extraction and indexing of code examples, and vector semantic search. It includes a lightweight MCP server exposing ten MCP tools for RAG queries and project operations, multi-LLM support (OpenAI, Ollama, Google Gemini), hybrid RAG strategies with optional reranking, real-time streaming via Socket.IO, and an agents service for hosting PydanticAI agents. The stack is microservices-based: a React + Vite frontend, a FastAPI server, independent MCP and agents services, and Supabase/Postgres with PGVector for embeddings, deployed via Docker Compose with configurable ports and hostname and hot-reload workflows for rapid iteration.
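
The vector semantic search rests on PGVector-style similarity queries. The sketch below illustrates that general technique only; the table name, column names, and embedding model are invented placeholders, not Archon's actual schema or configuration.

```python
# Illustration of PGVector-backed semantic search, the technique behind
# vector queries over a knowledge base. Table/column names and the embedding
# model are placeholders, not Archon's actual schema or configuration.
import os

import psycopg
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding


def semantic_search(query: str, top_k: int = 5) -> list[str]:
    vec = embed(query)
    with psycopg.connect(os.environ["DATABASE_URL"]) as conn:
        rows = conn.execute(
            # <=> is pgvector's cosine-distance operator; smaller is closer.
            "SELECT content FROM documents "
            "ORDER BY embedding <=> %s::vector LIMIT %s",
            (str(vec), top_k),
        ).fetchall()
    return [row[0] for row in rows]


print(semantic_search("how do I configure the MCP server?"))
```
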
Use Cases

For teams and developers building or using AI coding assistants, Archon centralizes project context so LLMs produce more accurate, documentation-aware code suggestions and responses. It simplifies ingesting and searching documentation, surfaces code examples directly from their sources, and supports RAG strategies that improve answer quality. Task and project management are integrated with the knowledge base so assistants can reason about requirements and progress. Real-time updates, multi-user collaboration, and streaming responses enable synchronous workflows between humans and agents. The microservices design and configurable deployment let organizations run locally or scale components independently, and the MCP interface makes it straightforward to connect a variety of agent clients and LLM providers.
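
To give a flavor of the agents side, here is a minimal PydanticAI agent with a stubbed knowledge-base tool. The model string, prompt, and tool body are placeholders for illustration, not Archon's actual agent code, and the result.output accessor assumes a recent pydantic-ai release.

```python
# Minimal PydanticAI agent sketch. The tool is a stub standing in for a real
# knowledge-base lookup; the model string is one of several providers
# PydanticAI supports, chosen here only for illustration.
from pydantic_ai import Agent

agent = Agent(
    "openai:gpt-4o",
    system_prompt="Answer coding questions using the knowledge-base tool.",
)


@agent.tool_plain
def search_knowledge_base(query: str) -> str:
    """Look up relevant documentation chunks for the query."""
    # Placeholder: a real implementation would call the RAG service here.
    return f"(no results stubbed for: {query})"


result = agent.run_sync("How do I enable hybrid search?")
print(result.output)
```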
