Basic Information

NagaAgent is an extensible multi-agent AI assistant project that provides a full local runtime for building, running and managing conversational agents and MCP services. The repository bundles a PyQt5 desktop interface, a FastAPI RESTful server with Server-Sent Events support, a GRAG knowledge-graph memory layer backed by Neo4j, and a modular MCP/Agent runtime that dynamically discovers, registers and hot-reloads agent implementations. It supports multiple LLM providers and models through configurable API integrations, streaming voice I/O with Edge-TTS and PyAudio, browser automation via Playwright, and standardized agent manifests for interoperable agents. The codebase includes startup scripts for Windows, macOS and Linux, environment checks, example agent configuration formats and tools to import historical logs into the knowledge graph. The project aims to run as a local, multi-platform assistant with session isolation, lifecycle management and a unified tool-calling loop.
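The agent manifest can be thought of as a small declarative description per agent that the runtime loads, validates and hot-reloads. As a rough illustration only, such a manifest might carry fields along the following lines; every field name and value in this sketch is an assumption for illustration, not the project's actual schema.

    # Hypothetical agent manifest expressed as a Python dict; all field names
    # below are illustrative assumptions, not NagaAgent's actual schema.
    example_manifest = {
        "name": "file_agent",                      # unique agent identifier
        "displayName": "File Agent",               # label shown in the chat UI
        "description": "Reads and writes files on the local machine.",
        "modelProvider": "openai",                 # which configured LLM backend to use
        "model": "gpt-4o-mini",
        "systemPrompt": "You are a careful file-handling assistant.",
        "capabilities": ["file.read", "file.write"],  # tools advertised to the registry
    }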

App Details

Features
NagaAgent exposes a set of platform and developer features focused on multi-agent orchestration and rich interaction:
- A modular AgentManager with dynamic configuration loading, validation and hot reload of agent JSON configs and agent-manifest files.
- An MCP registry with service discovery, capability search, tool listing, and APIs to query services and statistics.
- A conversation core implementing a tool-calling loop that routes JSON-formatted handoff requests to the MCP or agent managers (see the sketch below).
- A GRAG memory subsystem that extracts quintuple entities, stores and retrieves context from Neo4j, and visualizes knowledge graphs.
- A PyQt5 chat UI with markdown rendering, system tray/background mode and theming.
- Streaming voice synthesis and recognition.
- RESTful APIs with Swagger docs and SSE streaming.
- Cross-platform install and start scripts.
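To make the tool-calling loop concrete, the following minimal sketch shows how a JSON-formatted handoff emitted by the LLM could be routed to either an MCP service or another agent. The handoff field names ("agentType", "service_name", "tool_name", "agent_name") and the manager methods are assumptions for illustration, not the project's actual interfaces.

    import json

    def route_handoff(llm_output: str, mcp_manager, agent_manager):
        """Hypothetical sketch of routing a JSON handoff produced by the LLM.

        Field names and manager methods are illustrative assumptions only.
        """
        try:
            handoff = json.loads(llm_output)
        except json.JSONDecodeError:
            return llm_output  # not a tool call; treat as a normal reply

        target = handoff.get("agentType")
        if target == "mcp":
            # Forward to an MCP service by name with the requested tool and arguments.
            return mcp_manager.call(
                service=handoff.get("service_name"),
                tool=handoff.get("tool_name"),
                arguments=handoff.get("args", {}),
            )
        if target == "agent":
            # Hand the task to another registered agent for multi-agent collaboration.
            return agent_manager.dispatch(
                agent=handoff.get("agent_name"),
                task=handoff.get("prompt", ""),
            )
        return llm_output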
Use Cases
For developers and power users, this repository supplies ready-made infrastructure for building, integrating and operating specialized conversational agents and tool services. It reduces boilerplate by standardizing agent manifests, providing agent lifecycle and session management, and offering unified JSON handoff formats so LLM outputs can trigger tool execution and multi-agent collaboration. The GRAG knowledge graph enables persistent, structured memory and similarity-based retrieval for multi-turn context. Built-in APIs, service discovery and example agent templates make it easier to add custom agents for tasks such as file operations, code execution, browser automation or domain-specific assistants. The desktop UI, system tray support and streaming voice make it practical to deploy locally across Windows, macOS and Linux, while configuration examples and scripts simplify installation and Neo4j integration.
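As a rough example of operating a local deployment over the RESTful API, the snippet below streams a chat reply via SSE. The port, endpoint path and payload fields are assumptions; the actual routes should be taken from the server's Swagger documentation.

    import requests

    # Hypothetical endpoint and payload; check the running server's Swagger docs
    # for the real route names and request schema.
    resp = requests.post(
        "http://127.0.0.1:8000/chat/stream",
        json={"message": "Summarize my recent notes", "stream": True},
        stream=True,
        timeout=60,
    )
    for line in resp.iter_lines(decode_unicode=True):
        # SSE frames arrive as "data: ..." lines; blank keep-alive lines are skipped.
        if line and line.startswith("data: "):
            chunk = line[len("data: "):]
            if chunk == "[DONE]":
                break
            print(chunk, end="", flush=True)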
