Basic Information

Nexent is a zero-code platform and open-source SDK for auto-generating intelligent agents from plain language. It is built on the MCP tool ecosystem and aims to let users describe workflows in natural language and produce runnable agents without manual orchestration or drag-and-drop builders. The repository provides a full platform with built-in agents for scenarios such as work, travel, research, and daily life, plus orchestration capabilities including agent run control, multi-agent collaboration, data processing, knowledge tracing, multimodal dialogue, and batch scaling. The project includes Docker Compose deployment instructions with minimum system prerequisites, a developer guide for building from source, and links to model-provider and MCP integration guidance. The repo targets both non-technical users who want to create agents from prompts and developers who want to extend the system with MCP-compatible tools and Python plugins.

App Details

Features
Nexent documents a set of core platform features that enable prompt-driven agent creation and scalable data handling. It automatically generates smart-agent prompts and selects appropriate tools and action plans for each request. The platform includes a scalable data-processing engine with OCR and table extraction across many data formats, and it supports large-batch pipelines. It offers a personal-grade knowledge base with real-time import and auto-summarization, internet knowledge search via multiple providers, and knowledge-level traceability with source citations. Nexent supports multimodal understanding and dialogue across voice, text, files, and images, and it can also generate images. The MCP tool ecosystem allows drop-in or custom Python plugins that follow the MCP spec, enabling models, tools, and chains to be swapped without changing core code.
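To illustrate what an MCP-compatible plugin can look like, here is a minimal sketch built with the official MCP Python SDK's FastMCP helper. The server name, tool, and conversion logic are hypothetical examples, not part of Nexent itself; the project's MCP integration guide describes how the platform actually discovers and connects to such servers.

```python
# Minimal sketch of a custom MCP tool server (hypothetical example).
# Uses the official MCP Python SDK; names and logic are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("currency-tools")  # hypothetical server name


@mcp.tool()
def convert_currency(amount: float, rate: float) -> float:
    """Convert an amount using a caller-supplied exchange rate."""
    return round(amount * rate, 2)


if __name__ == "__main__":
    # Serve over stdio so an MCP-aware host can launch and call the tool.
    mcp.run()
```

Because the tool is exposed through the MCP spec rather than Nexent-specific code, it can in principle be swapped in or out without touching the platform's core, which is the point of the ecosystem described above.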
Use Cases
Nexent helps teams and individuals quickly build and operate intelligent agents without writing orchestration code. By converting plain language into runnable agent prompts and selecting appropriate tools automatically, it reduces development time and lowers the barrier to creating multi-step, multimodal services. Its data engine and OCR make it easier to ingest diverse documents and scale processing from single tasks to batch pipelines. The personal knowledge base and internet search integration let agents answer with context and citations, improving trust and traceability. Developers benefit from MCP plugin support and documentation to customize models and tools, while operators can deploy via Docker Compose using provided scripts and configuration examples. The project also offers community resources, guides, and a feature roadmap to support adoption and contribution.
