Basic Information

AI Manus is a general-purpose AI agent system designed to create, run and manage agents that execute tools and operations inside isolated, per-task Docker sandboxes. It provides a web frontend, a backend server and a sandbox service. Together these let a user or developer create an Agent session, start an Ubuntu-based sandbox that runs a headless Chrome instance and tool APIs, and forward user messages to a PlanAct Agent for task planning and execution. The system routes events back to the web UI via server-sent events and supports OpenAI-compatible LLM providers with FunctionCall and JSON output. The README and deployment guide focus on running the stack with Docker Compose, using MongoDB/Redis for session history and background tasks, and configuring authentication, the search provider and sandbox behavior.
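The server-sent-events channel mentioned above follows the standard SSE wire format (`event:`/`data:` lines separated by blank lines). A minimal sketch of how a client might parse such a stream, assuming hypothetical event names ("plan", "tool") rather than the exact events AI Manus emits:

```python
import json

def parse_sse(raw: str):
    """Parse a server-sent-events stream into (event, data) pairs.

    The event names used in the sample below ("plan", "tool") are
    illustrative placeholders, not AI Manus's documented event types.
    """
    events = []
    event_type, data_lines = "message", []
    for line in raw.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":  # a blank line terminates one event
            if data_lines:
                events.append((event_type, json.loads("\n".join(data_lines))))
            event_type, data_lines = "message", []
    return events

sample = (
    "event: plan\n"
    'data: {"step": "open browser"}\n'
    "\n"
    "event: tool\n"
    'data: {"name": "terminal", "output": "ok"}\n'
    "\n"
)
for kind, payload in parse_sse(sample):
    print(kind, payload)
```

A real client would read these lines incrementally from the backend's HTTP response rather than from a string.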

App Details

Features
Deployment is minimal, requiring only Docker and a compatible LLM service. Supported tools include Terminal, Browser, File access, Web Search and messaging, with real-time viewing and takeover. Sandboxes are created per task via the Docker socket and run an Ubuntu environment with Chrome, VNC (xvfb and x11vnc) and NoVNC via websockify for in-browser viewing. The PlanAct Agent orchestrates tool calls, while session history and background tasks use MongoDB or Redis. Conversations support stopping and interrupting as well as file upload/download. The project is multilingual (Chinese and English), supports authentication options (password/local/none) and JWT, and allows external MCP tool integration and configurable search providers (baidu/google/bing).
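Since the PlanAct Agent orchestrates tools through FunctionCall, the tools above would be exposed to the LLM as function schemas and model-emitted calls routed to implementations. A minimal sketch with stub handlers; the tool names, parameters and return strings are illustrative assumptions, not AI Manus's actual API:

```python
import json

# Hypothetical tool registry showing how a PlanAct-style agent could
# advertise its Terminal and Web Search tools to a FunctionCall-capable
# LLM. Names and parameter schemas are assumptions for illustration.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "terminal",
            "description": "Run a shell command inside the task sandbox.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search the web via the configured provider.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
]

def dispatch(call: dict) -> str:
    """Route a model-emitted function call to a stub tool implementation."""
    name = call["name"]
    args = json.loads(call["arguments"])
    if name == "terminal":
        # Stub: the real system would execute this inside the Docker sandbox.
        return f"ran: {args['command']}"
    if name == "web_search":
        return f"searched: {args['query']}"
    raise ValueError(f"unknown tool: {name}")

print(dispatch({"name": "terminal", "arguments": '{"command": "ls"}'}))
```

In the real system each handler would proxy to the sandbox service instead of returning a stub string.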
Use Cases
AI Manus helps developers and teams build and run agents that need safe, reproducible access to tools and a real browser environment. By isolating each task in its own Docker sandbox, it reduces interference between sessions and enables live debugging through VNC/NoVNC browser viewing and takeover. The system preserves session history and supports background tasks, making long-running, multi-step workflows manageable. Its Docker Compose deployment, development scripts and image-publishing instructions simplify both local development and production deployment. Compatibility with OpenAI-style APIs, FunctionCall and JSON output makes it adaptable to many LLM providers, while configurable authentication, search providers and MCP integration allow it to fit into existing workflows and controlled multi-tool experiments.
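The OpenAI-style compatibility means the backend can target any provider that accepts Chat Completions requests with tool use and JSON-mode output. A sketch of the kind of request body involved, assuming a placeholder model name and system prompt rather than the project's actual defaults:

```python
def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible Chat Completions request body.

    The model name and system prompt are placeholders; any provider
    supporting FunctionCall and JSON output should accept this shape.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a task-planning agent."},
            {"role": "user", "content": user_message},
        ],
        "response_format": {"type": "json_object"},  # request JSON-mode output
        "tool_choice": "auto",  # let the model decide when to call tools
    }

req = build_chat_request("gpt-4o-mini", "List files in /tmp")
print(req["response_format"]["type"])
```

The same body works against any OpenAI-compatible endpoint by changing the base URL, which is what makes the system provider-agnostic.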