Basic Information

AgenticSeek is an open-source, user-facing autonomous AI assistant designed as a 100% local alternative to Manus AI. It is built to run entirely on the user's hardware, so models, speech, files, and browsing actions remain private and do not depend on cloud services. The project bundles a web interface and a CLI, Docker Compose services (frontend, backend, searxng, redis), and an optional self-hosted LLM server, so users can run LLMs locally with providers such as Ollama or LM Studio, or connect to external APIs if desired. The README documents prerequisites, configuration via config.ini, provider and model selection, and how to start the services. It targets tasks such as autonomous web browsing, code generation and execution, multi-step task planning, and file system interaction, while prioritizing privacy, local model usage, and modular configuration for different hardware capabilities.
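The provider and model selection mentioned above is driven by config.ini. The fragment below is an illustrative sketch only: the key names and values are assumptions based on this description (Ollama's default port, a sample model tag), not copied from the project's documentation.

```ini
; Hypothetical config.ini sketch -- key names are illustrative;
; consult the project's README for the authoritative options.
[MAIN]
is_local = True                   ; use a local provider instead of a cloud API
provider_name = ollama            ; e.g. ollama, lm-studio, or an OpenAI-compatible server
provider_model = deepseek-r1:14b  ; any model tag the chosen provider serves
provider_server_address = 127.0.0.1:11434
work_dir = /home/user/workspace   ; folder the agents may read and write
```

Switching between local and cloud operation would then be a matter of flipping the provider entries rather than changing code, which matches the "modular configuration" goal described above.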

App Details

Features
AgenticSeek offers fully local operation and privacy by default, supporting local LLM providers (ollama, lm-studio, local OpenAI-compatible servers) as well as optional cloud APIs. It can autonomously browse the web to search, read, and extract information, and it offers experimental form filling. The system includes an autonomous coding assistant able to write, debug, and run programs in multiple languages; a planner that breaks complex tasks into steps and routes the work to specialized agents; and a smart agent selection system that picks the best agent for each job. Voice features include text-to-speech and experimental speech-to-text in CLI mode with a wake word. The repository supplies Dockerized services, a CLI and web UI, detailed config.ini options, troubleshooting guidance, and scripts to start and manage the services.
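Because the backend can target any local OpenAI-compatible server, the wire format involved can be sketched with the Python standard library alone. The endpoint path is the standard OpenAI chat-completions route that servers such as Ollama and LM Studio also expose; the host, port, and model name below are assumptions for illustration, not AgenticSeek internals.

```python
import json
from urllib import request


def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build a chat-completion request for an OpenAI-compatible server.

    /v1/chat/completions is the OpenAI wire format that local servers
    such as Ollama and LM Studio also implement.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example: target a local Ollama instance (default port 11434).
req = build_chat_request("http://127.0.0.1:11434", "deepseek-r1:14b", "hello")
```

Only the request object is constructed here; actually sending it requires the local server to be running. The point is that "local OpenAI-compatible" means the same request shape works unchanged against a cloud API or a machine on your own network.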
Use Cases
AgenticSeek helps users automate research, development, and everyday tasks without sending data to external services. It can run web searches and save the results, generate and run code (examples include Python and Go scripts), rename and manage files in a local workspace, plan and execute multi-step projects such as trip planning or data scraping, and act as a voice-enabled assistant in CLI mode. For advanced users, it supports running heavy LLMs on a separate server, or locally via Ollama or LM Studio, to leverage larger models while keeping control of data and costs. The README includes troubleshooting guidance, hardware recommendations, and examples so users can deploy, configure, and extend the system for private autonomous agent workflows.