Basic Information

AgentLLM is a research-oriented proof of concept that demonstrates running autonomous, goal-oriented language agents entirely in the browser using embedded open-source large language models. The project adapts the AgentGPT workflow to a local, browser-native LLM backend rather than remote APIs, showing that models can perform iterative task-generation and execution loops on client hardware. It provides a sandboxed environment without external tool integrations, enabling controlled experiments in agent planning and task decomposition. The README stresses that the project is experimental and not production-ready, points to a live demo, and notes that a modern Chromium-based browser with WebGPU support is required. The repository includes instructions for running the demo, Docker and docker-compose options, local development setup, and a manual installation path covering Node.js, environment variables, and Prisma-based database setup.
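
Because the project depends on WebGPU, a page can probe for support before attempting to load a model. The sketch below uses the standard `navigator.gpu` detection; the function name and the returned `reason` strings are illustrative, not taken from the repository:

```javascript
// Probe for WebGPU support before attempting to load an in-browser model.
// In non-browser contexts (e.g. Node) there is no `navigator.gpu`, so guard it.
async function detectWebGPU() {
  if (typeof navigator === "undefined" || !("gpu" in navigator)) {
    return { supported: false, reason: "navigator.gpu is unavailable" };
  }
  try {
    // requestAdapter() resolves to null when no suitable GPU is found.
    const adapter = await navigator.gpu.requestAdapter();
    return adapter
      ? { supported: true, reason: "adapter acquired" }
      : { supported: false, reason: "no adapter available" };
  } catch (err) {
    return { supported: false, reason: String(err) };
  }
}
```

In a browser without WebGPU enabled, the probe reports unsupported and the app can fall back to an explanatory message instead of failing while loading model weights.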

App Details

Features
The repository integrates a browser-embedded LLM stack built on WebLLM and WebGPU, running inference in Chromium-based browsers for better performance than CPU-only approaches. It adapts the AgentGPT interface and agent loop to use WizardLM as the model with a modified prompt mechanism, producing a GUI-focused sandbox for agent experiments. Agents deliberately use no external tools, which simplifies reproducibility and keeps behavior predictable. Distribution and deployment conveniences include a live demo, a setup script supporting Docker, docker-compose, and local modes, GitHub Codespaces instructions, and detailed manual setup steps covering Node.js v18+, .env variables, NextAuth configuration, Prisma database commands, and optional SQLite setup. The README also carries a clear disclaimer about experimental status and hardware requirements.
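
The AgentGPT-style loop — break a goal into tasks, execute them one at a time, and fold results back into planning — can be sketched independently of the browser runtime. In this minimal sketch, `model` stands in for the WebLLM-backed WizardLM call, and the prompts and function names are illustrative, not the repository's actual prompt mechanism:

```javascript
// Minimal AgentGPT-style loop: plan tasks, execute them FIFO, and fold
// results back into planning. `model` stands in for the in-browser LLM call.
async function runAgent(goal, model, maxIterations = 3) {
  const plan = await model(`Break the goal "${goal}" into tasks, one per line.`);
  let tasks = plan.split("\n").filter(Boolean);
  const results = [];
  for (let i = 0; i < maxIterations && tasks.length > 0; i++) {
    const task = tasks.shift();
    const result = await model(`Execute this task: ${task}`);
    results.push({ task, result });
    // Iterative task generation: ask the model for follow-up tasks.
    const followUp = await model(
      `Given the result "${result}", list new tasks one per line, or reply DONE.`
    );
    if (followUp.trim() !== "DONE") {
      tasks = tasks.concat(followUp.split("\n").filter(Boolean));
    }
  }
  return results;
}

// Deterministic stub standing in for the in-browser model, for illustration.
const stubModel = async (prompt) => {
  if (prompt.startsWith("Break")) return "task A\ntask B";
  if (prompt.startsWith("Execute")) return "ok";
  return "DONE";
};
```

Because the agents use no external tools, the whole loop reduces to prompt exchanges like these, which is what makes the sandbox reproducible.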
Use Cases
AgentLLM helps researchers and developers prototype and evaluate autonomous agent behavior in a privacy-preserving, client-side environment, allowing experiments without routing data to remote servers. It provides a ready-made sandbox combining a familiar AgentGPT-style agent loop with a browser-native model runtime, so users can observe task breakdown, planning, and iterative execution while minimizing external dependencies and tool-induced variability. The included setup script and the Docker and Codespaces instructions lower the barrier to reproducing the demo, and the manual steps document the environment variables and database setup required for local development. The project is useful for exploring the feasibility, limitations, and performance trade-offs of running LLM-driven agents in-browser, but it is explicitly marked as experimental and not intended for production use.
