auto-inspector

Basic Information

Auto Inspector is an open-source autonomous AI agent that tests websites by executing user stories and producing test reports. It is published under the Apache 2.0 license and developed by the Magic Inspector team, whose stated goal is to change how web testing is done. The project offers both a GUI web application (run via Docker and Docker Compose) and a CLI backend (requiring Node.js v20+), letting users write tests as human-readable user stories that the agent interprets and runs. The README describes demo recordings, example commands, and a roadmap of improvements. The repository requires an OpenAI API key for its LLM-driven behaviors and includes convenience commands for running example scenarios. The project is in active development; the maintainers offer a hosted cloud version and enterprise support while the open-source project matures.
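For orientation, the pieces described above combine into a short trial session. This is a sketch based on the listing, not verified setup documentation: the OPENAI_API_KEY variable name and the bare docker compose up invocation are conventional assumptions rather than confirmed details.

    # GUI: bring up the Dockerized web app (assumes a standard compose setup)
    docker compose up

    # CLI: requires Node.js v20+; the LLM-driven behaviors need an OpenAI key.
    # OPENAI_API_KEY is the usual convention and is assumed here.
    export OPENAI_API_KEY="sk-..."
    npm install
    npm run example:voyager   # run a predefined example user story
    npm run scenario          # run a custom scenario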

Features
- Autonomous test generation and execution from plain user stories interpreted by an LLM.
- Dual delivery modes: a Dockerized GUI for easy trial and a Node.js CLI for development and automation.
- Example commands and scripts, such as npm run example:voyager and npm run scenario, for running predefined or custom user stories.
- Variable and secret handling, with measures to avoid leaking secrets in logs or to the LLM (see the sketch after this list).
- Runtime safeguards such as waiting for domContentLoaded and checking page stabilization before evaluation.
- Action-level completion management to prevent redundant interactions.
- Support for running multiple cases from a single test file.
- Planned features: tab management, persistence of screenshots and results, real-time frontend step display, unit tests, and an OpenAI YAML spec generator.
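The secret-handling feature is worth illustrating. The TypeScript sketch below shows one common masking approach: secret values are replaced with opaque placeholders before any text reaches logs or the LLM, and resolved only at the moment an action executes. All names here (maskSecrets, resolveSecrets, the {{SECRET:...}} placeholder syntax) are hypothetical, not Auto Inspector's actual API.

    type Secrets = Record<string, string>;

    // Replace literal secret values with opaque placeholders before the text
    // is logged or sent to the model.
    function maskSecrets(text: string, secrets: Secrets): string {
      let masked = text;
      for (const [name, value] of Object.entries(secrets)) {
        masked = masked.split(value).join(`{{SECRET:${name}}}`);
      }
      return masked;
    }

    // Substitute the real values only when an action executes, e.g. when
    // typing into a password field, so the model never sees them.
    function resolveSecrets(text: string, secrets: Secrets): string {
      return text.replace(/\{\{SECRET:(\w+)\}\}/g, (m: string, name: string) =>
        secrets[name] ?? m,
      );
    }
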
Use Cases
Auto Inspector saves testers and developers time by automating the execution of web tests derived from human-readable user stories. It centralizes test planning and delegates low-level interaction and evaluation to an autonomous agent, reducing the manual scripting of UI steps. The tool provides both an approachable GUI for experimentation and a CLI for integration and extension, so teams can run examples quickly or extend the agent's logic. Built-in behaviors such as secret handling, DOM stabilization checks, and action-level completion make tests more resilient. The roadmap targets improved reporting, persistence of screenshots and results, real-time step visualization, and a framework for benchmarking web-testing quality, all of which help teams validate and iterate on web application behavior.
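The DOM stabilization behavior mentioned above can be pictured with a short TypeScript sketch. Whether Auto Inspector uses Playwright is an assumption here; the function name, polling strategy, and timings below are illustrative, not the project's own code.

    import type { Page } from 'playwright';

    // Wait for domContentLoaded, then poll until the DOM stops changing for
    // one quiet interval before letting the agent evaluate the page.
    async function waitForStablePage(
      page: Page,
      quietMs = 500,
      timeoutMs = 10_000,
    ): Promise<void> {
      await page.waitForLoadState('domcontentloaded');
      const deadline = Date.now() + timeoutMs;
      let previous = '';
      while (Date.now() < deadline) {
        const current = await page.evaluate(
          () => document.body.innerHTML.length.toString(),
        );
        if (current === previous) return; // stable across the quiet window
        previous = current;
        await page.waitForTimeout(quietMs);
      }
      // Timed out: fall through and let the caller decide how to proceed.
    }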
