Agentic Radar

Basic Information

Agentic Radar is a security scanner and analysis tool for agentic workflows and multi-agent systems. It is designed for developers, researchers, and security professionals who need to inspect how agentic systems function and identify security and operational issues. The tool statically analyzes code from supported frameworks to produce a comprehensive HTML report that includes a visual workflow graph, a list of external and custom tools, detected MCP servers, and a mapping of identified tools to known vulnerabilities. It supports runtime testing for certain frameworks and can harden detected system prompts by invoking an LLM when an API key is provided. The project is packaged as a Python CLI installable via pip and provides examples, a web-based visualizer, and guidance for integration into CI/CD pipelines. The focus is on producing reviewable artifacts and actionable security findings for agentic systems.
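A minimal sketch of installing the CLI and running a static scan, based on the description above; the package name, framework argument, and flags are assumptions and may differ by version:

```shell
# Install the CLI from PyPI (package name assumed from the project description)
pip install agentic-radar

# Statically scan a project built with a supported framework and write the
# HTML report; the framework argument, input path, and output path shown
# here are illustrative placeholders
agentic-radar scan langgraph -i ./my_agent_project -o report.html
```

The resulting HTML report is a self-contained artifact that can be opened locally or shared for review.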

App Details

Features
Agentic Radar offers static scanning and runtime testing commands exposed via a CLI. The scan command parses agentic code from several frameworks and generates an HTML report containing a workflow visualization, tool identification, MCP server detection, and vulnerability mapping. The test command runs runtime adversarial tests against supported frameworks and prints rich, tabular results including the agent name, test type, injected input, agent output, pass/fail status, and an explanation. Advanced features include Agentic Prompt Hardening, which improves detected system prompts using an LLM when an API key is set; configurable custom test suites defined in YAML; and a provided GitHub Actions workflow example for CI/CD integration. Supported frameworks listed on the roadmap include OpenAI Agents, CrewAI, n8n, LangGraph, and Autogen, with feature support varying by framework.
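The runtime testing and Prompt Hardening flow described above might look like the following sketch; the subcommand form, framework name, and environment variable are assumptions drawn from the feature description, not a confirmed interface:

```shell
# Prompt Hardening and LLM-backed features activate when an API key is
# available, e.g. via an environment variable (variable name assumed)
export OPENAI_API_KEY="sk-..."

# Run runtime adversarial tests (prompt injection, PII leakage, etc.)
# against a supported framework; framework argument is illustrative
agentic-radar test openai-agents
```

Results print to the terminal as a table of agent name, test type, injected input, agent output, pass/fail status, and explanation, which keeps triage readable without opening the HTML report.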
Use Cases
The tool helps teams gain security and operational insights into agentic workflows by automating discovery and reporting of potential weaknesses. It centralizes findings in a shareable HTML report and visual workflow graph to make it easier to understand complex multi-agent interactions. Runtime tests simulate real-world adversarial inputs to reveal prompt injection, PII leakage, harmful content generation, and fake news risks for supported frameworks. Prompt Hardening offers a way to improve system prompts automatically using LLM assistance when configured. The scanner runs locally for static analysis to preserve source privacy and supports CI/CD automation so scans run on repository changes. Configurable tests and clear terminal output make vulnerability triage and remediation planning more straightforward.
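For CI/CD automation, the shell commands a pipeline step might execute could be sketched as follows; the paths, flags, and framework argument are illustrative assumptions, and the project's own GitHub Actions example should be preferred where available:

```shell
# Example CI step: install the scanner, run the static scan on the
# checked-out repository, and keep the report for artifact upload
pip install agentic-radar
agentic-radar scan crewai -i . -o agentic-radar-report.html
# agentic-radar-report.html can then be uploaded as a build artifact
# so findings are reviewable on each repository change
```

Because the static scan runs entirely locally in the CI runner, source code does not need to leave the build environment.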
