Basic Information

AgentScope is a developer-focused framework for building LLM-powered, agent-oriented applications and multi-agent workflows. It provides a transparent, modular programming model that exposes prompt engineering, API invocation, agent logic, tool integration, workflow orchestration, and state management so developers can see and control each step. The project targets use cases where multiple agents interact explicitly through message passing, supporting both synchronous and asynchronous execution patterns. It is model-agnostic so code can run with different LLM backends. The repository includes libraries for models, tools, memory, tracing, sessions, prompt formatting, evaluation and a local Studio for tracing and visualization. It is published as a Python package requiring Python 3.10 or higher and can be installed from source or via PyPI.
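The explicit message-passing model described above can be sketched with a minimal, framework-agnostic example. The `Msg` and `EchoAgent` names here are illustrative stand-ins, not AgentScope's actual API:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for an agent framework's message and agent types;
# names are illustrative, not AgentScope's actual API.
@dataclass
class Msg:
    name: str     # sender identity
    content: str  # message payload

class EchoAgent:
    """A trivial agent that replies to whatever message it receives."""
    def __init__(self, name: str):
        self.name = name

    def reply(self, msg: Msg) -> Msg:
        # Each interaction is an explicit, inspectable Msg handoff.
        return Msg(name=self.name, content=f"{self.name} received: {msg.content!r}")

alice = EchoAgent("alice")
bob = EchoAgent("bob")

# Orchestration is visible in code: user -> alice -> bob.
out = bob.reply(alice.reply(Msg(name="user", content="hello")))
print(out.content)
```

The point of the sketch is that nothing is hidden: every hop between agents is a plain function call carrying a message object, which is the transparency property the description emphasizes.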

App Details

Features
The README documents modular components with concrete capabilities. The model module supports async invocation, reasoning models, streaming and non-streaming returns, and tool-calling APIs. The tool system supports async and sync tool functions, streaming returns, user interruption, post-processing, and group-wise tool management. The MCP layer supports streamable transports such as HTTP, SSE, and stdio. Agents support async execution, parallel tool calls, real-time steering and interruption, customized interruption handling, automatic state management, agent-controlled long-term memory, and hooks. The stack also includes tracing for models, agents, and tools with third-party connectors; in-memory and long-term memory modules; session and state management; distributed and parallel evaluation; multi-agent prompt formatting; and AgentScope Studio for visualization.
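The pattern of registering sync and async tool functions side by side, as the tool system described above supports, can be sketched as follows. The `Toolkit` class and its method names are hypothetical illustrations, not AgentScope's actual API:

```python
import asyncio
import inspect

class Toolkit:
    """Hypothetical registry that accepts both sync and async tool functions."""
    def __init__(self):
        self._tools = {}

    def register(self, func):
        # Store the callable under its function name for lookup by the agent.
        self._tools[func.__name__] = func
        return func

    async def call(self, name: str, **kwargs):
        func = self._tools[name]
        result = func(**kwargs)
        # Normalize: await coroutine results so callers treat all tools uniformly.
        if inspect.isawaitable(result):
            result = await result
        return result

toolkit = Toolkit()

@toolkit.register
def add(a: int, b: int) -> int:
    return a + b

@toolkit.register
async def shout(text: str) -> str:
    await asyncio.sleep(0)  # stand-in for real async I/O
    return text.upper()

res_sync = asyncio.run(toolkit.call("add", a=2, b=3))
res_async = asyncio.run(toolkit.call("shout", text="done"))
print(res_sync, res_async)
```

Normalizing at the call site is one common way a framework can hide the sync/async distinction from agent code; the real implementation may differ.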
Use Cases
AgentScope helps developers build, test, and operate complex multi-agent LLM applications by offering transparent, LEGO-style components that reduce hidden behavior and make orchestration explicit. Real-time steering and interruption let developers adjust agent behavior during execution. The model-agnostic design lets the same code run against different LLM backends. Built-in memory, session management, tracing, and evaluation tooling simplify building stateful agents, observability, and large-scale testing. The toolkit and examples demonstrate integrating tool functions such as executing Python code or shell commands. Extensive documentation, tutorials, and a Studio UI assist onboarding and debugging. The project is open source under the Apache 2.0 license, so teams can extend its modules for production workflows.
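The model-agnostic design mentioned above is commonly achieved with an adapter pattern: application code targets one chat interface, and backends are swapped without touching agent logic. All class names here are hypothetical stubs, not AgentScope's actual model classes:

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """Common interface; agent code depends only on this abstraction."""
    @abstractmethod
    def chat(self, prompt: str) -> str: ...

class FakeCloudModel(ChatModel):
    # Hypothetical backend stub; a real adapter would call a provider SDK.
    def chat(self, prompt: str) -> str:
        return f"[cloud] {prompt}"

class FakeLocalModel(ChatModel):
    # Hypothetical stub for a locally hosted model.
    def chat(self, prompt: str) -> str:
        return f"[local] {prompt}"

def run_agent(model: ChatModel, task: str) -> str:
    # The same agent code runs unchanged against any backend.
    return model.chat(f"Solve: {task}")

for backend in (FakeCloudModel(), FakeLocalModel()):
    print(run_agent(backend, "2+2"))
```

Because `run_agent` sees only the `ChatModel` interface, switching providers is a one-line configuration change rather than a rewrite, which is the portability benefit the description claims.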
