agent service toolkit

Basic Information

AI Agent Service Toolkit is a template and reference implementation for building, running, and deploying LangGraph-based AI agents. It bundles an agent definition, a FastAPI service that exposes streaming and non-streaming endpoints, a Python client for invoking agents, and a Streamlit chat interface. Data models and configuration are built with Pydantic. The repo includes example agents, a RAG assistant example using ChromaDB, content moderation via LlamaGuard, and integration points for LangSmith tracing. It provides Docker Compose with a PostgreSQL service for full-stack local development, instructions for running locally without Docker, and provider-specific setup guidance for backends such as Ollama and VertexAI. The toolkit is intended as a starting point: customize agents, serve multiple agents selected by URL path, and experiment with LangGraph Studio for interactive agent development.
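To make the client/service split concrete, here is a minimal sketch of a client wrapper for such a service. The base URL, default agent name, and the /{agent}/invoke and /{agent}/stream endpoint paths are assumptions for illustration; the toolkit's own AgentClient is the real interface and may differ.

```python
import json
import urllib.request


class AgentClient:
    """Sketch of a client for an agent service (illustrative, not the toolkit's API)."""

    def __init__(self, base_url: str = "http://localhost:8080",
                 agent: str = "research-assistant"):
        # Normalize the base URL so path joining is predictable.
        self.base_url = base_url.rstrip("/")
        self.agent = agent

    def _url(self, endpoint: str) -> str:
        # Agents are addressed by URL path: /{agent}/{endpoint}
        return f"{self.base_url}/{self.agent}/{endpoint}"

    def invoke(self, message: str, timeout: float = 60.0) -> dict:
        # POST a single user message and return the parsed JSON response.
        payload = json.dumps({"message": message}).encode()
        req = urllib.request.Request(
            self._url("invoke"),
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read())


client = AgentClient()
print(client._url("invoke"))
```

A non-default agent is selected simply by constructing the client with a different agent name, which changes the path prefix of every request.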

App Details

Features
- A customizable LangGraph agent implementation leveraging LangGraph v0.3 features such as human-in-the-loop interrupt(), flow control with Command, long-term memory with Store, and langgraph-supervisor.
- A FastAPI service supporting both streaming and non-streaming API endpoints, with a novel approach to token- and message-based streaming.
- A Streamlit-based chat UI and a generic AgentClient that supports synchronous/asynchronous and streaming/non-streaming calls.
- A basic RAG agent demo using ChromaDB.
- Content moderation via LlamaGuard (requires a Groq key).
- Feedback integration with LangSmith.
- Async design for concurrency.
- Docker Compose with Postgres.
- A test suite with unit and integration tests.
- Project files organized into agents, schema, core, service, client, and tests folders.
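The token-streaming idea above can be illustrated with a plain asyncio generator that frames each token as a server-sent event. The event shape and the terminal [DONE] sentinel are invented for illustration and are not the service's actual wire format.

```python
import asyncio
import json
from typing import AsyncIterator


async def token_stream(text: str) -> AsyncIterator[str]:
    # Stand-in for an LLM emitting tokens one at a time.
    for token in text.split():
        await asyncio.sleep(0)  # yield control, as a real model call would
        yield token


async def sse_events(text: str) -> list[str]:
    # Wrap each token in a server-sent-events frame, then signal completion.
    events = []
    async for token in token_stream(text):
        events.append(f"data: {json.dumps({'type': 'token', 'content': token})}\n\n")
    events.append("data: [DONE]\n\n")
    return events


frames = asyncio.run(sse_events("hello streaming world"))
print(len(frames))  # 3 token frames + 1 terminal [DONE] frame = 4
```

In a real FastAPI service, a generator like this would typically be handed to a streaming response so the UI can render tokens as they arrive rather than waiting for the full message.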
Use Cases
The repository saves developers time by providing a full, working agent service stack and examples to extend. It demonstrates end-to-end patterns from agent definition to a web UI and client library, so you can focus on custom behaviors instead of plumbing. Docker Compose and local dev instructions simplify environment setup and enable live code reloading for iterative development. Built-in examples and docs show how to add new agents, enable RAG, configure provider credentials, and use LangGraph Studio. The AgentClient enables building additional frontends or integrations. The project also includes testing guidance, CI badges, and a structure for credential handling, making it easier to maintain, test, and deploy LangGraph-based agents in development and production-like environments.
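Adding new agents served by URL path, as described above, can be sketched as a registry keyed by path segment. The agent names and reply strings here are invented for illustration; in the toolkit each entry would wrap a compiled LangGraph rather than a plain function.

```python
from typing import Callable


# Hypothetical agent callables; real ones would invoke a compiled LangGraph.
def chatbot(message: str) -> str:
    return f"chatbot reply to: {message}"


def rag_assistant(message: str) -> str:
    return f"rag answer for: {message}"


# Registry keyed by URL path segment, so multiple agents share one service.
AGENTS: dict[str, Callable[[str], str]] = {
    "chatbot": chatbot,
    "rag-assistant": rag_assistant,
}


def dispatch(path: str, message: str) -> str:
    # e.g. "/chatbot/invoke" selects the agent registered as "chatbot".
    agent_id = path.strip("/").split("/")[0]
    agent = AGENTS.get(agent_id)
    if agent is None:
        raise KeyError(f"unknown agent: {agent_id}")
    return agent(message)


print(dispatch("/chatbot/invoke", "hi"))
```

Registering a new agent is then just one more dictionary entry, which matches the listing's claim that the structure makes agents easy to add and maintain.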
