Basic Information

AgentDock is an open-source framework for building, configuring, and deploying AI agents with a backend-first, provider-agnostic design. It provides two main deliverables: AgentDock Core, a node-based TypeScript library that implements agent logic and tools, and an Open Source Client built with Next.js that demonstrates usage and serves as a reference implementation. The repository includes example agents, templates, documentation, and a book-style guide to agent design. The system emphasizes configurable determinism, so developers can compose deterministic workflows alongside non-deterministic LLM-driven reasoning. It targets developers who need modular, extensible infrastructure for multi-step tool orchestration, session management, provider integration, and secure API key handling. The codebase is TypeScript-first and requires Node.js and pnpm.
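
To make the node-and-registry idea concrete, the sketch below shows the general pattern in plain TypeScript. It is a conceptual illustration only; the ToolNode interface, the registry, and the runTool helper are hypothetical names and do not reflect AgentDock Core's actual exports.

```typescript
// Conceptual sketch only: these interfaces and names are illustrative stand-ins,
// not AgentDock Core's actual exports.
interface ToolNode {
  name: string;
  description: string;
  execute(input: Record<string, unknown>): Promise<string>;
}

// A deterministic tool node an agent could invoke during a conversation.
const weatherLookup: ToolNode = {
  name: 'weather_lookup',
  description: 'Returns a canned weather summary for a city.',
  async execute(input) {
    const city = String(input.city ?? 'unknown');
    return `Weather for ${city}: 18°C, partly cloudy.`;
  },
};

// A minimal registry, mirroring the pluggable-component idea.
const toolRegistry = new Map<string, ToolNode>([[weatherLookup.name, weatherLookup]]);

// Look up and run a registered tool by name.
async function runTool(name: string, input: Record<string, unknown>): Promise<string> {
  const tool = toolRegistry.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.execute(input);
}

runTool('weather_lookup', { city: 'Berlin' }).then(console.log);
```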

App Details

Features
Modular node-based architecture with BaseNode and AgentNode abstractions for building agents and custom tools. Registries for nodes, tools, and providers make components pluggable. CoreLLM provides a unified interface to multiple LLM providers and supports bring-your-own-key (BYOK) API key resolution. Orchestration mechanisms give dynamic control over tool availability and deterministic sub-workflows. Additional capabilities include session management for isolated conversation state, structured logging, robust error handling, an evaluation framework for measuring agent quality, multi-step tool calls, and a storage abstraction with pluggable key-value (KV) and vector backends (in development). The repository also includes a full Next.js reference client, preconfigured agent templates and example agents, documentation, and roadmap items for advanced memory, telemetry, and platform integrations.
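
The snippet below illustrates the provider-agnostic, BYOK idea in generic TypeScript. It is a minimal sketch under assumed names (Provider, ChatClient, resolveApiKey, and createChatClient are hypothetical), not CoreLLM's real API, and the provider call is stubbed out.

```typescript
// Illustrative only: a provider-agnostic chat interface with BYOK key resolution.
// Provider, ChatClient, resolveApiKey, and createChatClient are hypothetical names,
// not CoreLLM's real API.
type Provider = 'openai' | 'anthropic' | 'gemini';

interface ChatClient {
  complete(prompt: string): Promise<string>;
}

// Prefer a caller-supplied (bring-your-own) key, then fall back to the environment.
function resolveApiKey(provider: Provider, byokKey?: string): string {
  const key = byokKey ?? process.env[`${provider.toUpperCase()}_API_KEY`];
  if (!key) throw new Error(`No API key available for provider: ${provider}`);
  return key;
}

// A unified factory hides provider differences behind one interface.
function createChatClient(provider: Provider, byokKey?: string): ChatClient {
  const apiKey = resolveApiKey(provider, byokKey);
  return {
    async complete(prompt: string): Promise<string> {
      // A real implementation would call the provider's SDK here using apiKey.
      return `[${provider}] (key ${apiKey.slice(0, 4)}...) stubbed response to: ${prompt}`;
    },
  };
}

createChatClient('openai', 'sk-example')
  .complete('Summarize the release notes')
  .then(console.log);
```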
Use Cases
AgentDock helps teams build reliable, production-ready AI systems by offering composable building blocks and clear execution patterns. Developers can mix deterministic workflows for predictable processing with LLM-driven nodes for adaptive reasoning, enabling complex, multi-stage pipelines triggered by agent decisions. The framework reduces boilerplate through reusable nodes and registries, simplifies provider integration with a unified LLM interface, and secures keys via BYOK and encryption. Built-in session isolation and structured logging make debugging and concurrent usage easier. Example agents and templates accelerate prototyping, while the evaluation framework and roadmap items support iterative improvement, memory extensions, and eventual cloud orchestration for scaling agents in real applications.
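
As a rough illustration of mixing deterministic and LLM-driven steps, the sketch below composes the two in plain TypeScript. The step names and pipeline shape are hypothetical, and the LLM call is replaced with a stub so the example stays self-contained.

```typescript
// Illustrative composition of a deterministic step with an LLM-driven step.
// The step names and pipeline shape are hypothetical, not AgentDock APIs.
type Step<I, O> = (input: I) => Promise<O>;

// Deterministic step: normalize the raw ticket text in a predictable way.
const normalizeTicket: Step<string, { text: string }> = async (raw) => ({
  text: raw.trim().toLowerCase(),
});

// Non-deterministic step: stand-in for an LLM call that classifies the ticket.
const classifyTicket: Step<{ text: string }, string> = async (ticket) => {
  // A real agent would invoke the model here; the stub keeps the sketch runnable.
  return ticket.text.includes('refund') ? 'billing' : 'general';
};

// A fixed pipeline that composes both kinds of step in order.
async function routeTicket(raw: string): Promise<string> {
  const ticket = await normalizeTicket(raw);
  return classifyTicket(ticket);
}

routeTicket('  Please process my REFUND  ').then((queue) =>
  console.log(`Routed to: ${queue}`)
);
```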