Basic Information

Dynamiq is an orchestration framework for building and running agentic AI and LLM applications. It is aimed at developers who need to compose, orchestrate, and manage retrieval-augmented generation (RAG) pipelines, single- and multi-agent systems, and complex flows of LLM nodes and auxiliary tools. The repository provides a Python library, example scripts, and documentation showing how to create LLM nodes, define agents with roles and tools, build workflows that run nodes in parallel or sequentially, and use orchestrators that manage multi-agent collaboration or graph-based processes. It also includes components for document conversion, splitting, embedding, vector storage and retrieval, and conversational memory, along with integrations with the external services referenced in the examples. The project is distributed as a Python package with installation instructions and is open source under the Apache 2.0 license.

App Details

Features
Dynamiq exposes composable primitives and examples for building LLM-driven systems. Key features shown in the README include predefined node types for OpenAI LLMs; agent implementations such as ReAct, Reflection, and SimpleAgent; and tools such as an E2B code interpreter and search integrations. Workflow primitives let users add nodes, declare dependencies between them, map inputs and outputs, and run flows synchronously or asynchronously. RAG support is provided through document converters, splitters, embedders, and Pinecone writers and retrievers. Orchestration options include a Workflow/Flow system, an AdaptiveOrchestrator with an agent manager, and a GraphOrchestrator with custom state routing. The library also offers a Memory module with backends, example patterns for chatbots, asynchronous agent execution, and sample integrations with services such as OpenAI, Pinecone, ScaleSerp, and E2B. Documentation and examples accompany the codebase.
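The workflow primitives described above (nodes, dependencies between them, input/output mapping, parallel or sequential execution) follow a common dependency-graph pattern. As an illustration of that pattern only — not the Dynamiq API — here is a minimal sketch of a dependency-driven runner; the names `Node` and `run_workflow` are hypothetical:

```python
from collections import deque

class Node:
    """Hypothetical workflow node: a named callable with dependencies."""
    def __init__(self, name, fn, depends_on=()):
        self.name = name
        self.fn = fn
        self.depends_on = list(depends_on)

def run_workflow(nodes, inputs):
    """Execute nodes in dependency order (Kahn's topological sort).

    Each node receives the workflow inputs plus the outputs of the
    nodes it depends on, mirroring input/output mapping between nodes.
    """
    by_name = {n.name: n for n in nodes}
    pending = {n.name: len(n.depends_on) for n in nodes}
    dependents = {n.name: [] for n in nodes}
    for n in nodes:
        for dep in n.depends_on:
            dependents[dep].append(n.name)
    ready = deque(name for name, deg in pending.items() if deg == 0)
    outputs = {}
    while ready:
        name = ready.popleft()
        node = by_name[name]
        upstream = {dep: outputs[dep] for dep in node.depends_on}
        outputs[name] = node.fn(inputs, upstream)
        for child in dependents[name]:
            pending[child] -= 1
            if pending[child] == 0:
                ready.append(child)
    if len(outputs) != len(nodes):
        raise ValueError("cycle detected in workflow dependencies")
    return outputs

# Two independent nodes feed a third, as in a fan-in flow:
flow = [
    Node("draft", lambda inp, up: f"draft of {inp['topic']}"),
    Node("facts", lambda inp, up: ["fact1", "fact2"]),
    Node("final", lambda inp, up: (up["draft"], up["facts"]),
         depends_on=["draft", "facts"]),
]
result = run_workflow(flow, {"topic": "RAG"})
```

In this sketch, `draft` and `facts` have no dependencies and could run in parallel; `final` only runs once both have produced output.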
Use Cases
Dynamiq helps developers accelerate the development of production-grade LLM applications by providing reusable components and orchestration patterns. Instead of wiring model calls, embedding pipelines, vector stores, and custom logic together from scratch, engineers can assemble nodes, tools, and agents into workflows that handle dependencies, input transformation, and parallel or sequential execution. The RAG examples illustrate indexing and retrieval flows for PDFs and other documents, simplifying document search and answer generation. Memory backends and agent roles enable stateful chatbots and iterative refinement loops. Orchestrators and agent managers coordinate multiple specialist agents for planning, coding, and search tasks. The repository includes examples, installation instructions, and API patterns that reduce the effort of integrating external services and make it faster to prototype, test, and deploy LLM-driven systems.
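The indexing and retrieval flow mentioned above (split documents into chunks, embed each chunk, store the vectors, then rank them against an embedded query) can be sketched in a few lines. This is a toy illustration of the RAG pattern, not Dynamiq's converters, embedders, or Pinecone integration: the bag-of-words `embed` stands in for a real embedding model, and all function names here are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def index(docs, chunk_size=8):
    """Split documents into fixed-size word chunks and embed each chunk."""
    store = []
    for doc in docs:
        words = doc.split()
        for i in range(0, len(words), chunk_size):
            chunk = " ".join(words[i:i + chunk_size])
            store.append((chunk, embed(chunk)))
    return store

def retrieve(store, query, k=1):
    """Return the top-k chunks most similar to the query."""
    qv = embed(query)
    ranked = sorted(store, key=lambda item: cosine(item[1], qv), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

store = index(["Pinecone stores vectors for retrieval.",
               "Agents use tools to answer questions."])
top = retrieve(store, "stores vectors")
```

In a real pipeline, the converter/splitter, the embedding model, and the vector store would each be separate components, which is exactly the decomposition the library's RAG examples reflect.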
