team-of-ai-agents

Basic Information

L3AGI (team-of-ai-agents) is an open-source platform for building, configuring, running, and managing autonomous AI Assistants and coordinated Teams of Assistants. The repository provides a React UI and a Python FastAPI backend, packaged with Docker and Docker Compose to simplify local deployment. It is intended for developers and teams who want to design collaborative agent workflows, give agents persistent memory, connect agents to external data sources and vector databases, and integrate through programmable APIs. The README documents a quick start, environment configuration, and architectural artifacts such as a database ERD. The project emphasizes integrations and tooling, including support for LlamaIndex, LangChain, Zep, Postgres, and common web services, and offers a roadmap, community links, and contributor information to help users extend or productionize multi-agent applications.
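As a rough illustration of how a locally deployed instance might be driven programmatically, the sketch below posts an assistant definition to the backend using Python's requests library. The base URL, port, endpoint path, and payload fields are assumptions made purely for illustration; the project's README and API documentation define the real routes.

# Hypothetical example of calling an L3AGI-style REST backend running locally.
# The base URL, endpoint path, and payload fields are assumptions for
# illustration only; consult the project's own API documentation for real routes.
import requests

BASE_URL = "http://localhost:4000"  # assumed port for a local Docker Compose deployment

payload = {
    "name": "research-assistant",                # hypothetical field
    "description": "Summarizes ArXiv results",   # hypothetical field
}

# POST the new assistant definition and print the backend's JSON response.
resp = requests.post(f"{BASE_URL}/agents", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())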

App Details

Features
The repository lists core features for orchestration and agent capabilities:
- Team of Assistants: create and manage groups of collaborating AI Assistants.
- Autonomous Assistants: configure standalone agents with their own behaviors and tools.
- Assistant Memory: persistent memory support and integration with vector databases.
- Data Sources & Integrations: connectors for Postgres, MySQL, file uploads, webpages, Notion, Google Analytics and Firebase.
- Retrieval: LlamaIndex and LangChain integration for enhanced retrieval and indexing (see the sketch after this list).
- Toolkits: built-in tools such as SERP, web scraper, DuckDuckGo, Bing, Wikipedia, ArXiv, OpenWeather, charts, Twilio, social platforms, Slack, Gmail and Google Calendar.
- Generators: chart generator and report generator for visualizations and reports.
- UI and APIs: a user-friendly React interface plus REST APIs for integration.
- Deployment: Dockerized quick start with documented architecture and troubleshooting.
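To make the retrieval integration concrete, the sketch below shows the kind of LlamaIndex pipeline such a feature typically builds on: load local files, build a vector index, and query it. It uses the public llama-index quick-start API and is not L3AGI's internal code; the "data" directory and query string are placeholders, and the default embedding/LLM backends assume an OpenAI API key is configured.

# Minimal LlamaIndex retrieval sketch (illustrative only; not L3AGI internals).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents from a local folder ("data" is a placeholder path).
documents = SimpleDirectoryReader("data").load_data()

# Build an in-memory vector index and expose it as a query engine.
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Ask a natural-language question against the indexed documents.
print(query_engine.query("What does the uploaded report conclude?"))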
Use Cases
This project helps developers and teams quickly prototype and run multi-agent systems that require collaboration, memory, and external data access. It reduces infrastructure setup time via Docker Compose and provides a ready-made UI to design, monitor, and manage assistants and teams. Integrations with databases, vector stores, LlamaIndex, and common web services let agents retrieve and act on real-world data. Built-in toolkits and APIs allow agents to perform web search, scraping, messaging, calendar, and analytics tasks, so developers do not have to build every connector from scratch. Persistent memory and report/chart generation assist with stateful workflows and result presentation. The included docs, community links, and contributor guidance make it easier to extend the platform or adapt it to specific multi-agent simulations and automation scenarios.
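To make the toolkit idea concrete, the sketch below shows the kind of minimal web-scraper tool such a toolkit typically wraps, built on the third-party requests and BeautifulSoup packages. It is a standalone illustration, not the project's built-in scraper.

# Illustrative web-scraper tool in the spirit of the scraper toolkit described above.
# Standalone sketch; not the project's built-in implementation.
import requests
from bs4 import BeautifulSoup

def fetch_page_text(url: str, timeout: int = 15) -> str:
    """Download a page and return its visible text with markup stripped."""
    resp = requests.get(url, timeout=timeout)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Collapse whitespace so the text is easier for an agent to consume.
    return " ".join(soup.get_text(separator=" ").split())

if __name__ == "__main__":
    print(fetch_page_text("https://example.com")[:500])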