Basic Information

RAGapp provides a deployable, enterprise-focused application for building and running agentic Retrieval-Augmented Generation (RAG) systems. The project aims to make Agentic RAG easy to configure and operate in private cloud environments, shipping a ready-made Docker image and management interfaces so teams can run RAG instances on their own infrastructure. It is built on LlamaIndex and supports both hosted models (OpenAI, Gemini) and local models via Ollama. The image bundles an Admin UI for configuration, a Chat UI for end-user interaction, and an API with OpenAPI docs. Deployment examples cover Docker Compose (a single deployment combining Ollama and Qdrant, and a multi-RAGapp setup with a management UI), with Kubernetes manifests planned. For contributors, the repository documents a development workflow based on Poetry and make tasks for building the frontends and running a local development environment.
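
The quickstart documented in the project README is a single Docker command; the endpoint paths below are the ones the README lists for a default local run (assuming the default port 8000 is kept):

    # Pull and run the prebuilt image, exposing the app on port 8000
    docker run -p 8000:8000 ragapp/ragapp

    # All three interfaces are then served from the same port:
    #   Admin UI:  http://localhost:8000/admin
    #   Chat UI:   http://localhost:8000
    #   API docs:  http://localhost:8000/docs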

App Details

Features
- Prebuilt Docker image exposing an Admin UI, a Chat UI, and an API with OpenAPI documentation.
- Supports hosted model providers (OpenAI, Gemini) and local model hosting via Ollama.
- Built on LlamaIndex for retrieval and document indexing.
- Example Docker Compose deployments, including one that pairs Ollama with Qdrant for vector storage and a multi-instance setup with a management UI.
- Clear, documented endpoints for the admin, chat, and API interfaces.
- Development tooling included (Poetry, make build-frontends, make dev); see the sketch after this list.
- Security by design: authentication and authorization are delegated to an external API Gateway placed in front of the app.
- README covers start instructions, endpoint URLs, deployment options, and where to direct issues or feature requests.
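
A sketch of the development workflow those tools imply, following the README's documented steps (exact flags and target names may vary between repo versions):

    # Install Python dependencies with Poetry
    poetry install --no-root

    # Build the Admin UI and Chat UI frontends
    make build-frontends

    # Run the app in a local development environment
    make dev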
Use Cases
RAGapp helps organizations stand up agentic RAG on their own infrastructure quickly, reducing reliance on hosted SaaS for core orchestration and data storage. The Admin UI simplifies configuration so non-developers can set up indexes and connectors, while the Chat UI and API provide immediate interfaces for user-facing applications and integrations. Support for local models via Ollama and vector storage with Qdrant enables on-prem or hybrid workflows in data-sensitive environments. The Docker Compose examples lower the barrier to spinning up a working stack, and the planned Kubernetes support targets cloud-scale deployments. The repository also supplies development guidance for teams that want to extend or customize the app. Because RAGapp ships without built-in authentication, the security model encourages placing an API Gateway in front of it to handle authentication and authorization.
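
As an illustration of the on-prem pattern above, the three services can be wired together on a shared Docker network. This is a hedged sketch, not the repository's actual Compose example: the network and container names are invented here, and the assumption that RAGapp is pointed at Ollama and Qdrant through the Admin UI (rather than via required flags) is ours:

    # Shared network so containers can reach each other by name
    docker network create ragnet

    # Qdrant for vector storage (default HTTP port 6333)
    docker run -d --name qdrant --network ragnet -p 6333:6333 qdrant/qdrant

    # Ollama for local model hosting (default port 11434)
    docker run -d --name ollama --network ragnet -p 11434:11434 ollama/ollama

    # RAGapp itself; service URLs such as http://ollama:11434 and
    # http://qdrant:6333 would then be entered in the Admin UI
    # (hypothetical values, not flags required by the image)
    docker run -d --name ragapp --network ragnet -p 8000:8000 ragapp/ragapp

In production, the security model described above still applies: front port 8000 with an API Gateway that handles authentication and authorization, since RAGapp itself ships without auth.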
