Basic Information

RAGs is a Streamlit application for building retrieval-augmented generation (RAG) pipelines over a data source using natural language. The app walks a user through describing the dataset (currently a single local file or a web page), specifying the task that becomes the LLM system prompt, and setting RAG parameters such as top-k, chunk size, summarization, embedding model, and LLM. A builder agent generates an initial configuration that can be reviewed and edited in a config view; once created, the RAG agent is exposed as a chatbot interface that answers questions over the provided data. The project is built on LlamaIndex, is inspired by OpenAI's GPTs, and includes setup guidance covering poetry, a .streamlit/secrets.toml file for an OpenAI key, and a launch command that runs the Streamlit home page file.
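As a rough illustration of the secrets setup described above, the sketch below shows how a Streamlit app can pick up an OpenAI key from .streamlit/secrets.toml and hand it to downstream libraries. The key name openai_key and the environment-variable handoff are assumptions for illustration, not the project's confirmed wiring.

```python
# Minimal sketch, assuming .streamlit/secrets.toml contains a line like:
#   openai_key = "sk-..."
# The key name "openai_key" is an assumption for illustration.
import os

import streamlit as st


def load_openai_key() -> str:
    # st.secrets exposes .streamlit/secrets.toml entries as a mapping.
    key = st.secrets.get("openai_key", "")
    if not key:
        st.error("No OpenAI key found in .streamlit/secrets.toml")
        st.stop()
    # Many OpenAI/LlamaIndex code paths read the key from the environment.
    os.environ["OPENAI_API_KEY"] = key
    return key
```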

App Details

Features
- A natural-language "builder agent" that constructs RAG configurations and system prompts, governed by a configurable builder_config.py.
- A Streamlit home UI for describing datasets and tasks and for initiating agent creation.
- A RAG Config view that displays the generated parameters and allows manual edits; parameters include the system prompt, an include-summarization flag, top-k, chunk size, embedding model, and LLM (see the config sketch after this list).
- A generated RAG agent exposed as a standard chatbot interface that chooses between retrieval and summarization tools to answer queries.
- Support for multiple LLM backends and embedding providers with explicit model ID conventions (OpenAI, Anthropic, Replicate, local Hugging Face).
- LlamaIndex as the core RAG engine.
- Setup instructions, caching behavior notes, and examples for running locally with poetry and Streamlit.
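To make the parameter list above concrete, here is a minimal sketch of what such a RAG configuration might look like as a Python data structure. The class name, field names, defaults, and example model IDs are illustrative assumptions modeled on the conventions described above, not the project's actual classes or values.

```python
# Hypothetical sketch of a RAG agent configuration; names and defaults are
# assumptions for illustration, not the project's API.
from dataclasses import dataclass


@dataclass
class RAGConfig:
    system_prompt: str                   # task description turned into the LLM system prompt
    include_summarization: bool = False  # whether to add a summarization tool
    top_k: int = 2                       # number of chunks retrieved per query
    chunk_size: int = 1024               # size of indexed chunks
    embed_model: str = "default"         # e.g. "local:BAAI/bge-small-en" for a local HF model
    llm: str = "openai:gpt-4"            # e.g. "openai:<model>", "anthropic:<model>", "replicate:<model>"


# Example: a config a builder agent might emit for a Q&A task over documents.
config = RAGConfig(
    system_prompt="You are a helpful assistant answering questions about the provided documents.",
    include_summarization=True,
    top_k=4,
    chunk_size=512,
)
```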
Use Cases
The app streamlines creating a RAG chatbot over user data by letting builders describe intent and data in plain language, reducing manual configuration work. Users can quickly prototype retrieval and summarization workflows, tune retrieval parameters via a UI, and swap LLMs or embeddings without rebuilding the pipeline. The generated chatbot provides an immediate interactive surface to validate whether the RAG configuration meets information needs. Support for local files and web pages makes it easy to test on varied data sources, and built-in defaults plus editable configs aid both non-technical users and developers iterating on agent behavior. Integration with LlamaIndex and explicit model ID formats enables experimentation across different model providers.
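For readers who want a feel for the kind of pipeline being assembled, the snippet below is a hand-written LlamaIndex sketch over a local file with a tunable chunk size and top-k. It assumes a recent llama-index release with the llama_index.core layout, an OPENAI_API_KEY in the environment, and a hypothetical input path; it is not the code RAGs itself generates.

```python
# Hand-written sketch of a retrieval pipeline similar in spirit to what the
# builder agent configures; assumes `pip install llama-index` (recent release
# with the llama_index.core layout) and OPENAI_API_KEY set in the environment.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter

# Load a single local file ("data/report.pdf" is a placeholder path).
documents = SimpleDirectoryReader(input_files=["data/report.pdf"]).load_data()

# Chunk size is one of the RAG parameters surfaced in the config view.
splitter = SentenceSplitter(chunk_size=512)
index = VectorStoreIndex.from_documents(documents, transformations=[splitter])

# top-k controls how many chunks are retrieved per query.
query_engine = index.as_query_engine(similarity_top_k=4)
print(query_engine.query("What are the key findings?"))
```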
