conversational-agent-langchain

Basic Information

This repository provides a REST backend for a conversational retrieval-augmented generation (RAG) agent: it lets you embed documents, run semantic and hybrid search, answer questions over ingested documents, and process documents with large language models. It combines a FastAPI-based API, a Qdrant vector database for storing embeddings, and provider-agnostic model access via LiteLLM. The project includes Docker Compose orchestration for local deployment, a simple Streamlit GUI, ingestion scripts for bulk data loading, and API documentation served at /docs and /redoc. The recommended defaults are Cohere for embeddings and Google AI Studio (Gemini) for generation, though LiteLLM allows swapping in other providers. The codebase also includes testing and development instructions using poetry, uvicorn, and pytest.
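The local stack described above could be wired together roughly as follows. This is a sketch, not the repository's actual compose file; the service names, the API port, and the build context are assumptions (6333 is Qdrant's default REST port):

```yaml
services:
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"        # Qdrant REST API (default port)
  api:
    build: .               # hypothetical: the FastAPI backend image
    env_file: .env         # provider API keys and Qdrant settings
    ports:
      - "8001:8001"        # hypothetical API port
    depends_on:
      - qdrant
```

With a layout like this, the FastAPI service reaches Qdrant at the in-network hostname `qdrant` rather than `localhost`.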

App Details

Features
The repo implements a FastAPI REST API exposing the conversational RAG functionality along with OpenAPI docs. It integrates Qdrant as the vector database and supports both semantic search and a hybrid search mode that adds BM25 sparse embeddings via FastEmbed. Model access is provider-agnostic through LiteLLM, with recommended defaults of Cohere for embeddings and Google Gemini for generation. Deployment is simplified with Docker Compose and local Qdrant service options. There are scripts for bulk ingestion and demo data loading, a Streamlit-based GUI for interactive demos, a testing setup using pytest and coverage, and developer tooling instructions covering poetry, uvicorn, and mypy. The README documents configuration via a .env template and notes where to place the Qdrant API key and related settings.
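Hybrid search combines two rankings of the same corpus: one from dense semantic vectors and one from a sparse keyword model such as BM25. A common way to merge them is reciprocal rank fusion (RRF), which Qdrant also offers server-side; the sketch below illustrates the idea in plain Python (the document ids and rankings are made up for illustration, and this is not necessarily the exact fusion the repo uses):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into one.

    rankings: list of rankings, each a list of ids ordered best-first.
    k: damping constant; 60 is the value commonly used in the literature.
    Each document scores 1 / (k + rank) per list it appears in, so items
    ranked well by both the dense and the sparse retriever rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["doc_a", "doc_b", "doc_c"]   # semantic (vector) ranking
sparse = ["doc_b", "doc_d", "doc_a"]  # BM25 (keyword) ranking
fused = reciprocal_rank_fusion([dense, sparse])
```

Here `doc_b` wins because it is ranked by both retrievers, even though neither list puts it first.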
Use Cases
This project helps teams and developers quickly prototype and run a document-aware conversational agent that answers questions from uploaded or bulk-ingested documents. It centralizes embeddings and vector search in Qdrant, so semantic retrieval can be combined with LLM generation for grounded responses. Docker Compose and local development commands reduce setup friction, while the REST API and OpenAPI docs make integration with other services straightforward. LiteLLM support enables switching LLM providers without changing core code. The included ingestion scripts, demo data, Streamlit GUI, and test suite make it easier to load content, validate behavior, and demonstrate the agent to stakeholders or embed it in larger applications.
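The grounded-response flow mentioned above typically means stuffing the retrieved chunks into the prompt alongside the user's question before calling the LLM. A minimal sketch of that assembly step (the function name and prompt wording are illustrative, not taken from the repo):

```python
def build_grounded_prompt(question, chunks):
    """Assemble a RAG prompt: numbered retrieved chunks, then the question.

    Numbering the sources lets the model cite them, which makes it easier
    to verify that an answer is actually grounded in the documents.
    """
    context = "\n\n".join(f"[{i}] {chunk}" for i, chunk in enumerate(chunks, start=1))
    return (
        "Answer using only the sources below. Cite source numbers, and say "
        "so if the answer is not contained in them.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "What port does Qdrant listen on?",
    ["Qdrant listens on port 6333 by default.", "The API is served by FastAPI."],
)
```

The resulting string would then be sent through LiteLLM to whichever provider is configured, which is what keeps the core code provider-independent.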