Features
Rankify offers a unified framework that combines retrieval, re-ranking, and retrieval-augmented generation (RAG) in a modular design. It includes 40+ benchmark datasets with up to 1,000 pre-retrieved documents per question, prebuilt indices for Wikipedia and MS MARCO, and a CLI for building custom indices. Supported retrievers include BM25, DPR, ColBERT, ANCE, BGE, Contriever, BPR, and HyDE. The re-ranking suite provides roughly 24 models with many sub-methods, including MonoT5, MonoBERT, RankT5, RankGPT, and other transformer-based rankers. The Generator module supports multiple RAG methods, such as zero-shot, basic RAG, chain-of-thought, and Fusion-in-Decoder (FiD), and integrates with Hugging Face, vLLM, LiteLLM, and OpenAI backends. Built-in evaluation metrics, example notebooks, a Streamlit demo, and documented APIs round out the feature set.
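A minimal end-to-end sketch of the retrieve, re-rank, and generate flow is shown below. The module paths, class names, and constructor arguments follow the patterns in Rankify's documented examples, but treat them as assumptions and confirm the exact names against the project docs for your installed version:

```python
# Minimal retrieval -> re-ranking -> generation pipeline sketch.
# Module paths, class names, and arguments are assumptions modeled on
# Rankify's documented examples; verify against the docs for your version.
from rankify.dataset.dataset import Dataset
from rankify.models.reranking import Reranking
from rankify.generator.generator import Generator

# Load a pre-retrieved benchmark dataset (BM25 top-100 passages per question).
dataset = Dataset(retriever="bm25", dataset_name="nq-dev", n_docs=100)
documents = dataset.download(force_download=False)

# Re-rank each question's retrieved passages with a MonoT5 cross-encoder.
reranker = Reranking(method="monot5", model_name="monot5-base-msmarco")
reranker.rank(documents)  # assumed to reorder each question's contexts in place

# Generate answers from the re-ranked contexts with a basic RAG prompt.
generator = Generator(method="basic-rag", model_name="meta-llama/Llama-3.1-8B-Instruct")
answers = generator.generate(documents)
print(answers[0])
```

Because datasets ship with pre-retrieved documents, no index needs to be loaded for this sketch; swapping in a different retriever, re-ranker, or generation method is a matter of changing the corresponding constructor arguments.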
Use Cases
Rankify reduces development overhead by supplying pre-retrieved datasets, prebuilt indices, and ready-to-use retrievers and re-rankers, so teams can focus on experimentation rather than infrastructure. The modular generator and support for multiple backends let users prototype RAG setups with different LLMs and strategies without custom glue code. CLI indexing and example code simplify onboarding and dataset preparation. Built-in evaluation utilities enable consistent benchmarking of retrieval quality, re-ranking gains, and generated-answer accuracy. The package is installable via pip with optional extras, includes documentation and demo utilities, and is designed to be extensible, so researchers can add new retrievers, re-rankers, datasets, or RAG methods for reproducible comparison.
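After installing the base package with `pip install rankify` (optional extras add retriever and re-ranker dependencies; see the install docs for the available extras), the evaluation utilities can score a pipeline like the one sketched above. The snippet below is a sketch only: the Metrics import path and method names are assumptions modeled on the project's examples, not a guaranteed API.

```python
# Sketch of Rankify's built-in evaluation, continuing the pipeline above.
# The Metrics import path, method names, and arguments are assumptions
# based on the project's examples; verify them against the documentation.
from rankify.metrics.metrics import Metrics

metrics = Metrics(documents)

# Retrieval quality before and after re-ranking (top-k accuracy).
before = metrics.calculate_retrieval_metrics(ks=[1, 5, 10], use_reordered=False)
after = metrics.calculate_retrieval_metrics(ks=[1, 5, 10], use_reordered=True)

# Exact-match / F1 style scores for the generated answers.
generation = metrics.calculate_generation_metrics(answers)

print(before, after, generation)
```

Scoring retrieval both before and after re-ranking from the same object is what makes the re-ranking gains directly comparable across runs.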