Awesome-RAG-Reasoning

Basic Information

This repository is a curated, up-to-date collection of research papers, open-source implementations, benchmarks, datasets and high-level guidance focused on integrating Retrieval-Augmented Generation (RAG) with explicit reasoning in large language models and agents. It collects and organizes work across complementary directions such as Reasoning-Enhanced RAG, RAG-Enhanced Reasoning and Synergized RAG-Reasoning systems. The README follows a taxonomy derived from a related survey, highlighting categories such as retrieval optimization, integration and generation enhancement, external knowledge sources, in-context retrieval, chain/tree/graph reasoning workflows, and agentic orchestration for single- and multi-agent systems. The target audience is researchers and practitioners who want a consolidated reference to the literature, code repositories and evaluation resources when designing, benchmarking or extending RAG+reasoning methods in LLMs and agentic AI.
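To make the core pattern concrete, the sketch below shows a minimal retrieval-augmented generation step: rank passages against a query, then assemble an evidence-grounded prompt. The toy corpus, the token-overlap scorer and the prompt format are illustrative assumptions, not taken from any specific paper or repository in the collection; a real system would use a dense or sparse retriever and an LLM call in place of the stand-ins here.

```python
# Minimal illustrative RAG sketch (toy assumptions, not a specific paper's method).

def tokenize(text: str) -> set[str]:
    """Lowercase whitespace tokenization; a stand-in for a real tokenizer."""
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by token overlap with the query; a stand-in for a
    dense or sparse retriever."""
    q = tokenize(query)
    scored = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Concatenate retrieved evidence with the question. A reasoning-enhanced
    system would instead interleave retrieval with intermediate reasoning steps."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Mount Everest is the highest mountain.",
]
query = "What is the capital of France?"
prompt = build_prompt(query, retrieve(query, corpus))
```

The taxonomies in this collection largely describe where reasoning enters this loop: before retrieval (query planning), between retrieval and generation (evidence integration), or as an outer agentic loop that decides when to retrieve again.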

App Details

Features
The repository organizes material under a clear taxonomy and provides extensive curated lists of papers with associated code implementations and venue annotations. It includes sections on retrieval optimization, integration enhancement, generation improvement, knowledge-base and web retrieval, tool use, in-context retrieval, chain/tree/graph reasoning patterns, and agentic orchestration. Benchmarks and datasets are tabulated by task type, domain, knowledge type and reasoning capability. The README also points to a recommended related collection on deep research. Many entries link to code repositories with star counts where available, and the project supplies citation text for the foundational survey papers along with contribution instructions for adding resources in the established format.
Use Cases
This collection accelerates literature review and system design by centralizing recent RAG and reasoning research, implementations and benchmarks in one place. Practitioners can find candidate papers and code to reproduce or extend methods, choose appropriate datasets and benchmarks for evaluation, and compare approaches across retrieval, reasoning workflows and agent orchestration. The taxonomy helps map methods to tasks such as multi-hop QA, multimodal reasoning, knowledge-graph approaches and tool-augmented agents. Contributors can add resources following provided guidelines, and users are given citation metadata for the underlying survey and related works to support academic use and reproducible research.
