awesome LLM resources

Basic Information

This repository is a curated, continuously updated collection of resources for large language models (LLMs). It aggregates tools, libraries, papers, tutorials, courses, and community links in Chinese and English to help practitioners, researchers, and engineers discover materials across the LLM ecosystem. The README organizes resources into many topical sections, such as Data, Fine-Tuning, Inference, Evaluation, Usage, RAG (retrieval-augmented generation), Agents, Coding, Video, Search, Speech, Unified Models, Papers, Courses, Tutorials, MCP, and open o1 research. It highlights representative projects, toolkits, and benchmarks, and provides an outline and table of contents for quick navigation. The project includes contributor information, badges, and an online readable version, and is presented as an authoritative index of notable open-source and academic artifacts related to LLM development.

App Details

Features
An extensive, categorized index covering end-to-end aspects of LLM work, including data collection and cleaning tools, fine-tuning frameworks, inference and serving engines, RAG systems, multi-agent frameworks, evaluation suites, speech and video model lists, unified multimodal models, and small-model projects. Each section lists representative repositories, libraries, and toolkits with short notes and occasional highlights. The README provides a clear table of contents for fast lookup, marks notable entries, includes community and citation information, and links to an online readable site. It compiles papers, technical reports, courses, and tutorials alongside practical repos, making it a hybrid literature-and-tools reference for the LLM community. The repository is maintained and periodically updated.
Use Cases
The collection helps developers, researchers, and learners quickly find relevant tools and knowledge across the LLM stack without searching many disparate sources. It speeds onboarding by pointing to popular fine-tuning frameworks, inference servers, RAG engines, agent frameworks, evaluation platforms, and data-processing utilities. It supports research discovery by aggregating technical reports and important papers, and it helps practitioners choose implementations for deployment, memory and long-context strategies, MCP servers, and multimodal models. Educators and learners can use the curated courses, books, and tutorials to structure their study. Overall, it reduces discovery time, surfaces community best practices, and centralizes the references needed to build, evaluate, and deploy LLM applications.