llms_paper


Basic Information

This repository, titled LLMs 论文研读社 (LLMs Paper Study Group) and maintained by Yang Xi (km1994), is a curated, topic-organized collection of reading notes, summaries, and pointers for research papers and resources on large language models. It covers areas relevant to LLM algorithm engineers, such as multimodal models, parameter-efficient fine-tuning (PEFT), few-shot and document QA, retrieval-augmented generation (RAG), LLM interpretability, agents, and chain-of-thought methods. The README aggregates paper metadata (titles, links, institutions), concise method descriptions, experimental highlights, and related GitHub projects, and groups the content into thematic sections and subpages (for example GPT4Video, the PEFT series, RAG tricks, and application domains). The content is primarily in Chinese, with links to the original papers and code where available. The repo serves as a living bibliography and study guide for keeping up with recent LLM literature.


App Details

Features
Organized, comprehensive paper notes segmented into thematic sections, including multimodal models, PEFT, the GPT series, RAG, prompting, LLM explainability, agents, search, and long-context benchmarks. Each entry typically records the paper title, an arXiv or conference link, institutional authorship, a short motivation/method summary, and reported results. The README includes subfolders and dedicated pages for specific topics (e.g., Video/GPT4Video, PEFT/LORA, RAG tasks), links to relevant GitHub implementations, dataset and benchmark references, recommended reading lists, and example prompts and training workflows. It also cross-references the author's related repositories for interviews and tutorials, includes images and resource pointers, and offers practical notes on topics such as query-doc construction, retrieval strategies, prompt engineering, and data engineering for fine-tuning.
Use Cases
The repo helps researchers and practitioners quickly scan and understand recent LLM literature without reading every paper in full. It collects concise method overviews, experimental findings, and implementation links that accelerate literature reviews, model design choices, and experiment planning. Engineers can use the categorized notes to find techniques for tasks such as multimodal instruction tuning, efficient fine-tuning (LoRA/QLoRA), RAG pipelines, retrieval and ranking strategies, prompt and chain-of-thought methods, long-context evaluation, and inference optimization. The collection also supports study and interview preparation through focused subcollections, points to code and datasets for replication, and highlights practical trade-offs and open problems encountered across papers, aiding both research and engineering decisions.
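The retrieval step of a RAG pipeline, one of the techniques the collected notes cover, can be sketched as a toy example. This is an illustrative sketch only (the `cosine` and `retrieve` helpers and the sample documents are hypothetical, not from the repo); production pipelines typically rank with dense embeddings rather than bag-of-words overlap:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and return the top-k;
    # in a full RAG pipeline these would be passed to the LLM as context.
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "LoRA adds low-rank adapters for efficient fine-tuning",
    "retrieval augmented generation grounds answers in documents",
    "chain of thought prompting improves reasoning",
]
print(retrieve("retrieval augmented generation", docs, k=1))
# → ['retrieval augmented generation grounds answers in documents']
```

Swapping the bag-of-words scorer for an embedding model, and adding a reranking stage, yields the retrieval-and-ranking strategies discussed in the repo's RAG sections.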
