awesome LLM game agent papers

Basic Information

This repository is a curated, continuously updated survey and bibliography of research on large language model (LLM)-based game agents. It collects must-read papers and accompanying code references organized by game type, such as text adventure, video adventure, crafting and exploration (including Minecraft and Crafter), simulation, competition, cooperation, communication, action games, dialogue and story generation, and benchmarks. Entries list year, title, and venue when available, and frequently link to the paper and code. The project is positioned as a reference for researchers, practitioners, and students tracking progress in LLM agents for games and interactive environments. The README includes citation metadata for the survey paper and a maintainer contact email, and it notes that the list is updated regularly, with the most recent recorded update on 2025/08/04.

App Details

Features
The repository organizes the literature by game category and subtopic and annotates entries with publication year, venue, and links to the paper and associated code when available. It aggregates influential work on topics such as model-driven planning, embodied agents, reinforcement learning integration, multi-agent coordination, and benchmarks. The README carries badges showing visit, star, and fork counts and provides a BibTeX block for citing the survey. Entries range from conference and arXiv papers to open-source implementations. The content is presented as a living list with periodic updates, a contact point for contributions, and explicit sections for benchmarks and evaluation suites that help users find reproducible code and evaluation resources.
Use Cases
This survey serves as a consolidated starting point for anyone researching LLM-based game agents, reducing search overhead and surfacing both seminal and recent papers with code links. It helps readers discover approaches across modalities and game genres, compare methods, and identify benchmarks and toolkits referenced by the community. The BibTeX citation makes it easy to credit the survey in academic work. The organized sections and links to implementations support rapid prototyping and literature reviews, and the contact information invites community contributions to keep the list current. Regular updates and curated benchmark pointers make it practical for researchers, students, and developers to stay informed about state-of-the-art results and available resources.
