Awesome Embodied Robotics and Agent

Basic Information

This repository is a curated, community-maintained bibliography and resource index for embodied robotics and agent research that integrates vision-language models (VLMs) and large language models (LLMs). It collects surveys, recent papers, code links, project pages, benchmarks and simulators across subareas such as vision-language-action models, self-evolving agents, LLMs combined with reinforcement learning, planning and manipulation, multi-agent coordination, navigation, 3D grounding, detection and interactive learning. The README is structured with a table of contents and topical sections, and it includes news updates and visual demo references. The list is intended to help researchers, students and practitioners discover state-of-the-art work, implementations and evaluation suites in embodied AI, and it encourages contributions via pull requests. The repository lists its maintainers and recent commit activity to signal ongoing maintenance.

Features
The list is organized into topical sections that group resources by research area, including surveys, Vision-Language-Action models, self-evolving agents, advanced applications, LLMs with RL or world models, planning and manipulation, multi-agent learning, navigation, detection, 3D grounding, interactive embodied learning, rearrangement, benchmarks and simulators. Each entry typically cites a paper and links to project pages, code repositories and demos where available (a hypothetical entry format is sketched below). The README also contains a news log of recent additions, example videos and figures, and a contributor and commit history that shows ongoing updates. Many entries reference arXiv preprints, conference publications and GitHub project repositories. The document is designed as a navigable index for both the literature and implementation artifacts, and it invites community contributions.
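As a concrete illustration of the entry format described above, the sketch below shows what a single item in such an awesome-list README might look like. The paper title, venue and URLs are hypothetical placeholders for illustration only, not actual entries from the repository:

    <!-- Hypothetical entry: the title, IDs and links below are placeholders -->
    - **Example-VLA: A Vision-Language-Action Model for Robot Manipulation** (arXiv preprint)
      [Paper](https://arxiv.org/abs/xxxx.xxxxx) | [Code](https://github.com/example/example-vla) | [Project Page](https://example-vla.github.io)

Entries in this style let readers jump from a citation directly to its implementation and demo, which is what makes the index useful as a navigation aid rather than a plain reading list.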
Use Cases
The repository acts as a single reference point for quickly locating influential papers, open-source implementations and benchmarking resources for embodied AI work that combines vision and language models. It reduces time spent searching disparate sources by grouping related work and linking directly to project pages, code and demos. It supports literature reviews, course reading lists, project scaffolding and experimental comparisons by highlighting surveys, benchmarks and simulators. Researchers can track recent developments via the news section and commit history, practitioners can find code and datasets for reproduction, and students can explore the organized subtopics to learn about state-of-the-art methods and evaluation suites in embodied robotics and agents.
