Awesome Embodied Robotics and Agent
Basic Information
This repository is a curated, community-maintained bibliography and resource index for embodied robotics and agent research that integrates vision-language models (VLMs) and large language models (LLMs). It collects surveys, recent papers, code links, project pages, benchmarks, and simulators across subareas including vision-language-action models, self-evolving agents, LLMs combined with reinforcement learning, planning and manipulation, multi-agent coordination, navigation, 3D grounding, detection, and interactive learning. The README is organized with a table of contents and topical sections, and it also carries news updates and visual demo references. The list aims to help researchers, students, and practitioners discover state-of-the-art work, implementations, and evaluation suites in embodied AI, and it welcomes contributions via pull requests. Maintainers and recent commits are noted to signal that the list is actively updated.