ICLR2025-Papers-with-Code

Basic Information

This repository is a curated, regularly updated collection of papers and open-source code associated with ICLR 2025, with links and metadata gathered to help researchers track the conference output. The README states that the project aims to collect the latest ICLR research progress, with an emphasis on large language models and NLP, and to maintain long-term updates. It aggregates accepted works across categories such as spotlight, poster, and oral, and also links to prior years' collections for ICLR 2021–2024. Entries typically include paper citations and pointers to code repositories, demos, datasets, and benchmark pages. The maintainer invites community engagement via watching, forking, and starring, and the commit history reflects recent updates and additions.

App Details

Features
Extensive index of accepted ICLR 2025 papers organized by acceptance type and topic, with explicit entries that pair each paper title with available code repositories, dataset pointers, and demo pages. Includes cross-references to previous years' aggregated collections and separate files for ICLR2024, ICLR2023, ICLR2022, and ICLR2021. The README highlights curated spotlight, poster, and oral lists, and contains bilingual notes and maintenance information. Many entries reference benchmarks, implementations, and resources for reproducibility. The project shows frequent updates via commit logs and spans varied research areas such as LLMs, vision, diffusion models, benchmarks, and agent systems. The layout is primarily a long-form, browsable catalog of links and short descriptions.
Use Cases
The repository serves as a discovery and reference hub for researchers, students, and practitioners looking for ICLR 2025 papers and their code implementations. It saves time by consolidating paper titles, acceptance categories, and pointers to code, demos, datasets, and benchmarks in one place, making it easier to locate reproducible implementations and evaluation suites. It supports literature review, replication, benchmark selection, and trend tracking across topics such as large language models, multimodal models, and evaluation benchmarks. Educators can use the list to pick readings and assignments, while developers can find starter code and datasets for experiments. The stated intent to maintain long-term updates helps users keep current with newly accepted works and community-provided repositories.