Awesome LLM in Social Science

Basic Information

This repository is a curated, community-maintained collection of academic papers, datasets, code links and other resources on the use, evaluation, alignment and simulation of large language models (LLMs) from a social science perspective. It gathers surveys, empirical studies, benchmarks and datasets that evaluate LLM behavior on psychological constructs, values, personality, morality, opinions, preferences, abilities and safety risks. The collection emphasizes psychology and intrinsic values, grouping resources into thematic sections such as surveys, datasets, evaluating LLMs, tool enhancement, alignment, simulation and perspective/position pieces. Entries include pointers to papers, datasets and code repositories where available. The README encourages contributions and cites the maintainers' own work, making the repository a central index for researchers, students and practitioners interested in interdisciplinary work at the intersection of LLMs and social science.

App Details

Features

The repository provides an organized thematic table of contents that categorizes resources into surveys, datasets, multiple evaluation axes (value, personality, morality, opinion, general preference, ability, risk/safety), tool enhancement, alignment and simulation. It highlights the maintainers' own contributions with a star marker and supplies bibliographic citations for key works. Many entries link to papers, datasets, code repositories and named benchmarks and toolkits, such as ValueBench, where these are available. The README lists representative papers, benchmarks and reproducible artifacts, and it emphasizes cross-cutting topics such as pluralistic alignment, psychometrics, multi-agent simulation and social-science-oriented evaluation methodologies. The project is presented as community-friendly and open to contributions and discussion.
Use Cases

The repository serves as a time-saving literature index for researchers, students and practitioners who need a consolidated view of work on LLM evaluation, alignment and social-science applications. It helps users discover surveys, empirical studies, benchmarks, datasets and code relevant to psychometrics, value and moral assessments, opinion measurement and agent-based social simulations. By grouping resources by topic and noting available code or data, the collection supports designing experiments, benchmarking models, building simulation environments and informing alignment or policy research. It is useful for cross-disciplinary collaboration between social scientists and ML researchers and for quickly locating validated datasets, evaluation frameworks and prominent recent works.
