CL4R1T4S

Report Abuse

Basic Information

CL4R1T4S is a public archive that collects and publishes extracted system prompts, internal guidelines, and tooling from major AI model providers to improve transparency and observability. The README states the project aggregates full system prompts and scaffolds from organizations such as OpenAI, Google, Anthropic, xAI, Perplexity, Cursor, Windsurf, Devin, Manus, Replit and others. The stated motivation is that hidden system instructions shape model behavior, personas, refusals, and ethical or political framing, so making those inputs visible helps users understand the forces shaping AI outputs. The repository invites contributions that document the model name and version, date of extraction, and optional context or notes. The project positions itself as a resource for researchers, journalists, and the public to inspect what models are instructed to do and why.

Links

Categorization

App Details

Features
A curated collection of extracted system prompts and internal guidelines aggregated from many major AI labs and agent projects. Explicit contributor guidance that asks for model name/version, extraction date, and contextual notes to support provenance. Examples and captured directives are included in the README to illustrate the kinds of hidden instructions that influence model responses. Emphasis on transparency and observability rather than providing tooling to run models. Public commits and updates are visible in the repository history. The project covers a wide range of vendors and agent architectures as listed in the README and welcomes reverse-engineered or leaked system prompts for inclusion.
Use Cases
The repository helps auditors, researchers, journalists, and concerned users inspect the hidden instructions that govern AI behavior, enabling better trust assessments and accountability. By aggregating system prompts from many vendors in one place, it allows comparative analysis of personas, refusal policies, safety constraints, and how models are guided to answer or redirect. The contribution guidelines promote provenance by asking for model identifiers and extraction dates, which supports reproducibility and historical tracking of instruction changes. The archive can inform policy discussions, help detect bias or political framing baked into prompts, and serve as a resource for transparency-focused investigations and educational analysis.

Please fill the required fields*