Basic Information

Expected Parrot Domain-Specific Language (EDSL) is a Python library for designing, running, and analyzing AI-driven surveys and experiments, aimed at computational social science and market research. It enables researchers to run surveys and labeling tasks with many AI agents and large language models in parallel, producing structured, reproducible datasets. The package supports local or server-side execution and integrates with an Expected Parrot account and the Coop platform for sharing workflows. It is intended for teams and researchers who need to compare model responses, simulate diverse agent personas, run parameterized scenarios from multiple input formats, and produce analysis-ready outputs. Installation is via pip, and the project targets Python 3.9 to 3.13. API keys for language models are required and can be managed through a user account.
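The core pattern described above — fanning a set of questions out across many agents and models in parallel and collecting structured rows — can be sketched in plain Python. This is a conceptual illustration only; the dictionaries and the `ask` function are hypothetical stand-ins, not EDSL's actual question, agent, or model objects.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Hypothetical stand-ins for agent personas, model names, and questions;
# these are illustrative, not EDSL's actual API.
agents = [{"persona": "skeptical economist"}, {"persona": "optimistic student"}]
models = ["model-a", "model-b"]
questions = ["Will remote work increase productivity?"]

def ask(agent, model, question):
    # A real run would call an LLM here; this sketch just returns a
    # structured record, mirroring the analysis-ready rows described above.
    return {"persona": agent["persona"], "model": model,
            "question": question, "answer": f"<{model} response>"}

# Fan every (agent, model, question) combination out in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda t: ask(*t),
                            product(agents, models, questions)))

print(len(results))  # 2 agents x 2 models x 1 question = 4 rows
```

Because every combination produces one row with the same fields, the output can be loaded directly into a dataframe for analysis.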


App Details

Features
EDSL offers a declarative survey DSL with typed question objects for consistent outputs, parameterized prompts via scenario lists that can import data from CSV, PDF, and image files, and agent personas and agent lists for eliciting diverse responses. Model and ModelList abstractions run many LLMs concurrently. Survey flows include piping, skip logic, and stop rules for building complex labeling pipelines. Responses are cached in a universal remote cache, so prior runs can be reproduced at no additional model cost. The library includes utilities to format results as structured datasets and provides built-in analysis, visualization, and export functions. Integration options include running locally or on the Expected Parrot server and sharing projects on the Coop collaboration platform.
Use Cases
EDSL helps teams and researchers accelerate AI-based surveys and data labeling by providing repeatable, parameterized experiments and ensemble comparisons across models and agent personas. The declarative question types and scenario imports reduce boilerplate and help ensure consistent structured outputs suitable for analysis. Built-in caching lowers compute costs by reusing prior LLM responses and supports exact reproducibility of experiments. Piping, skip-logic and collaborative features simplify complex survey logic and team workflows. Integration with account-managed API keys and the Coop platform assists with sharing, tracking usage and managing model access. The package also bundles analysis and visualization helpers so results can be exported and examined as datasets without custom tooling.
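Skip logic of the kind mentioned above can be sketched as questions guarded by predicates over earlier answers. This is a conceptual toy, not EDSL's survey-flow syntax; the field names and the `administer` helper are illustrative assumptions.

```python
# Each question may carry a "show" predicate over prior answers;
# questions whose predicate fails are skipped (illustrative only).
survey = [
    {"name": "uses_product", "text": "Do you use the product?", "show": None},
    {"name": "frequency", "text": "How often do you use it?",
     "show": lambda answers: answers.get("uses_product") == "yes"},
]

def administer(survey, respond):
    answers = {}
    for q in survey:
        if q["show"] is None or q["show"](answers):
            answers[q["name"]] = respond(q["text"])
    return answers

# A respondent who answers "no" never sees the follow-up question.
answers = administer(survey, lambda text: "no")
print(answers)  # {'uses_product': 'no'}
```

Expressing the flow declaratively like this is what lets the same survey run unchanged across many simulated respondents while keeping the output rows consistent.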
