ai.robots.txt
Basic Information
This repository is a curated, community-maintained list of AI-related web crawlers, together with configuration snippets that help website operators identify and block or otherwise manage automated AI bots. It collects crawler names and metrics, documents how to implement exclusions using standard robots.txt semantics, and supplies ready-to-use configuration fragments for common web servers and proxies.

Crawler metadata is centralized in a source file named robots.json, from which derivative artifacts such as robots.txt and the server snippets are generated by a GitHub Action. The README points to additional documentation, including a table of bot metrics and a FAQ, notes that some entries were sourced from a third-party tracker, and explains the contribution workflow and testing steps so maintainers can keep the list up to date.
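For illustration, a minimal exclusion using standard robots.txt semantics might look like the fragment below. The crawler names shown (GPTBot, CCBot) are examples of the kind of agents the list covers, not the project's full generated output, which enumerates every crawler recorded in robots.json.

```
# Example robots.txt fragment blocking two AI crawlers.
# A group of User-agent lines shares the rules that follow it.
User-agent: GPTBot
User-agent: CCBot
Disallow: /
```

The generation step can be sketched as a small script. This is not the project's actual GitHub Action, and the robots.json schema assumed here (crawler user-agent strings as top-level keys mapping to metadata objects) is an assumption made for the example.

```python
import json

# Sketch of the robots.json -> robots.txt generation step.
# Assumes robots.json maps each crawler's user-agent string to a
# metadata object; only the names are needed for robots.txt itself.
with open("robots.json", encoding="utf-8") as f:
    crawlers = json.load(f)

lines = [f"User-agent: {name}" for name in crawlers]
lines.append("Disallow: /")

with open("robots.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```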