AI-Coding-Style-Guides

Basic Information

This repository is a practical collection of coding style guides and ready-to-use prompts designed to compress source code for use with large language models, SWE Agents, and vibe coding workflows. It provides general rules and language-specific guidance for reducing whitespace, shortening identifiers, and refactoring code so that more files and context fit within an LLM's context window and token costs drop. The repo includes a prompts file in TOML format, example compression pipelines, and stepwise examples in TypeScript and C++ demonstrating progressive compression stages. It targets developers, prompt engineers, and teams who want reproducible, systematic approaches to balancing compactness with maintainability when using AI-driven coding tools.
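
As a minimal illustration of the kind of transformation the guides describe (this snippet is not taken from the repo's own examples), the sketch below compresses a small TypeScript helper by minimizing whitespace and shortening local variable names while preserving the top-level identifier:

```typescript
// Before compression (readable form):
//
//   function sumOfSquares(values: number[]): number {
//     let total = 0;
//     for (const value of values) {
//       total += value * value;
//     }
//     return total;
//   }

// After minimizing whitespace and shortening local names. The top-level
// name sumOfSquares is preserved, so external call sites keep working.
function sumOfSquares(v: number[]): number { let t = 0; for (const x of v) t += x * x; return t; }

console.log(sumOfSquares([1, 2, 3])); // 14
```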

App Details

Features
- A curated prompts file (AI_Coding_Style_Guide_prompts.toml) with example usage and a code snippet to load the prompts (see the loader sketch after this list).
- A clear set of principles and basic rules: minimize spaces, shorten local variable names, preserve top-level names and comments, and use language features to reduce size.
- An explicit Levels of Compression table describing eight progressive compression stages and their trade-offs.
- Worked examples showing stepwise compression of a KMP implementation in TypeScript and a JSON parser in C++, including compressed outputs and LLM explanations.
- Guidance on when to preserve exports and comments, plus comparisons to standard minifiers.
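
The repo's own loading snippet and the exact TOML schema are not reproduced here, so the following is a hypothetical TypeScript sketch. It assumes the @iarna/toml npm package and a simple table-per-prompt layout in the file; the actual schema may differ:

```typescript
// Hypothetical loader; assumes the @iarna/toml package and that
// AI_Coding_Style_Guide_prompts.toml uses one top-level table per prompt.
import { readFileSync } from "node:fs";
import TOML from "@iarna/toml";

const raw = readFileSync("AI_Coding_Style_Guide_prompts.toml", "utf8");
const doc = TOML.parse(raw);

// List the prompt names found at the top level of the file.
for (const name of Object.keys(doc)) {
  console.log(name);
}
```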
Use Cases
The guides help engineers and prompt engineers fit more source code and context into limited LLM windows and reduce token-based costs and latency by systematically shrinking code size. They offer decision rules for the trade-off between human readability and maximal compactness, so SWE Agents can operate more efficiently while tests and LLM explanations help preserve correctness and explainability. The examples and prompts can be copied into prompt management systems, integrated into automation, and used to teach compression techniques. The material also shows that compressed code remains explorable by LLMs, enabling automated reconstruction or human-readable expansion when deeper inspection or debugging is needed.
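
One quick way to sanity-check the savings on your own code is to compare source sizes before and after compression. Character count is only a proxy for token count, but the two track closely enough for a first-order estimate; this sketch reuses the hypothetical sumOfSquares example from above:

```typescript
// Rough size comparison: characters saved by compression are a proxy
// for tokens saved, and therefore for context-window and cost savings.
const readable = `function sumOfSquares(values: number[]): number {
  let total = 0;
  for (const value of values) {
    total += value * value;
  }
  return total;
}`;
const compressed = `function sumOfSquares(v:number[]):number{let t=0;for(const x of v)t+=x*x;return t}`;

const saving = 1 - compressed.length / readable.length;
console.log(`size reduced by ${(saving * 100).toFixed(0)}%`);
```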
