Basic Information

Crush is a terminal-first AI coding assistant that brings language models, developer tools, and project context together inside your favourite terminal. It is designed to help developers query, inspect, and modify code and workflows by wiring local and remote LLM providers into session-based interactions. The project supports multiple model providers and local model endpoints, uses the Language Server Protocol (LSP) for richer project context, and can be extended with Model Context Protocol (MCP) servers over stdio, HTTP, or SSE. Crush is cross-platform with packaged installs, supports per-project and global JSON configuration files, and provides logging and permission controls so users can safely allow or restrict tool executions. The repository contains configuration examples for providers, LSPs, and MCPs, along with installation instructions for common package managers and operating systems.
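To make the configuration model concrete, a per-project JSON file wiring up an LSP server and an MCP server might look like the sketch below. The field names and values are illustrative, based on the description above, and may differ from the project's actual schema; the stdio server path is a placeholder.

```json
{
  "lsp": {
    "go": {
      "command": "gopls"
    }
  },
  "mcp": {
    "filesystem": {
      "type": "stdio",
      "command": "node",
      "args": ["/path/to/mcp-filesystem-server.js"]
    },
    "search": {
      "type": "http",
      "url": "https://example.com/mcp"
    }
  }
}
```

Per the description, a file like this can live at the project level or in a global user configuration, with project settings taking effect for that repository only.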

App Details

Features
Crush offers multi-model support, so you can choose or swap LLMs during a session while preserving context. It is session-based, allowing multiple work contexts per project, and integrates LSPs to give the agent code-aware context. The tool is extensible via MCPs over stdio, HTTP, and SSE transports, and supports local model hosts such as Ollama and LM Studio through their OpenAI-compatible APIs. Configuration can live in project files or user config and supports environment variable expansion. Crush respects .gitignore and supports a .crushignore file for finer control. It provides logging helpers, debug options, and permission controls for tool execution, plus packaged installers and binaries for many platforms.
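The local-model and environment-variable features described above might combine in a provider entry like the following. This is a hedged sketch: the provider block shape, the `$OLLAMA_HOST` expansion, and the model fields are assumptions for illustration, though the `http://localhost:11434/v1` endpoint is Ollama's standard OpenAI-compatible base URL.

```json
{
  "providers": {
    "ollama": {
      "type": "openai",
      "base_url": "$OLLAMA_HOST/v1",
      "api_key": "ollama",
      "models": [
        {
          "id": "llama3.1",
          "name": "Llama 3.1 (local)"
        }
      ]
    }
  }
}
```

With `OLLAMA_HOST=http://localhost:11434` in the environment, the expanded base URL points at the local Ollama server, so no remote provider or real API key is needed.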
Use Cases
For developers, Crush reduces friction by embedding an LLM-powered assistant directly in the terminal so you can ask about, search, and manipulate code without context switching. LSP integration and MCPs let the assistant access precise project information and external services, improving the relevance of suggestions and actions. Session persistence keeps conversational and project state separate across tasks. Configuration options and local model support allow teams to use preferred providers or run models locally. Permission controls and detailed logging help teams manage safety and auditability. Available packages and cross-platform support make it easy to adopt in different environments and workflows.
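For teams weighing the safety and auditability points above, the permission controls described might be expressed in configuration roughly as follows. The `permissions` and `options` keys and their values are hypothetical here, shown only to illustrate the allow/restrict model the description attributes to Crush.

```json
{
  "permissions": {
    "allowed_tools": ["view", "ls", "grep"]
  },
  "options": {
    "debug": true
  }
}
```

Under a policy like this, read-only tools run without prompting, while anything else (edits, shell commands) would require explicit user approval, and debug logging keeps a record for review.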