gateway

Basic Information

Adaline Gateway is a developer-focused SDK that provides a single, production-grade interface for calling 300+ large language and embedding models across many providers. It runs fully locally rather than acting as a hosted proxy, so deployments can remain private and on-premise. The project is strongly typed in TypeScript and isomorphic, so it runs in multiple JavaScript environments. It standardizes requests and responses across providers by transforming between a unified Gateway schema and each provider's native schema. The README includes quickstart examples for chat and embeddings, shows how to list supported models and schemas, and documents core types such as Config, Message, Content, Tool, and Reasoning to help developers build consistent LLM integrations.
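
To make the unified-schema idea concrete, here is a minimal chat sketch in the spirit of the README's quickstart. The package names (@adaline/gateway, @adaline/openai) match the project's published packages, but the method names (chatModel, completeChat) and object shapes below are illustrative assumptions, not confirmed API.

```typescript
// Minimal chat quickstart sketch. Assumes the @adaline/gateway and
// @adaline/openai packages and a chatModel/completeChat API surface
// like the one the README describes; exact signatures may differ.
import { Gateway } from "@adaline/gateway";
import { OpenAI } from "@adaline/openai";

const gateway = new Gateway();
const openai = new OpenAI();

// Build a provider-specific model handle; the Gateway normalizes the rest.
const model = openai.chatModel({
  modelName: "gpt-4o",
  apiKey: process.env.OPENAI_API_KEY as string,
});

// Config and Message follow the unified Gateway schema, not OpenAI's.
const config = { maxTokens: 200, temperature: 0.7 };
const messages = [
  {
    role: "system" as const,
    content: [{ modality: "text" as const, value: "You are a helpful assistant." }],
  },
  {
    role: "user" as const,
    content: [{ modality: "text" as const, value: "What is an LLM gateway?" }],
  },
];

const response = await gateway.completeChat({ model, config, messages });
console.log(response);
```

Because the config and messages use the Gateway schema rather than a provider's, the same request objects can be reused against any supported model.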

App Details

Features
The SDK ships a strongly typed TypeScript core plus optional provider packages for OpenAI, Anthropic, Google, AWS Bedrock, Groq, Together AI, OpenRouter, and custom providers. It supports chat and embedding models, streaming and non-streaming responses, tool calling, and a rich set of content modalities including text and images. Built-in infrastructure features include batching with custom queues, automatic retries with exponential backoff, pluggable caching, callbacks for instrumentation, and OpenTelemetry integration. The Gateway exposes programmatic access to model schemas and config validation, provides examples for complete and streaming chat and embedding calls, and lets users extend the system with custom plugins, HTTP clients, and provider implementations.
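
As one example of the streaming support mentioned above, the sketch below consumes a streamed chat response. A streamChat method that resolves to an async iterable of partial responses is an assumption inferred from the feature list, not confirmed API.

```typescript
// Streaming sketch. Assumes @adaline/gateway and @adaline/openai, and a
// streamChat method that yields partial responses as an async iterable;
// all names here are inferred from the feature list, not confirmed API.
import { Gateway } from "@adaline/gateway";
import { OpenAI } from "@adaline/openai";

const gateway = new Gateway();
const model = new OpenAI().chatModel({
  modelName: "gpt-4o",
  apiKey: process.env.OPENAI_API_KEY as string,
});

const stream = await gateway.streamChat({
  model,
  config: { maxTokens: 300, temperature: 0.7 },
  messages: [
    {
      role: "user" as const,
      content: [{ modality: "text" as const, value: "Stream a haiku about gateways." }],
    },
  ],
});

for await (const chunk of stream) {
  // Each partial response follows the unified Gateway schema, so the
  // consuming code stays identical no matter which provider is streaming.
  console.log(chunk);
}
```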
Use Cases
Adaline Gateway simplifies integrating multiple LLM providers by offering a consistent API and shared types, so teams do not have to write and maintain provider-specific integration code. Built-in batching and caching reduce latency and cost, while retries with backoff improve reliability in production. OpenTelemetry and callback hooks enable observability and integration with existing monitoring stacks. Strong TypeScript typing and per-provider schemas help validate configs and catch errors before runtime. Pluggable providers and extensibility let organizations run custom local models or vendor-specific endpoints while keeping a single developer experience across chat, streaming, tool calls, and embeddings. The SDK is aimed at teams building production LLM applications that need privacy, scalability, and operational controls.
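
To illustrate the "single developer experience across providers" use case, the sketch below runs the same request against OpenAI and Anthropic (both listed as supported providers above) by swapping only the model handle. It assumes @adaline/anthropic mirrors @adaline/openai's chatModel factory; names and model identifiers are illustrative.

```typescript
// Provider-swap sketch: the same Gateway call, pointed at two providers.
// Assumes @adaline/anthropic mirrors @adaline/openai's chatModel factory;
// names and model identifiers are illustrative, not confirmed API.
import { Gateway } from "@adaline/gateway";
import { OpenAI } from "@adaline/openai";
import { Anthropic } from "@adaline/anthropic";

const gateway = new Gateway();

// Messages and config are written once, in the unified Gateway schema.
const config = { maxTokens: 120, temperature: 0.2 };
const messages = [
  {
    role: "user" as const,
    content: [{ modality: "text" as const, value: "Summarize RFC 2616 in one line." }],
  },
];

const models = [
  new OpenAI().chatModel({
    modelName: "gpt-4o",
    apiKey: process.env.OPENAI_API_KEY as string,
  }),
  new Anthropic().chatModel({
    modelName: "claude-3-5-sonnet-20240620",
    apiKey: process.env.ANTHROPIC_API_KEY as string,
  }),
];

// Identical calling code for every provider; only the model handle changes.
for (const model of models) {
  const response = await gateway.completeChat({ model, config, messages });
  console.log(response);
}
```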
