lmnr

Basic Information

Laminar (lmnr) is an open-source platform for tracing and evaluating AI applications, aimed at engineering teams and developers. It provides instrumentation libraries and integrations for common AI frameworks and model-provider SDKs that automatically capture call traces, inputs and outputs, latency, cost, and token counts. The project ships TypeScript and Python client libraries and offers quickstart examples showing how to initialize tracing and annotate functions with an observe wrapper (TypeScript) or decorator (Python). Laminar can be self-hosted via Docker Compose as a lightweight stack or used through the managed platform. The backend is implemented in Rust for performance, traces are sent over gRPC, and the system includes a message queue, databases, and a frontend for dashboards.
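As a concrete illustration of the quickstart flow described above, a minimal Python sketch might look like the following. It assumes the `lmnr` package and follows the initialize-then-observe pattern from Laminar's quickstart; exact parameter names may differ across SDK versions.

```python
import os

from lmnr import Laminar, observe

# Initialize tracing once at application startup; the project API key
# authenticates against the Laminar backend (managed or self-hosted).
Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])


# The observe decorator records this function as a span: its inputs,
# outputs, and latency appear in the trace alongside any instrumented
# LLM calls made inside it.
@observe()
def answer_question(question: str) -> str:
    # ... call an LLM here; auto-instrumented SDK calls (e.g. OpenAI)
    # are captured as child spans of this function automatically ...
    return f"You asked: {question}"


if __name__ == "__main__":
    print(answer_question("What does Laminar trace?"))
```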

App Details

Features
- Automatic OpenTelemetry-based tracing for popular AI SDKs and frameworks, including LangChain, OpenAI, and Anthropic, with minimal setup.
- Capture of detailed telemetry such as inputs and outputs, latency, cost, and token usage, plus image tracing and function-level observation via an observe decorator or wrapper.
- Evals support for running evaluations in parallel through a simple SDK (see the sketch after this list).
- Ability to export production trace data as datasets and run evaluations against hosted datasets.
- Built-for-scale architecture implemented in Rust with gRPC transport.
- Operational components: RabbitMQ for messaging, Postgres for application data, ClickHouse for analytics, and a Next.js frontend providing dashboards for traces, statistics, evaluations, and labels.
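The parallel-evals item above might translate into code roughly like this minimal sketch. The `evaluate` call with `data`/`executor`/`evaluators` parameters follows Laminar's documented eval pattern as best recalled, and the dataset, executor, and evaluator here are hypothetical stand-ins rather than a verified API.

```python
from lmnr import evaluate


# Hypothetical executor: the function under test. In a real setup this
# would call a model; here it is a stub so the sketch stays self-contained.
def get_capital(data: dict) -> str:
    return {"Canada": "Ottawa", "France": "Paris"}.get(data["country"], "unknown")


# Hypothetical evaluator: scores each executor output against its target.
def correctness(output: str, target: dict) -> int:
    return int(output == target["capital"])


# evaluate() runs the executor over each datapoint (in parallel, per the
# feature list above) and reports evaluator scores to the Laminar backend;
# a project API key must be configured, e.g. via LMNR_PROJECT_API_KEY.
evaluate(
    data=[
        {"data": {"country": "Canada"}, "target": {"capital": "Ottawa"}},
        {"data": {"country": "France"}, "target": {"capital": "Paris"}},
    ],
    executor=get_capital,
    evaluators={"correctness": correctness},
)
```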
Use Cases
The repo helps development teams instrument and monitor AI-powered applications with minimal code changes, giving visibility into model calls, latency, costs, and token usage. Teams add the SDK in TypeScript or Python and annotate functions to get end-to-end traces, which supports debugging, performance tuning, and cost analysis. Exporting traces to datasets and running parallel evals makes it easier to iterate on model quality and benchmark behavior against production data. The Rust backend and gRPC transport keep overhead low in production. Self-hosting via Docker Compose or using the managed platform gives operational flexibility, and the built-in dashboards centralize observability across deployments.
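For the self-hosting path mentioned above, the typical Docker Compose flow is roughly the following. The repository URL matches the project's GitHub organization, but treat the exact commands as an approximation of the repo's README rather than verified instructions:

```sh
git clone https://github.com/lmnr-ai/lmnr
cd lmnr
docker compose up -d
```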
