lmnr-ai / lmnr

Laminar - an open-source, all-in-one platform for engineering AI products. Create a data flywheel for your AI app: traces, evals, datasets, and labels. YC S24.
https://www.lmnr.ai
Apache License 2.0

Laminar

Laminar is an all-in-one open-source platform for engineering AI products. Trace, evaluate, label, and analyze LLM data.

Documentation

Check out the full documentation at docs.lmnr.ai.

Getting started

The fastest and easiest way to get started is with our managed platform: lmnr.ai

Self-hosting with Docker Compose

For a quick start, clone the repo and start the services with docker compose:

git clone https://github.com/lmnr-ai/lmnr
cd lmnr
docker compose up -d

This spins up a lightweight version of the stack with Postgres, app-server, and frontend, which is good for a quickstart or light usage. You can access the UI at http://localhost:3000 in your browser.
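To confirm that everything came up, the standard Docker Compose checks work. The service name below is taken from the component list above; verify the actual names with docker compose ps if yours differ:

docker compose ps                  # list services and their status
docker compose logs -f app-server  # follow logs for one service, e.g. app-server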

For a production environment, we recommend using our managed platform or running docker compose -f docker-compose-full.yml up -d.

docker-compose-full.yml is heavier, but it enables all of the features.

Contributing

To run and build Laminar locally, or to learn more about the docker compose files, follow the guide in Contributing.

TypeScript quickstart

First, create a project and generate a project API key. Then,

npm add @lmnr-ai/lmnr

This installs the Laminar TypeScript SDK along with all instrumentation packages (OpenAI, Anthropic, LangChain, ...).

To start tracing LLM calls, just add:

import { Laminar } from '@lmnr-ai/lmnr';
Laminar.initialize({ projectApiKey: process.env.LMNR_PROJECT_API_KEY });

To trace the inputs and outputs of functions, use the observe wrapper.

import { OpenAI } from 'openai';
import { observe } from '@lmnr-ai/lmnr';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const poemWriter = observe({ name: 'poemWriter' }, async (topic: string) => {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: `write a poem about ${topic}` }],
  });
  return response.choices[0].message.content;
});

await poemWriter('laminar flow');

Python quickstart

First, create a project and generate a project API key. Then,

pip install --upgrade 'lmnr[all]'

This installs the Laminar Python SDK and all instrumentation packages. See the list of all available instruments in the docs.

To start tracing LLM calls, just add:

from lmnr import Laminar
Laminar.initialize(project_api_key="<LMNR_PROJECT_API_KEY>")
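
Once initialize has run, calls made through the instrumented client libraries are traced automatically; no decorator is required for the LLM call itself. A minimal sketch with OpenAI (assumes OPENAI_API_KEY is set in your environment):

import os

from lmnr import Laminar
from openai import OpenAI

Laminar.initialize(project_api_key="<LMNR_PROJECT_API_KEY>")

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# This call is picked up by the OpenAI instrumentation and recorded as a span.
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello to Laminar"}],
)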

To trace the inputs and outputs of functions, use the @observe() decorator.

import os
from openai import OpenAI

from lmnr import observe, Laminar
Laminar.initialize(project_api_key="<LMNR_PROJECT_API_KEY>")

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@observe()  # annotate all functions you want to trace
def poem_writer(topic):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": f"write a poem about {topic}"},
        ],
    )
    poem = response.choices[0].message.content
    return poem

if __name__ == "__main__":
    print(poem_writer(topic="laminar flow"))

Running the code above will produce a trace that you can view in the Laminar UI.

Client libraries

To learn more about instrumenting your code, check out our client libraries:

@lmnr-ai/lmnr on npm (TypeScript/JavaScript)
lmnr on PyPI (Python)