Langtrace is open-source observability software that lets you capture, debug, and analyze traces and metrics from any application that leverages LLM APIs, Vector Databases, and LLM-based Frameworks.
The traces generated by Langtrace adhere to the OpenTelemetry (OTEL) standard. We are developing semantic conventions for the traces generated by this project; you can check out the current definitions in this repository. Note: this is an ongoing effort, and we encourage you to get involved and welcome your feedback.
To use the managed SaaS version of Langtrace, follow the steps below:
Get started by simply adding three lines to your code!
```bash
npm i @langtrase/typescript-sdk
```

```typescript
import * as Langtrace from '@langtrase/typescript-sdk' // Must precede any LLM module imports
Langtrace.init({ api_key: '<your_api_key>' })
```
OR
```typescript
import * as Langtrace from '@langtrase/typescript-sdk' // Must precede any LLM module imports
Langtrace.init() // Reads LANGTRACE_API_KEY from the environment
```
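Once `init()` runs, calls made through supported LLM clients are traced automatically. Below is a minimal sketch assuming the official `openai` package with an `OPENAI_API_KEY` set in the environment; a dynamic `import()` is used because static ESM imports are hoisted and would otherwise load the module before `init()` can instrument it:

```typescript
import * as Langtrace from '@langtrase/typescript-sdk'
Langtrace.init({ api_key: process.env.LANGTRACE_API_KEY })

// Load the LLM client after init() so Langtrace can instrument it
// (static ESM imports are hoisted, hence the dynamic import).
const { default: OpenAI } = await import('openai')
const openai = new OpenAI() // reads OPENAI_API_KEY from the environment

const completion = await openai.chat.completions.create({
  model: 'gpt-4o-mini', // any supported chat model; this choice is illustrative
  messages: [{ role: 'user', content: 'Hello from Langtrace!' }]
})
console.log(completion.choices[0].message.content)
```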
Get started by simply adding two lines to your code and see traces being logged to the console!
```bash
npm i @langtrase/typescript-sdk
```

```typescript
import * as Langtrace from '@langtrase/typescript-sdk' // Must precede any LLM module imports
Langtrace.init({
  write_spans_to_console: true,
  api_host: '<HOSTED_URL>/api/trace'
})
```
Get started by simply adding three lines to your code and see traces being exported to your remote location!
```bash
npm i @langtrase/typescript-sdk
```

```typescript
import * as Langtrace from '@langtrase/typescript-sdk' // Must precede any LLM module imports
Langtrace.init({ custom_remote_exporter: <your_exporter>, batch: <true or false> })
```
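`custom_remote_exporter` accepts an OpenTelemetry `SpanExporter`. A minimal sketch, assuming the `@opentelemetry/exporter-trace-otlp-http` package is installed and using a hypothetical collector endpoint:

```typescript
import * as Langtrace from '@langtrase/typescript-sdk'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'

// Hypothetical OTLP collector endpoint; substitute your own.
const exporter = new OTLPTraceExporter({
  url: 'https://collector.example.com/v1/traces',
  headers: { 'x-api-key': process.env.COLLECTOR_API_KEY ?? '' }
})

// batch: true buffers spans and exports them in batches rather than one by one.
Langtrace.init({ custom_remote_exporter: exporter, batch: true })
```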
By default, all SDK errors are reported to Langtrace via Sentry. This can be disabled by setting the following environment variable to `False`, like so: `LANGTRACE_ERROR_REPORTING=False`.
`withLangTraceRootSpan` creates a parent span (named `LangtraceRootSpan`, or whatever is passed to `name`). Any calls to LLM APIs made within the given function (`fn`) are then recorded as children of this parent span. This setup is especially useful for tracking the performance or behavior of a group of operations collectively, rather than individually. See example:

```typescript
/**
* @param fn The function to be executed within the context of the root span. The function should accept the spanId and traceId as arguments
* @param name Name of the root span
* @param spanAttributes Attributes to be added to the root span
* @param spanKind The kind of span to be created
* @returns result of the function
*/
export async function withLangTraceRootSpan<T>(
fn: (spanId: string, traceId: string) => Promise<T>,
name = 'LangtraceRootSpan',
spanKind: SpanKind = SpanKind.INTERNAL
): Promise<T>;
```
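For example, this sketch (reusing the `openai` client from the quick start above) groups two dependent completions under a single root span named `qa-pipeline`:

```typescript
const answer = await Langtrace.withLangTraceRootSpan(async (spanId, traceId) => {
  // Both completions below are recorded as children of the 'qa-pipeline' span.
  const draft = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Draft a haiku about tracing.' }]
  })
  const critique = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: `Critique this haiku: ${draft.choices[0].message.content ?? ''}` }]
  })
  return critique.choices[0].message.content
}, 'qa-pipeline')
```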
```typescript
/**
*
* @param fn function to be executed within the context with the custom attributes added to the current context
* @param attributes custom attributes to be added to the current context.
 * Attributes can also be an awaited Promise<Record<string, any>>, e.g.
 * withAdditionalAttributes(() => { /* Do something */ }, await getAdditionalAttributes()),
 * assuming a function named getAdditionalAttributes is defined in your code.
* @returns result of the function
*/
export async function withAdditionalAttributes<T>(
fn: () => Promise<T>,
attributes: Record<string, any> | Promise<Record<string, any>>
): Promise<T>;
```
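A short sketch, again reusing the `openai` client from the quick start; the attribute keys are illustrative, not a fixed convention:

```typescript
// The attributes below are attached to every span created inside the
// wrapped function.
const result = await Langtrace.withAdditionalAttributes(
  async () => await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Hello!' }]
  }),
  { 'user.id': 'user-123', 'session.id': 'session-456' }
)
```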
`getPromptFromRegistry` fetches a prompt from the Langtrace prompt registry, optionally customized via `options`. See example:

```typescript
/**
* Fetches a prompt from the registry.
*
* @param promptRegistryId - The ID of the prompt registry.
* @param options - Configuration options for fetching the prompt:
 * - `prompt_version` - Fetches the prompt with the specified version. If not provided, the live prompt will be fetched. If there is no live prompt, an error will be thrown.
 * - `variables` - Replaces the variables in the prompt with the provided values. Each key of the object should be the variable name, and the corresponding value should be the value to replace.
* @returns LangtracePrompt - The fetched prompt with variables replaced as specified.
*/
export const getPromptFromRegistry = async (promptRegistryId: string, options?: { prompt_version?: number, variables?: Record<string, string> }): Promise<LangtracePrompt>
```
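A sketch with a placeholder registry ID and an assumed variable name:

```typescript
// Fetches the live prompt (or throws if none exists, since no
// prompt_version is given), substituting a value for the prompt's
// customer_name variable.
const prompt = await Langtrace.getPromptFromRegistry('<your_prompt_registry_id>', {
  variables: { customer_name: 'Ada' }
})
```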
`sendUserFeedback` sends user feedback (e.g. a rating) for an LLM interaction. The required `traceId` and `spanId` are available when the interaction is wrapped in the `withLangTraceRootSpan` function. See example:

```typescript
/**
*
 * @param userId id of the user giving feedback
 * @param userScore score of the feedback
 * @param traceId traceId of the LLM interaction. This is available when the interaction is wrapped in withLangTraceRootSpan
 * @param spanId spanId of the LLM interaction. This is available when the interaction is wrapped in withLangTraceRootSpan
*
*/
export const sendUserFeedback = async ({ userId, userScore, traceId, spanId }: EvaluationAPIData): Promise<void>
```
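A sketch combining it with `withLangTraceRootSpan`, which supplies the required IDs (the score scale here is application-defined):

```typescript
await Langtrace.withLangTraceRootSpan(async (spanId, traceId) => {
  const reply = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Summarize OpenTelemetry in one line.' }]
  })
  // Once the user rates the response, attach the feedback to this trace.
  await Langtrace.sendUserFeedback({ userId: 'user-123', userScore: 1, traceId, spanId })
  return reply
})
```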
Langtrace automatically captures traces from the following vendors:
| Vendor | Type | TypeScript SDK | Python SDK |
| --- | --- | --- | --- |
| OpenAI | LLM | :white_check_mark: | :white_check_mark: |
| Anthropic | LLM | :white_check_mark: | :white_check_mark: |
| Azure OpenAI | LLM | :white_check_mark: | :white_check_mark: |
| Cohere | LLM | :white_check_mark: | :white_check_mark: |
| Groq | LLM | :x: | :white_check_mark: |
| Perplexity | LLM | :white_check_mark: | :white_check_mark: |
| Gemini | LLM | :white_check_mark: | :white_check_mark: |
| Mistral | LLM | :white_check_mark: | :white_check_mark: |
| xAI | LLM | :white_check_mark: | :white_check_mark: |
| Langchain | Framework | :x: | :white_check_mark: |
| LlamaIndex | Framework | :white_check_mark: | :white_check_mark: |
| Langgraph | Framework | :x: | :white_check_mark: |
| AWS Bedrock | Framework | :white_check_mark: | :x: |
| DSPy | Framework | :x: | :white_check_mark: |
| CrewAI | Framework | :x: | :white_check_mark: |
| Ollama | Framework | :white_check_mark: | :white_check_mark: |
| VertexAI | Framework | :white_check_mark: | :white_check_mark: |
| VercelAI | Framework | :white_check_mark: | :x: |
| Pinecone | Vector Database | :white_check_mark: | :white_check_mark: |
| ChromaDB | Vector Database | :white_check_mark: | :white_check_mark: |
| QDrant | Vector Database | :white_check_mark: | :white_check_mark: |
| Weaviate | Vector Database | :white_check_mark: | :white_check_mark: |
| PGVector | Vector Database | :white_check_mark: | :white_check_mark: (SQLAlchemy) |
We welcome contributions to this project. To get started, fork this repository and start developing. To get involved, join our Discord workspace.
To report security vulnerabilities, email us at security@scale3labs.com. You can read more about security here.