AgentOps-AI / agentops

Python SDK for agent monitoring, LLM cost tracking, benchmarking, and more. Integrates with most LLMs and agent frameworks such as CrewAI, Langchain, and AutoGen.
https://agentops.ai
MIT License

AI agents suck. We're fixing that.

๐Ÿฆ Twitter   โ€ข   ๐Ÿ“ข Discord   โ€ข   ๐Ÿ–‡๏ธ AgentOps   โ€ข   ๐Ÿ“™ Documentation

AgentOps 🖇️


AgentOps helps developers build, evaluate, and monitor AI agents, with tools to take agents from prototype to production.

📊 Replay Analytics and Debugging: Step-by-step agent execution graphs
💸 LLM Cost Management: Track spend with LLM foundation model providers
🧪 Agent Benchmarking: Test your agents against 1,000+ evals
🔐 Compliance and Security: Detect common prompt injection and data exfiltration exploits
🤝 Framework Integrations: Native integrations with CrewAI, AutoGen, & LangChain

Quick Start ⌨️

pip install agentops

Session replays in 3 lines of code

Initialize the AgentOps client and automatically get analytics on every LLM call.

import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

...
# (optional: record specific functions)
@agentops.record_function('sample function being recorded')
def sample_function(...):
    ...

# End of program
agentops.end_session('Success')
# Woohoo You're done 🎉
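
Once `init` has run, calls made through a supported LLM client (for example, the OpenAI Python SDK) are captured without any further instrumentation. A minimal sketch, assuming `openai>=1.0` is installed and both `AGENTOPS_API_KEY` and `OPENAI_API_KEY` are set in your environment; the model and prompt are illustrative:

```python
import agentops
from openai import OpenAI

agentops.init()  # reads AGENTOPS_API_KEY from the environment

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello to AgentOps"}],
)
print(response.choices[0].message.content)  # this LLM call is recorded automatically

agentops.end_session('Success')
```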

All your sessions are available on the AgentOps dashboard. Refer to our API documentation for detailed instructions.

Agent Dashboard
Session Analytics
Session Replays

Integrations 🦾

CrewAI 🛶

Build Crew agents with observability in just two lines of code. Simply set an AGENTOPS_API_KEY in your environment, and your crews will get automatic monitoring on the AgentOps dashboard.

AgentOps is integrated with CrewAI on a pre-release fork. Install CrewAI from the fork with:

pip install git+https://github.com/AgentOps-AI/crewAI.git@main
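
As an illustration, a small crew like the one below should show up on the dashboard automatically once the fork is installed and `AGENTOPS_API_KEY` (plus your LLM provider key, e.g. `OPENAI_API_KEY`) is exported. The agent and task definitions are purely illustrative, and exact field names can vary across CrewAI versions:

```python
from crewai import Agent, Task, Crew

# Illustrative agent and task; CrewAI uses your configured LLM (OpenAI by default).
researcher = Agent(
    role="Researcher",
    goal="Summarize why agent observability matters",
    backstory="An analyst who tracks the AI agent ecosystem.",
)

task = Task(
    description="Write a three-bullet summary of why agent observability matters.",
    expected_output="Three concise bullet points.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()  # with AGENTOPS_API_KEY set, the run is monitored on AgentOps
print(result)
```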

AutoGen 🤖

With only two lines of code, add full observability and monitoring to AutoGen agents. Set an AGENTOPS_API_KEY in your environment and call agentops.init().
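
For example, the sketch below wraps a short two-agent exchange; it assumes `pyautogen` is installed and `AGENTOPS_API_KEY` / `OPENAI_API_KEY` are set in the environment, and the exact configuration keys may differ slightly between AutoGen versions:

```python
import os
import agentops
from autogen import AssistantAgent, UserProxyAgent

agentops.init()  # first AgentOps line: starts a session using AGENTOPS_API_KEY

llm_config = {
    "config_list": [{"model": "gpt-3.5-turbo", "api_key": os.environ["OPENAI_API_KEY"]}]
}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    max_consecutive_auto_reply=1,  # keep the illustrative exchange short
)

user_proxy.initiate_chat(assistant, message="Share one fun fact about observability.")

agentops.end_session("Success")  # second AgentOps line: closes the session
```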

Langchain 🦜🔗

AgentOps works seamlessly with applications built using Langchain. To use the handler, install Langchain as an optional dependency:

Installation

```shell
pip install agentops[langchain]
```

To use the handler, import and set:

```python
import os
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.langchain_callback_handler import LangchainCallbackHandler

AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']

handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])

llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
                 callbacks=[handler],
                 model='gpt-3.5-turbo')

# `tools` is your list of LangChain tools, defined elsewhere in your application
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True,
                         callbacks=[handler],  # You must pass in a callback handler to record your agent
                         handle_parsing_errors=True)
```

Check out the [Langchain Examples Notebook](./examples/langchain_examples.ipynb) for more details, including async handlers.

Cohere ⌨️

First-class support for Cohere (>=5.4.0). This is a living integration; should you need any added functionality, please message us on Discord!

Installation

```bash
pip install cohere
```

```python
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init()

co = cohere.Client()

chat = co.chat(
    message="Is it pronounced ceaux-hear or co-hehray?"
)

print(chat)

agentops.end_session('Success')
```

```python
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init()

co = cohere.Client()

stream = co.chat_stream(
    message="Write me a haiku about the synergies between Cohere and AgentOps"
)

for event in stream:
    if event.event_type == "text-generation":
        print(event.text, end='')

agentops.end_session('Success')
```

LiteLLM

AgentOps provides support for LiteLLM (>=1.3.1), allowing you to call 100+ LLMs using the same input/output format.

Installation

```bash
pip install litellm
```

```python
# Do not use LiteLLM like this
# from litellm import completion
# ...
# response = completion(model="claude-3", messages=messages)

# Use LiteLLM like this
import litellm
...
response = litellm.completion(model="claude-3", messages=messages)
# or
response = await litellm.acompletion(model="claude-3", messages=messages)
```
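
Putting it together, a hedged end-to-end sketch (the model name is illustrative, and it assumes `AGENTOPS_API_KEY` plus the matching provider key are set in the environment):

```python
import agentops
import litellm

agentops.init()

# LiteLLM returns an OpenAI-compatible response object regardless of provider.
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What does LiteLLM add on top of provider SDKs?"}],
)
print(response.choices[0].message.content)

agentops.end_session("Success")
```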

LlamaIndex 🦙

(Coming Soon)

Time travel debugging 🔮

(coming soon!)

Agent Arena 🥊

(coming soon!)

Evaluations Roadmap 🧭

| Platform | Dashboard | Evals |
| --- | --- | --- |
| ✅ Python SDK | ✅ Multi-session and Cross-session metrics | ✅ Custom eval metrics |
| 🚧 Evaluation builder API | ✅ Custom event tag tracking | 🔜 Agent scorecards |
| ✅ Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard |

Debugging Roadmap 🧭

| Performance testing | Environments | LLM Testing | Reasoning and execution testing |
| --- | --- | --- | --- |
| ✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loops and recursive thought detection |
| ✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection |
| 🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators |
| 🔜 Agent controllers/skill tests | ✅ Honeypot and prompt injection detection (PromptArmor) | 🔜 API bill tracking | 🔜 Error breakpoint analysis |
| 🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (i.e. Captchas) | 🔜 CI/CD integration checks | |
| 🔜 Regression testing | 🔜 Multi-agent framework visualization | | |

Why AgentOps? 🤔

Without the right tools, AI agents are slow, expensive, and unreliable. Our mission is to bring your agent from prototype to production. Here's why AgentOps stands out: it's designed to make agent observability, testing, and monitoring easy.

Star History

Check out our growth in the community:
