Arize-ai / phoenix

AI Observability & Evaluation
https://docs.arize.com/phoenix

[BUG] Traces not registering locally with the new instrumentation technique #4716

Open arthurbrenno opened 2 months ago

arthurbrenno commented 2 months ago

**Describe the bug** I'm using a self-hosted Phoenix version and playing around with some API requests. My traces are not being registered at localhost:6006.

**To Reproduce** Steps to reproduce the behavior:

  1. Create a `docker-compose.yml`:

```yaml
services:
  phoenix:
    image: arizephoenix/phoenix:latest  # Ensure the Phoenix version is 4.0.0 or later
    depends_on:
      - db
    ports:
      - "6006:6006"  # Phoenix UI and OTLP HTTP collector
      - "4317:4317"  # OTLP gRPC collector
      - "9090:9090"  # Optional: Prometheus metrics port, if enabled
    environment:
      - PHOENIX_SQL_DATABASE_URL=postgresql://postgres:postgres@db:5432/postgres  # PostgreSQL connection URL
      - PHOENIX_WORKING_DIR=/mnt/data  # Optional: working directory for Phoenix
    volumes:
      - phoenix_data:/mnt/data  # Persistent volume for Phoenix data

  db:
    image: postgres:14  # PostgreSQL version >= 14
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    ports:
      - "5432:5432"  # PostgreSQL port
    volumes:
      - postgres_data:/var/lib/postgresql/data  # Persistent volume for PostgreSQL database

volumes:
  phoenix_data:
    driver: local  # Persistent volume for Phoenix data
  postgres_data:
    driver: local  # Persistent volume for PostgreSQL data
```
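Before blaming the instrumentation, it can help to confirm the compose stack actually exposes the collector ports. A minimal stdlib sketch (host and ports taken from the compose file above; this only checks TCP reachability, not that Phoenix itself is healthy):

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Ports published by the compose file: UI/OTLP HTTP (6006) and OTLP gRPC (4317)
    for port in (6006, 4317):
        print(f"localhost:{port} reachable: {port_open('localhost', port)}")
```

If 6006 is unreachable, no exporter configuration will help; fix the container networking first.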

  2. Use the following versions of the dependencies:

```toml
"openinference-instrumentation-llama-index>=2",
"opentelemetry-proto>=1.12.0",
"arize-phoenix-otel>=0.5.1",
"llama-index-callbacks-arize-phoenix>=0.1.6",
```

  3. Use the following code to set up Phoenix:

```python
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from phoenix.otel import register

tracer_provider = register(
    project_name="default",
    endpoint="http://localhost:6006/v1/traces",
)

LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
```


**Screenshots**
<img width="1075" alt="image" src="https://github.com/user-attachments/assets/954ae76b-3950-4cef-8e78-1ceca2a660a2">

<img width="1164" alt="image" src="https://github.com/user-attachments/assets/3a5b68ec-3364-41a5-b277-bf9c9c467a04">

**Environment (please complete the following information):**
 - OS: macOS
 - Runtime: Python 3.10, with uv as the package manager
mikeldking commented 2 months ago

Hi @arthurbrenno sorry you are facing issues. At first glance I'm not 100% sure what's wrong. Could you try using https://github.com/Arize-ai/openinference/tree/main/python/instrumentation/openinference-instrumentation-llama-index directly?

The errors seem very odd. Will dig in a bit more with the team.

Blackskyliner commented 6 days ago

I seem to have a similar problem when trying to use a local Phoenix with LlamaIndex, spawned side-by-side directly via Python.

Phoenix is launched within the same application, which also spawns a FastAPI server. I register the instrumentor for both the main process and the FastAPI-spawned sub/mp process, without any errors.

Nothing is logged, and no call to /v1/traces ever seems to get fired/logged. I tried everything, including the fully manual OpenInference SDK approach via the LlamaIndexInstrumentor mentioned above.


main.py

```python
import llama_index.core
import nest_asyncio
import phoenix as px
import uvicorn

# Always register phoenix handler
llama_index.core.set_global_handler("arize_phoenix")

if __name__ == "__main__":
    nest_asyncio.apply()  # Fix some api internal stuff with LLM/Workflows
    px.launch_app(use_temp_dir=False)
    uvicorn.run(
        app="api:app",
        host="0.0.0.0",
        port=8000,
        reload=True,
        reload_includes=["../*.yaml"],
        loop="asyncio",  # Fix some api internal stuff with LLM/Workflows
    )
elif __name__ != "__mp_main__":
    pass
```

Side note: the `nest_asyncio` and `loop="asyncio"` settings in my application do not interfere with Phoenix; I tried without them and it also did not work.

Last known good versions (where it last worked in a current prod-like deployment): Phoenix 5.6.0, LlamaIndex 0.10.67.post1, openinference-instrumentation 0.1.15, openinference-instrumentation-llama-index 2.2.4, llama-index-callbacks-arize-phoenix 0.1.6
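In case it helps anyone bisect, the last-known-good set above can be written as exact pins (a sketch of a `pyproject.toml` dependency list; package names are assumed, e.g. Phoenix ships on PyPI as `arize-phoenix`):

```toml
# Last-known-good pins from the report above
"arize-phoenix==5.6.0",
"llama-index==0.10.67.post1",
"openinference-instrumentation==0.1.15",
"openinference-instrumentation-llama-index==2.2.4",
"llama-index-callbacks-arize-phoenix==0.1.6",
```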