Arize-ai / phoenix

AI Observability & Evaluation
https://docs.arize.com/phoenix

🗺️ OTEL Instrumentation #1926

Closed mikeldking closed 4 months ago

mikeldking commented 10 months ago

To reach more languages and to integrate better with the OpenTelemetry (OTEL) ecosystem, Phoenix will switch over to using OTEL. A rough sketch of what that could look like follows the list below.

- Spike
- Infra
- Server
- Javascript
- Migration
- Documentation
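
For illustration, here is a minimal sketch of a Python app reporting to a locally running Phoenix instance over OTEL; the OTLP/HTTP endpoint, port, and attribute name are assumptions, not a finalized API:

```python
# Sketch: point a standard OpenTelemetry tracer at a local Phoenix collector.
# The OTLP/HTTP endpoint below is an assumption for illustration only.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "my-llm-app"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("llm-call") as span:
    span.set_attribute("llm.model_name", "gpt-4")  # illustrative attribute name
```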

dosubot[bot] commented 9 months ago

🤖

Hello @mikeldking! Good to see you again. Let's dive into this issue together, shall we?

Based on the information provided, it seems that the Arize Phoenix project is currently using the OpenInference tracing format for capturing and storing LLM Application execution logs. This is evident from the code in the file src/phoenix/trace/llama_index/callback.py where the OpenInferenceTraceCallbackHandler class is defined. This class is a callback handler for storing LLM application trace data in OpenInference format.

However, the project is planning to switch over to the OTEL ecosystem. OTEL, short for OpenTelemetry, is a CNCF project providing a vendor-agnostic set of APIs, libraries, agents, and instrumentation for capturing distributed traces and metrics from your application.
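
(For illustration, a minimal sketch of what vendor-neutral instrumentation looks like from the application side; the span and attribute names below are placeholders, not part of any Phoenix API:)

```python
# Sketch: application code depends only on the vendor-neutral OTEL API.
# Which backend receives the spans (Phoenix, Jaeger, etc.) is decided by
# whoever configures the tracer provider and exporter at startup.
from opentelemetry import trace

tracer = trace.get_tracer("my_llm_app")  # placeholder instrumentation name

def answer(question: str) -> str:
    with tracer.start_as_current_span("answer") as span:
        span.set_attribute("input.value", question)   # illustrative attribute
        response = f"echo: {question}"                # stand-in for a real LLM call
        span.set_attribute("output.value", response)
        return response
```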

The migration to the OTEL ecosystem is likely to provide several benefits. For instance, it could potentially provide better support for more languages, better integration with other tools in the OTEL ecosystem, and improved performance. However, without more specific information about the Arize Phoenix project's requirements and goals, it's difficult to provide a more detailed explanation of the potential benefits.

The migration process seems to involve several steps, including setting up a monorepo for OpenInference, moving the instrumentation to the openinference repo, and migrating to the OpenInference instrumentation for OpenAI, LlamaIndex, and LangChain.
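
(A rough sketch of what the migrated auto-instrumentation might look like; the package path and instrumentor name follow the plan above but are assumptions, not a finalized API:)

```python
# Sketch: enabling OpenInference auto-instrumentation for OpenAI on top of OTEL.
# Package path and instrumentor name are assumptions based on the migration plan.
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.sdk.trace import TracerProvider

tracer_provider = TracerProvider()
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
# From here on, calls made with the openai client should emit OTEL spans
# carrying OpenInference semantic attributes.
```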

Unfortunately, I wasn't able to find more specific information about the OTEL ecosystem and how it relates to the Arize Phoenix project within the repository. If you could provide more details about the project's requirements and goals, it would be easier to provide a more accurate and detailed response.

I hope this information is helpful. If you have any further questions or need more clarification, please let me know.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

amank94 commented 8 months ago

Docs feedback:

mikeldking commented 8 months ago

python3 -m phoenix.server.main fails even when the package is installed.

AparnaDhinakaran commented 8 months ago

Is there some way to distinguish between the local Phoenix server and the hosted server?

I had Phoenix pointing to a hosted environment: [Screenshot 2024-02-05 at 5 52 02 PM]

However, it still pointed me to the local Phoenix server. The local server didn't have the traces, which makes sense since they were sent to the hosted one.

[Screenshot 2024-02-05 at 5 51 23 PM]

Traces showed up in the hosted instance: [Screenshot 2024-02-05 at 5 53 26 PM]
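
(For illustration, one way the local/hosted distinction could be made explicit is by showing where the collector endpoint is configured; the environment variable name and URLs below are assumptions, not a documented API:)

```python
# Sketch: selecting a local vs. hosted Phoenix collector at configuration time.
# PHOENIX_COLLECTOR_ENDPOINT and both URLs are illustrative assumptions.
import os

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

endpoint = os.environ.get(
    "PHOENIX_COLLECTOR_ENDPOINT",        # hosted collector, if configured
    "http://localhost:6006/v1/traces",   # otherwise the local Phoenix server
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint=endpoint)))
trace.set_tracer_provider(provider)
print(f"Sending traces to: {endpoint}")  # makes the target explicit at startup
```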

fjcasti1 commented 8 months ago

For spinning up the LLM app plus tracing with Phoenix, I think the amount of information is great, but I would suggest reorganizing it. I would first show how to use the Docker deployment. Once the reader sees the value, I would then add the section on deploying for development, where they deploy the backend, frontend, and Phoenix separately. I was missing how to deploy Phoenix in that case. [Screenshot 2024-02-05 at 3 02 05 PM]

fjcasti1 commented 8 months ago

Outputs of agents do not appear. [Screenshot 2024-02-05 at 3 08 32 PM] [Screenshot 2024-02-05 at 3 08 37 PM]

fjcasti1 commented 8 months ago

We should include in the documentation how to extract traces from Phoenix, regardless of how it is deployed. [Screenshot 2024-02-05 at 3 17 43 PM]
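
(As a starting point, something like the sketch below could go in the docs; it assumes a client-style API such as px.Client with get_spans_dataframe, and the endpoint is illustrative:)

```python
# Sketch: pulling traces out of a running Phoenix instance as a pandas DataFrame.
# px.Client and get_spans_dataframe are assumed here; the endpoint is illustrative.
import phoenix as px

client = px.Client(endpoint="http://localhost:6006")  # or a hosted Phoenix URL
spans_df = client.get_spans_dataframe()               # one row per recorded span
print(spans_df.head())
```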

axiomofjoy commented 8 months ago

mikeldking commented 8 months ago

Outputs of agents do not appear. [Screenshot 2024-02-05 at 3 08 32 PM] [Screenshot 2024-02-05 at 3 08 37 PM]

Good catch. This is tracked via https://github.com/Arize-ai/openinference/issues/154

mikeldking commented 8 months ago

Is there some way to distinguish between the local Phoenix server and the hosted server?

I had Phoenix pointing to a hosted environment: [Screenshot 2024-02-05 at 5 52 02 PM]

However, it still pointed me to the local Phoenix server. The local server didn't have the traces, which makes sense since they were sent to the hosted one.

[Screenshot 2024-02-05 at 5 51 23 PM]

Traces showed up in the hosted instance: [Screenshot 2024-02-05 at 5 53 26 PM]

Makes sense that this is confusing. Will file a ticket to make this clearer.