Closed: mikeldking closed this issue 4 months ago.
🤖
Hello @mikeldking! Good to see you again. Let's dive into this issue together, shall we?
Based on the information provided, it seems that the Arize Phoenix project is currently using the OpenInference tracing format for capturing and storing LLM application execution logs. This is evident from the code in the file src/phoenix/trace/llama_index/callback.py, where the OpenInferenceTraceCallbackHandler class is defined. This class is a callback handler for storing LLM application trace data in OpenInference format.
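For readers landing on this issue, wiring that handler into a LlamaIndex app looked roughly like the sketch below. This is a minimal, unverified example: the pre-0.10 llama_index CallbackManager/ServiceContext API and the phoenix.trace.llama_index import path are assumptions based on the file mentioned above.

```python
import phoenix as px
from phoenix.trace.llama_index import OpenInferenceTraceCallbackHandler
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.callbacks import CallbackManager

# Start the local Phoenix app so the handler has somewhere to send traces.
px.launch_app()

# Register the OpenInference handler so every query emits OpenInference spans.
callback_manager = CallbackManager(handlers=[OpenInferenceTraceCallbackHandler()])
service_context = ServiceContext.from_defaults(callback_manager=callback_manager)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
print(index.as_query_engine().query("What does this document cover?"))
```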
However, the project is planning to switch over to the OTEL ecosystem. OTEL, short for OpenTelemetry, is a set of APIs, libraries, agents, and instrumentation for capturing distributed traces and metrics from your application. It is a CNCF project and is designed to be vendor-agnostic.
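To make the contrast concrete, a typical OpenTelemetry setup in Python looks roughly like the following. It uses only the standard opentelemetry-sdk and OTLP exporter packages; the collector endpoint shown for Phoenix is an assumption.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Export spans over OTLP/HTTP; swap the endpoint for wherever your collector lives.
provider = TracerProvider(resource=Resource.create({"service.name": "llm-app"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("llm.completion") as span:
    span.set_attribute("llm.model_name", "gpt-4")  # attribute name is illustrative
```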
The migration to the OTEL ecosystem is likely to provide several benefits. For instance, it could potentially provide better support for more languages, better integration with other tools in the OTEL ecosystem, and improved performance. However, without more specific information about the Arize Phoenix project's requirements and goals, it's difficult to provide a more detailed explanation of the potential benefits.
The migration process seems to involve several steps, including setting up a monorepo for OpenInference, moving the instrumentation into the openinference repo, and migrating to the OpenInference instrumentation for OpenAI, LlamaIndex, and LangChain.
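As an illustration of what the post-migration instrumentation could look like, the openinference packages follow the standard OTEL instrumentor pattern. The package and class names below (e.g. openinference-instrumentation-openai, OpenAIInstrumentor) are assumptions based on the migration plan and should be checked against the openinference repo.

```python
# pip install openinference-instrumentation-openai opentelemetry-sdk opentelemetry-exporter-otlp
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Same OTLP setup as the previous sketch, pointed at a Phoenix collector.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
)
trace.set_tracer_provider(provider)

# Patch the openai client so each chat/completion call emits OpenInference spans.
OpenAIInstrumentor().instrument(tracer_provider=provider)
```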
Unfortunately, I wasn't able to find more specific information about the OTEL ecosystem and how it relates to the Arize Phoenix project within the repository. If you could provide more details about the project's requirements and goals, it would be easier to provide a more accurate and detailed response.
I hope this information is helpful. If you have any further questions or need more clarification, please let me know.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Docs feedback:
python3 -m phoenix.server.main fails even when Phoenix is installed.
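A possible workaround is launching Phoenix from Python instead of the module entry point; the sketch below assumes the px.launch_app() API and that the returned session exposes the UI URL.

```python
import phoenix as px

# Launch the Phoenix UI and collector in-process (defaults to http://localhost:6006).
session = px.launch_app()
print(session.url)  # open this URL in a browser to view incoming traces
```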
Is there some way to distinguish between the local Phoenix server and the hosted server?
I had Phoenix pointing to a hosted environment, but it still directed me to the local Phoenix server. The local server didn't have the traces, which makes sense, since they were in the hosted one. The traces did show up in the hosted instance.
For spinning up the LLM app plus tracing with Phoenix, I think the amount of information is great, but I would suggest reorganizing it. I would first show how to use the Docker deployment; once the reader sees the value, I would then add the development-deployment section where they deploy the BE, FE, and Phoenix. In that case I was missing how to deploy Phoenix.
Outputs of agents do not appear
We should include in the documentation how to extract traces from Phoenix, regardless of how it is deployed
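As a sketch of what such documentation could show, Phoenix exposes a client that can pull spans into a pandas dataframe. The Client constructor's endpoint argument and get_spans_dataframe are assumptions that should be checked against the current API.

```python
import phoenix as px

# Connect to a running Phoenix instance (local or hosted) and export its spans.
client = px.Client(endpoint="http://localhost:6006")  # use the hosted URL if applicable
spans_df = client.get_spans_dataframe()
spans_df.to_parquet("phoenix_spans.parquet")  # persist traces outside of Phoenix
```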
Make the openinference repo more prominent.

Outputs of agents do not appear
Good catch. This is tracked via https://github.com/Arize-ai/openinference/issues/154
Is there some way to distinguish between the local Phoenix server and the hosted server?
I had Phoenix pointing to a hosted environment, but it still directed me to the local Phoenix server. The local server didn't have the traces, which makes sense, since they were in the hosted one. The traces did show up in the hosted instance.
Makes sense that this is confusing. Will file a ticket to make this more clear.
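In the meantime, one way to make the target explicit is to configure the collector endpoint up front rather than relying on the default. A minimal sketch, assuming Phoenix reads a PHOENIX_COLLECTOR_ENDPOINT environment variable (the variable name should be confirmed against the docs):

```python
import os

# Point Phoenix tracing at the hosted instance instead of the local default.
# PHOENIX_COLLECTOR_ENDPOINT is an assumption; confirm the variable name in the docs.
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://my-hosted-phoenix.example.com"
```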
In order to reach more languages and to play better with the OTEL ecosystem, Phoenix will switch over to using OTEL.
Spike
Infra
Python
Server
Javascript
Migration
Documentation