Arize-ai / phoenix

AI Observability & Evaluation
https://docs.arize.com/phoenix

[ENHANCEMENT] Support for Multi-Modal RAG Tracing and Evaluation #2895

Open hakantekgul opened 5 months ago

hakantekgul commented 5 months ago

It would be great if Phoenix supported multi-modal applications, such as multi-modal RAG. There are a couple of features to consider in order to fully support this:

MM RAG Tracing:

[screenshot: proposed multi-modal RAG tracing view]

MM RAG Evaluation:

[screenshot: proposed multi-modal RAG evaluation view]

axiomofjoy commented 5 months ago

Thanks @hakantekgul

mikeldking commented 5 months ago

Hey @hakantekgul, appreciate the feedback. We probably have too much on our plate to tackle this at the moment, but keep us honest on its priority.

stefanadelbert commented 2 months ago

Adding support for working with multimodal models would be a huge step forward, e.g. being able to visualize images in prompt inputs (base64-encoded PNG or JPEG) in the Phoenix UI Trace Details, rather than seeing the raw base64-encoded text shown in this screenshot.

[screenshot: base64-encoded image string rendered as plain text in Trace Details]
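For context, a minimal sketch of why this happens: in OpenAI-style multimodal chat payloads, an image is typically embedded as a base64 data URI inside the message content, so a tracer that records the payload verbatim surfaces the raw base64 string instead of a rendered image. The tiny 1x1 PNG below is a stand-in; a real application would read image bytes from disk or an upload.

```python
import base64

# Stand-in image: a well-known 1x1 transparent PNG. In practice you would
# read real bytes, e.g. open("chart.png", "rb").read().
png_bytes = base64.b64decode(
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR4nGNgYGBgAAAABQAB"
    "h6FO1AAAAABJRU5ErkJggg=="
)

# OpenAI-style multimodal chat message: the image travels as a base64 data URI.
# A tracer that records this content verbatim shows the reader the base64 text,
# which is what the screenshot above illustrates.
b64 = base64.b64encode(png_bytes).decode("ascii")
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
    ],
}

print(message["content"][1]["image_url"]["url"][:22])  # the data-URI prefix
```

A UI that detects the `data:image/...;base64,` prefix in a traced attribute could decode and render the image inline instead of printing the string.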

mikeldking commented 2 months ago

> Adding support for working with multimodal models would be a huge step forward, e.g. being able to visualize images in prompt inputs (base64-encoded PNG or JPEG) in the Phoenix UI Trace Details, rather than seeing the raw base64-encoded text shown in this screenshot.

Soon :)