suprjinx opened this issue 4 months ago
Presumably we could use artifact support for this?
In general, Aim's image explorer is similar to the metrics explorer. Aim's `/runs/search/images` endpoint returns a streamed response of "traces", one per image sequence. The request accepts various settings for density, index range, etc., which the UI exposes. Each image is returned as a path (S3 or, say, a network volume path), along with metadata such as `caption`, `height`, and `width`.
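To make the shape of that response concrete, here is a simplified sketch of the fields a single image trace might carry. Field names like `blob_uri` are illustrative assumptions; Aim's actual stream uses its own encoding, so treat this as a model of the information, not the wire format.

```python
# Hypothetical model of one image "trace" from /runs/search/images.
# Field names are illustrative, not Aim's actual wire format.
trace = {
    "run_id": "8f14e45f",          # run the sequence belongs to
    "name": "generated_samples",   # image sequence name
    "context": {"subset": "val"},  # sequence context
    "values": [
        {
            "step": 10,            # step selected by density/index-range settings
            "index": 0,            # index within the step's image set
            "blob_uri": "s3://bucket/artifacts/run/images/step10_0.png",
            "caption": "sample 0",
            "width": 256,
            "height": 256,
            "format": "png",
        }
    ],
}

def image_paths(trace):
    """Collect the storage paths referenced by a trace."""
    return [v["blob_uri"] for v in trace["values"]]
```

The key point for FasttrackML is that the image bytes are never inlined: the trace only carries a path plus display metadata, which is what the new logging API needs to persist.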
To support this, we will need an MLflow API logging capability for storing the path and metadata. The image itself can be stored as an artifact, directly to S3 or local storage, using the existing client methods -- so this is likely a two-phase save, where we first store the artifact and then log it as an image. The two steps could be consolidated into one function in the FasttrackML client.
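A minimal sketch of that two-phase save, using in-memory stand-ins rather than the real MLflow/FasttrackML clients (the class and function names here are hypothetical, invented for illustration):

```python
class ArtifactStore:
    """Stand-in for S3/local artifact storage (hypothetical, in-memory)."""
    def __init__(self):
        self._blobs = {}

    def save(self, run_id, rel_path, data):
        uri = f"mem://{run_id}/{rel_path}"
        self._blobs[uri] = data
        return uri

class ImageLog:
    """Stand-in for the image-metadata logging API we would add."""
    def __init__(self):
        self.records = []

    def log_image(self, run_id, path, **meta):
        self.records.append({"run_id": run_id, "path": path, **meta})

def log_image_with_metadata(store, log, run_id, rel_path, data, **meta):
    """Hypothetical consolidated FasttrackML client call:
    phase 1 stores the bytes as an artifact,
    phase 2 logs the resulting path plus metadata."""
    uri = store.save(run_id, rel_path, data)   # phase 1: artifact upload
    log.log_image(run_id, uri, **meta)         # phase 2: metadata record
    return uri
```

The consolidation matters because callers should never see a state where the artifact exists but no image record points at it, or vice versa.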
Test coverage in the Aim Python tests will give us a good starting point for compatibility with the UI: just add `tests/api/test_run_images_api.py` to `tests/integration/python/config.json`. Our patch will need to provide a new implementation of the fixture method `generate_image_set`.
As far as "text support" goes, Aim does not seem to have explorer-level support for text artifacts.
@suprjinx can we turn this into a list of tickets in the public repo? I'm happy to work with you on it next week.
@dave-gantenbein added this epic and some children https://github.com/G-Research/fasttrackml/issues/1230
Suggested on the MLOps Slack channel.