ai-cfia / nachet-frontend

Frontend application for seed classification of images acquired from digital microscopes
MIT License

Enhancement: Streamline Model Switching Process in Image Classification #96

Closed. CFIALeronB closed this issue 1 month ago.

CFIALeronB commented 3 months ago

Current Behaviour

In the Nachet Frontend, the process for switching between image classification models involves navigating to the results configuration settings. Users select their preferred model there, which updates the 'RESULTS' label and sets a global environment variable for the model choice. However, applying the selected model for classification requires users to click the 'Results Tuner' button, initiating an inference request to the Nachet backend with the model as a parameter.
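For illustration, the current request looks roughly like the sketch below. The endpoint path, parameter names, and payload shape here are placeholders, not the actual backend contract:

```ts
// Rough sketch only: the endpoint path, parameter names, and payload shape
// are placeholders, not the actual Nachet backend contract.
async function requestInference(
  backendUrl: string,
  imageBase64: string,
  modelName: string,
) {
  const response = await fetch(`${backendUrl}/inference`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The selected model is passed along as a parameter of the request.
    body: JSON.stringify({ image: imageBase64, model: modelName }),
  });
  if (!response.ok) {
    throw new Error(`Inference request failed with status ${response.status}`);
  }
  return response.json();
}
```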

Proposed Enhancement

To enhance user experience and streamline model switching, we propose the following modifications:

Expected Benefits

Implementation Considerations

Test Cases Checklist

This enhancement is aimed at resolving existing inconveniences and refining the model selection and classification workflow for an improved user experience.

rngadam commented 3 months ago

Where does this feature come from? Is it related to feedback from the representative user?

I'm somewhat skeptical of the workflow as described here.

There doesn't seem to be much in terms of identifying the overall UX before launching into the UI details.

MaxenceGui commented 3 months ago

It comes from our meeting with Taran (William and I attended). She mentioned that she would like to have the model button next to the classify button. So yesterday I told Leron about it. Since it's related to model switching, I assigned it to M1 (even if that could be debatable).

CFIALeronB commented 3 months ago

> Where does this feature come from? Is it related to feedback from the representative user?
>
> I'm somewhat skeptical of the workflow as described here.
>
> There doesn't seem to be much in terms of identifying the overall UX before launching into the UI details.

@rngadam

This design is more intuitive: the results label no longer updates with the selected model name before any classification has actually run. Instead, results are populated with the model label only after a successful classification, i.e. a 200 status code response from the backend. This makes choosing which model to classify images with fast and efficient, and the user's selection persists, so when they click CLASSIFY again their previous choice still appears.

Screenshots below:

*(Screenshots 1–4 attached to the issue.)*

rngadam commented 3 months ago

OK, if it comes from feedback from Taran then that is good!

Follow-up question for the backend: how are we persisting the results of inference from multiple pipelines? I'm guessing we don't want to rerun inference every time we rerun against a specific pipeline version, so it should be cached to storage.

MaxenceGui commented 3 months ago

I'm not exactly sure how it works on the frontend, but there is already caching functionality that keeps track of the results:

```js
scores: [],
classifications: [],
boxes: [],
annotated: false,
imageDims: [],
overlapping: [],
overlappingIndices: [],
```

As of now, a new inference overwrites what is in there. Maybe we could add another key, per model, that would contain the scores, classifications, boxes, and everything else related?

```js
model: {
  scores: [],
  ...
}
```

See the `loadResultToCache` function in the frontend; a rough sketch of the idea follows.
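The type names and helper signatures below are illustrative assumptions, not the existing frontend code:

```ts
// Illustrative sketch: field types are inferred from the cache fields listed
// above; the real frontend types may differ.
interface ModelResult {
  scores: number[];
  classifications: string[];
  boxes: number[][];
  annotated: boolean;
  imageDims: number[];
  overlapping: boolean[];
  overlappingIndices: number[];
}

// Keyed by model name, so results from one model no longer overwrite
// results produced by another.
type ResultCache = Record<string, ModelResult>;

const resultCache: ResultCache = {};

// Hypothetical helpers mirroring loadResultToCache, but storing per model.
function storeResultForModel(model: string, result: ModelResult): void {
  resultCache[model] = result;
}

function getCachedResultForModel(model: string): ModelResult | undefined {
  return resultCache[model];
}
```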

For the backend

To implement the same idea in the backend, we would need to keep track of the image passed in the inference request, pair it with the pipeline used, and keep the result when it is returned. Then, if the same image is sent again with the same pipeline, we send back the result from the cache (see the flow below and the sketch that follows it).

```mermaid
---
title: Test
---
flowchart TD

image[Send image in inference request]
check(backend checks if the image already exists <br> in cache and if the pipeline name is the same)
inference[image not in cache <br> or pipeline not called yet]
cache[image in cache <br> and pipeline already called]
inf_res[send back result from inference]
cache_res[send back result stored in cache]

image -- from frontend --> check
check --> inference & cache
inference --> inf_res
cache --> cache_res
```
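For what it's worth, the check in the diagram could look roughly like this. It is written in TypeScript only to stay consistent with the snippets above; the actual backend language, the hashing choice, and the result shape are assumptions:

```ts
import { createHash } from "node:crypto";

// Placeholder result type and inference call; the real pipeline output differs.
type InferenceResult = { scores: number[]; classifications: string[]; boxes: number[][] };

async function runPipeline(_image: Buffer, _pipeline: string): Promise<InferenceResult> {
  // Stand-in for the actual call to the selected pipeline.
  return { scores: [], classifications: [], boxes: [] };
}

// Cache keyed on (image content, pipeline name): the same image sent again
// with the same pipeline is answered from the cache instead of re-running inference.
const backendCache = new Map<string, InferenceResult>();

async function inferWithCache(image: Buffer, pipeline: string): Promise<InferenceResult> {
  const imageHash = createHash("sha256").update(image).digest("hex");
  const key = `${imageHash}:${pipeline}`;

  const cached = backendCache.get(key);
  if (cached !== undefined) {
    return cached; // image in cache and pipeline already called
  }

  const result = await runPipeline(image, pipeline); // image not in cache or pipeline not called yet
  backendCache.set(key, result);
  return result;
}
```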
rngadam commented 3 months ago

1. @CFIALeronB: ensure that, for now, the caching works across multiple models so we don't re-trigger inference.
2. @MaxenceGui: create an issue in nachet-backend to move the inference caching from the frontend to the backend (where it ought to be).

I also see that the code you pointed to in the frontend contains a number of formatting directives that should probably inform how we return results to the frontend.
