Open georgeblu1 opened 3 months ago
I believe this would be a really useful feature. At test time there is already an option to compute a confusion matrix.
Update: I am currently extracting the precision and recall from CocoMetric's cocoeval and logging them into MLflow through the usual vis backend, roughly as in the sketch below.
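This is only a rough sketch of what I mean, not MMDetection's own API: `coco_eval` stands for the pycocotools `COCOeval` instance that CocoMetric has already evaluated, and the metric names are my own choice.

```python
from mmengine.visualization import Visualizer
from pycocotools.cocoeval import COCOeval
import mlflow


def log_coco_stats(coco_eval: COCOeval) -> None:
    """Push selected COCO precision/recall numbers to MLflow.

    `coco_eval` is assumed to have already run evaluate(), accumulate()
    and summarize(), so `stats` holds the 12 standard COCO summary values.
    """
    stats = coco_eval.stats
    metrics = {
        'coco/AP': float(stats[0]),      # AP @ IoU=0.50:0.95
        'coco/AP50': float(stats[1]),
        'coco/AP75': float(stats[2]),
        'coco/AR_100': float(stats[8]),  # AR with up to 100 detections per image
    }

    # Option 1: log straight to the active MLflow run.
    mlflow.log_metrics(metrics)

    # Option 2: route through the visualizer so every configured vis backend
    # (including MLflowVisBackend) receives the same values.
    Visualizer.get_current_instance().add_scalars(metrics)
```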
However, this doesn't work for runner.test(): when running runner.test(), none of the CocoMetric metrics are actually logged into MLflow. I believe that is a separate issue, related to https://github.com/open-mmlab/mmengine/issues/1482.
I am using the MLflow vis backend to log my model's COCO metrics into MLflow, but CocoMetric does not include a confusion matrix in its results, which is what I would need to derive the average precision and average recall per class.
How do I log the result from tools/analysis_tools/confusion_matrix.py into MLflow?
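What I currently have in mind is something like the sketch below (assuming the script writes `confusion_matrix.png` into the directory passed as `--save-dir`; the paths and run id are placeholders), but I am not sure whether this is the intended way:

```python
import os
import mlflow

# Assumed: tools/analysis_tools/confusion_matrix.py has already been run and
# saved confusion_matrix.png into the --save-dir directory.
save_dir = 'work_dirs/confusion_matrix'          # placeholder path
cm_png = os.path.join(save_dir, 'confusion_matrix.png')

# Attach the figure to the existing training run as an artifact.
with mlflow.start_run(run_id='<training-run-id>'):  # placeholder run id
    mlflow.log_artifact(cm_png, artifact_path='confusion_matrix')
```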