AyushExel opened 3 years ago
@ppwwyyxx what do you think about this? Would this be a useful add-on?
Would be useful for me! I ended up adding it myself to debug some models.
Hi @nbardy, would you mind sharing the code for how you added this?
@BenSpex I'm unfortunately busy focusing on synthesis at the moment, but I'm looking at a detectron2 integration next quarter. I'm happy to upstream the logging once I iron it out.
Their Fully Connected forums often have examples of community integrations.
@nbardy awesome, thanks for coming back. Let me know when you need someone to test the integration.
@BenSpex responded to you in the W&B forum :)
Hi all, the visualizations look pretty awesome and would be a great addition to detectron2! However, we're not familiar with W&B and are uncertain how much work is needed to support them. If you'd like to contribute this to detectron2, could you provide an initial design (or even code, if available) of what changes would need to be made? This will give us a better idea of how to integrate.
@ppwwyyxx thanks. What would be the best way to share the code? Should I open a WIP draft PR to facilitate easier discussion? I'm happy to do it via another medium if you prefer.
Regarding changes - I think the only thing needed to log these visualizations efficiently is for some predictions to be stored in event storage after the EvalHook is called.
In my WIP, I didn't change anything in the existing detectron2 codebase; I just created a new temporary EvalHookv2
to manually infer and save predictions in event storage. This adds overhead, so it would be nice to have a subset of the predictions stored as they are calculated during evaluation. What do you think? Happy to discuss in a dedicated thread.
Yeah a draft PR could be a good starting point.
If what is needed is just access to the predictions, it seems a better approach is to implement a new evaluator (subclass of DatasetEvaluator, https://detectron2.readthedocs.io/en/latest/modules/evaluation.html#detectron2.evaluation.DatasetEvaluator) that will work with the existing evaluation logic. All evaluators have access to the inputs & predictions during inference, so there is no need to store them anywhere or recompute them. This is also how we do some visualizations internally.
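For illustration, a minimal sketch of such an evaluator could look like the following. It assumes the documented DatasetEvaluator interface and W&B's bounding-box image logging format, and that `wandb.init()` has already been called; the class-label lookup and the cap on logged images are illustrative choices, not part of any existing integration.

```python
import wandb
from detectron2.data import MetadataCatalog
from detectron2.data.detection_utils import read_image
from detectron2.evaluation import DatasetEvaluator


class WandbVisEvaluator(DatasetEvaluator):
    """Sketch: forward predictions seen during evaluation to a W&B media panel."""

    def __init__(self, dataset_name, max_images=32):
        # Map contiguous class ids to readable names for the media panel.
        self._class_labels = dict(
            enumerate(MetadataCatalog.get(dataset_name).thing_classes)
        )
        self._max_images = max_images
        self._images = []

    def reset(self):
        self._images = []

    def process(self, inputs, outputs):
        # Called by the existing evaluation loop, so predictions come for free.
        for inp, out in zip(inputs, outputs):
            if len(self._images) >= self._max_images:
                return
            instances = out["instances"].to("cpu")
            box_data = [
                {
                    "position": {
                        "minX": float(x0), "minY": float(y0),
                        "maxX": float(x1), "maxY": float(y1),
                    },
                    "domain": "pixel",
                    "class_id": int(cls),
                    "scores": {"confidence": float(score)},
                }
                for (x0, y0, x1, y1), cls, score in zip(
                    instances.pred_boxes.tensor.numpy(),
                    instances.pred_classes.numpy(),
                    instances.scores.numpy(),
                )
            ]
            image = read_image(inp["file_name"], format="RGB")
            self._images.append(
                wandb.Image(
                    image,
                    boxes={
                        "predictions": {
                            "box_data": box_data,
                            "class_labels": self._class_labels,
                        }
                    },
                )
            )

    def evaluate(self):
        # Log the collected images; return an empty metrics dict so this
        # evaluator composes with the usual DatasetEvaluators list.
        wandb.log({"eval/predictions": self._images})
        return {}
```

Something like this could then sit alongside the existing evaluators, e.g. `DatasetEvaluators([COCOEvaluator(dataset_name, output_dir=...), WandbVisEvaluator(dataset_name)])`.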
@ppwwyyxx I've made a draft PR with a design proposal. Sorry for the delay, I was waiting on some UI changes.
🚀 Feature
Allow visualization of training progress using media panels and tables.
Motivation & Examples
This is based on this issue. I'm an engineer at W&B and I've been working with object detection tasks. I regularly use some of the following visualizations on W&B dashboards. Would the following features be useful to other detection users? I'd like to know what the maintainers think.
Bounding box & segmentation map debugger
W&B supports interactive media panels, where you can track how training progresses by adjusting the steps, confidence scores, and classes of predictions in real-time.
Try it live here
The Media panel also supports debugging segmentation maps.
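For reference, a minimal sketch of logging a segmentation map to a media panel; the project name, image, mask, and label names below are placeholder data rather than real model output.

```python
import numpy as np
import wandb

wandb.init(project="detectron2-demo")  # project name is a placeholder

# Placeholder image and per-pixel class ids standing in for predictions.
image = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
pred_mask = np.random.randint(0, 3, (256, 256), dtype=np.uint8)

wandb.log({
    "segmentation": wandb.Image(
        image,
        masks={
            "predictions": {
                "mask_data": pred_mask,
                "class_labels": {0: "background", 1: "person", 2: "car"},
            }
        },
    )
})
```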
Dataset visualization and versioning
With W&B tables, you can visualize, query, and filter your datasets in your browser.
Quickly compare results across different training epochs, datasets, hyperparameter choices, model architectures, etc. For example, take a look at this comparison of two models on the same test images:
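A minimal sketch of building such a table; the columns and rows are placeholder values, which in practice would come from the evaluation loop.

```python
import wandb

run = wandb.init(project="detectron2-demo")  # project name is a placeholder

table = wandb.Table(columns=["file_name", "num_predictions", "mean_confidence"])
# Toy rows standing in for per-image evaluation results.
for file_name, num_preds, mean_conf in [
    ("img_0001.jpg", 3, 0.87),
    ("img_0002.jpg", 5, 0.74),
]:
    table.add_data(file_name, num_preds, mean_conf)

run.log({"eval_table": table})
```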
Model versioning and DAGs
We can have the user set a `model logging period`. Based on that, we'll log models after every `model logging period`, with alias `best` if the current model performs best on the `desired metric`.
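A rough sketch of that scheme, assuming W&B Artifacts with aliases; the logging period, metric, and checkpoint path are illustrative placeholders, not settled design choices.

```python
import random
from pathlib import Path

import wandb

run = wandb.init(project="detectron2-demo")  # project name is a placeholder

log_period = 5                 # user-chosen "model logging period" (in epochs)
best_metric = float("-inf")    # best value of the desired metric seen so far


def train_one_epoch():
    # Stand-in for a training epoch; returns the desired metric (e.g. validation mAP).
    return random.random()


for epoch in range(20):
    current_metric = train_one_epoch()
    if (epoch + 1) % log_period != 0:
        continue

    Path("model_checkpoint.pth").write_bytes(b"")  # placeholder checkpoint file
    artifact = wandb.Artifact("detectron2-model", type="model")
    artifact.add_file("model_checkpoint.pth")

    aliases = ["latest"]
    if current_metric > best_metric:
        best_metric = current_metric
        aliases.append("best")  # tag the best-so-far model with the `best` alias
    run.log_artifact(artifact, aliases=aliases)
```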