hotosm / fAIr

AI Assisted Mapping

Increase transparency through explainability analysis of ML models #123

Open pomodoren opened 1 year ago

pomodoren commented 1 year ago

Is your feature request related to a problem? Please describe. It would be valuable to add a layer of understanding of how the models make decisions, and to trace the classification decision through the inner layers of the deep learning models (such as U-Net or ResNet). This would increase transparency and understanding of how the models behave. Such explainability analysis is more common in medical use cases, but it should be transferable to fAIr models. Without it, a gap can open between contributors and model creators, turning them into "magic" models.
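For illustration, one way to follow a decision into an inner layer is a Grad-CAM-style pass over an intermediate convolutional layer. This is only a rough sketch assuming a trained Keras U-Net; the model path, layer name, and file names are placeholders, not fAIr's actual pipeline:

```python
import numpy as np
import tensorflow as tf

# Hypothetical: a trained U-Net; path and layer name are illustrative only.
model = tf.keras.models.load_model("unet_buildings.h5")
target_layer = "bottleneck_conv"  # an inner layer whose features we want to inspect

# Model exposing both the chosen inner activations and the final mask.
grad_model = tf.keras.Model(
    inputs=model.input,
    outputs=[model.get_layer(target_layer).output, model.output],
)

def grad_cam(chip):
    """Coarse heatmap of which inner-layer features drive the building score."""
    x = tf.convert_to_tensor(chip[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        activations, mask = grad_model(x)
        score = tf.reduce_mean(mask)  # scalar "building-ness" of the chip
    grads = tape.gradient(score, activations)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # per-channel importance
    cam = tf.nn.relu(tf.reduce_sum(activations * weights[:, None, None, :], axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)  # normalise to [0, 1]
    return cam.numpy()[0]

chip = np.load("sample_chip.npy")  # illustrative (H, W, C) image chip
heatmap = grad_cam(chip)
print(heatmap.shape)
```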

Describe the solution you'd like Alongside each model, provide a downloadable report on the segmentation results. In practice, the report would show, for example, that a model performs well on buildings with a specific property (say, a round shape) and less well on other buildings. This can be done with Shapley values and similar methods.
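A rough sketch of how per-chip attributions could be computed with the shap library; the model path, file names, and the mean-probability scalar head are assumptions for illustration, not fAIr's actual pipeline:

```python
import numpy as np
import shap
import tensorflow as tf

# Hypothetical: a trained U-Net-style segmentation model (path is illustrative).
model = tf.keras.models.load_model("unet_buildings.h5")

# SHAP handles scalar outputs more readily than full masks, so wrap the model
# so its output is the mean predicted building probability per image chip.
pooled = tf.keras.layers.Lambda(lambda m: tf.reduce_mean(m, axis=[1, 2, 3]))(model.output)
scalar_head = tf.keras.Model(inputs=model.input, outputs=pooled)

# Small background sample of training chips and the chips to explain,
# both arrays of shape (N, H, W, C); file names are illustrative.
background = np.load("background_chips.npy")
chips_to_explain = np.load("eval_chips.npy")

explainer = shap.GradientExplainer(scalar_head, background)
shap_values = explainer.shap_values(chips_to_explain)

# Per-pixel attributions with the same shape as the input chips: regions that
# pushed the "building" score up or down. A report could aggregate these by
# building property (e.g. round vs. rectangular footprints).
print(np.asarray(shap_values).shape)
```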

Describe alternatives you've considered

Additional context

Tasks

kshitijrajsharma commented 1 year ago

Related discussion : https://forum.openlabs.cc/t/hot-osm-style-models-a-discussion/2869