laura-rieger / deep-explanation-penalization

Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
MIT License

Extension to natural images? #4

Closed alvinwan closed 4 years ago

alvinwan commented 4 years ago

First, really interesting work -- I chanced upon this repo because GitHub recommended it.

For computer vision (e.g., ColorMNIST), have you considered natural images? Is there a bottleneck in terms of memory or compute that required an MNIST variant? (Or is it just more difficult to find an obvious bias in natural images?)

The text gender bias is really interesting. I'll have to take a closer read before asking questions, though!

laura-rieger commented 4 years ago

Hi Alvin,

Glad you like the work! We do show experiments with natural images (see the skin cancer dataset) on a VGG network, where we increase predictive accuracy by penalizing the importance of bright patches pasted on the skin. In the paper this is described in Sec. 4.1. In Table S1 we also show the memory and computation requirements of a CDEP network vs. a vanilla network (roughly twice the memory). Complementary to the natural-image dataset, ColorMNIST allows us to precisely measure how effective CDEP and compared methods are, since we know exactly what the bias is (we induced it ourselves). Hope this clears it up :)
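For readers skimming this thread: the core idea being discussed -- adding a penalty on the importance a model assigns to features known to be spurious -- can be sketched with a toy example. Note this is a simplified stand-in, not the paper's implementation: CDEP computes importance via contextual decomposition on a neural network, whereas here "importance" for a linear model is just the weight magnitude on the masked (known-biased) features. The function name and mask convention are illustrative assumptions.

```python
import numpy as np

def cdep_style_loss(w, X, y, bias_mask, lam=1.0):
    """Toy sketch of an explanation-penalized loss: ordinary prediction
    loss plus lam times the 'importance' assigned to features flagged as
    spurious. For this linear model, importance is |w| on masked features;
    CDEP itself uses contextual decomposition scores instead."""
    preds = X @ w
    task_loss = np.mean((preds - y) ** 2)        # standard prediction loss
    expl_penalty = np.sum(np.abs(w[bias_mask]))  # importance on biased features
    return task_loss + lam * expl_penalty

# Two features; the second is a known spurious cue (e.g., an induced color bias).
w = np.array([1.0, 2.0])
X = np.eye(2)
y = np.array([1.0, 2.0])
bias_mask = np.array([False, True])

loss = cdep_style_loss(w, X, y, bias_mask, lam=0.5)  # task loss 0 + 0.5 * |2| = 1.0
```

Minimizing this combined loss pushes the model to fit the labels while shrinking its reliance on the flagged features, which mirrors the paper's penalization of bright-patch importance on the skin cancer data.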

Best, Laura