Closed: teabolt closed this pull request 4 years ago.
Merging #329 into master will increase coverage by 0.02%. The diff coverage is 96.36%.
@@            Coverage Diff             @@
##           master     #329      +/-   ##
==========================================
+ Coverage   97.29%   97.31%   +0.02%
==========================================
  Files          49       49
  Lines        3107     3134      +27
  Branches      581      584       +3
==========================================
+ Hits         3023     3050      +27
  Misses         44       44
  Partials       40       40
Impacted Files | Coverage Δ | |
---|---|---|
eli5/keras/gradcam.py | 100% <100%> (ø) | :arrow_up: |
eli5/formatters/image.py | 100% <100%> (+2.73%) | :arrow_up: |
eli5/keras/explain_prediction.py | 96.38% <95.65%> (-1.83%) | :arrow_down: |
@teabolt I'm not sure I follow the comments in the previous PR regarding this. Could you please summarize: why should the `image` argument be required, not optional?

> I'm not sure I follow the comments in the previous PR regarding this. Could you please summarize: why should the `image` argument be required, not optional?
@kmike making `image` required makes a couple of things easier:

1. Dispatching (if `image` is passed, dispatch to image; if `tokens` is passed, dispatch to text). Otherwise we'd need to check the shape of the `doc` array, which may not be ideal (e.g. if we later end up supporting video with array shape `(height, width, frames)`, this has the same shape as an RGB image `(height, width, 3)`).
2. Converting `doc` to an `image` (a rough conversion sketch follows below). Classic `(height, width, channels)` arrays can be easily converted to images, but the user would have issues if `doc` is something else but still represents an image.

Edit: I can see how my arguments are edge cases. We can try to infer things from `doc` and if that fails the user can explicitly pass `image`?
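A minimal sketch of what that conversion might look like, assuming `doc` is a classic `(height, width, channels)` float array in `[0, 1]`, possibly with a leading batch axis. The helper name `doc_to_pil_image` is hypothetical and not part of eli5:

```python
import numpy as np
from PIL import Image

def doc_to_pil_image(doc):
    """Hypothetical helper: turn a (height, width, channels) float array
    in [0, 1] (optionally with a leading batch axis) into a Pillow image."""
    arr = np.asarray(doc)
    if arr.ndim == 4 and arr.shape[0] == 1:
        arr = arr[0]  # drop the batch axis that Keras models usually expect
    if arr.ndim == 3 and arr.shape[-1] in (1, 3, 4):
        if arr.shape[-1] == 1:
            arr = arr[..., 0]  # grayscale: Pillow expects a 2D array here
        return Image.fromarray((arr * 255).astype(np.uint8))
    # Anything else (unusual input shapes) is where an explicit image= helps.
    raise ValueError('doc does not look like an image array')
```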
> Otherwise we'd need to check the shape of the `doc` array, which may not be ideal
Interesting - in most other places we're dispatching based on model, not based on input data type. E.g. you've got ResNet, with 2d convolutions and stuff => dispatch to image handling.
> Edit: I can see how my arguments are edge cases. We can try to infer things from `doc` and if that fails the user can explicitly pass `image`?
Yeah, that's what I was thinking - don't require the user to pass `image` if we can figure it out automatically; if it works in 80% of cases that's fine, as long as there is a way to pass `image` explicitly.
> you've got ResNet, with 2d convolutions and stuff => dispatch to image handling.
That is a good idea. For text we can check for Embedding, recurrent, or 1D layers. For images we'd expect 2D layers.
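To make the model-based dispatch idea concrete, here is a rough heuristic sketch (not the eli5 implementation; the function name `looks_like_image_model` is made up) that inspects the class names of a Keras model's layers:

```python
def looks_like_image_model(model):
    """Rough heuristic sketch: dispatch by inspecting Keras layer types.
    2D convolution/pooling layers suggest an image model; Embedding,
    recurrent, or 1D layers suggest a text model."""
    layer_names = {type(layer).__name__ for layer in model.layers}
    if layer_names & {'Conv2D', 'SeparableConv2D', 'MaxPooling2D', 'AveragePooling2D'}:
        return True   # image-like model
    if layer_names & {'Embedding', 'LSTM', 'GRU', 'SimpleRNN', 'Conv1D'}:
        return False  # text-like model
    return None       # unknown: fall back to explicit image=/tokens= arguments
```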
Added a Grad-CAM image (https://github.com/teabolt/eli5/blob/keras-gradcam-img-v2/docs/source/static/gradcam-catdog.png) to the README after the text highlighting image.
Thanks @kmike. I'll have to clean up my backslashes in other PRs.
Thanks @teabolt!
API improvements for https://github.com/TeamHG-Memex/eli5/pull/315.
For Keras image classifiers, the call for explanations is `eli5.explain_prediction(model, doc)`. We will add the ability to pass an optional `image` argument like this: `eli5.explain_prediction(model, doc, image=image)`, where `image` is a Pillow image corresponding to `doc`. In this case we will no longer use `doc` to create an image. This is useful for experimental models or unusual input shapes.

These changes should've been introduced in https://github.com/TeamHG-Memex/eli5/pull/325 but need to be merged earlier. See also https://github.com/TeamHG-Memex/eli5/pull/315#discussion_r292402958. (Originally `image` was going to be a required argument, but we decided to keep it optional.)
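For reference, a usage sketch of the optional argument described above. The model, file name, and preprocessing here are placeholders; only the `eli5.explain_prediction(model, doc, image=image)` call itself is from this PR:

```python
import numpy as np
import eli5
from PIL import Image
from keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input

model = MobileNetV2(weights='imagenet')                # placeholder image classifier
image = Image.open('cat_dog.jpg').resize((224, 224))   # Pillow image kept for display
doc = preprocess_input(np.expand_dims(np.array(image, dtype=np.float32), axis=0))

# Without image=, eli5 builds the overlay image from doc itself; with image=,
# the original picture is used and doc is only fed to the model.
explanation = eli5.explain_prediction(model, doc, image=image)
```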