chihuahua opened this issue 7 years ago (status: Open)
Where would the API for writing the saliency summary live? In TensorBoard python space, or would it ship with the saliency library? Maybe TensorBoard space, since the API takes a 2D np array of values (generated by saliency), but shipping with saliency nicely clarifies that this plugin is paired with it. What do you think?
Upon second thought, I actually prefer coding the API for creating data within the saliency library.
Saliency maps (or sensitivity masks) visualize which inputs contribute to a model's decision. They are generated by gradient-based methods such as integrated gradients, guided backpropagation, or SmoothGrad (original link). Teams including the diabetic retinopathy folks have asked for TensorBoard to show them.
PAIR has a wonderful library for computing saliency maps. The main challenge in building a plugin around the library's output is that the gradient-based methods operate at a level above a single model run, so summary ops are ill-suited for collecting the data. For instance, integrated gradients runs a graph several (say 50 or 100) times with varying inputs, and the saliency library outputs a 2D numpy array of sensitivity values.
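To make that mismatch concrete, here is a minimal numpy sketch of integrated gradients for a toy differentiable function. This is not the saliency library's actual API; the function names are hypothetical, and a real model's gradient would come from a session/graph run rather than a lambda. The point is that the method itself loops over many forward/backward passes, so no single in-graph summary op sees the final result.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate integrated gradients for one input (hypothetical sketch).

    grad_fn: gradient of the model output w.r.t. the input.
    Calls grad_fn `steps` times on interpolated inputs -- this repeated
    execution is why a summary op inside the graph cannot capture the result.
    """
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule over [0, 1]
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    avg_grad = total / steps
    return (x - baseline) * avg_grad  # sensitivity mask, same shape as input

# Toy "model": f(x) = sum(x**2), whose gradient is 2*x.
x = np.array([[1.0, 2.0], [3.0, 4.0]])
baseline = np.zeros_like(x)
mask = integrated_gradients(lambda v: 2.0 * v, x, baseline, steps=100)

# Completeness axiom: attributions sum to f(x) - f(baseline) = 30.
print(mask.shape)           # (2, 2) -- a 2D array of sensitivity values
print(round(mask.sum(), 2))
```

The end product is exactly the kind of 2D numpy array the plugin would need to ingest after the fact, rather than during graph execution.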
In response to that challenge, I think this 'saliency' plugin could deviate from other plugins: it could offer a python helper method that takes that numpy array and writes it into a summary. That has several advantages. TensorBoard's bazel targets would not have to depend on the saliency library; we rely on the user to install and use it. This lets users use the saliency library as expected and just slip the output into a TensorBoard summary.
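As a rough sketch of the data-handling core such a helper would need, the snippet below serializes a 2D mask to a bytes payload and back. It is purely illustrative: the real helper would wrap the array in a tf.Summary protobuf so it rides in TensorBoard event files, and all names here are hypothetical.

```python
import struct
import numpy as np

def encode_saliency(mask):
    """Serialize a 2D saliency mask into a bytes payload (hypothetical).

    A real helper would instead embed the array in a tf.Summary proto.
    Layout here: two little-endian uint32 dims, then row-major float32s.
    """
    mask = np.asarray(mask, dtype=np.float32)
    assert mask.ndim == 2, "the saliency library emits a 2D array"
    header = struct.pack("<II", *mask.shape)
    return header + mask.tobytes()

def decode_saliency(payload):
    """Inverse of encode_saliency, as the plugin backend might do it."""
    rows, cols = struct.unpack_from("<II", payload)
    values = np.frombuffer(payload, dtype=np.float32, offset=8)
    return values.reshape(rows, cols)

mask = np.random.rand(4, 3).astype(np.float32)
restored = decode_saliency(encode_saliency(mask))
print(np.array_equal(mask, restored))  # round-trips exactly
```

Because the helper only consumes a plain numpy array, the user runs the saliency library however they like and hands us the finished output, keeping TensorBoard's build free of the dependency.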
At the end of the day, the frontend UI could let the user overlay saliency maps atop images and toggle between the map and the image. The map could be semi-transparent, and users could adjust the color tinge. Cards in the UI could mimic the examples shown on the SmoothGrad page: https://pair-code.github.io/saliency/
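The overlay itself is simple alpha compositing; a numpy sketch of what the frontend would render (the alpha value and red tint are arbitrary illustrative choices, not a proposed design):

```python
import numpy as np

def overlay_saliency(image, mask, alpha=0.5, tint=(1.0, 0.0, 0.0)):
    """Blend a semi-transparent tinted saliency map atop an RGB image.

    image: HxWx3 floats in [0, 1]; mask: HxW saliency values.
    `tint` plays the role of the user-adjustable color tinge.
    """
    # Normalize the mask to [0, 1] so any sensitivity range works.
    m = mask - mask.min()
    if m.max() > 0:
        m = m / m.max()
    # Per-pixel opacity: stronger saliency shows more tint.
    opacity = (alpha * m)[..., np.newaxis]           # HxWx1
    colored = m[..., np.newaxis] * np.asarray(tint)  # HxWx3
    return (1.0 - opacity) * image + opacity * colored

image = np.ones((2, 2, 3)) * 0.5   # flat gray image
mask = np.array([[0.0, 1.0], [0.5, 0.25]])
out = overlay_saliency(image, mask)
print(out.shape)  # (2, 2, 3)
```

Toggling the map off just means rendering `image` alone, and changing the tinge swaps `tint`, so both UI controls fall out of the same blend.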
@wchargin @jart @dandelionmane