Bartzi / see

Code for the AAAI 2018 publication "SEE: Towards Semi-Supervised End-to-End Scene Text Recognition"

Textrec model test Backprop visualization #29

Open Frozenmad opened 6 years ago

Frozenmad commented 6 years ago

I want to get the Visual Backprop result from the text-recognition demo because I need the pixel locations of the words in the picture. The ideal output for a given input picture is a black-and-white picture of the same size in which white regions mark the characters. Is there any way I can get this?

Bartzi commented 6 years ago

Yes, it is no problem to get those images:

The Variable that is used to create those visualizations is called `vis_anchor` (https://github.com/Bartzi/see/blob/master/chainer/models/svhn.py#L52).
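For example, after a forward pass you could read that Variable directly. This is only a rough sketch: `net` and `image_batch` are placeholder names, not this repository's API; the only thing taken from above is that the network keeps the Variable in its `vis_anchor` attribute.

```python
import chainer

# Placeholder names: `net` is the loaded recognition network,
# `image_batch` a preprocessed input batch of shape (N, C, H, W).
with chainer.using_config('train', False):
    net(image_batch)                                          # forward pass
feature_maps = chainer.cuda.to_cpu(net.vis_anchor.data)      # (N, C, H, W) array
```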

Visual Backprop is performed by the BBoxPlotter (https://github.com/Bartzi/see/blob/master/chainer/insights/bbox_plotter.py#L126); the linked code shows how it is done. You should be able to use this information to generate the pictures on your own.
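If you prefer to build the mask yourself, here is a minimal numpy sketch of the general Visual Backprop idea (average each conv layer's feature maps over the channel axis, then walk from the deepest layer back to the input, upsampling and multiplying pointwise). It is not the code from `bbox_plotter.py`; all function names and shapes below are made up for illustration.

```python
import numpy as np

def _resize_nearest(arr, out_h, out_w):
    """Nearest-neighbour resize of a 2-D array to (out_h, out_w)."""
    h, w = arr.shape
    rows = (np.arange(out_h) * h // out_h).clip(0, h - 1)
    cols = (np.arange(out_w) * w // out_w).clip(0, w - 1)
    return arr[rows[:, None], cols[None, :]]

def visual_backprop(feature_maps, out_size):
    """Fuse a list of conv feature maps into one saliency mask.

    feature_maps: list of arrays shaped (channels, height, width),
                  ordered from the first conv layer to the deepest one.
    out_size:     (height, width) of the input image.
    Returns a float mask in [0, 1] at the input resolution; bright (white)
    regions mark the pixels that drove the network's prediction.
    """
    # Average each layer's maps over the channel axis.
    averaged = [fm.mean(axis=0) for fm in feature_maps]

    # Walk from the deepest layer back to the shallowest, upsampling the
    # running mask to the previous layer's resolution and multiplying.
    mask = averaged[-1]
    for prev in reversed(averaged[:-1]):
        mask = _resize_nearest(mask, *prev.shape) * prev

    # Scale up to the input resolution and normalise to [0, 1].
    mask = _resize_nearest(mask, *out_size)
    mask -= mask.min()
    if mask.max() > 0:
        mask /= mask.max()
    return mask

# Dummy activations just to show the call; replace them with the maps you
# collect while running the text-recognition network.
maps = [np.random.rand(32, 50, 150),
        np.random.rand(64, 25, 75),
        np.random.rand(128, 12, 37)]
saliency = visual_backprop(maps, (100, 300))
binary = (saliency > 0.5).astype(np.uint8) * 255   # black-and-white mask
```

Thresholding the normalised mask (last line) gives the kind of black-and-white picture you described; the threshold value is something you would have to tune for your data.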