keisen / tf-keras-vis

Neural network visualization toolkit for tf.keras
https://keisen.github.io/tf-keras-vis-docs/
MIT License

Have been unable to visualize dense layer on my graphs/datasets #34


omerbrandis commented 4 years ago

Hello,

I've managed to run the visualize-dense-layer example, but I have not managed to reproduce the results with other graphs (such as the current TensorFlow classification tutorial model) on my own dataset (9 classes, image size 50x50, 2400 training images, 100 validation images).

Training converges and I get decent results (for example: loss: 0.0714 - sparse_categorical_accuracy: 0.9766 - val_loss: 0.6758 - val_sparse_categorical_accuracy: 0.8304),

and yet the visuals generated by activation_maximization are meaningless (a few examples are attached as vgg-several1; all the images contain apples, so I expected to see round shapes).

(I've tried both with and without image preprocessing/normalization (i.e. dividing the input pixel values by 255), I've tried visualizing the results directly after fitting (without saving/loading the graph), and I've tried modifying the tutorial graph in several different ways.)

Can someone point me in the right direction?

Thanks, Omer.

keisen commented 4 years ago

Hi, @omerbrandis . I'm sorry for the late reply.

By default, ActivationMaximization is tuned to visualize VGG16 well. So when you visualize the dense layer of a model other than VGG16, you have to adjust the parameters of ActivationMaximization.

https://github.com/keisen/tf-keras-vis/blob/b420516ac29ddd9f3f5b964d53a6f3177241d014/tf_keras_vis/activation_maximization.py#L13-L64

Tuning ActivationMaximization is much like tuning a model. I recommend adjusting, in this order: the seed input, the optimizer and learning rate, and the regularizers. A rough sketch is below.
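For example, something along these lines. This is only a sketch assuming the v0.5-style API linked above; module paths, class names, and parameter names may differ in the version you have installed, and the class index, learning rate, and regularizer weights are placeholders you will need to tune for your own model.

```python
import tensorflow as tf
from tf_keras_vis.activation_maximization import ActivationMaximization
# Assumed v0.5-style locations; newer releases rename these classes.
from tf_keras_vis.utils.regularizers import TotalVariation, L2Norm

# `model` is your trained 9-class classifier.
def model_modifier(m):
    # Replace the softmax with a linear activation so gradients are not squashed.
    m.layers[-1].activation = tf.keras.activations.linear
    return m

activation_maximization = ActivationMaximization(model, model_modifier, clone=False)

# Maximize the output unit of one category (class index 0 is just an example).
def loss(output):
    return output[:, 0]

# Things to tune, roughly in this order:
# 1. seed_input   : start from a neutral image in your model's input range
# 2. optimizer/lr : a smaller learning rate than the default often behaves better
# 3. regularizers : lower the weights if the result collapses to a flat image
seed_input = tf.random.uniform((1, 50, 50, 3), 0, 255)  # match your 50x50 inputs
activations = activation_maximization(
    loss,
    seed_input=seed_input,
    input_range=(0, 255),                        # or (0, 1) if you normalize inputs
    optimizer=tf.optimizers.RMSprop(0.1, 0.95),  # try a smaller learning rate
    regularizers=[TotalVariation(weight=0.3), L2Norm(weight=0.02)],
    steps=500,
)
image = activations[0].astype('uint8')  # first (and only) generated image
```

If the generated image still looks like noise or a flat color, lowering the regularizer weights and increasing `steps` is usually the next thing to try.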

Thanks!

omerbrandis commented 4 years ago

Thank you very much, keisen. I'll try to follow these instructions. Omer.