Open omerbrandis opened 4 years ago
Hi, @omerbrandis . I'm sorry for the late reply.
By default, ActivationMaximization is tuned to visualize well with VGG16. So when you visualize a dense layer of a model other than VGG16, you have to adjust the parameters of ActivationMaximization.
Tuning ActivationMaximization is much like tuning a model. I recommend tuning in this order: the input seed, then the optimizer and learning rate, then the regularizers.
Thanks!
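The tuning order above can be illustrated with a toy sketch. This is not tf-keras-vis code; it is a minimal NumPy gradient-ascent loop where the "model" is a linear scorer with an analytic gradient, and the hypothetical parameters (`seed`, `lr`, `l2_weight`, `steps`) stand in for the three knobs Keisen lists: the input seed, the optimizer/learning rate, and the regularizer strength.

```python
import numpy as np

# Toy stand-in for a class logit: score(x) = w . x, so d(score)/dx = w.
rng = np.random.default_rng(0)
w = rng.normal(size=100)

def score(x):
    return float(w @ x)

def activation_maximization(seed, lr=0.1, l2_weight=0.01, steps=200):
    """Gradient ascent on score(x) minus an L2 regularizer on the input.

    lr and l2_weight are hypothetical knobs: too large an lr diverges,
    too strong a regularizer suppresses the pattern you want to see.
    """
    x = seed.copy()
    for _ in range(steps):
        grad = w - 2 * l2_weight * x  # d/dx [w.x - l2_weight * ||x||^2]
        x += lr * grad
    return x

seed = rng.normal(scale=0.1, size=100)  # the input seed is the first thing to tune
x_opt = activation_maximization(seed)
print(score(seed), score(x_opt))  # the optimized input scores higher than the seed
```

In the real library the same roles are played by the `seed_input` you pass in, the optimizer and its learning rate, and the regularizers (e.g. total variation and norm penalties); the sketch only shows why each one changes what the optimization converges to.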
Thank you very much Keisen, I'll try to follow these instructions. Omer.
Hello,
I've managed to run the visualize-dense-layer example, but I have not managed to reproduce the results with other models (such as the current TensorFlow classification tutorial model) on my dataset (9 classes, 50x50 images, 2400 training samples, 100 validation samples).
Training converges and I get decent results (for example, loss: 0.0714 - sparse_categorical_accuracy: 0.9766 - val_loss: 0.6758 - val_sparse_categorical_accuracy: 0.8304),
and yet the visuals generated by activation_maximization are meaningless (a few examples attached). All the images contain apples, so I expected to see round shapes.
I've tried with and without image preprocessing/normalization (i.e. dividing the input pixel values by 255), I've tried visualizing the results directly after fitting (without saving/loading the model), and I've tried modifying the tutorial model in several different ways.
Can someone point me in the right direction?
Thanks, Omer.