raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License

nan in visualize_activation #94

Open · marcosterland opened this issue 6 years ago

marcosterland commented 6 years ago

Hi, I'm trying to use the activation maximization, but the optimizer's loss is nan after the third iteration.

```
Iteration: 1, named_losses: <zip object at 0x7f6632189c08>, overall loss: -0.044139932841062546
Iteration: 2, named_losses: <zip object at 0x7f66321d87c8>, overall loss: -0.028231114149093628
Iteration: 3, named_losses: <zip object at 0x7f6633f26148>, overall loss: nan
Iteration: 4, named_losses: <zip object at 0x7f6633f269c8>, overall loss: nan
```

I'm using the InceptionV1/GoogLeNet model, trained on grey-value images in the range (0, 1). Keras, keras-vis and TF are all at their latest versions. I've tried the backprop modifiers None, 'guided' and 'relu', and tried different seed images. The input range is set to (0.0, 1.0). Lowering the weights (act_max_weight, lp_norm_weight, tv_weight) has no effect besides changing the magnitude of the loss.

EDIT: The error does not occur when using uint16 images (same network topology, but trained on the uint16 images).
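Since the NaN shows up only with float images in (0, 1), one low-cost experiment is to rescale the seed image into a wider range such as (0, 255) and pass a matching input_range. This is just a sketch in plain numpy; the image shape (1, 224, 224, 1) and the trailing visualize_activation call are assumptions for illustration, not part of the original report:

```python
import numpy as np

def rescale(img, out_range=(0.0, 255.0)):
    """Linearly map img into out_range; guards against a zero dynamic range."""
    lo, hi = float(img.min()), float(img.max())
    if hi - lo < 1e-12:
        return np.full_like(img, out_range[0], dtype=np.float32)
    unit = (img - lo) / (hi - lo)  # now in [0, 1]
    return (out_range[0] + unit * (out_range[1] - out_range[0])).astype(np.float32)

# Hypothetical grey-value seed image in (0, 1), NHWC layout:
seed = np.random.rand(1, 224, 224, 1).astype(np.float32)
seed255 = rescale(seed)
# ...then e.g. visualize_activation(..., seed_input=seed255, input_range=(0., 255.))
```

If the NaN disappears in the wider range, that would suggest the default regularizer weights are simply ill-matched to a (0, 1) input scale.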

Is there a way to pass a learning rate to the optimizer? Is there something else I can do?

Thanks!

patrick-ucr commented 5 years ago

I am facing the same problem. I am using a Keras model converted from a Tiny YOLO model.