raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License
2.98k stars 661 forks

TimeDistributed and Multiple Inputs #165

Open wstang35 opened 5 years ago

wstang35 commented 5 years ago

Hi, this is a great project! I am trying to visualize the class features captured by a CNN+RNN model: the CNN acts as a feature extractor for each timestep's input, and the extracted features are fed to an RNN to predict each timestep's label. For the predictions I am using TimeDistributed(Dense(5)), so I am hoping to find a way to extend keras-vis to TimeDistributed layers. In addition, I am using multiple inputs, and it looks like keras-vis does not currently support multiple-input models.

So, how can keras-vis be extended to cover these two situations? I would really appreciate any kind of suggestion. Otherwise my only option is to rework my model into a single-input one and split the TimeDistributed layer into multiple output Dense layers... which would not be neat.
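For reference, a minimal sketch of the kind of architecture described above, written in TF2-style Keras for illustration (the original thread predates TF2). All layer sizes, input shapes, and the wiring of the second input are hypothetical, not taken from the original post: a TimeDistributed CNN feature extractor feeding an LSTM, plus an auxiliary input, ending in TimeDistributed(Dense(5)).

```python
# Hypothetical sketch of a multi-input CNN+RNN model with
# per-timestep predictions via TimeDistributed(Dense(5)).
# All shapes and layer sizes are made up for illustration.
import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS, H, W, C = 4, 32, 32, 3

# Input 1: a sequence of images, one per timestep.
frames = layers.Input(shape=(TIMESTEPS, H, W, C), name="frames")
# Input 2: a hypothetical auxiliary per-timestep feature vector.
aux = layers.Input(shape=(TIMESTEPS, 8), name="aux")

# CNN feature extractor, applied to every timestep via TimeDistributed.
cnn = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(H, W, C)),
    layers.GlobalAveragePooling2D(),
])
features = layers.TimeDistributed(cnn)(frames)

# Merge with the auxiliary input and run an RNN over time.
merged = layers.Concatenate()([features, aux])
rnn_out = layers.LSTM(32, return_sequences=True)(merged)

# One 5-class prediction per timestep.
preds = layers.TimeDistributed(layers.Dense(5, activation="softmax"))(rnn_out)

model = models.Model(inputs=[frames, aux], outputs=preds)

out = model.predict([np.zeros((1, TIMESTEPS, H, W, C), "float32"),
                     np.zeros((1, TIMESTEPS, 8), "float32")], verbose=0)
print(out.shape)  # one softmax over 5 classes per timestep
```

This is exactly the shape of model keras-vis struggles with: two input tensors, and an output indexed by both timestep and class rather than by class alone.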

keisen commented 5 years ago

Same problem: https://github.com/raghakot/keras-vis/issues/48 and https://github.com/raghakot/keras-vis/issues/33

keisen commented 5 years ago

@wstang35, just to confirm: you'd like to visualize your network using visualize_cam or visualize_saliency (not visualize_activation), wouldn't you?

I can help you. I think a reasonable approach is to build it on the implementation in PR #128. Could you share your model's source code and some data via Gist, Slack, or somewhere similar?
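For context, the core idea behind visualize_saliency is the gradient of an output score with respect to the input. For a per-timestep TimeDistributed output, that same idea can be sketched in plain TF2 without keras-vis (the toy model, timestep index, and class index below are hypothetical, chosen only for illustration):

```python
# Gradient-based saliency for a per-timestep output, sketched in TF2.
# This illustrates the technique behind visualize_saliency; it does not
# use keras-vis itself. Model and indices are hypothetical.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Toy single-input sequence model with a 5-class output per timestep.
model = models.Sequential([
    layers.LSTM(16, return_sequences=True, input_shape=(4, 8)),
    layers.TimeDistributed(layers.Dense(5)),
])

x = tf.constant(np.random.rand(1, 4, 8).astype("float32"))
t, cls = 2, 3  # hypothetical timestep and class of interest

with tf.GradientTape() as tape:
    tape.watch(x)
    preds = model(x)          # shape (1, 4, 5)
    score = preds[:, t, cls]  # score of class `cls` at timestep `t`

# Saliency = |d score / d input|, same shape as the input.
saliency = tf.abs(tape.gradient(score, x)).numpy()
print(saliency.shape)  # (1, 4, 8)
```

The key difference from the single-output case is only the indexing of `preds`: you pick a (timestep, class) pair instead of a class alone. A multi-input model would work the same way, watching whichever input tensor you want the saliency map over.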

wstang35 commented 5 years ago

Hi @keisen! Thanks for the reply. I want to know what kind of features my network is capturing, so either visualize_saliency or visualize_activation would be a great help! As for my model, can I just send my model.h5 file to your email? I wrapped some of the conv layers in my source code, and it would be somewhat cumbersome to extract the model from my project. (And if the .h5 model is not enough, I can clean up some of the model source code, but it might take some time!) Thank you again!

keisen commented 5 years ago

Yes, you can. My email address is in my profile. Thank you. And please send me some input data as well, because it will be used as an argument value for the visualize_* functions.

Kristufi commented 5 years ago

Hi,

So how should a TimeDistributed layer be used with visualize_activation? My model consists of CNN and LSTM layers.

Thanks in advance!

gabarlacchi commented 4 years ago

Hi! I am facing exactly the same situation: a CNN for feature extraction + an LSTM for classification (I label the whole sequence, not each frame). Did you find a way to visualize feature activations within the image? Thanks for the help!