alewarne / Layerwise-Relevance-Propagation-for-LSTMs

TensorFlow 2.1 implementation of LRP for LSTMs

Batch processing about 1 million sequences #6

Closed oldbighorn closed 2 years ago

oldbighorn commented 2 years ago

I'm working with the example file and adapted it to explain clickstream sequences. Now I want to explain about 1 million sequences batch-wise. The batch-wise explanation works for at most roughly 10,000 sequences; with more, it crashes due to memory issues. I'm using Google Colab with 25 GB of memory. Is it possible to explain more sequences without crashing from running out of memory?

oldbighorn commented 2 years ago

Edit: fixed by explaining the sequences batch-wise in a for-loop:

```python
import numpy as np

batchsize = 500
# Preallocate the full relevance array, then fill it one batch at a time
# so only `batchsize` sequences are explained per call.
relevances = np.zeros(sentences.shape)
for i in range(0, sentences.shape[0], batchsize):
    batch = sentences[i:i + batchsize]
    relevances[i:i + batchsize, :, :], _ = lrp_model.lrp(batch)
```
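If even the preallocated result array is too large for RAM (1 million sequences times sequence length times features can exceed 25 GB on its own), the same loop can write into a disk-backed array via `numpy.memmap`. The sketch below is an assumption-laden illustration, not code from this repo: `lrp` stands in for `lrp_model.lrp`, and the toy sizes and random `sentences` replace the real clickstream data.

```python
import numpy as np

# Hypothetical stand-in for lrp_model.lrp from this repo, which returns
# per-timestep relevances plus an auxiliary value for each batch.
def lrp(batch):
    return np.ones_like(batch), None

# Toy dimensions for illustration; n_sequences would be ~1,000,000 in practice.
n_sequences, seq_len, n_features = 10_000, 20, 8
batchsize = 500

# Disk-backed output array in .npy format: the full relevance tensor lives
# on disk, so RAM usage stays bounded by one batch at a time.
relevances = np.lib.format.open_memmap(
    "relevances.npy", mode="w+",
    dtype=np.float32, shape=(n_sequences, seq_len, n_features))

# Placeholder input; in the real setting this is the clickstream data.
sentences = np.random.rand(n_sequences, seq_len, n_features).astype(np.float32)

for i in range(0, n_sequences, batchsize):
    batch = sentences[i:i + batchsize]
    relevances[i:i + batchsize], _ = lrp(batch)

relevances.flush()  # push buffered writes to disk
```

The resulting `relevances.npy` can later be reopened read-only with `np.load("relevances.npy", mmap_mode="r")` without loading it all into memory.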