Dear authors and community, thanks for the effort you put into building this library; it has been really useful for me. However, I am having trouble generating explanations with the LEFTIST method for PyTorch models.
Basically, I have this structure:
```
Shape of X_train: (181177, 30, 1)
Shape of X_test:  (45295, 30, 1)
Shape of y_train: (181177,)
Shape of y_test:  (45295,)
```
And the code is the following:
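(Condensed to the essentials: layer sizes are placeholders, `X_test`/`y_test` are the arrays with the shapes above, and the LEFTIST arguments follow my reading of the library's docs, so exact parameter names may differ between versions.)
```python
import numpy as np
import torch
import torch.nn as nn
from TSInterpret.InterpretabilityModels.leftist.leftist import LEFTIST

# Simplified sketch of my classifier: a recurrent feature extractor
# followed by a single sigmoid output neuron.
class Classifier(nn.Module):
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):                 # x: (batch, 30, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # probability of the positive class

model = Classifier()
model.eval()

# LEFTIST set-up for the PyTorch backend; argument names below are my
# best reading of the TSInterpret documentation.
leftist = LEFTIST(
    model,
    (X_test, y_test),
    mode="time",        # time axis is the second dimension of the data
    backend="PYT",
    learning_process_name="Lime",
    transform_name="straight_line",
)
item = X_test[0:1]      # one sample to explain, shape (1, 30, 1)
explanations2 = leftist.explain(np.array(item))
```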
All of my explanations appear like this image, with no feature importance associated with any timestep; using TensorFlow, however, I get different results:
Hi @marreapato,
would you mind printing the explanation (explanations2)? Just to figure out whether the algorithm implementation or the plot function is the issue.
Which version of TSInterpret are you using?
Thanks for the reply, @JHoelli.
I am using version 0.4.5, and the array of explanations appears as:
```
[array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])]
```
Hi @marreapato,
I think the issue is the output layer of your model. Most approaches in here expect a softmax output, i.e. two neurons for a binary classification problem.
I made some changes to the PyTorchWrapper of TSInterpret. Please try them by installing from main and let me know if it works:
```
pip install https://github.com/fzi-forschungszentrum-informatik/TSInterpret/archive/refs/heads/main.zip
```
If that does not work, as a fallback, retraining your model with a softmax output, two neurons, and nn.CrossEntropyLoss might also be a solution.
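For reference, a minimal sketch of such a two-neuron set-up (the names and sizes below are just placeholders):
```python
import torch.nn as nn

# Placeholder sketch: hidden_dim stands in for the size of your last
# hidden layer. Two output neurons and no final activation here,
# because nn.CrossEntropyLoss applies log-softmax internally, so the
# model should return raw logits during training and the targets are
# class indices (0 or 1, dtype long).
hidden_dim = 64
head = nn.Linear(hidden_dim, 2)
criterion = nn.CrossEntropyLoss()
```
At inference and explanation time, applying `nn.Softmax(dim=-1)` to the two logits then yields the per-class probabilities the explainers expect.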
@JHoelli Thanks for your reply. I haven't tried the first solution you provided yet (I can still do it, though, in case you need some feedback). I switched the activation function of my output layer and the loss function, and I finally got the feature importance of the timesteps!
Btw, I am using the library for my master's research and will certainly cite the paper in my publications related to it; thanks for the effort you put into solving my problem.
It's applied research on financial time-series data, so if there is any interest in having your names attached to it, or in any sort of collaboration, let me know. My institutional email (from a public university in Brazil, the Federal University of Pernambuco) is lram2@cin.ufpe.br
Off-topic:
Would you also happen to know of any online events on explainability for time series?