Closed expectopatronum closed 3 years ago
Wonderful work, I would also like to see it done in PyTorch.
I do wish the pytorch version could be released as soon as possible.
Hi, what is the progress now? I would like to join.
I was able to reproduce the hospital readmission notebook experiments in Pytorch with a few issues:
Since I couldn't get it to run in reasonable time, and some things from the original implementation are unclear to me (I sent an email to the first author of the paper but haven't received an answer yet), I have moved on to other interpretability methods.
My code is messy so I didn't put it online. If someone is interested in helping me - feel free to contact me, I'd like to give it another shot.
@expectopatronum Hi, I am working on the first experiment by translating the TensorFlow code to PyTorch; it is difficult, though. I would like to help and work on it together.
What is your approach? Do you translate the code file by file, or reorganize it yourself?
First I tried to translate the code file by file, but I think PyTorch and TensorFlow work too differently. I also want the influence code decoupled from the model, so I put it in a separate file; in the end I want it to work for every model instead of copying the code into every model.
I also tried to figure out which parts are actually used (in the example) and only implement those (for now). E.g., in the hospital_readmission example (which I use to test my implementation) they pass test_indices, so I currently don't care about the part of the function that deals with the case where it is None.
I will put my code on Github in the next couple of days and share it with you - maybe we can solve it together.
Here is one of the questions I asked the author, maybe you have an answer to this:
The function update_feed_dict_with_v_placeholder is not clear to me. First you fill the feed_dict with a batch of the data (https://github.com/kohpangwei/influence-release/blob/master/influence/genericNeuralNet.py#L496) and afterwards you seem to update this batch with 'cur_estimate'. What does the feed_dict look like at this stage?
a) Is the input replaced by v? Is the prediction computed on v or input? b) Or is v added to the feed_dict and it now contains input, label and v?
@expectopatronum The function update_feed_dict_with_v_placeholder just inserts the v_placeholder tensors and their corresponding values into the feed_dict. The keys in feed_dict are tensors and the values are the values fed for those tensors.
Hope that helps. By the way, may I ask when you think your code will be ready online?
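To make the feed_dict update concrete: in the TF code, each iteration keeps the sampled batch's inputs and labels in the feed_dict and only overwrites the v_placeholder entries with cur_estimate. In PyTorch no placeholders are needed; the LiSSA-style recursion can be sketched directly (this is a minimal sketch of the recursion, not the repository's code; hvp_fn is a hypothetical callable computing H @ h on one batch):

```python
def inverse_hvp_lissa(hvp_fn, v, batches, damping=0.01, scale=25.0):
    """Approximate H^{-1} v via the recursion
    h_{t+1} = v + (1 - damping) * h_t - (H h_t) / scale,
    where hvp_fn(batch, h) returns the Hessian-vector product H @ h
    estimated on one batch (a list of arrays/tensors shaped like v)."""
    cur_estimate = list(v)  # plays the role of the v_placeholder values
    for batch in batches:
        hv = hvp_fn(batch, cur_estimate)
        cur_estimate = [
            v_i + (1 - damping) * h_i - hv_i / scale
            for v_i, h_i, hv_i in zip(v, cur_estimate, hv)
        ]
    # Undo the scaling so the fixed point is H^{-1} v
    return [h / scale for h in cur_estimate]
```

With a scalar "Hessian" H = 2 and v = 1, the recursion converges to 0.5 = H^{-1} v, which is a quick sanity check for a port.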
Alright, thanks!
I am currently working on it, so I'd expect it to be ready in a couple of hours.
I've now created a private repository with my current status and invited @tengerye. If anyone else is interested in having a look, just let me know.
Hi @expectopatronum, just stumbled upon this...I'm also working on a currently unreleased PyTorch implementation of the paper, feel free to reach out...
kohpangwei doesn't seem to care about this repository anymore, what a shame
Hello,
@expectopatronum I don't think I saw any email (sorry if I missed it). But thanks @tengerye for answering it.
This repo is frozen to what was used for the paper. I'm glad that there's interest in making a Pytorch version; thank you and good luck! In case it helps, we have a more recent paper that also uses influence functions, and the code there is cleaner and easier to read: https://github.com/kohpangwei/group-influence-release
Hi @kohpangwei, that's strange. I used the email address from your influence paper, is that still valid? I still have some theoretical questions about the paper that probably cannot be answered by someone on GitHub.
I am aware of the new paper, I didn't have time yet to check it out but I will soon :)
Thanks a lot!
Yup, that email address still works! Feel free to drop me a note there. :)
Thanks, I did! Hopefully it won't get lost this time :)
Hi @expectopatronum @Kunlun-Zhu @markus-beuckelmann, has anyone successfully reproduced the CNN experiment (Fig. 2c) yet? Although the paper states that the method works well in the non-convergent case, I can never get get_inverse_hvp_cg
to converge. The original code achieves 0.9996 accuracy on the CNN training set and 0.9746 on the test set; in my case, it is 0.9325 and 0.8972 respectively.
I guess it must be related to the damping term.
@kohpangwei If possible, could you please share some advice on how to determine whether the training is good enough for the next step? E.g., did you check the eigenvalues of the inverse Hessian?
Hi @tengerye, unfortunately not. I have given up for now since I didn't even manage to exactly reproduce the hospital notebook (and it is super slow in my Pytorch implementation). Would you like to share your code?
Yup, checking the eigenvalues of the Hessian was a helpful diagnostic, and damping it "appropriately" (to make sure it's PSD) is important in the non-convex case. Increasing L2 regularization can also be helpful.
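The eigenvalue/damping diagnostic can be sketched like this (my interpretation of the advice above, not code from the repo): if the smallest Hessian eigenvalue is negative, adding damping * I with damping >= |lambda_min| makes H + damping * I positive semi-definite, so CG has a chance to converge.

```python
import numpy as np

def min_damping(hessian):
    """Smallest damping term that makes hessian + damping * I PSD.

    eigvalsh is for symmetric matrices, which a Hessian is.
    """
    lam_min = np.linalg.eigvalsh(hessian).min()
    return max(0.0, -lam_min)

# Example: an indefinite 2x2 "Hessian" from a non-convex model
H = np.array([[1.0, 0.0],
              [0.0, -0.5]])
damping = min_damping(H)
print(damping)  # 0.5
H_damped = H + damping * np.eye(2)  # now PSD, usable with CG
```

For a real network the Hessian is too large to form explicitly; the same idea applies with the smallest eigenvalue estimated from Hessian-vector products (e.g. Lanczos / power iteration on a shifted matrix).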
@kohpangwei Thank you for your kind reply. @expectopatronum Sharing it is exactly why I wrote it. Allow me a few days to fix the problem before making it public.
Hi @expectopatronum, I am also interested in the PyTorch implementation of the paper. Could you share your code with me? Thanks.
@expectopatronum I'm also very interested in the Pytorch implementation, could you also share your code with me as well? It'd be a fantastic help!
@expectopatronum I'm also looking for the pytorch implementation of influence functions! It'll be very helpful if you share your code😆
I've had a pytorch implementation lingering around for some time on my hard drive. I've just polished it up a bit (hope it's readable at all...) and wrote a few docs to go along with it. You can find it here: https://github.com/nimarb/pytorch_influence_functions
It doesn't implement all the graphics, tests, examples of the original paper - just the algo itself.
@nimarb This is amazing, thanks for sharing! If you don't implement stuff from the paper - how do you know if it is correct? (not saying that everything in the paper must be correct)
Initially, I recreated the Inception and adversarial use-cases (which were the most interesting for my use), where I got the same images for the helpful data points. I hope to find the time to put those out over the Christmas holidays :)
Closing this thread; thanks @nimarb for the implementation. :)
Hi, this might not be a question for the repo owner, but maybe someone else sees this - I hope it is OK to put this question here. Is anyone aware of a PyTorch implementation of influence functions? I think I got the implementation of the Hessian-vector product right, but there is also a lot of data handling involved (to replace the TensorFlow feed_dict machinery with more PyTorch-style data types). If no one has done it - I am currently working on it and can also share it (but this might take some time).
Best regards Verena
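For anyone picking this up: the Hessian-vector product mentioned above can be written in PyTorch with a double backward pass. A minimal sketch (my own naming, not code from any of the repositories discussed here):

```python
import torch

def hvp(loss, params, v):
    """Hessian-vector product H @ v via double backward.

    loss: scalar tensor; params: tensors with requires_grad=True;
    v: tensors with the same shapes as params.
    """
    # First backward pass, keeping the graph for a second differentiation
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Dot product of the gradients with v
    dot = sum((g * u).sum() for g, u in zip(grads, v))
    # Gradient of (grad . v) w.r.t. params is exactly H @ v
    return torch.autograd.grad(dot, params)

# Sanity check on f(x) = x^T A x, whose Hessian is A + A^T
x = torch.randn(3, requires_grad=True)
A = torch.randn(3, 3)
loss = x @ A @ x
v = [torch.randn(3)]
Hv = hvp(loss, [x], v)
print(torch.allclose(Hv[0], (A + A.T) @ v[0], atol=1e-5))  # True
```

This avoids ever materializing the full Hessian, which is what makes the LiSSA and CG approximations in the paper feasible for neural networks.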