kohpangwei / influence-release

MIT License
776 stars 175 forks

Pytorch implementation #10

Closed expectopatronum closed 3 years ago

expectopatronum commented 5 years ago

Hi, this might not be a question for the repo owner but maybe someone else sees this - I hope it is ok I put this question here. Is anyone aware of a Pytorch implementation of influence functions? I think I got the implementation of the hessian vector product right but there is also a lot of data handling involved (to replace the Tensorflow feed_dict stuff by more Pytorchy data types). If no one has done it - I am currently working on it and can also share it (but this might take some time).
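In case it's useful, here is a minimal sketch of the Hessian-vector product via double backprop (a toy quadratic loss for checking, not the repo's code):

```python
import torch

def hvp(loss, params, v):
    """Hessian-vector product H @ v via double backpropagation.

    loss: scalar tensor built from `params`
    params: list of tensors with requires_grad=True
    v: list of tensors matching the shapes of `params`
    """
    # First backward pass, keeping the graph for a second differentiation
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Dot product grad(loss) . v, then differentiate that scalar again
    dot = sum((g * vi).sum() for g, vi in zip(grads, v))
    return torch.autograd.grad(dot, params)

# Sanity check: loss = 0.5 * x^T A x has Hessian A, so hvp should give A @ v
A = torch.tensor([[2.0, 0.0], [0.0, 3.0]])
x = torch.tensor([1.0, 1.0], requires_grad=True)
loss = 0.5 * x @ A @ x
(result,) = hvp(loss, [x], [torch.tensor([1.0, 2.0])])
print(result)  # tensor([2., 6.])
```

The quadratic check makes it easy to verify the product against a known Hessian before plugging in a real model's loss.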

Best regards Verena

Kunlun-Zhu commented 5 years ago

Wonderful work! I would also love to see this done in PyTorch.

WonderSeven commented 5 years ago

I do wish the pytorch version could be released as soon as possible.

tengerye commented 5 years ago

Hi, what is the progress now? I would like to join.

expectopatronum commented 5 years ago

I was able to reproduce the hospital readmission notebook experiments in Pytorch with a few issues:

  1. The bar charts are similar (so it returns the influential samples in the same/correct order), but the computed influence values are all too large, all by the same factor. I am not yet sure whether the error is in my loss functions, some missing scaling, ...
  2. The second issue is that it is super slow (about 35 times slower than the TF implementation); so far I haven't found a solution for that (from profiling, it looks like the DataLoaders might be the bottleneck).
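For the second point, one thing I'd try (an untested guess, with hypothetical tensor names): skip the per-batch DataLoader machinery inside the HVP loop and slice preloaded tensors directly:

```python
import torch

# Hypothetical stand-ins for the readmission features/labels
X = torch.randn(1000, 20)
y = torch.randint(0, 2, (1000,))

# A DataLoader with small batches inside the HVP loop pays per-batch
# Python overhead on every iteration; slicing preloaded tensors avoids it.
batch_size = 100
n_batches = 0
for start in range(0, len(X), batch_size):
    xb = X[start:start + batch_size]
    yb = y[start:start + batch_size]
    # ... accumulate the HVP contribution for (xb, yb) here ...
    n_batches += 1
print(n_batches)  # 10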

Since I couldn't get it to run in reasonable time and some things from the original implementation are unclear to me (I sent an email to the first author of the paper but I haven't received an answer yet) I have moved on to other interpretability methods.

My code is messy so I didn't put it online. If someone is interested in helping me - feel free to contact me, I'd like to give it another shot.

tengerye commented 5 years ago

@expectopatronum Hi, I am working on the first experiment by translating the TensorFlow code to PyTorch, it is difficult though. I would like to help and work on it together.

What is your approach? Do you translate the code file by file, or organize it yourself?

expectopatronum commented 5 years ago

First I tried to translate the code file by file, but I think Pytorch and Tensorflow work too differently for that. I also want the influence code decoupled from the model, so I put it in a separate file; in the end I want it to work for every model rather than copying the code into each one. I also tried to figure out which parts are actually used (in the example) and only implement those (for now). E.g. in the hospital_readmission example (which I use to test my implementation), they pass test_indices, so I currently don't handle the part of the function that deals with the case where this is None. I will put my code on Github in the next couple of days and share it with you - maybe we can solve it together.

Here is one of the questions I asked the author, maybe you have an answer to this:

  1. The function update_feed_dict_with_v_placeholder is not clear to me. First you fill the feed_dict with a batch of the data (https://github.com/kohpangwei/influence-release/blob/master/influence/genericNeuralNet.py#L496) and afterwards you seem to update this batch with 'cur_estimate'. What does the feed_dict look like at this stage?

    a) Is the input replaced by v? Is the prediction computed on v or on the input?
    b) Or is v added to the feed_dict, so that it now contains input, label, and v?

tengerye commented 5 years ago

@expectopatronum The function update_feed_dict_with_v_placeholder simply inserts the v_placeholder tensors and their corresponding values into the feed_dict. Each key in the feed_dict is a placeholder tensor, and each value is the value fed for that tensor.
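So it is your option b): v is added alongside the batch. Schematically (the placeholder names and values here are made up, not the repo's):

```python
# Plain strings stand in for TF placeholder tensors in this schematic
input_ph, label_ph, v_ph = "input:0", "labels:0", "v:0"

batch_x, batch_y = [[0.1, 0.2]], [1]  # hypothetical batch
cur_estimate = [[0.5, 0.5]]           # hypothetical current HVP estimate

feed_dict = {input_ph: batch_x, label_ph: batch_y}  # fill_feed_dict_with_batch
feed_dict[v_ph] = cur_estimate  # update_feed_dict_with_v_placeholder adds v
# The input is not replaced: feed_dict now holds input, label, AND v
print(sorted(feed_dict))  # ['input:0', 'labels:0', 'v:0']
```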

Hope it helps. By the way, may I ask when you think your code will be ready online?

expectopatronum commented 5 years ago

Alright, thanks!

I am currently working on it, so I'd expect it to be ready in a couple of hours.

expectopatronum commented 5 years ago

I've now created a private repository with my current status and invited @tengerye. If anyone else is interested in having a look, just let me know.

markus-beuckelmann commented 5 years ago

Hi @expectopatronum, just stumbled upon this...I'm also working on a currently unreleased PyTorch implementation of the paper, feel free to reach out...

Kunlun-Zhu commented 5 years ago

kohpangwei doesn't seem to care about this repository anymore, what a shame

kohpangwei commented 5 years ago

Hello,

@expectopatronum I don't think I saw any email (sorry if I missed it). But thanks @tengerye for answering it.

This repo is frozen to what was used for the paper. I'm glad that there's interest in making a Pytorch version; thank you and good luck! In case it helps, we have a more recent paper that also uses influence functions, and the code there is cleaner and easier to read: https://github.com/kohpangwei/group-influence-release

expectopatronum commented 5 years ago

Hi @kohpangwei, that's strange. I used the email address from your influence paper, is that still valid? I still have some theoretical questions about the paper that probably cannot be answered by someone on Github.

I am aware of the new paper, I didn't have time yet to check it out but I will soon :)

Thanks a lot!

kohpangwei commented 5 years ago

Yup, that email address still works! Feel free to drop me a note there. :)

expectopatronum commented 5 years ago

Thanks, I did! Hopefully it won't get lost this time :)

tengerye commented 5 years ago

Hi, @expectopatronum @Kunlun-Zhu @markus-beuckelmann has anyone successfully repeated the CNN experiment (Fig. 2c) yet? Although the paper states that the method works well in the non-converged case, I find I can never make get_inverse_hvp_cg converge. The original code achieves 0.9996 training accuracy on the CNN and 0.9746 on the test set; in my case, it is 0.9325 and 0.8972, respectively.

I guess it must be related to the damping term.
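To illustrate my suspicion (toy numbers, not from the actual model): conjugate gradient on (H + damping * I) x = v only behaves once the damped operator is positive definite:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toy stand-in for an indefinite Hessian (hypothetical eigenvalues)
H = np.diag([-0.5, 2.0, 5.0])
v = np.ones(3)
damping = 1.0  # lifts the -0.5 eigenvalue to +0.5, so H + damping*I is PSD

op = LinearOperator(H.shape, matvec=lambda x: H @ x + damping * x,
                    dtype=np.float64)
x, info = cg(op, v)
print(info)  # 0 means CG converged
```

With damping = 0.0 the operator is indefinite and CG has no convergence guarantee, which would match what I'm seeing.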

@kohpangwei If possible, would you please share some experience on how to determine whether the training is good enough for the next step? E.g., did you check the eigenvalues of the inverse Hessian?

expectopatronum commented 5 years ago

Hi @tengerye, unfortunately not. I have given up for now since I didn't even manage to exactly reproduce the hospital notebook (and it is super slow in my Pytorch implementation). Would you like to share your code?

kohpangwei commented 5 years ago

Yup, checking the eigenvalues of the Hessian was a helpful diagnostic, and damping it "appropriately" (to make sure it's PSD) is important in the non-convex case. Increasing L2 regularization can also be helpful.
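Concretely (a toy sketch, not the repo code), damping the HVP amounts to computing (H + damping * I) @ v instead of H @ v:

```python
import torch

def damped_hvp(loss, params, v, damping=0.01):
    """(H + damping * I) @ v; damping keeps the operator PSD for non-convex losses."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * vi).sum() for g, vi in zip(grads, v))
    hv = torch.autograd.grad(dot, params)
    return [h + damping * vi for h, vi in zip(hv, v)]

# Toy loss with indefinite Hessian diag(-1, 3): damping=2 lifts it to diag(1, 5)
A = torch.tensor([[-1.0, 0.0], [0.0, 3.0]])
x = torch.tensor([1.0, 1.0], requires_grad=True)
loss = 0.5 * x @ A @ x
(result,) = damped_hvp(loss, [x], [torch.tensor([1.0, 1.0])], damping=2.0)
print(result)  # tensor([1., 5.])
```

The damping value should exceed the magnitude of the most negative Hessian eigenvalue, which is why checking the spectrum first is a useful diagnostic.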

tengerye commented 5 years ago

@kohpangwei Thank you for your kind reply. @expectopatronum Sharing is exactly why I am producing it. Allow me a few days to fix the problem before making it public.

Jinjicheng commented 5 years ago

> I've now created a private repository with my current status and invited @tengerye. If anyone else is interested in having a look, just let me know.

Hi @expectopatronum, I am also interested in a PyTorch implementation of the paper. Could you share your code with me? Thanks.

pianpwk commented 5 years ago

@expectopatronum I'm also very interested in the Pytorch implementation, could you also share your code with me as well? It'd be a fantastic help!

stovecat commented 5 years ago

@expectopatronum I'm also looking for the pytorch implementation of influence functions! It'll be very helpful if you share your code😆

nimarb commented 4 years ago

I've had a pytorch implementation lingering around for some time on my hard drive. I've just polished it up a bit (hope it's readable at all...) and wrote a few docs to go along with it. You can find it here: https://github.com/nimarb/pytorch_influence_functions

It doesn't implement all the graphics, tests, and examples of the original paper - just the algorithm itself.

expectopatronum commented 4 years ago

@nimarb This is amazing, thanks for sharing! If you don't implement stuff from the paper - how do you know if it is correct? (not saying that everything in the paper must be correct)

nimarb commented 4 years ago

Initially, I recreated the Inception and adversarial use cases (they were the most interesting for my purposes) and got the same images for the helpful data points. I hope to find the time to put those out over the Christmas holidays :)

kohpangwei commented 3 years ago

Closing this thread; thanks @nimarb for the implementation. :)