frederick0329 / TracIn

Implementation of Estimating Training Data Influence by Tracing Gradient Descent (NeurIPS 2020)
Apache License 2.0

Where is the loss gradient calculated in the proponent/opponent example? #10

Open Helaly96 opened 1 year ago

Helaly96 commented 1 year ago

Unlike the colab example for self-influence, where the gradient of the loss is clearly calculated using a GradientTape, I don't see where `loss_grad` is being calculated in the proponent/opponent example. `loss_grad = tf.one_hot(labels, 1000) - probs` is just the loss with no gradients, and it isn't calculated anywhere else. Can someone clarify?
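
For context, here is a minimal sketch (not from the repo) of why that line can stand in for the loss gradient: for softmax cross-entropy, the gradient of the loss with respect to the logits has the closed form `probs - one_hot(labels)`, so the proponent/opponent example can write it out directly instead of calling `tape.gradient` (the repo's expression differs only in sign):

```python
import tensorflow as tf

# Sketch: verify that the closed-form expression matches the GradientTape gradient
# of softmax cross-entropy with respect to the logits.
num_classes = 1000
logits = tf.random.normal([4, num_classes])
labels = tf.constant([3, 17, 250, 999])

with tf.GradientTape() as tape:
    tape.watch(logits)
    loss = tf.reduce_sum(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
grad_via_tape = tape.gradient(loss, logits)

probs = tf.nn.softmax(logits)
grad_closed_form = probs - tf.one_hot(labels, num_classes)

# The two agree up to floating-point error.
print(tf.reduce_max(tf.abs(grad_via_tape - grad_closed_form)).numpy())
```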

gumityolcu commented 10 months ago

I also need this. In particular, can someone clarify where the random projection trick is being done?

Thanks
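
For reference, a hypothetical sketch (names like `make_projection`, `project_gradient`, and `PROJ_DIM` are illustrative, not from the repo) of what the random projection trick from the paper amounts to: multiply flattened per-example gradients by a fixed random matrix so that dot products, and hence TracIn scores, are approximately preserved in a much lower dimension:

```python
import tensorflow as tf

PROJ_DIM = 512  # illustrative projection dimension

def make_projection(grad_dim, proj_dim=PROJ_DIM, seed=0):
    # Gaussian entries scaled so that E[<Px, Py>] = <x, y>.
    g = tf.random.Generator.from_seed(seed)
    return g.normal([grad_dim, proj_dim]) / tf.sqrt(float(proj_dim))

def project_gradient(flat_grad, projection):
    # flat_grad: [grad_dim] flattened per-example gradient -> [proj_dim] sketch.
    return tf.linalg.matvec(projection, flat_grad, transpose_a=True)

# Usage (illustrative): project a flattened gradient before computing dot products.
grad_dim = 4096
proj = make_projection(grad_dim)
flat_grad = tf.random.normal([grad_dim])
low_dim_grad = project_gradient(flat_grad, proj)  # shape [PROJ_DIM]
```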

SeanZh30 commented 9 months ago

I am also a little confused about this. I checked the other issues discussed by the author. In #6 he mentioned that we can get the influence (TracIn score) = lg_sim (error similarity) * a_sim (encoding similarity), according to Appendix F. This seems a bit confusing to me. If anyone could show me the derivation, I would be very grateful.
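
A sketch of the identity behind that factorization (my own summary, not the repo's code): when only the final dense layer is used, the loss gradient with respect to its weight matrix is the outer product of the logit gradient (`probs - one_hot`) and the layer's input activations, so the dot product of two such gradients factors into lg_sim * a_sim:

```python
import tensorflow as tf

# Numerically check the factorization: <outer(a1, lg1), outer(a2, lg2)> = <lg1, lg2> * <a1, a2>.
num_classes, act_dim = 10, 32

def last_layer_grad(loss_grad, activation):
    # dLoss/dW for a final dense layer (bias omitted): outer product of activation and logit gradient.
    return tf.einsum('i,j->ij', activation, loss_grad)

lg1, lg2 = tf.random.normal([num_classes]), tf.random.normal([num_classes])
a1, a2 = tf.random.normal([act_dim]), tf.random.normal([act_dim])

g1 = last_layer_grad(lg1, a1)
g2 = last_layer_grad(lg2, a2)

full_dot = tf.reduce_sum(g1 * g2)                             # <grad1, grad2>
factored = tf.reduce_sum(lg1 * lg2) * tf.reduce_sum(a1 * a2)  # lg_sim * a_sim

# These match up to floating-point error.
print(full_dot.numpy(), factored.numpy())
```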