This is a PyTorch reimplementation of Influence Functions from the ICML 2017 best paper, "Understanding Black-box Predictions via Influence Functions" by Pang Wei Koh and Percy Liang.
Gap between influence and real loss difference #18
Hi, I am trying to perform a leave-one-out retraining to compare the influence computed by `calc_influence_single` here with the actual loss difference, obtained by computing the test loss twice, before and after retraining. I use `calc_loss` here to compute the loss, and I scale the influence by `len(trainset)`. Surprisingly, I found a big difference between the computed influence and the real loss difference after retraining. For a random pick, `test_idx=10, train_idx_to_remove = 609`, I'm getting the following result, which doesn't look right to me:
So far I think it may have something to do with the fact that the net in the example is trained for only 10 epochs, so it does not get close to a global optimum, but I'm not sure.
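For reference, Koh & Liang's result says the leave-one-out loss change should satisfy `loss(z_test; θ_{-z}) - loss(z_test; θ) ≈ -(1/n) · I_up,loss(z, z_test)` with `n = len(trainset)`. Below is a minimal sketch of the comparison I'm running; `influence_fn`, `loss_fn`, and `retrain_without` are hypothetical stand-ins for `calc_influence_single`, `calc_loss`, and a from-scratch retraining routine, so the signatures here are illustrative rather than this repo's actual API:

```python
def loo_comparison(model, train_loader, test_loader,
                   test_idx, train_idx_to_remove,
                   influence_fn, loss_fn, retrain_without):
    """Compare the influence-function prediction against the real
    leave-one-out (LOO) loss change for one training point.

    influence_fn, loss_fn, and retrain_without are hypothetical
    stand-ins; they are not the exact API of this repo.
    """
    n = len(train_loader.dataset)

    # Predicted LOO change. Koh & Liang (2017):
    #   loss(z_test; theta_{-z}) - loss(z_test; theta)
    #     ~= -(1/n) * I_up,loss(z, z_test)
    # Sign and scaling conventions differ between implementations,
    # so the division by n may already happen inside influence_fn.
    influences = influence_fn(model, train_loader, test_loader, test_idx)
    predicted_diff = -influences[train_idx_to_remove] / n

    # Actual LOO change: test loss before vs. after retraining
    # from scratch with the chosen training point removed.
    loss_before = loss_fn(model, test_loader, test_idx)
    model_loo = retrain_without(train_loader, exclude_idx=train_idx_to_remove)
    loss_after = loss_fn(model_loo, test_loader, test_idx)
    actual_diff = loss_after - loss_before

    return predicted_diff, actual_diff
```

With `test_idx=10` and `train_idx_to_remove=609` as above, `predicted_diff` and `actual_diff` are the two numbers being compared. Note that the approximation is only expected to be tight when the model sits at (or very near) an optimum of a strongly convex objective, which is consistent with the 10-epoch training concern.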
Thanks in advance for any suggestions!