hhhzzj closed this issue 5 years ago.
Good catch! Indeed, since I am using an external optimizer (SciPy's) for L-BFGS, so far I have been calling tf.gradients every time stadv.optimization.lbfgs is called. What I missed is that this makes the graph grow on every call of the routine. I have pushed a fix (see https://github.com/rakutentech/stAdv/commit/26286a8e84b61d474a958735dcff8f70d31deccc) and released it as version 0.2.1. You can upgrade with pip install -U stadv. Providing grad_op as input (as done in the updated demo notebook) should solve the problem. Please let me know if it works for you.
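To make the fix concrete: in a TF1-style static graph, tf.gradients appends new ops to the default graph on every call, so calling it inside an attack loop grows the graph without bound, while building the gradient op once and passing the handle (the grad_op approach above) keeps the graph fixed. Here is a minimal sketch of that difference using a hypothetical stand-in Graph class (not TensorFlow itself) that only counts nodes:

```python
class Graph:
    """Toy stand-in for a TF1-style static graph: it only counts nodes."""
    def __init__(self):
        self.nodes = []

    def add_gradient_node(self):
        # In TF1, tf.gradients() appends new ops to the default graph;
        # here we just record that a node was created.
        node = "grad_{}".format(len(self.nodes))
        self.nodes.append(node)
        return node


# Leaky pattern: building the gradient op inside the attack loop.
leaky_graph = Graph()
for _ in range(1000):
    grad_op = leaky_graph.add_gradient_node()  # graph grows on every call
leaky_count = len(leaky_graph.nodes)

# Fixed pattern: build the op once, reuse the handle inside the loop.
fixed_graph = Graph()
grad_op = fixed_graph.add_gradient_node()      # built exactly once
for _ in range(1000):
    _ = grad_op                                # reuse; no new nodes added
fixed_count = len(fixed_graph.nodes)

print(leaky_count, fixed_count)
```

The same reasoning applies to the real library: precompute the gradient op once per model, then hand it to the optimization routine on every attack.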
Let me close this issue. If the fix in v0.2.1 doesn't solve your problem, feel free to reopen.
It solved my problem, thank you so much.
When I ran the program to calculate the ASR, I hit a new problem. There are 9913 clean images predicted correctly by model A, and I planned to attack these images to calculate the ASR. But when it was attacking the 2039th clean image, the program stopped, and it looked like a memory leak. So I wanted to finalize the graph to check the program. The picture above shows that every iteration tf.gradients() builds a new node, and eventually there are too many nodes to run. What do you think about it?
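Finalizing the graph is a good way to catch this kind of leak: in TF1, tf.Graph.finalize() makes the graph read-only, so any later tf.gradients call that tries to add ops raises an error immediately instead of silently growing memory. A minimal sketch of the idea, again with a hypothetical toy Graph rather than TensorFlow itself:

```python
class Graph:
    """Toy graph supporting a finalize() check, mimicking tf.Graph.finalize()."""
    def __init__(self):
        self.nodes = []
        self._finalized = False

    def finalize(self):
        # After this point the graph is read-only.
        self._finalized = True

    def add_node(self, name):
        if self._finalized:
            raise RuntimeError("Graph is finalized; cannot add " + name)
        self.nodes.append(name)


g = Graph()
g.add_node("logits")   # normal graph construction
g.finalize()           # done building; lock the graph

try:
    g.add_node("grad_0")   # simulates tf.gradients called after finalize
    leak_detected = False
except RuntimeError:
    leak_detected = True   # the leak shows up as an immediate error

print(leak_detected)
```

In real code the equivalent check would be to call tf.get_default_graph().finalize() after building the model and the single grad_op, before entering the attack loop over the 9913 images.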