Lodour opened this issue 9 months ago
**Is your feature request related to a problem? Please describe.**
I was benchmarking the performance of some black-box attacks and noticed that the Square Attack issued about 50% duplicated queries. This happens because most queries are sent twice in a subtle way; see the code below.

**Describe the solution you'd like**
Update the implementation to reuse the latest prediction outputs. I am happy to send a PR if you find this performance improvement useful.

**Describe alternatives you've considered**
N/A

**Additional context**
For example, `x_adv` is passed to `self.estimator.predict` at L347, and its subset `x_robust = x_adv[sample_is_robust]` is immediately passed to `self.loss`, which boils down to a duplicated call of `self.estimator.predict` on the same inputs. https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/337a15f9db8e447426aa9cbe1f090d14e3f1d3a4/art/attacks/evasion/square_attack.py#L346-L357
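To make the overhead concrete, here is a minimal, self-contained sketch of the pattern; `CountingEstimator` and the all-robust mask are hypothetical stand-ins for the ART estimator and attack state, not library code:

```python
import numpy as np


class CountingEstimator:
    """Counts predict() calls, standing in for a query-limited black-box model."""

    def __init__(self, nb_classes: int = 10):
        self.nb_classes = nb_classes
        self.n_queries = 0

    def predict(self, x: np.ndarray, batch_size: int = 128) -> np.ndarray:
        self.n_queries += len(x)
        return np.random.rand(len(x), self.nb_classes)  # dummy logits


estimator = CountingEstimator()
x_adv = np.random.rand(100, 32, 32, 3)

# First call, mirroring self.estimator.predict(x_adv, ...) at L347:
y_pred = estimator.predict(x_adv)
sample_is_robust = np.ones(len(x_adv), dtype=bool)  # assume all samples still robust

# self.loss(x_robust, ...) then internally calls predict on the same inputs again:
x_robust = x_adv[sample_is_robust]
_ = estimator.predict(x_robust)

print(estimator.n_queries)  # 200 queries for 100 samples, i.e. half are duplicates
```

Running this counts 200 queries for 100 samples, which matches the roughly 50% duplication seen in the benchmark.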
Hi @Lodour, thank you for using ART! What would your proposed solution with the same functionality look like?

My workaround is passing `y_pred` to `self.loss`, but this only handles the default loss function `self._get_logits_diff`. The method would become something like:
```python
def _get_logits_diff(self, x: np.ndarray, y: np.ndarray, y_pred: Optional[np.ndarray] = None):
    # Only query the estimator when no cached predictions are supplied.
    if y_pred is None:
        y_pred = self.estimator.predict(x, batch_size=self.batch_size)
    ...
```
I haven't come up with a general solution for custom loss functions.
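At the call site, the cached predictions could then be forwarded so that `self.loss` never re-queries the model. Below is a hedged sketch: the variable names mirror the linked snippet, but the `sample_is_robust` computation and the overall plumbing are my assumptions for the untargeted case, not ART's actual code.

```python
# Hedged sketch of the call site in SquareAttack.generate; the integration
# shown here is an assumption, not the library's current implementation.
y_pred = self.estimator.predict(x_adv, batch_size=self.batch_size)  # one query per sample
sample_is_robust = np.argmax(y_pred, axis=1) == np.argmax(y, axis=1)  # untargeted case

x_robust = x_adv[sample_is_robust]
y_robust = y[sample_is_robust]

# Reuse the predictions computed above instead of letting self.loss
# trigger a second predict() on the same inputs:
sample_loss_init = self.loss(x_robust, y_robust, y_pred=y_pred[sample_is_robust])
```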