A wrong `if` condition raised an error when duplicates were found while building the final estimator by sampling from the same point (i.e., when duplicate data is not recorded in the function logger inputs). The error occurred because, in the final steps of the optimization, the algorithm added the same final point at least twice: as if it were stuck, several function evaluations were carried out at the final incumbents. The faulty condition in the final-estimator code then raised the error upon detecting these duplicates. This was a rare event, because most of the time PyBADS does not evaluate the same point repeatedly for the last incumbent during the final steps of the optimization.
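As an illustration, here is a minimal sketch of the kind of condition involved. All names are hypothetical and this is not PyBADS's actual code: the buggy check raised on any duplicate, while the corrected check tolerates duplicates that come from re-evaluating the same final incumbent without recording them.

```python
def check_duplicate(x, recorded_points, recording):
    """Hypothetical duplicate check used while building the final estimator.

    `recording` is True when the duplicate data would actually be stored in
    the function logger, False when only evaluation counters are updated.
    """
    is_duplicate = any(x == p for p in recorded_points)
    # Buggy condition (sketch): raise on any duplicate, even when the
    # algorithm legitimately re-evaluates the final incumbent without
    # recording it.
    #
    #   if is_duplicate:
    #       raise ValueError("duplicate point found")
    #
    # Corrected condition (sketch): a duplicate is only an error if it would
    # be recorded twice in the logger.
    if is_duplicate and recording:
        raise ValueError("duplicate point would be recorded twice")
    return is_duplicate
```

The key design point is that duplicate detection by itself is not an error condition; whether to raise depends on what the caller intends to do with the point.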
Recall that the function logger has three cases when adding points:
- Heteroskedastic case: the observations are merged and the new variance is estimated; the point is recorded.
- Unknown-noise case with a new point: the point is recorded.
- Duplicate point: the point is not recorded; only the number of function evaluations and the total evaluation time are updated.

The issue was in the condition that decides whether a point is recorded.
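The three cases above can be sketched as follows. This is a simplified illustration, not PyBADS's actual `FunctionLogger` API; all names and the inverse-variance merge rule are assumptions for the sake of the example.

```python
class FunctionLogger:
    """Minimal sketch of a function logger with the three cases above."""

    def __init__(self, noise_known: bool):
        self.noise_known = noise_known  # True in the heteroskedastic case
        self.X = []                     # recorded input points
        self.Y = []                     # recorded function values
        self.S2 = []                    # recorded noise variances
        self.fun_eval_count = 0         # total number of function evaluations
        self.total_eval_time = 0.0      # total time spent evaluating

    def add(self, x, y, s2=None, eval_time=0.0):
        # Counters are updated for every evaluation, recorded or not.
        self.fun_eval_count += 1
        self.total_eval_time += eval_time
        try:
            idx = self.X.index(x)       # is x a duplicate of a stored point?
        except ValueError:
            idx = None

        if idx is not None and self.noise_known:
            # Case 1 (heteroskedastic): merge the observations with an
            # inverse-variance weighted mean and update the variance;
            # the merged point stays recorded.
            w1, w2 = 1.0 / self.S2[idx], 1.0 / s2
            self.Y[idx] = (w1 * self.Y[idx] + w2 * y) / (w1 + w2)
            self.S2[idx] = 1.0 / (w1 + w2)
        elif idx is None:
            # Case 2 (new point, unknown noise): record it.
            self.X.append(x)
            self.Y.append(y)
            self.S2.append(s2)
        else:
            # Case 3 (duplicate, not recorded): only the evaluation
            # counters above are updated. Re-evaluating the final incumbent
            # lands in this branch, which is why the wrong condition
            # surfaced at the end of the optimization.
            pass
```

For example, adding the same point twice with unknown noise leaves a single recorded point but two counted evaluations, while adding it twice in the heteroskedastic case merges the two observations into one recorded point with reduced variance.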