Closed: itsdawei closed this pull request 1 year ago.
I think there are slight inconsistencies across the CMA-ME, DQD, and CMA-MAE papers -- in practice, lambda = 36 or 37 won't make much of a difference. Feel free to switch it to whatever is consistent with the paper and the official code release.
We adjust the implementation to match the released source code of the paper.
Nevertheless, this is a small discrepancy that probably did not affect the outcome of the experiments much.
Description
We originally had batch_size for DQD emitters include the solution returned by ask_dqd, i.e., if you wanted ask to return 36 solutions, you had to pass batch_size=37. We reverted this decision and now assume that ask_dqd returns only one solution, i.e., you would pass batch_size=36 in the above example. This makes the batch_size semantics consistent between DQD and non-DQD emitters.
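To make the convention concrete, here is a minimal sketch of the two counting schemes. The class and method names below are illustrative stand-ins, not the actual pyribs emitter API:

```python
# Illustrative sketch of the batch_size convention described above.
# Under the new convention, batch_size counts only the solutions
# returned by ask(); ask_dqd() always returns exactly one extra
# solution (the one whose gradients are computed).
class DQDEmitterSketch:
    def __init__(self, batch_size):
        self.batch_size = batch_size

    def ask_dqd(self):
        # A single (dummy) solution for gradient computation.
        return [0.0]

    def ask(self):
        # batch_size solutions branched from the gradient information.
        return [[0.0]] * self.batch_size

emitter = DQDEmitterSketch(batch_size=36)
print(len(emitter.ask()))      # 36 under the new convention
print(len([emitter.ask_dqd()]))  # plus 1 from ask_dqd
```

Under the old convention, the same emitter would have needed batch_size=37 to return 36 solutions from ask.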
However, at some point we must have forgotten to update the docstrings accordingly. This PR addresses that.
Furthermore, I found that some configurations in sphere.py are not consistent with the original paper. For example, the CMA-MAE paper "select[s] a batch size $\lambda = 36$ following prior work" (Appendix A), while this line clearly uses $\lambda = 37$. Similarly, our sphere experiment with CMA-MAEGA also uses $\lambda = 37$.
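For reference, a sketch of what the corrected settings might look like. The dict layout and keys here are hypothetical, chosen only to illustrate the change; the actual structure of sphere.py may differ:

```python
# Hypothetical excerpt: aligning the sphere.py experiment configs with
# the paper's lambda = 36 (CMA-MAE paper, Appendix A). The CONFIG dict
# and its keys are illustrative, not the real sphere.py structure.
CONFIG = {
    "cma_mae": {"batch_size": 36},    # was 37; paper uses lambda = 36
    "cma_maega": {"batch_size": 36},  # likewise for CMA-MAEGA
}

for name, cfg in CONFIG.items():
    print(name, cfg["batch_size"])  # both print 36
```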
I could be missing something here, so I figured I should try to clarify this in the PR in case other users have the same confusion as me.
TODO
Questions
Status
- yapf
- pytest
- pylint
- HISTORY.md