Closed: graehl closed this 2 years ago
Thanks Johnathan!
I agree that defaulting allow_tf32 to true, keeping behavior consistent with previous versions, would be preferred.
While you are at it, you could also update requirements.txt to allow PyTorch 1.12.x (<1.13.0).
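A loosened pin along these lines in requirements.txt would admit 1.12.x while excluding 1.13; the lower bound shown is illustrative, not taken from the repo:

```
torch>=1.10.0,<1.13.0
```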
I was unaware that it defaulted true previously. Agree.
I rebased into a single commit for all of the above.
I had to revert the pytest<3 requirement due to the automated test failure above, even though tests do in fact fail with pytest 3 (they pass locally for me).
Thanks for the changes, I realized I never submitted my pending review from over a month ago, apologies for the delay. I'll merge this now.
--tf32 0|1: bool flag (torch.backends.cuda.matmul.allow_tf32) enabling transparent float32 acceleration with 10-bit mantissa precision (19 bits total). Defaults to true for backward compatibility with torch < 1.12. Also allows toggling --tf32 when continuing training.
device.init_device called by train, translate, and score
allow torch 1.12 in requirements.txt
require pytest<3 (pytest 3 fails)
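The device setup described above can be sketched as follows. This is a hedged illustration of how a --tf32 flag might be applied in an init_device-style helper, not the actual sockeye implementation; the function name set_tf32 and its signature are assumptions:

```python
def set_tf32(enable: bool) -> bool:
    """Apply a --tf32 setting to torch.backends.cuda.matmul.allow_tf32 if available.

    Hypothetical sketch: the real sockeye device.init_device (called by
    train, translate, and score) may differ. Returns the value applied.
    """
    try:
        import torch
    except ImportError:
        # torch not installed; nothing to configure in this sketch.
        return enable
    matmul = getattr(torch.backends.cuda, "matmul", None)
    if matmul is not None and hasattr(matmul, "allow_tf32"):
        # Present since torch 1.7; defaults changed in torch 1.12,
        # which is why the PR pins an explicit default.
        matmul.allow_tf32 = enable
    return enable
```

Guarding with hasattr keeps the helper a no-op on older torch versions that predate the attribute, matching the backward-compatibility goal stated above.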
Pull Request Checklist
Unit tests pass (pytest)
System tests pass (pytest test/system)
Passed code style checking (./style-check.sh)
Updated major/minor version in sockeye/__init__.py. Major version bump if this is a backwards incompatible change.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.