Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It lets you use different hardware executors at once, across one or thousands of GPUs.
Before submitting
- [ ] Was this discussed/approved via a GitHub issue? (not needed for typos and docs improvements)
- [x] Did you read the [contributor guideline](https://github.com/Lightning-AI/pytorch-lightning/blob/main/.github/CONTRIBUTING.md), Pull Request section?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?
What does this PR do?
This is not directly tied to an existing issue, but it was needed for #530.
Also, I am not sure whether this needs a test case. Please let me know if so!
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
FYI, I have updated `type_as_sample` in `opinfos.py` because it was failing a CI test. I passed the atol/rtol values that were previously used for `test_vjp_correctness`.
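For context, here is a minimal sketch of why explicit atol/rtol values matter in a correctness check like this. It is illustrative only: the tensor values and tolerances below are made up and are not the ones used in this PR; it just shows how a comparison that fails under `torch.testing.assert_close`'s default float32 tolerances can pass once looser tolerances are supplied.

```python
import torch

# Hypothetical example: a result with small numerical drift, e.g. from
# reordered reductions in a different executor. Values are illustrative.
expected = torch.tensor([1.0, 2.0, 3.0])
actual = expected + 1e-3

# Default float32 tolerances (rtol=1.3e-6, atol=1e-5) are too strict here.
try:
    torch.testing.assert_close(actual, expected)
    tight_ok = True
except AssertionError:
    tight_ok = False
print(tight_ok)  # False: the comparison fails under default tolerances

# With explicit, looser tolerances the same comparison passes.
torch.testing.assert_close(actual, expected, atol=1e-2, rtol=1e-2)
```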
Did you have fun?
⚡️