Closed: jerryzh168 closed this pull request 3 months ago.
Note: Links to docs will display an error until the docs builds have been completed.
As of commit 36a6f10ac7a310d608123136e042e84718b4e424 with merge base 1759a239a7eceb07ae7b43f33893bb4021d57a4c: :green_heart: Looks good so far! There are no failures yet. :green_heart:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D58506712
Thanks for this fix. Could you leave a comment specifying what that import is doing? Also, it looks like there are a lot of unused imports.
Sure, this import is for deciding which quantizer to include; it's used at L48 in the file.
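For context, here is a minimal sketch of that pattern: a seemingly unused import whose real job is to gate which quantizers the module registers. This assumes the `TORCH_VERSION_AFTER_2_3` flag from `torchao.utils` and the `8da4w`/`8da4w-gptq` mode names; it is illustrative, not necessarily the exact code in `torchtune/utils/quantization.py`.

```python
# Sketch of a conditional quantizer registry (assumed names, not the actual file).

from torchao.quantization.quant_api import Int8DynActInt4WeightQuantizer

# This import can look unused at first glance, but it is consulted below
# to decide which quantizers to expose.
from torchao.utils import TORCH_VERSION_AFTER_2_3

# Map quantizer classes to their shorthand mode strings.
_quantizer_to_mode = {Int8DynActInt4WeightQuantizer: "8da4w"}

if TORCH_VERSION_AFTER_2_3:
    # The GPTQ quantizer requires a newer PyTorch, so only import and
    # register it when the version flag says it is supported.
    from torchao.quantization.GPTQ import Int8DynActInt4WeightGPTQQuantizer

    _quantizer_to_mode[Int8DynActInt4WeightGPTQQuantizer] = "8da4w-gptq"


def get_quantizer_mode(quantizer) -> str:
    """Return the mode string for a quantizer instance, or None if unsupported."""
    return _quantizer_to_mode.get(type(quantizer))
```

With this structure, a comment on the top-level import (as requested above) would make clear that it must not be removed by unused-import cleanup.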
Attention: Patch coverage is 50.00000% with 2 lines in your changes missing coverage. Please review.
Project coverage is 66.57%. Comparing base (74fb5e4) to head (36a6f10). Report is 5 commits behind head on main.
| Files | Patch % | Lines |
|---|---|---|
| torchtune/utils/quantization.py | 50.00% | 2 Missing :warning: |
:umbrella: View full report in Codecov by Sentry.
Summary: att
Reviewed By: SLR722
Differential Revision: D58506712