pytorch / executorch

On-device AI across mobile, embedded and edge for PyTorch
https://pytorch.org/executorch/

add uint16 to serialization #6942

Closed JacobSzwejbka closed 2 days ago

JacobSzwejbka commented 3 days ago

Summary: Support this type directly rather than converting to bits16

Differential Revision: D65915213
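A minimal sketch of the kind of change described above: the exporter's dtype-to-serialized-scalar-type mapping gains a first-class uint16 entry instead of routing through a bits16 placeholder. The names below (`ScalarType`, `_TORCH_TO_SERIALIZED`, `serialize_dtype`) are illustrative assumptions, not the actual executorch schema or serializer API.

```python
import torch
from enum import Enum


class ScalarType(Enum):
    """Illustrative subset of a serialized scalar-type enum (not the real schema)."""
    UINT8 = 0
    INT16 = 1
    BITS16 = 2
    UINT16 = 3  # now representable directly


# Hypothetical dtype -> serialized type table used during export.
_TORCH_TO_SERIALIZED = {
    torch.uint8: ScalarType.UINT8,
    torch.int16: ScalarType.INT16,
    # Before this PR, torch.uint16 would have been re-labelled as an opaque
    # bits16 container; after it, the dtype serializes as its own entry.
    torch.uint16: ScalarType.UINT16,
}


def serialize_dtype(dtype: torch.dtype) -> ScalarType:
    """Map a torch dtype to its serialized scalar type, erroring on unsupported dtypes."""
    try:
        return _TORCH_TO_SERIALIZED[dtype]
    except KeyError:
        raise ValueError(f"Unsupported dtype for serialization: {dtype}")
```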

pytorch-bot[bot] commented 3 days ago

:link: Helpful Links

:test_tube: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6942

Note: Links to docs will display an error until the docs builds have been completed.

:heavy_exclamation_mark: 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

:white_check_mark: No Failures

As of commit 556136b6679407ca08d5596af459b85a96490686 with merge base eae0b04173e945e365173009508a15b578069285: :green_heart: Looks good so far! There are no failures yet. :green_heart:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot commented 3 days ago

This pull request was exported from Phabricator. Differential Revision: D65915213

facebook-github-bot commented 2 days ago

This pull request was exported from Phabricator. Differential Revision: D65915213

JacobSzwejbka commented 2 days ago

@pytorchbot label "release notes: Add UInt16 dtype support for quant in serialization"

pytorch-bot[bot] commented 2 days ago

Didn't find following labels among repository labels: release notes: Add UInt16 dtype support for quant in serialization

JacobSzwejbka commented 2 days ago

@pytorchbot label "release notes: runtime topic: Add UInt16 dtype support for quant in serialization"

pytorch-bot[bot] commented 2 days ago

Didn't find following labels among repository labels: release notes: runtime topic: Add UInt16 dtype support for quant in serialization