facebookresearch/diffq
DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight or group of weights, in order to achieve a given trade-off between model size and accuracy.
234 stars · 15 forks
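For context on what the library does, here is a minimal training-loop sketch following the usage pattern described in the repository's README: wrap the model in a DiffQuantizer, register it with the optimizer, and add the differentiable model-size term to the task loss. The toy model, data, penalty value, and step count below are placeholders, and the method names should be checked against the current README.

```python
import torch
import torch.nn.functional as F
from diffq import DiffQuantizer  # pip install diffq

# Toy model and data, only to keep the sketch self-contained.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
x, y = torch.randn(256, 16), torch.randn(256, 1)

# The optimizer is created before the quantizer so that DiffQ can
# register its extra bit-width parameters with it.
optim = torch.optim.Adam(model.parameters(), lr=3e-4)
quantizer = DiffQuantizer(model)
quantizer.setup_optimizer(optim)

penalty = 1e-3  # weight of the differentiable model-size term (placeholder value)
model.train()
for step in range(100):
    optim.zero_grad()
    loss = F.mse_loss(model(x), y) + penalty * quantizer.model_size()
    loss.backward()
    optim.step()

# Report the bit-packed model size and export the quantized weights.
print(f"quantized model size: {quantizer.true_model_size():.2f} MB")
torch.save(quantizer.get_quantized_state(), "quantized.th")
```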
Issues
Will diffq make model faster? · #13 · tz301 · opened · 1 year ago · 0 comments
Getting error by pip install diffq on Windows · #12 · PETERCHUU · closed · 1 year ago · 3 comments
Why checkpoint.pth on the output folder is not in compliance with true model size? · #11 · Eurus-Holmes · closed · 2 years ago · 2 comments
add an example for Vision Transformer and Pretrained Vision Transformer on CIFAR · #10 · Eurus-Holmes · closed · 2 years ago · 1 comment
Quantized Model Output NaN / 0 · #9 · sophia1488 · opened · 2 years ago · 8 comments
Number of parameters doubled · #8 · lyghter · closed · 2 years ago · 1 comment
Adding TorchScript support for DiffQ · #7 · adefossez · closed · 3 years ago · 0 comments
Forgot to add some files for the new cifar experiments · #6 · adefossez · closed · 3 years ago · 0 comments
Updated version, including LSQ comparison, and improved API. · #5 · adefossez · closed · 3 years ago · 0 comments
where the activation/feature-map is quantized? · #4 · xieyi4650 · opened · 3 years ago · 1 comment
require 'override' keyword · #3 · xieyi4650 · opened · 3 years ago · 3 comments
Is it compatible with transformers library? · #2 · snaik2016 · opened · 3 years ago · 2 comments
Unquant · #1 · adefossez · closed · 3 years ago · 0 comments