Mxbonn / INQ-pytorch
A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights"
164 stars · 27 forks
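For context on the method this repository implements: INQ iteratively partitions a layer's weights into two groups, quantizes one group to powers of two (plus zero), and retrains the remaining full-precision group to recover accuracy. A minimal sketch of those two steps, using illustrative exponent bounds `n1`/`n2` and plain Python lists rather than the repository's actual API:

```python
import math

def quantize_to_power_of_two(w, n1=-1, n2=-7):
    """Map a weight to the codebook {0, ±2^n2, ..., ±2^n1}.

    Rounding log2(|w|) is a simplification of the paper's exact
    nearest-level rule; n1/n2 here are illustrative defaults.
    """
    if abs(w) < 2.0 ** (n2 - 1):
        return 0.0  # too small for the lowest level -> prune to zero
    exp = min(n1, max(n2, round(math.log2(abs(w)))))
    return math.copysign(2.0 ** exp, w)

def inq_partition(weights, fraction):
    """Return indices of the largest-magnitude `fraction` of weights.

    These are quantized first (the pruning-inspired partition strategy);
    the rest stay full-precision and are retrained before the next round.
    """
    order = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
    return set(order[: int(len(weights) * fraction)])
```

In a full implementation the quantized group's gradients are masked out during retraining, and the quantized fraction grows each iteration (e.g. 50% → 75% → 87.5% → 100%) until every weight is a power of two.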
Issues
#16 · Seems not to have acceleration at the inference step · audreyeternal · closed 2 years ago · 1 comment
#15 · The dataset was replaced with CIFAR100, and the accuracy after quantization was less than 10% · blutofilp · opened 2 years ago · 0 comments
#14 · Can the probability be set the same in the random strategy? · hheavenknowss · opened 3 years ago · 0 comments
#13 · The code flow is slightly different from the paper? · pengpenglove · opened 3 years ago · 0 comments
#12 · Enable Passing Options through Command Line + Save Train Log · mostafaelhoushi · closed 4 years ago · 1 comment
#11 · 4 bits on ResNet18 results in a 6% reduction in error · mostafaelhoushi · opened 4 years ago · 7 comments
#10 · Getting all parts of our project together · FredericOdermatt · closed 4 years ago · 0 comments
#9 · Can I get inference acceleration on my own model using this tool? · WilliamZhaoz · opened 5 years ago · 0 comments
#8 · Is the logic in the 'example' code wrong? · cool-ic · closed 5 years ago · 1 comment
#7 · Why does the output model size become bigger? · Aaron4Fun · closed 5 years ago · 1 comment
#6 · Use `required_grad = False` · xysun · closed 5 years ago · 1 comment
#5 · About the pretrained model · tongtyr · opened 5 years ago · 5 comments
#4 · About accuracy · tongyutyr · closed 5 years ago · 5 comments
#3 · Don't quantize bn and biases · Mxbonn · closed 5 years ago · 1 comment
#2 · Skip quantization of bias and batch_normalization parameters · ggeor84 · closed 5 years ago · 0 comments
#1 · Running the code results in almost 30% accuracy reduction · ggeor84 · closed 5 years ago · 5 comments