IntelLabs / Model-Compression-Research-Package
A library for researching neural network compression and acceleration methods.
Apache License 2.0 · 136 stars · 24 forks
Issues
#24 · Hello author, is mixed compression possible? For example, combining quantization, pruning, and distillation on BERT at the same time? · a-row-person · opened 2 months ago · 1 comment
#23 · Fix KD to support newer Transformers & PyTorch versions · ofirzaf · closed 2 months ago · 0 comments
#22 · Fix dependencies to avoid security issue · ofirzaf · closed 9 months ago · 0 comments
#21 · Fix dependencies · ofirzaf · closed 10 months ago · 0 comments
#20 · Bump transformers from 4.7.0 to 4.30.0 in /research/dynamic-tinybert · dependabot[bot] · closed 1 year ago · 0 comments
#19 · Fix setup bug · ofirzaf · closed 1 year ago · 0 comments
#18 · Cannot import the package · g12bftd · closed 1 year ago · 2 comments
#17 · Bump torch to 1.13.1 to fix security · ofirzaf · closed 1 year ago · 0 comments
#16 · Ido branch · IdoAmit198 · closed 1 year ago · 0 comments
#15 · Sparse models available for download? · eldarkurtic · closed 2 years ago · 2 comments
#14 · Code analysis identified several places where objects were either not · michaelbeale-IL · closed 1 year ago · 0 comments
#13 · Fix distillation of different HF/transformers models · ofirzaf · closed 2 years ago · 0 comments
#12 · Small optimizations · ofirzaf · closed 2 years ago · 0 comments
#11 · Uniform magnitude pruning implementation problem · LYF915 · closed 2 years ago · 13 comments
#10 · Fix bug in rewinding LR scheduling · ofirzaf · closed 2 years ago · 0 comments
#9 · How to save QAT quantized model? · OctoberKat · closed 2 years ago · 4 comments
#8 · Fix `max_seq_length` bug in language-modeling example · ofirzaf · closed 2 years ago · 0 comments
#7 · Issue of max_seq_length in MLM pretraining data preprocessing · XinyuYe-Intel · closed 2 years ago · 5 comments
#6 · How to interpret hyperparams? · eldarkurtic · closed 2 years ago · 2 comments
#5 · LR scheduler clarification · eldarkurtic · closed 2 years ago · 4 comments
#4 · Difference between end_pruning_step and policy_end_step · eldarkurtic · closed 2 years ago · 6 comments
#3 · add Dynamic_TinyBERT · shira-g · closed 2 years ago · 0 comments
#2 · Fix typo for target_sparsity · eldarkurtic · closed 2 years ago · 1 comment
#1 · Upstream pruning · eldarkurtic · closed 2 years ago · 1 comment