IntelLabs / FP8-Emulation-Toolkit
PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware.
BSD 3-Clause "New" or "Revised" License · 100 stars · 10 forks
Issues
#18 · fp8 performance · lyp-liuyipeng · opened 2 months ago · 0 comments
#17 · load datasets error in FP8 training for BERT · fmo-mt · opened 6 months ago · 0 comments
#16 · Pascal support · chaiebnadhem · opened 6 months ago · 0 comments
#15 · Why does the quantized value still exceed the range of FP8 representation? · adfad1 · opened 8 months ago · 0 comments
#14 · Could you briefly explain what does those in modes mean? · pyjhzwh · closed 8 months ago · 1 comment
#13 · reproduction of results on "FP8 Format for Deep learning" · amine-ammor · closed 11 months ago · 0 comments
#12 · TypeError: zeros_like(): argument 'input' (position 1) must be Tensor, not NoneType · Siris-Li · opened 11 months ago · 2 comments
#11 · AMD CPU and GPU support & few other bug fixes · nkmellem · closed 11 months ago · 0 comments
#10 · fix import error · wm901115nwpu · closed 11 months ago · 1 comment
#9 · No module named 'fpemu_cpp' · hailuu684 · opened 1 year ago · 0 comments
#8 · How library completes backend support / Why the quantized models show no difference. · xuann6 · closed 10 months ago · 2 comments
#7 · Using GPU and Still Get: Illegal instruction (core dumped) · xuann6 · closed 11 months ago · 5 comments
#6 · examples/inference/bert readme instructions for FP8 · willwray · closed 1 year ago · 5 comments
#5 · support for fine-grained quantization, hybrid training mode · nkmellem · closed 1 year ago · 0 comments
#4 · Fix a typo in readme file · Kevinpsk · closed 1 year ago · 0 comments
#3 · Illegal instruction (core dumped) · julianfaraone · closed 1 year ago · 6 comments
#2 · PTQ for BERT · qingswu · closed 1 year ago · 2 comments
#1 · Segfault · tjingrant · closed 1 year ago · 3 comments