huggingface/optimum-benchmark
🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers, with full support for Optimum's hardware optimizations & quantization schemes.
Apache License 2.0 · 236 stars · 41 forks
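To make the description above concrete, here is a minimal sketch of a programmatic benchmark run. It is based on the Python API shown in the project's README (`Benchmark`, `BenchmarkConfig`, `ProcessConfig`, `InferenceConfig`, `PyTorchConfig`); exact class names, parameters, and report methods may differ between releases, so treat this as an illustration rather than a definitive recipe:

```python
# Minimal sketch of a programmatic optimum-benchmark run, assuming the
# Python API documented in the project README (names may vary by version).
from optimum_benchmark import (
    Benchmark,
    BenchmarkConfig,
    ProcessConfig,
    InferenceConfig,
    PyTorchConfig,
)

if __name__ == "__main__":
    # Launcher: run the benchmark in an isolated child process.
    launcher_config = ProcessConfig()
    # Scenario: measure inference latency and memory.
    scenario_config = InferenceConfig(latency=True, memory=True)
    # Backend: PyTorch on CPU; no_weights=True uses randomly initialized
    # weights so no checkpoint download is needed.
    backend_config = PyTorchConfig(
        model="bert-base-uncased", device="cpu", no_weights=True
    )

    benchmark_config = BenchmarkConfig(
        name="pytorch_bert",
        launcher=launcher_config,
        scenario=scenario_config,
        backend=backend_config,
    )
    benchmark_report = Benchmark.launch(benchmark_config)
    print(benchmark_report)  # latency/memory metrics
```

The same configuration can also be expressed as a Hydra YAML file and run through the `optimum-benchmark` CLI, which is how the repository's own examples are organized.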
Issues
| Title | # | Author | Status | When | Comments |
|---|---|---|---|---|---|
| move to new runners | #281 | glegendre01 | closed | 5 days ago | 0 |
| Markdown Report | #280 | IlyasMoutawwakil | opened | 6 days ago | 0 |
| Pass backend name to EnergyTracker in Training scenario | #279 | asesorov | closed | 1 week ago | 1 |
| Bump version | #278 | IlyasMoutawwakil | closed | 1 week ago | 0 |
| Test better rocm devices mounting | #277 | IlyasMoutawwakil | closed | 1 week ago | 0 |
| Test passing pid host in CI | #276 | IlyasMoutawwakil | closed | 1 week ago | 0 |
| Distributed trt-llm | #275 | IlyasMoutawwakil | closed | 1 week ago | 0 |
| Update readme with IPEX | #274 | IlyasMoutawwakil | closed | 1 week ago | 0 |
| Removing barriers | #273 | IlyasMoutawwakil | closed | 1 week ago | 0 |
| ipex backend enhancements | #272 | yao-matrix | closed | 1 week ago | 4 |
| Allow multiple runs and handle connection communication errors | #271 | IlyasMoutawwakil | closed | 1 week ago | 0 |
| fix multi gpu ipc | #270 | IlyasMoutawwakil | closed | 1 week ago | 0 |
| Multi-gpu vllm | #269 | IlyasMoutawwakil | closed | 1 week ago | 0 |
| Labeling system in CI | #268 | IlyasMoutawwakil | closed | 1 week ago | 0 |
| Styling | #267 | IlyasMoutawwakil | closed | 1 week ago | 0 |
| Set is_distributed false by default in vllm | #266 | asesorov | closed | 1 week ago | 0 |
| Fix issue with CodeCarbon lock | #265 | regisss | closed | 1 week ago | 7 |
| Fix broken canonical list | #264 | baptistecolle | closed | 1 week ago | 0 |
| fix broken cuda and rocm images | #263 | baptistecolle | closed | 1 week ago | 4 |
| fix broken canonical list | #262 | baptistecolle | closed | 2 weeks ago | 0 |
| Add the logic for Energy Star | #261 | regisss | opened | 2 weeks ago | 1 |
| Error: Another instance of codecarbon is already running | #260 | j-irion | closed | 1 week ago | 3 |
| TensorRT-LLM pipeline parallelism is broken | #259 | asesorov | opened | 2 weeks ago | 12 |
| Refactor llm perf backend handling | #258 | baptistecolle | closed | 1 week ago | 10 |
| 1. refine cpu Dockerfile for better performance 2. add ipex_bert example | #257 | yao-matrix | closed | 3 weeks ago | 0 |
| Fix API tests on ROCm | #256 | IlyasMoutawwakil | closed | 4 weeks ago | 0 |
| Fix py-txi ci | #255 | IlyasMoutawwakil | closed | 4 weeks ago | 0 |
| Code Style | #254 | IlyasMoutawwakil | closed | 1 month ago | 0 |
| Update ROCm | #253 | IlyasMoutawwakil | closed | 4 weeks ago | 0 |
| vLLM quantization BrokenPipeError | #252 | j-irion | opened | 1 month ago | 4 |
| Timeout with multiple AMD GPUs tensor parallelism in vLLM | #251 | asesorov | closed | 1 week ago | 6 |
| add optimum-intel ipex backend into benchmark | #250 | yao-matrix | closed | 1 month ago | 3 |
| WIP fix rocm runners | #249 | baptistecolle | closed | 1 month ago | 0 |
| Add support for intel in leaderboard | #248 | baptistecolle | closed | 3 weeks ago | 2 |
| Add intel to leaderboard | #247 | baptistecolle | closed | 1 month ago | 1 |
| Update cuda images | #246 | IlyasMoutawwakil | closed | 1 month ago | 0 |
| fix neural compressor backend | #245 | baptistecolle | closed | 1 month ago | 0 |
| Fix makefile typo | #244 | IlyasMoutawwakil | closed | 1 month ago | 0 |
| Faster quality check | #243 | IlyasMoutawwakil | closed | 1 month ago | 0 |
| Fix images building | #242 | IlyasMoutawwakil | closed | 1 month ago | 1 |
| LLama-3.1-70B on 2x NVIDIA L40S results in torch.cuda.OutOfMemoryError | #241 | j-irion | closed | 1 month ago | 4 |
| Decode output of `nvmlDeviceGetName` to avoid JSON serialize issue | #240 | KeitaW | closed | 1 month ago | 4 |
| Build from source quantization packages | #239 | baptistecolle | closed | 4 weeks ago | 1 |
| Add t4 for llm perf leaderboard | #238 | baptistecolle | closed | 1 month ago | 0 |
| release | #237 | IlyasMoutawwakil | closed | 2 months ago | 0 |
| Misc changes and fixes for llama cpp | #236 | IlyasMoutawwakil | closed | 2 months ago | 0 |
| update | #235 | IlyasMoutawwakil | closed | 2 months ago | 0 |
| MPS support | #234 | IlyasMoutawwakil | closed | 2 months ago | 1 |
| Misc CI updates and multi-platform support | #233 | IlyasMoutawwakil | closed | 2 months ago | 0 |
| Update vllm backend to support offline and online serving modes | #232 | IlyasMoutawwakil | closed | 2 months ago | 0 |