huggingface/optimum-intel
🤗 Optimum Intel: Accelerate inference with Intel optimization tools
https://huggingface.co/docs/optimum/main/en/intel/index
Apache License 2.0 · 388 stars · 110 forks
Issues (sorted newest first)
| # | Title | Author | State | Last activity | Comments |
|---|-------|--------|-------|---------------|----------|
| #776 | add support glm4 | eaidova | closed | 3 months ago | 1 |
| #775 | Fix IPEXModel input names | echarlaix | closed | 3 months ago | 1 |
| #774 | Set openvino slow test | echarlaix | closed | 3 months ago | 1 |
| #773 | Fix openvino quantization config | echarlaix | closed | 3 months ago | 1 |
| #772 | Fix INC WoQ model loading issue | changwangss | closed | 1 month ago | 1 |
| #771 | Enable autoround quantization | echarlaix | open | 3 months ago | 1 |
| #770 | Fixed issues in the Hybrid quantization notebook | AlexKoff88 | closed | 3 months ago | 4 |
| #769 | update nncf test quant dataset | echarlaix | closed | 3 months ago | 1 |
| #768 | Remove default trust remove code for predefined datasets | echarlaix | closed | 3 months ago | 3 |
| #767 | add trust remote code for latest datasets release | echarlaix | closed | 3 months ago | 2 |
| #766 | Enable security scanning | mfuntowicz | closed | 3 months ago | 1 |
| #765 | Add openvino export and supported models sections | echarlaix | closed | 3 months ago | 1 |
| #764 | Disable export for arctic | echarlaix | closed | 3 months ago | 1 |
| #763 | Update to NNCF 2.11 | nikita-savelyevv | closed | 3 months ago | 3 |
| #762 | Update token | echarlaix | closed | 3 months ago | 1 |
| #761 | [OV Optimum] Keep ShapeOf on Parameter / ReadValue in case of added beam_idx -> Gather | jane-intel | closed | 3 months ago | 1 |
| #760 | try to resolve default int4 config for local models | eaidova | closed | 3 months ago | 3 |
| #759 | [Docs] optimization_ov.mdx links are updated | daniil-lyakhov | closed | 3 months ago | 2 |
| #758 | Deprecate transformers v4.36.0 | echarlaix | closed | 3 months ago | 3 |
| #757 | Create default token_type_ids for openvino inference | echarlaix | closed | 3 months ago | 1 |
| #756 | Need help with model compilation | henryzhuhr | closed | 3 months ago | 3 |
| #755 | Add ipex openvino and inc tests when pushing a release branch | echarlaix | closed | 4 months ago | 1 |
| #754 | Fix compatibility with transformers < v4.39.0 release | echarlaix | closed | 4 months ago | 2 |
| #753 | Modify qwen2 model ID in tests | echarlaix | closed | 4 months ago | 1 |
| #752 | fallback load model | jiqing-feng | closed | 3 months ago | 4 |
| #751 | Trigger openvino slow tests with openvino-test label | echarlaix | closed | 4 months ago | 4 |
| #750 | loading generation config if it is part of model | eaidova | closed | 4 months ago | 2 |
| #749 | Fix TemporaryDirectory error when exporting models on Windows | helena-intel | closed | 4 months ago | 1 |
| #748 | Disable future warnings/info messages on import | helena-intel | closed | 4 months ago | 1 |
| #747 | Add setuptools to fix issue with Python 3.12, add Windows to OpenVINO basic test | helena-intel | closed | 4 months ago | 1 |
| #746 | Udpate openvino export CLI documentation | echarlaix | closed | 4 months ago | 1 |
| #745 | Clarify load_in_8bit default value in documentation | echarlaix | closed | 4 months ago | 1 |
| #744 | multi instances of infer | wgzintel | open | 4 months ago | 1 |
| #743 | improve doc around supported tasks and accelertor options | rbrugaro | closed | 4 months ago | 2 |
| #742 | incorrect documentation pipeline | rbrugaro | closed | 4 months ago | 2 |
| #741 | Udpate openvino documentation | echarlaix | closed | 4 months ago | 1 |
| #740 | Add OpenVINO pipelines | echarlaix | closed | 3 months ago | 1 |
| #739 | Remove default pipeline accelerator value | echarlaix | closed | 4 months ago | 1 |
| #738 | Describe `OVModelForCausalLM.from_pretrained()` args | Wovchena | closed | 4 months ago | 3 |
| #737 | fix pipeline accelerator default to ipex not ort | rbrugaro | closed | 4 months ago | 1 |
| #736 | Fix bloom generation | echarlaix | closed | 4 months ago | 1 |
| #735 | Pipeline accelerator defaults to 'ort' runtime instead of 'ipex' | rbrugaro | closed | 4 months ago | 0 |
| #734 | Temporary PR to check CI status for new openvino release | nikita-savelyevv | open | 4 months ago | 1 |
| #733 | Enable ITREX v1.4.2 for specific torch version | echarlaix | closed | 4 months ago | 1 |
| #732 | Enable IPEXModel with deepspeed | jiqing-feng | closed | 3 months ago | 5 |
| #731 | Not able to load Intel/bge-base-en-v1.5-rag-int8-static model | dilip467 | open | 4 months ago | 4 |
| #730 | Fix itrex WOQ model loading | echarlaix | closed | 4 months ago | 1 |
| #729 | Limit ITREX version for WOQ | echarlaix | closed | 4 months ago | 1 |
| #728 | refactor CPU llama inference code | faaany | closed | 4 months ago | 4 |
| #727 | Fix nncf quantization for decoder models | echarlaix | closed | 4 months ago | 1 |