-
### 🚀 The feature, motivation and pitch
Llama3.2 vision (Mllama) models require the model runner to be an "Encoder_Decoder_Model_Runner",
which includes:
1. preparing "encoder_seq_lens" and "encoder_seq_len…
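As a hedged illustration only (every name below except `encoder_seq_lens` is hypothetical, not vLLM's actual API), preparing per-request encoder sequence lengths for a batch might look like:

```python
# Hypothetical sketch: collect the encoder (vision-token) sequence length
# for each request in a batch, the kind of metadata an encoder-decoder
# model runner needs for cross-attention. Names other than
# "encoder_seq_lens" are illustrative.
from dataclasses import dataclass


@dataclass
class Request:
    decoder_token_ids: list  # text tokens fed to the decoder
    encoder_token_ids: list  # vision tokens fed to the encoder


def prepare_encoder_seq_lens(batch):
    """Return one encoder length per request; 0 for text-only requests."""
    return [len(r.encoder_token_ids) for r in batch]


batch = [
    Request(decoder_token_ids=[1, 2, 3], encoder_token_ids=[7] * 5),
    Request(decoder_token_ids=[4, 5], encoder_token_ids=[]),
]
encoder_seq_lens = prepare_encoder_seq_lens(batch)
# encoder_seq_lens == [5, 0]
```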
-
### System Info
```shell
Docker image: pytorch-installer-2.3.1:1.17.0-417
optimum-habana: main branch
```
### Information
- [ ] The official example scripts
- [X] My own modified script…
-
Habana drivers expose several sysfs attributes through the _accel_ class: https://www.kernel.org/doc/html/latest/admin-guide/abi-testing.html#abi-file-testing-sysfs-driver-habanalabs
Habana drivers…
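A minimal sketch of enumerating those attributes from the command line (paths follow the kernel ABI document linked above; the exact attribute names depend on the driver version, so nothing specific is assumed):

```shell
# Hedged sketch: list habanalabs sysfs attributes exposed via the accel
# class. On a host without a Habana device, the glob stays unexpanded and
# a short message is printed instead.
for dev in /sys/class/accel/accel*; do
  if [ ! -e "$dev" ]; then
    echo "no accel devices found"
    break
  fi
  echo "== $dev =="
  for attr in "$dev"/device/*; do
    # each attribute is a small text file: print name and value
    [ -f "$attr" ] && printf '%s: %s\n' "$(basename "$attr")" "$(cat "$attr" 2>/dev/null)"
  done
done
```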
-
### System Info
```shell
Bad:
Optimum Habana latest main: c495f479d9abf04fb7adb6f0a5607d7963186649
Synapse docker image: v1.16
Good:
Optimum Habana one commit before Transformer 4.40 upgrade: 56…
-
### Your current environment
I am testing the offline performance using `benchmark_latency.py`, and I found no increase or decrease at all when I change the prompt bucket shape, even when I use (1,1…
-
### System Info
```shell
optimum 1.21.4
optimum-habana 1.14.0.dev0
transformers 4.45.2
+------------------------------------------------------------------…
-
follow the instructions on
https://github.com/HabanaAI/Model-References/tree/master/MLPERF3.1/Training/benchmarks
to execute the command:
`python3 pack_pretraining_data_pytorch.py --input_dir=$PYT…
-
### System Info
```shell
optimum-habana 1.14.0.dev0
HL-SMI Version: hl-1.18.0-fw-53.1.1.1
Driver Version: 1.18.0-ee698fb
```
### Information
- [X] The off…
-
Getting the below error when trying to run the Llama2 70B benchmark as given in the link - [Here](https://github.com/HabanaAI/Model-References/tree/master/MLPERF4.0/Training/benchmarks/llm_finetune) wi…