-
When we try to run the GPT-2 TorchScript model on Habana Gaudi2, an error occurs; see the following picture:
![image](https://github.com/bytedance/ByteMLPerf/assets/80079571/2f7c29b7-cebb-4d36-99cc-c7adef142…
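For context, the usual TorchScript flow can be sketched as below. This is a minimal sketch using a tiny stand-in module rather than GPT-2 itself, and the HPU-specific lines (the `habana_frameworks.torch` import and the `"hpu"` device) are assumptions shown only as comments, since they require Gaudi hardware:

```python
# Sketch: export a model to TorchScript via tracing, then (hypothetically)
# run it on a Gaudi HPU. A tiny module stands in for GPT-2 here.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    # Stand-in for GPT-2: an embedding followed by a linear LM head.
    def __init__(self, vocab=50257, dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, input_ids):
        return self.head(self.emb(input_ids))

model = TinyLM().eval()
example = torch.randint(0, 50257, (1, 8))

# torch.jit.trace records the ops executed on the example input.
scripted = torch.jit.trace(model, example)

# On a Gaudi machine one would (assumption) load the Habana PyTorch
# bridge and move the traced module and inputs to the "hpu" device:
#   import habana_frameworks.torch.core  # noqa: F401
#   scripted = scripted.to("hpu")
#   out = scripted(example.to("hpu"))
out = scripted(example)
```

Errors like the one in the screenshot typically surface at the `scripted(...)` call, so comparing the traced module against eager execution on CPU first helps isolate whether the failure is in tracing or in the HPU backend.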
-
### System Info
```shell
Running this command on a single Gaudi works very well:
optimum-habana/examples/language-modeling/run_lora_clm.py \
--model_name_or_path meta-llama/Llama-2-7b-hf \
…
-
**Environment:**
1. Framework (TensorFlow, Keras, PyTorch, MXNet): PyTorch
2. Framework version: 2.0
3. Horovod version: 0.28
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python ver…
-
### System Info
```shell
model=bigscience/bloom-560m (same issue with
docker run -p 8080:80 -v $volume:/data --runtime=habana --privileged -e HABANA_VISIBLE_DEVICES=all -e HUGGING_FACE_HUB_TOKEN=$…
-
#### This is about ending Nvidia's vendor lock-in, insists Greg Lavender
Saddled with a bunch of legacy code written for Nvidia's CUDA platform? Intel CTO Greg Lavender suggests building a large la…
-
### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Windows
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- …
-
### System Info
Hello Team,
We are trying to fine tune the `bigcode/starcoderbase-7b` model on a multi HPU (8 HPU) node and have been following the guidance https://github.com/huggingface/optimum-…
-
#### Not even Uncle Sam can stand between x86 titan and its profits
Intel has followed Nvidia's lead and will produce a modified version of its AI accelerator – specifically the Habana division's G…
-
### System Info
```shell
optimum-habana version >1.7.0
deepspeed 1.11.0
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An off…