-
### Priority
P3-Medium
### OS type
Ubuntu
### Hardware type
Xeon-ICX
### Installation method
- [ ] Pull docker images from hub.docker.com
- [ ] Build docker images from source
…
-
On an M2 Mac, I am getting the error shown below. I do have the fallback variable set properly:
```
% echo $PYTORCH_ENABLE_MPS_FALLBACK
1
```
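For reference, a minimal check (a sketch, assuming standard PyTorch on Apple Silicon) that the variable is actually visible inside the same Python process that raises the error, since PyTorch reads it at import time:
```python
import os

# PYTORCH_ENABLE_MPS_FALLBACK is read when torch initializes, so it must be
# set before `import torch` (exporting it in the shell, as above, also works).
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")

import torch

# Sanity checks from inside the failing process.
print(os.environ.get("PYTORCH_ENABLE_MPS_FALLBACK"))
print(torch.backends.mps.is_available(), torch.backends.mps.is_built())
```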
I am using the following version:
```
Casano…
-
### System Info
```shell
The examples provided do not work correctly. I think there have been updates in the Intel Neural Compressor toolkit, which is now 3.0, and in the Habana quantization toolkit, and…
-
## GenAIExamples ChatQnA, DocIndexRetriever, SearchQnA compose.yaml changed
The following files were changed in [this commit](https://github.com/opea-project/GenAIExamples/commit/9ff7df92029f17e7324edd1317c3…
-
### 🐛 Describe the bug
When pretraining the LLaMA2 70B model with LoRA and torch.compile in our environment, we saw an exception raised from partitioner.py while splitting the joint graph to fo…
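For context, a minimal sketch of the kind of setup involved; the model id, LoRA targets, and sizes below are placeholders (assuming Hugging Face `transformers` and `peft`), not our actual configuration:
```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder model id and LoRA settings; the real run uses LLaMA2 70B.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16
)
lora_cfg = LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_cfg)

# torch.compile traces the joint forward/backward graph; the exception in the
# report is raised from partitioner.py while that joint graph is being split.
model = torch.compile(model)
```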
-
```python
if torch.cuda.is_available():
    device = "auto"
else:
    device = "CPU"
```
When `device` is set to `auto`, the following error is raised:
`RuntimeError: Expected one of cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip,…
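For comparison, a minimal sketch of picking an explicit device string, since torch itself only accepts concrete device names ("auto" is accepted by some higher-level options such as `device_map="auto"` in Transformers, which is my assumption about where it came from):
```python
import torch

# torch only accepts concrete device strings such as "cpu" or "cuda",
# so choose one explicitly instead of passing "auto".
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(4, 4).to(device)  # placeholder module for illustration
```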
-
## ❓ Questions and Help
Hi, I was working on porting code to work with TPUs and the TRC, and was testing TPU VMs with Kaggle.
Working with https://github.com/pytorch/xla/blob/master/docs/pjrt.md…
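For reference, a minimal sketch of acquiring an XLA device under PJRT (assuming torch_xla is installed on the TPU VM as described in the linked doc):
```python
import os

# PJRT is selected via an environment variable; "TPU" assumes a TPU VM.
os.environ.setdefault("PJRT_DEVICE", "TPU")

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                # first XLA device for this process
print(device)
print(xm.get_xla_supported_devices())   # XLA devices visible to this process

t = torch.ones(2, 2, device=device)
print(t + t)
```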
-
### System Info
```shell
Optimum Habana version v1.12.1
Synapse 1.16.2
docker vault.habana.ai/gaudi-docker/1.16.2/ubuntu22.04/habanalabs/pytorch-installer-2.2.2:latest
```
### Information
- [ ] …
-
Great job! Thanks for sharing the tool.
Do you have recommendations for the GPU memory needed to run a prediction? I was trying to run the prediction in the examples with a 4090 (24 GB), but it failed with 'ra…
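In case it is useful, a rough sketch (assuming the tool is PyTorch-based, which I have not verified) of how I would measure peak GPU memory on a smaller input to estimate the requirement; `run_prediction` is a hypothetical placeholder for the tool's actual inference entry point:
```python
import torch

torch.cuda.reset_peak_memory_stats()

# Placeholder for the tool's real inference call on a small example.
# out = run_prediction(small_example)

peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak GPU memory: {peak_gib:.2f} GiB")
```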
-
While training the GPT model, an error was reported that the local port 1107 could not be accessed.
Training still runs normally, but what is this port used for?
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: F…