-
error as the subject
File "D:\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Comf…
-
**Describe the bug**
Circular Buffers currently raise a FATAL error when trying to set the page size to a non-multiple of 4 bytes (`uint32_t`).
**To Reproduce**
Steps to reproduce the behavior:
Try to create a buffer wi…
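The alignment constraint described above can be sketched with a small check. This is a hypothetical helper for illustration only (`validate_page_size` and `PAGE_ALIGN` are not names from the project); it shows the validation one would expect instead of a hard FATAL:

```python
PAGE_ALIGN = 4  # pages must be a multiple of 4 bytes, i.e. sizeof(uint32_t)

def validate_page_size(page_size: int) -> int:
    """Hypothetical sketch: reject unaligned page sizes with a recoverable error."""
    if page_size % PAGE_ALIGN != 0:
        raise ValueError(
            f"page size {page_size} is not a multiple of {PAGE_ALIGN} bytes"
        )
    return page_size
```

Raising a catchable error rather than terminating the process would let callers round the requested size up to the next multiple of 4 and retry.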
-
Using Google Colab
This issue occurs with the A100 GPU, L4 GPU, T4 GPU and TPU v2-8
Everything works as normal (though slower) with the regular CPU runtime
######################################…
-
### Describe the problem
I'm looking to train a model with the `bfloat16` datatype without having to use mixed precision on NVIDIA GPUs (A100 and beyond). This is important because directly using `bfloat16…
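For reference, running a model entirely in `bfloat16` (no mixed-precision autocast) can be sketched in PyTorch like this; this is a minimal illustration of the full-bf16 setup being asked about, not code from the issue:

```python
import torch
import torch.nn as nn

# Cast the whole model to bfloat16 so every parameter and activation is bf16,
# rather than using autocast/mixed precision.
model = nn.Linear(16, 4).to(torch.bfloat16)

x = torch.randn(2, 16, dtype=torch.bfloat16)
out = model(x)
# Both weights and outputs stay in bfloat16 end to end.
```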
-
Original issue from pytorch: https://github.com/pytorch/pytorch/issues/141083
The XNNPACK commit is at https://github.com/google/XNNPACK/commit/4ea82e595b36106653175dcb04b2aa532660d0d8
Build error:…
-
### Model Series
Qwen2.5
### What are the models used?
Qwen2.5-7b-base
### What is the scenario where the problem happened?
SFT with the Hugging Face trainer
### Is this a known issue?
…
-
### 🚀 The feature, motivation and pitch
I'd like to use `torch.fft.rfft` function with bfloat16 tensor, but the operator doesn't support bfloat16 complex type.
Repro code below:
```python
import t…
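# Workaround sketch (an assumption, not from the issue): since rfft lacks a
# bfloat16 kernel, upcast to float32, run the FFT, and downcast afterwards.
import torch  # assumed; the truncated repro above likely imports this

x = torch.randn(8, dtype=torch.bfloat16)
spec = torch.fft.rfft(x.to(torch.float32))  # complex64 spectrum of length n//2 + 1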
-
### Description
Are there plans to support the bfloat16 data type in the near future? This data type is becoming increasingly popular in LLM training. It looks like currently it's not supported. I.e.…
-
There is a question about whether the vector bfloat16 extension supports the `vfmv.v.f` instruction.
In the "https://raw.githubusercontent.com/riscv-non-isa/rvv-intrinsic-doc/refs/heads/main/auto-generated/intrinsic_f…
-
Hello,
I installed flashinfer via AOT; where should I modify `q_data_type` to `torch.bfloat16` in the `plan` function?
![image](https://github.com/user-attachments/assets/45bfc9ac-2748-47c0-b007-41f734ffadc8)
Th…