-
I am running the full finetune distributed recipe. When setting `clip_grad_norm: 1.0` and `fsdp_cpu_offload: True`, it raises the error
`RuntimeError: No backend type associated with device type cpu`
…
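For context, a minimal sketch of the likely interaction, assuming the recipe initializes only an NCCL process group: with CPU offload enabled the gradients live on CPU, so the collective used by gradient-norm clipping needs a CPU-capable backend. Registering gloo alongside NCCL is one possible workaround, not a confirmed fix for this recipe:

```py
# Sketch only (assumes a torchrun launch); not the recipe's actual code.
import torch.distributed as dist

# NCCL only handles CUDA tensors. Mapping gloo to CPU as well gives the
# default process group a backend for the CPU-resident gradients created
# by fsdp_cpu_offload, which the grad-norm reduction needs.
dist.init_process_group(backend="cpu:gloo,cuda:nccl")
```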
-
Thank you for developing STAMP.
I have data from 4 Stereo-seq chips and want to do a time-series analysis. However, when the run reaches this step, **model.train(device="cpu", sampler = "W")** keeps reporti…
-
A question: the documentation at https://k2-fsa.github.io/icefall/model-export/export-ncnn-conv-emformer.html#optional-int8-quantization-with-sherpa-ncnn describes how to convert the *.bin and *.param files into the corresponding lightweight int8 form, but I ran into a problem:
When I ran the command, it crashed (S…
-
The `*` op is supposed to concatenate a shape tensor to itself. However, if we use the resulting shape tensor with a fill op, we get a failure:
```py
>>> a = tp.Tensor([1, 2, 3])
>>> tp.ones(a.shape[…
-
I use the following script to export to ONNX:
```py
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelArgs:
    hidden_dims: List[int] = field(default_factory=lambda: [128]*3)
    n_downsample: int = 2
    mixed_precision: bool = True
…
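
# (Generic, self-contained sketch of the kind of export call such a script
# ends with; a plain Linear layer stands in for the reporter's model.)
import torch

model = torch.nn.Linear(8, 4)
dummy = torch.randn(1, 8)
torch.onnx.export(model, (dummy,), "model.onnx", opset_version=17,
                  input_names=["input"], output_names=["output"])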
-
### 🐛 Describe the bug
When I try multi-GPU training on torch with `backend = custom_backend`, it leads to the error:
`aot_export is not currently supported with traceable tensor subclass`
The following …
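For reference, a generic sketch of a custom torch.compile backend (the name `my_backend` is illustrative, not the reporter's code):

```py
import torch

def my_backend(gm: torch.fx.GraphModule, example_inputs):
    # A backend receives the captured FX graph and returns a callable;
    # returning gm.forward simply falls back to eager execution.
    return gm.forward

compiled = torch.compile(torch.nn.Linear(8, 8), backend=my_backend)
out = compiled(torch.randn(2, 8))
```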
-
### 🐛 Describe the bug
Hi torch distributed team!
As we discussed at PTC, we found that functional collectives are 34%–67% slower than c10d collectives due to heavy CPU overhead.
To be specific,…
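For reference, a minimal sketch contrasting the two APIs being compared (assumes a distributed launch, e.g. via torchrun; the exact overhead varies by setup):

```py
import torch
import torch.distributed as dist
import torch.distributed._functional_collectives as funcol

dist.init_process_group(backend="nccl")
t = torch.ones(1024, device="cuda")

# c10d collective: mutates the input tensor in place.
dist.all_reduce(t)

# Functional collective: out of place, returns a new tensor; the extra
# per-call CPU work is what the report above measures.
out = funcol.all_reduce(t, "sum", dist.group.WORLD)
```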
-
# ComfyUI Error Report
## Error Details
- **Node Type:** HMPipelineVideo
- **Exception Type:** ValueError
- **Exception Message:** Cannot generate a cpu tensor from a generator of type cuda.
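For context, the message points at a device mismatch between the RNG generator and the requested tensor. A minimal torch-level illustration of the same mismatch, outside ComfyUI (plain torch raises its own error text for it):

```py
import torch

gen = torch.Generator(device="cuda")             # generator lives on the GPU
x = torch.randn(4, generator=gen, device="cpu")  # asking for a CPU tensor fails
```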
## S…
-
Tested on commit https://github.com/llvm/llvm-project/commit/efd8938d575d1f8058bfe220e4c672d969c82be0
Steps to reproduce:
```
mlir-opt test.mlir --test-print-liveness
```
Test case:
```
modul…
-
**Describe the bug**
When setting the inference device to "Compute Shader" in Unity ML-Agents, this error occurs:
`InvalidOperationException: Tensor data cannot be read from, use .ReadbackAndClone…