-
clpeak version: 1.1.2
```
Platform: Moore Threads OpenCL
Device: MUSA GEN1-104
Driver version : 20241010 release kuae1.3.0_musa3.1.0 db329f8fb@20241009 (Linux x64)
Compute units :…
```
-
### Before start
- [X] I have read the [RISC-V ISA Manual](https://github.com/riscv/riscv-isa-manual) and this is not a RISC-V ISA question. (I have read the RISC-V ISA Manual, and this is not a question about the ISA itself.)
- [X] I have read the […
-
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.13.0
### Custom code
Yes
### OS platform and distribution
_No resp…
-
See https://developer.chrome.com/blog/new-in-webgpu-120#support_for_16-bit_floating-point_values_in_wgsl for a description of the change.
## Acceptance criteria
- [x] Add a Validation section to https://…
-
Currently, the output of our spec_infer program does not match that of the incr_decoding program for both the LLaMA and OPT models. In particular, using the prompt "Give three tips for staying healthy." w…
-
There is great current interest in half-precision floating-point, in either IEEE FP16 or bfloat16 format. The vector spec already has encoding space for these types, but the scalar support for half…
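For context on the two 16-bit formats mentioned above: IEEE FP16 (binary16) uses 1 sign, 5 exponent, and 10 mantissa bits, while bfloat16 uses 1 sign, 8 exponent, and 7 mantissa bits, i.e. the upper half of an IEEE binary32 encoding. A minimal Python sketch of the two encodings (illustrative only, not part of the original issue):

```python
import struct

def fp16_bits(x: float) -> int:
    # IEEE binary16: 1 sign bit, 5 exponent bits, 10 mantissa bits
    return int.from_bytes(struct.pack("<e", x), "little")

def bf16_bits(x: float) -> int:
    # bfloat16: 1 sign bit, 8 exponent bits, 7 mantissa bits;
    # simply the upper 16 bits of the IEEE binary32 encoding
    return int.from_bytes(struct.pack("<f", x), "little") >> 16

x = 3.140625
print(f"fp16: {fp16_bits(x):#06x}")   # 0x4248
print(f"bf16: {bf16_bits(x):#06x}")   # 0x4049
```

The example value is exactly representable in both formats; in general, bfloat16 trades mantissa precision for the same exponent range as binary32.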
-
##### System information (version)
- OpenCV => master @ https://github.com/opencv/opencv/commit/ad0ab4109aec8173b34ee68d25bb488fbcfe5286
- Operating System / Platform => Ubuntu 18.04 64 Bit
- Compi…
-
### 🐛 Describe the bug
1. It seems blip2 testing doesn't work correctly at all if the model is in half precision (torch.float16).
2. With bfloat16, `colossalai.shardformer.layer.FusedLayerNorm` doesn't see…
-
Are there any plans to support f16 (half-precision) floating-point compression? At present, we can only select -f for single-precision and -d for double-precision compression.
-
Hi,
I am trying to run models with half-precision inference. I am able to set the model to half precision (model.half()) and I also set data_['inputs'] to half precision, but at some point in th…
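Not part of the original report, but a minimal PyTorch sketch of the half-precision setup described above, assuming a CUDA device and a generic nn.Module in place of the unnamed model; the point where the real pipeline fails is not reproduced here:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the model in the issue; the real architecture is not shown.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).cuda()
model.half()          # cast parameters and buffers to torch.float16
model.eval()

# Input tensors must match the model's dtype, otherwise intermediate ops
# fail with a float32/float16 mismatch.
data_ = {"inputs": torch.randn(8, 16, device="cuda", dtype=torch.float16)}

with torch.no_grad():
    out = model(data_["inputs"])

print(out.dtype)      # torch.float16
```

If any layer or preprocessing step internally produces float32 tensors, the mismatch typically surfaces partway through the forward pass, which matches the symptom described above.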