-
Are there any plans to also support f16 (half-precision) floating-point compression? At the moment we can only select `-f` for single-precision and `-d` for double-precision compression.
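In the meantime, one possible workaround sketch (hypothetical; it assumes your values fit fp16's range and that widening to fp32 on disk is acceptable) is to keep the data in half precision yourself and widen it before handing it to the existing single-precision path:

```python
import numpy as np

# Hypothetical workaround: keep the source data as fp16, but widen it to
# fp32 so the compressor's existing single-precision (-f) path applies.
half = np.random.rand(1_000_000).astype(np.float16)  # illustrative data
widened = half.astype(np.float32)
widened.tofile("data_f32.bin")
# ...then compress data_f32.bin with the tool's -f option.
```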
-
In the case where the `handle` is an image carousel (such as the example on the dragdealer web page) and you drag the carousel horizontally, it animates (slides) to a particular position defined b…
-
## ❓ Questions and Help
Does maskrcnn-benchmark support half-precision inference? If not, what would I need to add?
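Not an authoritative answer, but for reference, here is a minimal sketch of the two usual ways to run fp16 inference in PyTorch. It uses torchvision's Mask R-CNN as a stand-in for maskrcnn-benchmark, so the model and input below are illustrative assumptions, not the project's own code:

```python
import torch
import torchvision

# Stand-in model: torchvision's Mask R-CNN (assumes torchvision >= 0.13);
# maskrcnn-benchmark's custom ops may behave differently under fp16.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model = model.cuda().eval()
images = [torch.rand(3, 480, 640, device="cuda")]  # dummy input batch

# Option 1: keep fp32 weights and let autocast pick fp16 kernels per op.
# Usually the safer route, since numerically fragile ops stay in fp32.
with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    preds = model(images)

# Option 2: cast weights and inputs to fp16 wholesale. Faster, but more
# likely to produce NaNs/overflow in normalization layers and box heads.
model_half = model.half()
with torch.no_grad():
    preds_half = model_half([img.half() for img in images])
```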
-
It seems that `move_to_gpu` and `move_to_cpu` are not working as expected on the `fast_inference` branch.
https://github.com/RVC-Boss/GPT-SoVITS/blob/fast_inference_/api_v3.py#L327-L343
It will alway…
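For context, here is an illustrative sketch of what helpers with these names typically do; this is not the actual code from `api_v3.py`, just a generic assumption about the pattern:

```python
import torch

def move_to_gpu(obj, device: str = "cuda"):
    """Recursively move tensors/modules to the GPU (illustrative sketch)."""
    if isinstance(obj, (torch.Tensor, torch.nn.Module)):
        return obj.to(device)
    if isinstance(obj, dict):
        return {k: move_to_gpu(v, device) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(move_to_gpu(v, device) for v in obj)
    return obj

def move_to_cpu(obj):
    """Recursively move tensors/modules back to the CPU."""
    if isinstance(obj, (torch.Tensor, torch.nn.Module)):
        return obj.cpu()
    if isinstance(obj, dict):
        return {k: move_to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(move_to_cpu(v) for v in obj)
    return obj
```

One classic source of "not working as expected" with this pattern: `torch.nn.Module.to()` mutates the module in place, while `torch.Tensor.to()` returns a new tensor, so any caller that ignores the return value keeps using the tensor on the old device.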
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
I tested with [`.github/workflow`](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/.github/workflows/run_tests.yaml). Forge only runs on GPU; the values are hardcoded.
```bash
Ru…
```
-
clpeak version: 1.1.2
```
Platform: NVIDIA CUDA
  Device: NVIDIA GeForce RTX 3090
    Driver version  : 525.89.02 (Linux x64)
    Compute units   : 82
    Clock frequency : 1725 MHz
    G…
```
-
[proposal-float16array](https://github.com/tc39/proposal-float16array) is a proposal to add float16 (aka half-precision or binary16) TypedArrays to JavaScript.
This issue is for tracking the work i…
-
When I run my model in half precision (fp16), the loss function returns NaN. Everything works fine when I use normal floating-point precision (fp32), so I don't think it is a problem with the learning paramete…
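A common fix for this class of problem (offered as a sketch, not necessarily the poster's solution) is mixed-precision training with loss scaling rather than pure fp16, so small gradients don't underflow to zero and large ones don't overflow to inf. A minimal PyTorch example with a toy model:

```python
import torch
from torch import nn

device = "cuda"
model = nn.Linear(16, 1).to(device)          # toy stand-in model
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
scaler = torch.cuda.amp.GradScaler()         # scales the loss to avoid fp16 underflow
loss_fn = nn.MSELoss()

for step in range(100):
    x = torch.randn(32, 16, device=device)   # dummy batch
    y = torch.randn(32, 1, device=device)
    opt.zero_grad(set_to_none=True)
    # Forward pass runs in fp16 where safe; fragile ops stay in fp32.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()  # backprop on the scaled loss
    scaler.step(opt)               # unscales grads; skips the step on inf/nan
    scaler.update()                # adapts the scale factor for the next step
```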
-
I am getting around 17 s/it for all types (f32, f16, etc.) with the Metal backend.
Is quantization CPU-only?