-
- [ ] Integrate into the Bianca Slurm pages
- [ ] Add links to these new pages from the Bianca pages
- [ ] Integrate into the Snowy Slurm pages
- [ ] Add links to these new pages from the relevant/connected Snowy pages
…
-
Hello, is there any way to run inference with 2 or more GPUs?
-
### System Info
NVIDIA GPU A30
nvidia-smi
```
Thu Oct 31 11:43:51 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.02 …
-
We should build polars-gpu as well
https://pola.rs/posts/gpu-engine-release/
-
### Search before asking
- [X] I have searched the HUB [issues](https://github.com/ultralytics/hub/issues) and found no similar bug report.
### HUB Component
Training
### Bug
After successfully …
-
### System Info
Python version: 3.10.12
Pytorch version:
llama_models version: 0.0.42
llama_stack version: 0.0.42
llama_stack_client version: 0.0.41
Hardware: 4xA100 (40GB VRAM/GPU)
local-…
-
I might be missing something somewhere, but I cannot for the life of me get GPU passthrough working with Scriberr.
I get missing-file errors and authentication token errors. I can run it fine wit…
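For reference, a minimal sketch of what GPU passthrough can look like when the app is deployed via Docker Compose (an assumption on my part; the service name and image tag below are placeholders, and the host needs the NVIDIA Container Toolkit installed):

```yaml
services:
  scriberr:
    image: scriberr:latest        # placeholder image tag
    deploy:
      resources:
        reservations:
          devices:
            # Standard Compose syntax for reserving NVIDIA GPUs
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

If the container starts but `nvidia-smi` fails inside it, the toolkit on the host is usually the first thing to check.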
-
Hi,
I am simulating a quantum dynamical system using the great ITensorMPS.jl package.
(https://github.com/ITensor/ITensorMPS.jl)
Without getting into details about this package and the specific com…
-
**Idea:**
Cast FP32/FP16 to BF16.
Casting will be different based on type:
- FP32 to BF16: truncate last 16 bits from mantissa, exponent stays the same
- FP16 to BF16: more involved process --…
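A minimal sketch of both paths in Python with NumPy (function names are mine; this uses plain truncation for simplicity, whereas real kernels typically round to nearest-even and preserve NaN payloads):

```python
import numpy as np

def fp32_to_bf16_bits(x: float) -> int:
    """Truncating FP32 -> BF16 cast: keep the sign and the 8-bit
    exponent (BF16 shares FP32's exponent layout), drop the low
    16 mantissa bits. No rounding is applied."""
    bits = int(np.float32(x).view(np.uint32))
    return bits >> 16

def fp16_to_bf16_bits(x: float) -> int:
    """FP16 -> BF16 goes through FP32 first: FP16's 5-bit exponent
    must be re-biased to the 8-bit exponent, which the FP16 -> FP32
    widening handles exactly (widening is lossless). Then truncate
    the mantissa as in the FP32 path."""
    widened = np.float32(np.float16(x))   # exact conversion
    return int(widened.view(np.uint32)) >> 16

# 1.0 is 0x3F800000 in FP32, so its BF16 bit pattern is 0x3F80.
```

The FP16 path is "more involved" precisely because of the exponent re-bias; routing through FP32 sidesteps doing that bit surgery by hand.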
-
My machine has an AMD integrated GPU, so why does it detect an NVIDIA GPU here?
![image](https://github.com/user-attachments/assets/6b853235-8395-431d-8eca-1882d4bfca05)
Yet when I run the check_gpu_available function myself, it returns False.
![image](https://github.com/user-at…