-
I am unsure about the best way to translate requirements into settings.
If we have a requirement of `gpu.amd`, this means we need to add the `--roccm` flag to the start command for a simple setup. This…
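As a rough illustration of one possible direction (a sketch only; `REQUIREMENT_FLAGS` and `build_launch_args` are made-up names, not part of the project), a requirement-to-flag table could look like this:

```python
# Hypothetical sketch: translate requirement identifiers into start-command flags.
# Only the `gpu.amd` -> `--roccm` pair comes from the text above; the rest is illustrative.
REQUIREMENT_FLAGS = {
    "gpu.amd": ["--roccm"],
}

def build_launch_args(requirements):
    """Collect the CLI flags implied by a list of requirement identifiers."""
    args = []
    for req in requirements:
        args.extend(REQUIREMENT_FLAGS.get(req, []))
    return args

print(build_launch_args(["gpu.amd"]))  # -> ['--roccm']
```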
-
> Hmmm... I don't have the issue in _GPU_:
>
> I wonder if this could also be related to case 4 of issue #68.
Some detected names are too long or simply repetitions. A method to tr…
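One possible method (a sketch only; `clean_detected_names` is a hypothetical helper, not existing code) would be to truncate overly long names and drop exact repetitions before displaying them:

```python
# Illustrative helper: shorten overly long detected names and skip repetitions.
def clean_detected_names(names, max_len=40):
    seen = set()
    cleaned = []
    for name in names:
        short = name if len(name) <= max_len else name[: max_len - 1] + "…"
        if short not in seen:  # drop repetitions
            seen.add(short)
            cleaned.append(short)
    return cleaned

print(clean_detected_names(["NVIDIA GeForce RTX 4090", "NVIDIA GeForce RTX 4090"]))
```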
-
### Your current environment
Deploy using LLaMA-Factory
CUDA_VISIBLE_DEVICES=0 API_PORT=9092 python src/api_demo.py
--model_name_or_path /save_model/qwen1_5_7b_pcb_merge
--template qwen
--infer_b…
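For reference, assuming this api_demo.py exposes an OpenAI-compatible endpoint on the configured `API_PORT` (an assumption, not confirmed in the report), a minimal client call could look like:

```python
import requests

# Assumption: the server above serves an OpenAI-style chat completions endpoint on port 9092.
resp = requests.post(
    "http://localhost:9092/v1/chat/completions",
    json={
        "model": "qwen1_5_7b_pcb_merge",  # placeholder model name
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=60,
)
print(resp.json())
```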
-
**Describe the bug**
As a user I find that when I install the Tari Universe app and begin to mine, my GPU is not being detected, as it currently shows as `-`.
When I check the Experimental settings I …
-
We have a workstation driving a total of 10 displays via 3 GPUs:
> nvidia-smi -L
GPU 0: NVIDIA GeForce RTX 4090 (4 displays)
GPU 1: NVIDIA GeForce GTX 1650 (2 displays)
GPU 2: NVIDIA GeForce RTX 4090…
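For completeness, a small sketch (not part of the original report) that enumerates the same GPUs programmatically by calling `nvidia-smi -L`:

```python
import subprocess

# List the GPUs exactly as `nvidia-smi -L` reports them (the display counts above
# were annotated by hand and are not part of nvidia-smi's output).
output = subprocess.check_output(["nvidia-smi", "-L"], text=True)
for line in output.splitlines():
    if line.startswith("GPU"):
        print(line.strip())
```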
-
I get that splitting a single gen across multiple GPUs is tough, and there's at least one still-open issue regarding this. But what about using multiple GPUs in parallel, and just letting each do its…
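A rough sketch of that parallel idea (purely illustrative; `run_generation` is a placeholder, not an existing function): pin one independent worker per GPU via `CUDA_VISIBLE_DEVICES` and let each produce its own output.

```python
import os
from multiprocessing import Process

def run_generation(prompt):
    # Placeholder for a real single-GPU generation call.
    print(f"GPU {os.environ.get('CUDA_VISIBLE_DEVICES')} generating for: {prompt!r}")

def worker(gpu_id, prompt):
    # Must be set before the GPU library initializes in this process.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    run_generation(prompt)

if __name__ == "__main__":
    prompts = ["prompt A", "prompt B"]
    procs = [Process(target=worker, args=(i, p)) for i, p in enumerate(prompts)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```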
-
**Describe the bug**
When a user disables their CPU and/or GPU Power in the mining settings, they get stuck in a loading state when they enable the CPU and/or GPU Power again.
**To Reproduce**
St…
-
### Before continuing...
- [X] I agree to follow Atlas' [Code of Conduct](https://github.com/Atlas-OS/.github/blob/main/profile/CODE_OF_CONDUCT.md)
- [X] I have searched our [issue tracker](https:…
-
Hi,
Thank you for your good work for the community. Can I ask about the settings for training VILA-U? For example, GPU type, quantity, and days.
Best regards,
BAI Fan
-
### Jan version
0.5.3
### Describe the Bug
I imported many models and some of them fail to load if I select both of my graphics cards (RTX 3060 12GB).
If I unselect one of them, the…