-
### Describe the question.
nvidia.dali.fn.readers.webdataset supports reading from multiple tar files, specified as [a list of paths](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/operati…
-
### Voice Changer Version
v.1.5.3.15 onexcpu-cuda
### Operational System
Windows 10 Pro
### GPU
none
### Read carefully and check the options
- [ ] I've tried to Clear Settings
- [ ] Sample/Def…
-
Hi,
I've just spent way too long deducing the subtleties of calling `alpaka::memset`: Down in the [GPU path](https://github.com/alpaka-group/alpaka/blob/73642da70d35d0936b57cceddda108055ce86c87/inclu…
-
This happens to me with "Phi-3-mini-4k-instruct-q4f32_1-MLC-1k" and "gemma-2b-it-q4f32_1-MLC-1k" after updating my GPU drivers to:
> Intel(R) UHD Graphics 630
>
> Driver version: 31.0.101.2115
…
-
What is this memory overflow related to? My graphics card has 8 GB of video memory, so why does loading the model stop at 8% and then terminate? Can you help analyze the reason? The foll…
-
### What is the issue?
On aarch64 it can pull the model, but it then fails with ‘Error: no suitable llama servers found’. Here is the log:
Sep 28 08:43:15 orangepi5 ollama[2563639]: Couldn't find '/usr/share/oll…
-
### 🐛 Describe the bug
Under specific inputs, torch._add_batch_dim raises a spurious INTERNAL ASSERT FAILED, accompanied by the message: "please report a bug to PyTorch."
Minimal example:
```python
im…
```
x0w3n updated 2 weeks ago
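For context, a hedged sketch of a well-formed call to this internal helper (the crashing input itself is truncated above, so the values here are illustrative placeholders, not the failing case):

```python
# Hedged sketch: torch._add_batch_dim is an internal helper used by vmap to
# wrap one dimension of a tensor as a batch dimension. The arguments
# (tensor, batch_dim, level) below are placeholder values.
import torch

t = torch.randn(3, 4)
bt = torch._add_batch_dim(t, 0, 1)  # wrap dim 0 at vmap level 1
print(type(bt))
```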
-
we need:
beta. -> beta server
wiki. -> wiki
dev. -> a production-ready instance, but with no security layer and separate DBs
gpu. -> [TBD as we are working on bringing the GPU cluster online]
vault. -> to some sort …
-
### Your current environment
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS…
-
### Your current environment
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (U…