-
Hi there,
Thanks for this awesome software.
I saw a couple of comments about making HD-BET compatible with Apple GPUs, but it was not clear whether that is the case. Can you confirm?
Thank…
-
When the Hierarchical Depth Buffer is enabled (either for the old SSAO, or the new Filmic Post FX passes), I see problems on Integrated GPUs as follows:
1. On the master branch when running Vulkan …
-
I'm enthusiastic about contributing my machine's spare processing power to this project. Currently, I'm using Apple Silicon devices, and while the client works well for CPU-based folding, I wondered i…
-
Hi,
Just getting started with MACE but am really digging it! I was excited to see that you support Apple GPUs, but is that only for training? When I try to use a `mace_off()` or `mace_mp()` ASE cal…
-
To determine which GPUs people need for their projects, it would be great to have benchmarks for the GPUs that you test dorado on. If you expand the table you already have with some values for the…
-
### What is the issue?
With the same code on the same machine:
Apple M2 Pro
macOS 15.1.1 (24B91)
```python
import time
import ollama

start_time = time.perf_counter()
#len(final_chat_messa…
```
-
### What happened?
I have 2 GPUs in an iMac 2017 with 64 GB RAM, one of which is connected through an eGPU. llama-cli (via ggml) always picks the lower-performance GPU. How can I make it use the faster GPU, or both?
### …
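Device selection in llama.cpp is usually controlled from the command line. A sketch of the relevant flags, assuming a recent llama-cli build (flag names can vary between versions, and `model.gguf` is a placeholder path):

```shell
# Put the main work on a specific GPU (-mg / --main-gpu, 0-based index)
# and keep all layers on that one device (--split-mode none).
llama-cli -m model.gguf -ngl 99 --main-gpu 1 --split-mode none

# Or spread layers across both GPUs, weighting the split toward the
# faster card with --tensor-split (here 3:1).
llama-cli -m model.gguf -ngl 99 --split-mode layer --tensor-split 3,1
```

`llama-cli --list-devices` (where supported) shows the index each GPU gets, which is what `--main-gpu` and `--tensor-split` refer to.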
-
### 🐛 Describe the bug
In the example, a larger tensor is filled with False except for seven entries that are True. The torch.where() method finds True in the wrong place, in a seemingly random way one …
-
### What is the issue?
`ollama run gemma2:2b`
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/gemma2/manifests/2b": write tcp [2601:19b:0:b8a0:915f:c8c:3de4…
-
When the whisper model is loaded, it prints a lot of initialization information to the console. I'd like to be able to redirect this to a separate log file and silence the console output.
`llama-c…
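When the initialization chatter comes from native C/C++ code (as with whisper.cpp bindings), redirecting `sys.stderr` at the Python level does nothing, because the native code writes straight to file descriptor 2. A minimal sketch of an fd-level workaround, assuming the load happens inside a Python process (the loader call you wrap is whatever your bindings provide):

```python
import os
import sys
from contextlib import contextmanager

@contextmanager
def stderr_to_file(path):
    """Send everything written to file descriptor 2 (stderr) to `path`.

    Redirecting at the fd level also captures output from native
    C/C++ code, which bypasses Python's sys.stderr object.
    """
    sys.stderr.flush()
    saved = os.dup(2)                       # keep a copy of the real stderr
    log_fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.dup2(log_fd, 2)                  # fd 2 now points at the log file
        yield
    finally:
        os.dup2(saved, 2)                   # restore the original stderr
        os.close(saved)
        os.close(log_fd)
```

Wrapping only the model-load call keeps later errors on the console, e.g. `with stderr_to_file("whisper_init.log"): model = load_model(...)`, where `load_model` stands in for whichever loader is in use.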