-
### 🐛 Describe the bug
Blocking `send` and `recv` are non-blocking in practice for the NCCL backend. The first send/recv pair blocks, possibly due to warmup with the NCCLUniqueID exchange, but not the su…
-
1. Please check that no similar bug has already been reported. Have a look at the list of open bugs at https://github.com/anbox/anbox/issues
2. Make sure you are running the latest version of Anbox before …
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
The `webui` loading screen is shown in the browser, but l…
-
1. Please check that no similar bug has already been reported. Have a look at the list of open bugs at https://github.com/anbox/anbox/issues
Checked. All previous issues were empty and marked as "decaying"…
-
`anbox system-info` output:
```
version: 4
snap-revision: 186
cpu:
  arch: x86
  brand: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
  features:
    - aes
    - sse4_1
    - sse4_2
    - avx…
-
```
$ anbox system-info
version: local-c8a760c
cpu:
  arch: x86
  brand: Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
  features:
    - aes
    - sse4_1
    - sse4_2
    - avx
    - avx2
os:
…
-
(reopening because #1215 went stale)
6. **Please paste the result of `anbox system-info` below:**
`anbox system-info` output:
```
version: local-73804a3
cpu:
  arch: x86
  brand: AMD Ryzen…
-
### 🐛 Describe the bug
Unable to convert a 30B model to ONNX. I am using 4x A100s, 500 GB RAM, 2.5 TB memory, and I am still running out of memory.
Here's the repro:
I believe this is reproducible in any…
-
# Current Behavior
I run the following:
```
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --verbose
```
An error occurred:
```
ERROR: Failed building wheel for llama-cpp-python
```
# Environment …
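A wheel-build failure with `-DGGML_CUDA=on` is often a missing or invisible CUDA toolchain rather than a bug in llama-cpp-python itself. As a quick sanity check before retrying the build, the sketch below (assuming a POSIX shell, and that an installed CUDA toolkit puts `nvcc` on `PATH`) reports whether CMake will be able to find a CUDA compiler; it changes nothing on the system:

```shell
# Check for the CUDA compiler that CMake needs when GGML_CUDA is on.
if command -v nvcc >/dev/null 2>&1; then
  # Toolkit found: print its version so it can be pasted into the report.
  nvcc --version
else
  # Toolkit missing or not on PATH; CMake's CUDA detection will fail.
  echo "nvcc not found on PATH; install the CUDA toolkit or point CUDACXX at it"
fi
```

If `nvcc` is present, rerunning the install with pip's `--no-cache-dir` flag avoids reusing artifacts from a previously failed build.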
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…