-
Traceback (most recent call last):
File "E:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1931, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_modu…
-
### 🐛 Describe the bug
I am stuck in what seems to be a PyTorch bug. I isolated it down, and the following example code reproduces the hiccup:
```
if not torch.distributed.is_initialized():
**th…
-
Hi, I am getting this issue. I am running it on the following system, and I followed the instructions given in the README.
Windows 11 Home
Intel Core i9
32GB RAM
( I tried with Anaconda and python3.11.9 and…
-
**Description:**
When running the command, a RuntimeError is encountered with the message "unmatched '}' in format string."
Run command
```
torchrun --nproc_per_node 1 -- rd example.py --ckpt_dir…
-
GGML_ASSERT: D:\a\ctransformers\ctransformers\models\ggml/llama.cpp:453: data
I get this error sometimes when loading a model. At first, I thought it was a corrupted model, and I redownloaded it wh…
-
### 🐛 Describe the bug
Referring to the issue https://github.com/mit-han-lab/streaming-llm/issues/37#issue-1940692615
Description:
When running the run_streaming_llama.py script with the --enable…
-
Running on Ubuntu, 32GB RAM.
I get a segmentation fault by running the following code:
```
import sys
import llamacpp
def progress_callback(progress):
    print("Progress: {:.2f}%".format(…
-
## Description
Running `make build` fails in the `astra_agents_dev` container when the project was cloned by a non-root user.
The container user is root, but the directory owner is non-root.
git output `…
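Since the git output is truncated, it is unclear which error fired; a plausible culprit (an assumption, not confirmed by the log) is git's ownership check, which rejects repositories owned by a different user than the one invoking git. A common container-side workaround:

```shell
# Assumption: the hidden git output is the "detected dubious ownership in
# repository" error. Marking the cloned directory as safe for the root
# user inside the container avoids it (the path is illustrative):
git config --global --add safe.directory /path/to/cloned/project
```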
-
### Describe the feature
anyone working on porting llama.cpp to vlang? that'll be something.
### Use Case
llama.cpp being used by vlang
### Proposed Solution
_No response_
### Other Information…
ouvaa updated 6 months ago
-
**LocalAI version:**
Where do I find this? Advise and I'll update.
**Environment, CPU architecture, OS, and Version:**
```sh
$ system_profiler SPHardwareDataType SPSoftwareDataType SPNetworkDataType…
```