-
### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://github.com/OpenDevin/OpenDevin/blob/main/docs/guides/Troubleshooting.md
- [X] I have chec…
-
Any invocation of `python -m sillm.chat model` seems much slower on my machine than in the reference video: more than a minute to get to the prompt, and maybe 1-2 TPM in the response.
I have tried si…
-
I am trying to benchmark new models, e.g.:
`glm-4-9b-chat, , THUDM/glm-4-9b-chat, , , 1, transformers, , ,`
```
python eq-bench.py --benchmarks eq-bench -v -r 1
...
Running benchmark 1 of 1…
-
It seems like the code is forced to run on the CPU (running my computer out of RAM). If I check whether a GPU is available in torch, it says true and that it is using the GPU, but the model still loads into CPU RAM. Looking into th…
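For issues like the one above, it can help to check explicitly where torch thinks the weights live, since `cuda.is_available()` returning true does not mean a model has actually been moved to the GPU. This is a minimal sketch independent of the project in question; the `Linear` layer is a stand-in for a real model.

```python
import torch

# Report whether a CUDA device is visible to torch at all.
print(torch.cuda.is_available())

# A tiny stand-in model; a real checkpoint would behave the same way.
model = torch.nn.Linear(4, 4)

# Even when CUDA is available, weights stay on the CPU until moved explicitly.
print(next(model.parameters()).device)  # cpu

if torch.cuda.is_available():
    model = model.to("cuda")
    print(next(model.parameters()).device)
```

If the second printed device is still `cpu` after the loading code ran, the model was never moved, whatever the availability check says.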
-
Merging the PT LoRA adapter into the base model increased its size to almost double that of the original base model.
I am pre-training Llama 2 for a non-English language.
For this I expanded the tokenizer v…
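A common cause of a merged checkpoint ballooning like this (unrelated to the expanded tokenizer) is the merge being computed, and the result saved, in float32 while the base model was stored in float16: every weight then takes twice as many bytes on disk. A minimal sketch of the effect, using plain torch tensors rather than the actual model:

```python
import torch

# A stand-in for one weight matrix of a float16 base model.
w_fp16 = torch.zeros(1024, 1024, dtype=torch.float16)

# Upcasting to float32 (e.g. during a LoRA merge) doubles bytes per element.
w_fp32 = w_fp16.float()

bytes_fp16 = w_fp16.numel() * w_fp16.element_size()
bytes_fp32 = w_fp32.numel() * w_fp32.element_size()
print(bytes_fp32 // bytes_fp16)  # 2
```

If this is the cause, casting the merged model back to the base dtype (e.g. with `.half()`) before saving restores the original size.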
-
### Context
This task regards enabling tests for Phi-1_5. You can find more details under openvino_notebooks [LLM chatbot README.md](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/no…
-
How do I make my training run on my GPU instead of my CPU? There is a bash command, but that is only for Linux and it doesn't work on Windows. Is there a specific command for Windows to make the training run on the GPU, not c…
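It isn't stated which bash command the question refers to, but if it is the usual pattern of setting `CUDA_VISIBLE_DEVICES` before launching training, the Windows equivalent is `set CUDA_VISIBLE_DEVICES=0` in cmd.exe or `$env:CUDA_VISIBLE_DEVICES = "0"` in PowerShell. A portable alternative is to set it from Python before torch is imported; this is a hedged sketch, and the training entry point is hypothetical:

```python
import os

# Must be set before torch initializes CUDA, i.e. before `import torch`.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

# With the variable set, device 0 is the only GPU torch can see,
# and .to("cuda") will place tensors on it (when a GPU is present).
if torch.cuda.is_available():
    x = torch.ones(2, 2).to("cuda")
    print(x.device)
else:
    print("no visible CUDA device; training will fall back to CPU")
```

Whether the training script honors this still depends on the script itself passing tensors and the model to the CUDA device.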
-
Hi, I found that the current image-generation flow makes it hard to use an input image as a reference:
![image](https://github.com/dvlab-research/MiniGemini/assets/21303438/e0c5a1bd-3ecf-44d7-a596-34465db1a141)
…
-
Hello, I could not make it work on Linux for some reason.
```
(.venv) (base) ubuntu@ubuntu-server:~/llm$ llm_benchmark run
-------Linux----------
error!
╭─────────────────────────────── Traceba…
```
-
dbg-log.org