-
After finetuning, I can convert the .pth to the `official` and `xtuner` formats; however, I cannot convert it to the `huggingface` format because of some errors. Please help me:
```bash
xtuner convert pth_to_hf ll…
-
### What happened?
I run `u:\llama\llama.cpp\build\bin\llama-cli.exe -mli -co -fa -ngl 64 -cnv --chat-template gemma -m llama3-8B-Chinese-Chat-q8.gguf`
on Win11, AMD 7900X, HIP 6.1, VS 2022
cmake -DGGM…
-
Error message:
```
Traceback (most recent call last):
  File "/home/zwj/GitHub/xtuner-main/xtuner/tools/train.py", line 360, in <module>
    main()
  File "/home/zwj/GitHub/xtuner-main/xtuner/tools/train.py", lin…
-
### How are you running AnythingLLM?
Docker (local)
### What happened?
I installed the multimodal LLaVA LLM as it appears in Settings. I saved after installing, but I can't select it. It just shows the three…
-
While rare, `ollama pull` will sometimes result in a digest mismatch on download
```
% ollama run wizard-vicuna-uncensored:30b-q5_K_M
pulling manifest
pulling b1571c5cbd28... 100% |█████████████…
-
Hi, nice work and a well-written paper!
I would like to ask: since you used the `chatml` template when training with ORPO for Llama 3, I want to know what chat template to use when evaluating mod…
-
If I set `--api-key`, I always get an "invalid api key" error from the server.
e.g.
```
python -m sglang.launch_server --model-path lmms-lab/llama3-llava-next-8b --tokenizer-path lmms-lab/llama3-llava-next-8b…
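A minimal sketch of the likely fix, assuming sglang's OpenAI-compatible endpoint: when the server is launched with `--api-key`, each request must carry the same key as a Bearer token in the `Authorization` header, and a missing or mismatched token is the usual cause of "invalid api key". The port, endpoint path, and key value below are assumptions for illustration.

```python
import json
import urllib.request

API_KEY = "sk-example"  # assumption: must match the value passed to --api-key

def build_request(prompt: str, base_url: str = "http://localhost:30000"):
    """Build an OpenAI-style chat completion request with Bearer auth."""
    payload = {
        "model": "default",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            # Without this header (or with a different key), the server
            # rejects the request as an invalid api key.
            "Authorization": f"Bearer {API_KEY}",
        },
    )
```

If the client is an OpenAI SDK, the equivalent is setting its `api_key` to the same value used at launch.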
-
### Issue
I encountered multiple issues while trying to use the deepseek-coder-v2 model with the Aider tool. Despite having the model deepseek-coder-v2:latest installed locally via Ollama, I am unabl…
-
Thanks for the great work! I'm trying out your prompt with a [llava hf space](https://huggingface.co/spaces/badayvedat/LLaVA). However, rather than directly getting to the point ("A [object name] ..."…
-
Please help: why can I run Llama 3.1 locally with Ollama, but ComfyUI gives an error when I enter the model name?
![Snipaste_2024-07-28_02-55-25](https://github.com/user-attachments/assets/d6d0311a-0ec1-4ab5-9f6…