-
Hello, even after specifying a local path I still get a download attempt from HuggingFace, which then errors out. I'd like to know which setting is wrong. I want to train llava-llama3-8b.
- Command: `NPROC_PER_NODE=1 xtuner train llava_llama3_8b_instruct_quant_clip_vit_large_p14_336_e1_gpu1_pretrain --deepspeed deepspe…
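A common cause of this is that the built-in config still names HuggingFace repo ids. One sketch of a fix, assuming the config exposes the usual xtuner llava path variables (`llm_name_or_path`, `visual_encoder_name_or_path` — verify against your copied config): run `xtuner copy-cfg <config_name> .` and edit the copy so the paths point at local directories (the paths below are examples):

```python
# Hypothetical excerpt of a config obtained with `xtuner copy-cfg ... .`.
# Replacing the HuggingFace repo ids with local absolute paths should
# stop the trainer from resolving these names on the Hub.
llm_name_or_path = '/data/models/Meta-Llama-3-8B-Instruct'          # example path
visual_encoder_name_or_path = '/data/models/clip-vit-large-patch14-336'  # example path
```

Then pass the copied config file to `xtuner train` instead of the built-in config name.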
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
### Describe the bug
The latest lmdeploy 0.4.1
Th…
-
# Bug Report
## Description
**Bug Summary:**
Possibly since v0.1.123, image generation using AUTOMATIC1111 doesn't work.
**Steps to Reproduce:**
1. Click the image generation button
2. Error
…
-
When doing SFT with llava, how do I add special tokens? Mainly: after `tokenizer.add_tokens`, what needs to be done on the model side?
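With Hugging Face transformers the usual pattern is `tokenizer.add_tokens([...], special_tokens=True)` followed by `model.resize_token_embeddings(len(tokenizer))`, so the embedding matrix grows to cover the new ids. A minimal sketch of what that resize does, using a plain list-of-rows embedding table instead of a real model (toy data, not the actual llava code):

```python
import random

def resize_embeddings(table, new_vocab_size, dim):
    """Keep existing rows, append randomly initialised rows for new tokens
    (this is roughly what model.resize_token_embeddings does)."""
    while len(table) < new_vocab_size:
        table.append([random.gauss(0.0, 0.02) for _ in range(dim)])
    return table

vocab = {"hello": 0, "world": 1}
table = [[0.1, 0.2], [0.3, 0.4]]      # one 2-d embedding row per token

# "add_tokens": extend the vocab with new special tokens
for tok in ["<img>", "</img>"]:
    vocab[tok] = len(vocab)

# "resize_token_embeddings": grow the table to match the new vocab size
table = resize_embeddings(table, len(vocab), dim=2)
print(len(table))  # → 4
```

If the resize step is skipped, any input containing the new token ids indexes past the end of the embedding matrix and training crashes.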
-
Whatever URL I open, it does not bring back search results.
After some time it will only do an image analysis.
URL I tried: https://www.youtube.com/watch?v=RPL2CGoI1I4
-
It would be good to replace Llama 2 with Llama 3, since Llama 2 is a very old model now.
-
A.I says:
I encountered an error while trying to use the tool. This was the error: SerperDevTool._run() missing 1 required positional argument: 'search_query'.
Tool Search the internet accepts thes…
-
### What is the issue?
brett@brett:~$ ollama pull llama3.2
Error: registry.ollama.ai/library/phi3:latest: EOF
Really confused. This is not an out-of-memory error. Tried resetting the systemc…
-
Error occurred when executing IF_ChatPrompt:
Invalid model selected: for engine ollama. Available models: []
File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyU…
-
My openui runs in an Ubuntu 18 VMware Workstation VM at 192.168.1.169; my Ollama server and models are on the physical host at 192.168.1.103. How can I use the Ollama models from openui inside the VM?
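Assuming "openui" here means Open WebUI, the usual pattern is: make Ollama on the host listen on all interfaces (by default it binds only to 127.0.0.1), then point the UI in the VM at the host's address. A configuration sketch using the two IPs from the report:

```shell
# On the physical host (192.168.1.103): make Ollama listen on all
# interfaces instead of only localhost, then restart it.
export OLLAMA_HOST=0.0.0.0
ollama serve

# In the VM (192.168.1.169): point the UI at the host's Ollama endpoint
# before starting it (Open WebUI reads OLLAMA_BASE_URL).
export OLLAMA_BASE_URL=http://192.168.1.103:11434

# Quick connectivity check from inside the VM — this should return the
# model list as JSON if the host is reachable:
curl http://192.168.1.103:11434/api/tags
```

If the `curl` check fails, a host firewall or VMware network mode (NAT vs. bridged) is usually the culprit rather than Ollama itself.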