-
Hello, after training with QLoRA I got a checkpoint produced under:
```
ll output/lora_vision_test/
adapter_config.json
adapter_model.safetensors
checkpoint-178/
config.json
non_lora_state_dict.bin
…
```
-
Hi, thanks for your great work!
Currently, using the standard code from transformers, I can train Phi-3, but only with a batch size of 1. Can I ask specifically what change was needed to make it work …
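One change that commonly unblocks batch sizes above 1 is a collator that pads every sequence in the batch to a shared length with a matching attention mask. This is only a minimal pure-Python sketch of that idea, not the actual fix used here; the pad id and the helper name `pad_batch` are illustrative assumptions:

```python
def pad_batch(sequences, pad_id=0):
    """Pad variable-length token-id lists to the batch's max length.

    Returns padded input ids and an attention mask
    (1 for real tokens, 0 for padding). pad_id is an
    assumed placeholder, not Phi-3's actual pad token.
    """
    max_len = max(len(seq) for seq in sequences)
    input_ids = [seq + [pad_id] * (max_len - len(seq)) for seq in sequences]
    attention_mask = [
        [1] * len(seq) + [0] * (max_len - len(seq)) for seq in sequences
    ]
    return input_ids, attention_mask

ids, mask = pad_batch([[5, 6, 7], [8, 9]])
# ids  -> [[5, 6, 7], [8, 9, 0]]
# mask -> [[1, 1, 1], [1, 1, 0]]
```

In the Transformers ecosystem this role is usually played by a data collator passed to the trainer; without one, sequences of different lengths cannot be stacked into a batch.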
-
### 📚 The doc issue
![issue](https://github.com/InternLM/lmdeploy/assets/120365110/e96b9d5f-9d1f-4e77-a0e7-c31c7e5c70c3)
AssertionError: 'internlm2_5-7b-chat' is not supported. The supported models…
-
### What happened?
I'm using Fedora 40 and installed it today.
I am using the ollama model codegemma, but after it finishes answering
I get this message:
…
-
```python
Traceback (most recent call last):
  File "/export/App/training_platform/PinoModel/xtuner/xtuner/configs/llava/phi3_mini_4k_v16/convert_xtuner_weights_to_llava.py", line 99, in
  ma…
```
-
First of all, thank you for this app. It works great and the UI is top notch.
I have been using your app for the last 4-5 hours, particularly for chat, and it works great.
I tried the vision model (llava-phi3) n…
-
- [ ] I have read and agree to the [contributing guidelines](https://github.com/griptape-ai/griptape#contributing).
**Describe the bug**
When I use the node, the prompt model list does not appear
…
-
I have an Intel CPU that supports a number of AVX features, but most of them are not picked up when using ollama. Below is the llama.log file:
system info: AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_…
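Flag lines in that format can be checked programmatically rather than by eye. A small sketch that parses a llama.cpp-style `system info` line into a dict of booleans; the function name `parse_system_info` is illustrative, and the parsing assumes the `NAME = 0/1 | …` layout shown in the log above:

```python
def parse_system_info(line):
    """Parse a llama.cpp 'system info' line into {feature: enabled}."""
    _, _, flags = line.partition(":")
    features = {}
    for part in flags.split("|"):
        if "=" in part:
            name, _, value = part.partition("=")
            features[name.strip()] = value.strip() == "1"
    return features

info = parse_system_info("system info: AVX = 1 | AVX2 = 0 | AVX512 = 0")
# info -> {'AVX': True, 'AVX2': False, 'AVX512': False}
```

A quick check like `info["AVX2"]` then tells you whether the binary was built with that instruction set enabled, which is usually the reason advertised CPU features go unused.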
-
Hi, I'm trying to perform SFT training with Phi-3-vision. I followed the llava example here: https://github.com/huggingface/trl/blob/main/examples/scripts/vsft_llava.py. That however didn't work o…
-
I'm wondering what causes this error.
Do I have to set --version phi3 during the pre-training stage? I use --version plain in the pre-training stage and --version phi3 in the fine-tuning stage. Is this the correct s…