-
For every model I've downloaded, the speed saturates my bandwidth (~13 MB/s) until it hits 98/99%. Then the download slows to a few tens of KB/s and takes hours to finish.
I've tried multipl…
-
In [this colab](https://colab.research.google.com/drive/17XEqL1JcmVWjHkT-WczdYkJlNINacwG7?usp=sharing#scrollTo=2QK51MtdsMLu) you show how to load an adapter and merge it with the initial model. Notice it loa…
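As background on what merging an adapter into the initial model means mechanically: the low-rank update is folded back into the base weight as W' = W + (alpha/r)·B·A. The numpy sketch below illustrates only that arithmetic; it is not the PEFT API itself, and all shapes are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16   # toy sizes; alpha/r is the LoRA scaling

W = rng.normal(size=(d_out, d_in))    # frozen base weight
A = rng.normal(size=(r, d_in))        # LoRA down-projection
B = rng.normal(size=(d_out, r))       # LoRA up-projection (real LoRA zero-inits this)
scale = alpha / r

def lora_forward(x, W, A, B, scale):
    # During training, the low-rank adapter runs alongside the frozen layer.
    return x @ W.T + scale * (x @ A.T @ B.T)

def merge(W, A, B, scale):
    # Merging folds the adapter into the base weight once, so inference
    # needs no extra matmuls: W' = W + (alpha/r) * B @ A.
    return W + scale * (B @ A)

x = rng.normal(size=(4, d_in))
W_merged = merge(W, A, B, scale)

# The merged weight reproduces base + adapter exactly.
assert np.allclose(lora_forward(x, W, A, B, scale), x @ W_merged.T)
```

This is why a merged model has no adapter files left: the adapter's effect now lives inside the base weights.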
-
Hi, thanks for your great work!
Currently, using standard transformers code, I can train Phi-3, but only with a batch size of 1. Can I ask specifically what change was needed to make it work …
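One common workaround when only batch size 1 fits in memory is gradient accumulation: run several size-1 micro-batches and average their gradients before the optimizer step, which reproduces the full-batch gradient. The numpy sketch below shows only that arithmetic (it is not the Phi-3-specific change the question asks about):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))   # 8 samples, 3 features
y = rng.normal(size=8)
w = rng.normal(size=3)

def mse_grad(Xb, yb, w):
    # Gradient of mean squared error for a linear model on one (micro-)batch.
    n = len(yb)
    return (2.0 / n) * Xb.T @ (Xb @ w - yb)

full_batch = mse_grad(X, y, w)

accumulated = np.zeros_like(w)
for i in range(len(y)):                 # micro-batches of size 1
    accumulated += mse_grad(X[i:i+1], y[i:i+1], w)
accumulated /= len(y)                   # average before the optimizer step

# Accumulated micro-batch gradients equal the full-batch gradient.
assert np.allclose(full_batch, accumulated)
```

In the transformers `Trainer`, the same idea is exposed as the `gradient_accumulation_steps` training argument, at the cost of more forward/backward passes instead of more memory.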
-
### What happened?
I'm using Fedora 40 and installed it today.
I am using the Ollama model codegemma, but at the end, after it answers,
I get this message
…
-
Hello, after QLoRA training I got the produced checkpoint under:
```
ll output/lora_vision_test/
adapter_config.json
adapter_model.safetensors
checkpoint-178/
config.json
non_lora_state_dict.bin
…
```
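For readers wondering which of those files matter for loading: a PEFT adapter directory is usable when it contains `adapter_config.json` plus the adapter weights (`adapter_model.safetensors` or the older `adapter_model.bin`). The small helper below is hypothetical (`looks_like_peft_adapter` is not a PEFT API), just a sketch of that file-level check:

```python
import os
import tempfile

ADAPTER_WEIGHT_FILES = ("adapter_model.safetensors", "adapter_model.bin")

def looks_like_peft_adapter(path):
    # A directory is loadable as a PEFT adapter when it holds the adapter
    # config plus one of the known weight files.
    has_config = os.path.isfile(os.path.join(path, "adapter_config.json"))
    has_weights = any(
        os.path.isfile(os.path.join(path, f)) for f in ADAPTER_WEIGHT_FILES
    )
    return has_config and has_weights

# Demo on a throwaway directory mimicking the listing above.
with tempfile.TemporaryDirectory() as d:
    for name in ("adapter_config.json", "adapter_model.safetensors"):
        open(os.path.join(d, name), "w").close()
    assert looks_like_peft_adapter(d)
```

Files like `non_lora_state_dict.bin` are extra state saved by the training script, not something PEFT itself reads when loading the adapter.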
-
### 📚 The doc issue
![issue](https://github.com/InternLM/lmdeploy/assets/120365110/e96b9d5f-9d1f-4e77-a0e7-c31c7e5c70c3)
AssertionError: 'internlm2_5-7b-chat' is not supported. The supported models…
-
```python
Traceback (most recent call last):
  File "/export/App/training_platform/PinoModel/xtuner/xtuner/configs/llava/phi3_mini_4k_v16/convert_xtuner_weights_to_llava.py", line 99, in
    ma…
```
-
Is it working right now in any way?
-
First of all, thank you for this app. It works great and the UI is top notch.
I have been using your app for the last 4-5 hours, particularly for chat, and it works great.
I tried the vision model (llava-phi3) n…
-
Hi, I'm trying to perform SFT training with Phi-3-vision. I followed the LLaVA example here: https://github.com/huggingface/trl/blob/main/examples/scripts/vsft_llava.py. That, however, didn't work o…