-
Hi,
Thank you for sharing this great work.
I'm trying to reproduce the performance of InternLM-XComposer2 + DPO + VLFeedback, but I found that the baseline performance ([InternLM-Xcomposer2-VL-7b](h…
-
Hi, thanks for your great work! I'm fine-tuning InternLM-XComposer2 (unfreezing the projection layer and the whole LLM, freezing the ViT). To avoid OOM, I use ZeRO-3 and offload the optimizer to CPU (by modifying …
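For reference, a minimal sketch of the kind of DeepSpeed setup being described, ZeRO-3 with the optimizer offloaded to CPU; the surrounding values are illustrative assumptions, not the configuration from this issue:
```python
# Sketch of a DeepSpeed config enabling ZeRO-3 with optimizer offload to CPU.
# The "auto" placeholders and extra flags are assumptions, not the poster's config.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "bf16": {"enabled": "auto"},
    "zero_optimization": {
        "stage": 3,                  # ZeRO-3: partition params, grads, optimizer states
        "offload_optimizer": {       # keep optimizer states in CPU memory to avoid OOM
            "device": "cpu",
            "pin_memory": True,
        },
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
}
```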
-
Hello, I see the web UI uses an external model API. How should I start the service that the LMDEPLOY_IP endpoint points to, and are there any requirements?
```python
LMDEPLOY_IP = '0.0.0.0:23333'
MODEL_NAME = 'internlm2-chat-7b'
```
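For context, LMDEPLOY_IP normally points at an LMDeploy api_server. A minimal sketch of starting it and checking it from Python, assuming the standard `lmdeploy serve api_server` CLI and its OpenAI-compatible REST endpoint (the port and model name here just mirror the config above):
```python
# Sketch: bring up an LMDeploy server and query it over its OpenAI-compatible API.
# Start the server first, e.g.:
#   lmdeploy serve api_server internlm/internlm2-chat-7b --server-name 0.0.0.0 --server-port 23333
import requests

LMDEPLOY_IP = "0.0.0.0:23333"
MODEL_NAME = "internlm2-chat-7b"

resp = requests.post(
    f"http://{LMDEPLOY_IP}/v1/chat/completions",
    json={
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=60,
)
print(resp.json())
```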
-
model: https://huggingface.co/internlm/internlm-xcomposer2-4khd-7b
### Reproduction code:
```python
import torch
from transformers import AutoModel, AutoTokenizer
torch.set_grad_enabled(False)
# i…
```
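For context, a minimal loading sketch in the spirit of the 4KHD model card; the `<ImageHere>` prompt format and the `model.chat` signature are assumptions taken from memory of that card, not from this issue:
```python
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# Load the 4KHD model; trust_remote_code pulls the custom modelling code from the Hub.
ckpt = "internlm/internlm-xcomposer2-4khd-7b"
model = AutoModel.from_pretrained(
    ckpt, torch_dtype=torch.bfloat16, trust_remote_code=True
).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)

# Query format and chat() arguments follow the model card (assumed, not verified here).
query = "<ImageHere>Describe this image in detail."
image = "example.jpg"  # path to a local test image
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    response, _ = model.chat(
        tokenizer, query=query, image=image, history=[], do_sample=False
    )
print(response)
```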
-
model: internlm/internlm-xcomposer2d5-7b
`self._model = accelerator.prepare(self.model)`
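The quoted line looks like the Accelerate wrapping step inside an evaluation harness; a self-contained sketch of that pattern, with the class name and attributes assumed for illustration (only `accelerator.prepare` comes from the issue):
```python
# Sketch of wrapping a Hugging Face model with Accelerate, as in the quoted line.
# Class and attribute names are illustrative; only accelerator.prepare() is from the issue.
import torch
from accelerate import Accelerator
from transformers import AutoModel

class XComposerHarness:
    def __init__(self, path: str = "internlm/internlm-xcomposer2d5-7b"):
        accelerator = Accelerator()
        self.model = AutoModel.from_pretrained(
            path, torch_dtype=torch.bfloat16, trust_remote_code=True
        )
        # Place/shard the model according to how accelerate was launched
        # (single GPU, DDP, or DeepSpeed), then run inference through self._model.
        self._model = accelerator.prepare(self.model)
```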
-
I'm trying to run your model on a GCP L4 (Vertex Workbench defaults), but it always crashes with assertion failures and no additional context. Is there something I'm doing wrong?
Python / Cu…
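A quick environment check along these lines usually surfaces the Python/CUDA/GPU details such assertion failures hinge on (a generic sketch, nothing specific to this report):
```python
# Sketch: print the runtime details relevant to GPU assertion failures on an L4.
import sys
import torch

print("python:", sys.version)
print("torch:", torch.__version__, "cuda:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))
    print("bf16 supported:", torch.cuda.is_bf16_supported())
```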
-
I have used CogVLM v7 (through a guy on Patreon) to caption 120,000 images and it does a very good job; it even handles some uncensored photos (in detail) if I use a special prompt with English a…
-
CUDA_VISIBLE_DEVICES=0 python /home/ubuntu/TextToSQL/DB-GPT-Hub/src/dbgpt-hub-sql/dbgpt_hub_sql/train/sft_train.py \
--model_name_or_path /home/ubuntu/.cache/modelscope/hub/qwen/Qwen2___5-Coder-7B…
-
The README links to this site:
https://huggingface.co/internlm/internlm-xcomposer2-vl-7b-4bit
But what do I need to download from that site?
The README also says it will work automatically… but I …
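For what it's worth, nothing on that page has to be downloaded by hand: `from_pretrained` or `snapshot_download` pulls the whole repo into the local Hugging Face cache. A minimal sketch, assuming standard `huggingface_hub` behaviour rather than anything specific to this README:
```python
# Sketch: the 4-bit repo is fetched automatically into the Hugging Face cache;
# no manual download from the model page is needed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="internlm/internlm-xcomposer2-vl-7b-4bit")
print("weights cached at:", local_dir)

# Loading then follows that repo's model card (its custom GPTQ loading code is
# pulled in via trust_remote_code when using transformers).
```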
-
python others/test_diff_vlm/InternLM_XComposer.py
Set max length to 16384
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████…