-
The command executed is as follows:
```bash
torchrun --nproc_per_node=8 /home/jn/th/work/Multimodal-GPT/mmgpt/train/instruction_finetune.py \
  --lm_path /home/jn/th/work/Multimodal-GPT/checkpoints/llama-7b_hf \
  --tokenizer_path /ho…
```
-
### Feature request
Quite recently, I was exploring zero-shot classification to segment medical images, and it looks quite promising. I stumbled upon `ClipSeg` a few days ago and it looked wonder…
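For reference, here is a minimal sketch of zero-shot segmentation with CLIPSeg through the Hugging Face `transformers` API. The checkpoint name, the input image, and the text prompts are illustrative placeholders, not values from this request.

```python
# Minimal sketch: zero-shot segmentation with CLIPSeg via Hugging Face transformers.
# Checkpoint name, image path, and prompts are illustrative placeholders.
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("scan.png").convert("RGB")   # hypothetical input image
prompts = ["tumor", "healthy tissue"]           # free-text region descriptions

inputs = processor(text=prompts, images=[image] * len(prompts),
                   padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits has shape (num_prompts, H, W); a sigmoid gives one soft mask per prompt.
masks = torch.sigmoid(outputs.logits)
binary_masks = masks > 0.5
```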
-
Hi, it's nice to see your code released. Excellent work!
In the instructions, there is an argument used when evaluating kitti2015: --split_file checkpoint/kitti2015_ck/split.txt
I cannot see the …
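In case it helps while the file is missing, below is a purely hypothetical sketch of regenerating such a split file. It assumes the file is just a newline-separated list of KITTI 2015 training frame indices; that format is an assumption, since the repo does not document it here.

```python
# Hypothetical sketch: regenerate a split.txt as a newline-separated list of
# KITTI 2015 frame indices. The file format and the held-out fraction are
# assumptions, not taken from the repository.
VAL_EVERY = 5  # hold out every 5th of the 200 training frames (assumed split)

with open("split.txt", "w") as f:
    for idx in range(200):            # KITTI 2015 provides 200 training pairs
        if idx % VAL_EVERY == 0:
            f.write(f"{idx:06d}\n")   # e.g. "000005"-style frame ids (assumed)
```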
-
I would like to know how to evaluate the performance of InstructBLIP on VQA tasks, because after instruction tuning it is hard for the model to generate short answers, so using the original VQA metric s…
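For context, the original VQA metric referred to here is the standard open-ended VQA accuracy rule, where an answer scores min(#matching human answers / 3, 1), averaged over the ten leave-one-out subsets of the human annotations. A minimal sketch is below; real evaluation also normalizes answers (lowercasing, stripping articles and punctuation), which is omitted here for brevity.

```python
# Minimal sketch of the standard open-ended VQA accuracy rule:
# acc = min(#annotators giving this answer / 3, 1), averaged over the ten
# leave-one-out subsets of the human answers. Answer normalization is omitted.
def vqa_accuracy(prediction: str, human_answers: list[str]) -> float:
    accs = []
    for i in range(len(human_answers)):
        others = human_answers[:i] + human_answers[i + 1:]
        matches = sum(a == prediction for a in others)
        accs.append(min(matches / 3.0, 1.0))
    return sum(accs) / len(accs)

# Example: a short answer that matches 8 of the 10 human annotations.
print(vqa_accuracy("yes", ["yes"] * 8 + ["no"] * 2))  # -> 1.0
```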
-
Legal professionals often grapple with navigating lengthy legal judgements to pinpoint information that directly addresses their queries. This paper focuses on the task of extracting relevant paragraphs …
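To make the task concrete, here is a generic dense-retrieval baseline that ranks judgement paragraphs against a query; it is not the paper's method, and the model name and texts are illustrative placeholders.

```python
# Generic baseline (not the paper's method): rank judgement paragraphs by
# cosine similarity to a legal query using sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

query = "What relief was granted for breach of contract?"
paragraphs = [
    "The plaintiff sought damages for the defendant's failure to deliver the goods.",
    "On the question of costs, the court held that each party bears its own.",
    "The court awarded compensatory damages of $50,000 for the breach.",
]

q_emb = model.encode(query, convert_to_tensor=True)
p_emb = model.encode(paragraphs, convert_to_tensor=True)
scores = util.cos_sim(q_emb, p_emb)[0]

# Print paragraphs from most to least relevant to the query.
for score, para in sorted(zip(scores.tolist(), paragraphs), reverse=True):
    print(f"{score:.3f}  {para[:60]}")
```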
-
```
The main feedback I have is around the tuning. It was a bit of a pain; as
you correctly pointed out, a 'special insulated device' was required. First
I tried tuning Niftymitter and then the radio - …
-
I followed the instructions in the README. However, running the command below gives an error.
```
python insults.py --competition
```
```
Traceback (most recent call last):
  File "insults.py", lin…
-
Dear authors,
Great work, thanks for sharing.
I am trying to fine-tune bge-reranker-v2-gemma using my own dataset.
However, according to the official fine-tuning command provided:
```bash
…
-
Well, it's not an issue; I just have a question and am curious to know:
Hi, can we use this to generate YouTube Shorts videos from an input video?
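Independent of whatever this project itself supports, a common way to produce a Shorts-style clip is simply to trim the source to under 60 seconds and center-crop it to a 9:16 vertical frame. Below is a generic sketch that shells out to ffmpeg; the file names and timestamps are placeholders, and ffmpeg must be installed separately.

```python
# Generic sketch (not this project's API): trim a clip to under 60 s and
# center-crop it to a 9:16 vertical frame with ffmpeg. File names and the
# start time are placeholders; ffmpeg must be available on PATH.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "input.mp4",
    "-ss", "00:00:10",          # start 10 s into the source clip
    "-t", "59",                 # keep at most 59 seconds
    "-vf", "crop=ih*9/16:ih",   # crop width to 9/16 of the height, centered
    "-c:a", "copy",             # keep the original audio stream
    "short.mp4",
], check=True)
```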
-
I can't find any instructions or a config for fine-tuning Open LLaMA 3B. It seems that EasyLM doesn't support a 3B config option. Am I missing something?