-
Hi, I was prompt-tuning "glip_large_model.pth" with the following command:
`python -m torch.distributed.launch --nproc_per_node=1 tools/finetune.py \
--config-file configs/pretrain/glip_Swin_…
-
Hello,
We successfully fine-tuned the Mistral-7B-Instruct-v0.3 model on a single GPU, but we ran into issues when trying to use multiple GPUs.
The successful fine-tuning with one GPU (A…
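For reference, the usual multi-GPU entry point is `torchrun` (the maintained replacement for the deprecated `torch.distributed.launch`). A minimal launch sketch follows; the script name and flags are assumptions for illustration, not the actual command from this issue:

```shell
# Hypothetical launch: replace finetune.py and its flags with your own script.
# torchrun spawns one process per GPU and sets the environment variables
# (RANK, LOCAL_RANK, WORLD_SIZE) that torch.distributed expects.
torchrun --nproc_per_node=4 finetune.py \
    --model_name_or_path mistralai/Mistral-7B-Instruct-v0.3 \
    --per_device_train_batch_size 1
```

Note that with N GPUs the effective batch size becomes N times the per-device batch size, which by itself can change results relative to a single-GPU run.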
-
In the page:
https://huggingface.co/docs/peft/main/en/task_guides/clm-prompt-tuning
all the links to the Colab notebooks are broken:
mixed: https://colab.research.google.com/github/huggingface…
-
Hi! Thanks for the great work!
Could you share the configs for fine-tuning Osprey on the RefCOCOg dataset? I am trying to follow your work and reproduce the results on it. What is the starting checkpoint…
-
Hi,
For fine-tuning the current model on other languages, is it better to start from the existing trained model and prompt tokenizer "parler-tts/parler_tts_mini_v0.1", or is it better to train from scratch…
-
Hi - I am working on a chatbot that answers questions from a document using the RAG method. I have used the DSPy framework for prompt tuning. I have experimented with DSPy for our use case and comput…
-
Dear repository owner,
I am reaching out to express my admiration for your repository.
I am the author who recently published a paper titled "Learning Semantic Proxies from Visual Prompts for P…
-
Hi there!
Currently, columns not used by the model are removed in `self.get_*_dataloader()` upon data loader creation, but one might want to have them in `compute_metrics` (when `include_inputs_for…
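To illustrate the behavior being discussed: the Trainer drops any dataset column whose name is not a parameter of the model's `forward()`. A self-contained sketch of that filtering logic (a simplified illustration, not the actual Trainer code; all names here are hypothetical):

```python
import inspect

def drop_unused_columns(batch, forward_fn):
    """Keep only the keys that forward_fn accepts, mimicking (in
    simplified form) how the Trainer prunes dataset columns."""
    accepted = set(inspect.signature(forward_fn).parameters)
    return {k: v for k, v in batch.items() if k in accepted}

# A toy forward() accepting the usual transformer arguments.
def forward(input_ids=None, attention_mask=None, labels=None):
    return None

batch = {"input_ids": [1, 2], "attention_mask": [1, 1], "doc_id": "a7"}
print(drop_unused_columns(batch, forward))
# "doc_id" is filtered out, so it never reaches downstream consumers.
```

Setting `remove_unused_columns=False` in `TrainingArguments` is the usual way to keep the extra columns, at the cost of having to strip them yourself before they reach the model.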
-
Following the steps in prompt_yolo_world.md to fine-tune yolo-world-s on the COCO dataset, the validation mAP does not improve during training. More specifically, the validation mAP at epoch 5 is …
-
Hi, can I ask about the beta and learning rate used for Mistral-7B-Instruct-DPO? I can't reproduce the results in the paper.