-
https://github.com/OpenMOSS/AnyGPT/blame/6404dbafccc10943be6bf6e24a4b99b3a6545501/anygpt/src/m_utils/prompter.py#L45
Hello,
Is this line correct? Is this for speech-to-speech conversation?
In tha…
-
I ran into the same question as before, #71 , #78 .
I modified the config in configs/prompt_tuning_coco/ and generated a custom embedding file to fine-tune on my dataset, which has 4 categories.
When infer…
-
Hi there!
Currently, columns not used by the model are removed in `self.get_*_dataloader()` upon data loader creation, but one might want to have them in `compute_metrics` (when `include_inputs_for…
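For context, a minimal sketch of the workaround available today, assuming a standard Hugging Face `Trainer` setup: `remove_unused_columns=False` keeps the extra columns, and `include_inputs_for_metrics=True` forwards the model inputs to `compute_metrics`. The argument values here are illustrative, not the library's recommended fix.

```python
from transformers import TrainingArguments

# A minimal sketch, assuming a standard Trainer setup. Note that with
# remove_unused_columns=False the extra columns are also passed to the
# model's forward(), which may raise errors if it does not accept them.
args = TrainingArguments(
    output_dir="out",
    remove_unused_columns=False,      # keep columns the model does not use
    include_inputs_for_metrics=True,  # expose inputs in compute_metrics
)

def compute_metrics(eval_pred):
    # eval_pred.inputs is only populated when include_inputs_for_metrics=True
    predictions, labels, inputs = eval_pred.predictions, eval_pred.label_ids, eval_pred.inputs
    return {}
```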
-
Hi everyone,
First of all, let me say a big "THANK YOU" for your work!
I have successfully fine-tuned phi-2. However, I noticed the following:
- Inference with the fused model usually gives …
-
Hi there, does Petals currently support batch processing/parallel processing? For example, to increase resource usage or system throughput, we would like to see servers parallelly processing multiple p…
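For illustration, a minimal client-side sketch of what batched generation would look like, assuming the transformers-compatible Petals client; the model name below is only an example, and whether the swarm actually processes the prompts in parallel on the server side is exactly the question here.

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# A minimal sketch, assuming the transformers-compatible Petals client;
# the model name below is only an example.
model_name = "petals-team/StableBeluga2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Encode several prompts at once and generate for the whole batch.
prompts = ["A cat sat on", "The capital of France is"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
outputs = model.generate(inputs["input_ids"], max_new_tokens=16)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```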
-
Following the steps in prompt_yolo_world.md to fine-tune yolo-world-s on the COCO dataset, the validation mAP does not improve during training. More specifically, the validation mAP in epoch 5 is …
-
Hi,
For fine-tuning the current model to other languages, is it better to use the existing trained model and prompt tokenizer "parler-tts/parler_tts_mini_v0.1", or might it be better to train from scratch…
-
Hi - I am working on a chatbot to answer questions from the document using the RAG method. I have used the DSPy framework for prompt tuning. I have experimented with DSPy for our use case and comput…
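For reference, a minimal sketch of the kind of DSPy program typically tuned for RAG; the retriever, the `context, question -> answer` signature, and the module structure are assumptions for illustration, not the author's actual pipeline.

```python
import dspy

# A minimal RAG module sketch in DSPy; names and the signature are
# illustrative assumptions, not the pipeline described above.
class RAG(dspy.Module):
    def __init__(self, num_passages=3):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=num_passages)
        self.generate_answer = dspy.ChainOfThought("context, question -> answer")

    def forward(self, question):
        context = self.retrieve(question).passages
        prediction = self.generate_answer(context=context, question=question)
        return dspy.Prediction(context=context, answer=prediction.answer)
```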
-
Trying to fine-tune T5-small v1.1 on a single GPU using the sample script ([singlenode_ft_frompile.sh](https://github.com/google-research/t5x/blob/main/t5x/contrib/gpu/scripts_gpu/singlenode_ft_fromp…
-
Hi!
When I queue an image for the first time, it takes significantly longer than subsequent requests. It seems like the issue is related to the applied providers. It shows antelopev2 and buffalo_l in th…
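If the delay comes from loading the antelopev2/buffalo_l packs on first use, a minimal warm-up sketch, assuming the insightface `FaceAnalysis` API (the pack name and providers are placeholders), is to run one dummy detection at startup so the first real request does not pay the loading cost.

```python
import numpy as np
from insightface.app import FaceAnalysis

# A minimal warm-up sketch: load the model pack once at startup and run a
# dummy detection so the first real request does not pay the loading cost.
# The pack name and providers are placeholders.
app = FaceAnalysis(name="buffalo_l",
                   providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))
app.get(np.zeros((640, 640, 3), dtype=np.uint8))  # trigger session init
```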