-
Hi Team,
I have successfully finetuned a QLoRA adapter on a custom dataset. When I load it in full precision, it loads and works well.
But this takes too much time and GPU memory to …
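One common way to cut the memory cost (a sketch, not necessarily this poster's setup: the model ID and adapter path below are placeholders) is to load the base model 4-bit quantized with bitsandbytes and attach the QLoRA adapter on top, instead of loading everything in full precision:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit quantization config (NF4 is the quant type QLoRA training uses)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Placeholder IDs: substitute your own base model and adapter directory
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "./my-qlora-adapter")
```

Since the adapter was trained against a quantized base, inference quality with the 4-bit base is usually close to the full-precision load, at a fraction of the VRAM.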
-
After running this command ```python src/finetune.py --base_model 'mosaicml/mpt-7b-instruct' --data_path 'yahma/alpaca-cleaned' --output_dir './lora-mpt' --lora_target_modules '[Wqkv]'```, I got this …
-
Dubious! I'll start my explanation with this deliberately provocative adjective to make progress on the subject and find my mistake.
On the web you can see a craze for finetuning (with unsloth or o…
-
Hi ollama team. Is it possible to add the ilsp/Meltemi-7B-Instruct-v1-GGUF model to the repository? Thanks in advance.
-
Hi and thanks for the great resources.
I used "train-deploy-llama3.ipynb" and trained a similar Llama3 model as shown in the notebook.
I pushed my model to Hugging Face and now I want to use that …
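If the goal is simply to run the pushed model locally, a minimal sketch (the repo ID is a placeholder, and this assumes the full merged model, not just an adapter, was pushed to the Hub):

```python
from transformers import pipeline

# Placeholder repo ID: replace with your own Hugging Face repo
generator = pipeline(
    "text-generation",
    model="your-username/your-llama3-model",
    device_map="auto",
)
print(generator("Hello, ", max_new_tokens=32)[0]["generated_text"])
```

If only a LoRA adapter was pushed, it would instead need to be loaded on top of the base Llama 3 model with `peft`.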
-
Hey unsloth team, beautiful work being done here.
I am the author of [MachinaScript for Robots](https://github.com/babycommando/machinascript-for-robots) - a framework for building LLM-powered robo…
-
Mistral 7B is really impressive at inference. Thank you so much for open-sourcing it.
May I kindly ask what format is best for finetuning the model?
I read some blog p…
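For what it's worth, a common approach (not necessarily the official recommendation) is to format finetuning samples with the same `[INST] … [/INST]` chat template the instruct model was trained on. A minimal sketch, assuming that template; the helper name here is made up for illustration:

```python
def format_mistral_sample(instruction: str, response: str) -> str:
    """Wrap one training pair in the Mistral-instruct template:

        <s>[INST] instruction [/INST] response</s>
    """
    return f"<s>[INST] {instruction} [/INST] {response}</s>"

# Example training sample
sample = format_mistral_sample(
    "Summarize QLoRA in one sentence.",
    "QLoRA finetunes a quantized base model through low-rank adapters.",
)
```

In practice, `tokenizer.apply_chat_template(...)` from `transformers` is the safer route, since it pulls the exact template shipped with the model's tokenizer.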
-
🎉 Finetuning (VQA/OCR/Grounding/Video) for the Qwen2-VL-Chat series models is now supported; please check the documentation below for details:
# English
https://github.com/modelscope/ms-swift/blob/m…
-
I would like to ask whether I have to create a new Python file for my finetuned model in the 'lmms_eval/models' directory and define a class for the model there, or whether I just need to use the python…