-
Hey unsloth team, beautiful work being done here.
I am the author of [MachinaScript for Robots](https://github.com/babycommando/machinascript-for-robots) - a framework for building LLM-powered robo…
-
Hi, I'm new to embedding and vector search.
When I use the example code with softmax, the score is correct, but when I try to use this model with Qdrant cosine similarity search, the score is very…
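A minimal sketch (plain numpy, not the model's or Qdrant's actual scoring code) of why the two numbers live on different scales: cosine similarity is an absolute per-pair score in [-1, 1], while softmax rescales a set of scores relative to each other so they sum to 1. The same ranking can therefore come with very different-looking magnitudes.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Absolute per-pair score in [-1, 1]; this is the metric a Cosine
    # distance index computes for each query/document pair.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def softmax(scores: np.ndarray) -> np.ndarray:
    # Relative scores over a candidate set; always sum to 1, so the
    # individual values depend on every other candidate in the batch.
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Toy embeddings (made up for illustration).
query = np.array([0.1, 0.9, 0.2])
docs = np.array([[0.1, 0.8, 0.3],
                 [0.9, 0.1, 0.0]])

raw = np.array([cosine_similarity(query, d) for d in docs])
print(raw)           # absolute cosine scores per document
print(softmax(raw))  # same ranking, rescaled to sum to 1
```

So if the example code reports softmax-normalized scores, the raw cosine values returned by a vector database are expected to differ in magnitude even when the ranking agrees.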
-
If we need the best possible on-device performance, can we use a language model with a smaller parameter count? And can SigLIP be swapped for a lighter image encoder? If we switch, how should it be trained and deployed? Thanks!
-
Is this normal on a 16GB RTX 4060?
(venv) g:\caption\joy-caption-batch>g:\caption\joy-caption-batch\venv\Scripts\python.exe batch.py
Captioning Batch Images Initializing...
image_adapter.pt already e…
-
Sorry if this is a silly question, but is it possible for the model to somehow keep a memory of the previous images it received? To put it simply, can I give it different frames from a single video and it …
-
Hi, excellent work!
Have you tried SSL models other than CLIP for semantic tokenizer training?
I also find that the features of SSL models can significantly boost the performance of auto…
-
Hi! Thank you for providing this framework. I found it very useful and helpful!
I am new to the VLM domain and currently using a SigLip vision encoder and Phi-2 LLM as the main components for my task. I was w…
-
I followed the method in the docs for the pretrain and finetune process.
The generated file directory is:
```
-rw-r--r-- 1 work work 11000 Sep 25 08:10 adapter_config.json
-rw-r--r-- 1 work work 323020440 Sep 25 08:10 adapter_model.safetensors
-rw-r--r-…
-
After using the "uncensored" joy config, I get the following error message:
`python caption.py --joy_config configs/uncensored_joy.json /images/fun/RDBMS_Genealogy_V5.jpg
2024-09-09 00:41:12,099 -…
-
I am continuously fine-tuning Bunny 1.1 4B. I have a question about the training code.
In train.py, when calling `model.get_model().initialize_vision_modules(model_args=model_args)`, does it load the…