-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
I have installed all the requirements for Qwen2-VL.
### Reproduction
train_mm_proj_only:True
Hello, I wan…
-
Hi! Thanks for your wonderful work! I have a couple of questions:
1. Could you please share the details of the test dataset you used in the paper to evaluate the reference-based restoration?
2. …
-
Hello,
I used:
python train_motor.py --mode=eval --dataset_dir=resources/motor-evaluation-master/ --loss_mode=cosine-softmax --log_dir=./output/motor/ --run_id=cosine-softmax --eval_log_dir=./eval_…
-
@Qidian213 Hello. nwojke's deep_sort uses a cosine-softmax classifier. Is the Triplet Loss with hard negative mining you mentioned the same triplet-loss approach that nwojke describes in his paper?
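For reference, this is roughly what I understand by triplet loss with hard negative mining: a batch-hard sketch over embeddings with integer identity labels. It is illustrative only, not nwojke's or this repository's implementation.

```python
# A minimal batch-hard triplet loss sketch (assumption: this is what is meant
# by "triplet loss with hard negative mining"); not code from deep_sort.
import torch

def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 0.3) -> torch.Tensor:
    # Pairwise Euclidean distances between all embeddings in the batch.
    dist = torch.cdist(embeddings, embeddings)                      # (B, B)

    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)            # (B, B) bool
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Hardest positive: farthest sample with the same identity (excluding self).
    pos_dist = dist.masked_fill(~same_id | eye, float("-inf")).max(dim=1).values
    # Hardest negative: closest sample with a different identity.
    neg_dist = dist.masked_fill(same_id, float("inf")).min(dim=1).values

    return torch.relu(pos_dist - neg_dist + margin).mean()
```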
-
Hello, I cloned your repository and ran the example script on Llama 2 7B with 4-bit quantization; below is the command I ran:
The only thing I changed from the example script is to use fake quantizat…
-
Hey, congratulations on your excellent and creative work.
While reading the implementation code here, I am quite confused about [SampledSoftmaxLoss](https://github.com/facebookresearch/generative-recomme…
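For concreteness, my current mental model of a sampled softmax loss is the sketch below, where each positive item competes only against K sampled negatives instead of the full item catalog. The function name, shapes, and temperature parameter are my own assumptions, not taken from the repository.

```python
# A minimal sampled-softmax sketch (my assumption of the idea, not the
# repository's SampledSoftmaxLoss implementation).
import torch
import torch.nn.functional as F

def sampled_softmax_loss(user_emb: torch.Tensor,      # (B, D)
                         pos_item_emb: torch.Tensor,  # (B, D)
                         neg_item_emb: torch.Tensor,  # (B, K, D) sampled negatives
                         temperature: float = 0.05) -> torch.Tensor:
    pos_logit = (user_emb * pos_item_emb).sum(-1, keepdim=True)        # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", user_emb, neg_item_emb)    # (B, K)
    logits = torch.cat([pos_logit, neg_logits], dim=-1) / temperature  # (B, 1+K)
    # The positive item is always at index 0 of the concatenated logits.
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)
```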
-
I am a complete beginner in this field. What I have understood is that features extracted from person images are compared using a cosine distance metric, and ranking is done based on that distance. But if I have s…
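For what it's worth, here is the ranking step as I currently understand it. This is a small illustrative sketch with made-up names, not code from this repository.

```python
# Sketch of cosine-distance ranking of a gallery against one query (illustrative).
import numpy as np

def rank_gallery(query_feat: np.ndarray, gallery_feats: np.ndarray) -> np.ndarray:
    # L2-normalize so the dot product equals cosine similarity.
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    cosine_dist = 1.0 - g @ q          # smaller = more similar
    return np.argsort(cosine_dist)     # gallery indices, best match first
```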
-
What would the classification accuracy score look like on the training set? Let's say we had around 5-6 million images and around 90K classes (assuming the training loss objective is some variant of…
-
How can I solve this problem? Does it really need so much memory?
Traceback (most recent call last):
  File "main.py", line 33, in <module>
    main()
  File "main.py", line 27, in main
    t.train()
…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
- `llamafactory` version: 0.9.1.dev0
- Platform: Linux-5.15.120.bsk.2-amd64-x86_64-with-glibc2.31
…