xdobetter opened 1 month ago
There are several potential reasons. Since the GPT-4V output (gpt4v_simple.json) is not deterministic, the GroundedSAM segmentation results can vary. Check the results after running island_all.py. If there aren't enough face pictures, the identity of the final 3D face may not be well preserved. If this occurs, try adjusting TEXT_THRESHOLD and BOX_THRESHOLD at grounding_dino_sam.py#L308 to ensure that more faces are segmented and the other garment parts are well maintained.

I understand, thank you very much. 1: So, if I want to reproduce your results, I can skip Step 0 and use your default preprocessed data, right? 2: I find the two parameters https://github.com/YuliangXiu/PuzzleAvatar/blob/e1d46171f3f8cdada56b22d556f2cdfc291f446f/multi_concepts/grounding_dino_sam.py#L308-L309 very difficult to tune for the face. Can you give me some suggestions?
Yes, you can skip Step 0 and use my provided preprocessed data. There are many discussions of these parameters in the GroundedSAM issues, for example https://github.com/IDEA-Research/Grounded-Segment-Anything/issues/340
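For readers trying to tune these two values, a minimal sketch of a grid sweep using GroundingDINO's standard inference helpers is shown below. The config, checkpoint, and image paths and the caption are placeholders, not the exact values PuzzleAvatar uses; the idea is just to see how many "face" detections survive at each threshold pair before committing to one setting:

```python
# Hypothetical sweep over the two GroundedSAM thresholds; `load_model`,
# `load_image`, and `predict` are GroundingDINO's stock inference helpers.
from itertools import product
from groundingdino.util.inference import load_model, load_image, predict

# Placeholder paths -- point these at your own config/checkpoint/image.
model = load_model("GroundingDINO_SwinT_OGC.py", "groundingdino_swint_ogc.pth")
image_source, image = load_image("data/yuliang/image/000.jpg")

for box_t, text_t in product([0.20, 0.25, 0.30], [0.15, 0.20, 0.25]):
    boxes, logits, phrases = predict(
        model=model,
        image=image,
        caption="face . haircut . garment",  # GroundedSAM-style " . " prompt
        box_threshold=box_t,
        text_threshold=text_t,
    )
    n_face = sum("face" in p for p in phrases)
    print(f"BOX={box_t:.2f} TEXT={text_t:.2f} -> {len(boxes)} boxes, {n_face} face")
```

Lowering the thresholds keeps more (but noisier) detections; raising them prunes weak matches at the risk of dropping the face entirely.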
I have a question about multi-concept DreamBooth training. I found that the training time differs considerably between datasets, e.g. yuliang and yamei. Do you know if this is normal? I trained each of them individually on a single A6000.
This is not normal; if both experiments share the same hyperparameters, you should check the dataloader.
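As a quick diagnostic, one could compare how many preprocessed images each subject feeds into the dataloader, since a larger dataset means more steps per epoch and a longer wall-clock time. The directory layout below is an assumption, not the repository's guaranteed structure; adjust it to your local setup:

```python
# A minimal sketch, assuming each subject's preprocessed crops live under
# data/<subject>/image (hypothetical layout -- adjust to your checkout).
from pathlib import Path

for subject in ["yuliang", "yamei"]:
    root = Path("data") / subject / "image"
    imgs = [p for p in root.glob("*") if p.suffix.lower() in {".jpg", ".png"}]
    # If these counts differ a lot, DreamBooth's per-epoch step count --
    # and therefore the total training time -- will differ too.
    print(f"{subject}: {len(imgs)} training images")
```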
I used the data you provided ("yuliang"), but the results I got from running the program were quite different from those shown in the paper. Is the main reason for this the multi-concept DreamBooth? If so, how can I adjust it to reduce the deviation?
https://github.com/user-attachments/assets/e109a5a7-e06d-454a-a4dc-9026acefa427