Closed — zl2048 closed this issue 9 months ago
Hello, we attempted to train the cross-modal teacher collaboratively through prompt learning (Visual Prompt Tuning), and to initialize the encoder's global query with a downsampled global point cloud; however, neither approach yielded significant performance improvements. We subsequently cleaned the unused portions out of our codebase.
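For readers unfamiliar with the first idea: Visual Prompt Tuning, in its shallow form, prepends a small set of learnable prompt tokens to the (usually frozen) encoder's input token sequence. A minimal NumPy sketch of the shape bookkeeping only (all names are hypothetical and not from this repo; in practice the prompts would be trainable parameters in a deep-learning framework):

```python
import numpy as np

def prepend_prompts(tokens, prompts):
    """Prepend shared learnable prompt tokens to each sequence in a batch.

    tokens:  (batch, seq_len, dim)  -- patch/point token embeddings
    prompts: (num_prompts, dim)     -- prompt tokens shared across the batch
    returns: (batch, num_prompts + seq_len, dim)
    """
    batch = tokens.shape[0]
    # Broadcast the shared prompts to every item in the batch, then
    # concatenate them in front of the original token sequence.
    tiled = np.broadcast_to(prompts, (batch,) + prompts.shape)
    return np.concatenate([tiled, tokens], axis=1)

x = np.zeros((4, 128, 384))   # e.g. 128 point tokens of dim 384
p = np.random.randn(10, 384)  # 10 hypothetical prompt tokens
out = prepend_prompts(x, p)
print(out.shape)              # (4, 138, 384)
```

Only the prompt tokens (and typically a task head) would receive gradients during tuning; the encoder weights stay frozen.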
I see, thank you.
In the logs shared on Google Drive, there are some args and configs that cannot be found in the released code, e.g. "args.pretrain_prompt : False" and "config.model.cls_sample : 256" in hardest_90_63.log, and "config.model.cls_embeding : False" in objbg_95_18.log. What are they? Are these args and configs related to the final results?