Dear Zhao,
You can find more detailed instructions at https://github.com/MinghanLi/UniVS/blob/main/datasets/README.md. Thanks.
Best, Minghan
zhao @.***> wrote on Thursday, May 16, 2024, at 17:07:
Again, I would like to ask about cfg.MODEL.UniVS.CLIP_CLASS_EMBED_PATH = 'datasets/concept_emb/combined_datasets_cls_emb_rn50x4.pth'. Where should I download the weight file referenced here? Thank you very much!
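For anyone landing here later: once the embedding file is downloaded or generated per the README linked above, a quick sanity check can confirm it loads. This is a minimal sketch; the internal structure of the .pth file is defined by UniVS's preprocessing scripts, so the type checks below are deliberately generic.

```python
import torch

# Path taken from cfg.MODEL.UniVS.CLIP_CLASS_EMBED_PATH in this thread.
path = "datasets/concept_emb/combined_datasets_cls_emb_rn50x4.pth"

# Load on CPU so the check works without a GPU.
emb = torch.load(path, map_location="cpu")

# The exact contents are produced by the UniVS preprocessing scripts;
# this only confirms the file deserializes and reports what it holds.
if torch.is_tensor(emb):
    print("tensor:", tuple(emb.shape), emb.dtype)
elif isinstance(emb, dict):
    for key, value in emb.items():
        shape = tuple(value.shape) if torch.is_tensor(value) else type(value).__name__
        print(key, "->", shape)
else:
    print(type(emb))
```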
Thanks to the author for the reply, and I wish you all the best in your work!
Thanks for your attention.
Stage1 is trained on two machines, each equipped with eight V100 or A100 GPUs, and the code supports PyTorch's native multi-node, multi-GPU parallelism. If you want to train Stage1, you simply need to modify the Stage2 launch script located in the tools directory. Here are the steps:
Step 1: Configure the parameters for multi-node and multi-GPU parallelism. For detailed instructions, refer to Detectron2's launch script documentation (a minimal launch sketch follows these steps).
Step 2: In that script, change --config-file configs/univs/univs_r50_stage2.yaml to --config-file configs/univs/univs_r50_stage1.yaml.
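For concreteness, here is a minimal sketch of the launch side, assuming the repo's training entry point follows Detectron2's standard train_net.py pattern. The flag names are Detectron2's defaults; the actual script name and main() body come from the UniVS repo.

```python
from detectron2.engine import default_argument_parser, launch

def main(args):
    # Stand-in for the repo's actual training logic
    # (config setup + trainer.train() in the UniVS training script).
    print(f"rank {args.machine_rank}: training with {args.config_file}")

if __name__ == "__main__":
    args = default_argument_parser().parse_args()
    # Two machines x eight GPUs, as described above. Run this on every
    # node with the same --dist-url (e.g., tcp://<master-ip>:<port>),
    # plus --num-gpus 8, --num-machines 2, a per-node --machine-rank
    # (0 or 1), and --config-file configs/univs/univs_r50_stage1.yaml.
    launch(
        main,
        args.num_gpus,
        num_machines=args.num_machines,
        machine_rank=args.machine_rank,
        dist_url=args.dist_url,
        args=(args,),
    )
```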