ZelongZeng / PLCD

[TMM 2022] The official code of the IEEE Transactions on Multimedia paper "Geo-localization via ground-to-satellite cross-view image retrieval"

expected one argument #4

Open ahworld22 opened 5 days ago

ahworld22 commented 5 days ago

Hello, what is this problem and how do I deal with it?

ZelongZeng commented 3 days ago

Hi, you need to pass an argument to --name.
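
For context, "expected one argument" is the standard Python argparse error raised when an option that takes a value is given without one. A minimal sketch, illustrative only (the script name `train.py` is a placeholder, not necessarily the repo's actual entry point):

```python
import argparse

parser = argparse.ArgumentParser()
# --name expects exactly one value (e.g. the experiment/checkpoint name).
parser.add_argument("--name", type=str, help="experiment name, e.g. G2D")

# "python train.py --name"      -> error: argument --name: expected one argument
# "python train.py --name G2D"  -> parses fine, args.name == "G2D"
args = parser.parse_args()
print(args.name)
```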

ahworld22 commented 3 days ago

OK. If I use Windows, can it also be done?


ZelongZeng commented 3 days ago

You only need to enter it in the command line. Please refer to the corresponding section in the "README" for the specific command.

ahworld22 commented 3 days ago

> You only need to enter it in the command line. Please refer to the corresponding section in the "README" for the specific command.

I don't know what I should input with --name.

ahworld22 commented 3 days ago

I downloaded your models and put them into the folder, then I input the name G2D and it runs! But it says:

> CUDA out of memory. Tried to allocate 28.00 MiB (GPU 0; 8.00 GiB total capacity; 12.49 GiB already allocated; 0 bytes free; 12.55 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I want to know whether I must use apex, and if I do, must I install it with the --cuda_ext/--cpp_ext extensions?
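
Side note on the error above: `max_split_size_mb` is a setting of PyTorch's caching allocator, exposed through the `PYTORCH_CUDA_ALLOC_CONF` environment variable; it mitigates fragmentation but does not add memory. A minimal sketch of setting it, with an arbitrary 128 MB value and a hypothetical `train.py` script name:

```python
import os

# The allocator reads this variable at the first CUDA allocation, so set it
# before any tensors are moved to the GPU. 128 MB is only an example value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# Shell equivalent (hypothetical script name):
#   PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python train.py --name G2D ...

import torch
```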

ZelongZeng commented 3 days ago

You want to train the model, right? If you want to use apex, you can pass --fp16; it saves around 50% of the memory but slightly reduces the results. Another approach is to use multiple GPUs for training: search for the commented-out "nn.DataParallel" code in the relevant .py files, uncomment it, and then use the --gpu_ids command-line argument to select the desired GPUs. Of course, I would highly recommend modifying the code to use PyTorch DDP for multi-GPU training (I haven't implemented PyTorch DDP in this project).
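
A rough sketch of the nn.DataParallel route described above; the helper name and the way the --gpu_ids string is parsed are assumptions for illustration, not the repo's actual code (the recommended DDP rewrite is a larger change and is not shown here):

```python
import torch.nn as nn

def wrap_for_multi_gpu(model: nn.Module, gpu_ids: str) -> nn.Module:
    """Wrap a model in nn.DataParallel based on a --gpu_ids style string like "0,1"."""
    ids = [int(i) for i in gpu_ids.split(",") if i.strip()]
    model = model.cuda(ids[0])  # parameters must live on the first listed device
    if len(ids) > 1:
        model = nn.DataParallel(model, device_ids=ids)  # replicate across the others
    return model

# Usage (placeholder names): model = wrap_for_multi_gpu(build_model(), opt.gpu_ids)
```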

ahworld22 commented 2 days ago

OK, thanks a lot, I will try it. If I use apex, must I install it with the --cuda_ext and --cpp_ext options, or do I only need to install apex itself?


ZelongZeng commented 2 days ago

Hi, normally, these two options are selected during the installation of Apex, but I’m not sure if they are mandatory requirements. Please refer to the official Apex documentation for details.

ahworld22 commented 1 day ago

OK, because I'm using Windows it seems difficult to install apex with those two options. I think maybe I will try PyTorch DDP. Thank you!
