Open yusirhhh opened 7 months ago
Try setting (in XGX.yaml)
NUM_ENVIRONMENTS: 1
to
NUM_ENVIRONMENTS: 2
It should still train, but it will be much slower.
Thank you for your response. Could you provide instructions for training the model from scratch, covering both IL pretraining and RL fine-tuning? I'd like to reproduce the results.
Regarding training time: how long did training take on 64 GPUs?
The details of IL training and RL fine-tuning can be found in habitat-web and PIRLNav, respectively.
It took me about 24 hours to train.
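For a rough sense of what the reported 24 hours on 64 GPUs implies for a smaller machine, here is a back-of-the-envelope estimate. This is a hypothetical sketch, not a measurement: it assumes near-linear scaling with GPU count, which is optimistic in practice (dataloading, simulator throughput, and communication overhead all break linearity).

```python
# Hypothetical linear-scaling estimate of training time on fewer GPUs.
# Reference numbers come from the comment above (24 hours on 64 GPUs);
# the 8-GPU figure matches the A6000 server mentioned in this thread.
reference_gpus = 64
reference_hours = 24
available_gpus = 8

# Naive assumption: total GPU-hours stay constant.
estimated_hours = reference_hours * reference_gpus / available_gpus
print(f"~{estimated_hours:.0f} hours (~{estimated_hours / 24:.0f} days) "
      f"on {available_gpus} GPUs")
# → ~192 hours (~8 days) on 8 GPUs
```

Treat this as a lower bound on wall-clock time when reproducing on 8 GPUs.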
Thank you for your great work. I'm interested in reproducing your results.
However, I encountered an issue regarding:
Could you provide some suggestions?
I noticed in your paper that you used 64 GPUs to train the model, but I only have a server with 8 A6000 GPUs. Can I train the model with 8 A6000 GPUs?