Open · aman1b opened 1 month ago
Can anyone help here? How can I use DDP with the optimize_hyperparameters function?
Potentially related to the Windows failures reported here: https://github.com/jdb78/pytorch-forecasting/issues/1623
Can you kindly paste the full output of `pip list` from your Python environment, and also let us know what your operating system and Python version are?
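For reference, a generic snippet (not part of the original reply) that prints the requested details to paste alongside the `pip list` output:

```python
# Generic environment report, an illustrative addition rather than part of this thread.
import platform
import sys

print("Python version:", sys.version)
print("Operating system:", platform.platform())
```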
Hi community,
I have been stuck on this issue for some time now and would greatly appreciate any help! I am trying to run the optimize_hyperparameters function across 2 A100 GPUs using the PyTorch DDP strategy.
When I run this I get the following error: `RuntimeError: DDP expects same model across all ranks, but Rank 0 has 160 params, while rank 1 has inconsistent 137 params.`
I have tried setting the seed across ranks, but no luck. Has anyone experienced this issue, or does anyone have an example of using this function to train a TFT with DDP?
I am using the latest package versions and training on an Azure VM. Everything runs once I trigger the train_model function.
```python
def prepare_data(data_prep_folder):
    ...


def hyperparameter_tuner(train_dataloader, val_dataloader, model_train_folder):
    # Start time
    ...
```
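For what it's worth, here is a minimal sketch of how a hyperparameter_tuner like the one above is typically wired up, requesting DDP through trainer_kwargs. This is an illustrative assumption rather than the poster's actual code: the seed, trial count, ranges, and device settings are made up, and the keyword names follow the pytorch-forecasting tuning API, which can differ slightly between versions.

```python
# Hypothetical sketch, not the code from this issue: tuning a TFT with
# pytorch-forecasting's optimize_hyperparameters while asking Lightning for DDP.
import lightning.pytorch as pl  # older stacks use `import pytorch_lightning as pl`
from pytorch_forecasting.models.temporal_fusion_transformer.tuning import (
    optimize_hyperparameters,
)


def hyperparameter_tuner(train_dataloader, val_dataloader, model_train_folder):
    # Seed everything so each DDP rank constructs an identical model.
    pl.seed_everything(42, workers=True)

    study = optimize_hyperparameters(
        train_dataloader,
        val_dataloader,
        model_path=model_train_folder,
        n_trials=20,                     # illustrative value
        max_epochs=10,                   # illustrative value
        hidden_size_range=(16, 128),     # illustrative range
        use_learning_rate_finder=False,  # the LR finder tends not to play well with DDP
        trainer_kwargs=dict(
            accelerator="gpu",
            devices=2,
            strategy="ddp",              # the DDP request is passed through to the Trainer here
            limit_train_batches=50,
        ),
    )
    return study
```

Worth checking: the 160 vs. 137 parameter mismatch in the error usually means the two ranks built differently sized models, so whatever determines the sampled hyperparameters (the seed, or the Optuna trial itself) has to be identical on every rank.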