jdb78 / pytorch-forecasting

Time series forecasting with PyTorch
https://pytorch-forecasting.readthedocs.io/
MIT License

problem with multigpu optuna #587

Open milkisbad opened 3 years ago

milkisbad commented 3 years ago

Expected behavior

I executed the following code

study = optimize_hyperparameters(
    train_dataloader,
    val_dataloader,
    model_path="optuna_test",
    n_trials=200,
    max_epochs=50,
    gradient_clip_val_range=(0.01, 1.0),
    hidden_size_range=(8, 128),
    hidden_continuous_size_range=(8, 128),
    attention_head_size_range=(1, 4),
    learning_rate_range=(0.001, 0.1),
    dropout_range=(0.1, 0.3),
#     trainer_kwargs=dict(limit_train_batches=30), # original example
    trainer_kwargs=dict(
                    limit_train_batches=30,
                    gpus=2,
                    accelerator='ddp'
                   ), # my trainer_kwargs
    reduce_on_plateau_patience=4,
    use_learning_rate_finder=False,  # use Optuna to find ideal learning rate or use in-built learning rate finder
    verbose=2,
)

in order to run hyperparameter optimization on multiple GPUs.

Actual behavior

However, the result was

RuntimeError: replicas[0][4] in this process with sizes [49, 1] appears not to match sizes of the same param in process 0.

The weird part is that the trainer itself works fine with accelerator='ddp' and gpus=2, and so does optimize_hyperparameters (from the code example) with the same code but with gpus=1 instead of 2. One thing to note is that the p in replicas[0][p] and the a and b in sizes [a, b] change between runs. I do think the cause lies in optuna/pytorch_lightning, but I am still looking for help here ;) For reference, the working direct-Trainer setup is sketched below.
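A minimal sketch of that direct (non-Optuna) run, assuming the training dataset and dataloaders defined in the reproduction script further down; the TFT hyperparameters here are illustrative placeholders rather than my exact values:

import pytorch_lightning as pl
from pytorch_forecasting import TemporalFusionTransformer
from pytorch_forecasting.metrics import QuantileLoss

# same multi-GPU settings that fail inside optimize_hyperparameters
trainer = pl.Trainer(
    max_epochs=50,
    gpus=2,
    accelerator="ddp",
    gradient_clip_val=0.1,
    limit_train_batches=30,
)

# illustrative hyperparameters; `training` is the TimeSeriesDataSet from the script below
tft = TemporalFusionTransformer.from_dataset(
    training,
    learning_rate=0.03,
    hidden_size=16,
    attention_head_size=1,
    dropout=0.1,
    hidden_continuous_size=8,
    loss=QuantileLoss(),
)

# this direct fit runs on both GPUs without the replica-size mismatch
trainer.fit(tft, train_dataloader=train_dataloader, val_dataloaders=val_dataloader)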

Code to reproduce the problem

The code is from the TFT tutorial; I only added CUDA_VISIBLE_DEVICES and changed trainer_kwargs in optimize_hyperparameters.

import sys
print(sys.version)

import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"   # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0,1"

import warnings

warnings.filterwarnings("ignore")

import copy
from pathlib import Path
import warnings

import numpy as np
import pandas as pd
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
from pytorch_lightning.loggers import TensorBoardLogger
import torch

from pytorch_forecasting import Baseline, TemporalFusionTransformer, TimeSeriesDataSet
from pytorch_forecasting.data import GroupNormalizer
from pytorch_forecasting.metrics import SMAPE, PoissonLoss, QuantileLoss
from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters

import optuna

from pytorch_forecasting.data.examples import get_stallion_data

data = get_stallion_data()

# add time index
data["time_idx"] = data["date"].dt.year * 12 + data["date"].dt.month
data["time_idx"] -= data["time_idx"].min()

# add additional features
data["month"] = data.date.dt.month.astype(str).astype("category")  # categories have be strings
data["log_volume"] = np.log(data.volume + 1e-8)
data["avg_volume_by_sku"] = data.groupby(["time_idx", "sku"], observed=True).volume.transform("mean")
data["avg_volume_by_agency"] = data.groupby(["time_idx", "agency"], observed=True).volume.transform("mean")

# we want to encode special days as one variable and thus need to first reverse one-hot encoding
special_days = [
    "easter_day",
    "good_friday",
    "new_year",
    "christmas",
    "labor_day",
    "independence_day",
    "revolution_day_memorial",
    "regional_games",
    "fifa_u_17_world_cup",
    "football_gold_cup",
    "beer_capital",
    "music_fest",
]
data[special_days] = data[special_days].apply(lambda x: x.map({0: "-", 1: x.name})).astype("category")

max_prediction_length = 6
max_encoder_length = 24
training_cutoff = data["time_idx"].max() - max_prediction_length

training = TimeSeriesDataSet(
    data[lambda x: x.time_idx <= training_cutoff],
    time_idx="time_idx",
    target="volume",
    group_ids=["agency", "sku"],
    min_encoder_length=max_encoder_length // 2,  # keep encoder length long (as it is in the validation set)
    max_encoder_length=max_encoder_length,
    min_prediction_length=1,
    max_prediction_length=max_prediction_length,
    static_categoricals=["agency", "sku"],
    static_reals=["avg_population_2017", "avg_yearly_household_income_2017"],
    time_varying_known_categoricals=["special_days", "month"],
    variable_groups={"special_days": special_days},  # group of categorical variables can be treated as one variable
    time_varying_known_reals=["time_idx", "price_regular", "discount_in_percent"],
    time_varying_unknown_categoricals=[],
    time_varying_unknown_reals=[
        "volume",
        "log_volume",
        "industry_volume",
        "soda_volume",
        "avg_max_temp",
        "avg_volume_by_agency",
        "avg_volume_by_sku",
    ],
    target_normalizer=GroupNormalizer(
        groups=["agency", "sku"], transformation="softplus"
    ),  # use softplus and normalize by group
    add_relative_time_idx=True,
    add_target_scales=True,
    add_encoder_length=True,
)

# create validation set (predict=True) which means to predict the last max_prediction_length points in time
# for each series
validation = TimeSeriesDataSet.from_dataset(training, data, predict=True, stop_randomization=True)

# create dataloaders for model
batch_size = 128  # set this between 32 and 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=4)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size * 10, num_workers=4)

# Hyperparameter tuning

# Hyperparameter tuning with Optuna is directly built into pytorch-forecasting. For example:

import pickle

from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters

# create study
study = optimize_hyperparameters(
    train_dataloader,
    val_dataloader,
    model_path="optuna_test",
    n_trials=200,
    max_epochs=50,
    gradient_clip_val_range=(0.01, 1.0),
    hidden_size_range=(8, 128),
    hidden_continuous_size_range=(8, 128),
    attention_head_size_range=(1, 4),
    learning_rate_range=(0.001, 0.1),
    dropout_range=(0.1, 0.3),
#     trainer_kwargs=dict(limit_train_batches=30),
    trainer_kwargs=dict(
#                     fast_dev_run=True,
                    limit_train_batches=30,
                    gpus=2,
                    accelerator='ddp'
                   ),
    reduce_on_plateau_patience=4,
    use_learning_rate_finder=False,  # use Optuna to find ideal learning rate or use in-built learning rate finder
    verbose=2,
)

Running this produces:

Traceback (most recent call last):
  File "../pytorch_munimal_optuna.py", line 117, in <module>
    study = optimize_hyperparameters(
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py", line 215, in optimize_hyperparameters
    study.optimize(objective, n_trials=n_trials, timeout=timeout)
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/optuna/study.py", line 401, in optimize
    _optimize(
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/optuna/_optimize.py", line 65, in _optimize
    _optimize_sequential(
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/optuna/_optimize.py", line 162, in _optimize_sequential
    trial = _run_trial(study, func, catch)
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/optuna/_optimize.py", line 267, in _run_trial
    raise func_err
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/optuna/_optimize.py", line 216, in _run_trial
    value_or_values = func(trial)
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py", line 206, in objective
    trainer.fit(model, train_dataloader=train_dataloader, val_dataloaders=val_dataloader)
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
    self._run(model)
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 753, in _run
    self.pre_dispatch()
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 778, in pre_dispatch
    self.accelerator.pre_dispatch(self)
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 108, in pre_dispatch
    self.training_type_plugin.pre_dispatch()
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 277, in pre_dispatch
    self.configure_ddp()
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 249, in configure_ddp
    self._model = DistributedDataParallel(
  File ".../anaconda3/envs/pyfore2/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__
    dist._verify_model_across_ranks(self.process_group, parameters)
RuntimeError: replicas[0][4] in this process with sizes [49, 1] appears not to match sizes of the same param in process 0.
jdb78 commented 3 years ago

Is this similar to #441 ?

milkisbad commented 3 years ago

No in the sense that the behaviour is different (no mention of replicas); yes in the sense that Optuna will not work with a trainer accelerated with DDP. I have also tried the example from that issue: there I was able to run multi-GPU with 'ddp_spawn', but not with 'ddp', and I can't run TFT with 'ddp_spawn' at all. A sketch of that 'ddp_spawn' attempt is below.
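For completeness, this is roughly the 'ddp_spawn' variant of the tuning call (a sketch only: the hyperparameter ranges are omitted for brevity, and every other argument matches the reproduction script above):

study = optimize_hyperparameters(
    train_dataloader,
    val_dataloader,
    model_path="optuna_test",
    n_trials=200,
    max_epochs=50,
    trainer_kwargs=dict(
        limit_train_batches=30,
        gpus=2,
        accelerator="ddp_spawn",  # works for the plain Lightning example from #441, but not with TFT here
    ),
    use_learning_rate_finder=False,
    verbose=2,
)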