sktime / pytorch-forecasting

Time series forecasting with PyTorch
https://pytorch-forecasting.readthedocs.io/
MIT License

wrapped() missing 1 required positional argument: 'X' #1246

Open TDL77 opened 1 year ago

TDL77 commented 1 year ago

Expected behavior

I am trying to reproduce the example code from the documentation and I hit the same error in both the Stallion tutorial (https://pytorch-forecasting.readthedocs.io/en/stable/tutorials/stallion.html) and the autoregressive tutorial (https://pytorch-forecasting.readthedocs.io/en/stable/tutorials/ar.html).

Actual behavior

The error appears when constructing the dataset with training = TimeSeriesDataSet(...):

TypeError: wrapped() missing 1 required positional argument: 'X'


Code to reproduce the problem

from pytorch_forecasting import TimeSeriesDataSet
from pytorch_forecasting.data import GroupNormalizer

# `data` is the Stallion dataframe prepared as in the tutorial
training_cutoff = data["time_idx"].max() - 6
max_encoder_length = 36
max_prediction_length = 6

training = TimeSeriesDataSet(
    data[lambda x: x.time_idx <= training_cutoff],
    time_idx="time_idx",
    target="volume",
    group_ids=["agency", "sku"],
    min_encoder_length=max_encoder_length // 2,  # allow encoder lengths from max_encoder_length // 2 up to max_encoder_length
    max_encoder_length=max_encoder_length,
    min_prediction_length=1,
    max_prediction_length=max_prediction_length,
    static_categoricals=["agency", "sku"],
    static_reals=["avg_population_2017", "avg_yearly_household_income_2017"],
    time_varying_known_categoricals=["month"],
    # time_varying_known_categoricals=["special_days", "month"],
    # variable_groups={"special_days": special_days},  # group of categorical variables can be treated as one variable
    time_varying_known_reals=["time_idx", "price_regular", "discount_in_percent"],
    time_varying_unknown_categoricals=[],
    time_varying_unknown_reals=[
        "volume",
        "log_volume",
        "industry_volume",
        "soda_volume",
        "avg_max_temp",
        "avg_volume_by_agency",
        "avg_volume_by_sku",
    ],
    target_normalizer=GroupNormalizer(
        groups=["agency", "sku"], transformation="softplus", center=False
    ),  # use softplus with beta=1.0 and normalize by group
    add_relative_time_idx=True,
    add_target_scales=True,
    add_encoder_length=True,
)

I installed everything via conda, e.g. conda install -c conda-forge pytorch-lightning

akbism commented 1 year ago

I am facing the same issue.

mmarti92 commented 1 year ago

Same here.

It was working a few days ago.

PyTorch-Forecasting version: 0.10.2 PyTorch version: 1.12.1 Python version: 3.9.16 Operating System: Windows 10 x64

samwatsonn commented 1 year ago

Also getting this issue.

PyTorch-Forecasting version: 0.10.2 PyTorch version: 1.13.1 Python version: 3.10.9 Operating System: Windows 11 x64

Baptiste-Biausque commented 1 year ago

Facing the same issue here. It was working fine before I updated my conda packages. I got the full traceback, and it looks like the issue has something to do with scikit-learn (see the third frame of the traceback, in sklearn\utils\_set_output.py). I don't know why, but I am posting it here in case it helps:

C:\ProgramData\Anaconda3\lib\site-packages\pytorch_forecasting\data\timeseries.py in __init__(self, data, time_idx, target, group_ids, weight, max_encoder_length, min_encoder_length, min_prediction_idx, min_prediction_length, max_prediction_length, static_categoricals, static_reals, time_varying_known_categoricals, time_varying_known_reals, time_varying_unknown_categoricals, time_varying_unknown_reals, variable_groups, constant_fill_strategy, allow_missing_timesteps, lags, add_relative_time_idx, add_target_scales, add_encoder_length, target_normalizer, categorical_encoders, scalers, randomize_length, predict_mode)
    474 
    475         # preprocess data
--> 476         data = self._preprocess_data(data)
    477         for target in self.target_names:
    478             assert target not in self.scalers, "Target normalizer is separate and not in scalers."

C:\ProgramData\Anaconda3\lib\site-packages\pytorch_forecasting\data\timeseries.py in _preprocess_data(self, data)
    775 
    776             elif isinstance(self.target_normalizer, GroupNormalizer):
--> 777                 data[self.target], scales = self.target_normalizer.transform(data[self.target], data, return_norm=True)
    778 
    779             elif isinstance(self.target_normalizer, MultiNormalizer):

C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\_set_output.py in wrapped(self, X, *args, **kwargs)
    140     @wraps(f)
    141     def wrapped(self, X, *args, **kwargs):
--> 142         data_to_wrap = f(self, X, *args, **kwargs)
    143         if isinstance(data_to_wrap, tuple):
    144             # only wrap the first output for cross decomposition

C:\ProgramData\Anaconda3\lib\site-packages\pytorch_forecasting\data\encoders.py in transform(self, y, X, return_norm, target_scale)
    913             assert X is not None, "either target_scale or X has to be passed"
    914             target_scale = self.get_norm(X)
--> 915         return super().transform(y=y, return_norm=return_norm, target_scale=target_scale)
    916 
    917     def get_parameters(self, groups: Union[torch.Tensor, list, tuple], group_names: List[str] = None) -> np.ndarray:

TypeError: wrapped() missing 1 required positional argument: 'X'

PyTorch-Forecasting version: 0.10.2 PyTorch version: 1.13.1 Python version: 3.8.13 Operating System: Windows 10 x64
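A minimal, self-contained sketch of how this TypeError can arise (this is an illustration of the pattern, not the actual scikit-learn or pytorch-forecasting code): the traceback shows scikit-learn 1.2 wrapping transform() in a function whose first positional parameter is named X, while GroupNormalizer.transform takes its data as y and is called with keyword arguments only (encoders.py line 915), so the wrapper's X is never supplied.

from functools import wraps

def set_output_style_wrapper(f):
    # stand-in for the sklearn 1.2 _set_output wrapper shown in the traceback,
    # which insists on a positional argument named `X`
    @wraps(f)
    def wrapped(self, X, *args, **kwargs):
        return f(self, X, *args, **kwargs)
    return wrapped

class ToyNormalizer:
    @set_output_style_wrapper
    def transform(self, y, X=None, return_norm=False):
        # GroupNormalizer-style signature: the data arrives as `y`, not `X`
        return y

try:
    # called with keyword arguments only, mirroring the call in encoders.py
    ToyNormalizer().transform(y=[1.0, 2.0, 3.0], return_norm=True)
except TypeError as exc:
    print(exc)  # -> wrapped() missing 1 required positional argument: 'X'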

fingoldo commented 1 year ago

pip install pytorch_forecasting -U helps

mmarti92 commented 1 year ago

Kind of solved. I removed and reinstalled Anaconda, then installed pytorch_forecasting and pytorch_lightning through pip. Installing through conda seems to be the problem in my case. As far as I can tell, scikit-learn wasn't causing the problem.

Baptiste-Biausque commented 1 year ago

It works! Basically it upgrades PyTorch-Forecasting to version 0.10.3 and downgrades scikit-learn to 1.1.3. But if I update scikit-learn back to 1.2.0, it breaks again. I tried downgrading twice and it solved the problem both times, so I think there is an incompatibility issue somewhere.
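If you want to confirm which combination you ended up with after reinstalling, here is a quick check from Python (a sketch using only the standard library; the "known good" pair in the comments is just the one reported in this thread):

from importlib.metadata import version  # stdlib, Python 3.8+

print("scikit-learn:", version("scikit-learn"))                # 1.1.3 or older reported as working here
print("pytorch-forecasting:", version("pytorch-forecasting"))  # 0.10.3 reported as working here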

ywang1-rbi commented 1 year ago

(quoting Baptiste-Biausque's comment and traceback above)

Facing a similar error trace here.

python                    3.10.9               h218abb5_0  
pytorch                   1.13.1                 py3.10_0    pytorch
pytorch-forecasting       0.10.2             pyhd8ed1ab_0    conda-forge
scikit-learn              1.2.0           py310hcec6c5f_0  

Operating System: macOS 13.0.1

EDIT: Works for me after downgrading scikit-learn to 1.0.2

gary1381 commented 1 year ago

Got the same issue. I followed the suggestions from ywang1-rbi and Baptiste-Biausque, and it works for me after downgrading scikit-learn to 1.0.2. I did not reinstall pytorch-forecasting 0.10.2; it works just fine. My OS is Ubuntu 18.04.

Note: you need to restart the kernel in your conda environment for the downgrade to take effect.

sairamtvv commented 1 year ago

Installing with Poetry would be a safer choice.

finnoshea commented 1 year ago

Downgrading scikit-learn from 1.2.2 to 1.0.2 or 1.1.3 works for me as well. It seems like scikit-learn had an API change?

luchungi commented 1 year ago

pytorch-forecasting 0.10.2 requires scikit-learn<1.2,>=0.24
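That constraint can also be read straight off the installed package metadata, which makes it easy to see why a scikit-learn 1.2.x install breaks things. A small sketch using only the standard library:

from importlib.metadata import requires  # stdlib, Python 3.8+

# list pytorch-forecasting's declared dependencies and pick out the scikit-learn pin
for req in requires("pytorch-forecasting") or []:
    if req.startswith("scikit-learn"):
        print(req)  # e.g. scikit-learn<1.2,>=0.24 for version 0.10.2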