Hey @berkeserol, I have two comments. First, this runs well; see if it helps:
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from pytorch_widedeep import Trainer
from pytorch_widedeep.models import TabMlp, Wide, WideDeep # noqa: F401
from pytorch_widedeep.preprocessing import TabPreprocessor, WidePreprocessor
X, y = make_classification(n_samples=100, n_features=10, n_informative=10, n_redundant=0)
X = pd.DataFrame(X, columns=[f"col_{i}" for i in range(10)])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
tab_preprocessor = TabPreprocessor(continuous_cols=X.columns.tolist())
X_train_processed = tab_preprocessor.fit_transform(X_train)
X_test_processed = tab_preprocessor.transform(X_test)
wide_preprocessor = WidePreprocessor(wide_cols=X.columns.tolist())
X_wide = wide_preprocessor.fit_transform(X_train)
# Define the model
tab_mlp = TabMlp(
    column_idx=tab_preprocessor.column_idx,
    continuous_cols=tab_preprocessor.continuous_cols,
    mlp_hidden_dims=[64, 32],
)
wide = Wide(input_dim=np.unique(X_train).shape[0])
model = WideDeep(wide=wide, deeptabular=tab_mlp)
# Define the trainer and train the model
trainer = Trainer(model, objective="binary")
trainer.fit(
    X_tab=X_train_processed,
    X_wide=X_wide,
    target=y_train,
    n_epochs=10,
    batch_size=32,
)
You do not need those .astype(float) casts.
Second, and also important: the wide component is not really designed to work with continuous features. In fact, the WidePreprocessor prepares the data so that everything ends up in a look-up table (dictionary), and the linear layer is implemented as embeddings (have a look here for an explanation). For example, if you access the encoding_dict attribute in the example above you will see this:
>>> wide_preprocessor.encoding_dict
{'col_0_-3.956915512990843': 1,
'col_0_1.3006394509583112': 2,
'col_0_-2.464936360812435': 3,
'col_0_2.4968884101664472': 4,
'col_0_-1.0889744597533615': 5,
'col_0_-1.7765530407990036': 6,
'col_0_-2.9406750132381694': 7,
'col_0_-4.13009658776504': 8,
'col_0_1.3052810320392583': 9,
'col_0_-0.7368640448231503': 10,
'col_0_0.5910531307205757': 11,
'col_0_1.3899029479159295': 12,
'col_0_4.4319842979409145': 13,
'col_0_-1.2444478420364677': 14,
'col_0_1.818636056977371': 15,
...
i.e., one encoding per individual value, per column.
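To make the "linear layer as embeddings" point concrete, here is a minimal standalone PyTorch sketch of the idea (my own illustration, not the library's actual code; the sizes and indices are made up):

import torch
import torch.nn as nn

# A linear layer over one-hot encoded categories is equivalent to an
# embedding lookup followed by a sum: selecting row i of the embedding
# matrix is the same as multiplying a one-hot vector by a weight matrix.
n_tokens = 16  # hypothetical number of distinct "col_value" encodings
emb = nn.Embedding(n_tokens + 1, 1, padding_idx=0)  # index 0 reserved for 'unseen'
bias = torch.zeros(1)

# three rows, two wide columns, already label-encoded by the preprocessor
X_wide = torch.tensor([[1, 9], [2, 11], [3, 15]])

# look up one weight per encoded value, sum across columns, add the bias
out = emb(X_wide).sum(dim=1) + bias
print(out.shape)  # torch.Size([3, 1])

This mirrors the forward pass you can see in wide.py: out = self.wide_linear(X.long()).sum(dim=1) + self.bias.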
Just that. Hope this helps and thanks for opening the issue and trying the library!
Hi @jrzaurin, thank you for the answer. I tried your code exactly as is and it works fine. However, when I change the input dataframe to the file I attached, I get the error I mentioned in the first message. Can you try? widedeep_test_X.csv
OK, so I see what is happening. I will change the example, and here is a solution.
ISSUE:
the same value appears in two different columns of your dataset:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

X = pd.read_csv("~/Desktop/widedeep_test_X.csv")
y = np.random.randint(2, size=X.shape[0])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

# unique elements column by column
print(len(np.hstack([X_train[col].unique() for col in X_train.columns])))
# 90

# unique elements considering the whole df at once
print(np.unique(X_train).shape[0])
# 89
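As a toy illustration of why the two counts can differ (synthetic data, not your attached CSV):

import numpy as np
import pandas as pd

# the value 1.5 appears in both columns
df = pd.DataFrame({"col_1": [1.5, 2.0], "col_2": [1.5, 3.0]})

# per-column uniques: 1.5 is counted once per column -> 4
print(len(np.hstack([df[col].unique() for col in df.columns])))

# global uniques: 1.5 is counted only once -> 3
print(np.unique(df).shape[0])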
Because of the way the linear layer (wide component) is implemented, the number that matters is the first one. As I wrote before, it is implemented as an Embedding layer where each element is an entry in the look-up table, and to that end we prepend the column name to each value. In your example, if we refer to the repeated float as repeated_float, and assume it appears in columns 1 and 2, it will be encoded as:
col_1_repeated_float: encoding_n
...
col_2_repeated_float: encoding_m
Now, if you define your wide model as

wide = Wide(input_dim=np.unique(X_train).shape[0])

we are defining an embedding layer with an input dim of 90 (89 + index 0 left for 'unseen' categories), when in reality we need an embedding layer of 91 (90 + index 0 left for 'unseen' categories), since col_1_repeated_float and col_2_repeated_float should be encoded differently.
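You can reproduce the resulting failure with plain PyTorch (a sketch of the off-by-one, not the library code):

import torch
import torch.nn as nn

# 90 rows -> valid indices are 0..89, but the encodings go up to 90,
# so looking up index 90 falls outside the table
emb = nn.Embedding(90, 1, padding_idx=0)
try:
    emb(torch.tensor([90]))
except IndexError as e:
    print(e)  # index out of range in self

which is exactly the IndexError: index out of range in self you reported.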
SOLUTION:
It is safer to define the wide model as

wide = Wide(input_dim=len(wide_preprocessor.encoding_dict))

Then this code should run:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from pytorch_widedeep import Trainer
from pytorch_widedeep.models import TabMlp, Wide, WideDeep # noqa: F401
from pytorch_widedeep.preprocessing import TabPreprocessor, WidePreprocessor
X = pd.read_csv("~/Desktop/widedeep_test_X.csv")
y = np.random.randint(2, size=X.shape[0])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
tab_preprocessor = TabPreprocessor(continuous_cols=X.columns.tolist())
X_train_processed = tab_preprocessor.fit_transform(X_train)
wide_preprocessor = WidePreprocessor(wide_cols=X.columns.tolist())
X_wide = wide_preprocessor.fit_transform(X_train)
# Define the model
tab_mlp = TabMlp(
    column_idx=tab_preprocessor.column_idx,
    continuous_cols=tab_preprocessor.continuous_cols,
    mlp_hidden_dims=[64, 32],
)
wide = Wide(input_dim=len(wide_preprocessor.encoding_dict))
model = WideDeep(wide=wide, deeptabular=tab_mlp)
# Define the trainer and train the model
trainer = Trainer(model, objective="binary")
trainer.fit(
    X_tab=X_train_processed, X_wide=X_wide, target=y_train, n_epochs=1, batch_size=2
)
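If you also want to score the held-out split, the already-fitted preprocessors can transform it (a sketch continuing from the code above; trainer.predict is the Trainer's inference method, and unseen values are mapped to the reserved index 0, as mentioned before):

# transform the test split with the preprocessors fitted on X_train
X_test_processed = tab_preprocessor.transform(X_test)
X_wide_test = wide_preprocessor.transform(X_test)

preds = trainer.predict(X_wide=X_wide_test, X_tab=X_test_processed)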
It is solved. Thanks
I have this code giving the following error. I cannot share the data, but it contains all continuous float columns and the target is binary:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\Scripts\python.exe C:/Users/berke/PycharmProjects/pythonProjectDeneme/widedeep.py
C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\pytorch_widedeep\preprocessing\tab_preprocessor.py:343: UserWarning: Continuous columns will not be normalised
  warnings.warn("Continuous columns will not be normalised")
epoch 1: 0%| | 0/1251 [01:32<?, ?it/s]
Traceback (most recent call last):
  File "C:/Users/berke/PycharmProjects/pythonProjectDeneme/widedeep.py", line 44, in <module>
    trainer.fit(
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\pytorch_widedeep\utils\general_utils.py", line 61, in __call__
    return wrapped(*args, **kwargs)
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\pytorch_widedeep\training\trainer.py", line 492, in fit
    train_score, train_loss = self._train_step(
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\pytorch_widedeep\training\trainer.py", line 938, in _train_step
    y_pred = self.model(X)
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\pytorch_widedeep\models\wide_deep.py", line 224, in forward
    wide_out = self._forward_wide(X)
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\pytorch_widedeep\models\wide_deep.py", line 330, in _forward_wide
    out = self.wide(X["wide"])
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\pytorch_widedeep\models\tabular\linear\wide.py", line 67, in forward
    out = self.wide_linear(X.long()).sum(dim=1) + self.bias
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "C:\Users\berke\PycharmProjects\pythonProjectDeneme\venv\lib\site-packages\torch\nn\functional.py", line 2233, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self

Process finished with exit code 1