-
### Description
I guess an actual TFT can handle categorical features with embeddings. Does nixtla have something similar, or does it always need to be done manually with label encodings?
### Use case
I…
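For context, the manual route the question refers to is typically an ordinal/label encoding applied before fitting. A minimal sketch with sklearn's `OrdinalEncoder` (the column names and frame layout here are illustrative, not a required NeuralForecast schema):

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

# Long-format frame with one categorical exogenous column (illustrative names)
df = pd.DataFrame({
    "unique_id": ["A", "A", "B", "B"],
    "ds": pd.to_datetime(["2024-01-01", "2024-01-02"] * 2),
    "y": [1.0, 2.0, 3.0, 4.0],
    "store_type": ["mall", "street", "street", "mall"],
})

# Encode the categorical column to integer codes before passing it to the model
enc = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
df["store_type"] = enc.fit_transform(df[["store_type"]]).ravel().astype(int)
print(df["store_type"].tolist())  # [0, 1, 1, 0]: mall -> 0, street -> 1
```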
-
**Description**:
I encountered an error during the sampling phase using the `tabsyn` method. The issue seems to be related to missing numerical features during preprocessing.
### Steps to Reprod…
-
### What happened + What you expected to happen
According to the [Exogenous Variables Tutorial](https://nixtlaverse.nixtla.io/neuralforecast/examples/exogenous_variables.html):
```
When including e…
-
```python
# Restore the originals to avoid having to rerun the whole notebook
X = original_X
X_test = original_X_test
# Initialize ordinal encoder
ordinal_encoder = OrdinalEncoder(handle_unknown='use_encoded_value…
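# --- A hedged, self-contained sketch of the step above (not the notebook's
# actual code): OrdinalEncoder with handle_unknown='use_encoded_value' maps
# categories seen at fit time to integer codes and any unseen test category
# to the sentinel unknown_value. Frame and column names are illustrative.
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

train = pd.DataFrame({"cat": ["a", "b", "a"]})
test = pd.DataFrame({"cat": ["b", "c"]})   # "c" was never seen during fit

enc = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
train["cat"] = enc.fit_transform(train[["cat"]]).ravel()
test["cat"] = enc.transform(test[["cat"]]).ravel()
# test["cat"] is now [1.0, -1.0]: the unseen "c" became the sentinel -1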
-
LightGBM:
- Efficiency: LightGBM is highly efficient and can handle large datasets with fast training times.
- Accuracy: It often provides better accuracy compared to other gradient b…
-
Hi there,
I'm looking to use xgboost as my nuisance model in my DoubleML setup and to use xgboost's own mechanism for encoding categorical features (rather than having to one-hot encode them myself).
…
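For reference, xgboost's native categorical path (in recent versions) keys on pandas `category`-dtype columns together with `enable_categorical=True` and a histogram-based tree method. A sketch of the data-preparation side using pandas only; the estimator call is left as a comment because the exact settings depend on your xgboost version:

```python
import pandas as pd

# Mark categorical columns with pandas' category dtype; this is what
# xgboost's native categorical support consumes.
X = pd.DataFrame({
    "color": ["red", "blue", "red", "green"],
    "x1": [0.1, 0.4, 0.2, 0.9],
})
X["color"] = X["color"].astype("category")

print(X["color"].dtype)  # category
# Downstream (illustrative; check your xgboost version's docs):
# model = xgboost.XGBClassifier(enable_categorical=True, tree_method="hist")
# model.fit(X, y)
```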
-
### Describe the workflow you want to enable
I often use random datasets (typically with make_classification). However, I frequently find myself having to add more realistic features to the dataset:
…
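One ad-hoc workaround for the workflow described above: generate the numeric dataset, then quantile-bin a few columns into integer-coded "categories". This is only an illustration of the manual step the request wants to avoid; it does not give the binned columns a genuinely categorical relationship to the target.

```python
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Turn the first two columns into 4-level "categorical" features by
# quantile binning (codes 0..3).
n_categorical = 2
for j in range(n_categorical):
    edges = np.quantile(X[:, j], [0.25, 0.5, 0.75])
    X[:, j] = np.digitize(X[:, j], edges)

print(sorted(np.unique(X[:, 0])))  # [0.0, 1.0, 2.0, 3.0]
```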
-
Hi,
When doing first-phase training over DAE and VIME (using unlabeled data), I got a negative CrossEntropyLoss for the categorical features, which resulted in a negative training and validation loss…
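One common cause of this symptom (an assumption about the setup, not a confirmed diagnosis): if the reconstruction target fed into the cross-entropy term is a standardized continuous value rather than a one-hot/probability vector, negative target entries make the loss sign-indefinite. A pure-NumPy illustration:

```python
import numpy as np

q = np.array([0.7, 0.3])   # predicted class probabilities (a valid distribution)
t = np.array([1.5, -0.5])  # standardized feature values wrongly used as CE targets

# Cross-entropy -sum(t * log q) is only guaranteed non-negative when t is a
# valid probability vector; a negative target entry flips the sign of its term.
loss = -np.sum(t * np.log(q))
print(loss)  # negative, even though q is a proper distribution
```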
-
Categorical features can't be used by sklearn models without some kind of transformation. Because there are a number of different methods for ([reference](https://en.wikipedia.org/wiki/Categorical_var…
-
`ranger.unify` and `gbm.unify` don't support categorical features; such support is essential when working with `factors` in R.