Hi, thank you for sharing. After reading your code, I have some questions.
During feature transformation, do the dense features need to be normalized? Deep networks are said to be sensitive to the scale of dense inputs.
And for the sparse features, why use a one-hot encoder instead of a label encoder?
Feature normalization is highly dataset-dependent. For the Census Income example I just stuck with one-hot encoding for brevity, and the paper doesn't really clarify its feature engineering approach.
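For reference, a minimal sketch of both steps with scikit-learn: scaling the dense columns and one-hot encoding the categorical ones in a single `ColumnTransformer`. Column names here are illustrative placeholders loosely based on the Census Income dataset, not the repo's actual preprocessing code.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

# Assumed (hypothetical) column split for the Census Income data.
dense_cols = ["age", "hours_per_week"]
sparse_cols = ["workclass", "education"]

df = pd.DataFrame({
    "age": [25, 38, 52],
    "hours_per_week": [40, 50, 30],
    "workclass": ["Private", "Self-emp", "Private"],
    "education": ["Bachelors", "HS-grad", "Masters"],
})

preprocess = ColumnTransformer([
    # Zero-mean / unit-variance scaling keeps dense inputs in a range
    # that deep networks tend to optimize more stably on.
    ("dense", StandardScaler(), dense_cols),
    # One-hot (rather than label encoding) avoids implying a spurious
    # ordering: label-encoded integers would be treated as magnitudes
    # by the network.
    ("sparse", OneHotEncoder(handle_unknown="ignore"), sparse_cols),
])

X = preprocess.fit_transform(df)
print(X.shape)  # (3, 7): 2 scaled dense + 5 one-hot columns
```

Whether scaling helps in practice is worth checking empirically on your dataset; for tree models it usually doesn't matter, but for the deep part of a model it often does.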