I encountered a problem, and the error was reported as follows:
Traceback (most recent call last):
File "/Users/huxinlei/PycharmProjects/dfencoder/markting.py", line 34, in
model.fit(X_train, epochs=1000, val=X_val)
File "/Users/huxinlei/anaconda/anaconda3/envs/dfencoder/lib/python3.6/site-packages/dfencoder/autoencoder.py", line 616, in fit
self.logger.show_embeddings(self.categorical_fts)
AttributeError: 'BasicLogger' object has no attribute 'show_embeddings'
My environment is as follows:
Python 3.6
My code is as follows:
import pandas as pd
from dfencoder import AutoEncoder

initial_df = pd.read_csv(r'/Users/huxinlei/Desktop/feature_engineer_data.csv')
positive_sample_df = initial_df.loc[initial_df['lable'] == 1]

train = positive_sample_df.sample(frac=.8, random_state=42)
test = positive_sample_df.loc[~positive_sample_df.index.isin(train.index)]
X_train = train
X_val = test

model = AutoEncoder(
    encoder_layers=[512, 512, 512],  # model architecture
    decoder_layers=[],  # decoder optional - you can create bottlenecks if you like
    activation='relu',
    swap_p=0.2,  # noise parameter
    lr=0.01,
    lr_decay=.99,
    batch_size=512,
    verbose=False,
    optimizer='sgd',
    scaler='gauss_rank',  # gauss rank scaling forces numeric features into standard normal distributions
    min_cats=3  # cutoff for minority categories, default 10
)
model.fit(X_train, epochs=1000, val=X_val)
Why do I encounter this problem, and how can I solve this problem?
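For context, the call that fails is self.logger.show_embeddings(...) inside dfencoder's fit(). One workaround I am considering (assuming the missing show_embeddings can safely be a no-op for my use case — I have not verified this against dfencoder's intended behavior, and upgrading the package may be the proper fix) is to monkey-patch the method onto BasicLogger before calling fit(). The sketch below uses a stand-in BasicLogger class, since the real one lives inside the installed dfencoder package; with the real library, BasicLogger would be imported from wherever autoencoder.py gets it.

```python
# Stand-in for dfencoder's BasicLogger, used here only to demonstrate
# the monkey-patch. With the real library, import BasicLogger from the
# dfencoder package instead (check autoencoder.py's imports).
class BasicLogger:
    """Minimal logger lacking show_embeddings, as in the traceback."""
    pass

# Patch in a no-op so fit()'s call to logger.show_embeddings(...) no
# longer raises AttributeError.
BasicLogger.show_embeddings = lambda self, categorical_fts: None

logger = BasicLogger()
result = logger.show_embeddings({})  # succeeds and returns None
print(result)
```

This only suppresses the crash; it does not produce the embedding visualizations the call was presumably meant to generate.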