Welcome to Talos community! Thanks so much for creating your first issue :)
exclude is there so that you can drop columns (usually metrics), because doing a statistical correlation with, for example, three different metrics makes no sense. Pass the columns to drop as a list of strings, where each string exactly matches the column name. That will work.
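For example, to drop a non-numeric column before computing the correlations (assuming a Reporting object named r and a column called 'lossfunction' in the experiment log):
# 'lossfunction' stands in for whichever column should be dropped
r.plot_corr('val_loss', exclude=['lossfunction'])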
Thank you very much for your response. I adjusted the code with the exact names of the metrics (except val_loss, which is the one I want to plot):
r.plot_corr('val_loss',exclude=['val_mse','val_f1score','loss','mse','f1score'])
Unfortunately it still does not work. Error message: "ValueError: zero-size array to reduction operation minimum which has no identity"
What am I doing wrong?
Thank you in advance for your help!
Please post the entire trace.
The model:
# imports assumed for this snippet (not shown in the original post)
import talos
from talos.utils import early_stopper
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

def modelbuilding_regression(X_train, y_train, X_test, y_test, params):
    model = Sequential()

    #### INPUT LAYER ####
    model.add(Dense(params['neurons'],
                    input_dim=X_train.shape[1],
                    activation=params['activation'],
                    kernel_initializer=params['kernel_initializer']))
    model.add(Dropout(params['dropout']))

    #### HIDDEN LAYERS ####
    talos.hidden_layers(model, params['hidden_layers'], 1)

    #### OUTPUT LAYER ####
    model.add(Dense(1,
                    activation=params['last_activation'],
                    kernel_initializer=params['kernel_initializer']))

    model.compile(loss=params['lossfunction'],
                  optimizer=params['optimizer'],
                  metrics=['mse', talos.utils.metrics.f1score])

    history = model.fit(X_train, y_train,
                        batch_size=params['batch_size'],
                        epochs=params['epochs'],
                        callbacks=[early_stopper(params['epochs'])],
                        verbose=0,
                        validation_data=(X_test, y_test))

    return history, model
The script to perform the experiment:
X_train, X_test, y_train, y_test = train_test_split(data_inp, data_label_normalized, test_size=0.4)

p = {'lr': (0.5, 5, 10),
     'neurons': (3, 126, 512),
     'hidden_layers': [0, 1, 2],
     'batch_size': (10, 50, 100),
     'epochs': [70],
     'dropout': (0, 0.05, 0.1, 0.5),
     'kernel_initializer': ['uniform', 'normal'],
     'optimizer': ['Adam', 'Nadam', 'RMSprop'],
     'lossfunction': [loss_RMSLE],
     'activation': ['sigmoid', 'relu', 'elu'],
     'last_activation': ['sigmoid']}

expname = 'HPtuning'

h = talos.Scan(x=X_train.to_numpy(),
               y=y_train.to_numpy(),
               x_val=X_test.to_numpy(),
               y_val=y_test.to_numpy(),
               model=modelbuilding_regression,
               reduction_metric='val_loss',
               minimize_loss=True,
               params=p,
               experiment_name=expname,
               print_params=True,
               round_limit=1)

r = talos.Reporting(latest_file)
r.plot_hist(metric='val_loss', bins=30)
r.plot_line(metric='val_loss')
r.plot_corr('val_loss', exclude=['val_mse', 'val_f1score', 'loss', 'mse', 'f1score'])
plt.show()
Can you post the entire trace, i.e. the entire message that comes with the error?
First off, make sure to check your support options.
The preferred way to resolve usage related matters is through the docs which are maintained up-to-date with the latest version of Talos.
If you do end up asking for support in a new issue, make sure to follow the below steps carefully.
1) Confirm the below
2) Include the output of:
talos.__version__ = 0.6.6
3) Explain clearly what you are trying to achieve
I want to plot the correlations of "val_loss" with all parameters:
r = talos.Reporting(file)
r.plot_hist(metric='val_loss', bins=30)
r.plot_corr('val_loss', exclude='lossfunction')
plt.show()
I got an error telling me to use "exclude", but I really don't know how to use it correctly. The code seems to need a list, but when I pass it a list, it does not work.
4) Explain what you have already tried
I tried something like this:
r.plot_corr('val_loss', exclude='lossfunction')
--> TypeError: can only concatenate str (not "list") to str

r.plot_corr('val_loss', exclude=['lossfunction'])
--> ValueError: zero-size array to reduction operation minimum which has no identity

r.plot_corr(metric='val_loss', exclude=[])
-->

So, the model and hyperparameter tuning work; it is just about the visualization. Hence, to focus on the problem, I have not included the model and the whole code. If this is necessary, please let me know!
5) Provide a code-complete reference
Scan() command
NOTE: If the data is sensitive and can't be shared, create dummy data that mimics it.
A self-contained Jupyter Notebook, Google Colab, or similar is highly preferred and will speed up helping you with your issue.
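As an illustration only, a minimal dummy-data reproduction could look roughly like the following; the array shapes, the model, and the parameter grid below are placeholders, not the real experiment:

import numpy as np
import talos
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# placeholder data that only mimics the shape of the real dataset
x = np.random.random((200, 8))
y = np.random.random((200, 1))

def dummy_model(x_train, y_train, x_val, y_val, params):
    # minimal regression model, just enough to produce an experiment log for Reporting/plot_corr
    model = Sequential()
    model.add(Dense(params['neurons'], input_dim=x_train.shape[1], activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mse', optimizer='adam', metrics=['mse'])
    history = model.fit(x_train, y_train,
                        validation_data=(x_val, y_val),
                        batch_size=params['batch_size'],
                        epochs=5,
                        verbose=0)
    return history, model

# small placeholder grid so the scan finishes quickly
p = {'neurons': [8, 16], 'batch_size': [16, 32]}

talos.Scan(x=x, y=y, params=p, model=dummy_model, experiment_name='dummy_experiment')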