autonomio / talos

Hyperparameter Experiments with TensorFlow and Keras
https://autonom.io
MIT License

ValueError: The `kernel_size` argument must be a tuple of 1 integers when using params for kernel size #531

Closed Yosafat1997 closed 2 years ago

Yosafat1997 commented 3 years ago

First off, make sure to check your support options.

The preferred way to resolve usage related matters is through the docs which are maintained up-to-date with the latest version of Talos.

If you do end up asking for support in a new issue, make sure to follow the below steps carefully.

1) Confirm the below

2) Include the output of:

talos.__version__

1.0.0

3) Explain clearly what you are trying to achieve

Dear Developers, I want to use Talos with my pre-defined validation set. Say I have three dataframes, which I call Train, Test, and Validation. I want to use my Validation dataframe as the validation data for my model, instead of letting Talos create a split for me.

4) Explain what you have already tried

I have tried to use Talos in a separate function and run it. It gives me the error: ValueError: The `kernel_size` argument must be a tuple of 1 integers. Received: [3, 5, 7]. I use Conv1D, which is different from the examples, and I could not find any such case in the Talos docs.

5) Provide a code-complete reference

This is how my params are made:

dropout_rate = [0.2, 0.5, 0.7]
feature_len = [64, 128, 256]
wconv = [3, 5, 7]
epc = [100, 200, 500]
para = {'dropout_rate': dropout_rate, 'feature_len': feature_len, 'wconv': wconv, 'epc': epc}
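For context: Talos expands a dictionary of lists like `para` into a grid of permutations and hands the model function one scalar per key on each round. A rough pure-Python sketch of that expansion (an illustration, not the actual Talos internals):

```python
from itertools import product

# Stand-in for the reporter's parameter space.
para = {
    'dropout_rate': [0.2, 0.5, 0.7],
    'feature_len': [64, 128, 256],
    'wconv': [3, 5, 7],
    'epc': [100, 200, 500],
}

def expand_grid(space):
    """Yield one dict per permutation, each value a single scalar."""
    keys = list(space)
    for combo in product(*(space[k] for k in keys)):
        yield dict(zip(keys, combo))

rounds = list(expand_grid(para))
```

Each `rounds[i]['wconv']` is a single int such as 3, which is what `Conv1D(kernel_size=...)` expects. If the whole list `[3, 5, 7]` reaches the layer, the model function was called with the raw parameter space rather than one permutation.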

This is how I make the model:

def GetModel(trainX, trainY, x_val, y_val, params):
    model = tf.keras.Sequential([
        Conv1D(filters=params['feature_len'], kernel_size=params['wconv'], activation='relu'),
        Dropout(params['dropout_rate']),
        MaxPooling1D(pool_size=2, strides=1),
        Flatten(),
        Dense(2, activation='softmax')
    ])
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy', 'mse'])
    out = model.fit(x=trainX,
                    y=trainY,
                    epochs=params['epc'],
                    validation_data=(x_val, y_val))
    return model, out
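The error message itself comes from Keras's argument normalization: `Conv1D` requires `kernel_size` to be a single integer (or a 1-tuple), and rejects a list of several candidate values. A simplified sketch of that check (an illustration of the behavior, not the actual Keras source):

```python
def normalize_tuple(value, n, name):
    """Simplified illustration of the check Keras applies to kernel_size."""
    if isinstance(value, int):
        return (value,) * n
    value = tuple(value)
    if len(value) != n or not all(isinstance(v, int) for v in value):
        raise ValueError(
            f"The `{name}` argument must be a tuple of {n} integers. "
            f"Received: {list(value)}")
    return value
```

So `kernel_size=3` is normalized to `(3,)`, while `kernel_size=[3, 5, 7]` raises exactly the ValueError in this report, because three integers were passed where one was expected.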

This is how I put Talos in a different function:

def training(trainX,validX,trainY,validY,params):
    model,out = GetModel(trainX,validX,trainY,validY,params)
    scan_object = talos.Scan(x=trainX,
                             y=trainY, 
                             model=GetModel, 
                             x_val=validX,
                             y_val=validY, 
                             params=para, 
                             experiment_name='ECG')
    return model,scan_object,out
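A likely source of the ValueError (an editor's reading, not confirmed by the reporter): the first line of `training` calls `GetModel` directly with the raw `para` dict, so `params['wconv']` is still the whole list `[3, 5, 7]` when it reaches `Conv1D`. `talos.Scan` itself calls the model function once per permutation with scalar values, so the direct call is the one that fails. A minimal mock illustrating the difference (pure Python, not the real Talos API):

```python
from itertools import product

def mock_scan(x, y, model, params, x_val=None, y_val=None):
    # Simplified mock of how a Scan-style driver invokes the model
    # function: one call per permutation, each value already a scalar.
    keys = list(params)
    results = []
    for combo in product(*(params[k] for k in keys)):
        round_params = dict(zip(keys, combo))
        results.append(model(x, y, x_val, y_val, round_params))
    return results

# Hypothetical model stub that records the kernel size it receives.
def fake_model(trainX, trainY, x_val, y_val, params):
    assert isinstance(params['wconv'], int)  # scalar per round, never a list
    return params['wconv']

para = {'dropout_rate': [0.2], 'feature_len': [64], 'wconv': [3, 5, 7], 'epc': [100]}
seen = mock_scan(None, None, fake_model, para)  # never sees [3, 5, 7] as one value
```

By contrast, `fake_model(None, None, None, None, para)` would hand the full list to the layer, which is the failure mode in the report.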

This is how I run my model with a 100-repetition test:

def experimental_test(n_exp,sddb,nsr,n_train,n_valid,n_test,start_normal,start_sddb,options):
       ....
        train = [A_train,B_train]
        valid = [A_val,B_val]
        tests = [A_test,B_test]
        train_dt = pd.concat(train)
        valid_dt = pd.concat(valid)
        test_dt = pd.concat(tests)
        trainX,validX,trainY,validY= dataset_maker(train_dt,valid_dt)
        n_timesteps, n_features, n_outputs =trainX.shape[1], trainX.shape[2], trainY.shape[0]
        model,scan_object,out=training(trainX,validX,trainY,validY,para)
        pyplot.plot(out.history['loss'], label='train')
        pyplot.plot(out.history['val_loss'], label='validation')
        pyplot.legend()
        pyplot.show()
        pyplot.plot(out.history['accuracy'], label='train')
        pyplot.plot(out.history['val_accuracy'], label='validation')
        pyplot.legend()
        pyplot.show()
        test_seq(start_sddb,start_normal,test_dt,model)

NOTE: If the data is sensitive and can't be shared, create dummy data that mimics it.

A self-contained Jupyter Notebook, Google Colab, or similar is highly preferred and will speed up helping you with your issue.


mikkokotila commented 3 years ago

Can you post your entire trace? Thank you :)