autonomio / talos

Hyperparameter Experiments with TensorFlow and Keras
https://autonom.io
MIT License

Functional API issue #122

Closed FlorianBury closed 5 years ago

FlorianBury commented 5 years ago

Hi,

I just started using Talos because it looked like a practical tool for hyperparameter scans. However, instead of the Sequential() class I prefer to use the functional API; it is my only option given the kind of networks I need to build.

I noticed an error when using the talos module:

File "/home/ucl/cp3/fbury/.local/lib/python3.6/site-packages/talos/metrics/score_model.py", line 38, in get_score
    y_pred = self.keras_model.predict_classes(self.x_val)
AttributeError: 'Model' object has no attribute 'predict_classes'

So I checked it out, and it turns out that Sequential() has a predict_classes() method, while the functional API's Model() class only has predict(). I went into the Talos scripts to change it and got the following error:

File "/home/ucl/cp3/fbury/.local/lib/python3.6/site-packages/talos/metrics/performance.py", line 36, in multi_class
    self.y_pred = self.y_pred.flatten('F')
AttributeError: 'list' object has no attribute 'flatten'

I figured that predict() somehow returned a list instead of a numpy array.

In the end, in talos/metrics/score_model.py, I replaced

    y_pred = self.keras_model.predict_classes(self.x_val)

with

    y_pred = asarray(self.keras_model.predict(self.x_val))

and it seems to be working.

Since this is a rather "dirty" fix, I'd like to know whether it is viable, or at least to have your opinion on it.
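
For context, the behaviour reproduces outside Talos. A minimal sketch (shapes and layer sizes are made up here): predict_classes() exists only on Sequential, and a multi-output Model's predict() returns a list, which asarray() stacks back into an array:

    import numpy as np
    from keras.models import Model
    from keras.layers import Input, Dense

    # toy multi-output functional model; shapes are arbitrary
    inputs = Input(shape=(4,))
    hidden = Dense(8, activation='relu')(inputs)
    outputs = [Dense(1, activation='sigmoid', name='OUT_%d' % i)(hidden)
               for i in range(1, 7)]
    model = Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer='adam', loss='mse')

    x = np.random.rand(10, 4)
    preds = model.predict(x)      # a list of 6 arrays, each of shape (10, 1)
    y_pred = np.asarray(preds)    # stacked into a single (6, 10, 1) array

    assert not hasattr(model, 'predict_classes')  # only Sequential has it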

Thanks

awilliamson commented 5 years ago

Can you provide a bit more information about the setting? Is this just for scanning, deploying, or everything? I'm currently using the functional API (a habit from early TF days, I guess), and everything is working fine for me. What versions of Talos, Keras, and TensorFlow are you running? GPU or non-GPU?

FlorianBury commented 5 years ago

The problem occurred with Scan(), which uses the score_model module described above. I am also having issues with Evaluate(), though I'm not sure why yet.

Python: 3.6.4
Talos: 0.4.3
TensorFlow: 1.5.0
Keras: 2.1.3

Currently not using a GPU, but I intend to in the near future.

mikkokotila commented 5 years ago

This should be resolved by 0.4.4:

pip uninstall talos
pip install git+https://github.com/autonomio/talos@dev

Note that you no longer need to declare in Scan() that you are using a functional model. Sequential and Functional models are unified in the way they are handled in Talos.
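
For anyone landing here: with that unification, a functional-API model plugs into Scan() just like a Sequential one. A minimal sketch, assuming the usual Talos model-function convention of that era (the function receives the split data and the params dict and returns the fit history and the model; x, y and the params dict p are placeholders):

    import talos as ta
    from keras.models import Model
    from keras.layers import Input, Dense

    def build_model(x_train, y_train, x_val, y_val, params):
        # a plain functional-API model; nothing Talos-specific is needed
        inputs = Input(shape=(x_train.shape[1],))
        hidden = Dense(params['first_neuron'], activation='relu')(inputs)
        output = Dense(1, activation='sigmoid')(hidden)
        model = Model(inputs=inputs, outputs=output)
        model.compile(optimizer='adam', loss='binary_crossentropy',
                      metrics=['acc'])
        out = model.fit(x_train, y_train,
                        validation_data=(x_val, y_val),
                        epochs=10, verbose=0)
        return out, model

    h = ta.Scan(x, y, params=p, model=build_model)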

FlorianBury commented 5 years ago

Thanks, that solved the problem! However, I have another one with Evaluate()... In my model I defined six output nodes that I want to treat separately:

    from keras.models import Model
    from keras.layers import Input, Dense

    inputs = Input(shape=(x_train.shape[1],), name='inputs')
    L1 = Dense(params['first_neuron'],
               activation=params['first_activation'],
               name='L1')(inputs)
    L2 = Dense(params['second_neuron'],
               activation=params['second_activation'],
               name='L2')(L1)
    OUT_1 = Dense(1, activation=params['output_activation'], name='OUT_1')(L2)
    OUT_2 = Dense(1, activation=params['output_activation'], name='OUT_2')(L2)
    OUT_3 = Dense(1, activation=params['output_activation'], name='OUT_3')(L2)
    OUT_4 = Dense(1, activation=params['output_activation'], name='OUT_4')(L2)
    OUT_5 = Dense(1, activation=params['output_activation'], name='OUT_5')(L2)
    OUT_6 = Dense(1, activation=params['output_activation'], name='OUT_6')(L2)

    # the six heads are gathered into a single multi-output model
    model = Model(inputs=inputs,
                  outputs=[OUT_1, OUT_2, OUT_3, OUT_4, OUT_5, OUT_6])

But when I try to use Evaluate(), I end up with:

File "/home/ucl/cp3/fbury/.local/lib/python3.6/site-packages/talos/commands/evaluate.py", line 39, in evaluate
    y_pred = model.predict(kx[i]) >= 0.5
TypeError: '>=' not supported between instances of 'list' and 'float'

I guess the problem comes from my definition of the outputs, which makes predict() return a list (one array per output).
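
Indeed, that is plain Python behaviour rather than anything Keras-specific: comparison operators are not defined between a list and a float, whereas a NumPy array broadcasts them elementwise:

    >>> import numpy as np
    >>> [0.1, 0.9] >= 0.5
    TypeError: '>=' not supported between instances of 'list' and 'float'
    >>> np.asarray([0.1, 0.9]) >= 0.5
    array([False,  True])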

mikkokotila commented 5 years ago

How is the output in OUT_1 through OUT_6 different?

FlorianBury commented 5 years ago

Right now they are not, but they might need to be in the future (different activation functions or loss functions).

However, if this cannot be implemented, I could define a single output layer of six neurons, but I have never used that before.

mikkokotila commented 5 years ago

For the time being, if they are identical, it seems that you can just use one and everything is fine?

I'm curious, what are you trying to achieve by having 6 output layers on a single model?

awilliamson commented 5 years ago

I imagine it's just a comparative experiment of single neurons vs. a one-hot output. Perhaps for postgraduate study?


FlorianBury commented 5 years ago

I am trying to regress the six bins of a histogram in order to interpolate it between different mass points (a particle-physics analysis). Here, yes, I can use a single layer, but I wanted to keep some freedom by optimizing the bins separately... probably overkill anyway.

Also, I have in the past done a regression with outputs from different layers, where the same problem would appear... If that happens again, I guess I will have to find a trickier workaround.

mikkokotila commented 5 years ago

I will try to unpack the situation.

As I understand it, you have a single output column that can take one of six values, one per bin. Is that correct? If so, the way you would typically do it is just use Keras as it is intended: one-hot encode your output values into 6 different columns instead of one, and then treat it as a multi-class classification task. If you then wanted to test how different hyperparameters affect the output layer, handle that as its own hyperparameter (or set of them), as intended in Talos.
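
For illustration, the encoding step is a single call (assuming the target is one column of integer bin indices 0-5; y is a placeholder):

    from keras.utils import to_categorical

    # a single column of integer labels in [0, 5] becomes six 0/1 columns
    y_onehot = to_categorical(y, num_classes=6)   # shape: (n_samples, 6)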

Finally, if you wanted to treat each label as its own model, then you would have 6 different models and therefore 6 different experiments.

Or did I miss something?

FlorianBury commented 5 years ago

My apologies for not being clear enough.

It is not a classification problem, more a multi-output regression one. Basically, I have histograms with six bins for different configurations (invariant masses, essentially). I want to interpolate these histograms in order to get the distributions (i.e. the 6 bins) for configurations that I do not know. To do so, I use the configuration parameters as inputs and 6 outputs representing the bins, which I can define either separately, each with its own parameters, or all together. The only issue was in the evaluate function: it uses the f1_score, which breaks down because this is not a classification task.

My workaround was to modify that script to replace the f1_score with a score suited to regression with multiple outputs (mean_squared_error, for example).
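
For reference, the substitution amounts to something like this (variable names are illustrative, not the exact ones in evaluate.py):

    import numpy as np
    from sklearn.metrics import mean_squared_error

    # predict() on the six-output model returns a list of (n_samples, 1)
    # arrays; hstack merges them into one (n_samples, 6) array
    y_pred = np.hstack(model.predict(x_val))
    score = mean_squared_error(y_val, y_pred)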

So far it has been working; if you have no problem with that, I can close the issue.

Thanks for your help anyway!

awilliamson commented 5 years ago

@FlorianBury Yes, F1 Score is for binary classification, and was the wrong metric to be using. :) Glad you got it sorted.