adapt-python / adapt

Awesome Domain Adaptation Python Toolbox
https://adapt-python.github.io/adapt/
BSD 2-Clause "Simplified" License

Issue with `TrAdaBoostR2`: TypeError: predict() takes 2 positional arguments but 4 were given #53

Closed · 9527-ly closed this issue 2 years ago

9527-ly commented 2 years ago

When I use the following code from the toy regression example:

```python
from adapt.instance_based import TrAdaBoostR2

model = TrAdaBoostR2(get_model(), n_estimators=30, random_state=0)
save_preds = SavePrediction()
model.fit(Xs.reshape(-1, 1), ys.reshape(-1, 1),
          Xt_lab.reshape(-1, 1), yt_lab.reshape(-1, 1),
          callbacks=[save_preds], epochs=100, batch_size=110, verbose=0)
```

I encountered the following error message:

```
File "G://adapt-master/test1.py", line 112, in <module>
    model.fit(Xs.reshape(-1, 1), ys.reshape(-1, 1), Xt_lab.reshape(-1, 1), yt_lab.reshape(-1, 1)
File "G:\adapt-master\adapt\instance_based\_tradaboost.py", line 267, in fit
    **fit_params
File "G:\adapt-master\adapt\instance_based\_tradaboost.py", line 308, in _boost
    ys_pred = estimator.predict_proba(Xs)
File "C:\Users.conda\envs\tgpu\lib\site-packages\tensorflow\python\keras\engine\sequential.py", line 283, in predict_proba
    preds = self.predict(x, batch_size, verbose)
TypeError: predict() takes 2 positional arguments but 4 were given
```

antoinedemathelin commented 2 years ago

Hi @9527-ly,
Thank you for reporting the issue. Which version of Tensorflow are you using? It seems that your Tensorflow model has a `predict_proba` method, which is not the case for me.
Best,

antoinedemathelin commented 2 years ago

Changes have been made to handle the case of a Tensorflow model having a `predict_proba` method (#54).

9527-ly commented 2 years ago

Hi @antoinedemathelin,
Both Tensorflow CPU 2.6 and Tensorflow GPU 2.4 run into this problem. I then looked at the source code in `_tradaboost.py`.

I found the following code:

```python
if hasattr(estimator, "predict_proba"):
    ys_pred = estimator.predict_proba(Xs)
    yt_pred = estimator.predict_proba(Xt)
elif hasattr(estimator, "_predict_proba_lr"):
    ys_pred = estimator._predict_proba_lr(Xs)
    yt_pred = estimator._predict_proba_lr(Xt)
```

Then, if I delete the `_proba` and `_proba_lr` suffixes (so that plain `predict` is called) and run the code again, the above error is no longer raised. I don't know if my modification is reasonable.
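For illustration, here is a minimal sketch of a less drastic guard (the `isinstance` check on `tf.keras.Model` is only my assumption, not necessarily what the library does):

```python
import tensorflow as tf

# Sketch: only trust predict_proba on non-Keras estimators. Older
# tf.keras Sequential models expose a deprecated predict_proba that
# forwards (x, batch_size, verbose) positionally, which triggers the
# TypeError above.
if hasattr(estimator, "predict_proba") and not isinstance(estimator, tf.keras.Model):
    ys_pred = estimator.predict_proba(Xs)
    yt_pred = estimator.predict_proba(Xt)
elif hasattr(estimator, "_predict_proba_lr"):
    ys_pred = estimator._predict_proba_lr(Xs)
    yt_pred = estimator._predict_proba_lr(Xt)
else:
    ys_pred = estimator.predict(Xs)
    yt_pred = estimator.predict(Xt)
```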

antoinedemathelin commented 2 years ago

Hi @9527-ly,
Yes, the problem comes from `predict_proba` being called instead of `predict` on a Tensorflow Sequential model. I made a change to that effect; you can install the current version of the library from GitHub with the command:

```
pip install git+https://github.com/adapt-python/adapt.git
```
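To check that the new code was picked up, you can for instance print the installed version (a quick check; that `adapt` exposes `__version__` is an assumption here):

```python
import adapt
print(adapt.__version__)  # should change after reinstalling from GitHub
```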

Please tell me if you still have the issue. Best,

9527-ly commented 2 years ago

Hi @antoinedemathelin, Thanks a lot.

At the same time, I would also like to ask you a few questions; I would be grateful for your reply. I found that some examples, and even some domain adaptation methods, only work when the networks are two-dimensional FC models. For example, if I replace the default encoder and task network with a custom 1D-CNN in DeepCORAL, I get the following error:

```
File "G:\adapt-master\adapt\base.py", line 1338, in call
    return self.task_(x)
G:\adapt-master\adapt\feature_based\_deepcoral.py:137 train_step
    ys_pred = self.task_(Xs_enc, training=True)
Input 0 of layer task is incompatible with the layer: expected min_ndim=3, found ndim=2. Full shape received: (1, 10)
```

Then, I looked at the source code in `_deepcoral.py` and `base.py`. I found that the following code caused the error:

```python
def call(self, inputs):
    x = self.encoder_(inputs)
    return self.task_(x)
```

```python
Xs_enc = self.encoder_(Xs, training=True)
ys_pred = self.task_(Xs_enc, training=True)
```

So I modified them as follows:

```python
def call(self, inputs):
    x = self.encoder_(inputs)
    x = tf.expand_dims(x, -1)
    return self.task_(x)
```

```python
Xs_enc = self.encoder_(Xs, training=True)
Xs_enc = tf.expand_dims(Xs_enc, -1)
ys_pred = self.task_(Xs_enc, training=True)
```

I don't know if my modification is reasonable. I modified the MDD method in the same way. I may also test some more complex models later, such as 2D-CNNs and ConvLSTMs. This question has always puzzled me. Looking forward to your reply. Best,

antoinedemathelin commented 2 years ago

Hi @9527-ly,
From what I have seen in the literature, people mainly use an FC model for the task and a Conv model for the encoder. Thus, if a Flatten layer is put at the end of the encoder, the input of the task network is of dimension 2.
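For illustration, a minimal sketch of this conventional pairing (the layer sizes are arbitrary examples, not taken from the library):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, Flatten, Dense

# Convolutional encoder ending in Flatten: its output is 2-D (batch, features)
def encoder():
    model = Sequential()
    model.add(Conv1D(64, 3, activation='relu', input_shape=(10, 1)))
    model.add(Flatten())
    return model

# Fully connected task network: accepts the encoder's 2-D output directly
def task():
    model = Sequential()
    model.add(Dense(128, activation='relu'))
    model.add(Dense(1, activation='linear'))
    return model
```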

Thank you for reporting your need for an encoded space of higher dimension. I see that your proposed solution is to put an `expand_dims` before the task. Did you try putting a Reshape layer at the beginning of the task network to perform this `expand_dims`? In your case the task network has convolution layers, right?
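For instance, something like this (a sketch only; the `(10, 1)` shape assumes your encoder outputs 10 features):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Reshape, Conv1D, Flatten, Dense

def task():
    model = Sequential()
    # The Reshape layer restores the channel axis inside the task network,
    # so the library code in base.py does not need to be modified
    model.add(Reshape((10, 1), input_shape=(10,)))
    model.add(Conv1D(filters=64, kernel_size=3, padding='same', activation='relu'))
    model.add(Flatten())
    model.add(Dense(1, activation='linear'))
    return model
```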

9527-ly commented 2 years ago

Hi @antoinedemathelin, yes, in my case the task network has convolution layers, like this:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, BatchNormalization, Flatten, Dense
from tensorflow.keras.optimizers import Adam

def encoder():
    model1 = Sequential()
    model1.add(Conv1D(filters=64, kernel_size=3, padding='SAME', activation='relu', input_shape=(10, 1)))
    # model1.add(BatchNormalization())
    model1.add(Conv1D(filters=64, kernel_size=3))
    model1.add(BatchNormalization())
    model1.add(Flatten())
    model1.add(Dense(128, activation='relu'))
    model1.add(Dense(10, activation='linear'))
    model1.compile(optimizer=Adam(0.01), loss='mse')
    return model1
```

```python
def task():
    model1 = Sequential()
    model1.add(Conv1D(filters=64, kernel_size=3, padding='SAME', activation='relu', input_shape=(10, 1)))
    # model1.add(BatchNormalization())
    model1.add(Flatten())
    model1.add(Dense(128, activation='relu'))
    model1.add(Dense(1, activation='linear'))
    model1.compile(optimizer=Adam(0.01), loss="mse")
    return model1
```

Thank you very much for your advice. I will try adding the Reshape layer to the model.

9527-ly commented 2 years ago

Hi @antoinedemathelin,
The reason I am doing this is to test whether these methods are applicable to more complex models. Also, thank you very much for providing this library written in Tensorflow.