hexiangnan / neural_factorization_machine

TensorFlow Implementation of Neural Factorization Machine
466 stars 186 forks source link

AttributeError: tf 'module' object has no attribute 'sub' #2

Open wenruij opened 6 years ago

wenruij commented 6 years ago

Q1: There's a deprecated API tf.sub in your implementation, which throws an exception like AttributeError: 'module' object has no attribute 'sub' on TensorFlow 1.3.0+.

Changing it to tf.subtract fixes the problem.
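One version-agnostic way to handle the rename is a small compatibility helper; this is a hypothetical sketch (the helper name and the stand-in modules are mine, not from the repository), demonstrated without needing TensorFlow installed:

```python
import types

# Hypothetical helper: return whichever subtraction op the given
# TensorFlow module provides (tf.sub before 1.0, tf.subtract from 1.0 on).
def get_subtract(tf_module):
    sub_fn = getattr(tf_module, "subtract", None)
    return sub_fn if sub_fn is not None else getattr(tf_module, "sub")

# Stand-in modules simulating old and new TensorFlow APIs:
new_tf = types.SimpleNamespace(subtract=lambda a, b: a - b)
old_tf = types.SimpleNamespace(sub=lambda a, b: a - b)
```

The same pattern (getattr with a fallback) works for the other ops renamed in TF 1.0, such as tf.mul and tf.neg.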

Q2: Furthermore, have you ever considered using tf.estimator.Estimator in place of sklearn.base.BaseEstimator? A tf.estimator.Estimator model with model.export_model would let you deploy a trained model with TensorFlow Serving in a production environment, and it would also let you parallelize training with the high-level API tf.contrib.learn.Experiment.

hexiangnan commented 6 years ago

Q1 looks like a TF version problem. The code was written against TF 0.10, so you may need to adjust some APIs to make it run on newer versions of TF.

Q2 Thanks for pointing it out. I did not know that before.


wenruij commented 6 years ago

@hexiangnan I just tried --loss_type=log_loss; there may be a few points that need updating:

  1. Line 149: self.lambda_bilinear > 0 contains a typo; the attribute is defined as self.lamda_bilinear in __init__.

  2. Lines 150 and 152: the parameter of tf.contrib.losses.log_loss should be weights, not weight. TF 1.0 may accept weight, but TF 1.2+ uses weights.

  3. Line 304: the early_stop decision condition for self.loss_type == 'log_loss' should be the same as for self.loss_type == 'square_loss'; otherwise the training loop always stops at the 6th epoch.

  4. Two different loss calculations are used: tf.contrib.losses.log_loss and sklearn.metrics.log_loss. Would it be better to unify them? But perhaps you have your own reasons for using both; if so, please ignore this suggestion.
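To illustrate point 3: for both square_loss (RMSE) and log_loss, lower validation loss is better, so the same early-stopping rule can serve both. Below is a hypothetical sketch of such a rule (the helper name and logic are mine, not the repository's exact code):

```python
# Hypothetical early-stopping check: stop when the validation loss has
# not improved over the last `patience` epochs, regardless of loss type,
# since both square_loss and log_loss are minimized.
def should_early_stop(valid_losses, patience=5):
    """Return True if the best loss in the last `patience` epochs
    is no better than the best loss seen before them."""
    if len(valid_losses) <= patience:
        return False  # not enough history yet
    best_before = min(valid_losses[:-patience])
    return min(valid_losses[-patience:]) >= best_before
```

Because the rule only compares loss values, it behaves identically for either loss type, which is the unification point 3 is asking for.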
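On point 4: both tf.contrib.losses.log_loss and sklearn.metrics.log_loss compute standard binary cross-entropy, loss = -mean(y*log(p) + (1-y)*log(1-p)); discrepancies typically come from how each library clips predictions away from 0 and 1. A minimal pure-Python reference implementation (my own sketch, including the clipping epsilon as an assumption) that either library's output can be checked against:

```python
import math

# Reference binary cross-entropy (log loss). `eps` clips predictions
# away from 0 and 1 to avoid log(0); libraries differ mainly in how
# they do this clipping.
def binary_log_loss(y_true, y_pred, eps=1e-15):
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip into (0, 1)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)
```

Using one such reference for both the training loss and the evaluation metric would make the reported numbers directly comparable across epochs.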

Finally, if you decide to adopt tf.estimator.Estimator and tf.contrib.learn.Experiment, please let me know. I'd be glad to contribute to that part. Your idea of the neural factorization machine looks great ~