hunkim / PyTorchZeroToAll

Simple PyTorch Tutorials Zero to ALL!
http://bit.ly/PyTorchZeroAll

poor performance of diabetes prediction #2

Open · Andy-jqa opened 7 years ago

Andy-jqa commented 7 years ago

Why is the performance of 07_diabetes_logistic.py at predicting diabetes so poor?

kopxiong commented 7 years ago

I guess it's due to the optimizer settings: after I changed the update rule to `optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)`, the loss decreased dramatically.

Andy-jqa commented 7 years ago

@kopxiong It makes sense to change the hyperparameters. I've tried your suggestion, but the loss didn't drop to a satisfying level. Have you checked the y_pred of the final model?
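(For anyone following along: a minimal sketch of how one could inspect y_pred, assuming the model and data tensors from the tutorial script; the helper name here is just illustrative.)

```python
import torch

def check_predictions(model, x_data, y_data):
    """Threshold the sigmoid outputs at 0.5 and report accuracy, plus the
    fraction of positive predictions (to spot an all-0 / all-1 collapse)."""
    with torch.no_grad():
        y_pred = model(x_data)                # probabilities in (0, 1)
        y_hat = (y_pred > 0.5).float()        # hard 0/1 predictions
        accuracy = (y_hat == y_data).float().mean().item()
        positive_rate = y_hat.mean().item()
    return accuracy, positive_rate
```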

hunkim commented 7 years ago

@kopxiong Any thoughts? Since the goal of these tutorials is understanding the concept, I did not pay attention to the accuracy.

kopxiong commented 7 years ago

@Andy-jqa @hunkim Sorry for the late reply. I think we can improve the model's performance in the following ways:

  1. Add more layers or increase the number of nodes in each layer, since more units give the model more representational capacity, e.g.

     ```python
     super(Model, self).__init__()
     self.l1 = torch.nn.Linear(8, 32)   # was (8, 6)
     self.l2 = torch.nn.Linear(32, 16)  # was (6, 4)
     self.l3 = torch.nn.Linear(16, 1)   # was (4, 1)
     ```

  2. Use the Adam or RMSProp optimizer instead of SGD.
  3. Train for more epochs (2000 or more).
  4. The sigmoid activation function may not be a good choice; try ReLU or LeakyReLU (I tried ReLU, but it didn't help).

Maybe we should also add some regularization to avoid overfitting.
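A rough sketch combining suggestions 1–4, plus a `weight_decay` term for L2 regularization; the learning rate, weight decay, and epoch count here are illustrative placeholders, not tuned values, and the random tensors merely stand in for the diabetes data loaded in the tutorial:

```python
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.l1 = torch.nn.Linear(8, 32)   # wider than the original (8, 6)
        self.l2 = torch.nn.Linear(32, 16)  # wider than the original (6, 4)
        self.l3 = torch.nn.Linear(16, 1)   # original (4, 1)
        self.relu = torch.nn.ReLU()        # ReLU for the hidden activations
        self.sigmoid = torch.nn.Sigmoid()  # keep sigmoid on the output for BCELoss

    def forward(self, x):
        out = self.relu(self.l1(x))
        out = self.relu(self.l2(out))
        return self.sigmoid(self.l3(out))

model = Model()
criterion = torch.nn.BCELoss()
# Adam instead of plain SGD; weight_decay adds simple L2 regularization.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-4)

# Dummy tensors standing in for the diabetes dataset (8 input features).
x_data = torch.rand(759, 8)
y_data = torch.randint(0, 2, (759, 1)).float()

for epoch in range(2000):                  # more epochs (suggestion 3)
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```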

akramsystems commented 6 years ago

With lr = 8 and the number of epochs set to 10000, I got around a 2% error.

akramsystems commented 6 years ago

I also added more nodes in the hidden layers, setting them to `torch.nn.Linear(8, 30)`, `torch.nn.Linear(30, 10)`, `torch.nn.Linear(10, 1)`.
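For reference, a compact sketch of the setup described in the two comments above; the hidden activations are assumed to follow the tutorial's sigmoid, and `lr=8` is taken verbatim from the report, which is far larger than typical SGD learning rates, so treat it as something to verify rather than a recommendation:

```python
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        # wider hidden layers than the tutorial's (8, 6), (6, 4), (4, 1)
        self.l1 = torch.nn.Linear(8, 30)
        self.l2 = torch.nn.Linear(30, 10)
        self.l3 = torch.nn.Linear(10, 1)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        out = self.sigmoid(self.l1(x))
        out = self.sigmoid(self.l2(out))
        return self.sigmoid(self.l3(out))

model = Model()
criterion = torch.nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=8)  # lr as reported above

# ...then the training loop from the tutorial, run for 10000 epochs.
```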

elcolie commented 5 years ago

I can't get the loss below 0.6. Any ideas?

file.zip