Closed: jaadeoye closed this issue 3 years ago
Hello Raphael,

Thank you for the program 'survivalmodels'. I am currently using it to model our oral cavity cancer data for deployment on ShinyApp, and I have two queries.
Thanks for using it!
Unlike the Python module, where the training and validation loss are constant on repeated runs, the loss from the survivalmodels package keeps changing, which affects the predicted survival probabilities to some extent. Is this normal? (Note: I have set shuffle = FALSE.)
I'm sorry, this really isn't enough detail for me to help. Can you produce a reprex so I know exactly what is changing?
Have any users tried deploying a DeepSurv model built with this package on ShinyApp? After I created the app in Shiny and tried publishing it, it reported that the pycox module could not be found. Please advise.
More generally, you want to know how to use reticulate-backed packages in Shiny. Try this tutorial, perhaps?
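In case it helps: a common pattern on hosted Shiny servers is to create and activate a Python environment when the app starts, so that pycox is installed where the app actually runs, not just on your own machine. Here is a rough sketch using reticulate; the environment name is made up, and you may prefer survivalmodels' own install_pycox() helper instead of installing the Python packages by hand.

# Top of app.R: make sure Python and pycox exist on the server.
# A minimal sketch, assuming a shinyapps.io-style host where Python
# packages must be set up at startup; "pycox-env" is an arbitrary name.
library(reticulate)

venv <- "pycox-env"

if (!virtualenv_exists(venv)) {
  virtualenv_create(venv)
  virtualenv_install(venv, c("torch", "pycox"))  # torch first, then pycox
}

# Bind this R session to the environment before survivalmodels is used.
use_virtualenv(venv, required = TRUE)
stopifnot(py_module_available("pycox"))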
Thank you for your response.
This is what I mean. I ran the code below on my dummy data:
library(reticulate)
library(survivalmodels)

if (requireNamespaces("reticulate")) {
  data <- d  # dummy data
  # Fit DeepSurv on the first 213 rows
  fit <- deepsurv(
    data = data[1:213, ], activation = "relu", frac = 0.2,
    num_nodes = c(64L, 64L, 64L), dropout = 0.4, early_stopping = TRUE,
    epochs = 512L, batch_size = 64L, batch_norm = TRUE, verbose = TRUE,
    optimizer = "adam", learning_rate = 0.01, shuffle = FALSE
  )
}

# Predict survival probabilities for the held-out rows
predict(fit, newdata = data[214:266, ], batch_size = 64L, type = "survival")
It produced this training and validation loss:
Response: Surv(time, status)
Features: {x2, x4, x7, x9, x10, x11, x12, x13, x14, x17, x18, x19, x20, x21}
0: [0s / 0s], train_loss: 3.7059, val_loss: 3.3683
1: [0s / 0s], train_loss: 3.7664, val_loss: 3.3086
2: [0s / 0s], train_loss: 3.1168, val_loss: 3.1851
3: [0s / 0s], train_loss: 3.2316, val_loss: 3.0291
4: [0s / 0s], train_loss: 3.2364, val_loss: 2.8691
5: [0s / 0s], train_loss: 3.0291, val_loss: 2.7785
6: [0s / 0s], train_loss: 3.1450, val_loss: 2.7872
7: [0s / 0s], train_loss: 2.8432, val_loss: 2.7404
8: [0s / 0s], train_loss: 2.6822, val_loss: 2.6182
9: [0s / 0s], train_loss: 3.0846, val_loss: 2.5323
10: [0s / 0s], train_loss: 2.7846, val_loss: 2.4080
11: [0s / 0s], train_loss: 3.0113, val_loss: 2.2637
12: [0s / 0s], train_loss: 2.6997, val_loss: 2.2014
13: [0s / 0s], train_loss: 2.6311, val_loss: 2.3158
14: [0s / 0s], train_loss: 2.6694, val_loss: 2.3824
15: [0s / 0s], train_loss: 2.4159, val_loss: 2.3650
16: [0s / 0s], train_loss: 2.6002, val_loss: 2.2976
17: [0s / 0s], train_loss: 2.6244, val_loss: 2.1875
18: [0s / 0s], train_loss: 2.6810, val_loss: 2.1325
19: [0s / 0s], train_loss: 2.5249, val_loss: 2.1874
20: [0s / 0s], train_loss: 2.7063, val_loss: 2.3179
21: [0s / 0s], train_loss: 2.3778, val_loss: 2.3940
22: [0s / 0s], train_loss: 2.5256, val_loss: 2.4264
23: [0s / 0s], train_loss: 2.6359, val_loss: 2.3794
24: [0s / 0s], train_loss: 2.6247, val_loss: 2.3415
25: [0s / 1s], train_loss: 2.5218, val_loss: 2.2977
26: [0s / 1s], train_loss: 2.3801, val_loss: 2.2695
27: [0s / 1s], train_loss: 2.3540, val_loss: 2.3472
28: [0s / 1s], train_loss: 2.5183, val_loss: 2.5078
If I run the code again, the loss changes to this:
Response: Surv(time, status)
Features: {x2, x4, x7, x9, x10, x11, x12, x13, x14, x17, x18, x19, x20, x21}
0: [0s / 0s], train_loss: 3.7312, val_loss: 2.9479
1: [0s / 0s], train_loss: 3.5491, val_loss: 2.7511
2: [0s / 0s], train_loss: 3.1089, val_loss: 2.6258
3: [0s / 0s], train_loss: 3.3624, val_loss: 2.5234
4: [0s / 0s], train_loss: 3.1942, val_loss: 2.4790
5: [0s / 0s], train_loss: 2.8854, val_loss: 2.4786
6: [0s / 0s], train_loss: 2.8777, val_loss: 2.5201
7: [0s / 0s], train_loss: 2.7983, val_loss: 2.6030
8: [0s / 0s], train_loss: 2.9053, val_loss: 2.7510
9: [0s / 0s], train_loss: 3.1230, val_loss: 2.6342
10: [0s / 0s], train_loss: 3.0644, val_loss: 2.4522
11: [0s / 0s], train_loss: 2.8871, val_loss: 2.3097
12: [0s / 0s], train_loss: 2.8660, val_loss: 2.2333
13: [0s / 0s], train_loss: 2.7412, val_loss: 2.2001
14: [0s / 0s], train_loss: 2.7904, val_loss: 2.1845
15: [0s / 0s], train_loss: 2.7495, val_loss: 2.1701
16: [0s / 0s], train_loss: 2.7211, val_loss: 2.2158
17: [0s / 0s], train_loss: 2.7342, val_loss: 2.2715
18: [0s / 0s], train_loss: 2.5499, val_loss: 2.3280
19: [0s / 0s], train_loss: 2.7957, val_loss: 2.3891
20: [0s / 0s], train_loss: 2.4376, val_loss: 2.4293
21: [0s / 0s], train_loss: 2.4995, val_loss: 2.4135
22: [0s / 0s], train_loss: 2.4014, val_loss: 2.4045
23: [0s / 0s], train_loss: 2.3508, val_loss: 2.3748
24: [0s / 0s], train_loss: 2.4521, val_loss: 2.3127
25: [0s / 0s], train_loss: 2.4929, val_loss: 2.2439
Please advise.
Thanks again.
You haven't set a seed, and neural networks are trained with a lot of randomness. Try calling set_seed first, then the results should stay the same. Also, there is no statistically meaningful difference between these values.
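For example, a minimal sketch on your dummy data d; 42 is an arbitrary seed, and set_seed is survivalmodels' helper, which per its docs seeds R, numpy, and torch in one call:

library(survivalmodels)

if (requireNamespaces("reticulate")) {
  set_seed(42)  # fix all sources of randomness before fitting
  fit <- deepsurv(data = d[1:213, ], epochs = 512L, shuffle = FALSE)
}

With the seed fixed, repeated fits should produce identical training and validation losses, and therefore identical predicted survival probabilities.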
Got it! Thanks for your patience.