coranholmes opened this issue 6 years ago
The data-loading API for the parameter search isn't that fleshed out. The dataset parameter you provided, 'whas', isn't understood by the script. You instead need to specify the location (inside the docker container) of the dataset file you want to load.
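For anyone hitting the same problem, here is a minimal sketch of preparing such a dataset file. It assumes the HDF5 layout used by the repo's example datasets (groups 'train'/'test', each containing 'x', 't', 'e'); the file name, paths, and random data are purely illustrative.

```python
# Sketch: building a dataset file the search script can load by path.
# Assumes the HDF5 layout of the repo's example datasets: groups 'train'/'test',
# each with 'x' (covariates), 't' (survival time), 'e' (event indicator).
import h5py
import numpy as np

n, n_features = 100, 11
x = np.random.randn(n, n_features).astype('float32')
t = np.random.exponential(10.0, n).astype('float32')
e = np.random.randint(0, 2, n).astype('int32')

with h5py.File('my_dataset.h5', 'w') as f:
    for split, idx in [('train', slice(0, 80)), ('test', slice(80, n))]:
        grp = f.create_group(split)
        grp.create_dataset('x', data=x[idx])
        grp.create_dataset('t', data=t[idx])
        grp.create_dataset('e', data=e[idx])

# Mount the directory containing my_dataset.h5 into the container and pass
# that path to the search script instead of a name like 'whas'.
```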
@jaredleekatzman hi, thank you for the reply. Now I can run the hyperparameter selection code without errors. I have another small question: for the example box_constraints.0.json, why do you set the learning rate to [-7, -3]? Shouldn't it be something like [0.0001, 1]?
The learning rate box constraint is actually on a log scale, so it is searching between 10^-7 and 10^-3.
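In other words, the constraint bounds the exponent, not the rate itself. A minimal sketch of the conversion (the variable names are illustrative):

```python
# The box constraint [-7, -3] is on the log10 of the learning rate:
# the search samples an exponent and the effective rate is 10 ** exponent.
import numpy as np

low_exp, high_exp = -7, -3                 # from box_constraints.0.json
exponent = np.random.uniform(low_exp, high_exp)
learning_rate = 10 ** exponent             # somewhere between 1e-7 and 1e-3
print(learning_rate)
```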
@jaredleekatzman thank you very much for your explanation. May I ask a further question: how should I interpret the results of the hyperparameter selection? I am running the hyperparameter selection on my own dataset and get the following result.
hp_search_1 | 2018-01-22 04:50:57,950 - __main__ - DEBUG - Optimal Parameters: {u'learning_rate': -1.1917578125000001, u'num_nodes': 29.041210937499997, u'num_layers': 2.2513671875, u'dropout': 0.133017578125, u'lr_decay': 0.00025443359375, u'momentum': 0.8799306640625, u'L2_reg': 1.1320117187499998}
hp_search_1 | 2018-01-22 04:50:57,950 - __main__ - DEBUG - Saving Call log...
hp_search_1 | OrderedDict([('optimum', **0.7068966357018306**), ...
So I set the hyperparameters for my dataset as follows:
{"L2_reg": 1.1320117187499998, "dropout": 0.133017578125, "learning_rate": 0.0643046216782325, "lr_decay": 0.00025443359375, "momentum": 0.8799306640625, "batch_norm": false, "activation": "selu", "standardize": true, "n_in": 11, "hidden_layers_sizes": [29,29]}
Then I ran the DeepSurv method and got the following results:
deepsurv_1 | Test metrics: {'c_index_bootstrap': {'confidence_interval': (0.6047381184204099, 0.6135943363104468), 'mean': 0.6091662273654284}, 'c_index': **0.6081473364476998**}
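As a rough sketch of how a bootstrapped c-index interval like the one above can be produced (the actual evaluation code in DeepSurv may differ; this just resamples the test set and recomputes the concordance index):

```python
import numpy as np
from lifelines.utils import concordance_index

def bootstrap_c_index(times, scores, events, n_boot=100, seed=0):
    """scores: predictions where larger values mean longer expected survival."""
    rng = np.random.RandomState(seed)
    stats = []
    n = len(times)
    for _ in range(n_boot):
        idx = rng.randint(0, n, n)   # resample subjects with replacement
        stats.append(concordance_index(times[idx], scores[idx], events[idx]))
    stats = np.asarray(stats)
    return stats.mean(), (np.percentile(stats, 2.5), np.percentile(stats, 97.5))
```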
The c_index is around 0.6, while in the hyperparameter selection the c-index is around 0.7. Why would that be?
@coranholmes are you Chinese? I would like to connect with you and discuss DeepSurv.
Do I have to use docker to do the random hyperparameter search, or is there a way to do it in a Jupyter notebook? Thanks!
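For what it's worth, a plain random search can also be written directly in a notebook without the docker/Optunity setup. A rough sketch, where `evaluate` is a hypothetical placeholder for training a model and returning a validation c-index:

```python
import numpy as np

rng = np.random.RandomState(0)

def sample_params():
    return {
        'learning_rate': 10 ** rng.uniform(-7, -3),  # log10-scale constraint
        'num_layers': rng.randint(1, 4),
        'num_nodes': rng.randint(10, 50),
        'dropout': rng.uniform(0.0, 0.5),
    }

def evaluate(params):
    # Placeholder: train a model with `params` and return its validation c-index.
    return rng.rand()

best_score, best_params = -np.inf, None
for _ in range(50):
    params = sample_params()
    score = evaluate(params)
    if score > best_score:
        best_score, best_params = score, params

print(best_score, best_params)
```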
Hello, excuse me. Can you teach me how to use docker to find the best hyperparameters? Thanks!
I tried to tune the hyperparameters for the WHAS dataset and got the following error:
My docker file is as follows:
Can anyone tell me why I get this error?