johnyboyoh opened 7 years ago
Did not try to train it. This thread is about the steering model, not the generative model.

On Mar 1, 2017 3:59 PM, "zhaozhao" notifications@github.com wrote:
Hello, I've run into some difficulties: I can't train the train_generative_model.py autoencoder successfully. I want to know how you trained the model. Do I need to change the code? Thanks very much.
— You are receiving this because you authored the thread. Reply to this email directly or view it on GitHub: https://github.com/commaai/research/issues/42#issuecomment-283346867
Hi, I know I'm a bit late, but I have the same problem. Did you get any further on this? I'd really appreciate any suggestions.
Hello there, any word on this? I'm having the same problem. I tried tweaking the model and nothing worked: the validation loss starts a lot higher than the training loss for some reason and does not go down.
Sorry for the late reply.

The network is probably working as intended (I checked it with my professor), but my guess is that the data is corrupt. Maybe you could write a script that previews the images before they are fed into the network?
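A minimal sketch of such a preview/sanity script, assuming the frames arrive as numpy batches (the function name, value range, and shapes here are illustrative assumptions, not the repo's actual loader):

```python
import numpy as np

def sanity_check(batch, lo=0.0, hi=255.0):
    """Return a list of problems found in an image batch; empty means it looks sane."""
    problems = []
    if np.isnan(batch).any():
        problems.append("contains NaN")
    if np.isinf(batch).any():
        problems.append("contains Inf")
    elif batch.min() < lo or batch.max() > hi:
        problems.append("pixel values outside [%g, %g]" % (lo, hi))
    if batch.std() == 0:
        problems.append("zero variance (blank frames)")
    return problems

# Optional visual preview (requires matplotlib):
# import matplotlib.pyplot as plt
# plt.imshow(batch[0].astype(np.uint8)); plt.show()
```

Running this over every batch before it reaches the network would quickly confirm or rule out corrupt data.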
Thanks for replying.

So you are still facing the same problem? I also have a feeling that there's something wrong with the validation data, since its loss is so much higher.
I actually gave up. I can't remember the exact losses, but you might be right. Maybe build your own dataset from KITTI?
I will look into that, thanks a lot.
By the way @MahmoudKhaledAli, since this is a regression problem with MSE as the loss function, it makes more sense to use a linear activation in the last layer.
Just to check, try removing the last model.add(ELU()) and replacing model.add(Dense(512)) with model.add(Dense(512, activation='linear')).
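To see why an ELU on the output is a poor fit for an MSE regression target, here is a small numpy illustration (not the repo's code, just the activation math): ELU saturates at -alpha for negative inputs, so with alpha=1 the network can never output a value below -1, while a linear output passes everything through.

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU: identity for x > 0, alpha * (exp(x) - 1) for x <= 0
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def linear(x):
    return x

pre_activations = np.array([-5.0, -2.0, 0.5, 3.0])

# Linear passes everything through; ELU compresses the negative side
# toward -alpha, so regression targets below -1 become unreachable
# and MSE stays high on those samples.
print(linear(pre_activations))   # [-5.  -2.   0.5  3. ]
print(elu(pre_activations))      # roughly [-0.993, -0.865, 0.5, 3.0]
```

A linear (or None) activation on the final Dense layer avoids this one-sided clipping.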
I already tried setting the activation to None, but I'll definitely give that a go. Thanks for your help.
In view_steering_model.py I hit this error (ValueError: bad marshal data (unknown type code)) when trying to execute it. Here is the traceback from the cmd prompt:
Traceback (most recent call last):
  File "C:\Users\lenovo\Anaconda3\lib\site-packages\keras\utils\generic_utils.py", line 229, in func_load
    raw_code = codecs.decode(code.encode('ascii'), 'base64')
UnicodeEncodeError: 'ascii' codec can't encode character '\xe0' in position 46: ordinal not in range(128)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "view_steering_model.py", line 94, in <module>
    model = model_from_json(json.load(jfile))
  File "C:\Users\lenovo\Anaconda3\lib\site-packages\keras\models.py", line 349, in model_from_json
    return layer_module.deserialize(config, custom_objects=custom_objects)
  File "C:\Users\lenovo\Anaconda3\lib\site-packages\keras\layers\__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "C:\Users\lenovo\Anaconda3\lib\site-packages\keras\utils\generic_utils.py", line 144, in deserialize_keras_object
    list(custom_objects.items())))
  File "C:\Users\lenovo\Anaconda3\lib\site-packages\keras\models.py", line 1349, in from_config
    layer = layer_module.deserialize(conf, custom_objects=custom_objects)
  File "C:\Users\lenovo\Anaconda3\lib\site-packages\keras\layers\__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "C:\Users\lenovo\Anaconda3\lib\site-packages\keras\utils\generic_utils.py", line 144, in deserialize_keras_object
    list(custom_objects.items())))
  File "C:\Users\lenovo\Anaconda3\lib\site-packages\keras\layers\core.py", line 711, in from_config
    function = func_load(config['function'], globs=globs)
  File "C:\Users\lenovo\Anaconda3\lib\site-packages\keras\utils\generic_utils.py", line 234, in func_load
    code = marshal.loads(raw_code)
ValueError: bad marshal data (unknown type code)
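For context on that last line: Keras serializes a Lambda layer's Python function into the model JSON using the marshal module, and marshal bytecode is only valid for the exact interpreter version that produced it. Loading a model saved under a different Python version (these comma models date from the Python 2 era) can therefore fail with exactly this ValueError, which is why rebuilding the graph in code and loading only the weights works. A small stdlib-only illustration of the failure mode (the byte string is a made-up stand-in for foreign bytecode):

```python
import marshal

# marshal round-trips a code object fine within one interpreter version...
code = compile("x * 2", "<lambda>", "eval")
restored = marshal.loads(marshal.dumps(code))
assert eval(restored, {"x": 21}) == 42

# ...but bytes it does not recognize (as produced by a different Python
# version) raise the same error seen in the traceback above.
try:
    marshal.loads(b"\x01\x02\x03")
except ValueError as e:
    print(e)  # e.g. "bad marshal data (unknown type code)"
```

So the practical workarounds are: run the script under the Python version the model was saved with, or reconstruct the architecture in code and load just the weights.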
Yeah, I had the same problem, so I went ahead and re-wrote the graph myself and then loaded the weights from the .json file; however, the MSE is still very high.
Hi, I was able to train the model using the given code and data. The loss on the training data converges to ~350, but on the dev data the loss remains high (around 3500). This suggests the model is overfitting and failing to generalize. Am I doing something wrong, or are these the expected results with your code and data?
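Before concluding it is pure overfitting, one cheap thing to rule out is a preprocessing mismatch between the training and validation pipelines, since that alone can produce an order-of-magnitude loss gap. A numpy toy example (made-up numbers and a made-up perfect "model", not the comma data) showing the effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# True relationship the "model" has learned perfectly: y = 2 * x
x_train = rng.normal(size=1000)
y_train = 2.0 * x_train

def model(x):
    return 2.0 * x

# Validation features accidentally on a different scale (e.g. normalized
# in one pipeline but not the other), while labels stay on the true scale:
x_val_raw = rng.normal(size=1000)
x_val = 3.0 * x_val_raw   # mismatched preprocessing
y_val = 2.0 * x_val_raw   # labels unchanged

train_mse = np.mean((model(x_train) - y_train) ** 2)
val_mse = np.mean((model(x_val) - y_val) ** 2)
print(train_mse)  # 0.0 -- the model is exact on matched data
print(val_mse)    # large, caused purely by the input-scale mismatch
```

If both pipelines are verified identical and the gap persists, then regularization, augmentation, or more data are the usual next steps.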