Hi, thanks for your work. I have a question and would like your advice.
In these two experiments, the goal is visible in the original image. But what if it is not? I am trying to use the World Models approach to solve a robot navigation task. The state is {image, relative_position}, where the image lets the robot avoid obstacles and relative_position tells it where the goal is. I use the VAE to compress the captured image and the MDN-RNN to remember the environment. The outputs of the VAE and MDN-RNN are then concatenated with relative_position to form the controller's input. But it doesn't work, and trying different controller architectures (I use a DNN as the controller) doesn't help either. Any advice?
Details: The image is 64×48×3 and the VAE latent dimension is 32. The training result looks good:
The RNN has 256 units, and its training result also looks good:
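For clarity, here is a minimal NumPy sketch of the controller input I described above. The dimensions (32-dim latent, 256-dim hidden state) match my setup; the 2-dim relative position, the 2-dim action space, and the single linear layer are assumptions for illustration, following the controller style of the original World Models paper:

```python
import numpy as np

# Dimensions from my setup (POS_DIM and the action size are assumptions)
Z_DIM = 32    # VAE latent size
H_DIM = 256   # MDN-RNN hidden units
POS_DIM = 2   # relative position of the goal, e.g. (dx, dy)

def controller_input(z, h, rel_pos):
    """Concatenate VAE latent, RNN hidden state, and relative goal position."""
    return np.concatenate([z, h, rel_pos])

def controller_action(x, W, b):
    """A single linear controller layer with tanh squashing (illustrative)."""
    return np.tanh(W @ x + b)

rng = np.random.default_rng(0)
z = rng.normal(size=Z_DIM)        # stands in for the VAE encoding
h = rng.normal(size=H_DIM)        # stands in for the MDN-RNN hidden state
rel_pos = np.array([1.5, -0.3])   # stands in for the goal's relative position

x = controller_input(z, h, rel_pos)      # 32 + 256 + 2 = 290 inputs
W = rng.normal(size=(2, x.size)) * 0.01  # 2-dim action space assumed
b = np.zeros(2)
a = controller_action(x, W, b)
print(x.shape, a.shape)  # (290,) (2,)
```

One thing I am unsure about is whether the raw relative position should be normalized to a scale comparable to z and h before concatenation.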
I would suggest looking into papers that are better at learning representations of scenes. Have you seen this recent paper? https://arxiv.org/abs/1907.13052