duckietown / gym-duckietown

Self-driving car simulator for the Duckietown universe
http://duckietown.org

Image Distortion #98

Closed bhairavmehta95 closed 5 years ago

bhairavmehta95 commented 5 years ago

Fixing the manual scripts (ty @fgolemo)

From here on out, this branch is concerned with adding the image distortion changes (the change in image dimensions is already done).

bhairavmehta95 commented 5 years ago

Adding distortion as the default in the simulator, and also providing an undistortion wrapper (which depends on cv2).

The DuckietownEnv now takes a distortion boolean parameter (default: True). The UndistortWrapper can be used like this:

env = UndistortWrapper(DuckietownEnv(
    map_name=args.map_name,
    draw_curve=args.draw_curve,
    draw_bbox=args.draw_bbox,
    domain_rand=args.domain_rand,
    frame_skip=args.frame_skip,
    distortion=True,
))

Some artifacts from rectification remain around the borders, but there's not much we can do about that. The UndistortWrapper makes sure distortion is True; otherwise it raises an error:

assert env.unwrapped.distortion, "Distortion is false, no need for this wrapper"
AssertionError: Distortion is false, no need for this wrapper

I tried to separate out as much of the image distortion code as possible; it mainly lives inside distortion.py. The values used are the ones from the default calibration file in the Duckietown Software repo.
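For context, the lens model that calibration files of this kind describe is the plumb-bob (Brown-Conrady) model. A minimal numpy sketch of the forward (distorting) direction, using OpenCV's coefficient naming convention, looks roughly like this; the coefficient values below are illustrative, not the ones from the actual calibration file:

```python
import numpy as np

def distort_points(pts, k1, k2, p1, p2, k3=0.0):
    """Apply plumb-bob distortion to normalized image coordinates (Nx2).

    k1, k2, k3 are radial coefficients; p1, p2 are tangential
    (same convention as OpenCV's distortion coefficient vector).
    """
    x, y = pts[:, 0], pts[:, 1]
    r2 = x ** 2 + y ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x ** 2)
    y_d = y * radial + p1 * (r2 + 2 * y ** 2) + 2 * p2 * x * y
    return np.stack([x_d, y_d], axis=1)

pts = np.array([[0.0, 0.0], [0.3, 0.2]])
identity = distort_points(pts, 0.0, 0.0, 0.0, 0.0)  # zero coefficients: no-op
barrel = distort_points(pts, -0.3, 0.0, 0.0, 0.0)   # negative k1 pulls points inward
```

Undistortion (what the wrapper does) is the inverse of this mapping; note how the image center is a fixed point of the model, which is why the worst artifacts show up at the borders.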

Here's what it looks like:

Distorted: [screenshot: distorted_screenshot, 16 Oct 2018]

Rectified: [screenshot: rerectified_screenshot, 16 Oct 2018]

@maximecb @fgolemo I was hoping for some input on one issue:

Edit: never mind, there's an easy solution to this. The main issue I'm facing is that our default image size has been 160x120, but now that we distort the image by default, the image becomes uninterpretable if you keep 160x120 and forget to turn off the distortion with the wrapper. I think the only way to solve this is by making the default 640x480, but let me know your thoughts (hopefully I explained it well enough).

@AndreaCensi @liampaull please play around with ./manual_control.py and make sure this looks right before we merge.

maximecb commented 5 years ago

I don't really know how the distortion works, so I can't really comment on that.

Making the rendered image larger will make the simulator a bit slower, but it might not matter if the bottleneck is in the NN training.

Code wise, I would put the distortion in a wrapper too, seems that would be more symmetric?

liampaull commented 5 years ago

just discussed this with @bhairavmehta95; my feeling is that the field of view of the rendered image needs to be increased to match the fisheye (160 deg) before the distortion is applied.
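The pinhole relation between horizontal field of view and focal length makes the point concrete: widening the FOV toward the fisheye's 160 deg means a much shorter focal length for the same image width. A quick sketch (the function name and numbers are illustrative, not from the codebase):

```python
import math

def focal_from_fov(width_px, fov_deg):
    # Pinhole model: f = (W / 2) / tan(FOV / 2)
    return (width_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

narrow = focal_from_fov(640, 90)   # 320.0 px
wide = focal_from_fov(640, 160)    # much shorter focal length
```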

liampaull commented 5 years ago

I think wherever that is in the code must be so old that it's from before we started using the fisheye???

bhairavmehta95 commented 5 years ago

[image: simulator output]

This is where we're at now with the adjusted camera parameters. Distortion at the absolute edges is bad, but I'm not sure how much better we can do.

Below is a reference: [image: real camera view]

@maximecb

Making the rendered image larger will make the simulator a bit slower, but it might not matter if the bottleneck is in the NN training.

Agreed, but it makes it a lot slower. I just don't think we can get around this if we want the distortion.

Code wise, I would put the distortion in a wrapper too, seems that would be more symmetric?

Originally, I had exactly that, but actually rendering the distorted image (i.e. for a human in something like manual_control) becomes difficult due to the way the render function works.

I figured that it wouldn't be good if a human used the manual_control.py as a test, didn't see a distortion, and then trained an agent with the distorted image. As a compromise, I just put it inside the Simulator class, but let me know if I am missing an obvious solution.

maximecb commented 5 years ago

Agreed, but it makes it a lot slower. I just don't think we can get around this if we want the distortion.

One thing you can do to improve performance is reduce the number of samples, from 16 to 4 or even to 1: https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/simulator.py#L233

I figured that it wouldn't be good if a human used the manual_control.py as a test, didn't see a distortion, and then trained an agent with the distorted image. As a compromise, I just put it inside the Simulator class, but let me know if I am missing an obvious solution.

Obvious solution is to make manual_control use the distortion wrapper too, but it's your call.

bhairavmehta95 commented 5 years ago

Right, but render takes the image (which in our case would be undistorted):

https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/simulator.py#L1357

And then executes the OpenGL instructions, whereas a gym.ObservationWrapper would only distort the observation on step() (so the agent would see the right image, but a human just playing around with the simulator wouldn't).
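A stripped-down sketch of that asymmetry (no gym dependency; FakeEnv and the string observations are stand-ins for the real simulator and its frames):

```python
# Minimal sketch of why an ObservationWrapper only affects step()
# observations and not what render() shows to a human.
class FakeEnv:
    def step(self, action):
        obs = "raw"          # stands in for the undistorted frame
        return obs, 0.0, False, {}

    def render(self):
        return "raw"         # render draws the simulator's own frame

class DistortObservationWrapper:
    """Mimics gym.ObservationWrapper: transforms only the observation."""
    def __init__(self, env):
        self.env = env

    def observation(self, obs):
        return "distorted(" + obs + ")"   # stands in for the actual distortion

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self.observation(obs), reward, done, info

    def render(self):
        return self.env.render()          # untouched: the human sees the raw frame

env = DistortObservationWrapper(FakeEnv())
obs, _, _, _ = env.step(None)
print(obs)           # the agent sees the distorted frame
print(env.render())  # but render still shows the raw one
```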

On the second note, I will try that. Thanks!