uzh-rpg / flightmare

An Open Flexible Quadrotor Simulator
https://uzh-rpg.github.io/flightmare/

ddc_challenge: Observations? #100

Open hai-h-nguyen opened 3 years ago

hai-h-nguyen commented 3 years ago

I know the observations are states (not images), but what is the format/meaning of those states? It seems that the QuadrotorEnv_v1 environment in Flightgym is hidden from us.

yun-long commented 3 years ago

Hi, you can find the detailed definition of the state in quadrotor_env.cpp:

https://github.com/uzh-rpg/flightmare/blob/ddc_challenge/flightlib/src/envs/quadrotor_env/quadrotor_env.cpp#L173-L183
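For illustration only (not part of the repo), here is a minimal Python sketch of how that observation could be unpacked, assuming the linked getObs() lays it out as position, ZYX Euler angles, linear velocity, and body rates (a 12-D vector); verify the indices against the branch you are actually using.

```python
import numpy as np

# Hedged sketch: split the 12-D state observation into named parts,
# assuming the layout [position(3), euler_zyx(3), lin_vel(3), ang_vel(3)]
# suggested by quadrotor_env.cpp. Check the linked source before relying
# on these indices.
def unpack_obs(obs: np.ndarray) -> dict:
    obs = np.asarray(obs).reshape(-1)
    assert obs.shape[0] >= 12, "expected at least a 12-D state observation"
    return {
        "position":  obs[0:3],   # world-frame position [x, y, z]
        "euler_zyx": obs[3:6],   # orientation as ZYX Euler angles
        "lin_vel":   obs[6:9],   # linear velocity
        "ang_vel":   obs[9:12],  # body rates
    }

# Example: obs would normally come from env.reset() / env.step() of QuadrotorEnv_v1.
print(unpack_obs(np.zeros(12)))
```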

hai-h-nguyen commented 3 years ago

I see. How about the objects' states? Also, even with images, how can the drone avoid objects that come from various angles and are not captured in the images?

yun-long commented 3 years ago

Indeed, the environment is partially observable. This is part of the challenge.

hai-h-nguyen commented 3 years ago

Do you think that is a bit unrealistic? Even humans need to rely on other types of information, such as sound or depth, to avoid objects, for instance when they come from behind.

jhurlbut commented 3 years ago

Are there rules about changing the sensors? For example, can we change the camera field of view or add more cameras to the simulation?

yun-long commented 3 years ago

@hai-h-nguyen It is true, but in our case the random dynamic object generator will not throw objects from behind. Still, when the drone turns, it might not see the object.

@jhurlbut For training, there are no restrictions; you can create as many cameras as you want. For evaluation, however, only one camera is allowed, and we will use the default observation for all participants.

hai-h-nguyen commented 3 years ago

@yun-long You mean the current RGB camera, right? Can we also use the depth from that default camera, or use segmentation?

lorenzoferrini commented 3 years ago

The camera currently mounted on the drone provides both RGB and depth images, but only the depth images are collected (sorry for the tricky name). Segmentation is allowed, but only at training time; the final algorithm has to work with depth and/or RGB. You can set the information you want to collect here, but be aware that the optical flow is still under development and does not work yet.
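As a rough illustration (not the actual config from the linked file), the environments are configured through a YAML file whose dump is passed to the environment constructor; the section and flag names below are assumptions, so use the real keys from the config linked above.

```python
import yaml  # PyYAML

# Hedged sketch: toggle which image types the simulated camera publishes.
# The section/flag names ("rgb_camera", "rgb", "depth", "segmentation",
# "optical_flow") are illustrative assumptions, not the repo's actual keys.
camera_cfg = {
    "rgb_camera": {
        "rgb": True,           # RGB stream
        "depth": True,         # depth stream (the one collected by default)
        "segmentation": True,  # allowed at training time only, per the thread
        "optical_flow": False, # still under development, per the thread
    }
}

# A full env config would merge this with the rest of the environment YAML
# before constructing the env, e.g.:
#   from flightgym import QuadrotorEnv_v1
#   env = QuadrotorEnv_v1(yaml.dump(full_cfg))
print(yaml.dump(camera_cfg))
```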

hai-h-nguyen commented 3 years ago

So how can I turn on the RGB images, and are they synchronized with the depth images you mentioned are turned on? Also, which states of the drone are we allowed to use?

hai-h-nguyen commented 3 years ago

@yun-long @lorenzoferrini Is the goal location part of the drone's observation?

antonilo commented 3 years ago

You are allowed to use the goal location and the entire state of the drone for training and evaluation. The final evaluation will be on depth images + states.
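For illustration, a minimal sketch of one way to assemble a policy observation from the drone state and the goal location, both of which may be used per the answer above. The 12-D state layout matches the earlier sketch, and the relative-goal encoding is just one common choice, not the challenge's required format.

```python
import numpy as np

# Hedged sketch: concatenate the (assumed) 12-D drone state with the goal
# position expressed relative to the drone. This is an illustrative design
# choice, not the challenge's prescribed observation.
def build_policy_obs(state12: np.ndarray, goal_xyz: np.ndarray) -> np.ndarray:
    state12 = np.asarray(state12).reshape(-1)
    goal_xyz = np.asarray(goal_xyz).reshape(-1)
    rel_goal = goal_xyz - state12[0:3]          # goal relative to drone position
    return np.concatenate([state12, rel_goal])  # 15-D observation

obs = build_policy_obs(np.zeros(12), np.array([10.0, 0.0, 3.0]))
print(obs.shape)  # (15,)
```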