Closed rmdwjdrhkrkatk closed 5 years ago
This is a reinforcement learning algorithm, and the acquired images are passed as states/observations to the DQN through the environment.
Hence you have
action = agent.act(current_state)
and
agent.observe(current_state, action, reward, done)
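
To make the act/observe interaction concrete, here is a minimal sketch of how those two calls typically fit together in a DQN loop. The `DQNAgentSketch` class is hypothetical (not from this repo); a real DQN would back `act()` with a Q-network and train on the transitions stored by `observe()`:

```python
import random
from collections import deque

class DQNAgentSketch:
    """Hypothetical stand-in for the agent interface above.
    A real DQN would use a neural network to estimate Q-values."""
    def __init__(self, n_actions, epsilon=0.1):
        self.n_actions = n_actions
        self.epsilon = epsilon
        self.memory = deque(maxlen=10000)  # replay buffer of transitions

    def act(self, state):
        # epsilon-greedy: explore with probability epsilon,
        # otherwise pick the greedy action (stubbed as action 0 here)
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return 0

    def observe(self, state, action, reward, done):
        # store the transition; training would sample from this buffer
        self.memory.append((state, action, reward, done))

agent = DQNAgentSketch(n_actions=4)
state = [[0.0] * 4] * 4          # placeholder for a depth image
for step in range(5):
    action = agent.act(state)    # agent picks an action from the state
    reward, done = 0.0, False    # these would come from the environment
    agent.observe(state, action, reward, done)

print(len(agent.memory))  # → 5
```

The key point is that the image is the state: without it, `act()` has nothing to decide from.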
I highly recommend that you read more about RL environments and OpenAI Gym. Even though the latter isn't used here, Gym has great resources that will help you.
If you just need to move the drone and play around, check out other simple examples like orbit.py. RL requires a solid understanding of AI.
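
To illustrate what an RL environment looks like, here is a toy environment following the Gym-style `reset()`/`step()` interface. `ToyDroneEnv` is purely illustrative (not part of AirSim or this repo): the observation is a single position, and an agent would choose actions from it just as the DQN chooses from the depth image:

```python
class ToyDroneEnv:
    """Toy environment with the Gym-style interface (illustrative only).
    State is a 1-D position; the episode ends when position reaches 10."""
    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        # action 1 moves forward, anything else moves back
        self.pos += 1 if action == 1 else -1
        done = self.pos >= 10
        reward = 1.0 if done else -0.01  # small step cost, bonus at the goal
        return self.pos, reward, done, {}  # observation, reward, done, info

env = ToyDroneEnv()
state = env.reset()
done = False
steps = 0
while not done:
    action = 1  # a real agent would choose: action = agent.act(state)
    state, reward, done, info = env.step(action)
    steps += 1

print(steps)  # → 10
```

The loop is the same shape as the DQN example: observe a state, pick an action, receive reward/done from the environment.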
Oh, now I realize my mistake. Thank you for your explanation!
You're welcome. Please don't forget to close the issue.
The example code DQNDrone.py has the following lines:

    responses = client.simGetImages([ImageRequest(3, AirSimImageType.DepthPerspective, True, False)])
    current_state = transform_input(responses)
When I remove those lines, the drone does not show any movement. The awkward thing is that the image does not seem to be used by the algorithm.
Why are those lines necessary for the example code? If possible, I would like to get rid of them, because those functions take too much time.