PacktPublishing / Hands-On-Intelligent-Agents-with-OpenAI-Gym

Code for Hands On Intelligent Agents with OpenAI Gym book to get started and learn to build deep reinforcement learning agents using PyTorch
https://www.packtpub.com/big-data-and-business-intelligence/hands-intelligent-agents-openai-gym
MIT License

Question about "import carla_gym" (register new env) #4

Closed AliBaheri closed 5 years ago

AliBaheri commented 5 years ago

Hi, when I run carla_env.py it works great. However, I am not able to run import carla_gym in Python.

There is a sentence in the book which I have doubts about: You can then create new custom CARLA environments for each of those scenarios, which you can use with the usual gym.make(...) command after you have registered the custom environment, for example, gym.make("Carla-v0").

In particular, my question is how I can import it in Python.

What do you mean by registered? Would you elaborate on this point?

Thanks.

AliBaheri commented 5 years ago


When I run env.reset(), the code freezes with the following message:

ERROR: tcpserver 0 : error reading message: Operation canceled
ERROR: tcpserver 0 : error writing message: Operation canceled

It also runs very slowly at first. I guess it has trouble connecting to the server.

Is there any solution for that?

praveen-palanisamy commented 5 years ago

What do you mean by registered? Would you elaborate on this point?

When you import the carla_gym module in this repository, it will automatically register the Carla-v0 environment with the Gym registry so that you can create an environment instance just like any other OpenAI Gym environment, specifically using this line of code: env = gym.make("Carla-v0")
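For reference, here is a minimal sketch of what that registration amounts to. The real register(...) call lives in the carla_gym package's __init__; the entry_point string below is an illustrative assumption, not the exact code from the repository:

# Sketch of the registration that happens when carla_gym is imported.
# The entry_point shown here is an assumed module path, for illustration only.
from gym.envs.registration import register

register(
    id="Carla-v0",
    entry_point="carla_gym.envs:CarlaEnv",
)

Once that import has run, the environment can be created like any other:

import carla_gym  # triggers the register(...) call above as a side effect
import gym

env = gym.make("Carla-v0")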

For your second question as to why it is slow at first: env.reset() has to launch the CARLA server and then the client. Starting up the CARLA server takes some time because it is a relatively complex simulator (using UE4). The wait time is for it to load the assets and the map, spawn the actors, etc. The subsequent calls to env.step(action) should return pretty fast. Please note that it is advisable to run the CARLA server on a machine with a GPU.
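As a rough illustration of the expected behavior (a sketch, assuming Carla-v0 has been registered by importing carla_gym as above):

import time
import gym
import carla_gym  # registers Carla-v0 on import

env = gym.make("Carla-v0")

t0 = time.time()
obs = env.reset()  # slow on the first call: launches the CARLA (UE4) server and connects the client
print("reset took {:.1f}s".format(time.time() - t0))

for _ in range(10):
    t0 = time.time()
    obs, reward, done, info = env.step(env.action_space.sample())
    print("step took {:.3f}s".format(time.time() - t0))  # subsequent steps should be much faster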

The error you got appears when the client tries to connect while the server is not ready yet. It may happen during the initial startup phase, but it is not fatal; the client may continue and run fine on the next try.

Although this is an improvement that needs to be made on the CARLA side, the 0.8.x version is still usable. For example, when you get to Chapter 8, you will be able to run multiple (9 or even more, depending on your hardware) parallel CARLA simulator instances, as shown below: [animation: HOIAWOG A3C Carla x9]

AliBaheri commented 5 years ago

Thanks for your clear response.

As a matter of fact, after several tries I am facing a super slow connection to CARLA. Indeed, I am not able to run env.reset() at all. So, per the previous post, you would suggest using a GPU or running on AWS or something similar. My concern is that this problem comes from something else, as I am running this on a machine with a good CPU.

praveen-palanisamy commented 5 years ago

My concern is that this problem comes from something else, as I am running this on a machine with a good CPU.

Since you mentioned in your first issue description that you were able to run carla_env.py and it worked well, I guess you have a working setup. The same env.reset() method is called here: https://github.com/PacktPublishing/Hands-On-Intelligent-Agents-with-OpenAI-Gym/blob/0609f2f0ea6b4cdab6a15cc824b2b84d95cfc2c0/ch7/carla-gym/carla_gym/envs/carla_env.py#L538

Yes, it is definitely better to run CARLA on a machine that has a discrete GPU. You can get the rendering to work on a CPU-only machine, but the frame rate is going to be very low. Alternatively, on CPU-only machines, you could run CARLA in "headless" mode, but then the scene will not be rendered to a display that you can see.

AliBaheri commented 5 years ago

Thanks for your response. Unfortunately, I still have that problem and the connection to the server is super slow. Would you please clarify what you mean by "headless" mode? Do you mean collecting observations, actions, rewards, etc. without actually seeing the car interact with the environment? If so, how can I do that? Is there a line in the code which should be commented out?

PS1: I am surprised that when I run carla_env.py from the command line it goes pretty fast, but when it comes to Python everything almost freezes.

PS2: BTW, what do you mean by "discrete" GPU?

Update: I tried the example presented in Chapter 8 where CARLA is tested with DDPG in 9 batches. It also runs smoothly on my machine.

Thanks.

praveen-palanisamy commented 5 years ago

Would you please clarify what you mean by "headless" mode?

Yes. In "headless" mode, there won't be any graphical user interface for the application. So, the UE4 rendering of the CARLA environment (the window that shows the car driving) will not be displayed. But, the agent will get the observations (camera frames), rewards etc from the environment. There are a couple (xvfb/vgl/vnc) of ways to do that but they involve several steps to configure the system correctly which you can avoid if you are not on a real "headless" system like a remote AWS/Azure/GCP node.

One quick (but not recommended) way to do that would be to change the following lines: https://github.com/PacktPublishing/Hands-On-Intelligent-Agents-with-OpenAI-Gym/blob/0609f2f0ea6b4cdab6a15cc824b2b84d95cfc2c0/ch7/carla-gym/carla_gym/envs/carla_env.py#L172-L177

to

# Launch the CARLA server off-screen (no rendering window) by setting
# SDL_VIDEODRIVER=offscreen; SDL_HINT_CUDA_DEVICE selects the GPU (0 here).
self.server_process = subprocess.Popen(
    ("SDL_VIDEODRIVER=offscreen SDL_HINT_CUDA_DEVICE={} {} " +
     self.config["server_map"] +
     " -windowed -ResX=400 -ResY=300 -carla-server"
     " -carla-world-port={}").format(0, SERVER_BINARY, self.server_port),
    shell=True, preexec_fn=os.setsid, stdout=open(os.devnull, "w"))
praveen-palanisamy commented 5 years ago

PS2: BTW, what do you mean by "discrete" GPU?

I meant a dedicated GPU, not one that is integrated with the CPU die, as those are usually less powerful/compute-capable. Also, it is better to use a desktop-grade GPU compared to a mobile GPU (found in laptops).

praveen-palanisamy commented 5 years ago

PS1: I am surprised that when I run carla_env.py from the command line it goes pretty fast, but when it comes to Python everything almost freezes.

Update: I tried the example presented in Chapter 8 where CARLA is tested with DDPG in 9 batches. It also runs smoothly on my machine.

Good to hear that you have been able to run 9 parallel instances! So, there definitely doesn't seem to be any issue with your setup or the code. You can probably skip the "headless" mode and use either the carla_env.py script or the code in Chapter 8.

AliBaheri commented 5 years ago

Thanks for your clear response. I still have a question:

In standard Gym environments, for example CarRacing-v0, when I run:

action = env.action_space.sample()

I get:

array([0.08976637, 0.4236548 , 0.6458941 ], dtype=float32), which totally makes sense because there are 3 actions for this environment.

However, for CARLA, the output is just a random real number, which does not make any sense as the number of actions in CARLA is 9. I am a bit confused here, as I am able to run all the code in Ch 8. Probably, since the command env = gym.make("Carla-v0") cannot be completed correctly, I get an unreasonable single value for the action(s). I am not really sure how to proceed here. My first goal is to collect observations and actions from several rollouts in carla-gym using some random policy, for another purpose. In general, how can I get observations, actions, and measurements at each step?

Ch 7 says one can get those values by:

measurements, sensor_data = client.read_data()

But I am not sure where this has to be run?

Thanks for any suggestion.

AliBaheri commented 5 years ago

Thanks, I solved that issue, it was my bad.

AliBaheri commented 5 years ago

One more question: I want to create a dataset in CARLA by rolling out the environment (for example, 10,000 rollouts) with some random policy. It is important for me to have a diverse range of actions and a diverse range of observations. Would you please give some hints on how to modify carla_gym.py?

praveen-palanisamy commented 5 years ago

Thanks, I solved that issue, it was my bad.

Glad that you got it solved!

Just to confirm: you should be able to use env.action_space.sample() for the Carla environment, and it should print sensible integer values in the range [0, 9) corresponding to the discrete actions.
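As a quick check (a sketch, assuming the environment has been registered by importing carla_gym as discussed above), something like the following should show a Discrete(9) action space and integer samples, and also collect observations and actions from a short random rollout:

import gym
import carla_gym  # registers Carla-v0 on import

env = gym.make("Carla-v0")
print(env.action_space)  # expected: Discrete(9) with the default discrete-action config

obs = env.reset()
observations, actions = [], []
for _ in range(100):                        # short random rollout
    action = env.action_space.sample()      # integer in [0, 9)
    obs, reward, done, info = env.step(action)
    observations.append(obs)
    actions.append(action)
    if done:
        obs = env.reset()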

praveen-palanisamy commented 5 years ago

For your next question regarding sample rollouts with a policy, can you please open a separate issue so that it is easier to track and better categorized/organized? Thanks!

AliBaheri commented 5 years ago

OK, will do that.

AliBaheri commented 5 years ago

Thanks for your great work.

Just a question about running CARLA on AWS. In fact, I am able to run the example in Ch 8 on AWS, but I am a bit confused about how we can run CARLA on the cloud, as the CARLA and UnrealEngine folders exist on the local machine. Would it be possible to provide some guidance on running CARLA on AWS EC2?