Closed lcipolina closed 4 years ago
Hi @lcipolina ! Thank you for catching the typos and the error in the Agent Interface example. I have updated the README with the fixes.
After the fix, I copied the Agent Interface example into example.py
and ran it. It runs as expected and produces the output shown below:
Could not connect to Carla server because: rpc::rpc_error during call in function version
It looks like you may have forgotten to set up CARLA or set the CARLA_SERVER
environment variable as mentioned in the Getting Started section.
If you already have CARLA binaries downloaded and extracted to your computer, please set the CARLA_SERVER
env var. One way of doing this is shown below:
echo "export CARLA_SERVER=${HOME}/software/CARLA_0.9.4/CarlaUE4.sh" >> ~/.bashrc
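To sanity-check the setup before running anything, a small sketch along these lines can verify that the variable is set and points to an existing file (the helper name is mine; macad-gym performs its own check internally):

```python
import os

def carla_server_configured():
    """Return True if CARLA_SERVER is set and points to an existing file.

    This is just a quick sketch around the CARLA_SERVER convention used in
    the README; it does not launch or talk to the simulator.
    """
    path = os.environ.get("CARLA_SERVER")
    return path is not None and os.path.isfile(path)

if __name__ == "__main__":
    if not carla_server_configured():
        print("CARLA_SERVER is unset or does not point to CarlaUE4.sh")
```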
Thank you very much for this. I was able to install CARLA's latest version, export the path to "CarlaUE4.sh", and then run the test script from the command line.
I am using TeamViewer (not SSH) to connect to my lab's computer, which has 40 GPUs.
I have encountered 2 problems: one I believe comes from the remote connection (not sure), and the second one is maybe some variable that I need to set up in CARLA (the warning).
But basically, I am not able to see the cars moving; I just get the screen below and that's it.
Thank you very much
Glad that you have got it running.
That's a pretty good machine you have got with 40 GPUs! :slightly_smiling_face:
Does the macad-test.py
script that you are using hang after that?
Actually no, I am able to use the WASD keys, but I see no cars.
I guess the second error has to do with this: https://github.com/carla-simulator/carla/issues/1942 https://carla.readthedocs.io/en/latest/adv_synchrony_timestep/
I will try tomorrow to fix it.
Okay then the scenario/training-env is not getting loaded. Which MACAD-Gym environment are you using?
Is macad-test.py
the same code as in the Agent example?
And yes, the warning message w.r.t the synchronous mode and fixed_delta_seconds
is noted. It was added in one of the recent CARLA versions and I can push an update for taking care of that.
Also, could you try with CARLA 0.9.4? This version was extensively tested and is the one used in the NeurIPS 2019 Autonomous Driving Workshop paper to get the results you were interested in. Once this works, we can fix the other issue to run with the latest CARLA version.
Hi yes, that's the code, the same as the Agent example.
The link for CARLA 0.9.4 posted here on GitHub is not downloading; not sure why. Maybe it's too big or something. I will try with another downloader.
Alright.
The link to CARLA 0.9.4 on this repo should be the same as the one on CARLA's release page. Once you open this Google Drive link, there should be a download button. Did the download from Google Drive time out, or did you run into some other issue? You can try downloading from the CARLA repo's release page for 0.9.4 here
Thank you for reporting the warning w.r.t synchronous mode with variable time step. I merged #8 with the update to use fixed time for stepping which should take care of that with recent versions of CARLA.
Hi, thanks for the amendment. I downloaded the CARLA 0.9.4 file on a Windows machine using Chrome and it worked. Then I transferred it over to the lab's machine. I also did a git pull, and the warning on the synchronous mode got resolved.
Now to fix this problem over TeamViewer: sudo: no tty present and no askpass program specified
I edited: /etc/sudoers
Good to see the progress and glad that you have fixed the issue with sudo password entry as well.
Okay now the MACAD-Gym environment is loading! :tada:
This time the GUI opens, and it moves (actually too much) when I move the cursor, to the point that I can't see anything.
This is because, the CARLA UE4 window captures your mouse movements. You can prevent that by pressing the tilde ~
key when the CARLA window is active.
Could not connect to Carla server because: rpc::rpc_error during call in function version
The above error is likely because there is a mismatch between your CARLA server version and the CARLA python client library version.
To solve it, you can do: pip install carla==0.9.4
and then try running the script again.
Thank you very much Mr Palanisamy. I guess I have to uninstall my CARLA 0.9.9, as the issue continues after pip install carla==0.9.4?
If the same issue persists then yes, somehow there's still a version mismatch which likely means your client is using carla 0.9.9 while the server version is 0.9.4.
You can run this command: python -m pip freeze | grep carla
to see which version of the carla Python client library is active in your Python environment.
Also, doing this might help: python -m pip uninstall carla
and then python -m pip install carla==0.9.4
Thank you very much. I was now able to start the environment and run the script. I have some questions if you don't mind.
What is the action space of the agents? I am just guessing that the action space is the coordinates to move on? Is speed included?
What is the reward of the agents? I understand from the paper that this environment is a single-agent one and each agent maximizes its own reward. Is the objective just to reach a certain point? Are collisions penalized in the algo?
When running the sample script, without making any changes, I see that the cars collide at the intersection. I wonder what happens there.
How can we plot the rewards as in your paper? Thank you very much.
Nice to see that you have it working well on your machine!
- What is the action space of the agents? I am just guessing that the action space is the coordinates to move on? Is speed included?
You can check the action space of any MACAD-Gym environment using the action_space
attribute like in an OpenAI Gym environment. The following is the minimal code:
import gym
import macad_gym
env_name = "HomoNcomIndePOIntrxMASS3CTWN3-v0"
env = gym.make(env_name)
env.action_space
The action space for the HomoNcomIndePOIntrxMASS3CTWN3-v0
will be:
Dict(car1:Discrete(9), car2:Discrete(9), car3:Discrete(9))
This lists the action space for each agent, since it is a multi-agent environment. For each individual car, the action ID corresponds to the following map:
DISCRETE_ACTIONS = {
# coast
0: [0.0, 0.0],
# turn left
1: [0.0, -0.5],
# turn right
2: [0.0, 0.5],
# forward
3: [1.0, 0.0],
# brake
4: [-0.5, 0.0],
# forward left
5: [0.5, -0.05],
# forward right
6: [0.5, 0.05],
# brake left
7: [-0.5, -0.5],
# brake right
8: [-0.5, 0.5],
}
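To see how a multi-agent action dict gets interpreted, the mapping above can be exercised on its own; the per-car action values below are just illustrative, and the `decode_actions` helper is mine, not part of macad-gym:

```python
# [throttle, steer] pairs keyed by discrete action ID (copied from the map above)
DISCRETE_ACTIONS = {
    0: [0.0, 0.0],    # coast
    1: [0.0, -0.5],   # turn left
    2: [0.0, 0.5],    # turn right
    3: [1.0, 0.0],    # forward
    4: [-0.5, 0.0],   # brake
    5: [0.5, -0.05],  # forward left
    6: [0.5, 0.05],   # forward right
    7: [-0.5, -0.5],  # brake left
    8: [-0.5, 0.5],   # brake right
}

def decode_actions(action_dict):
    """Map per-agent discrete action IDs to [throttle, steer] controls."""
    return {agent: DISCRETE_ACTIONS[a] for agent, a in action_dict.items()}

# car1 drives forward, car2 coasts, car3 brakes
controls = decode_actions({"car1": 3, "car2": 0, "car3": 4})
```

So speed is not commanded directly; each agent picks one of 9 throttle/steer combinations per step.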
- What is the reward of the agents? I understand from the paper that this environment is a single-agent one and each agent maximizes its own reward. Is the objective just to reach a certain point? Are collisions penalized in the algo?
As described in the paper, this environment is a multi-agent environment with independently acting agents. All the experiment details are in Appendix C, where the reward function is described as below:
In the HomoNcomIndePOIntrxMASS3CTWN3-v0
environment, the same reward function is used for each of the agents.
You can add your own reward function using the following as an example: https://github.com/praveen-palanisamy/macad-gym/blob/5cb06a3278b049db8bdd8d768ed28d7fd929c11b/src/macad_gym/carla/reward.py#L21-L41
and also create your own environment if you would like to, based on this wiki.
Also, the goal for each of the agents is described in this appendix section of the paper.
- When running the sample script, without making any changes, I see that the cars collide at the intersection. I wonder what happens there.
That is because, in the sample agent script, for each of the car actors, the agent generates the action of driving forward at full throttle. This is the line where the environment is stepped with actions from the agents: https://github.com/praveen-palanisamy/macad-gym/blob/d37741bfde03c5a9627a79f2572fcaa8a5742a43/examples/basic_agent.py#L38
And the get_action()
method in the simple agent example returns the following action:
https://github.com/praveen-palanisamy/macad-gym/blob/d37741bfde03c5a9627a79f2572fcaa8a5742a43/examples/basic_agent.py#L25-L28
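In spirit, that policy boils down to something like the sketch below (a simplification of examples/basic_agent.py, not the actual code; the constant name is mine):

```python
FORWARD = 3  # discrete action ID for full throttle, no steering (see the map above)

def get_action(observations):
    """Always drive forward: return action 3 for every actor in the obs dict."""
    return {actor_id: FORWARD for actor_id in observations}

# Every car gets the "forward" action regardless of what it observes,
# which is why the cars collide at the intersection.
actions = get_action({"car1": None, "car2": None, "car3": None})
```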
- How can we plot the rewards as in your paper?
Oh, I missed this question. For the reward plots, I used Tensorboard using this py package.
Thank you very much for all these answers. So will the rewards be dumped to a file, or should I produce the file output myself?
You are welcome! The sample agent script doesn't log the rewards to file. You will have to add the code to log the rewards to a file.
An example (feel free to use a different/simpler logger. This is just an example):
Import tensorboardX
and create a summary writer:
from tensorboardX import SummaryWriter
writer = SummaryWriter("logs")
Log the rewards. At every step, you can log the reward to the log file using:
writer.add_scalar("reward/actor1", total_reward_actor1, step_num)
Visualize the live reward plots
From a command line, run: tensorboard --logdir=.
to visualize the data in the logs
directory.
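If you'd rather skip tensorboardX entirely, a plain CSV logger works for offline plotting too. Here is a standard-library sketch (the file name, column layout, and helper name are my choices, not anything macad-gym provides):

```python
import csv

def log_rewards(path, reward_history):
    """Write per-step rewards for each actor to a CSV file.

    reward_history: a list with one dict per step, e.g. {"car1": 0.1, "car2": -1.0},
    as could be accumulated from the reward dict returned by env.step().
    """
    actors = sorted(reward_history[0])
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["step"] + actors)
        for step, rewards in enumerate(reward_history):
            writer.writerow([step] + [rewards[a] for a in actors])

# Illustrative reward values for two steps and two actors
log_rewards("rewards.csv", [{"car1": 0.1, "car2": 0.0}, {"car1": -1.0, "car2": 0.2}])
```

The resulting file can then be plotted with any tool (matplotlib, a spreadsheet, etc.).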
Closing this issue since the questions were answered. Please open a new issue for any further questions/issues. Thank you!
Hello, in trying to run the example in the README.md for the Agent Interface, I made the following changes that you might want to consider:
env = gym.make("HomoNComIndePOIntrxMASS3CTWN3-v0") # There is a typo here
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0") #this is the correct one
configs = env.configs() # this doesn't work: configs is a dict, not a callable
env_config = env.configs["env"] # change this to read directly from the dict
actor_configs = env.configs["actors"] # change this to read directly from the dict
I kept the rest of the code the same, but I got the following error after running exactly what is on the example, without changing anything else:
Could not connect to Carla server because: rpc::rpc_error during call in function version