-
I wonder if there is support for using an imitation learning API in Carla's environment.
For example, to run imitation learning on an Atari environment I use:
coach -et rl_coach.environments.gym_env…
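Independent of whether rl_coach ships a CARLA environment type, the core of imitation learning is fitting a policy to logged expert (state, action) pairs. Below is a toy behavioral-cloning sketch with made-up demonstration data and a 1-nearest-neighbour lookup standing in for a learned model; none of these names come from rl_coach or CARLA.

```python
# Toy behavioral cloning: a 1-nearest-neighbour "policy" that copies the
# action of the closest logged expert state. A real pipeline would train
# a neural network on states/actions recorded from the simulator instead.

def nearest_neighbor_policy(dataset):
    """Return a policy that imitates the action of the nearest logged state."""
    def policy(state):
        best = min(
            dataset,
            key=lambda pair: sum((s - x) ** 2 for s, x in zip(pair[0], state)),
        )
        return best[1]
    return policy

# Hypothetical expert demonstrations: (state, action) pairs.
demos = [
    ((0.0, 0.0), "throttle"),
    ((1.0, 0.0), "brake"),
    ((0.0, 1.0), "steer_left"),
]

policy = nearest_neighbor_policy(demos)
print(policy((0.1, 0.1)))  # closest logged state is (0.0, 0.0)
```

The same pattern applies to CARLA once the demonstrations are sensor observations and `carla.VehicleControl` values recorded from an expert driver.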
-
### Thanks for giving yabridge a shot!
- [X] I read through both the troubleshooting and the known issues sections, and my issue wasn't listed there
### Problem description
Hello everyone,
Today I…
-
What is WORK_BASE_DIR?
In ScenarioGeneration/Apptainer/scripts.sh
```sh
# dependent variables
CarlaUnreal=${WORK_BASE_DIR}/${BASE_IMAGE_DIST}/CarlaUnreal
CARLA_SRC=${WORK_BASE_DIR}/${BASE_IMAGE…
```
-
**Describe our question or idea**
I want to run Carla Scenario Runner with an Apollo Carla bridge, which requires Carla to run in a Docker container. Now I still want to run Scenario Runn…
-
## Description
_What does the bug consist of?_
## Environment
_Which version of MathType does this happen in?_
_What is the relevant software and their versions?_
- _Editor (CKEditor, F…
-
If you are submitting a bug report, please fill in the following details and use the tag [bug].
### Describe the bug
A clear and concise description of what the bug is.
When I verify the mode…
-
Hi,
Thanks for the wonderful repository.
I have some questions about the initial setup of the Environment.
I am using the compiled Carla version 0.9.2 with Python 3. When I try to run ```from Envir…
-
When I tried to train a scenario by running "python run_train.py --agent_cfg=adv_scenic.yaml --scenario_cfg=train_scenario_scenic.yaml --mode train_scenario --scenario_id 1", an error occurred.
File "/h…
-
Hi all,
I want to apply multi-agent reinforcement learning, specifically the PPO, TRPO, DDPG, and A2C algorithms. I don't understand how to write a Carla environment for these algorithms. Is any …
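For what it's worth, libraries implementing PPO/TRPO/DDPG/A2C generally expect a Gym-style `reset`/`step` interface, extended per-agent for the multi-agent case. Below is a minimal runnable sketch of that interface; the class name, dimensions, and placeholder state are all illustrative, and the actual CARLA client calls are replaced by comments.

```python
# Minimal sketch of a Gym-style multi-agent environment interface.
# Names and dimensions are illustrative; the CARLA client calls are
# replaced by placeholder state so the skeleton runs stand-alone.

class MultiAgentCarlaEnv:
    """Each agent gets its own observation, reward, and done flag."""

    def __init__(self, num_agents=2, obs_dim=4, max_steps=100):
        self.num_agents = num_agents
        self.obs_dim = obs_dim
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        # In a real wrapper: connect via carla.Client, spawn one vehicle
        # per agent, and attach sensors here.
        self.steps = 0
        return [[0.0] * self.obs_dim for _ in range(self.num_agents)]

    def step(self, actions):
        # In a real wrapper: apply a carla.VehicleControl per agent,
        # tick the world, then read each agent's sensors back.
        assert len(actions) == self.num_agents
        self.steps += 1
        obs = [[float(self.steps)] * self.obs_dim for _ in range(self.num_agents)]
        rewards = [0.0] * self.num_agents
        dones = [self.steps >= self.max_steps] * self.num_agents
        return obs, rewards, dones, {}


env = MultiAgentCarlaEnv(num_agents=3)
obs = env.reset()
obs, rewards, dones, info = env.step([0, 1, 2])
```

With this shape, a single-agent algorithm like PPO can be run per agent, or the per-agent lists can be batched for a shared policy.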
-
**Environment :**
Ubuntu 18.04
Carla version (tag: 0.9.13) latest.
Ros_bridge : latest
I have built ros_bridge successfully using catkin_make.
Then I launched ros_bridge using
roslaunch c…