Hi team,
I saw Apollo 5.0 has three versions: Apollo 5.0 (modular testing), Apollo 5.0, and Apollo 5.0 (full analysis). However, for Apollo 6.0, only Apollo 6.0 (modular testing) exists in the Web API. I wonder if Apollo 6.0 and Apollo 6.0 (full analysis) are also supported?
These are just example sensor configurations. You can create your own from scratch, or copy an existing one and modify it.
I believe "Full Analysis" means it has an extra sensor attached to detect dangerous vehicle behavior, to be included in the test report; that extra sensor is not required by Apollo itself.
Hi @zelenkovsky ,
Thank you for your reply! I can successfully run the DreamView API example (the script at the end of that web page) with Apollo 6.0 (modular testing).
However, when I create a new sensor configuration "Apollo 6.0" (which I simply copied from "Apollo 5.0") and use it, I encounter lgsvl.dreamview.dreamview.WaitApolloError.
A screenshot of this process is shown below. It seems that in this case multiple Apollo modules are not running?
The entire error message is as follows:
Warning: Apollo module Camera is not running!!!
Warning: Apollo module Perception is not running!!!
Warning: Apollo module Prediction is not running!!!
Warning: Apollo module Traffic Light is not running!!!
Warning: Apollo module Camera is not running!!!
Warning: Apollo module Perception is not running!!!
Warning: Apollo module Prediction is not running!!!
Warning: Apollo module Traffic Light is not running!!!
Warning: Apollo module Camera is not running!!!
Warning: Apollo module Perception is not running!!!
Warning: Apollo module Prediction is not running!!!
Warning: Apollo module Traffic Light is not running!!!
Warning: Apollo module Camera is not running!!!
Warning: Apollo module Perception is not running!!!
Warning: Apollo module Prediction is not running!!!
Warning: Apollo module Traffic Light is not running!!!
Warning: Apollo module Camera is not running!!!
Warning: Apollo module Perception is not running!!!
Warning: Apollo module Prediction is not running!!!
Warning: Apollo module Traffic Light is not running!!!
No control message from Apollo within 60.0 seconds. Aborting...
Traceback (most recent call last):
File "demo.py", line 39, in <module>
dv.setup_apollo(destination.position.x, destination.position.z, modules)
File "/home/zhongzzy9/Documents/self-driving-car/PythonAPI/lgsvl/dreamview/dreamview.py", line 338, in setup_apollo
raise WaitApolloError()
lgsvl.dreamview.dreamview.WaitApolloError
I found out that my error might have something to do with GPU memory.
The SVL simulator takes about 1614 MB, Apollo 6.0 (modular testing) takes about 5257 MB, and Apollo 6.0 takes about 7376 MB. When I use a GPU card with 11 GB of memory, the issue seems to be resolved.
A new issue pops up, however: the traffic light module is turned on initially, but when the car starts to move it turns off, and the module delay for the traffic light keeps increasing from then on. As a result, the vehicle waits forever in front of the traffic light. The following screenshot demonstrates this:
As a side note, when I add the signal sensor to the car (i.e. providing ground truth for the traffic light), the car can finish the entire route successfully. Any idea what the cause of this issue is, and a potential way to get around it?
@AIasd There is a known bug with Apollo and signal detection. That is why we implemented modular testing to get around it.
Apollo 6.0 (modular testing) takes about 5257mb while Apollo 6.0 takes about 7376mb
Apollo should not take 5GB of GPU memory with "modular testing" enabled.
There should only be one "mainboard" component if you look at nvidia-smi, not three. Please go to the Apollo module controller in Dreamview and make sure that "Perception" and "Traffic Light" are OFF.
If you are using the modular-testing sensors ("obstacles" and "traffic light") then you don't need those Apollo modules and Apollo should take less than 2GB of GPU memory to run only the "Prediction" module.
A screenshot of this process is shown below. It seems that in this case multiple Apollo modules are not running?
Please show a screen shot of the module controller view in Dreamview.
Also, the messages you are seeing when you run demo.py are from the python "dreamview" library. The Python script should not be trying to enable perception, camera, or traffic light.
Can you post the "modules" list that you are passing to dv.setup_apollo? Camera, Perception, and Traffic Light should NOT be in the list of modules to be enabled.
Hi @EricBoiseLGSVL and @lemketron, thank you for your reply! Yes, I enabled
'Localization',
'Perception',
'Transform',
'Routing',
'Prediction',
'Planning',
'Camera',
'Traffic Light',
'Control'
in the Python script, and I guess that's why there were three mainboards in nvidia-smi and it took 5 GB of memory before. Now, as you suggested, when I disable Camera, Perception, and Traffic Light for modular testing, it takes much less memory.
As for the other issue, do you know if the Apollo master version still has the traffic light module issue? I can currently test all the modules except the traffic light module (by providing the signal sensor ground truth to the controller). However, it would be great to also get the traffic light module working.
Another thing I notice is that when I use the Camera module and the Perception module instead of the 3D ground truth, the controller becomes a bit "jerky" during driving. Is this normal?
Excellent, you reduced memory usage. The controller jerk issue is common if your frame rate is very low. This is why we recommend using one Linux machine for Apollo and a Windows machine for the simulator; this provides the best performance. Not sure about Apollo's progress on signal detection; I would ping them on one of their issues concerning this.
Hi @EricBoiseLGSVL ,
Thank you for your reply! Regarding the frame rate, are you referring to SVL's frame rate or Apollo's frame rate? Is there a way to control it, or is it automatic? I am currently using a Linux machine with 2x 2080 Ti GPUs. I can actually see through nvidia-smi that Apollo and SVL are placed on the two GPUs separately. So I guess in my case the real bottleneck for the frame rate is the CPU rather than the GPU? (My machine has an Intel i9 14-core CPU.)
Also, could you send me the link to the issue concerning signal detection so I can follow it?
Not sure what the bottleneck is when you enable Dreamview Camera and Perception modules. Could be many things. https://github.com/ApolloAuto/apollo/issues/12916
@AIasd Did you manage to resolve the "WaitApolloError" error? I'm getting a similar error with Apollo 6.0. I monitored the GPU memory (10GB on RTX 3080) and it was under control (<40% usage max). I am using only a subset of the available modules (see list in the code below) NOT including the ones @lemketron mentioned. I'm trying to get everything running through Python without touching either of the UIs.
Here's the code:
from environs import Env
import lgsvl
env = Env()
LGSVL__SIMULATOR_HOST = env.str("LGSVL__SIMULATOR_HOST", "127.0.0.1")
LGSVL__SIMULATOR_PORT = env.int("LGSVL__SIMULATOR_PORT", 8181)
LGSVL__AUTOPILOT_0_HOST = env.str("LGSVL__AUTOPILOT_0_HOST", "127.0.0.1")
LGSVL__AUTOPILOT_0_PORT = env.int("LGSVL__AUTOPILOT_0_PORT", 9090)
sim = lgsvl.Simulator(LGSVL__SIMULATOR_HOST, LGSVL__SIMULATOR_PORT)
sim.load("BorregasAve")
spawns = sim.get_spawn()
state = lgsvl.AgentState()
state.transform = spawns[0]
ego = sim.add_agent("2e9095fa-c9b9-4f3f-8d7d-65fa2bb03921", lgsvl.AgentType.EGO, state)
ego.connect_bridge(LGSVL__AUTOPILOT_0_HOST, LGSVL__AUTOPILOT_0_PORT)
dv = lgsvl.dreamview.Connection(sim, ego, LGSVL__AUTOPILOT_0_HOST)
dv.set_hd_map('Borregas Ave')
dv.set_vehicle('Lincoln2017MKZ LGSVL')
modules = [
    'Localization',
    'Transform',
    'Routing',
    'Prediction',
    'Planning',
    'Control'
]
destination = spawns[0].destinations[0]
dv.setup_apollo(destination.position.x, destination.position.z, modules)
sim.run()
This is the error:
Warning: Apollo module Localization is not running!!!
Warning: Apollo module Planning is not running!!!
Warning: Apollo module Localization is not running!!!
Warning: Apollo module Planning is not running!!!
Warning: Apollo module Localization is not running!!!
Warning: Apollo module Planning is not running!!!
Warning: Apollo module Localization is not running!!!
Warning: Apollo module Planning is not running!!!
Warning: Apollo module Localization is not running!!!
Warning: Apollo module Planning is not running!!!
No control message from Apollo within 60.0 seconds. Aborting...
Traceback (most recent call last):
File "run.py", line 36, in <module>
dv.setup_apollo(destination.position.x, destination.position.z, modules)
File "/home/raz/code/PythonAPI/lgsvl/dreamview/dreamview.py", line 338, in setup_apollo
raise WaitApolloError()
lgsvl.dreamview.dreamview.WaitApolloError
(Screenshots attached: Module Controller page before, Module Controller page after, and a full screenshot.)
Any suggestions on how to bypass this?
@raz4 Are you using Apollo 6.0 (modular testing)?
@AIasd Yes. The "encrypted" id used when adding the agent corresponds to the "Apollo 6.0 (modular testing)" configuration.
ego = sim.add_agent("2e9095fa-c9b9-4f3f-8d7d-65fa2bb03921", lgsvl.AgentType.EGO, state)
@raz4 Did you succeed in running Apollo with SVL manually via the Apollo web interface, rather than via the Python API?
@AIasd Yes, this is a good way to troubleshoot this. @raz4, are you able to run Apollo without the API?
I got it to work! Initially, I wasn't able to get it to work with the web interface either (the Localization module wouldn't start). I basically re-cloned the Apollo master (latest) repository and redid the Docker setup and compilation from scratch. First I ran it with the web interface and then with the Python API only, and it worked both times.
Thanks for the help!
@raz4 That's great to hear! I wonder if you can successfully run the full Apollo 6.0 (which is not just modular testing, but also has the Camera, Perception, and Traffic Light modules enabled) using the Apollo latest branch?
Hi @AIasd, I am also missing the Apollo 6.0 (full analysis) configuration since the SVL team updated the simulator website. How did you enable it? Did you just copy/clone the 5.0 autopilot and rename it to 6.0? It is not so obvious: I tried to add the older sensors such as the main camera and telephoto camera (available in 5.0), but they are removed when you try to add them in 6.0. Those options do not appear anymore...
I also cannot visualize data on cyber_visualizer anymore after this simulator update... sad. I get the error: "Channel cannot be empty". I would like to use the older version of the website; do you know if that is possible? I am still running the simulator I installed months ago, but it seems it was upgraded automatically and the old options are no longer available.
Besides that, on Apollo 5.0, when I enabled the traffic light tab I could get a very nice image from the camera looking at the traffic light. However, on SVL (the previous release, with full analysis for 6.0), when I enabled the traffic light, the window with the camera images was not opened... do you know how to enable this traffic light view?
Hi @EricBoiseLGSVL, do you know if/how I can get access to the old sensor configurations on the SVL website?
It is not available anymore :(
I would like to use Apollo 6.0 (full analysis) again. I am now experiencing issues with cyber_visualizer topics, and I would also like to have the camera data and lidar data available (with the raw Perception module running as before, not the modular-testing setup).
Thanks @EricBoiseLGSVL, I have recreated the Apollo 6.0 Full Analysis autopilot: 1 - Cloned the Apollo 6.0 Modular configuration; 2 - Edited it (based on the sensors available in Apollo 5.0 Full Analysis, adding the Perception sensors found in 5.0 that are missing from 6.0 Modular).
If someone needs the steps, I am sharing them in the video below:
Excellent, thanks for sharing. I am sure this will help many users. You rock.
Hi @marcusvinicius178, that looks awesome! I did it the same way. Did you also get the traffic light module to work, by any chance? I am still stuck there.
Hi @AIasd, I didn't try it, actually. The focus of my master's is path planning, and I am having a lot of issues there, mainly because I cannot find a good GPU/machine able to run the planners. I have found no good website to rent one from... :( The P6000 x2, V100, and RTX 5000 x2 that I rented on Paperspace were not good enough. Do you know another website to rent cloud computing? (Not Amazon, that is too expensive; I am just a student.)
But regarding the traffic light, I remember that when I triggered it on Apollo 5.0, an automatic window appeared with the camera image pointing at the traffic light. When I tried to enable the Traffic Light tab on Apollo 6.0, this window was not displayed anymore... I do not know how to trigger this window (I think I did it accidentally), and I am also curious. Why don't you use Apollo 5.0? Maybe you also need to add the traffic light sensor after cloning the autopilot 6.0? Or maybe this module is not working because of a lack of computational resources (the same issue as mine). You can try testing just with the Dreamview internal simulator, by enabling SIM CONTROL in the Tasks panel... Sorry, I am not an expert in perception.
Hi @marcusvinicius178, what planner are you running that takes that many resources? I am currently using 2x 2080 Ti locally, but my CPU only has 14 cores, which seems to be a bottleneck when I enable the Perception and Camera modules (the simulation becomes quite jerky).
The reason I am not using 5.0 is that it does not support 2080Ti. I do not have access to a 1080Ti for that.
I have tried adding a traffic light sensor, but that does not seem to work, so currently I am providing the ground truth for the traffic lights.
Also, the earlier posts in this issue had some brief discussion of the Apollo 6.0 traffic light module; it seems that this is a known issue. I recently tried their latest master branch as well, and had no luck there either.
@AIasd The Lattice planner does not work for me with the LGSVL simulator. It is simpler than PUBLIC_ROAD, but I believe it runs at a higher frequency or something else, because it reports that the IMU and GPS messages are delayed (cycles and seconds behind)... take a look: https://www.youtube.com/watch?v=a6UKYE7xmMY
Could you run it on your machine, just to check whether the issue is a GPU problem? It is just a matter of changing the file planning_config.proto.txt inside the modules/planning/conf folder: change the line that says "planner_type = PUBLIC_ROAD" to LATTICE, and then you just need to deactivate the Planning tab in the Module Controller panel in Dreamview and activate it again (you do not need to compile the workspace again... the change takes effect online). I would be very grateful if you could do this quickly, just to confirm that I am facing a machine problem...
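In case it helps, here is a rough sketch of scripting that one-line change (the file path and the planner_type line are taken from the comment above; please verify them against your own Apollo checkout, since this is just an illustration):

# Hypothetical helper: swap the planner type in the config file mentioned above.
from pathlib import Path

cfg = Path("modules/planning/conf/planning_config.proto.txt")  # path as given above
text = cfg.read_text()
cfg.write_text(text.replace("PUBLIC_ROAD", "LATTICE"))  # switch planner_type to LATTICE
# Afterwards, toggle the Planning module off and on in Dreamview so the change is picked up.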
Regarding your issue with the traffic light: why don't you rent a machine on paperspace.com? It is just 1 dollar per hour. Take a look:
Hi @marcusvinicius178, I tried replacing planner_type and then restarting the whole thing. I did not see clear differences. Am I doing it right? I only disabled the traffic light module, with a sensor feeding in the ground truth.
Hi @AIasd, thanks a lot. Actually, there are no clear visual differences. The blue/green line (planner line) covering the red line (route line) is produced when you activate the Planning tab; if it was generated after you changed planner_type to LATTICE, this means the Lattice planner works on your machine! There is no difference in the visualization. However, THERE IS a difference in the graphs. Please disable the LOCK tab, activate the PNC_MONITOR tab in the Tasks panel, and take a look at the graphs on the right (planning, control, latency). All of the planning graphs produced by the standard planner (PUBLIC_ROAD) are always generated and remain there forever. However, the graphs produced by the LATTICE planner do not remain displayed after the car reaches its goal, and LATTICE also does NOT generate all of the planning graphs (it generates about 60% of them). So to check, you can just scroll down to these graphs and see whether the Lattice planner or PUBLIC_ROAD is being used: if all graphs are plotted, it is PUBLIC_ROAD; if not, it is LATTICE. Please let me know :)
Hi @marcusvinicius178, could you share some screenshots of the differences, by any chance? I can then compare them with your graphs to confirm it.
Hi @AIasd, it is better if I share a video where I changed the planners: https://www.youtube.com/watch?v=YG-ReNbU5IM
Please take a look before minute 7:22: you will see that I tested the LATTICE planner and got the delay error, both in the Module Delay view and in cyber_monitor, where the planning topic is RED (not active).
Then, after I change the planner_type (after minute 7:22, specifically at minute 7:28), you will see that the Module Delay view no longer shows the planning entry in red... it is green again.
Regarding the graphs, please take a look after minute 15:18, where I changed from LATTICE back to PUBLIC_ROAD; at minute 15:52 you will see the residual LATTICE graphs (as I had not yet toggled the Planning module tab again).
And at minute 16:12 you will see that the NEW graphs appear, as well as the planner route (in a green/blue color).
As you can see, when I try to use the LATTICE planner it is not activated (red entry in Module Delay), and only the red routing line is displayed in front of the vehicle, although the graphs are generated (not all of them). With the PUBLIC_ROAD planner, however, the green/blue line is drawn covering the red routing line in front of the vehicle, and the Module Delay entry becomes green again (there is no delay error from the GPS/IMU sensors). In addition, you can see how new graphs were added at minute 16:12.
If, on your machine, the green/blue line was displayed when you used the LATTICE planner, and you did not have a red entry for planning in Module Delay, then your machine was able to run the Lattice planner. That is what I would like to check.
Thank you very much for kindly running it on your machine :)
Hi @marcusvinicius178, when I replaced PUBLIC_ROAD with LATTICE, the green/blue line was still drawn covering the red routing line. Also, there was no red delay for the planning module. So in summary, there is no clear difference between using either of the two.
One weird thing, though: when I switched the Planning button on, it automatically switched itself off. However, there was still no delay shown for the planning module. I am not very sure about this behavior.
OK, thanks for the update. Actually, when the tab is automatically switched off, it is because the module is not working properly (this should also happen with your TRAFFIC LIGHT module: if it does not work, it will be disabled). However, you told me the blue line was drawn when you switched the Planning tab off and on again, which is contradictory. To conclude for real whether it works or not, you can (if you wish) make this planner modification and save it in planning_config.proto.txt before entering the Docker image (dev_start.sh and dev_into.sh), then switch the modules on from scratch. If the blue line appears, then the planner is really working and your PC is powerful! But considering that the planning entry in Module Delay did not become red, I conclude it is working.
As for a change in behavior, you won't see one; it is pretty similar (the Public Road planner is an optimization of the Lattice planner). You will only see the difference in the graphs plotted, as I showed you previously.
But it does not matter. It seems that this planner was never fully developed to become robust, and is used just as a basis... and I also don't have a powerful machine to run it. So I can explain that to the examiners at my master's presentation. Thanks a lot! I hope you find a solution for your perception issue.
Hi @marcusvinicius178, I tried modifying the planning_config.proto.txt file before entering the Docker image, and that gives the same result, so I guess it should work on my machine.
Hi @AIasd, thank you very much, and sorry for the late reply. I am aware that I need a new computer. I would like to ask which PC you are using for this simulation (if possible, please send me your system properties: model, processor, GPU, CPU, number of cores, clock frequency, etc.). I will try to get a similar one to run these simulations. Thanks :)
Hi @marcusvinicius178, my CPU is an Intel i9-7940X and for GPUs I have 2x 2080 Ti. I also experience obvious slow-down and jerky simulation (although it seems a bit better than in your video) when Perception is activated.
Yes @AIasd, the jerkiness and slow-down in my recording are because I rented a machine in the USA and I am located in the south of Brazil; the support team told me the cause is the latency due to the distance... To avoid this jerkiness you can run the simulator on a desktop and Apollo on your notebook, for example, or the other way around; it works!!! I have already tested this with my notebook working alongside my relative's desktop. However, I cannot go to my relative's house every day hahahaha. So I am going to buy a good desktop in the future... If you already have a notebook, you can solve this easily. I wish you success! Thanks for sharing your machine's specifications, this helps me a lot!
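As a side note on the two-machine setup mentioned here and earlier in the thread, the Python API scripts above already read the simulator and bridge hosts from environment variables, so a rough sketch of splitting SVL and Apollo across machines only needs those pointed at the right hosts (the IP addresses below are placeholders):

# Hypothetical two-machine setup: one host runs the SVL simulator, another runs Apollo.
from environs import Env
import lgsvl

env = Env()
LGSVL__SIMULATOR_HOST = env.str("LGSVL__SIMULATOR_HOST", "192.168.0.10")      # machine running SVL
LGSVL__AUTOPILOT_0_HOST = env.str("LGSVL__AUTOPILOT_0_HOST", "192.168.0.20")  # machine running Apollo

sim = lgsvl.Simulator(LGSVL__SIMULATOR_HOST, 8181)
# ...add the ego vehicle as in the script above, then connect the bridge:
# ego.connect_bridge(LGSVL__AUTOPILOT_0_HOST, 9090)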