AIasd closed this 3 years ago
I figured out the error: only one client can be connected to the simulator at a time, so I cannot call `lgsvl.Simulator` twice. Is there a way to disconnect the current client so I can call `lgsvl.Simulator` again? Otherwise, I have to keep track of the `sim` instance and pass it across multiple simulations.
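In the meantime, a minimal sketch of the workaround described above (create one connection and reuse it everywhere) could look like the following. The `make_sim` parameter stands in for `lgsvl.Simulator`, and the host/port defaults are illustrative:

```python
# Sketch of a cached, single-client connection. In real use, `make_sim`
# would be lgsvl.Simulator; it is a parameter here so the caching logic
# works without a running simulator.
_sim_cache = {}

def get_simulator(make_sim, host="127.0.0.1", port=8181):
    """Create the simulator client on the first call, then reuse it."""
    key = (host, port)
    if key not in _sim_cache:
        _sim_cache[key] = make_sim(host, port)
    return _sim_cache[key]
```

With this, every part of the script calls `get_simulator(lgsvl.Simulator)` instead of constructing a second client.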
@hadiTab do we have any plans to support this?
Another related question: is there a way to avoid restarting the Apollo modules for every simulation? This step is quite time-consuming, especially when I also enable the perception module.
I think you will need to post on Apollo's issues to see if they are aware of this or have a solution.
If you don't need to have perception running, you can use ground truth from the simulator, which will make the Apollo cycle time a little faster... Check out the modular testing docs.
Hi @lemketron, unfortunately I do want to test the perception module, so it seems that is a bit unavoidable here.
Hi @EricBoiseLGSVL, yes, I should probably open an issue there to ask about this.
> Hi @lemketron unfortunately I do want to test the perception module so it seems that it is a bit unavoidable there.
I understand. Unfortunately perception has been unavailable in Apollo for so long that we have focused most of our 6.0/master testing on using "modular testing". In fact even if you enable the modules, it will likely only work if you have a super powerful graphics workstation, and may still require a dedicated machine to run Apollo.
I can run modular testing with Apollo and SVL Simulator on an 8GB RTX 2070 Max-Q in a 15" Razer Blade laptop with a 6-core (12-thread) i7, but there is not enough GPU memory to enable Apollo's perception modules on this one machine. I mention this just so you're aware, in case you're trying to do this on one single machine...
In any case if you're having issues with it then yes, please post the details over on Apollo issues and feel free to link to this issue as well.
Hi @lemketron, thanks for the comments! I think my machine should probably be fine since it has 2x 2080 Ti (one for SVL and one for Apollo) and a 14-core i9. I currently can run Apollo with perception (although still not that smoothly).
Another performance-related question: is it possible to save camera images and other simulator data (e.g. the ego car's and other cars' locations) in parallel with the simulation process (i.e. `sim.run`)? Right now I have to run a step (0.1s) of simulation, do the saving step, and then run the next step (0.1s) of simulation. The saving step is really time-consuming and thus makes the simulation time very long. Any ideas or related issues along those lines?
Wow, that sounds like a pretty impressive system, congrats!
As for saving images, we recommend using cyber_recorder to capture the images and any other data you wish to save into cyber bags, which you can then post-process. The Python API is for controlling the simulation, not for retrieving sensor data in real time.
And if you are stepping, be sure to check out the clock sensor so that Apollo can use simulator time rather than falling behind real time, especially if you're making a lot of Python API calls at each step.
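The stepping pattern being discussed can be sketched as follows (not an official recipe; `sim.run(seconds)` is the only lgsvl call assumed, so any object with a compatible `run` method works here):

```python
def run_stepped(sim, total_seconds, step=0.1, on_step=None):
    """Advance the simulation in fixed steps, calling on_step after each.

    `sim` is any object with a run(seconds) method (e.g. lgsvl.Simulator).
    """
    n_steps = int(round(total_seconds / step))
    for i in range(n_steps):
        sim.run(step)           # advance simulator time by one step
        if on_step is not None:
            on_step(i)          # e.g. save data collected for this step
    return n_steps
```

The heavier the work done in `on_step`, the more the clock sensor matters, since Apollo then follows simulator time instead of drifting behind wall-clock time.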
Hi @lemketron,
Are you referring to the Cyber RT Python API listener example?
Also, do you know how I can check whether the clock sensor is being used properly?
I have figured out the clock sensor part. If I understand correctly, do you mean I should create a new sensor (similar to the comfort sensor) that streams the data I want (which requires creating a new `*.proto` for that purpose) and sends it to Apollo via Cyber RT? Then I can create a listener that listens on that channel?
Is there an easier way for my purpose? Say, for example, I want to know the ego car's speed while `sim.run` is running; what is the simplest way to record this information locally? (It is OK to lose some of this info as long as most of it is recorded.)
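For what it's worth, the simplest Python-side version of this would just sample between steps. A minimal sketch, assuming (per the lgsvl Python API) that the ego agent exposes `.state.speed`; any objects with compatible `run` and `state.speed` attributes work here:

```python
def record_speeds(sim, ego, total_seconds, step=0.1):
    """Step the simulation and log the ego speed after each step."""
    speeds = []
    for _ in range(int(round(total_seconds / step))):
        sim.run(step)                    # advance one step
        speeds.append(ego.state.speed)   # sample speed at this step
    return speeds
```

This only samples once per step, so fast transients between steps are lost, which matches the "OK to lose some of this info" requirement.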
Camera images can be saved with the ColorCameraSensor, no need to make a new one. If you are just looking to get speed, I would use already existing sensors, because it is a pain to make new proto files.
Hi @EricBoiseLGSVL, thanks for the reply! In that case, all I need to do is create a Cyber RT listener listening on the ColorCameraSensor channel and store the data locally while the simulation is running; am I understanding correctly?
I guess I also want info like the locations of other agents in real time. I might get that from the 3D Ground Truth sensor? But I also don't want Apollo to use it. Is there a switch I can turn off for that in the config?
Just change the topic name that the simulator publishes the ground truth on and the Apollo perception won't see it.
Instead of: /apollo/perception/obstacles
Use the default from the docs: /simulator/ground_truth/3d_detections
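For illustration, the relevant fragment of the vehicle's sensor configuration JSON might look like the following; the exact key names are an assumption based on the SVL sensor-config format, so check them against the docs:

```json
{
  "type": "3D Ground Truth",
  "name": "3D Ground Truth Sensor",
  "params": {
    "Frequency": 10,
    "Topic": "/simulator/ground_truth/3d_detections"
  }
}
```

With the topic left at the documented default rather than `/apollo/perception/obstacles`, Apollo's modules never subscribe to it, but your own listener still can.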
@lemketron I see, thanks a lot!
Looks like the original issue was resolved; marking this to be closed soon. Thanks for the help @lemketron
Hi, I am trying to run a simulation multiple times without restarting the Python script. However, it seems that after the first run, the program gets stuck at retrieving `self.data`. In particular, when it invokes the second `run_svl_simulation` call and executes the line `print('dir(sim)', dir(sim))`, it correctly prints all the attribute names (including `current_scene`) of the object `sim`. But then it gets stuck at the line `print('sim.current_scene', sim.current_scene)` and stays there forever. The code snippet looks like the following:

and the traceback after I exited the program via `ctrl-c` looks like:

What can be the potential cause of this behavior? Thanks!