Thank you for your interest in LimSim & LimSim++. Currently, LimSim++ saves the panoramic image data in the database; you can find the relevant data in the imageINFO table. However, these stored images are compressed to 560×315. If you need the original 1600×900 images, you can obtain the CameraImages.ORI_CAM_FRONT series while the simulation is running and save them yourself.
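If it helps, here is a minimal sketch of reading those stored images back out, assuming the database is a SQLite file and using only the imageINFO table name mentioned above; the file path is a placeholder and the column layout is not documented here, so inspect the schema first.

import sqlite3

conn = sqlite3.connect("limsim.db")  # placeholder path to your run database
# Inspect the real column layout of imageINFO before relying on any column names
print(conn.execute("PRAGMA table_info(imageINFO)").fetchall())
# Pull a few rows to see how the compressed panoramic frames are stored
rows = conn.execute("SELECT * FROM imageINFO LIMIT 5").fetchall()
print(rows)
conn.close()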
Glad to hear from you! I am using the LLM version of LimSim++. If GPT-4 is used for the description, is the input to GPT-4 a text description of the BEV scene? Also, how do I obtain the CameraImages.ORI_CAM_FRONT series?
Yes, if your LLM does not support image input, you can use the text description we provide. In lines 249 to 253 of ExampleVLMAgentCloseLoop.py you can see how CameraImages is obtained and used. You can replace images[-1].CAM_FRONT with images[-1].ORI_CAM_FRONT. You can check the function model.getCARLAImage() for more information.
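As a rough sketch of the substitution described above (it uses only the names mentioned in this thread; the no-argument call to model.getCARLAImage() and the array format of the frames are assumptions, so check the function for the real signature):

images = model.getCARLAImage()           # assumed no-argument call; see the function for the real signature
front_small = images[-1].CAM_FRONT       # compressed 560×315 front view
front_full = images[-1].ORI_CAM_FRONT    # original 1600×900 front view
# To save the original frame yourself, something like the following should work
# if the attribute is an image array (assumption): cv2.imwrite("front.png", front_full)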
Thank you. My LLM just can't read images, so how do you get the text description in your code? I saw in the paper that LimSim++ extracts road-network and vehicle information around the ego vehicle, and that this scenario description and task description are then packaged and passed in natural language to the driver agent. But where in the code does this happen?
In lines 314 to 316 of ExampleLLMAgentCloseLoop.py, you can see how we get the navigation information, action information, and environment information.
navInfo = descriptor.getNavigationInfo(roadgraph, vehicles)
actionInfo = descriptor.getAvailableActionsInfo(roadgraph, vehicles)
envInfo = descriptor.getEnvPrompt(roadgraph, vehicles)
In fact, you can build your own driver agent by modifying ExampleLLMAgentCloseLoop.py directly.
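For reference, here is a minimal sketch of how those three text blocks might be packaged into a single prompt for your own driver agent; the concatenation format and the query_llm call are illustrative, not the exact code in ExampleLLMAgentCloseLoop.py.

navInfo = descriptor.getNavigationInfo(roadgraph, vehicles)
actionInfo = descriptor.getAvailableActionsInfo(roadgraph, vehicles)
envInfo = descriptor.getEnvPrompt(roadgraph, vehicles)
# Package the scenario and task descriptions into one natural-language prompt
prompt = "\n\n".join([envInfo, navInfo, actionInfo])
decision = query_llm(prompt)  # hypothetical call to whatever LLM client you use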
Thanks for your reply. Excuse me, how do I get the images from the three camera perspectives inside ExampleLLMAgentCloseLoop.py?
ExampleLLMAgentCloseLoop.py does not provide round-view images; you can get camera images from ExampleVLMAgentCloseLoop.py.
Would it be possible to transfer the image display code from ExampleVLMAgentCloseLoop.py to ExampleLLMAgentCloseLoop.py?
In fact, there is no big difference between the two in terms of interface calls; you can take the interfaces from VLMExample and use them in LLMExample to get the image information. However, VLMExample's runtime requirements are different, so please refer to readme.md for how to run VLMExample.
When I ran ExampleLLMAgentCloseLoop.py, I had already configured the CARLA connection and started CARLA, but no image was displayed. I compared ExampleLLMAgentCloseLoop.py and ExampleVLMAgentCloseLoop.py and, apart from the LLM interface, the other parts did not seem very different, but I could not find the key code that displays the image.
Did you set CARLACosim=True when you initialized the model?
# init simulation
model = Model(
    egoID=ego_id, netFile=sumo_net_file, rouFile=sumo_rou_file,
    cfgFile=sumo_cfg_file, dataBase=database, SUMOGUI=sumo_gui,
    CARLACosim=True, carla_host=carla_host, carla_port=carla_port
)
However, I still recommend that you use VLMExample if you want to work with image data.
Yes, I have set CARLACosim=True.
So, can you run VLMExample successfully? You can just run VLMExample to test that your environment is installed correctly and that the application runs properly, without having the VLM make a decision.
Can the software collect visual image data for reinforcement learning?