Closed: wmh02240 closed this issue 4 months ago.
Could you share a bit more information about your setup (hardware, camera, config, environment) and a video of the robot running the policy, if possible? I can help debugging.
Hello @robodhruv, thanks for your reply. The robot I use is a wheeltec mini_mec robot, and the camera is a wide-angle camera with a 160-degree field of view. The configs and environment are consistent with those described in the README.
This picture shows the robot car.
Thanks again!
Hi @wmh02240, I am responding to your MonoNav issue here as it is a more relevant location. (Hi @robodhruv!! Sorry to jump in - I am just repeating the advice you already gave me! :)
Here are some suggestions from my experience deploying NoMaD:
- It would be helpful to see some sample images that you are feeding into the model, and the action candidates (`nactions`) that are output.
- To debug, try plotting the action candidates post-diffusion, i.e., plot `naction`.
- Notice how the action candidates are un-normalized using `get_action`. You might need to play around with the un-normalization process, i.e., `stats['min']` and `stats['max']`, to match your environment.
- If those look good, you can also project them onto the input image using `project_points`.

Here is an example of an input image with action candidates demonstrating evasive behavior:
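A minimal sketch of the plotting step suggested above, assuming `naction` is an array of shape `(num_samples, len_traj, 2)` of normalized waypoint deltas in `[-1, 1]` and `stats` holds the dataset min/max; the function names here are illustrative, not the repo's exact code:

```python
# Sketch: un-normalize and visualize diffusion action candidates in the robot frame.
# Assumes naction has shape (num_samples, len_traj, 2) with values in [-1, 1].
import numpy as np


def unnormalize(naction, stats):
    # Map [-1, 1] back to the dataset's waypoint-delta range.
    return (naction + 1.0) / 2.0 * (stats["max"] - stats["min"]) + stats["min"]


def plot_candidates(naction, stats):
    import matplotlib.pyplot as plt

    deltas = unnormalize(naction, stats)
    trajs = np.cumsum(deltas, axis=1)  # integrate deltas into waypoints
    for traj in trajs:
        plt.plot(traj[:, 0], traj[:, 1], alpha=0.5)
    plt.xlabel("x (forward)")
    plt.ylabel("y (left)")
    plt.title("Diffusion action candidates")
    plt.axis("equal")
    plt.show()
```

If the plotted candidates fan out sensibly in front of the robot, the model side is likely fine and attention can shift to the controller.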
Hi @natesimon, thanks for your reply. I have visualized the candidate actions as you suggested, and the results seem correct. I am still a little confused about how to calculate `stats['min']` and `stats['max']` to match my environment. Can you give me some advice? Looking forward to your reply.
Here is an example of the visualization results:
Thank you again!
That image suggests that the model is indeed working, thanks for sharing!
Typically, `stats['min']` and `stats['max']` are fixed constants corresponding to the robot's top speeds. What I suspect might be happening is that, despite the trajectories being good, the conversion to your specific robot's local controller is not tracking the trajectory correctly. I would advise trying a few different values, or substituting your own controller (assuming you already have something set up that works on your robot) and passing these trajectories to it as input.
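As a rough illustration of passing a trajectory to your own controller, here is a hypothetical helper (not the repo's code) that chases a single robot-frame waypoint with a differential-drive `(v, w)` command:

```python
# Sketch: turn a chosen (x, y) waypoint in the robot frame into (v, w)
# commands for a differential-drive base. All names and limits are placeholders.
import numpy as np


def waypoint_to_cmd(waypoint, dt=0.25, max_v=0.4, max_w=1.0):
    """Drive toward a single (x, y) waypoint expressed in the robot frame."""
    x, y = waypoint
    dist = np.hypot(x, y)
    heading = np.arctan2(y, x)                # angle from heading to waypoint
    v = np.clip(dist / dt, 0.0, max_v)        # forward speed to reach it in dt
    w = np.clip(heading / dt, -max_w, max_w)  # turn rate toward it
    return v, w
```

Feeding each successive waypoint of the chosen trajectory through something like this, at the policy's frame rate, is one simple way to check whether the tracking step is the problem.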
I followed your advice and conducted some tests by sending specific values to my robot's controller, and I can confirm that the controller executes speed commands correctly. However, I have a few points of confusion and would greatly appreciate your guidance:
You mentioned that `stats['min']` and `stats['max']` are fixed constants corresponding to the robot's top speeds. I'm unsure how to determine appropriate values for these constants, and I noticed that they have a significant impact on the model's trajectory predictions. In the code, does -2.5 represent linear velocity and -4 angular velocity?
I would like to fine-tune the model on my own collected data to enhance its obstacle avoidance capability in narrow spaces or specific areas. I collected the data by manually controlling the robot and recording the /usb_cam and /odom topics with ROS. Are there any limitations on the robot's speed during data collection? Should data collection stop if a collision occurs? Looking forward to your reply.
Hi @wmh02240,
Sorry for the confusion. `stats['min']` and `stats['max']` do not correspond to the maximum angular and linear velocity. They are just the minimum and maximum `x` and `y` deltas between normalized waypoints (scaled by the average distance between consecutive waypoints) from the dataset. The diffusion policy code relies on these statistics to scale each action dimension's minimum and maximum to `[-1, 1]`, which ensures stability during training. You shouldn't have to worry about calculating these values since they are not specific to any environment. If you want to adjust the angular and linear velocity outputs for deployment, you can edit the values in this config file (`max_v`, `max_w`, and `frame_rate` are most relevant).
Recording `/usb_cam` and `/odom` for fine-tuning should be good enough. You don't have to worry about any limitations on the speed of the car as long as you follow the data processing instructions in the README. You should also end data collection if a collision occurs, because we train our policies with imitation learning, so the quality of the model depends on the quality of the data.
Please let us know if you have any other questions. Good luck!
-Ajay
Hello everyone,
I'm currently attempting to control a car in Carla using NoMaD. I've managed to successfully print out the `naction` and chosen waypoints, but I'm unsure if I'm doing it correctly.
Additionally, I'd like to ask experienced users how to project these waypoints onto an image. Currently, I've used the `project_points()` function in the code to convert my `naction` into 2D coordinates. Should I then use `plot_trajs_and_points_on_image()` to draw them onto the image? Would that require modifications to `callback_obs()`?
Thank you for your assistance!
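For anyone wanting to sanity-check the projection step independently of the repo's helpers, here is a minimal pinhole-model sketch; the intrinsics and camera height below are placeholders, so substitute your own camera's calibration:

```python
# Sketch: project robot-frame (forward, left) ground-plane waypoints into pixel
# coordinates with a pinhole camera model. Intrinsics here are placeholders.
import numpy as np


def project_waypoints(waypoints, fx=300.0, fy=300.0, cx=320.0, cy=240.0,
                      camera_height=0.25):
    """waypoints: (N, 2) array of (forward, left) points on the ground plane."""
    pts = []
    for x_fwd, y_left in waypoints:
        if x_fwd <= 0:  # behind the camera; cannot be projected
            continue
        # Camera frame convention: z forward, x right, y down.
        xc, yc, zc = -y_left, camera_height, x_fwd
        u = fx * xc / zc + cx
        v = fy * yc / zc + cy
        pts.append((u, v))
    return np.array(pts)
```

Overlaying the returned pixel coordinates on the input image (e.g., with matplotlib's `scatter`) is a quick way to verify that the projected trajectories land where you expect.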
Hi, thank you for providing such excellent work. I have deployed the NoMaD model on my own car, but its performance is not very good: it is prone to collisions. Could you please tell me what might cause this?