Closed samuk closed 2 years ago
Is the system required to create plans in-between waypoints (the known coordinates)?
Could a similar approach be used, with vox_nav handling the higher-level path planning / waypoint navigation & turns, then adopting a visual servoing approach once correctly situated in a crop row?
My interpretation is that visual servoing together with a controller provides cmd_vel in-between waypoints, and these commands drive the robot; but I am not sure whether there is an actual path plan between waypoints used as a reference. If so, I would suggest you use navigation2: it has a waypoint follower package that is well written and tested for this purpose.
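To make the distinction concrete, here is a minimal sketch of driving toward a waypoint with no intermediate path plan, only a reactive controller producing cmd_vel. The function name, gains, and tuple-based poses are all illustrative assumptions, not vox_nav or navigation2 API:

```python
import math

def cmd_vel_between_waypoints(pose, waypoint, v_max=0.5, k_ang=1.0):
    """Toy reactive controller: steer toward the next waypoint.

    pose = (x, y, yaw) and waypoint = (x, y), both in the map frame.
    Returns (linear, angular), the two fields a geometry_msgs/Twist
    on /cmd_vel would carry. Gains are illustrative only.
    """
    dx = waypoint[0] - pose[0]
    dy = waypoint[1] - pose[1]
    heading_error = math.atan2(dy, dx) - pose[2]
    # Wrap the error to [-pi, pi] so the robot turns the short way.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    # Slow forward motion down as the robot points away from the goal.
    linear = v_max * max(0.0, math.cos(heading_error))
    angular = k_ang * heading_error
    return linear, angular
```

A planner-based follower (like the nav2 waypoint follower) would instead track a precomputed path between the waypoints; the sketch above only ever looks at the next waypoint directly.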
vox_nav currently has a behavior tree node named vox_nav_pose_navigator, an action that drives the robot to a single pose. If you want more than one pose, which is the waypoint stuff you mention, there needs to be an additional node/action for that, which vox_nav lacks at the moment. I do have a plan to add something like vox_nav_waypoint_navigator and implement it with a behavior tree, but I can't tell for sure when this will be added.
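One plausible shape for such a node, sketched in BehaviorTree.CPP-style XML, is simply a Sequence over the existing single-pose action. This tree is hypothetical: vox_nav does not ship it, and the port names here are invented to illustrate the idea.

```xml
<!-- Hypothetical sketch only; reuses the existing single-pose action. -->
<root main_tree_to_execute="WaypointNavigation">
  <BehaviorTree ID="WaypointNavigation">
    <Sequence>
      <vox_nav_pose_navigator pose="{waypoint_1}"/>
      <vox_nav_pose_navigator pose="{waypoint_2}"/>
      <vox_nav_pose_navigator pose="{waypoint_3}"/>
    </Sequence>
  </BehaviorTree>
</root>
```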
For future reference: waypoint following (GPS and normal) is now implemented with behaviour trees in vox_nav.
Complete ROS newbie, so please excuse any stupid questions.
I like the visual servoing strategy demonstrated in Agribot.
Specifically, I was interested in using this TensorFlow model when in a crop row.
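As a rough illustration of how a crop-row detection could be turned into cmd_vel, here is a minimal proportional controller. The output format of the actual TensorFlow model would dictate the real mapping; the function name, sign conventions, and gains below are assumptions:

```python
def row_following_cmd(row_offset_m, row_angle_rad, v_nominal=0.3,
                      k_offset=1.5, k_angle=0.8):
    """Map a detected crop row to a cmd_vel-style (linear, angular) pair.

    row_offset_m: lateral offset of the row centreline from the camera
    axis (metres, left positive). row_angle_rad: row heading relative
    to the robot. Gains are illustrative, not tuned for any robot.
    """
    # Steer to cancel both the lateral offset and the heading error.
    angular = k_offset * row_offset_m + k_angle * row_angle_rad
    return v_nominal, angular
```

In a ROS2 node this pair would be packed into a geometry_msgs/Twist and published on /cmd_vel while the robot is inside the row, with vox_nav taking over for the headland turns.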
Subscribing to /cmd_vel from this ROS2 code for the Ackermann-driven Mars rover Tenacity/Sawppy, and upgrading to BLDC motors.