Monash-Connected-Autonomous-Vehicle / ESDA

Software stack for MCAVs annual IGVC entry

VLP16 driver and launch #21

Closed AbBaSaMo closed 5 months ago

AbBaSaMo commented 5 months ago

Refer to the following repository https://github.com/ros-drivers/velodyne/tree/humble-devel

Jiawei-Liao commented 5 months ago

Installed Velodyne from https://github.com/ros-drivers/velodyne/tree/humble-devel. However, when trying to build, it gave the same errors as the Lidar from the ENV200

Followed Velodyne installation instructions from https://github.com/orgs/Monash-Connected-Autonomous-Vehicle/projects/8/views/5?pane=issue&itemId=40893328.

Tested using these commands: ros2 launch velodyne_driver velodyne_driver_node-VLP32C-launch.py [screenshot]

ros2 launch velodyne_pointcloud velodyne_transform_node-VLP32C-launch.py [screenshot]

But this isn't VLP16. Not sure if it must be VLP16 or if this is fine as well.
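If the humble-devel tree follows the same naming pattern as the VLP32C files above, the VLP16 variants would presumably be (names assumed from the repo's `<node>-<model>-launch.py` convention, not verified here):

```shell
# VLP16 equivalents of the VLP32C launch files above
# (file names assumed from the repo's naming pattern)
ros2 launch velodyne_driver velodyne_driver_node-VLP16-launch.py
ros2 launch velodyne_pointcloud velodyne_transform_node-VLP16-launch.py
```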

Installed slam_toolbox from https://github.com/SteveMacenski/slam_toolbox Replacing eloquent with humble

Tested using this command: ros2 launch slam_toolbox online_sync_launch.py [screenshot]

There's some output, but I have no idea how to check whether it actually works correctly.
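Some generic checks that might help here (a sketch assuming a sourced Humble environment; /map and /scan are slam_toolbox's default topic names):

```shell
# is the slam node actually up?
ros2 node list                 # a slam_toolbox node should appear
ros2 topic list                # look for /map and /scan
# does it ever publish a map? (blocks until one message arrives)
ros2 topic echo /map --once
```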

AbBaSaMo commented 5 months ago

Just confirming: yep, it should be VLP16. [screenshot]

What was the error it was outputting? Or can you link the issue you are referring to?

As for slam, once we get the pointcloud outputting such that we can pass it into the slam_toolbox pkg, we can confirm it works by checking that the output map is displaying correctly in rviz, and by viewing the transform graph to check the transform.
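For viewing the transform graph, something like this should work (assuming tf2_tools is installed, as it is on a desktop install):

```shell
# dump the whole tf tree to a PDF in the current directory
ros2 run tf2_tools view_frames
# or watch a single transform live:
ros2 run tf2_ros tf2_echo odom base_link
```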

Jiawei-Liao commented 5 months ago

Turns out I was being dumb and velodyne could be built.

Lots of stuff was being printed out at the start. It does say poll timeout, but I'm guessing that's because I don't have a lidar connected. [screenshot]

The slam_toolbox also gives an output, so hopefully it works. [screenshot]

Do I need to come into the workshop to test passing pointcloud output into slam_toolbox? (also not sure how to pass that data)

And also not sure what to do to create a launch file. Is it just my current bashrc?

AbBaSaMo commented 5 months ago

Nice, good to hear it's building.

You can find some pre-recorded pointclouds/rosbags in the MCAV team drive and use those when testing things at home and then with the actual lidar once you're in the workshop so we can make sure the actual hardware works.

This is the streetdrone dir in the team drive. Have a look at both the data and rosbag folders. You can play rosbags e.g. as described in this ROS thread.
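E.g., once you've downloaded a bag (the path below is hypothetical, use wherever you saved it):

```shell
# play back a recorded bag; all topics it contains get republished
ros2 bag play ~/Downloads/open_day_lidar
# in another terminal, verify the pointcloud is actually coming through
ros2 topic hz /velodyne_points
```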

As for passing it into slam, there's probably some ros2 topic slam_toolbox subscribes to. I would look at their README or docs and see if it specifies anything, e.g. in a configuration or setup section or smth.

AbBaSaMo commented 5 months ago

Oh and as for writing launch files, refer to the ROS2 tutes https://docs.ros.org/en/humble/Tutorials/Intermediate/Launch/Creating-Launch-Files.html and you can also take a look at other launch files in our repo to get an idea of how they work.
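A skeleton along the lines of those tutorials might look like this (an untested sketch; the pointcloud_to_laserscan node and its cloud_in remap come from later in this thread, and online_sync_launch.py is slam_toolbox's own launch file):

```python
# minimal_slam.launch.py -- sketch of a launch file chaining the nodes
# discussed in this issue (untested; treat everything here as a starting point)
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node


def generate_launch_description():
    slam_share = get_package_share_directory('slam_toolbox')
    return LaunchDescription([
        # convert the velodyne pointcloud into the LaserScan slam_toolbox expects
        Node(
            package='pointcloud_to_laserscan',
            executable='pointcloud_to_laserscan_node',
            remappings=[('cloud_in', '/velodyne_points')],
        ),
        # bring up slam_toolbox via its own launch file
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(
                os.path.join(slam_share, 'launch', 'online_sync_launch.py')),
        ),
    ])
```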

Jiawei-Liao commented 5 months ago

Got ros2 bag to play back data. The data is accessible under /velodyne_points. (Data is from GDrive lidar/open day.) [screenshot]

Following this guide on slam_toolbox https://www.youtube.com/watch?v=hMTxb8Y2cxI

Nav2 and slam_toolbox use data from the /scan topic, so I had to remap it to /velodyne_points. Might as well start on the launch file. This is the launch file, created with the help of ChatGPT: [screenshot]

The build compiles successfully. However, when running that launch file, this error is given: [screenshot]

nav2_bringup exists under /opt/ros/humble/share. Not sure why it's trying to find it under lib. I don't think it would be correct to move nav2_bringup into /lib. I couldn't find any solutions for this online...

Jiawei-Liao commented 5 months ago

https://www.youtube.com/watch?v=ZaiA3hWaRzE This guide doesn't seem to use nav2.

Jiawei-Liao commented 5 months ago

Visualizing the velodyne_points topic alone using rviz2 shows the car moving nicely. [screenshot] (Although it doesn't look like other images online.)

Turns out, what I did today is not correct. The velodyne_points topic can be transformed into a laserscan using this: https://github.com/ros-perception/pointcloud_to_laserscan/tree/humble After building, it can be run using this command: ros2 run pointcloud_to_laserscan pointcloud_to_laserscan_node --ros-args --remap cloud_in:=/velodyne_points

When running the slam toolbox (and nav2) on the scan topic, it gives this error: [screenshot]

Still, I tried to visualize it using rviz2 again: ros2 run rviz2 rviz2 -f velodyne, then add pointcloud2. Using the scan topic, a small number of white dots could be seen. [screenshot] When following the video from the first comment from today, nothing is shown in rviz.

Trying to fix dropping messages error by searching online: https://answers.ros.org/question/389383/slam_toolbox-message-filter-dropping-message-for-reason-discarding-message-because-the-queue-is-full/

tf tree: [screenshot] Not sure how this helps, but buffer length 0 does seem concerning. It also seems rather small?

odom -> base_link transform: I tried this command: ros2 run tf2_ros static_transform_publisher 0 0 0 0 0 0 map scan. Map and scan don't seem to make much sense, so I also tried replacing them with odom and base_link, and tried different combinations of them. None worked or helped, although the output from nav2 changed from something like "no odom" to also dropping messages because the queue is full. When checking the odom topic, nothing is being output.

AbBaSaMo commented 5 months ago

At a glance, the pointcloud_to_laserscan pkg has min and max height parameters. It might be the case that the default values, or whatever you provided (if any), are too limiting and are thus only letting through the few points within the current height range.
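E.g., the same conversion command from earlier in the thread, but with explicit height bounds so more of the cloud survives the 2D projection (min_height/max_height are the pkg's documented parameters; the values here are guesses to tune):

```shell
# widen the height slice taken from the pointcloud before flattening to a scan
ros2 run pointcloud_to_laserscan pointcloud_to_laserscan_node --ros-args \
  --remap cloud_in:=/velodyne_points \
  -p min_height:=-1.0 -p max_height:=1.0
```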

As for the original pcl2 image not looking right compared to ones online: it looks as expected to me, but yeah, I'm also curious why our lidars are so sparse in general. Might just be the tech or something.

Just confirming and for personal learning, was what you did wrong specifically because slam requires laserscan messages rather than pcl2?

As for the odom -> base_link tf, we missed it in the stack, but yeah, it's a prereq/input as you've determined. [screenshot]

It seems like that tf would be produced by the robot_localization package https://answers.ros.org/question/271168/how-does-robot_localization-package-work-and-what-should-the-output-be/

@dylan-gonzalez is there any way to mock this transform at this point, or would getting robot_localization set up first be more appropriate?

dylan-gonzalez commented 5 months ago

if by base_frame they mean the map frame, then you can just publish a static transform from map -> odom which is the usual mock transform when localisation hasn't been set up first

https://docs.ros.org/en/foxy/Tutorials/Intermediate/Tf2/Writing-A-Tf2-Static-Broadcaster-Cpp.html#the-proper-way-to-publish-static-transforms

so you'd publish an identity transform in this case

but if by base_frame they mean base_link, then I'm not too sure... I guess you could do what I said above but for base_link, but I'm not sure if it makes sense to...

dylan-gonzalez commented 5 months ago

But it seems like a good idea to set up robot_localization first https://robotics.stackexchange.com/questions/101774/unable-to-create-a-tf2-broadcaster-from-odom-to-base-link

Jiawei-Liao commented 5 months ago

I don't think it's due to min and max height, since there were a few more white dots (just not shown in the screenshot), and I believe there should be more dots in between. Maybe because there are few points originally from pc2, fewer are generated for the laserscan?

I'm not too sure on the specifics of requiring laserscan. There were a few sources online, such as this one (https://github.com/SteveMacenski/slam_toolbox/issues/141), that mention laserscan. The /scan topic wasn't being published when playing back rosbag data; it was published after running that pc2-to-laserscan node.

In the slam toolbox GitHub, it says that it is used for 2D mapping. I think pc2 is 3D? So maybe that's why it needs to be converted to laserscan, which I heard is 2D or has something to do with 2D.

In rviz, I also noticed this error in global status, fixed frame: "No tf data. Actual error: Frame [velodyne] does not exist". Not sure how to make use of this message. I'm guessing it means a topic with the name velodyne? which is true.

From Dylan's sources, does slam toolbox also need odom and map? The slam toolbox git just says scan is required and map is published by it. Echoing /scan, there is output from it. Echoing /map and /pose gives no output.

Is the static transform just this command? ros2 run tf2_ros static_transform_publisher 0 0 0 0 0 0 base_link odom I have tried this before, but nothing happened. Or did I use it wrong?

I guess I can try the robot_localization

I feel like my biggest issue right now is slam toolbox dropping messages. Would robot_localization help?

Jiawei-Liao commented 5 months ago

Also just for my own reference, the command used to launch rviz2 to show pc2 and laserscan is: ros2 run rviz2 rviz2 -f velodyne Then add pointcloud2 and/or laserscan to the group on the left.

dylan-gonzalez commented 5 months ago

For the static transform publisher command, there should be a 1 in there somewhere (not all 0s), but I'm not sure if it makes sense to do it with base_link -> odom
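E.g. in the 7-argument (quaternion) form of the command, the 1 is qw, giving an identity rotation; a sketch of the usual mock transforms while robot_localization isn't up (frame pairs per the discussion above):

```shell
# args are: x y z qx qy qz qw parent_frame child_frame
ros2 run tf2_ros static_transform_publisher 0 0 0 0 0 0 1 map odom
ros2 run tf2_ros static_transform_publisher 0 0 0 0 0 0 1 odom base_link
# verify the chain now resolves:
ros2 run tf2_ros tf2_echo map base_link
```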

For the velodyne frame error, try viewing rviz from the velodyne fixed frame

Jiawei-Liao commented 5 months ago

After reading online and looking at the rviz error messages, these were the static transforms that seemed useful:

ros2 run tf2_ros static_transform_publisher 0 0 1 0 0 0 odom base_link
ros2 run tf2_ros static_transform_publisher 0 0 1 0 0 0 map odom
ros2 run tf2_ros static_transform_publisher 0 0 1 0 0 0 odom velodyne
ros2 run tf2_ros static_transform_publisher 0 0 0 0 0 0 1 map velodyne

The first 2 were from online, saying that it was necessary to have map -> odom -> base_link. The last 2 were from rviz, saying that the [velodyne] fixed frame did not exist, plus some subsequent warnings/errors. Not sure if all of these are correct or needed...

Doing this, slam toolbox gives a new error message: [screenshot]

When running this command: ros2 run rviz2 rviz2 -d /opt/ros/humble/share/nav2_bringup/rviz/nav2_default_view.rviz this was produced: [screenshots] These are all 2D and kind of look like a map. However, it is not a map, since from those 2 pictures the scans from the first location don't show up in the second. This is probably because, in rviz, the map display gives a warning/error of no map received.

This might be because there is no output from map topic. I couldn't find anything online on how to solve it.

AbBaSaMo commented 5 months ago

@Jiawei-Liao btw the lidar's location is now known [esda hardware drawer in ws2], so you can start using/testing with it. Remember to go through the hardware checkout process as described in Slack.

AbBaSaMo commented 5 months ago

@dylan-gonzalez devices can't detect the ethernet cable when plugged in from the lidar. Any idea why? Tried 2 devices and 2 different cables with no luck.

dylan-gonzalez commented 5 months ago

It looks like the barrel jack power connector you were using was 5V, and I'm pretty sure it needs 12V. The 12V ones we have are broken though, so we have to source another one.

@jaimasters

dylan-gonzalez commented 5 months ago

Wait, never mind, we got it working, we found another barrel jack connector:

sudo ifconfig eth0 192.168.10.100
sudo route add 192.168.10.201 eth0

Make sure to change eth0 to the corresponding Ethernet interface

192.168.10.201 is the current VLP16's IP address (until further notice; I think Jai wants to change the IP address to sit on the mcav network subnet)

The web page can be accessed via http://192.168.10.201 (don't use https)

Also the barrel jack connector that works is the one that connects to the wifi router (we should get another one)
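For anyone repeating this setup, a quick way to confirm the sensor is reachable after the interface config above (addresses as listed in this comment):

```shell
# the lidar should answer pings on its static address
ping -c 3 192.168.10.201
# and serve its status page over plain http (not https)
curl http://192.168.10.201
```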

AbBaSaMo commented 5 months ago

Lidar is working following Dylan's steps above and outputting laserscan and pcl2. Closing this issue now.

[screenshot]