Field-Robotics-Lab / nps_uw_multibeam_sonar

Multibeam sonar plugin using the NVIDIA CUDA library
Apache License 2.0

Validation and verification case set-up #5

Open woensug-choi opened 3 years ago

woensug-choi commented 3 years ago

I've set up this issue to continue discussing the target objects, as well as other possible reference data we could use.

@kerednoslo @bsb808 Could you list existing data we could possibly use? @lauralindzey and I have been discussing candidates for her tank experiments (expected to happen Jan/Feb). So far, a cylinder seems doable, but she wants to evaluate performance in the elevation direction (not only looking forward but also looking at the seabed).

@lauralindzey We are trying to separate the sonar plugin from the sensor repo into its own repository (this one). I hope this transition doesn't mess up your testing environments. You may try this with PR https://github.com/Field-Robotics-Lab/nps_uw_sensors_gazebo/pull/44. Continuing our last conversation over e-mail, I do not have much preference for the radius/materials; in general, the bigger and heavier the better. Bigger means a better chance to investigate angle dependency, and heavier means more reflection. Do you already have data for a plane at oblique angles? It could be a good way to initiate the set-up procedures for simulation-experiment comparisons. I think the lattice of cylindrical pipes is too complicated; we could try initial tests with a lattice of bolts instead.

kerednoslo commented 3 years ago

@woensug-choi I sent an email to Kevin Williams at APL-UW asking for his datasets. It's fine to publish this data, but hosting it on GitHub might not be OK. We'll have to see what he says. The two data sets are PondEx and TRex. Both are of objects on the seabed.

lauralindzey commented 3 years ago

@woensug-choi - I'm still planning to do some in-tank data collection in the second half of February. We're still waiting for the hardware to be ready, but that's all chugging along. I have work of my own that I'll be doing in the tank, and I'm more than happy to collect a dataset that'll help make this simulation better =)

Our sonar is an Oculus M1200d. We typically use it at 2.1 MHz, but it also has a 1.2 MHz mode.

Here's a starting point for the data collection plan:

1) Pipe mounted vertically in the center of the tank.
   a. Sonar looking at it head-on, with image plane as horizontal as possible.
   b. Set of angled views at ~15 degree increments.

2) Sonar fixed with image plane horizontal; move board through FOV of sensor while held at various orientations w/r/t sensor plane. (Aiming for 15 degree increments, but we don't have great control of the board while moving it. An alternative would be a stationary board and a fixed number of sonar poses looking at it.)

Given that we're going to spend time collecting a dataset for you, I'd like to make sure it's maximally useful. Any additional details you can give about what you need/refinements to the procedure would be great, and I've added some questions in-line for you.

Q1: Is it valuable to you to get data from both frequencies?

Q2: Do you need just a single frame from each scene, or do you want a bagfile of each scene? (I'm not sure how you're planning to characterize noise, and I don't know how much it varies in a static scene.)

Q3: How precisely do you need to know the pose of the object relative to the sonar? Or is it sufficient to simply know what the object is?

For the cylinder, I think we can just get some pipes from the hardware store. A quick search found this 4" diameter PVC pipe.

Q4: Do you want to pick out a better cylinder? If we place an order from McMaster Carr, they have a very large assortment of metal and plastic pipes. I don't have time to sort through it, but if you send me a link to what you want, we can probably add it to our next order.

For the inclined plane, we already have a dibond board with a charuco pattern printed on it. However, it can be pretty faint when observed at a high angle. In the first image, I roughed up the surface with sandpaper in two of the quadrants of the plus sign, and there's a noticeable increase in reflection strength. The bright dots are thanks to bolts we added to enable a camera-sonar calibration. In the second image, the board is barely visible. [two sonar images attached]

Q5: I'm guessing a different planar material would be better for your dataset. Any preferences? If it's something we can print a charuco board on we can easily get its pose relative to the sensor for ground truth data =)

For context, here's a pic of our tank setup: [photo of the test tank attached] The sonar is attached to the end of the metal arm, and can be rotated around the perimeter of the tank as well as rolled and pitched. All motions are performed by hand, and there are no encoders on the system. We do have april tags inside the tank, so if the sonar + camera have a good relative calibration we can get a trajectory for the sonar.

woensug-choi commented 3 years ago

@lauralindzey Thank you for the details!

Q1: Is it valuable to you to get data from both frequencies?

Yes! It would be worth double-checking and discussing the relationship between distance resolution and sonar frequency, and I also have some hope of observing beam reverberation interference (with varying distances between two cylinders).
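
As background for framing that comparison (standard textbook relations, stated as assumptions rather than Oculus specs):

```latex
% Range resolution follows the transmit bandwidth B, while angular
% resolution follows the wavelength lambda = c/f for a fixed aperture L.
% Generic relations for discussion, not measured device values.
\delta r \approx \frac{c}{2B}, \qquad
\theta \approx \frac{\lambda}{L} = \frac{c}{f\,L}
```

Higher-frequency modes typically allow both a wider bandwidth and a narrower beam, so range and angular resolution tend to improve together.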

Q2: Do you need just a single frame from each scene, or do you want a bagfile of each scene? (I'm not sure how you're planning to characterize noise, and I don't know how much it varies in a static scene.)

I am not sure what you mean by this. If the bagfile means raw data (distance-intensity), then yes, it would be great to have it. A tank without any acoustic buffer layer (anechoic paint or walls) would have large background noise, although I do not have much experience at these high frequencies; looking at the sonar image, I believe the effect is negligible. Nevertheless, comparing the Gaussian noise values in the simulator with the effect of the tank's reverberant acoustic field, with and without the tank model, might give us some hints on how to model the noise better.
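
To make that comparison concrete, here is a minimal sketch of the kind of zero-mean Gaussian noise term I mean (illustrative only; the function name and signature are made up, not the plugin's actual implementation):

```cpp
#include <random>
#include <vector>

// Illustrative only: perturb per-bin intensities with zero-mean Gaussian
// noise, so the simulated noise floor can be compared against tank data.
void addGaussianNoise(std::vector<float>& intensities, float stddev)
{
  static std::mt19937 gen{std::random_device{}()};
  std::normal_distribution<float> noise(0.0f, stddev);
  for (auto& value : intensities)
    value += noise(gen);
}
```

The question would then be which standard deviation best reproduces the background level seen in the tank data.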

Q3: How precisely do you need to know the pose of the object relative to the sonar? Or is it sufficient to simply know what the object is?

Do you mean where the object is? Enough precision for us to model the whole scene would be good enough; machine-processing-ready precision is not required.

Q4: Do you want to pick out a better cylinder? If we place an order from McMaster Carr, they have a very large assortment of metal and plastic pipes. I don't have time to sort through it, but if you send me a link to what you want, we can probably add it to our next order.

I believe the metal one would be better. If possible, two rods placed at different distances from each other would be interesting, as well as at different distances from the wall.

Q5: I'm guessing a different planar material would be better for your dataset. Any preferences? If it's something we can print a charuco board on we can easily get its pose relative to the sensor for ground truth data =)

Again, a metal plate would do better. An L-shaped plate would also be interesting, for looking at the concave/convex response.

Thank you for sharing your plans and thoughts! If you are able to get the metal plates, I believe you would attach a plastic plate with the charuco pattern on top of them with bolts. I think this will also work.

Data from the board at different angles is also worthwhile for the simulator's reflectivity calculation model, which is currently a simple cosine model. For your purposes, I believe the goal is to check up to what angle the sonar can distinguish the board's edges, by comparing the area in the sonar image. To do so, views at every angle with a constant increment would be needed. If that is not possible, I can compare, say, 90-degree, 30-degree, and 15-degree angles, fit a reflection model, and generate images for the other angles.
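
For reference, a minimal sketch of what I mean by the simple cosine reflectivity model (illustrative only; the function name and parameters are made up, not the plugin's actual API):

```cpp
#include <algorithm>
#include <cmath>

// Illustrative cosine model: returned intensity scales with the cosine of
// the incident angle between the beam and the surface normal, weighted by
// a per-material reflectivity coefficient. No return past grazing incidence.
double cosineReflectivity(double incidentAngleRad, double materialReflectivity)
{
  return materialReflectivity * std::max(std::cos(incidentAngleRad), 0.0);
}
```

The board-angle data would let us check how far the real angular response deviates from this cosine falloff.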

Do you happen to have a hydrophone? I'm not sure what the source level is for these sonars.
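
In case it helps explain why I'm asking: with a calibrated hydrophone at a known range r, the source level can be backed out from the usual one-way terms (textbook sonar-equation form, stated as an assumption, not specific to these units):

```latex
% SL: source level, RL: level received at the hydrophone,
% TL: one-way transmission loss from spherical spreading plus absorption
% (the absorption coefficient alpha is significant at MHz frequencies).
SL = RL + TL(r), \qquad TL(r) \approx 20 \log_{10} r + \alpha r
```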

woensug-choi commented 3 years ago

@lauralindzey Made a PR to draft the simulation environments at https://github.com/Field-Robotics-Lab/nps_uw_multibeam_sonar/pull/10. Could you give it a quick look? There are two more PRs on their way that are ready to merge; they include variational reflectivity and some fixes for the incident-angle calculation. It may look slightly different after those PRs.
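
For anyone following along, the incident angle in question is the angle between the viewing ray and the surface normal; a minimal sketch of the geometry (hypothetical helper, not the PR's actual code):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

// Angle between a unit ray direction (sonar toward surface) and a unit
// surface normal; the ray is reversed so a head-on hit gives zero angle.
double incidentAngle(const Vec3& rayDir, const Vec3& normal)
{
  double d = -(rayDir.x * normal.x + rayDir.y * normal.y + rayDir.z * normal.z);
  d = std::max(-1.0, std::min(1.0, d));  // clamp for numerical safety
  return std::acos(d);
}
```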


woensug-choi commented 3 years ago

@lauralindzey @bsb808 I've drafted plans for the experiments so that we can discuss them. Please give your thoughts! https://docs.google.com/document/d/1jISmNKEtNjsZYoToNY7prpVU92aKSmP9bpRCgvmhaXk/edit?usp=sharing @kerednoslo Thank you for the ideas! The paper you shared was handy for benchmarking.

lauralindzey commented 3 years ago

@woensug-choi requested an update on where APL was with regards to actually collecting some data. So, here's my brain dump, and apologies if this is more than you wanted to know.

Our sensor is currently in the tank and spitting out data:

However, I'm also not 100% happy with the sensor configuration, which is why I say any dataset I get you now will be preliminary:

So, I'll be emailing the manufacturer next week to ask about both of these.

woensug-choi commented 3 years ago

https://docs.google.com/document/d/1dLpU1iU7KEFMYI97FGPcv0QFfXcAOutQpveeoitDGrY/edit?usp=sharing

woensug-choi commented 3 years ago

https://docs.google.com/document/d/1ZFIumP2JH-jZ5b2GVY7hYg3_UzqeCl6LdGgNKQtUA8o/edit?usp=sharing

woensug-choi commented 3 years ago

Summary of everything, in slides format: https://nps01-my.sharepoint.com/:p:/g/personal/woensug_choi_ks_nps_edu/EQfnpGrSY_1Elg8HYtM2kmYBwjURVZXZ1StK52AAyg-jkA?e=sAhg9k

The source code used to generate those images is at https://github.com/Field-Robotics-Lab/nps_uw_multibeam_sonar/pull/20, which will not be merged.