Welcome to the reality of Simulation Vs Real :smile:
What are you using to identify the shape?
Hi @eupedrosa,
I just created a simple program using cv2.findContours to see how it would work. I think one of the problems is this:
But I think processing the image is not exactly the right way to go, because it could change the shape of the rectangle.
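For reference, a minimal sketch of a cv2.findContours experiment of that kind; the file name, depth range, and rectangle check below are illustrative guesses, not the actual program:

```python
import cv2
import numpy as np

# Illustrative only: threshold a depth image into a binary mask and look for
# roughly rectangular contours.
depth = cv2.imread('depth_sample.png', cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0
mask = ((depth > 0.5) & (depth < 3.0)).astype(np.uint8) * 255  # keep points between 0.5 m and 3 m

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    # approximate each contour by a polygon; a convex 4-vertex polygon suggests a rectangle
    approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
    if len(approx) == 4 and cv2.isContourConvex(approx):
        print('candidate rectangle, area =', cv2.contourArea(approx))
```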
You may try a combination of erode and dilate to handle that specific problem. But it is not a generic solution.
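A minimal sketch of that erode + dilate combination (a morphological opening) on the binary mask; the 5x5 kernel size is just a guess to tune:

```python
import cv2
import numpy as np

kernel = np.ones((5, 5), np.uint8)                  # structuring element (size to be tuned)
eroded = cv2.erode(mask, kernel, iterations=1)      # removes thin bridges and small noise
cleaned = cv2.dilate(eroded, kernel, iterations=1)  # grows the surviving blobs back to size

# equivalent single call:
cleaned = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```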
It improves the image but it still doesn't detect the shape.
What improves the image? Erode?
Take a look at the image type as we discussed ...
Another question: what is the real distance between the pattern and the support bar in the highlighted area?
> What improves the image? Erode?
> Take a look at the image type as we discussed ...
This is an erode followed by a dilate.
> Another question: what is the real distance between the pattern and the support bar in the highlighted area?
I didn't measure, but as you can see here, they are not very close. I would say 20-30 cm.
Hi @danifpdra
I was there in the lab and took this picture. Is the problem the parts I painted green? If so, why not cut the bar at the red regions?
Why do you think that's the problem? That's not what's visible on the depth image
I thought it was ... in that case let's hope the data compression issue saves the day ...
I looked into the compression, and in the bag files the depth images are 32-bit floats:
```yaml
header:
  seq: 177
  stamp:
    secs: 720
    nsecs: 893000000
  frame_id: "world_camera_depth_optical_frame"
height: 720
width: 1280
encoding: "32FC1"
is_bigendian: 0
step: 5120
data: "<array type: uint8, length: 3686400>"
```
But you save them to files with 8 bits. There is a significant loss of resolution here, which I think may be the problem ... I will look into it a bit more.
I would need the bag file with real data for testing (10 secs is enough). Can you send it?
Hi @danifpdra ,
I pushed a new version which correctly converts the float32 image to uint16, then saves it, then loads it again.
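I don't know which scaling the new version uses, but a common way to convert a 32FC1 depth image (in meters) to uint16 without the 8-bit truncation is to store millimeters, roughly like this:

```python
import cv2
import numpy as np

# depth_f32 is the 32FC1 image from the bag, in meters; NaNs become 0
depth_mm = np.nan_to_num(depth_f32) * 1000.0            # meters -> millimeters
depth_u16 = np.clip(depth_mm, 0, 65535).astype(np.uint16)
cv2.imwrite('depth_0001.png', depth_u16)                # 16-bit PNG, no 8-bit truncation

# load it back and convert to meters again
loaded = cv2.imread('depth_0001.png', cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0
```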
> I would need the bag file with real data for testing (10 secs is enough). Can you send it?
I can't download such a large bag. Can you produce a 20 sec bag? For testing it's more than enough ...
Hi @miguelriemoliveira,
I was looking at this, and I think it should work with floats but the uploaded images appear all black... https://datacarpentry.org/image-processing/edge-detection/index.html
```python
import cv2
import skimage.io
import skimage.feature

# image_from_bag / image_from_bag_float32 are the depth images read from the bag (not shown here).
# Otsu picks the high Canny threshold automatically; note that cv2.threshold with THRESH_OTSU
# expects an 8-bit single-channel image, so it may fail or misbehave on raw float32 depth.
high_thresh, thresh_im = cv2.threshold(image_from_bag, 0, 5000,
                                       cv2.THRESH_BINARY + cv2.THRESH_OTSU)
low_thresh = 0.5 * high_thresh
sigma = 1

skimage.io.imshow(image_from_bag_float32)
skimage.io.show()

edges = skimage.feature.canny(image=image_from_bag_float32, sigma=sigma,
                              low_threshold=low_thresh, high_threshold=high_thresh)
skimage.io.imshow(edges)
skimage.io.show()
```
Are you familiar with scikit-image? Could this be because the image is not being read with `image = skimage.io.imread(fname=filename, as_gray=True)`?
I have never used it, only heard about it. I would say keep trying to find a way ... the idea of loading in the scikit-image format sounds interesting.
Hi @danifpdra,
if you want, I can try to do a flood fill manually for uint16 images ... let me know if you think this would be interesting...
No need, I will do it
I managed to isolate this for this case
Ok, but this is a very specific algorithm. Is your idea to use this to proceed with real data?
Or are you starting with the simulated data?
Why do you say this is a very specific algorithm?
It works on both; I was just making some experiments. I already started to insert code into ATOM's depth modality for labelling, but I wasn't sure how to get the chessboard points, so I started experimenting, and it works for both cases... It just depends on the seed point.
RL Simulation
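For reference, a seeded flood fill on the depth image is roughly what this experiment amounts to. The sketch below uses OpenCV's cv2.floodFill with illustrative names and tolerances, and is not the code that ended up in ATOM:

```python
import cv2
import numpy as np

def flood_fill_pattern(depth_m, seed, tolerance=0.2):
    """Return a mask of pixels connected to `seed` whose depth differs by less than `tolerance` (meters)."""
    h, w = depth_m.shape
    mask = np.zeros((h + 2, w + 2), np.uint8)          # floodFill needs a mask 2 pixels larger
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)   # 4-connectivity, write 255 into the mask
    cv2.floodFill(np.nan_to_num(depth_m), mask, seedPoint=seed, newVal=0,
                  loDiff=tolerance, upDiff=tolerance, flags=flags)
    return mask[1:-1, 1:-1]

# seed placed somewhere on the pattern surface (illustrative coordinates)
pattern_mask = flood_fill_pattern(depth_image, seed=(640, 360))
```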
Hi @danifpdra ,
now I see this is working fine. I thought it was being fine-tuned for that particular case, but it seems to work in all cases. I would say the goal now is to see these labels being shown in rviz while running the data collector.
BTW, how much time does it take to process?
Hi @danifpdra ,
I was working for quite a while on the flood fill, trying to use numpy arrays.
In the end it is slower :(
I did a pyrdown of the image twice and now I can get a time of 0.2 secs ... do you think this would be ok?
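For reference, the double pyrdown is roughly this; the only bookkeeping is scaling the seed and the resulting pixel coordinates by 4 (2^pyrdown) to map between the small and full-resolution images (illustrative sketch):

```python
import cv2

small = depth_image
for _ in range(2):                       # two pyrDown calls: 1280x720 -> 640x360 -> 320x180
    small = cv2.pyrDown(small)

seed_small = (seed_x // 4, seed_y // 4)  # seed given in full-resolution coordinates
# ... run the flood fill / labelling on `small`, then multiply the resulting
# pixel coordinates by 4 to express them in the original image again
```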
Hi @miguelriemoliveira ,
I saw the published images and it works better, at least when labelling only this one sensor. But there's still the problem with that horizontal line that doesn't occur outside ROS. I will try to look into this.
Not sure what you mean. Can you post a bag file with that?
Image in the test script (outside the ROS spin)
Image from the interactive data labeler (inside the ROS spin)
Hi @danifpdra ,
I will take a look at this now...
Hi @danifpdra ,
good news: I have a new version that runs very fast: 0.18 secs (less than the canny) ... for the full resolution image.
If we do the two pyrdowns we get 0.04 secs.
Also, I have an additional idea which I think could make this even faster. I will try it out if I have the time.
Also, I know what was happening in the image above. It is a bit hard to explain so if you want we can talk about it later. I think now that should not happen anymore.
I will wrap up the function and then pass it back to you.
Beautifying the result image.
Missing: adding the points which are going to be in the dictionary
Hi @danifpdra ,
I was wrapping up the labelling depth msg function:
Now it produces a nice output image for visualization:
In simulation:
and real data:
Legend:
- red dots: seed points
- green line: detected edges
- orange points: subsampled points of the pattern's surface
- blue dot: next seed point
```python
def labelDepthMsg2(msg, seed_x, seed_y, propagation_threshold=0.2, bridge=None, pyrdown=0,
                   scatter_seed=False, subsample_solid_points=1, debug=False):
    """
    Labels rectangular patterns in ROS image messages containing depth images.

    :param msg: An image message with a depth image.
    :param seed_x: x coordinate of the seed point.
    :param seed_y: y coordinate of the seed point.
    :param propagation_threshold: maximum pixel difference under which propagation occurs.
    :param bridge: a CvBridge data structure, to avoid having to constantly create one.
    :param pyrdown: the number of times the image must be downscaled using pyrdown.
    :param scatter_seed: scatter the given seed in a circle of seed points. Useful because the given
                         seed coordinate may fall on a black rectangle of the pattern.
    :param subsample_solid_points: subsample factor of the solid pattern points to go into the output labels.
    :param debug: debug prints and shows images.
    :return: labels, a dictionary like {'detected': True, 'idxs': [], 'idxs_limit_points': []};
             gui_image, an image for visualization purposes which shows the result of the labelling;
             new_seed_point, pixel coordinates of the centroid of the pattern area.
    """
```
This is the function and its parameters. Take a look and then we can discuss if you think we need more improvements.
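A usage sketch based on the signature above, for anyone reading this later; the seed coordinates and node name are made up, and the real call lives inside the interactive data labeler:

```python
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from atom_calibration.collect.label_messages import labelDepthMsg2

bridge = CvBridge()

def callback(msg):
    # seed placed roughly where the pattern is expected; in ATOM it comes from the interactive marker
    labels, gui_image, new_seed_point = labelDepthMsg2(msg, seed_x=640, seed_y=360,
                                                       propagation_threshold=0.2,
                                                       bridge=bridge, pyrdown=2, debug=False)
    rospy.loginfo('detected=%s, %d pattern idxs' % (labels['detected'], len(labels['idxs'])))

rospy.init_node('label_depth_example')
rospy.Subscriber('/world_camera/depth/image_raw', Image, callback)
rospy.spin()
```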
Hi @miguelriemoliveira ,
I see you got excited about this! :-D I will take a closer look and get back to you
Yes I was. It was bothering me :)
Hi @miguelriemoliveira,
I cannot run this, I'm getting this error:
```
[ERROR] [1639392915.099592, 646.116562]: bad callback: <bound method InteractiveDataLabeler.sensorDataReceivedCallback of <atom_calibration.collect.interactive_data_labeler.InteractiveDataLabeler object at 0x7fa60bba6d60>>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/daniela/catkin_ws/src/calibration/atom/atom_calibration/src/atom_calibration/collect/interactive_data_labeler.py", line 227, in sensorDataReceivedCallback
    self.labelData  # label the data
  File "/home/daniela/catkin_ws/src/calibration/atom/atom_calibration/src/atom_calibration/collect/interactive_data_labeler.py", line 431, in labelData
    labels, result_image, new_seed_point = labelDepthMsg2(self.msg, seed_x=seed_x, seed_y=seed_y, bridge=self.bridge,
  File "/home/daniela/catkin_ws/src/calibration/atom/atom_calibration/src/atom_calibration/collect/label_messages.py", line 617, in labelDepthMsg2
    seeds = np.unique(seeds, axis=1)  # make sure the points are unique
  File "<__array_function__ internals>", line 5, in unique
  File "/usr/lib/python3/dist-packages/numpy/lib/arraysetops.py", line 274, in unique
    ar = ar.reshape(orig_shape[0], -1)
ValueError: cannot reshape array of size 0 into shape (0,newaxis)
```
Hi @danifpdra ,
try to run with the test script first.
https://github.com/lardemua/atom/blob/noetic-devel/atom_calibration/scripts/test_label_depth_msg
I did not adjust the collector yet. First we should validate with the test script, then we move forward to the integration of the collector.
How do I run it?
This doesn't work: `rosrun atom_calibration test_label_depth_msg topic:=/world_camera/depth/image_raw`
> This doesn't work: `rosrun atom_calibration test_label_depth_msg topic:=/world_camera/depth/image_raw`
and you have the bag file running?
yes
Try `rosrun atom_calibration test_label_depth_msg depth_image_topic:=/world_camera/depth/image_raw` (the remap argument is depth_image_topic, not topic).
Hi @miguelriemoliveira ,
I have implemented the frustum functionality for depth cameras
Hi @danifpdra ,
I tested it and it looks very nice. I have some suggestions for improvement, but perhaps this is not yet the final version. When you have it ready, let me know and I will take a more detailed look at it.
Hi @miguelriemoliveira,
If you would like to tell me your suggestions, I can improve everything at the same time!
OK, here they are:
1. world_camera/image_raw/frustum
2. It would be nice to see in the published image the part which is a direct projection of the interactive marker, in order to provide better visual feedback. I know the seed point is at some point the projection, but right after it starts tracking we lose the idea of where the projection is, which is important for moving the marker.
3. For the same reason as in 2, the interactive marker color / shape should be the same as the one shown in 2 (I think you already did this).
4. The frustum length could be smaller, or better, configurable (if it's not too much work).
5. For the frustum, do not publish a marker but rather a marker array. It is much easier to extend if we need anything more (see the sketch after this list).
6. The text should be closer and must have a different color, which means you have to do it manually by adding a text marker to the marker array in 5.
7. The system seems to respond slowly ... are you doing image pyrdown?
Keep up the great work.
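Regarding points 5 and 6 above, a minimal sketch (not the actual ATOM code) of publishing the frustum edges plus a text label as a single MarkerArray; the frame, topic, geometry, and colors are placeholders:

```python
import rospy
from geometry_msgs.msg import Point
from visualization_msgs.msg import Marker, MarkerArray

rospy.init_node('frustum_marker_example')
pub = rospy.Publisher('world_camera/image_raw/frustum', MarkerArray, queue_size=1, latch=True)

lines = Marker()
lines.header.frame_id = 'world_camera_depth_optical_frame'  # placeholder frame
lines.ns = 'frustum'
lines.id = 0
lines.type = Marker.LINE_LIST
lines.action = Marker.ADD
lines.pose.orientation.w = 1.0
lines.scale.x = 0.01                     # line width (m)
lines.color.g = 1.0
lines.color.a = 1.0
# every consecutive pair of points is one frustum edge; the real geometry goes here
lines.points = [Point(0, 0, 0), Point(0.5, 0.3, 1.0)]

text = Marker()
text.header.frame_id = lines.header.frame_id
text.ns = 'frustum'
text.id = 1
text.type = Marker.TEXT_VIEW_FACING
text.action = Marker.ADD
text.pose.position.z = 0.2               # keep the label close to the camera
text.pose.orientation.w = 1.0
text.scale.z = 0.1                        # text height (m)
text.color.r = 1.0                        # a different color from the lines
text.color.a = 1.0
text.text = 'world_camera frustum'

pub.publish(MarkerArray(markers=[lines, text]))
rospy.sleep(1.0)                          # give the latched message time to go out
```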
Hi @miguelriemoliveira
I tested the bag I recorded in the morning. It works well with the test script but when integrated in ROS it's throwing me a bunch of errors. Do you have time for a quick zoom today?
I am doing some tests with shape detection algorithms.
Simulation looks great:
I went to the lab to retrieve some real images, but the quality is not very good and this algorithm doesn't work in this case.