Closed JennahF closed 5 years ago
Thank you for the question. The ids of the markers, from left to right and top to bottom, are: pg1: 7, 3, 4, 5, 11, 12; pg2: 9, 10, 0, 6, 1, 2; pg3: 8
I am not sure failure to identify the markers is the real issue, though. Marker-based registration is only one sub-procedure of the overall registration: if it fails for a frame, ICP is used to register subsequent frames, followed by loop closure. So even if markers can't be identified for a few frames, it shouldn't be much of a problem. Besides, the example image in your dataset looks clear, with plenty of markers visible.
I also couldn't quite tell from the snapshot of the mesh whether the registration actually failed; the view of the point cloud in MeshLab is quite zoomed out, so I can only see the noise. If you zoom in, can you see the registered object along with the markers? Something like the picture below:
I wrote this code for a class project in a rush, so it is somewhat hacky, and it hasn't been tested by anyone other than me. I would really like to make it more useful to anyone who wants to use it. If you don't mind, could you send me your example dataset so I can try to replicate the error you're getting? My email address is faninedinburgh@gmail.com.
Hi, I have made major updates to the code. There used to be several places where you needed to manually input the camera intrinsics; now the required parameters are saved automatically into the sequence folder. If it's not too inconvenient, I recommend recording the sequence again with the new code. Thanks
Sorry for not replying in time. The mesh has gotten better, but we still cannot use it for further research. We think the problem lies in the lighting. Give us some more time and we should figure it out. The raw data is too big to upload by email.
Hi, I've got some good news. After we downloaded and reprinted the same aruco markers from http://chev.me/arucogen/ and chose a bright day with good light for the recording, the result became much better. We also found that shaking influences the performance a lot, so we fixed the camera onto a robot arm. The mesh after running compute_gt_poses.py now looks like this: We also ran register_segmented.py, but then got an error when running create_label_files.py: I don't know how to fix it, so I processed the point cloud manually and got this: I'm stuck again. I will send the data to your email. Could you please help us find where the problems are? Thank you very much.
Hi Jennah, please see my email response.
The error complained about here means that the mesh being loaded is not a proper triangular mesh with both vertices and faces. This is most likely because you didn't perform Poisson surface reconstruction on the point cloud. The files generated by the previous step (registered_scene and registered_segmented) are registered point clouds that have only "vertices" and no "faces".
As with the previous issue, I am glad to see the result improving. With poor lighting conditions and/or jerky movements, the registration is much more likely to fail, and the combination of the two surely makes things even worse. I have good lighting in my lab and I personally never had a registration fail when moving the camera by hand. But if you have a robot with a decent workspace, it is much better to move the camera with the robot arm; and if you calibrate the camera against the arm, you can get an estimate of the transforms too.
Let me know if this solves the issue, thanks! Fan
Hi again. Considering what you said in the email, that our color and depth images were not aligned, we added some lines to your code: But the mesh came out like this: This is a cup. This is a cube.
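For reference, aligning depth to color boils down to: deproject each depth pixel with the depth intrinsics, transform the 3D point by the depth-to-color extrinsics, and reproject it with the color intrinsics. A minimal sketch of that mapping (hypothetical parameter names; identity extrinsics by default, which is what the on-device aligners like pyrealsense2's rs.align handle for you):

```python
import numpy as np

def depth_pixel_to_color_pixel(u, v, z, depth_K, color_K,
                               R=np.eye(3), t=np.zeros(3)):
    """Map depth pixel (u, v) with depth z (meters) into the color image.

    depth_K / color_K are dicts with fx, fy, ppx, ppy, as in the realsense
    intrinsics printout. R, t are the depth-to-color extrinsics.
    """
    # deproject: pixel + depth -> 3D point in the depth camera frame
    x = (u - depth_K["ppx"]) / depth_K["fx"] * z
    y = (v - depth_K["ppy"]) / depth_K["fy"] * z
    p = R @ np.array([x, y, z]) + t  # transform into the color camera frame
    # reproject with the color intrinsics
    uc = p[0] / p[2] * color_K["fx"] + color_K["ppx"]
    vc = p[1] / p[2] * color_K["fy"] + color_K["ppy"]
    return uc, vc
```

With identity extrinsics and identical intrinsics every pixel maps onto itself; any mismatch between the two intrinsics sets (e.g. wrong resolution) shows up directly as the kind of offset seen in the mesh above.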
Sorry, I didn't check my mailbox! Let me run record2.py first and I will let you know the result!
The result of record2.py is like this: I don't understand..
Hi Jennah,
The last one seems to be a misalignment problem, would mind sending me the original data? Thanks.
ok sure! But I can only use gmail because I'm in China and I can't use outlook
Gmail works for me!
Hi Jennah, I am still waiting for your access approval for the raw data
Really sorry. I waited for your email that day but didn't get it, and I forgot to check Gmail after that. It is approved now.
Hi Jennah, I checked your original files. The issue is that the color intrinsics provided are for a 720 by 1280 image, rather than the 480 by 640 images record2.py grabs. After I scaled your intrinsics to the correct image size, the alignment is good again. Did you by any chance input the intrinsics yourself? Otherwise it could be a problem with librealsense, and I need to make some modifications to force the grabbed intrinsics and the image size to be consistent.
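For anyone hitting the same mismatch: scaling pinhole intrinsics to a new resolution just multiplies fx/ppx by the width ratio and fy/ppy by the height ratio. A standalone sketch (the dict layout follows the realsense intrinsics printout quoted earlier in this thread):

```python
def scale_intrinsics(K, new_width, new_height):
    """Rescale pinhole intrinsics recorded at K['width'] x K['height']."""
    sx = new_width / K["width"]    # width ratio scales fx and ppx
    sy = new_height / K["height"]  # height ratio scales fy and ppy
    return {**K,
            "width": new_width, "height": new_height,
            "fx": K["fx"] * sx, "ppx": K["ppx"] * sx,
            "fy": K["fy"] * sy, "ppy": K["ppy"] * sy}
```

For example, taking the 1280 by 720 intrinsics from the original question down to 640 by 480 halves fx and ppx and multiplies fy and ppy by 2/3. Note the distortion coefficients are resolution-independent and stay as they are.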
I have also sent you the raw data and registered scene through email.
Thanks.
Please feel free to reopen the issue any time if you are still experiencing the same problem.
@F2Wang Hello, I ran into the same problem: I cannot get the mesh correctly. My photos are below.
Could you please give me some advice?
@YanqingWu It's difficult to tell by looking at one frame, could you send the original sequence to my email: faninedinburgh@gmail.com? thanks
Thanks for your reply. I have solved it, just by increasing the camera frame rate.
We ran into a problem after running compute_gt_poses.py. 'Mesh saved' is printed, but when we open the .ply in MeshLab we get this result. The environment is like this: An example of the color pictures from the JPEGImages folder is: Our camera is a RealSense D435. The camera depth intrinsics are: {'fx': 640.292, 'fy': 640.292, 'height': 720, 'width': 1280, 'coeffs': [0, 0, 0, 0, 0], 'ppy': 357.747, 'ppx': 647.852, 'model': 2}. The resolution of both color and depth is 1280 by 720.
We think the reason is that the markers are not identified, but we don't know the ids of the markers. Can you provide them? Thanks.
Hello, I'm curious to know why the two top markers in the environment image are duplicates of each other?
Hi, I am also stuck with my dataset. Can I send my files to your mailbox? I use a RealSense D435. Thanks!