BetterLYY opened this issue 4 years ago
I have been trying to generate training data recently, but I have made no progress. Is there a solution? @BetterLYY
If your problem is generating the npy file and then running detection, you can store each of the four kinds of data contained in the npy file in its own txt file and read those files in his source code, instead of insisting on generating the npy file he describes.
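For reference, a minimal sketch of that workaround, assuming the demo .npy stores a dict with the four fields named later in this thread (depth, image, smoothed_object_pc, intrinsics_matrix); the file path is illustrative:

```python
import numpy as np

# Path is illustrative; point it at one of the provided demo files.
data = np.load('demo/data/example.npy', allow_pickle=True).item()

np.savetxt('depth.txt', data['depth'])                            # (H, W)
np.savetxt('smoothed_object_pc.txt', data['smoothed_object_pc'])  # (N, 3)
np.savetxt('intrinsics_matrix.txt', data['intrinsics_matrix'])    # (3, 3)
# 'image' is (H, W, 3); flatten the channel axis so savetxt accepts it.
np.savetxt('image.txt', data['image'].reshape(data['image'].shape[0], -1))

# Reading back where the source code expects the arrays:
depth = np.loadtxt('depth.txt')
pc = np.loadtxt('smoothed_object_pc.txt')
K = np.loadtxt('intrinsics_matrix.txt')
image = np.loadtxt('image.txt').reshape(depth.shape[0], depth.shape[1], 3)
```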
My main worry is getting the generation rules for these data wrong. I wrote a version (grabbing from a RealSense and assembling data in a format similar to the numpy one), but it still feels off. Have you finished this part?
I only run offline, so I haven't done this work.
So you have only run the data provided by the author, and haven't tried your own real data, right?
I used my own data. In the main program you can clearly see that four kinds of data are read from the npy file; just feed in those four kinds of data separately.
Buddy, that is exactly what I'm asking: how are these data generated? I generated some myself and pieced together the four dict entries, but it doesn't feel right. Could you explain, especially smoothed_object_pc?
@BetterLYY Would you mind leaving a WeChat ID? Mine is 935579178.
I asked the author about this part too, but got no reply. As I see it, this content is just the 3D point cloud of the target object, which you need to segment out yourself.
Since it involves our team's research, I'm afraid I can't share more details.
OK, I'll sort it out myself. Right now, after capturing with the RealSense, I do a lot of preprocessing before feeding the data into the network, and I keep feeling something in my processing is off.
@BetterLYY I'm now running data I captured myself, and the generated grasps are slightly offset. I'd like to ask: did you capture with a RealSense? What values do you use for the intrinsics_matrix/K parameters? I took the values straight from the provided numpy file, and I wonder whether the offset comes from a wrong intrinsics_matrix.
@BetterLYY The offset looks very strange.
@gzchenjiajun You can read the intrinsics from your own RealSense camera using the SDK tools. Try to use English so that more people can understand your issue and help you.
I used a Kinect and just used the camera parameters given in the npy file. You can also follow the suggestion above.
@BetterLYY You mean the one in the numpy data, right? I didn't see it in the code.
OK, now there's a question. The value I took is

K = np.array([[616.36529541, 0., 310.25881958],
              [0., 616.20294189, 236.59980774],
              [0., 0., 1.]])

I'm not sure what the K values at the different positions in the original npy data mean. I can now read the intrinsics from my RealSense D435 (width, height, ppx, ppy, fx, fy), but I don't know how to assemble them into K.
@imdoublecats thank you
@gzchenjiajun Assemble them as

K = [[fx, 0, cx],
     [0, fy, cy],
     [0,  0,  1]]
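In code, a minimal sketch of reading those values with the pyrealsense2 SDK and building K, assuming a D435 color stream at 640x480 (as suggested later in this thread):

```python
import numpy as np
import pyrealsense2 as rs

# Start a color stream at 640x480 and read its intrinsics.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)
try:
    intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
    K = np.array([[intr.fx, 0.0,     intr.ppx],
                  [0.0,     intr.fy, intr.ppy],
                  [0.0,     0.0,     1.0]])
    print(K)
finally:
    pipeline.stop()
```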
I have obtained the intrinsics and assembled K in that format, but the generated grasps are still offset. I don't know how to deal with it now... Do you have any suggestions?
[output image]
[source color image]
@imdoublecats
@gzchenjiajun I just set K and generate the object point cloud, and then grasps are generated as in the example images. I wonder if your point cloud is not right; it does not look like a single object with the background removed.
I don't know if dividing the depth by 1000 makes any difference, since the RealSense units and the network's units don't seem to match, but if I comment out that division line, the generated object doesn't show up at all. @imdoublecats
Also, for convenience of testing I only take one frame of point cloud data; I don't know if that causes any problems...
I retried with a plain background, and the grasp generation is still wrong.
And for some reason, the color image from my RealSense comes out very dark... I've already debugged the channel order.
My smoothed_object_pc is generated like this. Is my smoothed_object_pc data faulty?
@imdoublecats @BetterLYY
Thank you for your answers
@gzchenjiajun GraspNet does not include object segmentation, so send the point cloud of the object only, not the entire point cloud. The unit of the smoothed point cloud in the npy is 1 m, while the RealSense depth image is in 1 mm, so check your data's units. The RealSense exposes many sets of intrinsics; use the RGB 640x480 one.
Visualizing the data in the npy and comparing it with what you get from the RealSense may also help.
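A quick way to do that comparison, as a sketch (the demo file name is illustrative); the scale check follows the units mentioned above (the npy point cloud in meters, RealSense depth in millimeters):

```python
import numpy as np

# Illustrative path; use one of the provided demo files.
data = np.load('demo/data/example.npy', allow_pickle=True).item()
for key, value in data.items():
    print(key, np.asarray(value).shape, np.asarray(value).dtype)

pc = data['smoothed_object_pc']
# An object cloud in meters should span well under a meter or two;
# values in the hundreds suggest the cloud is still in millimeters.
print('pc extent:', np.abs(pc).max())
print('depth range:', data['depth'].min(), data['depth'].max())
```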
Link: https://pan.baidu.com/s/1yLL7yYBMcdTWV1UThTlGTg password: iz12. I reviewed the original numpy data and re-checked my code, but the grasp generation is still wrong. Can you take a look at it for me? Thank you very much.
@imdoublecats @BetterLYY
I'm really stuck now. Thank you for your answers.
Here are the stages you need to follow to generate grasps for new scenes and execute them with a robot:
1) Use any instance segmentation method to segment the objects. Examples: https://github.com/NVlabs/UnseenObjectClustering and https://github.com/chrisdxie/uois
2) Back-project the depth map to a point cloud (see the sketch after this list): https://github.com/NVlabs/6dof-graspnet/blob/master/demo/main.py#L60
3) Take the segmented point cloud of the object and generate grasps: https://github.com/NVlabs/6dof-graspnet/blob/master/demo/main.py#L131
4) If you are using a robot, transform the grasps from the camera frame to the robot base frame, using the relative transform between the robot base and the camera, before executing.
5) Sort the grasps by predicted score.
6) Go over the grasps in decreasing order of score and execute the first grasp for which the motion planner finds a plan.
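For step 2, a minimal back-projection sketch under the standard pinhole model; the repo's own implementation is at the demo/main.py link above, so treat this only as an illustration. Depth is assumed to be already in meters:

```python
import numpy as np

def backproject(depth_m, K, mask=None):
    """Back-project a depth image (meters) into an (N, 3) point cloud in the camera frame."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth_m > 0
    if mask is not None:            # optional instance mask from step 1
        valid &= mask.astype(bool)
    z = depth_m[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)
```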
The model in this repo generates grasps for the target object regardless of the clutter, so grasps may collide with other objects. If you want to remove those grasps, you can implement CollisionNet from here: https://arxiv.org/abs/1912.03628
[Optional] Regarding the smoothed point cloud: since depth images from RealSense are noisy, you can smooth them by averaging 10 consecutive frames and removing the pixels with jittery depth. That helps smooth the results. Even without smoothing, you should be able to get results comparable to the provided examples.
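A sketch of one way to implement that averaging; the jitter threshold here is an assumed value, not one from the authors:

```python
import numpy as np

def smooth_depth(frames_mm, max_jitter_mm=5.0):
    """Average consecutive depth frames and invalidate pixels with jittery depth."""
    stack = np.stack(frames_mm).astype(np.float32)  # (T, H, W), e.g. T = 10
    seen = (stack > 0).all(axis=0)                  # pixel valid in every frame
    stable = stack.std(axis=0) < max_jitter_mm      # low temporal noise
    mean = stack.mean(axis=0)
    mean[~(seen & stable)] = 0.0                    # 0 marks invalid depth
    return mean
```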
For RealSense cameras, the intrinsics can be extracted from the RealSense API or from the ROS driver itself.
@arsalan-mousavian Thank you. It seems I skipped the first step, which caused my grasp generation anomaly. I will do it next.
@arsalan-mousavian Thank you very much for your previous reply. Now I have a new problem. I ran https://github.com/NVlabs/UnseenObjectClustering on my input image and got the instance segmentation masks for the objects, but I can't convert the object mask data into the corresponding point cloud data. Do you have any suggestions about this? Thank you.
This step has now succeeded.
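For anyone stuck on the same step: the mask is just a per-pixel instance label image, so selecting one label's pixels and back-projecting only those gives the object point cloud. A self-contained sketch with placeholder data (the label value 1 and the K values are illustrative):

```python
import numpy as np

# Placeholder inputs; replace with the real segmentation labels, depth (meters), and K.
labels = np.zeros((480, 640), np.int32)
labels[200:280, 300:380] = 1                      # pretend instance id 1 was detected here
depth_m = np.full((480, 640), 0.6, np.float32)
K = np.array([[616.4, 0., 310.3],
              [0., 616.2, 236.6],
              [0., 0., 1.]])

# Select the pixels belonging to one instance and back-project only those.
ys, xs = np.nonzero((labels == 1) & (depth_m > 0))
z = depth_m[ys, xs]
x = (xs - K[0, 2]) * z / K[0, 0]
y = (ys - K[1, 2]) * z / K[1, 1]
object_pc = np.stack([x, y, z], axis=1)           # (N, 3) object-only cloud
print(object_pc.shape)
```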
Hi @gzchenjiajun, I have already run git@github.com:NVlabs/UnseenObjectClustering.git successfully, but how can I get the corresponding segmented data, such as the segmented depth and image?
Hello, I'm interested in your 6-DOF GraspNet project and am trying to run the code. Recently I have had some questions about generating new data like the provided examples. The files in the demo/data folder (.npy files) contain depth, image, smoothed_object_pc, and intrinsics_matrix, but I have trouble generating such .npy files with the above data. Could you give me some instructions on how to generate the .npy files used in this code? Besides, I can use a Kinect to get depth images now. I would appreciate your instructions, thanks!
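Since this keeps coming up, here is a sketch of assembling such a file, assuming (from the demo files discussed above) that it stores a dict with keys depth, image, smoothed_object_pc, and intrinsics_matrix. The dummy arrays are placeholders for real sensor data, and the units should be verified against the provided examples (the thread above says the point cloud is in meters):

```python
import numpy as np

# Placeholder arrays; replace with real Kinect/RealSense data.
depth = np.zeros((480, 640), np.float32)              # depth image
image = np.zeros((480, 640, 3), np.uint8)             # color image
smoothed_object_pc = np.zeros((2048, 3), np.float32)  # object-only cloud, meters
K = np.array([[616.4, 0., 310.3],
              [0., 616.2, 236.6],
              [0., 0., 1.]])

data = {
    'depth': depth,
    'image': image,
    'smoothed_object_pc': smoothed_object_pc,
    'intrinsics_matrix': K,
}
np.save('my_scene.npy', data, allow_pickle=True)

# The demo loads such files back as a dict:
loaded = np.load('my_scene.npy', allow_pickle=True).item()
```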