NVlabs / 6dof-graspnet

Implementation of 6-DoF GraspNet in TensorFlow and Python. This repo has been tested with Python 2.7 and TensorFlow 1.12.
MIT License

Generating My Testing Data #11

Open BetterLYY opened 4 years ago

BetterLYY commented 4 years ago

Hello, I'm interested in your 6-DoF GraspNet project and am trying to run this code. Recently I've had some questions about generating new data. In the folder demo/data, the provided data (.npy files) contains depth, image, smoothed_object_pc, and intrinsics_matrix, but I'm having trouble generating those .npy files from such data myself. Could you give me some instructions on how to generate the .npy files used in this code? Also, I can already capture depth images with a Kinect. I would appreciate your instructions, thanks!
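A minimal sketch (not from the repo; the filename example.npy is hypothetical) of loading one of the provided demo files and inspecting the four fields:

```python
import numpy as np

# The demo files store a Python dict; newer NumPy versions need allow_pickle=True.
data = np.load('demo/data/example.npy', allow_pickle=True).item()

print(data.keys())                       # depth, image, smoothed_object_pc, intrinsics_matrix
print(data['depth'].shape)               # (H, W) depth map
print(data['image'].shape)               # (H, W, 3) RGB image
print(data['smoothed_object_pc'].shape)  # (N, 3) point cloud of the target object
print(data['intrinsics_matrix'])         # 3x3 camera matrix K
```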

gzchenjiajun commented 3 years ago

I have also been trying to generate this data recently, but I've made no progress. How did you solve it? @BetterLYY

BetterLYY commented 3 years ago

If your problem is generating the .npy file and then running detection, you can store the four kinds of data contained in the .npy file in separate txt files and read each of them in the source code, instead of insisting on producing the exact .npy file the author describes.

gzchenjiajun commented 3 years ago

My main worry is getting the generation rules for these pieces of data wrong. I wrote a version (capturing from a RealSense and assembling the data into a format similar to the provided numpy files), but it still doesn't feel right. Have you got this part working?

BetterLYY commented 3 years ago

I run it offline, so I haven't done that part.

gzchenjiajun commented 3 years ago

So you've only run the data the author provides and haven't tried your own real data, right?

BetterLYY commented 3 years ago

I used my own data. In the main program you can clearly see that four kinds of data are read from the npy file; just feed in those four kinds of data yourself.
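A minimal sketch of packing your own captures into the same dict layout (the field names come from the demo files; the shapes and dummy values here are assumptions to be replaced with real captures):

```python
import numpy as np

# Dummy placeholders standing in for your own camera captures.
depth = np.zeros((480, 640), dtype=np.float32)     # depth map
image = np.zeros((480, 640, 3), dtype=np.uint8)    # RGB image
object_pc = np.zeros((1024, 3), dtype=np.float32)  # segmented point cloud of the target
K = np.array([[616.365, 0., 310.259],
              [0., 616.203, 236.600],
              [0., 0., 1.]])                       # camera intrinsics

np.save('my_scene.npy', {
    'depth': depth,
    'image': image,
    'smoothed_object_pc': object_pc,
    'intrinsics_matrix': K,
})
```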

gzchenjiajun commented 3 years ago

Mate, that's exactly what I'm asking: how to generate these data. I generated some myself and pieced together the four dict entries, but it doesn't seem right. Could you explain it, especially smoothed_object_pc?

gzchenjiajun commented 3 years ago

@BetterLYY Would you mind sharing a WeChat contact? Mine is 935579178.

BetterLYY commented 3 years ago

This is something I also asked the author about, but I got no reply. As I see it, this data is just the 3D point cloud of the target object, which you need to segment out yourself.

gzchenjiajun commented 3 years ago

> As I see it, this data is just the 3D point cloud of the target object, which you need to segment out yourself. Since it involves my team's research, it's not convenient to share more.

OK, I'll work it out myself then. Right now, after capturing with the RealSense, I do a lot of preprocessing myself before feeding the data into the network, and I keep feeling something in my processing is off.

gzchenjiajun commented 3 years ago

@BetterLYY I'm now running data I captured myself, and the generated grasps are somewhat offset. I'd like to ask: did you capture with a RealSense? What values do you use for the intrinsics_matrix / K parameters? I took the values from the provided numpy file directly, and I wonder whether the offset is because the intrinsics_matrix is wrong.

gzchenjiajun commented 3 years ago

[image] @BetterLYY The offset looks very strange.

imdoublecats commented 3 years ago

@gzchenjiajun You can read the intrinsics from your own RealSense camera using the SDK tools. Also, try to write in English so that more people can understand your issue and help you.

BetterLYY commented 3 years ago

I used a Kinect and just used the camera parameters given in the npy file. You can also follow the suggestion above.

gzchenjiajun commented 3 years ago

@BetterLYY You mean the one in the numpy data, right? I didn't see it in the code.

gzchenjiajun commented 3 years ago

OK, now there's a question. The provided file gives K = np.array([[616.36529541, 0., 310.25881958], [0., 616.20294189, 236.59980774], [0., 0., 1.]]), and I'm not sure what the entries of this K matrix mean. I can now read the intrinsics from my RealSense D435 (width, height, ppx, ppy, fx, fy), but I don't know how to put them together.

gzchenjiajun commented 3 years ago

@imdoublecats thank you

imdoublecats commented 3 years ago

@gzchenjiajun The layout of K is:

fx 0 cx
0 fy cy
0 0 1

where cx and cy correspond to the RealSense ppx and ppy.
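A minimal sketch of assembling K from a RealSense with the pyrealsense2 wrapper (assuming the 640x480 color stream; ppx/ppy are the principal point cx/cy):

```python
import numpy as np
import pyrealsense2 as rs

# Start the 640x480 color stream and read its intrinsics.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
profile = pipeline.start(config)
intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
pipeline.stop()

K = np.array([[intr.fx, 0.0,     intr.ppx],
              [0.0,     intr.fy, intr.ppy],
              [0.0,     0.0,     1.0]])
print(K)
```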

gzchenjiajun commented 3 years ago

I've read the intrinsics in that format and assembled K, but the generated grasps are still offset. I don't know how to deal with it now... Do you have any suggestions?

gzchenjiajun commented 3 years ago

Output: [image]

Source color image: [image]

@imdoublecats

imdoublecats commented 3 years ago

@gzchenjiajun I just set K and generated the object point cloud, and then grasps were generated as in the example images. I wonder if your point cloud is wrong; it doesn't look like a single object without the background.

gzchenjiajun commented 3 years ago

[image] I don't know whether dividing the depth by 1000 matters, since the RealSense units and the network's units don't seem to match, but if I comment out the division line I can't see the generated object at all. @imdoublecats

gzchenjiajun commented 3 years ago

Also, for convenience while testing, I'm only taking a single frame of point cloud data; I don't know whether that is a problem...

gzchenjiajun commented 3 years ago

[image]

[image] I retried with a plain background, and the grasp generation is still wrong.

Also, for some reason the color image from my RealSense comes out very dark... I've already checked the channel order.

gzchenjiajun commented 3 years ago

[image] My smoothed_object_pc is generated like this. Is my smoothed_object_pc data faulty?

gzchenjiajun commented 3 years ago

@imdoublecats @BetterLYY

Thank you for your answers

imdoublecats commented 3 years ago

@gzchenjiajun GraspNet does not include object segmentation, so send only the point cloud of the object, not the entire point cloud. The unit of the smoothed point cloud in the npy is meters, while the RealSense depth image is in millimeters; inspect your data to check the units. A RealSense exposes several sets of intrinsics; use the RGB 640x480 one.

Visualizing the data in the npy and comparing it with what you get from the RealSense may help.
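A minimal sketch of the unit conversion described above (raw RealSense depth is usually uint16 with 1 unit = 1 mm; query the device's depth scale to be sure; variable names are hypothetical):

```python
import numpy as np

depth_raw = np.zeros((480, 640), dtype=np.uint16)  # raw RealSense depth, 1 unit = 1 mm
depth_m = depth_raw.astype(np.float32) / 1000.0    # meters, matching the npy data

# Sanity check: a tabletop scene should mostly lie within roughly 0.3-2.0 m.
valid = depth_m > 0
if valid.any():
    print(depth_m[valid].min(), depth_m[valid].max())
```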

gzchenjiajun commented 3 years ago

Link: https://pan.baidu.com/s/1yLL7yYBMcdTWV1UThTlGTg password: iz12. I reviewed the original numpy data and re-checked my code, but the grasp generation is still wrong. Could you take a look at it for me? Thank you very much.

gzchenjiajun commented 3 years ago

@imdoublecats @BetterLYY

I'm really stuck now. Thank you for your answers.

arsalan-mousavian commented 3 years ago

Here are the stages you need to follow to generate grasps for new scenes and execute them with a robot:

1. Use any instance segmentation method to segment the objects. Examples: https://github.com/NVlabs/UnseenObjectClustering and https://github.com/chrisdxie/uois
2. Backproject the depth map to a point cloud: https://github.com/NVlabs/6dof-graspnet/blob/master/demo/main.py#L60 (a minimal backprojection sketch follows this list)
3. Take the segmented point cloud of the object and generate grasps: https://github.com/NVlabs/6dof-graspnet/blob/master/demo/main.py#L131
4. If you are using a robot, transform the grasps from the camera frame to the robot base frame, using the relative transform between the robot base and the camera, before executing.
5. Sort the grasps by their predicted scores.
6. Go over the grasps in decreasing order of score and execute the first grasp for which the motion planner finds a plan.
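A minimal sketch of step 2 (not the repo's exact code; see demo/main.py#L60 for the original): backprojecting a metric depth map through K with the pinhole model, optionally restricted to the object mask from step 1:

```python
import numpy as np

def backproject(depth_m, K, mask=None):
    # depth_m: (H, W) depth in meters; K: 3x3 intrinsics; mask: optional (H, W) bool.
    H, W = depth_m.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth_m > 0
    if mask is not None:
        valid &= mask.astype(bool)      # keep only the segmented object's pixels
    z = depth_m[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)  # (N, 3) points in the camera frame
```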

The model in this repo generates grasps for the target object regardless of clutter. Grasps may collide with other objects; if you want to remove those grasps, you can implement CollisionNet from here: https://arxiv.org/abs/1912.03628

[Optional] Regarding the smoothed point cloud: since depth images from a RealSense are noisy, one can smooth them by averaging 10 consecutive frames and removing the pixels with jittery depth. That helps smooth the performance. Even without smoothing you should be able to get results comparable to the provided examples.
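A minimal sketch of that smoothing idea (the 0.01 m jitter threshold is an assumption):

```python
import numpy as np

def smooth_depth(frames, jitter_thresh=0.01):
    # frames: list of ~10 consecutive (H, W) metric depth maps.
    stack = np.stack(frames, axis=0)  # (T, H, W)
    # Keep pixels observed in every frame whose depth varies less than the threshold.
    valid = (stack > 0).all(axis=0) & (stack.std(axis=0) < jitter_thresh)
    return np.where(valid, stack.mean(axis=0), 0.0)  # 0 marks dropped pixels
```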

For RealSense cameras, the intrinsics can be extracted from the RealSense API or from the ROS driver itself.

gzchenjiajun commented 3 years ago

@arsalan-mousavian Thank you. It seems I skipped the first step, which is why my grasp generation was anomalous. I'll do the segmentation step next.

gzchenjiajun commented 3 years ago

@arsalan-mousavian Thank you very much for your previous reply. Now I have a new problem: using https://github.com/NVlabs/UnseenObjectClustering, I can input an image and get the instance segmentation mask for the object, but I can't turn the object mask data into the corresponding point cloud data. Do you have any suggestions about this? Thank you.

gzchenjiajun commented 3 years ago

@arsalan-mousavian

Regarding my earlier question about turning the segmentation mask into the corresponding point cloud: this step has already succeeded.

Spj-Zhao commented 2 years ago

Hi @gzchenjiajun, I already run git@github.com:NVlabs/UnseenObjectClustering.git successfully, but how can I get the corresponding segmented data, such as the segmented depth and image?
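A minimal sketch of one common approach (an assumption, not necessarily what @gzchenjiajun did): apply the instance label map to the depth and RGB images, then backproject only the masked pixels, for example with a helper like the backproject sketch earlier in the thread:

```python
import numpy as np

# Hypothetical inputs: the segmentation network's label map plus your captures.
depth_m = np.zeros((480, 640), dtype=np.float32)  # metric depth map
image = np.zeros((480, 640, 3), dtype=np.uint8)   # RGB image
labels = np.zeros((480, 640), dtype=np.int32)     # instance labels, 0 = background
target_id = 1                                     # the instance you want to grasp

mask = labels == target_id
segmented_depth = np.where(mask, depth_m, 0.0)    # depth of the target only
segmented_image = image * mask[..., None]         # masked RGB, background blacked out

# object_pc = backproject(segmented_depth, K)    # feed this as smoothed_object_pc
```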

imdoublecats commented 2 years ago

This is an automatic holiday reply from QQ Mail. Hello, I am currently on vacation and cannot reply to your email in person. I will reply as soon as possible after the holiday ends.