tnikolla / robot-grasp-detection

Detecting robot grasping positions with deep neural networks. The model is trained on the Cornell Grasping Dataset. This is an implementation based mainly on the paper 'Real-Time Grasp Detection Using Convolutional Neural Networks' by Redmon and Angelova.
Apache License 2.0

I want to know how large the final loss should be #9

Open HEUzhouhanwen opened 6 years ago

HEUzhouhanwen commented 6 years ago

Hello! After running grasp_det.py for 10,000 steps, the loss stabilizes at 30,000-40,000. What is a reasonable loss?

tnikolla commented 6 years ago

Hi! I don't remember it now. Try evaluating the validation set. I could achieve a little more than 60 percent accuracy.

HEUzhouhanwen commented 6 years ago

Hello! I tried evaluating the validation set using ../robot-grasp-detection/models/grasp/m4/m4.ckpt, but the accuracy is only about 30%. What might I be doing wrong? Thank you!

tnikolla commented 6 years ago

How do you calculate the accuracy?

For every test example there are multiple ground-truth rectangles (grasp positions) but only one predicted rectangle. The evaluation algorithm decides whether an example is a success by taking one random GT rectangle from the example and comparing it with the prediction. So you need to run the evaluation (grasp_det.py) multiple times so that all the GT rectangles of each example get compared.

I did it like this: run grasp_det.py a first time and note which examples were successes, for example 1, 3, 6, 8 out of 10. Run it a second time and you get successes for 0, 1, 3, 6. Run it a third time and you get, say, 0, 5, 6. Accumulating the successes you get 0, 1, 3, 5, 6, 8 out of 10 examples (10 images with their annotated ground-truth grasping positions). The accuracy is 6/10 = 60%.

You can write some code to do this instead of running the script manually many times and noting the successes (I did it maybe 15 times); see the sketch below.
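For illustration, here is a minimal sketch of automating that procedure (an editor's sketch, not code from the repo). It assumes a hypothetical `run_evaluation()` callable that performs one evaluation pass and returns the indices of the examples judged successful (one random GT per example); the successes are simply unioned across runs.

```python
# Minimal sketch: accumulate successes over repeated evaluation runs.
# `run_evaluation` is a hypothetical callable returning the indices of
# the examples that succeeded in one pass.

def accumulated_accuracy(run_evaluation, num_examples, num_runs=15):
    successes = set()
    for _ in range(num_runs):
        successes |= set(run_evaluation())  # e.g. {1, 3, 6, 8}
    return len(successes) / num_examples

# Reproducing the numbers from the comment above:
runs = iter([{1, 3, 6, 8}, {0, 1, 3, 6}, {0, 5, 6}])
print(accumulated_accuracy(lambda: next(runs), num_examples=10, num_runs=3))
# -> 0.6, i.e. the 60% figure
```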

Temporarily I'm unable to contribute to the repository because I lack a PC to do it; I am stuck with my old personal laptop.

HEUzhouhanwen commented 6 years ago

I see! Thank you!

HEUzhouhanwen commented 6 years ago

But I still do not understand why the loss is stable at 30,000-40,000!

tnikolla commented 6 years ago

The algorithm predicts exactly one grasping position for every object (image), which is not how the dataset, or the real world, works. Think of an image of a pencil, which is symmetric. [image: pencil]

When training, only one ground truth (the red rectangle) is used in one pass (forward pass, backprop, weight update). The ground truths are stored in the text files of the dataset; there are several for every image (theoretically this number is infinite). After training, the model has learned the average of all ground truths, the green rectangle. Continuing to train with a batch size of 100 images, there will always be GTs that are far from the predicted rectangle, so the RMS loss settles around some value.

Now, if we again have a pencil-like image in the test set, the algorithm will predict a grasp. To decide whether this predicted grasp is a success, it is evaluated (two criteria) against only one random ground truth from the test set. So if the first GT happens to be chosen, the predicted grasp counts as a failure because of the IoU, although we can see that it would be a success for a real robot. In fact the predicted grasp fails against every GT except the fourth one, where both the IoU and the angle meet the criteria.

What do you think?
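For context, the two criteria referenced above are the rectangle metric from the paper: a predicted grasp counts as a success against a ground-truth rectangle when the grasp orientation differs by at most 30 degrees and the Jaccard index (IoU) is at least 0.25. A minimal sketch, using axis-aligned boxes as a simplification (the actual grasp rectangles are rotated):

```python
# Minimal sketch of the rectangle metric: success iff the angle difference
# is <= 30 degrees and the Jaccard index (IoU) is >= 0.25. Axis-aligned
# boxes (x1, y1, x2, y2) are used here for brevity.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def grasp_success(pred_box, pred_angle_deg, gt_box, gt_angle_deg):
    diff = abs(pred_angle_deg - gt_angle_deg) % 180
    diff = min(diff, 180 - diff)  # grasp rectangles are symmetric
    return diff <= 30 and iou(pred_box, gt_box) >= 0.25
```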

xiaoshuguo750 commented 6 years ago

Thank you for your great answer! Very clear! Wonderful description! Shu Guo & HEUzhouhanwen

clvictory commented 6 years ago

When I run grasp_det.py, x_hat, h_hat, and w_hat become NaN after only a few epochs. Is this expected, and how can I avoid it?

xiaoshuguo750 commented 6 years ago

There are some NaN values in these dataset files:

pcd0132cpos.txt and pcd0165cpos.txt. You can delete them!
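Rather than finding such files by hand, a short script can scan the annotation files for NaN values. A minimal sketch; the glob pattern and directory layout are assumptions, so adjust them to your local copy of the dataset:

```python
# Minimal sketch: list Cornell annotation files containing NaN coordinates.
import glob
import math

def files_with_nan(pattern="data/*/pcd*cpos.txt"):  # adjust to your layout
    bad = []
    for path in glob.glob(pattern):
        with open(path) as f:
            values = f.read().split()
        if any(math.isnan(float(v)) for v in values):
            bad.append(path)
    return bad

print(files_with_nan())  # should include pcd0132cpos.txt and pcd0165cpos.txt
```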

clvictory commented 6 years ago

@xiaoshuguo750

It works, thx!

xiaoshuguo750 commented 6 years ago

Are you Chinese?

clvictory commented 6 years ago

@xiaoshuguo750

Yeah!

woshisj commented 6 years ago

Hi! Are you still working on grasping-related research?

xiaoshuguo750 commented 6 years ago

WeChat: 409067552

lx-onism commented 5 years ago

Hello! I'm currently doing research on grasping. How is your progress? I tested my own data with the saved model in this repo, but the results were very poor. Then I tested data from the Cornell grasping dataset itself, and surprisingly the results were just as poor, and I don't know why. How were your test results?

weiwuhuhu commented 5 years ago

Hi, I'm new to grasping. When I started reading the code, there was one step I didn't understand: it says I need to convert ImageNet data to TFRecord, and so I need to download the ImageNet dataset. I don't get it. The paper uses the Cornell grasping dataset, so why do I need to download ImageNet? The main problem is that ImageNet is huge and my school's network is slow. I'd appreciate an explanation, thanks.
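As a side note on the conversion step itself, here is a minimal sketch of what writing images into a TFRecord file looks like with TF1-style APIs. The feature keys and the labels are illustrative assumptions, not the repo's exact conversion script:

```python
# Minimal sketch: serialize JPEG files plus integer labels into a TFRecord.
import tensorflow as tf  # TF1-style API assumed

def write_tfrecord(image_paths, labels, out_path="images.tfrecord"):
    with tf.python_io.TFRecordWriter(out_path) as writer:
        for path, label in zip(image_paths, labels):
            with open(path, "rb") as f:
                encoded = f.read()  # raw JPEG bytes, decoded at read time
            example = tf.train.Example(features=tf.train.Features(feature={
                "image/encoded": tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[encoded])),
                "image/class/label": tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[label])),
            }))
            writer.write(example.SerializeToString())
```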

woshisj commented 5 years ago

Take a look at the paper, Section V. EXPERIMENTS AND EVALUATION, C. Pretraining.

weiwuhuhu commented 5 years ago

Hello, I couldn't find the paper you recommended. Is its title 'Experiment AND EVALUATION'? Would you mind adding me as a friend?

woshisj commented 5 years ago

It's the paper this code is based on. The title is 'Real-Time Grasp Detection Using Convolutional Neural Networks', Chapter V, Section C.

weiwuhuhu commented 5 years ago

OK, I'll take a closer look. Would it be convenient to add you on WeChat?

woshisj commented 5 years ago

Let's talk here. I'll reply when I see your message.

weiwuhuhu commented 5 years ago

So two datasets are needed: the Cornell grasping dataset for generating grasp poses, plus ImageNet. Do I really need to download all 150 GB of ImageNet???

1458763783 commented 4 years ago

Hey, how is your grasping work going these days? Could you leave an email address? I'd like to discuss it.

woyuni commented 3 years ago

weiwuhuhu, did you get the code running?

1458763783 commented 3 years ago

I'm no longer running this program, but I am also working on grasp policy generation. Feel free to add me on WeChat to discuss: 18798824036

Bai Qiang, Guizhou University

zhoumo1121 commented 3 years ago

ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float64: 'Tensor("truediv:0", shape=(), dtype=float64, device=/device:CPU:0)'. How do I solve this error when running grasp_det.py?
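A hedged guess at the cause, not a confirmed fix: the "truediv" in the message suggests an integer tensor was divided with `/`, which in TF1 produces a float64 tensor, and some later op then demanded int32. Casting the result back (or using floor division) is the usual remedy:

```python
import tensorflow as tf  # TF1-style API assumed

x = tf.constant(7)  # int32
y = tf.constant(2)  # int32

bad = x / y                       # "truediv" -> float64, may trip int32 ops
fixed = tf.cast(x / y, tf.int32)  # explicit cast back to int32
also_fixed = x // y               # floor division keeps dtype int32
```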

Jonho111 commented 2 years ago

I'd like to ask: if I want to train the model myself, do the dataset images need to be RGB-D, or is RGB alone enough?