Easonyesheng / CCS

[RA-L&IROS22] A learning-based camera calibration system.
MIT License
35 stars 2 forks

How to train #17

Closed 123ioup closed 5 months ago

123ioup commented 6 months ago

How do I train this model on my own checkerboard images?

123ioup commented 6 months ago

How do I train this model on my own checkerboard images?

A checkerboard image taken with my camera

Easonyesheng commented 5 months ago

You still need to generate checkerboard images, because real images do not have ground-truth corner coordinates. However, you should make the generated images as similar as possible to your real images so that the learned model performs well in real scenes. In practice, you need to modify the image generation code in this repo so it produces the images you want; you can then use the training scripts in this repo to train your own model.
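For reference, a minimal sketch of the core idea behind such generation (hypothetical helper names, not the repo's actual code): render a flat board whose inner-corner coordinates are known exactly, then warp those coordinates with a homography to mimic different camera viewpoints.

```python
import numpy as np

def make_board_and_corners(rows=6, cols=8, square=40):
    """Flat checkerboard image plus exact ground-truth inner-corner coordinates."""
    board = np.zeros((rows * square, cols * square), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            if (r + c) % 2 == 0:  # alternate black/white squares
                board[r*square:(r+1)*square, c*square:(c+1)*square] = 255
    # Inner corners lie on the square boundaries, excluding the outer border.
    corners = np.array([(c*square, r*square)
                        for r in range(1, rows) for c in range(1, cols)],
                       dtype=np.float32)
    return board, corners

def apply_homography(pts, H):
    """Map 2-D points through a 3x3 homography (what a camera view does to the board)."""
    homog = np.hstack([pts, np.ones((len(pts), 1), dtype=pts.dtype)])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

Because the corners are computed analytically rather than detected, the warped coordinates remain exact ground truth for any homography you apply.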

123ioup commented 5 months ago

Why is the heat map I get wrong when I test with weights I trained myself?


123ioup commented 5 months ago

The heat map obtained when testing with the weights I trained myself looks like this:


Easonyesheng commented 5 months ago

Sorry, I can't see your heat map here; it may be an image-upload problem. However, since network-based checkerboard corner detection can be treated entirely as an overfitting task, I think the key is the similarity between the training data and the real data. So if there is a problem with the heat map, it is most likely caused by the gap between the training and test data.

123ioup commented 5 months ago

How can I make the training data as similar as possible to the real data? I spent a long time adjusting the dataset-generation parameters to bring the generated dataset as close as possible to the real one, but the results were never very good. This is the real dataset I captured; could you give me some help? Thank you for your answer. (attached: susan_input1)

123ioup commented 5 months ago

(attached: left3, left6, 3-0, 3-1) These are images from my generated training set, roughly 1000 in total. Using the training weights you provided, a learning rate of 0.0001, and 1000 training epochs, these are the test results I get. Could you give me some help? Thank you for your answer.

Easonyesheng commented 5 months ago

My suggestions are as follows:

  1. Take an image of your office (i.e. the place where you set up the real checkerboard) and use it, instead of the TUM dataset images, as the background for the generated data;
  2. The black/white square order of the generated data seems to be the opposite of the real board; you can adjust this;
  3. When calibrating, the checkerboard should occupy a larger proportion of the whole image, which helps accurate detection;
  4. Use more training data.

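Suggestions 1 and 3 could be sketched roughly like this (hypothetical helpers, not the repo's code): paste a rendered board onto your own background photo, and verify it covers a large enough fraction of the frame.

```python
import numpy as np

def composite_on_background(background, board, top_left, min_area_ratio=0.2):
    """Paste a rendered checkerboard onto a custom background photo
    (e.g. your own office instead of a TUM frame), and check that the
    board occupies a large enough fraction of the image for accurate
    detection. min_area_ratio is an illustrative threshold, not a value
    from the paper."""
    out = background.copy()
    y, x = top_left
    h, w = board.shape[:2]
    out[y:y+h, x:x+w] = board
    ratio = (h * w) / (out.shape[0] * out.shape[1])
    if ratio < min_area_ratio:
        raise ValueError(
            f"board covers only {ratio:.0%} of the image; enlarge it")
    return out
```

Any ground-truth corner coordinates would of course need the same `top_left` offset added after compositing.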
123ioup commented 5 months ago

Thank you very much for your suggestions; detection is now much better, but the corners in the first row still have some problems. These are the checkerboards from my latest generated training set; I hope you can help. (images attached)

Easonyesheng commented 5 months ago

This looks like it is because the real checkerboard is always placed tilted upward; perhaps you could generate a batch of data with that same pose specifically for fine-tuning.

123ioup commented 5 months ago

Thank you very much for your suggestion, but I tried modifying the dataset-generation parameters and still did not get good results. Is there still something wrong with the parameters I modified? I hope to get your help. (images attached)

Easonyesheng commented 5 months ago

I still suggest enlarging the board's proportion within the image; if that doesn't work, you can crop the image before feeding it in. Also, the model's training data may still not be enough. In addition, the images generated by the current code still have quite a few flaws, so there is room for optimization (perhaps using diffusion).
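The cropping idea could look roughly like this (a sketch with hypothetical names; the repo has no such helper as far as this thread shows): cut the image down to a margin around the board so the board fills more of the network input, shifting any corner coordinates to match.

```python
import numpy as np

def crop_around_board(image, corners, margin=32):
    """Crop the image to a margin around the (approximate) board corners
    so the board fills more of the input, and shift the corner
    coordinates into the crop's frame. corners is an (N, 2) array of
    (x, y) points."""
    x0 = max(int(corners[:, 0].min()) - margin, 0)
    y0 = max(int(corners[:, 1].min()) - margin, 0)
    x1 = min(int(corners[:, 0].max()) + margin, image.shape[1])
    y1 = min(int(corners[:, 1].max()) + margin, image.shape[0])
    crop = image[y0:y1, x0:x1]
    shifted = corners - np.array([x0, y0], dtype=corners.dtype)
    return crop, shifted
```

At test time the approximate corners could come from a coarse first detection pass or a manual bounding box; detections on the crop are mapped back by adding `(x0, y0)`.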

123ioup commented 5 months ago

OK, thank you very much for your answer.

Sun-O-Sun commented 4 months ago

Hello, author! I am a beginner in camera calibration. I read your paper online and downloaded your code. In your file train_CornerDetect.py, the line `sp_processer = SuperPointNet_process(**params)` is flagged as an error. Why is that? I know the SuperPoint algorithm extracts image feature points and outputs feature descriptors; does this mean the SuperPoint feature-extraction process needs to be rewritten? Thank you for your answer. (image attached)

Easonyesheng commented 4 months ago

Hi @Sun-O-Sun, you can simply comment this part out. The final model uses a UNet architecture; I tried SuperPoint before, but it did not work well.

Sun-O-Sun commented 4 months ago

Got it, thank you very much for your reply!


Easonyesheng commented 4 months ago

For the contents of train.txt, you can refer to this issue. Replacing it with your own dataset means organizing your data according to the file structure required for training; in fact, you only need to list the paths of the images and the corner files in the txt file.
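A possible sketch of such a listing script (the pairing-by-stem convention, directory layout, and one-pair-per-line format are assumptions; check the linked issue for the exact format the repo expects):

```python
from pathlib import Path

def write_train_list(img_dir, corner_dir, out_file="train.txt"):
    """Write one line per sample: '<image path> <corner .npy path>'.
    Images and corner files are paired by shared filename stem; images
    with no matching corner file are skipped."""
    img_dir, corner_dir = Path(img_dir), Path(corner_dir)
    lines = []
    for img in sorted(img_dir.glob("*.png")):
        npy = corner_dir / (img.stem + ".npy")
        if npy.exists():
            lines.append(f"{img} {npy}")
    Path(out_file).write_text("\n".join(lines) + "\n")
    return len(lines)
```

Returning the count makes it easy to sanity-check that every generated image actually found its ground-truth file.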

Sun-O-Sun commented 1 month ago

Hello, author! May I ask how the npy files in the GT folder of your project are generated?

Sun-O-Sun commented 1 month ago

(image attached)

Easonyesheng commented 1 month ago

The ground-truth data is generated by the code in our data-generation part.

Sun-O-Sun commented 1 month ago

Hello! I have gone through the data-generation code several times, but I really cannot find where the files in the GT folder are generated. Please give me some pointers; thank you very much.

Easonyesheng commented 1 month ago

If it is the ground truth as I understand it, it should be this line: https://github.com/Easonyesheng/CCS/blob/1429f5c0873e25b71ab6544f0bec3f847c1df847/dataset/DataGenerator.py#L413 and also this line: https://github.com/Easonyesheng/CCS/blob/1429f5c0873e25b71ab6544f0bec3f847c1df847/dataset/DataGenerator.py#L603 depending on which function you call.
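In other words, the GT files are ordinary NumPy arrays. A minimal round-trip sketch (the filename and corner values are hypothetical; the generator presumably writes such files with `np.save` at the lines linked above):

```python
import numpy as np

# Hypothetical ground-truth corner array: (N, 2) sub-pixel (x, y) coordinates.
corners = np.array([[120.5, 80.0], [160.5, 80.0], [120.5, 120.0]],
                   dtype=np.float32)
np.save("gt_corners_000.npy", corners)   # what the data generator writes
loaded = np.load("gt_corners_000.npy")   # what the training loader reads back
```

Inspecting one file from the GT folder with `np.load(...).shape` is a quick way to confirm the actual layout your checkout produces.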

Sun-O-Sun commented 1 month ago

Hello! Following your hint, I have solved the problem. Thank you for your help!

Sun-O-Sun commented 1 month ago

Hello! Sorry to bother you again; I would like to ask you two questions:

  1. In the data-generation stage, I ran your source code directly (without changing any parameters), and the resulting ori_corner and dist_corner are identical. Is that what you get as well? This is related to the distortion parameters, right? So under normal circumstances, ori_corner and dist_corner should be different, shouldn't they? (image attached)

  2. train_DistCorr.py trains the distorted-image correction network; is the corner it uses taken from dist_corner? Also, when calling cal_radial_model_loss(batch_size, order, parameters, corner, device), the dimensions of corner don't seem to match the comment you gave in that function. (images attached)

Easonyesheng commented 1 month ago

  1. There is a flag in the data generation that controls whether distorted corners are generated; see here: https://github.com/Easonyesheng/CCS/blob/1429f5c0873e25b71ab6544f0bec3f847c1df847/dataset/DataGenerator.py#L571 By default it should be off.

  2. The corners here are split into parts: corners_before are the corners before distortion, stored in the corners[batch,0,:,:] slice; the corresponding corners after distortion are corners_after, in [batch,1,:,:]. The [batch,3,:,:] slice then stores the distortion parameters (same shape as the two slices above, but only the first few values are nonzero, i.e. the parameters). You can print them out to check; it has been a while, so I am not sure I remember correctly. The comment there may be a bit off; I will fix it.
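A small sketch of that layout as described (the channel assignment comes from the maintainer's recollection above, so verify it by printing on your own data; `unpack_corners` is a hypothetical helper, not repo code):

```python
import numpy as np

def unpack_corners(corners):
    """Split the packed corner tensor of shape (batch, 4, H, W):
    channel 0: corners before distortion,
    channel 1: corners after distortion,
    channel 3: distortion parameters (only the first few entries nonzero).
    Layout assumed from the maintainer's description above."""
    corners_before = corners[:, 0, :, :]
    corners_after = corners[:, 1, :, :]
    dist_params = corners[:, 3, :, :]
    return corners_before, corners_after, dist_params
```

Printing the three slices for one batch should quickly reveal whether this matches what your checkout's generator actually packs.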

Sun-O-Sun commented 1 month ago

OK, thank you for your reply; I will take a closer look!
