yewzijian / RegTR

End-to-end Point Cloud Correspondences with Transformers
MIT License

gt-info, gt.log, and gt_overlap.log in benchmark #21

Open lcxiha opened 1 year ago

lcxiha commented 1 year ago

Hello. 1. I would like to know what the 3DMatch and 3DLoMatch folders under src/datasets/3dmatch/benchmarks are for. Are they used for testing (in test.py)? 2. What are the roles of gt-info, gt.log, and gt_overlap.log in the 3DMatch and 3DLoMatch dataset folders (../RegTR-main/src/datasets/3dmatch/benchmarks/3DLoMatch/7-scenes-redkitchen)? How can I obtain them when building my own dataset?

I want to build my own dataset to use with RegTR, but I am stumped by this difficulty. Could you help me? Thanks a lot!

yewzijian commented 1 year ago

Hi. The benchmark folder contains code to evaluate the accuracy of the registration algorithm.

For information on the data, it would be better to refer to the original dataset paper. However, I believe most of the information in the .log files is not used; we only use the ground truth pose and compare it with the estimated one.

lcxiha commented 1 year ago

Thanks a lot! Now I have encountered another difficulty. When I ran train.py on my own dataset, obtained a model, and ran demo.py, the result was as follows: [screenshot] But in fact, the matching between the two point cloud frames should be as follows: [screenshot] After modifying the following parameters, the program runs normally: batch_size: 2 -> 1, first_subsampling_dl: 0.025 -> 0.2, base_lr: 0.0001 -> 0.00005, epoch: 60, num_workers: 0. But the matching quality is not satisfactory. Could you give me some advice?

yewzijian commented 1 year ago

It's hard to tell from this image alone. Does it happen on the training point clouds? If not, it might be an overfitting issue due to lack of data. Also, the hierarchical KPConv used in this work requires some amount of tuning; you might want to make sure that ample points fall within each ball cluster. Also, RegTR works well when the number of keypoints at the coarsest level is around 500.
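
To check that last point on your own data, you can estimate the point count at the coarsest level with a quick grid subsampling. This is a rough NumPy-only sketch, not RegTR code; it assumes, as is typical for KPConv-style hierarchies, that the voxel size doubles at each downsampling level, and the level count is a placeholder you should match to your config:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep one point per occupied voxel (grid subsampling, as in KPConv)."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    _, keep = np.unique(coords, axis=0, return_index=True)
    return points[np.sort(keep)]

def coarsest_level_count(points: np.ndarray,
                         first_subsampling_dl: float,
                         num_levels: int = 4) -> int:
    """Point count at the coarsest level, assuming the voxel size
    doubles at every downsampling level (verify against your config)."""
    dl = first_subsampling_dl * (2 ** (num_levels - 1))
    return len(voxel_downsample(points, dl))

# Example: a random scan filling a 1 m cube; aim for roughly 500 points.
pts = np.random.rand(100_000, 3)
print(coarsest_level_count(pts, first_subsampling_dl=0.025))
```

If the count is far from 500, adjust first_subsampling_dl until it lands in that range for typical frames of your dataset.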

lcxiha commented 1 year ago

Thank you! Yes, this image visualizes the alignment of the test dataset using the model obtained from the training dataset (i.e. by running demo.py). At minimum, how many pairs of point clouds do I need in order to use this algorithm on my own dataset?

yewzijian commented 1 year ago

RegTR is training-data hungry since it relies heavily on the transformer, which has little inductive bias. It does require a large dataset, comparable in size to 3DMatch/ModelNet.

lcxiha commented 1 year ago

So does the number of pairs in the dataset need to reach the tens of thousands?

lcxiha commented 1 year ago

Hello, I noticed that when I delete the gt.log and gt-info files in ../RegTR-main/src/datasets/3dmatch/benchmarks/3DMatch/7-scenes-redkitchen, running test.py reports an error, so I guess the gt.log and gt-info files are useful. But I don't know what their function is.

yewzijian commented 1 year ago

Hi, you need the groundtruth files when you're evaluating the algorithm.

For example, here the groundtruth poses are loaded so that you can compute the errors in the following lines.

lcxiha commented 1 year ago

Hello, I found that when running test.py, the terminal output format is defined by benchmark_prepator.py, where the gt.log and gt-info files are required. [screenshot]

lcxiha commented 1 year ago

Hello, the gt.log file contains the transformation matrix (i.e. the groundtruth) between two point clouds. But when I inspected the 3DMatch .pkl file, I found that its transformation matrix for the same pair of point cloud keyframes differs from the one in the gt.log file. Shouldn't they be the same?

lcxiha commented 1 year ago

The following rotation matrix values for the same pair of point clouds in the same scene are from the gt.log file and the test 3DMatch info .pkl file, respectively: [screenshots]

yewzijian commented 1 year ago

Good observation. I don't have an answer to this since I took the files from Predator's repository. However, this will not affect the 3DMatch benchmark results since the groundtruth poses are read from the gt.log files, as you noted above.

lcxiha commented 1 year ago

Thank you for your answer. I want to know how to generate a gt.log file when using my own dataset. Do I just need to know the transformation matrix between the two point clouds? Is the transformation matrix in the gt.log file the groundtruth?

That is to say, if I want to evaluate this algorithm, all I need is a gt.log file, as in benchmark_3dmatch.py, and I need to modify test.py based on my own benchmark.py program.

yewzijian commented 1 year ago

Yes, that's right. gt.log contains the groundtruth poses.
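
For reference, a 3DMatch-style .log file stores, for each evaluated pair, a header line with the two fragment indices and the total number of fragments, followed by a 4x4 transformation written over four rows. Below is a minimal sketch of a writer and reader for generating your own gt.log; the exact column separator and float precision here are assumptions, so compare the output against an existing gt.log before relying on it:

```python
import os
import tempfile
import numpy as np

def write_gt_log(path, pairs):
    """Write (header, pose) pairs in the 3DMatch-style .log format:
    'src_id tgt_id total_frames' followed by a 4x4 transform, one row
    per line."""
    with open(path, "w") as f:
        for (i, j, n), T in pairs:
            f.write(f"{i}\t{j}\t{n}\n")
            for row in T:
                f.write("\t".join(f"{v:.8e}" for v in row) + "\n")

def read_gt_log(path):
    """Parse a .log file back into (header, pose) pairs (5 lines each)."""
    pairs = []
    with open(path) as f:
        lines = f.read().strip().splitlines()
    for k in range(0, len(lines), 5):
        i, j, n = (int(x) for x in lines[k].split())
        T = np.array([[float(v) for v in lines[k + 1 + r].split()]
                      for r in range(4)])
        pairs.append(((i, j, n), T))
    return pairs

# Round-trip demo with a single identity pose.
path = os.path.join(tempfile.gettempdir(), "gt_demo.log")
write_gt_log(path, [((0, 1, 2), np.eye(4))])
print(read_gt_log(path)[0][0])  # (0, 1, 2)
```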

lcxiha commented 1 year ago

And I have another question: how can I get the est.log file? Is it obtained by running the trained model?

yewzijian commented 1 year ago

Yes, it is generated when you run the inference code.

lcxiha commented 1 year ago

Hello, I would like to ask: how do the point cloud density and scale of the KITTI dataset compare with those of the 3DMatch dataset?

yewzijian commented 1 year ago

Hi, we did not do any comparison between the density and scale on the two datasets.

lcxiha commented 1 year ago

But shouldn't some parameters of the kernel point convolution network be changed based on the density and scale of the point cloud? For example: conv_radius, deform_radius, KP_extent, neighborhood_limits, first_subsampling_dl, and overlap_radius.

yewzijian commented 1 year ago

Yes, you're right. You may need to tune those parameters based on your point cloud characteristics.

lcxiha commented 1 year ago

Thanks a lot! But I have another question: how should I change the following parameters to fit my dataset, and what are the criteria for setting them? For example: r_p and r_n. [screenshot]

yewzijian commented 1 year ago

As stated in the comments, setting r_p and r_n to 1.0x and 2.0x of the voxel size at the coarsest level works well. feature_loss_on works well enough when set at the coarsest level alone. The training weights wt_feature and wt_feature_un work well at their current settings, so there's usually no need to tweak them.

For the KPConv parameters, I recommend reading its paper to get an intuition for how to set them. Nevertheless, RegTR works well when there are around 500 points at the coarsest level, where attention is applied.
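
Putting the 1.0x/2.0x rule of thumb into code, this sketch assumes (as is typical for KPConv-style backbones, but verify against your own config) that the voxel size doubles at each downsampling level:

```python
def positive_negative_radii(first_subsampling_dl: float, num_levels: int = 4):
    """r_p and r_n as 1.0x / 2.0x of the coarsest-level voxel size.
    Assumes the voxel size doubles at every downsampling level."""
    coarsest_dl = first_subsampling_dl * (2 ** (num_levels - 1))
    return 1.0 * coarsest_dl, 2.0 * coarsest_dl

# e.g. first_subsampling_dl = 0.025 with 4 levels gives a coarsest
# voxel of 0.2, so r_p = 0.2 and r_n = 0.4.
print(positive_negative_radii(0.025))
```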

lcxiha commented 1 year ago

Hi, how can I check the quality of the model during the training process?

During training, should I check the changes in the loss values and these metrics? According to the paper, I only need to pay attention to total, reg_success_final, rot_err_deg_final, and trans_err_final, right? [screenshot]

The higher 'total', the better; the higher 'reg_success_final', the better; the smaller 'rot_err_deg_final' and 'trans_err_final', the better. I have a question: for the best model during training, do rot_err_deg_final and trans_err_final need to be less than reg_success_thresh_rot and reg_success_thresh_trans? Which of these metrics and loss values should I prioritize?

yewzijian commented 1 year ago

For monitoring training you should rely on the metrics on the validation dataset. Which metrics to prioritize depends on your application. Some people might find the mean rotation/translation errors more important than the registration success rates.

reg_success is defined by the thresholds set, i.e. how many registrations are better than the threshold.

Training losses are not as useful since they can overfit.
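
For concreteness, the rotation/translation errors and the thresholded success rate can be computed roughly as follows. This is a sketch, not RegTR's evaluation code, and the threshold values are illustrative placeholders:

```python
import numpy as np

def registration_errors(T_est: np.ndarray, T_gt: np.ndarray):
    """Rotation error (degrees) and translation error between 4x4 poses."""
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    # Clip before arccos for numerical safety.
    cos = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos))
    trans_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    return rot_err_deg, trans_err

def reg_success(errors, thresh_rot_deg=10.0, thresh_trans=0.1):
    """Fraction of registrations with BOTH errors under the thresholds
    (threshold values here are placeholders)."""
    ok = [r < thresh_rot_deg and t < thresh_trans for r, t in errors]
    return sum(ok) / len(ok)
```

A checkpoint can then be compared against others on the validation set by its mean errors or its success rate, whichever matters more for your application.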

lcxiha commented 11 months ago

Hello, I have another question: can this algorithm only be applied to two point clouds of exactly the same frame size? Is it feasible if the sizes of the two point clouds are not exactly the same? Looking forward to your reply.

yewzijian commented 11 months ago

Are you referring to whether the two point clouds being registered need to have the same number of points? No, they do not.

However, we only tested it on scenarios where the two point clouds are of similar sizes. So if you are asking whether it's suitable for matching a local scan with a much larger global one, that might not work as well.

lcxiha commented 11 months ago

Thanks a lot! Is it feasible to replace the KPConv backbone with a PointNet network? Looking forward to your reply.

yewzijian commented 11 months ago

I didn't try that out, but it shouldn't be a problem. Of course, you'll have to retrain the network in this case.

lcxiha commented 11 months ago

Hello, I found that during training the only evaluation metric is the loss value, but during validation there are both loss values and other evaluation metrics, such as reg_success_final, rot_err_deg_final, and trans_err_final. Why is this?

yewzijian commented 11 months ago

Hi, it's just a design choice to use the loss for validation, as it doesn't require setting additional arbitrary weights for the rotation and translation components when choosing the best checkpoint during training.

During testing, naturally the actual rotation/translation values are important for evaluation.

Zi Jian

lcxiha commented 11 months ago

"I didn't try that out, but shouldn't be a problem. Of course you'll have to retrain the network in this case." Thanks! But I'm worried that the matching results may not be good, because theoretically PointNet's performance is not as good as kernel point convolution's, and PointNet only learns global features, not local features. Perhaps PointNet++ would perform better?

yewzijian commented 11 months ago

Oh yes, PointNet++ will be better. I thought you were referring to running PointNet on each point cluster separately, in which case it would be similar to PointNet++.

lcxiha commented 11 months ago

Yes, I would like to run the PointNet network on both the source and target point clouds to extract features, instead of the kernel point convolution part of your algorithm. In this case, are PointNet and PointNet++ similar? Could I use either of them? In addition, I have found that the kernel point convolution network not only extracts point cloud features but also downsamples the point clouds. Therefore, do I need to use a PointNet or PointNet++ network with pooling layers in place of the kernel point convolution network?

yewzijian commented 11 months ago

Yes, pooling layers are necessary. The attention layers run on downsampled point clouds.
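
As a rough illustration of what such a replacement needs to produce (a hypothetical sketch, not RegTR code): group points into voxels and max-pool per-point features within each voxel, i.e. a per-cluster PointNet as discussed above, yielding the downsampled points and features that the attention layers then consume:

```python
import numpy as np

def pointnet_style_pool(points: np.ndarray, feats: np.ndarray,
                        voxel_size: float):
    """Voxel-grid downsampling with per-voxel max pooling of features.
    Returns voxel centroids and pooled features, one row per occupied
    voxel, mimicking what a backbone must output before attention."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    keys, inv = np.unique(coords, axis=0, return_inverse=True)
    n = len(keys)
    centroids = np.zeros((n, 3))
    pooled = np.full((n, feats.shape[1]), -np.inf)
    counts = np.zeros(n)
    for p, f, k in zip(points, feats, inv):
        centroids[k] += p
        counts[k] += 1
        pooled[k] = np.maximum(pooled[k], f)  # PointNet-style max pool
    centroids /= counts[:, None]
    return centroids, pooled
```

In a real backbone the per-point features would first pass through a shared MLP before pooling; the point here is only the shape of the operation, i.e. fewer points out than in, each carrying a feature aggregated from its cluster.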

lcxiha commented 11 months ago

OK! Thanks a lot!

lcxiha commented 10 months ago

Hello, I found that when I load the trained model, I can output the transformation matrix between two point clouds. However, even when the model is given two point clouds that do not overlap, a transformation matrix is still output. So how can I determine whether two point clouds overlap simply by looking at the output transformation matrix?

lcxiha commented 10 months ago

Hello, I have a question: in this algorithm, the role of kernel point convolution is to downsample the point cloud and extract its local features. Is the role of the transformer to extract the global features of the point cloud? Looking forward to your reply.

yewzijian commented 10 months ago

There's no point-cloud-level global feature. The transformer enhances the individual point features by letting them interact with each other, thus improving the matching performance.

lcxiha commented 10 months ago

Couldn't traditional convolutional networks also enhance the individual point features by letting them interact with each other? Or is the performance of traditional convolutions slightly inferior to that of transformers?

lcxiha commented 10 months ago

Sorry to bother you again, but I would like to ask why this algorithm cannot use a transformer directly to extract features, and instead adds a kernel point convolution.

yewzijian commented 10 months ago

It’s trickier to run attention layers on large raw point clouds due to memory constraints.

lcxiha commented 10 months ago

Thanks a lot! That is to say, do we only need a network that downsamples the point cloud before the transformer? Or is a network required that performs both downsampling and feature extraction?

yewzijian commented 10 months ago

It's good to have meaningful features for the downsampled point cloud.

lcxiha commented 10 months ago

Thanks a lot!

lcxiha commented 9 months ago

Hello, I have found that 3DMatch dataset scenes are approximately 3 m x 2 m. If the dataset is 200 m x 100 m or even larger, can this algorithm framework still be applied? Looking forward to your reply.