zju3dv / LoFTR

Code for "LoFTR: Detector-Free Local Feature Matching with Transformers", CVPR 2021, T-PAMI 2022
https://zju3dv.github.io/loftr/
Apache License 2.0
2.31k stars 361 forks

how to create ground truth? #254

Open trand2k opened 1 year ago

trand2k commented 1 year ago

Hi authors, thank you for your repo. I want to train your model on my custom dataset and have some questions:

  1. What is the ground truth for your model? I see that you generate pairs of keypoints using depth images, is that right?
  2. My dataset doesn't have depth images; can I label pairs of keypoints myself and use them as ground truth?
  3. Can you explain how to use depth images to find matching keypoints? Thanks for your help.
ACSL-ricardo commented 1 year ago

Check issue https://github.com/zju3dv/LoFTR/issues/243 for number 2: you will need to refactor a few parts of the code, and you don't need the depth-based supervision in that case, but you will have to build your own "supervisor". As long as your dataset has depth, you can build your dataset class by filling in the important keys; the supervision then does its job, and you can also check the coarse_matching.py module. That's all I understand, I hope it helps (I'm not one of the authors, just one more enthusiast here). Edit: I forgot to mention that you will need to add your dataset class to the data.py flow, which is the data loader.
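
To make question 3 concrete: the depth-based ground truth works by sampling grid points in image0, back-projecting them to 3D with the depth map and intrinsics, transforming them with the relative pose, and projecting them into image1. Below is a minimal NumPy sketch of that idea, not the repo's exact code; the names (`K0`, `K1`, `T_0to1`) are illustrative, and the real supervision additionally checks depth consistency in the second image to reject occluded points.

```python
import numpy as np

def warp_with_depth(pts0, depth0, K0, K1, T_0to1):
    """pts0: (N, 2) pixel coords in image0; depth0: (H, W) depth map.
    Returns (N, 2) reprojected coords in image1 and a validity mask."""
    d = depth0[pts0[:, 1].astype(int), pts0[:, 0].astype(int)]  # sample depth
    valid = d > 0                                               # depth holes are invalid
    # back-project to 3D points in camera-0 coordinates
    pts_h = np.concatenate([pts0, np.ones((len(pts0), 1))], axis=1)  # (N, 3) homogeneous
    xyz0 = (np.linalg.inv(K0) @ pts_h.T) * d                         # (3, N)
    # move into camera-1 coordinates and project with K1
    xyz1 = T_0to1[:3, :3] @ xyz0 + T_0to1[:3, 3:4]
    uv1 = K1 @ xyz1
    return (uv1[:2] / uv1[2:]).T, valid                              # (N, 2), (N,)
```

Points whose reprojection lands outside image1, or whose warped depth disagrees with image1's depth at the landing pixel, are dropped; the surviving pairs are then quantized to the coarse grid to build the matching ground truth.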

trand2k commented 1 year ago

Thanks for your response, I now understand how to create the ground truth. But building my own "supervisor" is a difficult task: if I label pairs of points in the two images, some areas may be missed by the fine-level supervision. Does that affect the results?

ACSL-ricardo commented 1 year ago

It probably would affect your results. Take into consideration how the loss function is calculated: it combines the coarse-level and fine-level losses. I recommend you keep using RGB-D datasets that are easy to fit to the project, like https://cvg.cit.tum.de/data/datasets
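
For reference, that two-part loss is roughly a sum of a coarse matching term and a fine refinement term. This is a schematic sketch only (the repo's actual loss adds weighting and other details), assuming the ground-truth matrix and fine targets have already been built by the supervisor:

```python
import torch

def two_level_loss(conf_matrix, conf_gt, fine_pred, fine_gt, fine_weight=1.0):
    # coarse level: negative log-likelihood on ground-truth matching cells
    pos = conf_gt == 1
    loss_coarse = -torch.log(conf_matrix[pos] + 1e-6).mean()
    # fine level: l2 distance between predicted and ground-truth refinements
    loss_fine = ((fine_pred - fine_gt) ** 2).sum(-1).mean()
    return loss_coarse + fine_weight * loss_fine
```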

trand2k commented 1 year ago

Did you try training SuperPoint and SuperGlue for this task? It seems like it could be helpful in my case.

JiamuR commented 1 year ago

Have you successfully trained the LoFTR model using a custom dataset? I'm also in the process of training it with my own dataset, without using depth information. However, I've encountered some challenges in creating my dataset and understanding the training process. I'd like to ask you a few questions. I would greatly appreciate your assistance.

  1. For our own dataset, how can we create the h5 (depth) and npz files needed for proper training? Could you provide guidance on creating npz files that contain the five required fields?
  2. In the context of LoFTR training, is it possible to exclude depth information, i.e. not use the h5 (depth) files in the dataset?
  3. If we wish to create a dataset for training, how should we modify the corresponding code?

trand2k commented 1 year ago

I have some points for you:

  1. If you have video from a mono camera, you can use a structure-from-motion library to generate a depth image and a pose for each frame, and use those to train LoFTR.
  2. Yes. Note that LoFTR has two levels, a coarse level and a fine level; I only trained the coarse level on my dataset. If you label the pairs yourself, then in the loss function you need to filter out all patches at the 1/8-resolution coarse level that have no labeled keypoint match before pushing the confidence matrix into the cross-entropy loss (see the sketch below).
  3. My code belongs to my company and is confidential, but you can follow these instructions to train LoFTR. Good luck.
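
Point 2 above is the key subtlety with hand-labeled pairs. Here is a minimal sketch of what "filter out unlabeled patches" could look like, assuming a ground-truth matrix over the 1/8-resolution coarse grid (all names are illustrative, not the repo's code):

```python
import torch

def masked_coarse_loss(sim_matrix, gt_matrix):
    """sim_matrix: (B, N0, N1) similarity logits between coarse patches.
    gt_matrix: same shape, 1 where a patch pair is a labeled match."""
    labeled = gt_matrix.sum(dim=2) > 0            # (B, N0): patches that carry a label
    log_p = torch.log_softmax(sim_matrix, dim=2)  # match distribution per image0 patch
    loss = -(log_p * gt_matrix).sum(dim=2)        # cross-entropy per patch, (B, N0)
    # only labeled patches contribute: "no label" does not mean "no match"
    return loss[labeled].mean()
```

Without that mask, every unlabeled patch would be trained as a negative, which penalizes true matches that simply weren't annotated.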

JiamuR commented 1 year ago

Thank you for your response, it helped a lot. I'm new to this field and have just started working on this project, so I have some questions; thank you for listening. Here's my current situation: I plan to match feature points between drone-captured images and satellite images to improve localization. I already have aerial images and the corresponding satellite imagery.

  1. Generating depth images is challenging in this setup. I'd like to train without depth information (without the h5 depth files), but I'm not sure how to remove the depth dependency, what to consider during training, or whether it's possible at all.
  2. Can you be more specific about which structure-from-motion library to use to generate image poses? How should I generate the intrinsics, poses, and pair_infos for the npz file?
  3. Once the npz file is prepared, is the dataset ready for training? Are there any additional considerations during the training process?

I greatly appreciate your guidance; your insights will help me gain a deeper understanding of this field.

trand2k commented 1 year ago

My work seems similar to yours; you can discuss it with my boss. You can find our demo here: HERE

trand2k commented 1 year ago

MY ANSWER:

  1. Yes, it's possible.
  2. For drone images, try OpenDroneMap. You should start by debugging OpenDroneMap: do a native build and step through it.
  3. Yes: mono images, depth images, and the poses of the two cameras are all you need to train LoFTR.
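
To make the npz part concrete: the MegaDepth-style index files in this repo store the five fields the questions above mention. Below is a hedged sketch of writing one for a custom scene; the exact `pair_infos` layout and the pose convention (world-to-camera vs. camera-to-world) should be double-checked against the dataset class you actually train with.

```python
import numpy as np

# dummy calibration, for illustration only
K = np.array([[1000., 0., 320.],
              [0., 1000., 240.],
              [0., 0., 1.]])
T0, T1 = np.eye(4), np.eye(4)  # per-image 4x4 extrinsics

np.savez(
    "scene_0000.npz",
    image_paths=["images/0000.jpg", "images/0001.jpg"],
    depth_paths=["depths/0000.h5", "depths/0001.h5"],   # h5 depth maps
    intrinsics=np.stack([K, K]),                        # (N, 3, 3)
    poses=np.stack([T0, T1]),                           # (N, 4, 4)
    # one entry per usable pair: ((idx0, idx1), overlap_score, extra)
    pair_infos=np.array([((0, 1), 0.5, None)], dtype=object),
)
```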

JiamuR commented 1 year ago

Thank you for your reply; your answer is very helpful. I can't reach your demo site right now, but thank you very much for your advice. If there is any follow-up, I will contact you. Thank you.

oooooha commented 5 months ago

My work is similar to yours, focusing on training and matching based on RGB images. Have you achieved this? I have some questions to ask. Thank you very much.

trand2k commented 5 months ago

Feel free to ask!

oooooha commented 5 months ago

  1. How can I create dataset labels without using h5 files, so as to train LoFTR on an RGB dataset where the ground truth is the homography between image pairs? Is that possible?
  2. Specifically, how should I prepare the npz files for my own dataset?

I hope you can share some of your experience with me. My email is dllyoyo52@gmail.com.
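
On question 1: if every training pair comes with a known 3x3 homography (as with synthetic warps), the coarse ground truth can be built directly by warping the 1/8-grid cell centers of image0 into image1, no depth needed. A minimal sketch under that assumption (names are illustrative):

```python
import numpy as np

def coarse_gt_from_homography(H, hw0, hw1, scale=8):
    """Build an (N0, N1) ground-truth match matrix over coarse cells, assuming
    a known homography H that maps image0 pixels to image1 pixels."""
    h0, w0 = hw0[0] // scale, hw0[1] // scale
    h1, w1 = hw1[0] // scale, hw1[1] // scale
    # centers of image0's coarse cells, in pixel coordinates
    ys, xs = np.mgrid[:h0, :w0]
    pts0 = np.stack([xs.ravel() * scale + scale / 2,
                     ys.ravel() * scale + scale / 2,
                     np.ones(h0 * w0)])                     # (3, N0) homogeneous
    pts1 = H @ pts0
    pts1 = pts1[:2] / pts1[2]                               # warped pixel coords
    # snap to the nearest coarse cell in image1
    j = np.round((pts1 - scale / 2) / scale).astype(int)    # (2, N0): x, y cell indices
    inside = (j[0] >= 0) & (j[0] < w1) & (j[1] >= 0) & (j[1] < h1)
    gt = np.zeros((h0 * w0, h1 * w1))
    idx0 = np.arange(h0 * w0)[inside]
    gt[idx0, j[1, inside] * w1 + j[0, inside]] = 1
    return gt
```

The resulting matrix plugs into a masked cross-entropy like the one sketched earlier in this thread.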

fighterzzzh commented 4 months ago

Hello, your response has been incredibly insightful. If my goal is to work with low-texture objects, such as containers, can I use COLMAP to generate depth maps? Additionally, given that these depth maps are often imperfect, can they still yield good results when used for training?

trand2k commented 4 months ago
Thanks for your interest. My answer is that it may be possible: many factors will affect the results, but give it a try. I haven't experimented with generating depth maps with COLMAP myself, but OpenDroneMap also uses COLMAP to generate depth images, so I think it's much the same; it's one way to do it :))

fighterzzzh commented 4 months ago
Your answer is highly appreciated. Is there a substantial difference between training with my own data from scratch or using an existing model for transfer learning? I should probably try both, but training from scratch might put too much strain on my current computer.

trand2k commented 4 months ago
I didn't train from scratch, just transfer learning from the pretrained model, so I can't advise you on that :)). But I do have one more piece of advice: check your data loader carefully before training. Good luck and keep moving forward :))
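
That data-loader check is worth making concrete: before training, dump a few samples and draw the ground-truth matches on the image pair to confirm the labels actually line up. A small self-contained sketch with OpenCV (the `pts0`/`pts1` arrays stand in for whatever correspondence keys your loader produces):

```python
import cv2
import numpy as np

def draw_gt_matches(img0, img1, pts0, pts1, out_path="check.png"):
    """img0/img1: HxW grayscale images; pts0/pts1: (N, 2) matching pixel
    coordinates taken from one sample of the data loader."""
    h = max(img0.shape[0], img1.shape[0])
    canvas = np.zeros((h, img0.shape[1] + img1.shape[1], 3), np.uint8)
    canvas[:img0.shape[0], :img0.shape[1]] = cv2.cvtColor(img0, cv2.COLOR_GRAY2BGR)
    canvas[:img1.shape[0], img0.shape[1]:] = cv2.cvtColor(img1, cv2.COLOR_GRAY2BGR)
    for (x0, y0), (x1, y1) in zip(pts0.astype(int), pts1.astype(int)):
        # green line from each point in image0 to its labeled match in image1
        cv2.line(canvas, (x0, y0), (x1 + img0.shape[1], y1), (0, 255, 0), 1)
    cv2.imwrite(out_path, canvas)
```

If the lines are visibly wrong (offset, mirrored, or crossing randomly), the labels, intrinsics, or pose convention in the loader need fixing before any training run.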

llyyccccc commented 2 months ago
Hello, is it possible to construct a dataset from infrared and visible-light images for training? Looking forward to your response.