Open bobododosjl opened 1 month ago
Can these segmentation results be reproduced by running your open-source code?
---Original--- Date: Sun, Sep 29, 2024, 08:14 AM; Subject: Re: [ShanWang-Shan/HomoFusion] Can this semantic segmentation process be visualized? Please detail the related settings for Apollo (Issue #4)
Thank you for your interest in our work. I’m not entirely sure I fully understand what you mean by "visualize process." The segmentation results are displayed in color, as shown in the paper. Most of our settings can be found in the YAML files. For the specific settings regarding Apollo, please refer to: config/data/apolloscape.yaml.
Yes, the segmentation results are generated in PNG format with color-coded segmentation.
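As a minimal illustration of what "PNG with color-coded segmentation" means, here is a sketch (not the repository's actual code) that maps per-pixel class IDs to RGB colors. The palette below is hypothetical; the real class colors follow the Apolloscape lane-marking label definitions.

```python
# Hypothetical palette: class ID -> RGB tuple. The actual colors are
# defined by the Apolloscape lane-segmentation label specification.
PALETTE = {
    0: (0, 0, 0),        # background
    1: (255, 255, 255),  # solid lane line (hypothetical)
    2: (0, 255, 255),    # dashed lane line (hypothetical)
}

def colorize(mask):
    """Map a 2D grid of class IDs to a grid of RGB tuples.

    Unknown class IDs fall back to black (background).
    """
    return [[PALETTE.get(c, (0, 0, 0)) for c in row] for row in mask]

rgb = colorize([[0, 1], [2, 0]])
```

An image library such as Pillow can then write `rgb` out as a PNG file.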
ok
Can this method be integrated into a SLAM framework?
We have not tried this yet. However, I believe it is possible to use our segmentation as an additional feature extractor to supplement SLAM for matching.
thank u
Can the lane_marking_examples.tar.gz dataset be used for testing?
Our model was trained on the Apolloscape dataset. If your data is also from this dataset and you have the camera extrinsics, the data can be used for testing. However, if the data comes from a different source, fine-tuning with some of the new data would be necessary.
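For readers unsure what "camera extrinsics" refers to here, this is a generic sketch (not code from the HomoFusion repository) of assembling a 4x4 extrinsic matrix from a 3x3 rotation and a translation vector; the example pose values are hypothetical.

```python
# Generic sketch: build the homogeneous extrinsic matrix
#   [ R | t ]
#   [ 0 | 1 ]
# from a 3x3 rotation R and translation t, as nested Python lists.
def extrinsic(R, t):
    """Return the 4x4 [R | t; 0 0 0 1] matrix."""
    return [R[i] + [t[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

# Identity rotation, camera 1.5 m above the road plane (hypothetical pose).
E = extrinsic([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0.0, 0.0, -1.5])
```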
I have already downloaded the Apollo checkpoint. When I run 'python3 scripts/benchmark_val.py', it downloads "https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b6-c76e70fd.pth". Is that right?
That is correct: training starts from EfficientNet's pre-trained weights. The script first downloads the EfficientNet pre-trained weights, and then loads our specific pre-trained weights on top.
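The two-stage loading described above can be sketched conceptually with plain dicts standing in for PyTorch state_dicts: the backbone weights are loaded first, and the task-specific checkpoint then overwrites every key it covers. The key names and values here are hypothetical placeholders.

```python
# Conceptual sketch of two-stage weight loading (not actual repo code).
# Stage 1 supplies the EfficientNet backbone weights; stage 2 overwrites
# them with the HomoFusion checkpoint wherever the checkpoint has a key.
backbone = {"backbone.conv1.weight": "imagenet", "head.weight": "random"}
checkpoint = {"backbone.conv1.weight": "finetuned", "head.weight": "trained"}

state = dict(backbone)    # stage 1: EfficientNet pre-trained weights
state.update(checkpoint)  # stage 2: task-specific checkpoint takes priority
```

In PyTorch terms this corresponds to two successive `load_state_dict` calls on the same model.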
I downloaded the EfficientNet pre-trained weights, but I still hit the error. [Uploading log.txt…]()
I was not able to open your error file
I have sent it to your email.
Is the data from the lane_segmentation subset of the Apollo dataset?
Yes, the data is from the lane_segmentation of the Apollo dataset, and I have updated the README to clarify this. I have also included a link to download the pose information we used, which comes from the Apollo self-localization dataset.