ShanWang-Shan / HomoFusion

Code for ICCV2023 paper: Homography Guided Temporal Fusion for Road Line and Marking Segmentation

Can this semantic segmentation process be visualized? Please detail the related settings for Apollo #4

Open bobododosjl opened 1 month ago

ShanWang-Shan commented 3 weeks ago

Thank you for your interest in our work. I’m not entirely sure I fully understand what you mean by "visualize process." The segmentation results are displayed in color, as shown in the paper. Most of our settings can be found in the YAML files. For the specific settings regarding Apollo, please refer to: config/data/apolloscape.yaml.
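As an illustration of reading such a YAML config, here is a minimal sketch using PyYAML. The keys shown (`dataset`, `num_frames`, `image_size`) are assumptions for illustration, not the actual schema of `config/data/apolloscape.yaml`:

```python
import yaml  # pip install pyyaml

# A tiny illustrative config in the style of a dataset YAML; these keys
# are placeholders, not the repository's real schema.
sample = """
dataset: apolloscape
num_frames: 5
image_size: [512, 960]
"""
cfg = yaml.safe_load(sample)
print(cfg["dataset"], cfg["num_frames"])
```

In practice you would pass the path `config/data/apolloscape.yaml` to `open()` and `yaml.safe_load()` the file handle instead of a string.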

bobododosjl commented 3 weeks ago

Can these segmentation results be shown by running your open-source code?


ShanWang-Shan commented 3 weeks ago

Yes, the segmentation results are generated in PNG format with color-coded segmentation.
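For reference, a color-coded segmentation PNG like the one described can be produced with a short sketch like this. The palette and class labels below are illustrative assumptions, not the repository's actual color map:

```python
import numpy as np
from PIL import Image

# Hypothetical color map: class index -> RGB (not the repo's actual palette).
PALETTE = {
    0: (0, 0, 0),        # background
    1: (255, 255, 255),  # lane line
    2: (0, 255, 0),      # road marking
}

def colorize(mask: np.ndarray) -> Image.Image:
    """Map an (H, W) array of class indices to a color-coded RGB image."""
    rgb = np.zeros((*mask.shape, 3), dtype=np.uint8)
    for cls, color in PALETTE.items():
        rgb[mask == cls] = color
    return Image.fromarray(rgb)

# Example: a tiny 2x3 mask, saved as a PNG.
mask = np.array([[0, 1, 2], [1, 1, 0]], dtype=np.uint8)
colorize(mask).save("segmentation.png")
```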

bobododosjl commented 3 weeks ago

ok

bobododosjl commented 2 weeks ago

Can this method be integrated into SLAM framework?

ShanWang-Shan commented 2 weeks ago

We have not tried this yet. However, I believe it is possible to use our segmentation as an additional feature extractor to supplement SLAM for matching.

bobododosjl commented 2 weeks ago

Thank you.

bobododosjl commented 2 weeks ago

Can the dataset lane_marking_examples.tar.gz be used for testing?

ShanWang-Shan commented 2 weeks ago

Our model was trained on the Apolloscape dataset. If your data is also from this dataset and you have the camera extrinsics, the data can be used for testing. However, if the data comes from a different source, fine-tuning with some of the new data would be necessary.

bobododosjl commented 2 weeks ago

I have already downloaded the Apollo checkpoint. When I run 'python3 scripts/benchmark_val.py', it downloads "https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b6-c76e70fd.pth". Is that right?

ShanWang-Shan commented 1 week ago

That is correct: the original training starts from EfficientNet's pre-trained weights. The script first downloads the EfficientNet pre-trained weights, and then loads our specific pre-trained weights on top of them.
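The two-stage loading described above (backbone weights first, then the task checkpoint) can be sketched in PyTorch roughly as follows. The model here is a stand-in and the checkpoint file name is a placeholder, not the repository's actual code:

```python
import torch
import torch.nn as nn

# Stand-in for the backbone; in the real setup this is EfficientNet-b6,
# whose pre-trained weights are downloaded automatically on first use.
model = nn.Linear(4, 2)

# Stage 1: load the (here, simulated) backbone pre-trained weights.
torch.save(model.state_dict(), "backbone_pretrained.pth")
model.load_state_dict(torch.load("backbone_pretrained.pth"))

# Stage 2: load the task-specific checkpoint on top.
# "homofusion_apollo.ckpt" is a placeholder name, not the actual file.
torch.save({"state_dict": model.state_dict()}, "homofusion_apollo.ckpt")
ckpt = torch.load("homofusion_apollo.ckpt")
# strict=False tolerates keys that exist in only one of the two state dicts.
model.load_state_dict(ckpt["state_dict"], strict=False)
```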

bobododosjl commented 1 week ago

I downloaded the EfficientNet pre-trained weights, but I still get the error. [Uploading log.txt…]()

ShanWang-Shan commented 1 week ago

I was not able to open your error file.

bobododosjl commented 1 week ago

I have sent it to your email.

bobododosjl commented 1 week ago

Is the data from the lane_segmentation part of the Apollo dataset?

ShanWang-Shan commented 1 week ago

Yes, the data is from the lane_segmentation of the Apollo dataset, and I have updated the README to clarify this. I have also included a link to download the pose information we used, which comes from the Apollo self-localization dataset.