WHU-USI3DV / Mobile-Seed

[IEEE RAL'24 & IROS'24] Mobile-Seed: Joint Semantic Segmentation and Boundary Detection for Mobile Robots
https://whu-usi3dv.github.io/Mobile-Seed/
BSD 2-Clause "Simplified" License

How to train on the Cityscapes dataset with Mobile-Seed? #3

Closed · Tranbaber closed this issue 6 months ago

Tranbaber commented 6 months ago

I have the following questions about training:

  1. Can I use tools/train.py for training?
  2. Where is the dataset path set?
  3. Which file contains the overall framework of the network?

Thank you very much for your help!
martin-liao commented 6 months ago
  1. Yes, tools/train.py can be used for training.
  2. You can find the detailed configurations in the configs/ folder. The _base_ folder contains the basic configurations for datasets, models, training schedules, and default runtime settings (a sketch of the typical config layout follows below).
  3. Models are located in the mmseg/models/ folder; the framework is split into several submodules, each saved in its respective subfolder. For more details, please refer to the MMSegmentation 0.29.1 tutorial.
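For orientation, a typical MMSegmentation 0.29.x top-level config is composed like the rough sketch below; the concrete _base_ entries here are generic upstream examples, not the actual Mobile-Seed files, so check the real configs for the exact contents.

```python
# Sketch of a typical MMSegmentation 0.29.x top-level config; the _base_
# entries are generic upstream examples, not the actual Mobile-Seed files.
# Training is then launched with:
#   python tools/train.py configs/<chosen_config>.py
_base_ = [
    '../_base_/models/segformer_mit-b0.py',   # model architecture
    '../_base_/datasets/cityscapes.py',       # dataset root, splits, pipelines
    '../_base_/default_runtime.py',           # logging / checkpoint defaults
    '../_base_/schedules/schedule_160k.py',   # optimizer and LR schedule
]
# Any field inherited from _base_ can be overridden below, e.g. batch size,
# crop size, or the number of training iterations.
```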
Tranbaber commented 6 months ago

Thanks for your help. I see many dataset config files in the configs/_base_/datasets folder. Which one do I need if I want to train the model for both the semantic segmentation and edge detection tasks? I would like to train the same pre-trained model as the one you released. @martin-liao

martin-liao commented 6 months ago

Use MS_tiny_cityscapes.py for the Cityscapes dataset.
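The dataset path itself is set in the _base_ dataset config. In stock MMSegmentation 0.29.x the Cityscapes entry looks roughly like the simplified sketch below (augmentation pipelines omitted; adjust data_root to where your copy of the dataset lives):

```python
# Simplified sketch of configs/_base_/datasets/cityscapes.py as shipped with
# upstream MMSegmentation 0.29.x.  Point data_root at your local copy of
# Cityscapes; the augmentation pipelines are omitted here for brevity.
dataset_type = 'CityscapesDataset'
data_root = 'data/cityscapes/'

data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='leftImg8bit/train',
        ann_dir='gtFine/train',
        pipeline=[]),
    val=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='leftImg8bit/val',
        ann_dir='gtFine/val',
        pipeline=[]))
```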

martin-liao commented 6 months ago

Oh... It seems that some hyperparameters in the data pre-processing code are not optimal. I am uploading the processed data now and will correct the mistakes later.

Tranbaber commented 6 months ago

[screenshot of the training command]

I downloaded the pre-trained model and trained with the command shown in the screenshot, but it reports that the model file cannot be found. Where should the model file be placed? @martin-liao

Tranbaber commented 6 months ago

> Oh... It seems that some hyperparameters in the data pre-processing code are not optimal. I am uploading the processed data now and will correct the mistakes later.

Thank you for your great contribution to the open source community!

Tranbaber commented 6 months ago

> I downloaded the pre-trained model and trained with the command shown in the screenshot, but it reports that the model file cannot be found. Where should the model file be placed? @martin-liao

Sorry, I was careless. Problem solved. @martin-liao

martin-liao commented 6 months ago

> I downloaded the pre-trained model and trained with the command shown in the screenshot, but it reports that the model file cannot be found. Where should the model file be placed? @martin-liao

Please put it under the ckpt/ folder. We recommend using a Linux environment for convenience.
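A minimal sketch of how a config can point at weights placed under ckpt/ (the field names follow standard MMSegmentation/MMCV conventions and the file names below are only placeholders; check the actual config for the exact form it expects):

```python
# Initialise the whole model from a checkpoint stored in ./ckpt; the path is
# relative to the directory training is launched from (the repository root).
load_from = 'ckpt/mobile_seed_tiny_cityscapes.pth'  # placeholder file name

# Alternatively, load only pretrained backbone weights:
model = dict(
    backbone=dict(
        init_cfg=dict(type='Pretrained',
                      checkpoint='ckpt/backbone_pretrained.pth')))  # placeholder
```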

Tranbaber commented 6 months ago

> > I downloaded the pre-trained model and trained with the command shown in the screenshot, but it reports that the model file cannot be found. Where should the model file be placed? @martin-liao
>
> Please put it under the ckpt/ folder. We recommend using a Linux environment for convenience.

OK, thanks for the advice. I was just trying to test it on my laptop first; the actual training will be done on Linux! @martin-liao

Tranbaber commented 6 months ago

[screenshot of the error]

configs/_base_/datasets/cityscapes.py: [screenshot]

configs/_base_/datasets/cityscapes_boundary.py: [screenshot]

Do you see any problem with my dataset path settings? I checked the dataset and these files are not missing. How can I solve this problem? @martin-liao

Tranbaber commented 6 months ago

[screenshot]

Hello! I found a small bug in your project: in mmseg/datasets/cityscapes.py you look for label filenames with the suffix "_gtFine_labelTrainIds.png", but data_preprocess/cityscapes-preprocess/code/demoPreproc_gen_png_label.m generates label filenames with the suffix "_gtFine_trainIds.png", so there is a mismatch. I am not sure whether this is only an issue on my side. After modifying it I was able to train the network properly. Thanks again! @martin-liao

martin-liao commented 6 months ago

No, "_gtFine_labelTrainIds.png" is right. Please refer to cityscapesscripts for converting the combined semantic+instance labels to semantic-only labels.
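If the *_gtFine_labelTrainIds.png files are missing, they can be generated directly with cityscapesscripts; a minimal sketch (json2labelImg is the standard cityscapesscripts helper, and the dataset path below is a placeholder):

```python
import glob
import os.path as osp

from cityscapesscripts.preparation.json2labelImg import json2labelImg

# Convert every *_gtFine_polygons.json annotation into a train-ID label map,
# producing the *_gtFine_labelTrainIds.png files the dataloader looks for.
gt_dir = 'data/cityscapes/gtFine'  # adjust to your dataset location
for json_file in glob.glob(osp.join(gt_dir, '*', '*', '*_gtFine_polygons.json')):
    label_file = json_file.replace('_gtFine_polygons.json',
                                   '_gtFine_labelTrainIds.png')
    json2labelImg(json_file, label_file, 'trainIds')
```

Upstream MMSegmentation also ships tools/convert_datasets/cityscapes.py, which performs the same conversion.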

Tranbaber commented 6 months ago

[screenshot of the training log]

Okay, thank you very much for your help. Does my current training process look fine? I would also like to ask about checkpoint saving: how many .pth files will be saved during a training session, and where will they be saved? @martin-liao

martin-liao commented 6 months ago

Hi, I have just fixed the error in the data pre-processing code. Please use the latest code to generate the semantic boundary labels for training!

Tranbaber commented 6 months ago

OK, thanks a lot!

martin-liao commented 6 months ago

The generated semantic boundary map for aachen_000000_000019.png should be similar to this: [image: aachen_000000_000019_gtFine_edge]
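A quick way to sanity-check a generated boundary label against that reference (assuming the edge map is saved as a single-channel PNG, which may differ from the actual output format of the pre-processing script):

```python
import numpy as np
from PIL import Image

# Load a generated semantic boundary map and print basic statistics,
# to compare against the reference image linked above.
edge = np.array(Image.open('aachen_000000_000019_gtFine_edge.png'))
print('shape:', edge.shape, 'dtype:', edge.dtype)
print('boundary pixels: %.2f%%' % (100.0 * np.count_nonzero(edge) / edge.size))
```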

martin-liao commented 6 months ago

I will upload the pre-processed data to Baidu Disk and OneDrive later.

Tranbaber commented 6 months ago

> I will upload the pre-processed data to Baidu Disk and OneDrive later.

Okay, thanks! I have another small question: if I want to do both instance segmentation and edge detection, what do I need to change? Since the dataset contains _gtFine_instanceIds.png files, is it possible to train for instance segmentation? @martin-liao

martin-liao commented 6 months ago

The answer is yes:

  1. Generate instance labels following cityscapesscripts for supervision.
  2. The semantic boundary should be instance-sensitive rather than instance-insensitive for the instance segmentation task (a toy sketch of the difference follows below). If you are interested in the distinction between instance-sensitive and instance-insensitive boundaries, we suggest reading Simultaneous Edge Alignment and Learning (SEAL).
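To make the distinction concrete, here is a toy sketch (plain NumPy, simple 4-neighbour label transitions; the actual SEAL/Mobile-Seed boundary generation is more involved) of how the two kinds of boundaries differ under the Cityscapes label encoding:

```python
import numpy as np

def label_transitions(label_map):
    """Mark pixels whose label differs from the right or bottom neighbour."""
    edges = np.zeros(label_map.shape, dtype=bool)
    edges[:, :-1] |= label_map[:, :-1] != label_map[:, 1:]
    edges[:-1, :] |= label_map[:-1, :] != label_map[1:, :]
    return edges

# Toy example: two touching instances of the same class.  Cityscapes encodes
# instance IDs as class_id * 1000 + instance_index, e.g. 26000 and 26001 for
# two "car" instances (class id 26).
instance_ids = np.array([[26000, 26000, 26001, 26001],
                         [26000, 26000, 26001, 26001]])
semantic_ids = instance_ids // 1000

# Instance-insensitive boundaries: none here, because the semantic class is
# the same everywhere, so the two touching cars are not separated.
print(label_transitions(semantic_ids).any())   # False
# Instance-sensitive boundaries: the contact edge between the two cars shows up.
print(label_transitions(instance_ids).any())   # True
```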
Tranbaber commented 6 months ago

> The answer is yes:
>
> 1. Generate instance labels following cityscapesscripts for supervision.
> 2. The semantic boundary should be instance-sensitive rather than instance-insensitive for the instance segmentation task. If you are interested in the distinction between instance-sensitive and instance-insensitive boundaries, we suggest reading Simultaneous Edge Alignment and Learning (SEAL).

OK, I will try it. Thanks!

martin-liao commented 6 months ago

As the problem has been solved, we are closing this issue now.