PRBonn / bonnet

Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics.
GNU General Public License v3.0

Retraining with Personal Data #14

Closed AhmedElsafy closed 6 years ago

AhmedElsafy commented 6 years ago

I am trying to retrain with my own dataset. In dataset/aux_scripts it says: "Use the output format extracted from the BAG that uses images and color labels created by Philipp's label creator."

Is this a tool I should use first for my annotation?

tano297 commented 6 years ago

Hi,

Unfortunately, due to a collaboration with a company, the labeling tool we use is proprietary, but here are some you can give a shot:

- https://bitbucket.org/ueacomputervision/image-labelling-tool
- https://github.com/tzutalin/labelImg
- https://github.com/davidjesusacu/polyrnn-pp
- http://is-innovation.eu/ratsnake/
- https://rectlabel.com

I will close this since it is not related to the framework (and to avoid spamming everybody), but feel free to email me with non-framework-related questions.

AhmedElsafy commented 6 years ago

Thanks Andres,

I am trying to retrain the network on my data. Do you have a guide for the data/label format and the directory structure it should be stored in?

For example, the RGB folders, the train/test split, and the format of the segmented label images?

Thanks


tano297 commented 6 years ago

Hi,

I put a toy example on our server of the dataset you get from pre-processing Cityscapes (in this case only one image) with the Cityscapes parser included in the dataset folder's aux scripts.

http://ipb.uni-bonn.de/html/projects/bonnet/datasets/cityscapes_toy.tar.gz

By looking at that script, along with this dataset and the data.yaml corresponding to Cityscapes, you should get a better idea of how the data format works!
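For anyone inspecting the toy tarball, a minimal sketch of pairing each RGB image with its label image. Note the `img`/`lbl` folder names and the matching-filename assumption are guesses for illustration; the actual layout should be checked against the extracted toy dataset and the parser script mentioned above:

```python
import os

def list_pairs(root: str, split: str = "train"):
    """Pair each RGB image with its label image, assuming a layout like
    root/split/img and root/split/lbl with matching filenames.
    (These folder names are assumptions; check the toy tarball for the
    layout bonnet's parsers actually produce.)"""
    img_dir = os.path.join(root, split, "img")
    lbl_dir = os.path.join(root, split, "lbl")
    pairs = []
    for name in sorted(os.listdir(img_dir)):
        lbl_path = os.path.join(lbl_dir, name)
        # Only keep images that actually have a corresponding label file
        if os.path.isfile(lbl_path):
            pairs.append((os.path.join(img_dir, name), lbl_path))
    return pairs
```

A dataloader would then iterate over `list_pairs(root, "train")` and `list_pairs(root, "valid")` to feed image/label pairs to training.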

AhmedElsafy commented 6 years ago

Hi Andres,

I am just wondering, for training the CWC model, do I have to convert my segmented label images into single-channel images through the dataset aux scripts? Or can I just load my data with colored masks and put the color map in the data.yaml file?

Thanks for your kind support

Regards
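For reference, a minimal sketch of the color-mask-to-class-index conversion the question above is about. The `COLOR_MAP` values here are made up for illustration; the real class-to-color mapping would come from the dataset's data.yaml:

```python
import numpy as np

# Hypothetical color map: class index -> RGB color.
# In practice this mapping lives in the dataset's data.yaml.
COLOR_MAP = {
    0: (0, 0, 0),      # background
    1: (0, 255, 0),    # class 1
    2: (255, 0, 0),    # class 2
}

def color_to_label(mask_rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 color mask to an HxW array of class indices."""
    labels = np.zeros(mask_rgb.shape[:2], dtype=np.uint8)
    for idx, color in COLOR_MAP.items():
        # Mark every pixel whose RGB triple matches this class color
        labels[np.all(mask_rgb == color, axis=-1)] = idx
    return labels
```

The resulting single-channel image can be saved as a grayscale PNG, which is the usual input format for cross-entropy-style segmentation training.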
