lattice-ai / DeepLabV3-Plus

Tensorflow 2.3.0 implementation of DeepLabV3-Plus
https://keras.io/examples/vision/deeplabv3_plus/

Model deeplabv3-plus-human-parsing-resnet-50-backbone.h5 #5

Closed dnalexen closed 2 years ago

dnalexen commented 3 years ago

Hello, can you share the file deeplabv3-plus-human-parsing-resnet-50-backbone.h5 for those of us who do not have long-term access to a GPU to train the model?

soumik12345 commented 3 years ago

Hi @dnalexen , apologies for the delay in response. We would be publishing the pre-trained models for all our projects as part of our official release which is scheduled for February 2021.

NyanSwanAung commented 3 years ago

hey @dnalexen , if you're using Google Colab with a supported GPU, you can train the model with batch_size=7 for both the train and validation datasets. You can simply change it in their config file. However, if you train with the default batch_size=8, Colab will crash with an out-of-memory (OOM) error. Shout out to the lattice-ai team for providing this phenomenal open-source code 💯⭐️ Cheers!
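For reference, the change amounts to something like the sketch below. The key names here are hypothetical; check the repo's actual train config for the real structure.

```python
# Hypothetical sketch of the batch-size change described above; the actual
# config file in the repo may use different key names or nesting.
train_config = {
    "batch_size": 7,      # reduced from the default 8 to avoid OOM on Colab GPUs
    "val_batch_size": 7,  # keep the validation batch size in step
}

print(train_config["batch_size"])  # → 7
```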

ndhuyvn1994 commented 3 years ago

> hey @dnalexen , if you're using Google Colab with a supported GPU, you can train the model with batch_size=7 for both the train and validation datasets. You can simply change it in their config file. However, if you train with the default batch_size=8, Colab will crash with an out-of-memory (OOM) error. Shout out to the lattice-ai team for providing this phenomenal open-source code 💯⭐️ Cheers!

Hi. I tried to run on Google Colab with a batch size of 7. However, I got the error `TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.` Have you seen this error before? Thank you so much!

NyanSwanAung commented 3 years ago

@dhuynguyen94 No I didn't. How did you download the dataset? Please share your script with me.

andrew-healey commented 3 years ago

> hey @dnalexen , if you're using Google Colab with a supported GPU, you can train the model with batch_size=7 for both the train and validation datasets. You can simply change it in their config file. However, if you train with the default batch_size=8, Colab will crash with an out-of-memory (OOM) error. Shout out to the lattice-ai team for providing this phenomenal open-source code 💯⭐️ Cheers!

> Hi. I tried to run on Google Colab with a batch size of 7. However, I got the error `TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.` Have you seen this error before? Thank you so much!

The source of the problem is an error in how CamVid is loaded. I fixed it by running:

```bash
bash dataset/camvid.sh
mv camvid/ dataset/camvid
```

This fixes the problem because the config file looks for images in dataset/camvid, not ./camvid.
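My reading of why the wrong path produces that specific `TypeError` (an assumption, not confirmed against the repo's loader): a glob over a missing directory silently returns an empty list, and TensorFlow infers `float32` as the dtype of an empty list, which later fails inside `tf.io.read_file`, since it expects string filenames. A minimal sketch:

```python
from glob import glob

# If CamVid was unpacked to ./camvid instead of dataset/camvid, this glob
# silently returns an empty list rather than raising an error.
image_paths = glob("dataset/camvid/train/*.png")

if not image_paths:
    # TensorFlow infers dtype float32 for an empty Python list, so a tf.data
    # pipeline built from it later fails inside tf.io.read_file with:
    #   TypeError: Input 'filename' of 'ReadFile' Op has type float32
    #   that does not match expected type of string.
    print("No images found -- check that the dataset lives in dataset/camvid")
```

Checking the glob result before building the pipeline surfaces the real problem (a wrong path) instead of the confusing dtype error.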

EMHussain commented 3 years ago

> Hi @dnalexen , apologies for the delay in response. We would be publishing the pre-trained models for all our projects as part of our official release which is scheduled for February 2021.

Any update on the pre-trained model?

apolo74 commented 2 years ago

Same question again: could you share your pre-trained weights (deeplabv3-plus-human-parsing-resnet-50-backbone.h5), please? Thanks in advance, Boris