AIWintermuteAI / aXeleRate

Keras-based framework for AI on the Edge
MIT License

Example did not work #26

Closed Mjxkill closed 4 years ago

Mjxkill commented 4 years ago

Describe the bug

Hello,

I have installed aXeleRate (on OS X) to learn how to create a model:

```
conda create -n ml python=3.7
conda activate ml
pip install git+https://github.com/AIWintermuteAI/aXeleRate
git clone https://github.com/AIWintermuteAI/aXeleRate
```

The installation completes without errors.

I then start the test script:

```
python tests_training_and_inference.py
```

It crashes at the segmentation part:

```
Project folder projects/segment already exists. Creating a folder for new training session.
Segmentation
Failed to load pre-trained weights for the whole model. It might be because you didn't specify any or the weight file cannot be found
Current training session folder is projects/segment/2020-09-23_14-06-08
```

```
Model: "model_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_4 (InputLayer)         (None, 320, 240, 3)       0
conv1_pad (ZeroPadding2D)    (None, 322, 242, 3)       0
conv1 (Conv2D)               (None, 160, 120, 24)      648
conv1_bn (BatchNormalization (None, 160, 120, 24)      96
conv1_relu (ReLU)            (None, 160, 120, 24)      0
conv_dw_1 (DepthwiseConv2D)  (None, 160, 120, 24)      216
conv_dw_1_bn (BatchNormaliza (None, 160, 120, 24)      96
conv_dw_1_relu (ReLU)        (None, 160, 120, 24)      0
conv_pw_1 (Conv2D)           (None, 160, 120, 48)      1152
conv_pw_1_bn (BatchNormaliza (None, 160, 120, 48)      192
conv_pw_1_relu (ReLU)        (None, 160, 120, 48)      0
conv_pad_2 (ZeroPadding2D)   (None, 162, 122, 48)      0
conv_dw_2 (DepthwiseConv2D)  (None, 80, 60, 48)        432
conv_dw_2_bn (BatchNormaliza (None, 80, 60, 48)        192
conv_dw_2_relu (ReLU)        (None, 80, 60, 48)        0
conv_pw_2 (Conv2D)           (None, 80, 60, 96)        4608
conv_pw_2_bn (BatchNormaliza (None, 80, 60, 96)        384
conv_pw_2_relu (ReLU)        (None, 80, 60, 96)        0
conv_dw_3 (DepthwiseConv2D)  (None, 80, 60, 96)        864
conv_dw_3_bn (BatchNormaliza (None, 80, 60, 96)        384
conv_dw_3_relu (ReLU)        (None, 80, 60, 96)        0
conv_pw_3 (Conv2D)           (None, 80, 60, 96)        9216
conv_pw_3_bn (BatchNormaliza (None, 80, 60, 96)        384
conv_pw_3_relu (ReLU)        (None, 80, 60, 96)        0
conv_pad_4 (ZeroPadding2D)   (None, 82, 62, 96)        0
conv_dw_4 (DepthwiseConv2D)  (None, 40, 30, 96)        864
conv_dw_4_bn (BatchNormaliza (None, 40, 30, 96)        384
conv_dw_4_relu (ReLU)        (None, 40, 30, 96)        0
conv_pw_4 (Conv2D)           (None, 40, 30, 192)       18432
conv_pw_4_bn (BatchNormaliza (None, 40, 30, 192)       768
conv_pw_4_relu (ReLU)        (None, 40, 30, 192)       0
conv_dw_5 (DepthwiseConv2D)  (None, 40, 30, 192)       1728
conv_dw_5_bn (BatchNormaliza (None, 40, 30, 192)       768
conv_dw_5_relu (ReLU)        (None, 40, 30, 192)       0
conv_pw_5 (Conv2D)           (None, 40, 30, 192)       36864
conv_pw_5_bn (BatchNormaliza (None, 40, 30, 192)       768
conv_pw_5_relu (ReLU)        (None, 40, 30, 192)       0
conv_pad_6 (ZeroPadding2D)   (None, 42, 32, 192)       0
conv_dw_6 (DepthwiseConv2D)  (None, 20, 15, 192)       1728
conv_dw_6_bn (BatchNormaliza (None, 20, 15, 192)       768
conv_dw_6_relu (ReLU)        (None, 20, 15, 192)       0
conv_pw_6 (Conv2D)           (None, 20, 15, 384)       73728
conv_pw_6_bn (BatchNormaliza (None, 20, 15, 384)       1536
conv_pw_6_relu (ReLU)        (None, 20, 15, 384)       0
conv_dw_7 (DepthwiseConv2D)  (None, 20, 15, 384)       3456
conv_dw_7_bn (BatchNormaliza (None, 20, 15, 384)       1536
conv_dw_7_relu (ReLU)        (None, 20, 15, 384)       0
conv_pw_7 (Conv2D)           (None, 20, 15, 384)       147456
conv_pw_7_bn (BatchNormaliza (None, 20, 15, 384)       1536
conv_pw_7_relu (ReLU)        (None, 20, 15, 384)       0
conv_dw_8 (DepthwiseConv2D)  (None, 20, 15, 384)       3456
conv_dw_8_bn (BatchNormaliza (None, 20, 15, 384)       1536
conv_dw_8_relu (ReLU)        (None, 20, 15, 384)       0
conv_pw_8 (Conv2D)           (None, 20, 15, 384)       147456
conv_pw_8_bn (BatchNormaliza (None, 20, 15, 384)       1536
conv_pw_8_relu (ReLU)        (None, 20, 15, 384)       0
conv_dw_9 (DepthwiseConv2D)  (None, 20, 15, 384)       3456
conv_dw_9_bn (BatchNormaliza (None, 20, 15, 384)       1536
conv_dw_9_relu (ReLU)        (None, 20, 15, 384)       0
conv_pw_9 (Conv2D)           (None, 20, 15, 384)       147456
conv_pw_9_bn (BatchNormaliza (None, 20, 15, 384)       1536
conv_pw_9_relu (ReLU)        (None, 20, 15, 384)       0
conv_dw_10 (DepthwiseConv2D) (None, 20, 15, 384)       3456
conv_dw_10_bn (BatchNormaliz (None, 20, 15, 384)       1536
conv_dw_10_relu (ReLU)       (None, 20, 15, 384)       0
conv_pw_10 (Conv2D)          (None, 20, 15, 384)       147456
conv_pw_10_bn (BatchNormaliz (None, 20, 15, 384)       1536
conv_pw_10_relu (ReLU)       (None, 20, 15, 384)       0
conv_dw_11 (DepthwiseConv2D) (None, 20, 15, 384)       3456
conv_dw_11_bn (BatchNormaliz (None, 20, 15, 384)       1536
conv_dw_11_relu (ReLU)       (None, 20, 15, 384)       0
conv_pw_11 (Conv2D)          (None, 20, 15, 384)       147456
conv_pw_11_bn (BatchNormaliz (None, 20, 15, 384)       1536
conv_pw_11_relu (ReLU)       (None, 20, 15, 384)       0
zero_padding2d_1 (ZeroPaddin (None, 22, 17, 384)       0
conv2d_1 (Conv2D)            (None, 20, 15, 256)       884992
batch_normalization_1 (Batch (None, 20, 15, 256)       1024
up_sampling2d_1 (UpSampling2 (None, 40, 30, 256)       0
zero_padding2d_2 (ZeroPaddin (None, 42, 32, 256)       0
conv2d_2 (Conv2D)            (None, 40, 30, 128)       295040
batch_normalization_2 (Batch (None, 40, 30, 128)       512
up_sampling2d_2 (UpSampling2 (None, 80, 60, 128)       0
zero_padding2d_3 (ZeroPaddin (None, 82, 62, 128)       0
conv2d_3 (Conv2D)            (None, 80, 60, 64)        73792
batch_normalization_3 (Batch (None, 80, 60, 64)        256
up_sampling2d_3 (UpSampling2 (None, 160, 120, 64)      0
zero_padding2d_4 (ZeroPaddin (None, 162, 122, 64)      0
conv2d_4 (Conv2D)            (None, 160, 120, 32)      18464
batch_normalization_4 (Batch (None, 160, 120, 32)      128
conv2d_5 (Conv2D)            (None, 160, 120, 20)      5780
activation_1 (Activation)    (None, 160, 120, 20)      0
=================================================================
Total params: 2,207,108
Trainable params: 2,195,108
Non-trainable params: 12,000
_________________________________________________________________
```


```
Epoch 1/5
4/4 [==============================] - 5s 1s/step - loss: 3.4831 - val_loss: 2.9801

Epoch 00001: val_loss improved from inf to 2.98010, saving model to projects/segment/2020-09-23_14-06-08/Segnet_best_val_loss.h5
/Users/michael/aXeleRate/axelerate/networks/common_utils/fit.py:165: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
  plt.show(block=False)
/Users/michael/aXeleRate/axelerate/networks/common_utils/fit.py:166: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
  plt.pause(1)
Epoch 2/5
4/4 [==============================] - 2s 533ms/step - loss: 3.2988 - val_loss: 2.9247

Epoch 00002: val_loss improved from 2.98010 to 2.92473, saving model to projects/segment/2020-09-23_14-06-08/Segnet_best_val_loss.h5
Epoch 3/5
4/4 [==============================] - 2s 531ms/step - loss: 3.0916 - val_loss: 2.7481

Epoch 00003: val_loss improved from 2.92473 to 2.74812, saving model to projects/segment/2020-09-23_14-06-08/Segnet_best_val_loss.h5
Epoch 4/5
4/4 [==============================] - 2s 539ms/step - loss: 2.9650 - val_loss: 2.5661

Epoch 00004: val_loss improved from 2.74812 to 2.56610, saving model to projects/segment/2020-09-23_14-06-08/Segnet_best_val_loss.h5
Epoch 5/5
4/4 [==============================] - 2s 542ms/step - loss: 2.9098 - val_loss: 2.5182

Epoch 00005: val_loss improved from 2.56610 to 2.51816, saving model to projects/segment/2020-09-23_14-06-08/Segnet_best_val_loss.h5
39-seconds to train
Folder projects/segment/2020-09-23_14-06-08/Inference_results is created.
Segmentation
Loading pre-trained weights for the whole model: projects/segment/2020-09-23_14-06-08/Segnet_best_val_loss.h5
Found the following classes in the segmentation image: [ 0  1  6  7  8  9 12 16 17]
Traceback (most recent call last):
  File "tests_training_and_inference.py", line 160, in <module>
    setup_inference(item, model_path)
  File "/Users/michael/aXeleRate/axelerate/infer.py", line 100, in setup_inference
    predict(model=segnet._network, inp=input_arr, image=orig_image, out_fname=out_fname)
  File "/Users/michael/aXeleRate/axelerate/networks/segnet/predict.py", line 136, in predict
    seg_img = visualize_segmentation(pr, inp_img=image, n_classes=n_classes, overlay_img=True, colors=colors)
  File "/Users/michael/aXeleRate/axelerate/networks/segnet/predict.py", line 102, in visualize_segmentation
    seg_img = get_colored_segmentation_image(seg_arr, n_classes, colors=colors)
  File "/Users/michael/aXeleRate/axelerate/networks/segnet/predict.py", line 52, in get_colored_segmentation_image
    seg_img[:, :, 0] += ((seg_arr[:, :] == c)*(colors[c][0])).astype('uint8')
ValueError: operands could not be broadcast together with shapes (120,160) (160,120) (120,160)
operands could not be broadcast together with shapes (120,160) (160,120) (120,160)
['SegNet MobileNet7_5 operands could not be broadcast together with shapes (120,160) (160,120) (120,160) ']
```
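For anyone hitting the same traceback: this is a plain NumPy broadcasting failure. A minimal sketch (not aXeleRate's actual code, just an illustration using the shapes from the error) of what happens when the predicted class map comes back transposed relative to the color buffer:

```python
import numpy as np

# Shapes taken from the traceback: the color buffer is
# (height, width, 3) = (120, 160, 3), but the class map arrived as
# (width, height) = (160, 120), i.e. transposed.
seg_img = np.zeros((120, 160, 3), dtype="uint8")
seg_arr = np.zeros((160, 120), dtype=np.int64)

try:
    # Same pattern as get_colored_segmentation_image: in-place add of a
    # per-class mask onto one color channel.
    seg_img[:, :, 0] += ((seg_arr == 1) * 255).astype("uint8")
except ValueError as err:
    print(err)  # operands could not be broadcast together ...

# Aligning the mask's orientation with the image makes the add succeed:
seg_img[:, :, 0] += ((seg_arr.T == 1) * 255).astype("uint8")
```

The maintainer's actual fix may differ; the point is only that the mask and image must agree on (height, width) order before channel-wise coloring.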

Do you know why?

Thanks a lot in advance!

AIWintermuteAI commented 4 years ago

Thanks for noticing this - there was a bug in segmentation mask generation, which I fixed in the latest commit. Do a `git pull` to get the latest changes and try again. Please reply if the issue is resolved :)
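One note for anyone following along: since the package was pip-installed from the git URL, a `git pull` in the clone alone won't refresh the installed copy. A sketch of one way to update both (assuming the clone location from the install steps above):

```shell
# Assumed layout: the aXeleRate clone created during installation.
cd aXeleRate
git pull
# Reinstall from the updated clone so the pip-installed package
# picks up the fix as well.
pip install --upgrade --force-reinstall --no-deps .
```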

AIWintermuteAI commented 4 years ago

Oh, and by the way: currently aXeleRate only supports local training on Linux machines (tested with Ubuntu 18.04). Training and inference should work on OS X or Windows, but the conversion functions will not, since the converters use binaries compiled for a specific OS and I have only added Linux binaries for now. You might want to try the examples in Google Colab, which requires no local setup and supports all operations.

Mjxkill commented 4 years ago

Yes, now it is working! Thanks!!!