goncz closed this issue 4 years ago
Duplicate of #836
@goncz , thank you very much for the feedback. It seems that your Caffe does not support the Clip operator, which is used to implement Relu6. Here is a workaround:
- Go to the installation directory of MMdnn, e.g., ~/.local/lib/pythonX.Y/site-packages/mmdnn or /usr/lib/python3/dist-packages/mmdnn
- Open file "mmdnn/conversion/caffe/caffe_emitter.py" .
- Locate the function "def emit_Relu6(self, IR_node)" (e.g., line 609)
- Replace its implementation with the following single line of code:
self.emit_Relu(IR_node)
- Then, Relu will be used to simulate Relu6. Note that the final accuracy may not be exactly the same as that of the source model.
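The difference between the two activations only matters for values above 6: Relu6 clips them, plain Relu does not. A minimal NumPy sketch (illustration only, not MMdnn code) shows where the outputs diverge:

```python
import numpy as np

def relu(x):
    # Standard ReLU: max(x, 0), unbounded above.
    return np.maximum(x, 0.0)

def relu6(x):
    # Relu6 additionally clips activations at 6.
    return np.minimum(np.maximum(x, 0.0), 6.0)

x = np.array([-2.0, 3.0, 8.0])
print(relu(x))   # [0. 3. 8.]
print(relu6(x))  # [0. 3. 6.]
```

So the workaround changes the network's behavior only for activations that would have exceeded 6, which is why the converted model's accuracy can drift slightly.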
Thank you @linmajia, this solved my problem.
However, I now have this problem:
I0618 10:51:23.611192 5265 layer_factory.hpp:77] Creating layer MobilenetV1_MobilenetV1_Conv2d_6_pointwise_BatchNorm_batchnorm_add_scale
I0618 10:51:23.611218 5265 net.cpp:122] Setting up MobilenetV1_MobilenetV1_Conv2d_6_pointwise_BatchNorm_batchnorm_add_scale
I0618 10:51:23.611222 5265 net.cpp:129] Top shape: 1 512 14 14 (100352)
I0618 10:51:23.611225 5265 net.cpp:137] Memory required for data: 68155264
I0618 10:51:23.611229 5265 layer_factory.hpp:77] Creating layer MobilenetV1_MobilenetV1_Conv2d_6_pointwise_Relu6
I0618 10:51:23.611233 5265 net.cpp:84] Creating Layer MobilenetV1_MobilenetV1_Conv2d_6_pointwise_Relu6
I0618 10:51:23.611249 5265 net.cpp:406] MobilenetV1_MobilenetV1_Conv2d_6_pointwise_Relu6 <- MobilenetV1_MobilenetV1_Conv2d_6_pointwise_BatchNorm_batchnorm_add
I0618 10:51:23.611253 5265 net.cpp:367] MobilenetV1_MobilenetV1_Conv2d_6_pointwise_Relu6 -> MobilenetV1_MobilenetV1_Conv2d_6_pointwise_BatchNorm_batchnorm_add (in-place)
I0618 10:51:23.611537 5265 net.cpp:122] Setting up MobilenetV1_MobilenetV1_Conv2d_6_pointwise_Relu6
I0618 10:51:23.611544 5265 net.cpp:129] Top shape: 1 512 14 14 (100352)
I0618 10:51:23.611563 5265 net.cpp:137] Memory required for data: 68556672
I0618 10:51:23.611567 5265 layer_factory.hpp:77] Creating layer MobilenetV1_MobilenetV1_Conv2d_7_depthwise_depthwise
I0618 10:51:23.611572 5265 net.cpp:84] Creating Layer MobilenetV1_MobilenetV1_Conv2d_7_depthwise_depthwise
I0618 10:51:23.611575 5265 net.cpp:406] MobilenetV1_MobilenetV1_Conv2d_7_depthwise_depthwise <- MobilenetV1_MobilenetV1_Conv2d_6_pointwise_BatchNorm_batchnorm_add
I0618 10:51:23.611579 5265 net.cpp:380] MobilenetV1_MobilenetV1_Conv2d_7_depthwise_depthwise -> MobilenetV1_MobilenetV1_Conv2d_7_depthwise_depthwise
F0618 10:51:24.405036 5265 cudnn_conv_layer.cpp:53] Check failed: status == CUDNN_STATUS_SUCCESS (4 vs. 0) CUDNN_STATUS_INTERNAL_ERROR
*** Check failure stack trace: ***
Aborted (core dumped)
I have experienced something similar before; back then I added the following code before creating the tf.Session. In which script should I add it here?
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
@goncz , since MMdnn invokes the underlying deep learning frameworks, I suggest that you try the CPU mode to avoid out-of-GPU-memory issues by temporarily hiding the GPUs:
export CUDA_VISIBLE_DEVICES=" "
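A sketch of how this would look in practice (the mmconvert invocation is the one from this issue; run it in the same shell after exporting the variable):

```shell
# Hide all GPUs for this shell session so TensorFlow/Caffe fall back to CPU.
# " " is not a valid GPU id, so CUDA reports zero visible devices.
export CUDA_VISIBLE_DEVICES=" "

# Then re-run the conversion in the same shell, e.g.:
# mmconvert -sf tensorflow -iw mobilenet_v1_1.0_224/frozen_graph.pb \
#     --inNodeName input --inputShape 224,224,3 \
#     --dstNodeName MobilenetV1/Predictions/Softmax -df caffe -om tf_mobilenet
```

This avoids the cuDNN initialization/out-of-memory failure at the cost of a slower, CPU-only conversion.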
Platform: Ubuntu 16.04
Python version: 3.6.9
Source framework with version: TensorFlow 1.12.2 with TensorFlow GPU 1.12.2
I'm trying to run the frozen-graph conversion example from https://github.com/microsoft/MMdnn/tree/master/mmdnn/conversion/tensorflow However, when I run the command
mmconvert -sf tensorflow -iw mobilenet_v1_1.0_224/frozen_graph.pb --inNodeName input --inputShape 224,224,3 --dstNodeName MobilenetV1/Predictions/Softmax -df caffe -om tf_mobilenet
I get the following errors: