MPolaris / onnx2tflite

Tool for onnx->keras or onnx->tflite. Hope this tool can help you.
Apache License 2.0

Dimensional error #44

Closed: krgy12138 closed this issue 1 year ago

krgy12138 commented 1 year ago

I get the following error when I use it:

2023-05-27 16:38:48.242776: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-05-27 16:38:48.288173: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-05-27 16:38:48.288582: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-27 16:38:48.962344: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING: The argument dynamic_input_shape=True is not needed any more, onnxsim can now support dynamic input shapes natively, please refer to the latest documentation. An error will be raised in the future.
Checking 0/1...
shape[0] of input "input_0" is dynamic, we assume it presents batch size and set it as 1 when testing. If it is not wanted, please set the it manually by --test-input-shape (see onnxsim -h for the details).
2023-05-27 16:38:51.395079: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
WARNING:convolution_layers ::ConvTranspose with pad will lead output error to bigger, please check it out.
WARNING:convolution_layers ::ConvTranspose with pad will lead output error to bigger, please check it out.
Traceback (most recent call last):
  File "/home/duyangfan/nxy/onnx2tflite-main/converter.py", line 108, in <module>
    run()
  File "/home/duyangfan/nxy/onnx2tflite-main/converter.py", line 92, in run
    onnx_converter(
  File "/home/duyangfan/nxy/onnx2tflite-main/converter.py", line 21, in onnx_converter
    keras_model = keras_builder(model_proto, native_groupconv)
  File "/home/duyangfan/nxy/onnx2tflite-main/utils/builder.py", line 82, in keras_builder
    tf_tensor[node_outputs[index]] = tf_operator(tf_tensor, onnx_weights, node_inputs, op_attr, index=index)(_inputs)
  File "/home/duyangfan/nxy/onnx2tflite-main/layers/mathematics_layers.py", line 100, in call
    out = tf.matmul(self.first_operand, self.second_operand)
  File "/home/duyangfan/miniconda3/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/duyangfan/miniconda3/lib/python3.10/site-packages/keras/layers/core/tf_op_layer.py", line 119, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "/home/duyangfan/miniconda3/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
ValueError: Exception encountered when calling layer "tf.linalg.matmul" (type TFOpLambda).

Dimensions must be equal, but are 58 and 64 for '{{node tf.linalg.matmul/MatMul}} = BatchMatMulV2[T=DT_FLOAT, adj_x=false, adj_y=false](Placeholder, tf.linalg.matmul/MatMul/b)' with input shapes: [1,1,58,58], [64,64].

Call arguments received by layer "tf.linalg.matmul" (type TFOpLambda):
  • a=tf.Tensor(shape=(1, 1, 58, 58), dtype=float32)
  • b=array([[ 0.04544717,  0.07029884,  0.1148618 , ..., -0.11733958, -0.05653606,  0.00598602],
             [ 0.0400255 ,  0.09087554, -0.03369874, ...,  0.11601257, -0.11816581, -0.02627973],
             [-0.05443765, -0.06790046,  0.07890876, ...,  0.01277761, -0.03936964, -0.06176169],
             ...,
             [ 0.02922536,  0.05941173, -0.11838005, ..., -0.07970631, -0.02553718,  0.10650459],
             [ 0.00736488, -0.11999485,  0.02790602, ...,  0.11965565,  0.11500892,  0.01097947],
             [-0.12261999, -0.00388932,  0.04968777, ...,  0.05339718,  0.10601966, -0.11772572]], dtype=float32)
  • transpose_a=False
  • transpose_b=False
  • adjoint_a=False
  • adjoint_b=False
  • a_is_sparse=False
  • b_is_sparse=False
  • output_type=None
  • name=None

This was strange, because my model didn't show any dimension errors during training.
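
The ValueError itself is just tf.matmul's inner-dimension rule: for a batched product, the last axis of a must match the second-to-last axis of b. A minimal sketch reproducing the mismatch with the shapes from the log (assuming only tensorflow is installed):

import tensorflow as tf

a = tf.random.normal((1, 1, 58, 58))   # activation shape from the log
b = tf.random.normal((64, 64))         # the exported weight matrix

try:
    tf.linalg.matmul(a, b)             # inner dims 58 vs 64 -> error
except (tf.errors.InvalidArgumentError, ValueError) as e:
    print(e)

# The product only goes through when the last axis of `a` is 64:
ok = tf.linalg.matmul(tf.random.normal((1, 1, 58, 64)), b)
print(ok.shape)                        # (1, 1, 58, 64)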

MPolaris commented 1 year ago

Can you share your ONNX model? It looks like the dimension error happens before tf.matmul.
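
One way to locate the offending node before conversion is to list the MatMul ops in the ONNX graph (a hedged sketch using the onnx package; "model.onnx" is a placeholder file name):

import onnx

m = onnx.load("model.onnx")            # placeholder path to the exported model
for node in m.graph.node:
    if node.op_type == "MatMul":
        # trace these input names back (e.g. in Netron) to the PyTorch layer
        print(node.name, node.input)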

krgy12138 commented 1 year ago

Of course. This is the code of my model:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
  def __init__(self, channels):
    super(ResidualBlock, self).__init__()
    self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1, bias=True)
    self.in1 = nn.InstanceNorm2d(channels, affine=True)
    self.relu = nn.ReLU(inplace=True)
    self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1, bias=True)
    self.in2 = nn.InstanceNorm2d(channels, affine=True)

  def forward(self, x):
    identity = x #(batch_size,channels,height,width)
    out = self.conv1(x) 
    out = self.in1(out) 
    out = self.relu(out) 
    out = self.conv2(out) 
    out = self.in2(out) 
    out = out + identity
    return out

class Generator(nn.Module):
  def __init__(self):
    super(Generator, self).__init__()
    self.num_domains = 2
    self.num_residual_blocks = 6

    self.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=1, padding=3, bias=True)
    self.in1 = nn.InstanceNorm2d(64, affine=True)
    self.relu = nn.ReLU(inplace=True)
    self.conv2 = nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1, bias=True)
    self.in2 = nn.InstanceNorm2d(128, affine=True)
    self.conv3 = nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1, bias=True)
    self.in3 = nn.InstanceNorm2d(256, affine=True)

    self.res_blocks = nn.ModuleList()
    for _ in range(self.num_residual_blocks):
        self.res_blocks.append(ResidualBlock(256))

    self.deconv1 = nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1, bias=True)
    self.in4 = nn.InstanceNorm2d(128, affine=True)
    self.deconv2 = nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1, bias=True)
    self.in5 = nn.InstanceNorm2d(64, affine=True)
    self.conv4 = nn.Conv2d(64, 1, kernel_size=7, stride=1, padding=3, bias=True)

    # nn.Linear(64, 64) multiplies along the last axis of its input,
    # which for the (B, 1, H, W) image output is the width W.
    self.out_layers = nn.ModuleList()
    for _ in range(self.num_domains):
        self.out_layers.append(nn.Linear(64, 64))

  def forward(self, x):
    out = self.conv1(x)
    out = self.in1(out)
    out = self.relu(out)
    out = self.conv2(out)
    out = self.in2(out)
    out = self.relu(out)
    out = self.conv3(out)
    out = self.in3(out)
    out = self.relu(out)

    for block in self.res_blocks:
        out = block(out)

    out = self.deconv1(out)
    out = self.in4(out)
    out = self.relu(out)
    out = self.deconv2(out)
    out = self.in5(out)
    out = self.relu(out)
    out = self.conv4(out)
    out = torch.tanh(out)

    # This is where the exported MatMul comes from: each Linear is applied
    # to the tanh'd (B, 1, H, W) output, i.e. along its last (width) axis.
    c_outs = []
    for i in range(self.num_domains):
        c_outs.append(self.out_layers[i](out))

    return out, c_outs
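
For context, the 58 vs 64 mismatch is reproducible in PyTorch alone: conv4 outputs a (B, 1, H, W) tensor, and nn.Linear(64, 64) multiplies along its last axis W, so the model only runs when the width that reaches out_layers is exactly 64. A quick check, assuming the Generator class above is in scope (the input sizes are illustrative):

model = Generator()

# A 64x64 input comes out of the down/up-sampling path as (1, 1, 64, 64),
# which happens to match nn.Linear(64, 64), so this succeeds:
out, c_outs = model(torch.randn(1, 1, 64, 64))
print(out.shape)                       # torch.Size([1, 1, 64, 64])

# Any other input size reaches the Linear layers with a last axis != 64
# and fails with the same kind of shape mismatch the converter reports:
try:
    model(torch.randn(1, 1, 58, 58))
except RuntimeError as e:
    print(e)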

MPolaris commented 1 year ago

Fixed