gmalivenko / pytorch2keras

PyTorch to Keras model converter
https://pytorch2keras.readthedocs.io/en/latest/
MIT License

my output keras model is channel first? #81

Closed hackeritchy closed 5 years ago

hackeritchy commented 5 years ago

Describe the bug After using pytorch_to_keras to convert my PyTorch model to Keras, I got a channels-first Keras model. Can anyone show me how to get the default channels-last model in Keras?

To Reproduce

[Screenshot of the conversion code, 2019-06-13]


gmalivenko commented 5 years ago

Hello @hackeritchy. Try using the change_ordering option of the converter.
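
For example, a minimal sketch based on the call that appears later in this thread (model is your PyTorch module):

import torch
from pytorch2keras import pytorch_to_keras

# Dummy input with the model's expected NCHW shape.
input_var = torch.rand(1, 3, 256, 256)

# change_ordering=True asks the converter to produce a channels-last (NHWC) model.
keras_model = pytorch_to_keras(model, input_var, input_shapes=[(3, 256, 256)],
                               change_ordering=True, verbose=True)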

hackeritchy commented 5 years ago

Hello @nerox8664, when I use the change_ordering option of the converter, the program throws an error:

graph node: TransformerMobileNet/Conv2d[conv4]
node id: 226
type: onnx::Conv
inputs: ['225', '85']
outputs: ['TransformerMobileNet/Conv2d[conv4]']
name in state_dict: conv4
attrs: {'dilations': [1, 1], 'group': 1, 'kernel_shape': [9, 9], 'pads': [4, 4, 4, 4], 'strides': [1, 1]}
is_terminal: True
Converting convolution ...
Traceback (most recent call last):
  File "pytorch_to_keras.py", line 14, in <module>
    keras_model = pytorch_to_keras(model, input_var, input_shapes=[(3, 256, 256)], change_ordering=True, verbose=True) 
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/pytorch2keras/converter.py", line 354, in pytorch_to_keras
    model_tf_ordering = keras.models.Model.from_config(conf)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/keras/engine/network.py", line 1032, in from_config
    process_node(layer, node_data)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/keras/engine/network.py", line 991, in process_node
    layer(unpack_singleton(input_tensors), **kwargs)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/keras/layers/core.py", line 687, in call
    return self.function(inputs, **arguments)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/pytorch2keras/normalization_layers.py", line 104, in target_layer
    layer = tf.contrib.layers.instance_norm(
NameError: name 'tf' is not defined

I set the data format to 'channels_first' in ~/.keras/keras.json, and here is my script:

input_var = torch.rand(1, 3, 256, 256)
keras_model = pytorch_to_keras(model, input_var, input_shapes=[(3, 256, 256)], change_ordering=True, verbose=True)
# keras_model = pytorch_to_keras(model, input_var, input_shapes=[(3, 256, 256)], verbose=True, names='keep')
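
(Side note: the key in keras.json that controls this is image_data_format, and its valid values are 'channels_first' and 'channels_last'. It can also be inspected or overridden at runtime; a minimal sketch:)

from keras import backend as K

print(K.image_data_format())              # e.g. 'channels_first'
K.set_image_data_format('channels_last')  # the Keras default
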
gmalivenko commented 5 years ago

I added an extra input inside the lambda function for normalization. You can fetch the newest version and try again.
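
For context: the NameError happens because Keras rebuilds the Lambda layer via Model.from_config, and the deserialized function no longer sees the module globals (such as tf) it was defined with. A common workaround, shown here as a general sketch rather than the library's exact fix, is to bind everything the function needs inside its own body:

import keras

def target_layer(x):
    # Import inside the function so the name is still defined after the
    # Lambda is serialized to config and rebuilt with Model.from_config.
    import tensorflow as tf
    return tf.contrib.layers.instance_norm(x, data_format='NCHW')  # channels-first input

inp = keras.layers.Input(shape=(3, 256, 256))
out = keras.layers.Lambda(target_layer)(inp)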

hackeritchy commented 5 years ago

Hi @nerox8664, thank you for replying so quickly. I fixed the previous problem, but I got another one. :( I can convert the model correctly using change_ordering=False, except it's channels-first. I can't figure out how this happened.

graph node: TransformerMobileNet/Conv2d[conv4]
node id: 226
type: onnx::Conv
inputs: ['225', '85']
outputs: ['TransformerMobileNet/Conv2d[conv4]']
name in state_dict: conv4
attrs: {'dilations': [1, 1], 'group': 1, 'kernel_shape': [9, 9], 'pads': [4, 4, 4, 4], 'strides': [1, 1]}
is_terminal: True
Converting convolution ...
Traceback (most recent call last):
  File "pytorch_to_keras.py", line 14, in <module>
    keras_model = pytorch_to_keras(model, input_var, input_shapes=[(3, 256, 256)], change_ordering=True, verbose=True) 
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/pytorch2keras/converter.py", line 354, in pytorch_to_keras
    model_tf_ordering = keras.models.Model.from_config(conf)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/keras/engine/network.py", line 1032, in from_config
    process_node(layer, node_data)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/keras/engine/network.py", line 991, in process_node
    layer(unpack_singleton(input_tensors), **kwargs)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/keras/layers/core.py", line 687, in call
    return self.function(inputs, **arguments)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/pytorch2keras/normalization_layers.py", line 108, in target_layer
    trainable=False
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
    return func(*args, **current_args)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/contrib/layers/python/layers/normalization.py", line 137, in instance_norm
    trainable=trainable)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
    return func(*args, **current_args)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/variables.py", line 350, in model_variable
    aggregation=aggregation)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
    return func(*args, **current_args)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/variables.py", line 277, in variable
    aggregation=aggregation)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1487, in get_variable
    aggregation=aggregation)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1237, in get_variable
    aggregation=aggregation)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 540, in get_variable
    aggregation=aggregation)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 492, in _true_getter
    aggregation=aggregation)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 922, in _get_single_variable
    aggregation=aggregation)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 183, in __call__
    return cls._variable_v1_call(*args, **kwargs)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 146, in _variable_v1_call
    aggregation=aggregation)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 125, in <lambda>
    previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 2444, in default_variable_creator
    expected_shape=expected_shape, import_scope=import_scope)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 187, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1329, in __init__
    constraint=constraint)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1437, in _init_from_args
    initial_value(), name="initial_value", dtype=dtype)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 896, in <lambda>
    shape.as_list(), dtype=dtype, partition_info=partition_info)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py", line 220, in __call__
    self.value, dtype=dtype, shape=shape, verify_shape=verify_shape)
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 208, in constant
    value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/home/ubuntu/miniconda3/envs/.py36env/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 497, in make_tensor_proto
    (shape_size, nparray.size))
ValueError: Too many elements provided. Needed at most 64, but received 128

gmalivenko commented 5 years ago

The issue is probably caused by the lambda layers and the different channel ordering (for native Keras layers I simply swap the HWC/CHW ordering in the model config).

I can't reproduce the error without the model definition. If you send me the model class, I will try to fix it.
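
For reference, a much-simplified sketch of that config rewrite (the real converter has to patch shapes, axes, and weights for every affected layer type; keras_model here stands for the channels-first model from a change_ordering=False run):

import keras

conf = keras_model.get_config()
for layer in conf['layers']:
    cfg = layer['config']
    # Rewrite static NCHW input shapes to NHWC before rebuilding the graph.
    if cfg.get('batch_input_shape') is not None:
        n, c, h, w = cfg['batch_input_shape']
        cfg['batch_input_shape'] = (n, h, w, c)
model_tf_ordering = keras.models.Model.from_config(conf)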

hackeritchy commented 5 years ago

This is my model definition:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Bottleneck(nn.Module):
    def __init__(self, inp_c, out_c, kernel_size, stride, t=1):
        assert stride in [1, 2], 'stride must be either 1 or 2'
        super().__init__()
        self.residual = stride == 1 and inp_c == out_c
        # pad = kernel_size // 2
        # self.reflection_pad = nn.ReflectionPad2d(pad)
        self.conv1 = nn.Conv2d(inp_c, t*inp_c, 1, 1, bias=False)
        self.in1 = nn.InstanceNorm2d(t*inp_c, affine=True)
        self.conv2 = nn.Conv2d(t*inp_c, t*inp_c, kernel_size, stride,
                               groups=t*inp_c, bias=False, padding=kernel_size//2)
        self.in2 = nn.InstanceNorm2d(t*inp_c, affine=True)
        self.conv3 = nn.Conv2d(t*inp_c, out_c, 1, 1, bias=False)
        self.in3 = nn.InstanceNorm2d(out_c, affine=True)

    def forward(self, x):
        out = F.relu6(self.in1(self.conv1(x)))
        # out = self.reflection_pad(out)
        out = F.relu6(self.in2(self.conv2(out)))
        out = self.in3(self.conv3(out))
        if self.residual:
            out = x + out
        return out

class UpsampleConv(nn.Module):
    def __init__(self, inp_c, out_c, kernel_size, stride, upsample=2):
        super().__init__()
        if upsample:
            # Note: scale_factor is hardcoded to 2, so the upsample argument
            # only controls whether upsampling happens at all.
            # self.upsample = nn.Upsample(mode='bilinear', scale_factor=upsample, align_corners=False)
            self.upsample = nn.Upsample(mode='nearest', scale_factor=2)
        else:
            self.upsample = None
        self.conv1 = Bottleneck(inp_c, out_c, kernel_size, stride)

    def forward(self, x):
        x_in = x
        if self.upsample is not None:
            x_in = self.upsample(x_in)
        out = F.relu(self.conv1(x_in))
        return out

class TransformerMobileNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Conv Layers
        # self.reflection_pad = nn.ReflectionPad2d(9//2)
        self.conv1 = nn.Conv2d(3, 32, kernel_size=9, stride=1, bias=False, padding=9//2)
        self.in1 = nn.InstanceNorm2d(32, affine=True)
        self.conv2 = Bottleneck(32, 64, kernel_size=3, stride=2)
        self.conv3 = Bottleneck(64, 128, kernel_size=3, stride=2)
        # Residual Layers
        self.res1 = Bottleneck(128, 128,  3, 1)
        self.res2 = Bottleneck(128, 128,  3, 1)
        self.res3 = Bottleneck(128, 128,  3, 1)
        self.res4 = Bottleneck(128, 128,  3, 1)
        self.res5 = Bottleneck(128, 128,  3, 1)
        # Upsampling Layers
        self.upconv1 = UpsampleConv(128, 64, kernel_size=3, stride=1)
        self.upconv2 = UpsampleConv(64, 32, kernel_size=3, stride=1)
        self.conv4 = nn.Conv2d(32, 3, kernel_size=9, stride=1, bias=False, padding=9//2)

    def forward(self, x):
        # out = self.reflection_pad(x)
        out = F.relu(self.in1(self.conv1(x)))
        out = self.conv2(out)
        out = self.conv3(out)
        out = self.res1(out)
        out = self.res2(out)
        out = self.res3(out)
        out = self.res4(out)
        out = self.res5(out)
        out = self.upconv1(out)
        out = self.upconv2(out)
        out = self.conv4(out)
        return out

if __name__ == "__main__":
    from torchsummary import summary
    model = TransformerMobileNet()
    model.to('cuda')
    summary(model, (3, 256, 256))
    model.load_state_dict(torch.load('saved_model/release.pth'))
    torch.onnx.export(model, torch.randn(1, 3, 256, 256).cuda(), 'models/style1.onnx')

gmalivenko commented 5 years ago

I've fixed the InstanceNorm2d lambda with a runtime check of the image channel ordering, but I'm not sure the model converts well: there is a high difference in the results. You can check it with the latest master commit / latest library version.
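
One way to compare the two models numerically (a hedged sketch; assumes the conversion succeeded and keras_model is the channels-last result):

import numpy as np
import torch

model.eval()
x = torch.rand(1, 3, 256, 256)
with torch.no_grad():
    pt_out = model(x).numpy()  # NCHW

# With change_ordering=True the Keras model expects NHWC input.
k_out = keras_model.predict(x.numpy().transpose(0, 2, 3, 1))
print('max abs diff:', np.abs(pt_out - k_out.transpose(0, 3, 1, 2)).max())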

hackeritchy commented 5 years ago

Thank you for your solution! The model converted to Keras format successfully after I checked out the latest library version.

Brave731 commented 4 years ago

@hackeritchy Can you tell me which versions of pytorch2keras and onnx2keras you are using?