prisma-ai / torch2coreml

Torch7 -> CoreML
MIT License

Applying coreml conversion to other style transfer torch models #8

Open engahmed1190 opened 7 years ago

engahmed1190 commented 7 years ago

I have tried the coreml conversion on another repository. Here is the repository: link. They provide a pre-trained model: model.t7

Running perpare_model.lua on this model throws an error, even after adding the following requires so the model can load:

require 'InstanceNormalization'
require 'src/utils'
require 'src/descriptor_net'
require 'src/preprocess_criterion'

Error:

index local 'x' (a nil value)
stack traceback:
    perpare_model.lua:12: in function 'replaceModule'
    perpare_model.lua:32: in function 'main'
    perpare_model.lua:46: in main chunk
    [C]: in function 'dofile'
    .../src/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
    [C]: at 0x00406670

Running convert-fast-neural-style.py, I get this error:

Traceback (most recent call last):
  File "convert-fast-neural-style.py", line 176, in <module>
    main()
  File "convert-fast-neural-style.py", line 162, in main
    unknown_layer_converter_fn=convert_instance_norm
  File "/usr/local/lib/python2.7/dist-packages/torch2coreml/_torch_converter.py", line 195, in convert
    torch_model.evaluate()
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 39, in evaluate
    self.applyToModules(lambda m: m.evaluate())
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 26, in applyToModules
    func(module)
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 39, in <lambda>
    self.applyToModules(lambda m: m.evaluate())
TypeError: 'NoneType' object is not callable
engahmed1190 commented 7 years ago

This might help: these are the model's layers, and the main problem is with the evaluate() method.

nn.Sequential {
  [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> (13) -> (14) -> (15) -> (16) -> (17) -> (18) -> (19) -> (20) -> (21) -> (22) -> (23) -> output]
  (0): TorchObject(nn.TVLoss, {'_type': 'torch.FloatTensor', 'strength': 0, 'x_diff': 
  ( 0 ,.,.) = 
    0.0000  0.0000  0.0000  ...  -0.0039  0.0078  0.0000
    0.0000  0.0000  0.0000  ...   0.0039  0.0000  0.0000
    0.0000  0.0000  0.0000  ...   0.0078 -0.0157  0.0000
             ...             ⋱             ...          
   -0.0196  0.0157 -0.0196  ...   0.0078  0.0588  0.0392
   -0.0196  0.0196 -0.0235  ...   0.0235  0.0745  0.1451
   -0.0078  0.0000  0.0118  ...   0.0431  0.1882  0.1569

  ( 1 ,.,.) = 
    0.0000  0.0000  0.0000  ...  -0.0039  0.0078  0.0000
    0.0000  0.0000  0.0000  ...   0.0039  0.0000  0.0000
    0.0000  0.0000  0.0000  ...   0.0078 -0.0078  0.0000
             ...             ⋱             ...          
   -0.0196  0.0157 -0.0196  ...   0.0078  0.0431  0.0235
   -0.0196  0.0196 -0.0235  ...   0.0196  0.0588  0.1373
   -0.0078  0.0000  0.0118  ...   0.0353  0.1765  0.1451

  ( 2 ,.,.) = 
    0.0000  0.0000  0.0000  ...  -0.0039  0.0078  0.0000
    0.0000  0.0000  0.0000  ...   0.0039  0.0000  0.0000
    0.0000  0.0000  0.0000  ...   0.0078 -0.0078  0.0000
             ...             ⋱             ...          
   -0.0196  0.0157 -0.0196  ...   0.0078  0.0471  0.0235
   -0.0196  0.0196 -0.0235  ...   0.0118  0.0510  0.1216
   -0.0078  0.0039  0.0118  ...   0.0157  0.1647  0.1176
  [torch.FloatTensor of size 3x511x511]
  , 'gradInput': [torch.FloatTensor with no dimension]
  , 'y_diff': 
  ( 0 ,.,.) = 
    0.0000  0.0000  0.0000  ...   0.0000  0.0078  0.0000
    0.0000  0.0000  0.0000  ...   0.0078  0.0118 -0.0039
    0.0000  0.0000  0.0000  ...  -0.0118 -0.0078  0.0000
             ...             ⋱             ...          
    0.0039  0.0039  0.0078  ...   0.0157  0.0314  0.0471
   -0.0157 -0.0039 -0.0235  ...   0.0863  0.1059  0.2196
    0.0039  0.0039  0.0039  ...   0.1020  0.1961  0.1451

  ( 1 ,.,.) = 
    0.0000  0.0000  0.0000  ...   0.0000  0.0078  0.0000
    0.0000  0.0000  0.0000  ...   0.0000  0.0039 -0.0039
    0.0000  0.0000  0.0000  ...  -0.0118 -0.0078  0.0000
             ...             ⋱             ...          
    0.0039  0.0039  0.0078  ...   0.0196  0.0314  0.0471
   -0.0157 -0.0039 -0.0235  ...   0.0706  0.0863  0.2039
    0.0039  0.0039  0.0039  ...   0.0941  0.1922  0.1451

  ( 2 ,.,.) = 
    0.0000  0.0000  0.0000  ...   0.0000  0.0078  0.0000
    0.0000  0.0000  0.0000  ...   0.0000  0.0039 -0.0039
    0.0000  0.0000  0.0000  ...  -0.0196 -0.0078  0.0000
             ...             ⋱             ...          
    0.0039  0.0039  0.0078  ...   0.0039  0.0078  0.0118
   -0.0196 -0.0078 -0.0235  ...   0.0275  0.0314  0.1451
    0.0039  0.0039  0.0039  ...   0.0745  0.1725  0.0980
  [torch.FloatTensor of size 3x511x511]
  , 'train': True, 'output': [torch.FloatTensor with no dimension]
  })
  (1): nn.SpatialReplicationPadding(4, 4, 4, 4)
  (2): nn.SpatialConvolution(3 -> 32, 9x9)
  (3): nn.InstanceNormalization
  (4): nn.ReLU
  (5): nn.SpatialConvolution(32 -> 64, 3x3, 2, 2, 1, 1)
  (6): nn.InstanceNormalization
  (7): nn.ReLU
  (8): nn.SpatialConvolution(64 -> 128, 3x3, 2, 2, 1, 1)
  (9): nn.InstanceNormalization
  (10): nn.ReLU
  (11): nn.Sequential {
    [input -> (0) -> (1) -> output]
    (0): torch.legacy.nn.ConcatTable.ConcatTable {
      input
        |`-> (0): nn.Identity
        |`-> (1): nn.Sequential {
               [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
               (0): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (1): nn.SpatialConvolution(128 -> 128, 3x3)
               (2): nn.InstanceNormalization
               (3): nn.ReLU
               (4): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (5): nn.SpatialConvolution(128 -> 128, 3x3)
               (6): nn.InstanceNormalization
             }
         +. -> output
    }
    (1): nn.CAddTable
  }
  (12): nn.Sequential {
    [input -> (0) -> (1) -> output]
    (0): torch.legacy.nn.ConcatTable.ConcatTable {
      input
        |`-> (0): nn.Identity
        |`-> (1): nn.Sequential {
               [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
               (0): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (1): nn.SpatialConvolution(128 -> 128, 3x3)
               (2): nn.InstanceNormalization
               (3): nn.ReLU
               (4): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (5): nn.SpatialConvolution(128 -> 128, 3x3)
               (6): nn.InstanceNormalization
             }
         +. -> output
    }
    (1): nn.CAddTable
  }
  (13): nn.Sequential {
    [input -> (0) -> (1) -> output]
    (0): torch.legacy.nn.ConcatTable.ConcatTable {
      input
        |`-> (0): nn.Identity
        |`-> (1): nn.Sequential {
               [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
               (0): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (1): nn.SpatialConvolution(128 -> 128, 3x3)
               (2): nn.InstanceNormalization
               (3): nn.ReLU
               (4): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (5): nn.SpatialConvolution(128 -> 128, 3x3)
               (6): nn.InstanceNormalization
             }
         +. -> output
    }
    (1): nn.CAddTable
  }
  (14): nn.Sequential {
    [input -> (0) -> (1) -> output]
    (0): torch.legacy.nn.ConcatTable.ConcatTable {
      input
        |`-> (0): nn.Identity
        |`-> (1): nn.Sequential {
               [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
               (0): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (1): nn.SpatialConvolution(128 -> 128, 3x3)
               (2): nn.InstanceNormalization
               (3): nn.ReLU
               (4): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (5): nn.SpatialConvolution(128 -> 128, 3x3)
               (6): nn.InstanceNormalization
             }
         +. -> output
    }
    (1): nn.CAddTable
  }
  (15): nn.Sequential {
    [input -> (0) -> (1) -> output]
    (0): torch.legacy.nn.ConcatTable.ConcatTable {
      input
        |`-> (0): nn.Identity
        |`-> (1): nn.Sequential {
               [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
               (0): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (1): nn.SpatialConvolution(128 -> 128, 3x3)
               (2): nn.InstanceNormalization
               (3): nn.ReLU
               (4): nn.SpatialReplicationPadding(1, 1, 1, 1)
               (5): nn.SpatialConvolution(128 -> 128, 3x3)
               (6): nn.InstanceNormalization
             }
         +. -> output
    }
    (1): nn.CAddTable
  }
  (16): nn.SpatialFullConvolution(128 -> 64, 3x3, 2, 2, 1, 1, 1, 1)
  (17): nn.InstanceNormalization
  (18): nn.ReLU
  (19): nn.SpatialFullConvolution(64 -> 32, 3x3, 2, 2, 1, 1, 1, 1)
  (20): nn.InstanceNormalization
  (21): nn.ReLU
  (22): nn.SpatialReplicationPadding(1, 1, 1, 1)
  (23): nn.SpatialConvolution(32 -> 3, 3x3)
}

The model loads perfectly fine, but when I try to evaluate it, I get this error:

 Traceback (most recent call last):
  File "convert-fast-neural-style.py", line 176, in <module>
    main()
  File "convert-fast-neural-style.py", line 162, in main
    unknown_layer_converter_fn=convert_instance_norm
  File "/usr/local/lib/python2.7/dist-packages/torch2coreml/_torch_converter.py", line 194, in convert
    print (model.evaluate())
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 39, in evaluate
    self.applyToModules(lambda m: m.evaluate())
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 26, in applyToModules
    func(module)
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.py", line 39, in <lambda>
    self.applyToModules(lambda m: m.evaluate())
TypeError: 'NoneType' object is not callable

What is the main reason evaluation raises this NoneType error? Is there any other way that is equivalent to m.evaluate()?

For reference, this is also the Model I am trying to evaluate.

opedge commented 7 years ago

To use that repo and its models with custom layers, you need to implement the corresponding layers in pytorch (as legacy.nn.Module subclasses) and replace them in the parsed model.
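The TypeError above is consistent with the nn.TVLoss layer being deserialized as a generic TorchObject whose evaluate attribute is None, so the lambda m: m.evaluate() in Container.applyToModules fails. A minimal, hypothetical sketch of the replacement step described here, using small stub classes in place of the torch.legacy.nn types (since TVLoss with strength 0 is a pass-through at inference, an Identity layer is a plausible substitute):

```python
class Identity(object):
    """Stand-in for nn.Identity: passes input through unchanged."""
    def evaluate(self):
        pass

    def updateOutput(self, input):
        self.output = input
        return self.output


class Sequential(object):
    """Stand-in for a legacy container: holds a .modules list."""
    def __init__(self, modules):
        self.modules = modules

    def evaluate(self):
        for m in self.modules:
            m.evaluate()


class TVLossStub(object):
    """Stand-in for the deserialized TorchObject: evaluate is None,
    which is what triggers 'NoneType' object is not callable."""
    evaluate = None


def replace_unsupported(container):
    """Recursively swap any module without a callable evaluate()
    (i.e. unparsed custom layers) for an Identity."""
    for i, m in enumerate(container.modules):
        if hasattr(m, 'modules'):
            replace_unsupported(m)
        elif not callable(getattr(m, 'evaluate', None)):
            container.modules[i] = Identity()
    return container


model = Sequential([TVLossStub(), Identity()])
replace_unsupported(model)
model.evaluate()  # no longer raises TypeError
```

In the real pipeline the analogous walk would be done over the parsed torch.legacy.nn.Sequential before torch2coreml calls evaluate() on it; the class and function names here are illustrative, not part of either library's API.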

engahmed1190 commented 7 years ago

@opedge do you have any example of that?

opedge commented 7 years ago

There is an example of implementing a custom InstanceNormalization layer using pytorch.
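For reference, the forward computation such a custom InstanceNormalization layer has to reproduce is per-sample, per-channel normalization over the spatial dimensions, followed by a learned per-channel affine transform. A minimal numpy sketch (function name, argument names, and the eps default are illustrative, not the repo's API):

```python
import numpy as np

def instance_norm(x, gamma, beta, eps=1e-5):
    """Instance normalization for an NCHW batch: normalize each
    (sample, channel) plane by its own spatial mean and variance,
    then scale and shift with per-channel parameters."""
    mean = x.mean(axis=(2, 3), keepdims=True)   # per (n, c) spatial mean
    var = x.var(axis=(2, 3), keepdims=True)     # per (n, c) spatial variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)

# With gamma=1 and beta=0, each channel plane comes out roughly
# zero-mean and unit-variance:
x = np.random.randn(1, 3, 8, 8)
y = instance_norm(x, np.ones(3), np.zeros(3))
```

Unlike batch normalization, the statistics depend only on the single input image, which is why these style-transfer networks need no running mean/variance at inference time.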