First of all, thanks for the package! It's amazing that something like this exists.
While trying to convert a model I noticed that using `'same'` padding breaks the conversion of convolution layers. Given that `'same'` is supported as an argument by both the torch and the TF versions of the convolution layers, I would assume this should be a relatively easy fix. Minimal reproduction:
```python
import torch
import torch.nn as nn
import nobuco
from nobuco import ChannelOrder

dummy_image = torch.rand(size=(1, 3, 2048))

for padding in [0, 'same']:
    pytorch_module = nn.Conv1d(3, 10, 15, padding=padding)
    keras_model = nobuco.pytorch_to_keras(
        pytorch_module,
        args=[dummy_image], kwargs=None,
        inputs_channel_order=ChannelOrder.TENSORFLOW,
        outputs_channel_order=ChannelOrder.TENSORFLOW
    )
```
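For reference, here is a minimal sketch (outside of nobuco) confirming that both frameworks accept `'same'` padding directly; the layer parameters simply mirror the repro above and are only illustrative.

```python
import numpy as np
import torch
import torch.nn as nn
import tensorflow as tf

# PyTorch: 'same' padding keeps the temporal dimension unchanged (channels-first layout)
torch_conv = nn.Conv1d(3, 10, 15, padding='same')
torch_out = torch_conv(torch.rand(1, 3, 2048))
print(torch_out.shape)  # torch.Size([1, 10, 2048])

# TensorFlow/Keras: same behaviour, but with channels-last layout
tf_conv = tf.keras.layers.Conv1D(filters=10, kernel_size=15, padding='same')
tf_out = tf_conv(np.random.rand(1, 2048, 3).astype(np.float32))
print(tf_out.shape)  # (1, 2048, 10)
```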