rwth-i6 / pytorch-to-returnn

Make PyTorch code runnable within RETURNN

Dummy run with serialized dim tags #111

Closed · vieting closed this issue 2 years ago

vieting commented 2 years ago

In #110, I added a dummy run to test a serialized config. In the example given in test_negative_sampling, this does not yet work. The error can be reproduced by adding the dummy run to that test, i.e.:

```python
import typing

import numpy
import torch

# Import path for the converter is assumed here (as used in tests/test_layers.py):
from pytorch_to_returnn.converter import verify_torch_and_convert_to_returnn


def test_negative_sampling():
  n_batch, n_time, n_feat = 3, 14, 7  # B, T', F
  n_negatives = 10  # N

  def model_func(wrapped_import, inputs: torch.Tensor):
    if typing.TYPE_CHECKING or not wrapped_import:
      import torch
    else:
      torch = wrapped_import("torch")
    model = torch.nn.Conv1d(in_channels=n_feat, out_channels=n_feat, kernel_size=2, stride=3)
    inputs = model(inputs.transpose(1, 2)).transpose(1, 2).contiguous()

    bsz, tsz, fsz = inputs.shape  # (B,T,F)
    tszs = torch.arange(tsz).unsqueeze(-1).expand(-1, n_negatives).flatten()  # (T*N)
    neg_idxs = torch.randint(low=0, high=tsz - 1, size=(bsz, n_negatives * tsz))  # (B,T*N)
    neg_idxs = neg_idxs + (neg_idxs >= tszs).int()  # (B,T*N)
    neg_idxs = neg_idxs + (torch.arange(bsz).unsqueeze(1) * tsz)  # (B,T*N)
    y = inputs.view(-1, fsz)  # (B,T,F) => (B*T,F)
    negs = y[neg_idxs.view(-1)]  # (B*T*N,F)
    negs = negs.view(bsz, tsz, n_negatives, fsz).permute(2, 0, 1, 3)  # to (N,B,T,F)
    inputs_unsqueeze = inputs.unsqueeze(0)  # (1,B,T,F)
    targets = torch.cat([inputs_unsqueeze, negs], dim=0)  # (N+1,B,T,F)
    logits = torch.cosine_similarity(inputs.float(), targets.float(), dim=-1).type_as(inputs)
    return logits

  rnd = numpy.random.RandomState(42)
  x = rnd.normal(0., 1., (n_batch, n_time, n_feat)).astype("float32")
  converter = verify_torch_and_convert_to_returnn(model_func, inputs=x, inputs_data_kwargs={
    "shape": (None, n_feat), "batch_dim_axis": 0, "time_dim_axis": 1, "feature_dim_axis": 2})

  cfg = converter.get_returnn_config_serialized()
  from returnn_helpers import config_net_dict_via_serialized, dummy_run_net
  config, net_dict = config_net_dict_via_serialized(cfg)
  dummy_run_net(config)
```
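
For reference, `dummy_run_net` in `tests/returnn_helpers.py` builds a RETURNN engine from the config, and the traceback below shows it failing while the network is constructed from the net dict. The following is only a minimal, illustrative sketch of that construction step, assuming a plain RETURNN install; `construct_from_serialized` and the direct use of `ExternData`/`TFNetwork` are made up here, not helpers from this repo:

```python
# Illustrative only: reproduce the construction step from the traceback below,
# skipping the Engine/dataset setup that dummy_run_net performs.
import tensorflow as tf
from returnn.config import Config
from returnn.tf.network import TFNetwork, ExternData


def construct_from_serialized(config_dict, net_dict):
  """config_dict/net_dict as returned by config_net_dict_via_serialized."""
  tf.compat.v1.disable_eager_execution()  # RETURNN TF layers expect graph mode
  config = Config()
  config.update(config_dict)
  with tf.Graph().as_default():
    extern_data = ExternData()
    extern_data.init_from_config(config)  # registers "data" with the serialized dim tags
    net = TFNetwork(config=config, extern_data=extern_data, train_flag=True)
    net.construct_from_dict(net_dict)  # this is where VerifyOutShapeException is raised
  return net
```

The repo's helper instead goes through `Engine.init_train_from_config` (as visible in the traceback), but the exception is raised inside `construct_from_dict` either way.
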
vieting commented 2 years ago

I'll add a draft PR and post the log and stack trace here once the test finishes.

vieting commented 2 years ago

See #112 and corresponding tests (here)

Traceback

``` ERROR: test_layers.test_negative_sampling ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/runner/.local/lib/python3.8/site-packages/nose/case.py", line 198, in TestBase.runTest line: self.test(*self.arg) locals: self = test_layers.test_negative_sampling self.test = self.arg = () File "/home/runner/work/pytorch-to-returnn/pytorch-to-returnn/tests/test_layers.py", line 83, in test_negative_sampling line: dummy_run_net(config) locals: dummy_run_net = config = {'numpy': , 'single_step_dim': Dim{'single-step'!}, 'use_tensorflow': True, 'behavior_version': 12, 'time_data_dim': Dim{'time:data'[B]}, 'feature_data_dim': Dim{F'feature:data'(7)}, '_10___time_data__2___3__..., len = 14 File "/home/runner/work/pytorch-to-returnn/pytorch-to-returnn/tests/returnn_helpers.py", line 38, in dummy_run_net line: engine.init_train_from_config(train_data=dataset) locals: engine = engine.init_train_from_config = > train_data = dataset = File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/engine.py", line 1056, in Engine.init_train_from_config line: self.init_network_from_config(config) locals: self = self.init_network_from_config = > config = File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/engine.py", line 1121, in Engine.init_network_from_config line: self._init_network(net_desc=net_dict, epoch=self.epoch) locals: self = self._init_network = > net_desc = net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 epoch = None self.epoch = 1 File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/engine.py", line 1301, in Engine._init_network line: self.network, self.updater = self.create_network( config=self.config, extern_data=extern_data, rnd_seed=net_random_seed, train_flag=train_flag, eval_flag=self.use_eval_flag, search_flag=self.use_search_flag, initial_learning_rate=getattr(self, "initial_learning_rate", None), net_dict=net_desc) locals: self = self.network = None self.updater = None self.create_network = > config = self.config = extern_data = rnd_seed = net_random_seed = 1 train_flag = eval_flag = self.use_eval_flag = True search_flag = self.use_search_flag = False initial_learning_rate = getattr = net_dict = net_desc = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/engine.py", line 1342, in Engine.create_network line: network.construct_from_dict(net_dict) locals: network = > network.construct_from_dict = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 619, in TFNetwork.construct_from_dict line: self.construct_layer(net_dict, name, get_layer=get_layer) 
locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = 'output', len = 6 get_layer = None File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'_network': >, '_name': 'output'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/basic.py", line 363, in CopyLayer.transform_config_dict line: super(CopyLayer, cls).transform_config_dict(d, network=network, get_layer=get_layer) locals: super = CopyLayer = cls = transform_config_dict = d = {'_network': >, '_name': 'output'} network = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'_network': >, '_name': 'output'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['mul_2'] File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'mul_2' File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'mul_2' get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'kind': 'mul', 'out_shape': {Dim{'Conv1d:conv:s0'[?]}, Dim{'Unflatten_3_split_dims0+(static-dim-2)'(11)}, Dim{B}}, '_network': >, '_name': 'mul_2'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'kind': 'mul', 'out_shape': {Dim{'Conv1d:conv:s0'[?]}, Dim{'Unflatten_3_split_dims0+(static-dim-2)'(11)}, Dim{B}}, '_network': >, '_name': 'mul_2'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['Reduce', 'Minimum'], _[0]: {len = 6} File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'Reduce', len = 6 File 
"/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'Reduce', len = 6 get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'mode': 'sum', 'axes': ['F'], '_network': >, '_name': 'Reduce'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'mode': 'sum', 'axes': ['F'], '_network': >, '_name': 'Reduce'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['mul'] File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'mul' File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'mul' get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'kind': 'mul', 'out_shape': {Dim{'Conv1d:conv:s0'[?]}, Dim{'Unflatten_3_split_dims0+(static-dim-2)'(11)}, Dim{B}, Dim{F'Conv1d:channel'(7)}}, '_network': >, '_name': 'mul'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'kind': 'mul', 'out_shape': {Dim{'Conv1d:conv:s0'[?]}, Dim{'Unflatten_3_split_dims0+(static-dim-2)'(11)}, Dim{B}, Dim{F'Conv1d:channel'(7)}}, '_network': >, '_name': 'mul'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['Transpose_1', 'Cat'], _[0]: {len = 11} File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'Cat' 
File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'Cat' get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'_network': >, '_name': 'Cat'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/basic.py", line 456, in ConcatLayer.transform_config_dict line: super(ConcatLayer, cls).transform_config_dict(d, network=network, get_layer=get_layer) locals: super = ConcatLayer = cls = transform_config_dict = d = {'_network': >, '_name': 'Cat'} network = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'_network': >, '_name': 'Cat'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ('Unflatten_3', 'Cat_ReturnnReinterpretSameSizeAs'), _[0]: {len = 11} File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'Cat_ReturnnReinterpretSameSizeAs', len = 32 File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'Cat_ReturnnReinterpretSameSizeAs', len = 32 get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'size_base': 'Unflatten_3', '_network': >, '_name': 'Cat_ReturnnReinterpretSameSizeAs'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/basic.py", line 4729, in ReinterpretDataLayer.transform_config_dict line: super(ReinterpretDataLayer, cls).transform_config_dict(d, network=network, get_layer=get_layer) locals: super = ReinterpretDataLayer = cls = transform_config_dict = d = {'size_base': 'Unflatten_3', 
'_network': >, '_name': 'Cat_ReturnnReinterpretSameSizeAs'} network = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'size_base': 'Unflatten_3', '_network': >, '_name': 'Cat_ReturnnReinterpretSameSizeAs'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['Transpose_2'], _[0]: {len = 11} File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'Transpose_2', len = 11 File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'Transpose_2', len = 11 get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'_network': >, '_name': 'Transpose_2'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/basic.py", line 363, in CopyLayer.transform_config_dict line: super(CopyLayer, cls).transform_config_dict(d, network=network, get_layer=get_layer) locals: super = CopyLayer = cls = transform_config_dict = d = {'_network': >, '_name': 'Transpose_2'} network = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'_network': >, '_name': 'Transpose_2'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['Unflatten_2'], _[0]: {len = 11} File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'Unflatten_2', len = 11 File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'Unflatten_2', len = 11 get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File 
"/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'axis': 'B', 'dims': [Dim{B}, Dim{'time:data'[B]}, Dim{'static-dim-2'(10)}], '_network': >, '_name': 'Unflatten_2'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'axis': 'B', 'dims': [Dim{B}, Dim{'time:data'[B]}, Dim{'static-dim-2'(10)}], '_network': >, '_name': 'Unflatten_2'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['GatherTensor'], _[0]: {len = 12} File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'GatherTensor', len = 12 File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'GatherTensor', len = 12 get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'axis': 'B', 'position': 'Flatten_2', '_network': >, '_name': 'GatherTensor', 'sources': []} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/basic.py", line 1513, in GatherLayer.transform_config_dict line: d["position"] = get_layer(d["position"]) locals: d = {'axis': 'B', 'position': 'Flatten_2', '_network': >, '_name': 'GatherTensor', 'sources': []} get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'Flatten_2', len = 9 get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'axes': 
['B', 'T'], 'keep_order': True, '_network': >, '_name': 'Flatten_2'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'axes': ['B', 'T'], 'keep_order': True, '_network': >, '_name': 'Flatten_2'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['add_1'] File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'add_1' File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'add_1' get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'kind': 'add', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, '_network': >, '_name': 'add_1'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'kind': 'add', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, '_network': >, '_name': 'add_1'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['add', 'add_Squeeze'] File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'add' File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'add' get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'kind': 'add', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, '_network': >, '_name': 'add'} network = net = > 
get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'kind': 'add', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, '_network': >, '_name': 'add'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['mul_randint', 'Cast_2'], _[0]: {len = 11} File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'Cast_2', len = 6 File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'Cast_2', len = 6 get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'dtype': 'int64', '_network': >, '_name': 'Cast_2'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/basic.py", line 363, in CopyLayer.transform_config_dict line: super(CopyLayer, cls).transform_config_dict(d, network=network, get_layer=get_layer) locals: super = CopyLayer = cls = transform_config_dict = d = {'dtype': 'int64', '_network': >, '_name': 'Cast_2'} network = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'dtype': 'int64', '_network': >, '_name': 'Cast_2'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['Cast_1'], _[0]: {len = 6} File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'Cast_1', len = 6 File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'Cast_1', len = 6 get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 938, in 
TFNetwork.construct_layer line: layer_class.transform_config_dict(layer_desc, network=net, get_layer=get_layer) locals: layer_class = layer_class.transform_config_dict = > layer_desc = {'dtype': 'int32', '_network': >, '_name': 'Cast_1'} network = net = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/basic.py", line 363, in CopyLayer.transform_config_dict line: super(CopyLayer, cls).transform_config_dict(d, network=network, get_layer=get_layer) locals: super = CopyLayer = cls = transform_config_dict = d = {'dtype': 'int32', '_network': >, '_name': 'Cast_1'} network = > get_layer = .get_layer at 0x7f71707e6040> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 581, in LayerBase.transform_config_dict line: d["sources"] = [ get_layer(src_name) for src_name in src_names if not src_name == "none"] locals: d = {'dtype': 'int32', '_network': >, '_name': 'Cast_1'} get_layer = .get_layer at 0x7f71707e6040> src_name = src_names = ['greater_equal'], _[0]: {len = 13} File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/layers/base.py", line 582, in line: get_layer(src_name) locals: get_layer = .get_layer at 0x7f71707e6040> src_name = 'greater_equal', len = 13 File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 825, in TFNetwork.construct_layer..get_layer line: return self.construct_layer(net_dict=net_dict, name=src_name, get_layer=get_layer, add_layer=add_layer) locals: self = > self.construct_layer = >> net_dict = {'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)}, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Leng..., len = 110 name = src_name = 'greater_equal', len = 13 get_layer = .get_layer at 0x7f71707e6040> add_layer = >> File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/network.py", line 945, in TFNetwork.construct_layer line: return add_layer(name=name_with_prefix, layer_class=layer_class, **layer_desc) locals: add_layer = >> name = 'greater_equal', len = 13 name_with_prefix = 'greater_equal', len = 13 layer_class = layer_desc = {'kind': 'greater_equal', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, '_network': >, '_name': 'greater_equal', 'sources': [ self = > self._create_layer = >> name = 'greater_equal', len = 13 layer_class = layer_desc = {'kind': 'greater_equal', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, '_network': >, '_name': 'greater_equal', 'sources': [ Data{'greater_equal_output', [B,T|'10*time:data+9'[B]], dtype='bool'} layer_class = layer_class.fixup_out_data = > layer_desc = {'kind': 'greater_equal', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, '_network': >, '_name': 'greater_equal', 'sources': [ Data{'greater_equal_output', [B,T|'10*time:data+9'[B]], dtype='bool'} output.verify_out_shape = out_shape = {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, len = 2 File "/home/runner/.local/lib/python3.8/site-packages/returnn/tf/util/data.py", line 2908, in Data.verify_out_shape line: raise VerifyOutShapeException( "%s verify_out_shape, with dims %s, does not match out_shape %r, %s not in self" % ( self, self_dim_tags, out_shape, dim)) locals: VerifyOutShapeException = self = Data{'greater_equal_output', [B,T|'10*time:data+9'[B]], dtype='bool'} self_dim_tags = {Dim{B}, 
Dim{'10*time:data+9'[B]}}, len = 2 out_shape = {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, len = 2 dim = Dim{'(10*((time:data+-2)//3))+10'[?]}
returnn.tf.util.data.VerifyOutShapeException: Data{'greater_equal_output', [B,T|'10*time:data+9'[B]], dtype='bool'} verify_out_shape, with dims {Dim{B}, Dim{'10*time:data+9'[B]}}, does not match out_shape {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, Dim{'(10*((time:data+-2)//3))+10'[?]} not in self
```
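
The check that fails is RETURNN's `Data.verify_out_shape`: every dim tag in the layer's serialized `out_shape` must be found among the dim tags of the actual layer output, and here `Dim{'(10*((time:data+-2)//3))+10'}` is not identified with `Dim{'10*time:data+9'}`. Below is only a toy sketch of that kind of mismatch, assuming RETURNN's `Dim`/`Data` API; the expressions just mirror the two dim descriptions from the exception, and `toy_output` is a made-up name:

```python
# Toy sketch only, mirroring the dim expressions from the exception above.
from returnn.tf.util.data import Data, SpatialDim, batch_dim

time_dim = SpatialDim("time:data")       # dynamic time dim of the input
dim_a = 10 * ((time_dim - 2) // 3) + 10  # as in the serialized out_shape
dim_b = 10 * time_dim + 9                # as attached to the constructed output

data = Data(name="toy_output", dim_tags=[batch_dim, dim_b], dtype="bool")
# Raises VerifyOutShapeException if RETURNN does not identify dim_a with dim_b:
data.verify_out_shape({batch_dim, dim_a})
```
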

Log

``` >>> Running with standard reference imports... >>> Running with wrapped imports, wrapping original PyTorch... *** func call pytorch_to_returnn.import_wrapper._torch_traced.torch._jit_internal._copy_to_script_wrapper(...) *** func call pytorch_to_returnn.import_wrapper._torch_traced.torch._jit_internal._overload_method(...) *** func call pytorch_to_returnn.import_wrapper._torch_traced.torch.empty(...) *** func call pytorch_to_returnn.import_wrapper._torch_traced.torch.nn.init.kaiming_uniform_(...) *** func call pytorch_to_returnn.import_wrapper._torch_traced.torch.nn.init._calculate_fan_in_and_fan_out(...) *** func call pytorch_to_returnn.import_wrapper._torch_traced.torch.nn.init.uniform_(...) *** torch module call pytorch_to_returnn.import_wrapper._torch_traced.torch.nn.modules.conv.Conv1d(...)(...) *** func call pytorch_to_returnn.import_wrapper._torch_traced.torch.nn.functional.conv1d(...) *** func call pytorch_to_returnn.import_wrapper._torch_traced.torch.arange(...) *** func call pytorch_to_returnn.import_wrapper._torch_traced.torch.cat(...) *** func call pytorch_to_returnn.import_wrapper._torch_traced.torch.cosine_similarity(...) >>>> Module naming hierarchy: .tmp_root: (hidden, empty) Conv1d: -> ... WrappedTorchFunction: >> -> ... >>>> Root module calls: >>>> Modules with params: >>>> Looks good! >>> Running with wrapped Torch import, wrapping replacement for PyTorch... RETURNN input: Data{'data', [B,T|'time:data'[B],F|F'feature:data'(7)]} *** root/'Transpose' layer dict: {'class': 'copy', 'from': 'data'} *** root/'Transpose' CopyLayer output: *** root/'Conv1d' layer dict: {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)} *** root/'Conv1d' ConvLayer output: *** root/'Conv1d' ConvLayer importing params ['weight', 'bias'] ... *** root/'Conv1d' ConvLayer check RETURNN inputs/outputs given Torch inputs/outputs ... 
**** validate: add network input tensor **** validate: add call > (depth 0)> input tensor **** validate: add call > (depth 0)> output tensor *** root/'Transpose_1' layer dict: {'class': 'copy', 'from': 'Conv1d'} *** root/'Transpose_1' CopyLayer output: *** root/'Range_Length' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** root/'Range_Length' LengthLayer output: *** root/'Range_Reduce' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Range_Length'} *** root/'Range_Reduce' ReduceLayer output: *** root/'Range_sub_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'Range_sub_unnamed_const' ConstantLayer output: *** root/'Range_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Range_Reduce', 'Range_sub_unnamed_const']} *** root/'Range_sub' CombineLayer output: *** root/'Range_sub_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'Range_sub_unnamed_const_1' ConstantLayer output: *** root/'Range_sub_1' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Range_sub', 'Range_sub_unnamed_const_1']} *** root/'Range_sub_1' CombineLayer output: *** root/'Range_floordiv_unnamed_const' layer dict: {'class': 'constant', 'value': 3} *** root/'Range_floordiv_unnamed_const' ConstantLayer output: *** root/'Range_floordiv' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['Range_sub_1', 'Range_floordiv_unnamed_const']} *** root/'Range_floordiv' CombineLayer output: *** root/'Range_add_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'Range_add_unnamed_const' ConstantLayer output: *** root/'Range_add' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['Range_floordiv', 'Range_add_unnamed_const']} *** root/'Range_add' CombineLayer output: *** root/'Range' layer dict: {'class': 'range_from_length', 'from': 'Range_add'} *** root/'Range' RangeFromLengthLayer output: *** root/'Unflatten_Length' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** root/'Unflatten_Length' LengthLayer output: *** root/'Unflatten_Reduce' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Unflatten_Length'} *** root/'Unflatten_Reduce' ReduceLayer output: *** root/'Unflatten_sub_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'Unflatten_sub_unnamed_const' ConstantLayer output: *** root/'Unflatten_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Unflatten_Reduce', 'Unflatten_sub_unnamed_const']} *** root/'Unflatten_sub' CombineLayer output: *** root/'Unflatten_sub_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'Unflatten_sub_unnamed_const_1' ConstantLayer output: *** root/'Unflatten_sub_1' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Unflatten_sub', 'Unflatten_sub_unnamed_const_1']} *** root/'Unflatten_sub_1' CombineLayer output: *** root/'Unflatten_floordiv_unnamed_const' layer dict: {'class': 'constant', 'value': 3} *** root/'Unflatten_floordiv_unnamed_const' ConstantLayer output: *** root/'Unflatten_floordiv' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['Unflatten_sub_1', 'Unflatten_floordiv_unnamed_const']} *** root/'Unflatten_floordiv' CombineLayer output: *** root/'Unflatten_add_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'Unflatten_add_unnamed_const' ConstantLayer output: *** root/'Unflatten_add' layer dict: 
{'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['Unflatten_floordiv', 'Unflatten_add_unnamed_const']} *** root/'Unflatten_add' CombineLayer output: *** root/'Unflatten' layer dict: {'class': 'split_dims', 'from': 'Range', 'axis': 'T', 'dims': [-1, 1]} *** root/'Unflatten' SplitDimsLayer output: *** root/'Tile' layer dict: {'class': 'tile', 'multiples': {'T': 1, 'F': 10}, 'from': 'Unflatten'} *** root/'Tile' TileLayer output: *** root/'Flatten' layer dict: {'class': 'merge_dims', 'from': 'Tile', 'axes': ['T', 'F'], 'keep_order': True} *** root/'Flatten' MergeDimsLayer output: *** root/'randint_Length' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** root/'randint_Length' LengthLayer output: *** root/'Length_randint_Reduce' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'randint_Length'} *** root/'Length_randint_Reduce' ReduceLayer output: *** root/'randint_sub_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_sub_unnamed_const' ConstantLayer output: *** root/'Reduce_randint_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Length_randint_Reduce', 'randint_sub_unnamed_const']} *** root/'Reduce_randint_sub' CombineLayer output: *** root/'randint_sub_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_sub_unnamed_const_1' ConstantLayer output: *** root/'sub_randint_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Reduce_randint_sub', 'randint_sub_unnamed_const_1']} *** root/'sub_randint_sub' CombineLayer output: *** root/'randint_floordiv_unnamed_const' layer dict: {'class': 'constant', 'value': 3} *** root/'randint_floordiv_unnamed_const' ConstantLayer output: *** root/'sub_randint_floordiv' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['sub_randint_sub', 'randint_floordiv_unnamed_const']} *** root/'sub_randint_floordiv' CombineLayer output: *** root/'randint_add_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_add_unnamed_const' ConstantLayer output: *** root/'floordiv_randint_add' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['sub_randint_floordiv', 'randint_add_unnamed_const']} *** root/'floordiv_randint_add' CombineLayer output: *** root/'randint_sub_unnamed_const_2' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_sub_unnamed_const_2' ConstantLayer output: *** root/'add_randint_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['floordiv_randint_add', 'randint_sub_unnamed_const_2']} *** root/'add_randint_sub' CombineLayer output: *** root/'sub_randint_Length' layer dict: {'class': 'length', 'axis': 'B', 'from': 'data'} *** root/'sub_randint_Length' LengthLayer output: *** root/'Length_randint_Length' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** root/'Length_randint_Length' LengthLayer output: *** root/'Length_randint_Reduce_1' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Length_randint_Length'} *** root/'Length_randint_Reduce_1' ReduceLayer output: *** root/'randint_sub_unnamed_const_3' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_sub_unnamed_const_3' ConstantLayer output: *** root/'Reduce_randint_sub_1' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Length_randint_Reduce_1', 'randint_sub_unnamed_const_3']} *** root/'Reduce_randint_sub_1' CombineLayer output: *** 
root/'randint_sub_unnamed_const_4' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_sub_unnamed_const_4' ConstantLayer output: *** root/'sub_randint_sub_1' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Reduce_randint_sub_1', 'randint_sub_unnamed_const_4']} *** root/'sub_randint_sub_1' CombineLayer output: *** root/'randint_floordiv_unnamed_const_1' layer dict: {'class': 'constant', 'value': 3} *** root/'randint_floordiv_unnamed_const_1' ConstantLayer output: *** root/'sub_randint_floordiv_1' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['sub_randint_sub_1', 'randint_floordiv_unnamed_const_1']} *** root/'sub_randint_floordiv_1' CombineLayer output: *** root/'randint_add_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_add_unnamed_const_1' ConstantLayer output: *** root/'floordiv_randint_add_1' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['sub_randint_floordiv_1', 'randint_add_unnamed_const_1']} *** root/'floordiv_randint_add_1' CombineLayer output: *** root/'randint_mul_unnamed_const' layer dict: {'class': 'constant', 'value': 10} *** root/'randint_mul_unnamed_const' ConstantLayer output: *** root/'add_randint_mul' layer dict: {'class': 'combine', 'kind': 'mul', 'out_shape': set(), 'from': ['randint_mul_unnamed_const', 'floordiv_randint_add_1']} *** root/'add_randint_mul' CombineLayer output: *** root/'mul_randint_Cast' layer dict: {'class': 'cast', 'from': 'add_randint_sub', 'dtype': 'int64'} *** root/'mul_randint_Cast' CastLayer output: *** root/'mul_randint' layer dict: {'class': 'rand_int', 'shape': (Dim{B}, Dim{'(10*((time:data+-2)//3))+10'[?]}), 'maxval': 'mul_randint_Cast', 'minval': 0, 'dtype': 'int64', 'from': ['data']} *** root/'mul_randint' RandIntLayer output: *** root/'Cast' layer dict: {'class': 'cast', 'from': 'Flatten', 'dtype': 'int64'} *** root/'Cast' CastLayer output: *** root/'greater_equal_ReturnnReinterpretSameSizeAs' layer dict: {'class': 'reinterpret_data', 'from': 'Cast', 'size_base': 'mul_randint'} *** root/'greater_equal_ReturnnReinterpretSameSizeAs' ReinterpretDataLayer output: *** root/'greater_equal' layer dict: {'class': 'compare', 'kind': 'greater_equal', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[B]}, Dim{B}}, 'from': ['mul_randint', 'greater_equal_ReturnnReinterpretSameSizeAs']} *** root/'greater_equal' CompareLayer output: *** root/'Cast_1' layer dict: {'class': 'cast', 'from': 'greater_equal', 'dtype': 'int32'} *** root/'Cast_1' CastLayer output: *** root/'Cast_2' layer dict: {'class': 'cast', 'from': 'Cast_1', 'dtype': 'int64'} *** root/'Cast_2' CastLayer output: *** root/'add' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[B]}, Dim{B}}, 'from': ['mul_randint', 'Cast_2']} *** root/'add' CombineLayer output: *** root/'Range_Length_1' layer dict: {'class': 'length', 'axis': 'B', 'from': 'data'} *** root/'Range_Length_1' LengthLayer output: *** root/'Range_1' layer dict: {'class': 'range_from_length', 'from': 'Range_Length_1'} *** root/'Range_1' RangeFromLengthLayer output: *** root/'Unflatten_Length_1' layer dict: {'class': 'length', 'axis': 'B', 'from': 'data'} *** root/'Unflatten_Length_1' LengthLayer output: *** root/'Unflatten_1' layer dict: {'class': 'split_dims', 'from': 'Range_1', 'axis': 'B', 'dims': [-1, 1]} *** root/'Unflatten_1' SplitDimsLayer output: *** root/'mul_Length' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** 
root/'mul_Length' LengthLayer output: *** root/'Length_mul_Reduce' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'mul_Length'} *** root/'Length_mul_Reduce' ReduceLayer output: *** root/'mul_sub_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'mul_sub_unnamed_const' ConstantLayer output: *** root/'Reduce_mul_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Length_mul_Reduce', 'mul_sub_unnamed_const']} *** root/'Reduce_mul_sub' CombineLayer output: *** root/'mul_sub_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'mul_sub_unnamed_const_1' ConstantLayer output: *** root/'sub_mul_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Reduce_mul_sub', 'mul_sub_unnamed_const_1']} *** root/'sub_mul_sub' CombineLayer output: *** root/'mul_floordiv_unnamed_const' layer dict: {'class': 'constant', 'value': 3} *** root/'mul_floordiv_unnamed_const' ConstantLayer output: *** root/'sub_mul_floordiv' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['sub_mul_sub', 'mul_floordiv_unnamed_const']} *** root/'sub_mul_floordiv' CombineLayer output: *** root/'mul_add_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'mul_add_unnamed_const' ConstantLayer output: *** root/'floordiv_mul_add' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['sub_mul_floordiv', 'mul_add_unnamed_const']} *** root/'floordiv_mul_add' CombineLayer output: *** root/'add_mul' layer dict: {'class': 'combine', 'kind': 'mul', 'out_shape': {Dim{'Unflatten_1_split_dims1'(1)}, Dim{B}}, 'from': ['Unflatten_1', 'floordiv_mul_add']} *** root/'add_mul' CombineLayer output: *** root/'Cast_3' layer dict: {'class': 'cast', 'from': 'add_mul', 'dtype': 'int64'} *** root/'Cast_3' CastLayer output: *** root/'add_Squeeze' layer dict: {'class': 'squeeze', 'from': 'Cast_3', 'axis': ['F']} *** root/'add_Squeeze' SqueezeLayer output: *** root/'add_1' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[B]}, Dim{B}}, 'from': ['add', 'add_Squeeze']} *** root/'add_1' CombineLayer output: *** root/'Flatten_1' layer dict: {'class': 'merge_dims', 'from': 'Transpose_1', 'axes': ['B', 'T'], 'keep_order': True} *** root/'Flatten_1' MergeDimsLayer output: *** root/'Flatten_2' layer dict: {'class': 'merge_dims', 'from': 'add_1', 'axes': ['B', 'T'], 'keep_order': True} *** root/'Flatten_2' MergeDimsLayer output: *** root/'GatherTensor' layer dict: {'class': 'gather', 'from': 'Flatten_1', 'axis': 'B', 'position': 'Flatten_2'} *** root/'GatherTensor' GatherLayer output: *** root/'Unflatten_Length_2' layer dict: {'class': 'length', 'axis': 'B', 'from': 'data'} *** root/'Unflatten_Length_2' LengthLayer output: *** root/'Unflatten_Length_3' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** root/'Unflatten_Length_3' LengthLayer output: *** root/'Unflatten_Reduce_1' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Unflatten_Length_3'} *** root/'Unflatten_Reduce_1' ReduceLayer output: *** root/'Unflatten_sub_unnamed_const_2' layer dict: {'class': 'constant', 'value': 1} *** root/'Unflatten_sub_unnamed_const_2' ConstantLayer output: *** root/'Unflatten_sub_2' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Unflatten_Reduce_1', 'Unflatten_sub_unnamed_const_2']} *** root/'Unflatten_sub_2' CombineLayer output: *** root/'Unflatten_sub_unnamed_const_3' layer dict: 
{'class': 'constant', 'value': 1} *** root/'Unflatten_sub_unnamed_const_3' ConstantLayer output: *** root/'Unflatten_sub_3' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Unflatten_sub_2', 'Unflatten_sub_unnamed_const_3']} *** root/'Unflatten_sub_3' CombineLayer output: *** root/'Unflatten_floordiv_unnamed_const_1' layer dict: {'class': 'constant', 'value': 3} *** root/'Unflatten_floordiv_unnamed_const_1' ConstantLayer output: *** root/'Unflatten_floordiv_1' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['Unflatten_sub_3', 'Unflatten_floordiv_unnamed_const_1']} *** root/'Unflatten_floordiv_1' CombineLayer output: *** root/'Unflatten_add_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'Unflatten_add_unnamed_const_1' ConstantLayer output: *** root/'Unflatten_add_1' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['Unflatten_floordiv_1', 'Unflatten_add_unnamed_const_1']} *** root/'Unflatten_add_1' CombineLayer output: *** root/'Unflatten_2' layer dict: {'class': 'split_dims', 'from': 'GatherTensor', 'axis': 'B', 'dims': [Dim{B}, Dim{'((time:data+-2)//3)+1'[?]}, Dim{'static-dim-2'(10)}]} *** root/'Unflatten_2' SplitDimsLayer output: *** root/'Transpose_2' layer dict: {'class': 'copy', 'from': 'Unflatten_2'} *** root/'Transpose_2' CopyLayer output: *** root/'Unflatten_Length_4' layer dict: {'class': 'length', 'axis': 'B', 'from': 'data'} *** root/'Unflatten_Length_4' LengthLayer output: *** root/'Unflatten_3' layer dict: {'class': 'split_dims', 'from': 'Transpose_1', 'axis': 'B', 'dims': [1, -1]} *** root/'Unflatten_3' SplitDimsLayer output: *** root/'Cat_ReturnnReinterpretSameSizeAs' layer dict: {'class': 'reinterpret_data', 'from': 'Transpose_2', 'size_base': 'Unflatten_3'} *** root/'Cat_ReturnnReinterpretSameSizeAs' ReinterpretDataLayer output: *** root/'Cat' layer dict: {'class': 'concat', 'from': [('Unflatten_3', 'stag:Unflatten_3_split_dims0'), ('Cat_ReturnnReinterpretSameSizeAs', 'stag:static-dim-2')]} *** root/'Cat' ConcatLayer output: *** root/'mul' layer dict: {'class': 'combine', 'kind': 'mul', 'out_shape': {Dim{'Unflatten_3_split_dims0+(static-dim-2)'(11)}, Dim{B}, Dim{'Conv1d:conv:s0'[B]}, Dim{F'Conv1d:channel'(7)}}, 'from': ['Transpose_1', 'Cat']} *** root/'mul' CombineLayer output: *** root/'Reduce' layer dict: {'class': 'reduce', 'mode': 'sum', 'axes': ['F'], 'from': 'mul'} *** root/'Reduce' ReduceLayer output: *** root/'Power' layer dict: {'class': 'eval', 'eval': 'tf.math.pow(source(0), 2)', 'from': 'Transpose_1'} *** root/'Power' EvalLayer output: *** root/'Reduce_1' layer dict: {'class': 'reduce', 'mode': 'sum', 'axes': ['F'], 'from': 'Power'} *** root/'Reduce_1' ReduceLayer output: *** root/'Power_1' layer dict: {'class': 'eval', 'eval': 'tf.math.pow(source(0), 2)', 'from': 'Cat'} *** root/'Power_1' EvalLayer output: *** root/'Reduce_2' layer dict: {'class': 'reduce', 'mode': 'sum', 'axes': ['F'], 'from': 'Power_1'} *** root/'Reduce_2' ReduceLayer output: *** root/'mul_1' layer dict: {'class': 'combine', 'kind': 'mul', 'out_shape': {Dim{'Unflatten_3_split_dims0+(static-dim-2)'(11)}, Dim{B}, Dim{'Conv1d:conv:s0'[B]}}, 'from': ['Reduce_1', 'Reduce_2']} *** root/'mul_1' CombineLayer output: *** root/'Rsqrt' layer dict: {'class': 'activation', 'activation': 'rsqrt', 'from': 'mul_1'} *** root/'Rsqrt' ActivationLayer output: *** root/'unnamed_const' layer dict: {'class': 'constant', 'value': 100000000.0} *** root/'unnamed_const' ConstantLayer output: *** 
root/'Minimum' layer dict: {'class': 'eval', 'eval': 'tf.minimum(source(0), source(1))', 'from': ['Rsqrt', 'unnamed_const']} *** root/'Minimum' EvalLayer output: *** root/'mul_2' layer dict: {'class': 'combine', 'kind': 'mul', 'out_shape': {Dim{'Unflatten_3_split_dims0+(static-dim-2)'(11)}, Dim{B}, Dim{'Conv1d:conv:s0'[B]}}, 'from': ['Reduce', 'Minimum']} *** root/'mul_2' CombineLayer output: *** root/'output' layer dict: {'class': 'copy', 'from': 'mul_2'} *** root/'output' CopyLayer output: RETURNN output: Data{'output_output', [F|'Unflatten_3_split_dims0+(static-dim-2)'(11),B,T|'Conv1d:conv:s0'[B]]} axis map RETURNN<-Torch {0: 0, 1: 1, 2: 2} >>>> Module naming hierarchy: .tmp_root: (hidden, empty) data: None -> None Transpose: > -> Conv1d: > -> Transpose_1: > -> Range_Length: > -> Range_Reduce: > -> Range_sub: > -> Range_sub_unnamed_const: > -> Range_sub_1: > -> Range_sub_unnamed_const_1: > -> Range_floordiv: > -> Range_floordiv_unnamed_const: > -> Range_add: > -> Range_add_unnamed_const: > -> Range: > -> Unflatten: > -> Unflatten_Length: > -> Unflatten_Reduce: > -> Unflatten_sub: > -> Unflatten_sub_unnamed_const: > -> Unflatten_sub_1: > -> Unflatten_sub_unnamed_const_1: > -> Unflatten_floordiv: > -> Unflatten_floordiv_unnamed_const: > -> Unflatten_add: > -> Unflatten_add_unnamed_const: > -> Tile: > -> Flatten: > -> randint_Length: > -> Length_randint_Reduce: > -> Reduce_randint_sub: > -> randint_sub_unnamed_const: > -> sub_randint_sub: > -> randint_sub_unnamed_const_1: > -> sub_randint_floordiv: > -> randint_floordiv_unnamed_const: > -> floordiv_randint_add: > -> randint_add_unnamed_const: > -> add_randint_sub: > -> randint_sub_unnamed_const_2: > -> sub_randint_Length: > -> Length_randint_Length: > -> Length_randint_Reduce_1: > -> Reduce_randint_sub_1: > -> randint_sub_unnamed_const_3: > -> sub_randint_sub_1: > -> randint_sub_unnamed_const_4: > -> sub_randint_floordiv_1: > -> randint_floordiv_unnamed_const_1: > -> floordiv_randint_add_1: > -> randint_add_unnamed_const_1: > -> add_randint_mul: > -> randint_mul_unnamed_const: > -> mul_randint: > -> mul_randint_Cast: > -> Cast: > -> greater_equal: > -> greater_equal_ReturnnReinterpretSameSizeAs: > -> Cast_1: > -> Cast_2: > -> add: > -> Range_Length_1: > -> Range_1: > -> Unflatten_1: > -> Unflatten_Length_1: > -> mul_Length: > -> Length_mul_Reduce: > -> Reduce_mul_sub: > -> mul_sub_unnamed_const: > -> sub_mul_sub: > -> mul_sub_unnamed_const_1: > -> sub_mul_floordiv: > -> mul_floordiv_unnamed_const: > -> floordiv_mul_add: > -> mul_add_unnamed_const: > -> add_mul: > -> Cast_3: > -> add_1: > -> add_Squeeze: > -> Flatten_1: > -> Flatten_2: > -> GatherTensor: > -> Unflatten_2: > -> Unflatten_Length_2: > -> Unflatten_Length_3: > -> Unflatten_Reduce_1: > -> Unflatten_sub_2: > -> Unflatten_sub_unnamed_const_2: > -> Unflatten_sub_3: > -> Unflatten_sub_unnamed_const_3: > -> Unflatten_floordiv_1: > -> Unflatten_floordiv_unnamed_const_1: > -> Unflatten_add_1: > -> Unflatten_add_unnamed_const_1: > -> Transpose_2: > -> Unflatten_3: > -> Unflatten_Length_4: > -> Cat: > -> Cat_ReturnnReinterpretSameSizeAs: > -> mul: > -> Reduce: > -> Power: > -> Reduce_1: > -> Power_1: > -> Reduce_2: > -> mul_1: > -> Rsqrt: > -> Minimum: > -> unnamed_const: > -> mul_2: > -> output: > -> >>>> RETURNN net dict: >>>> Root module calls: >>>> Modules with params: Output shape: (11, 3, 5) Output seq lens: {1: array([5, 5, 5], dtype=int32)} Output shape (converted to Torch): (11, 3, 5) >>>> Looks good! Saving TF checkpoint to '/tmp/tmp5iuzinyvtmp-returnn-tf-checkpoint/model'... 
>>> Constructing RETURNN model, load TF checkpoint, run... Output shape: (11, 3, 5) >>>> Looks good! >>> Constructing RETURNN model via Python code, load TF checkpoint, run... *** root/'Transpose' layer dict: {'class': 'copy', 'from': 'data'} *** root/'Transpose' CopyLayer output: *** root/'Conv1d' layer dict: {'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,)} *** root/'Conv1d' ConvLayer output: *** root/'Transpose_1' layer dict: {'class': 'copy', 'from': 'Conv1d'} *** root/'Transpose_1' CopyLayer output: *** root/'Range_Length' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** root/'Range_Length' LengthLayer output: *** root/'Range_Reduce' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Range_Length'} *** root/'Range_Reduce' ReduceLayer output: *** root/'Range_sub_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'Range_sub_unnamed_const' ConstantLayer output: *** root/'Range_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Range_Reduce', 'Range_sub_unnamed_const']} *** root/'Range_sub' CombineLayer output: *** root/'Range_sub_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'Range_sub_unnamed_const_1' ConstantLayer output: *** root/'Range_sub_1' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Range_sub', 'Range_sub_unnamed_const_1']} *** root/'Range_sub_1' CombineLayer output: *** root/'Range_floordiv_unnamed_const' layer dict: {'class': 'constant', 'value': 3} *** root/'Range_floordiv_unnamed_const' ConstantLayer output: *** root/'Range_floordiv' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['Range_sub_1', 'Range_floordiv_unnamed_const']} *** root/'Range_floordiv' CombineLayer output: *** root/'Range_add_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'Range_add_unnamed_const' ConstantLayer output: *** root/'Range_add' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['Range_floordiv', 'Range_add_unnamed_const']} *** root/'Range_add' CombineLayer output: *** root/'Range' layer dict: {'class': 'range_from_length', 'from': 'Range_add'} *** root/'Range' RangeFromLengthLayer output: *** root/'Unflatten_Length' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** root/'Unflatten_Length' LengthLayer output: *** root/'Unflatten_Reduce' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Unflatten_Length'} *** root/'Unflatten_Reduce' ReduceLayer output: *** root/'Unflatten_sub_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'Unflatten_sub_unnamed_const' ConstantLayer output: *** root/'Unflatten_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Unflatten_Reduce', 'Unflatten_sub_unnamed_const']} *** root/'Unflatten_sub' CombineLayer output: *** root/'Unflatten_sub_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'Unflatten_sub_unnamed_const_1' ConstantLayer output: *** root/'Unflatten_sub_1' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Unflatten_sub', 'Unflatten_sub_unnamed_const_1']} *** root/'Unflatten_sub_1' CombineLayer output: *** root/'Unflatten_floordiv_unnamed_const' layer dict: {'class': 'constant', 'value': 3} *** root/'Unflatten_floordiv_unnamed_const' ConstantLayer output: *** 
root/'Unflatten_floordiv' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['Unflatten_sub_1', 'Unflatten_floordiv_unnamed_const']} *** root/'Unflatten_floordiv' CombineLayer output: *** root/'Unflatten_add_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'Unflatten_add_unnamed_const' ConstantLayer output: *** root/'Unflatten_add' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['Unflatten_floordiv', 'Unflatten_add_unnamed_const']} *** root/'Unflatten_add' CombineLayer output: *** root/'Unflatten' layer dict: {'class': 'split_dims', 'from': 'Range', 'axis': 'T', 'dims': [-1, 1]} *** root/'Unflatten' SplitDimsLayer output: *** root/'Tile' layer dict: {'class': 'tile', 'multiples': {'T': 1, 'F': 10}, 'from': 'Unflatten'} *** root/'Tile' TileLayer output: *** root/'Flatten' layer dict: {'class': 'merge_dims', 'from': 'Tile', 'axes': ['T', 'F'], 'keep_order': True} *** root/'Flatten' MergeDimsLayer output: *** root/'randint_Length' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** root/'randint_Length' LengthLayer output: *** root/'Length_randint_Reduce' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'randint_Length'} *** root/'Length_randint_Reduce' ReduceLayer output: *** root/'randint_sub_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_sub_unnamed_const' ConstantLayer output: *** root/'Reduce_randint_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Length_randint_Reduce', 'randint_sub_unnamed_const']} *** root/'Reduce_randint_sub' CombineLayer output: *** root/'randint_sub_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_sub_unnamed_const_1' ConstantLayer output: *** root/'sub_randint_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Reduce_randint_sub', 'randint_sub_unnamed_const_1']} *** root/'sub_randint_sub' CombineLayer output: *** root/'randint_floordiv_unnamed_const' layer dict: {'class': 'constant', 'value': 3} *** root/'randint_floordiv_unnamed_const' ConstantLayer output: *** root/'sub_randint_floordiv' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['sub_randint_sub', 'randint_floordiv_unnamed_const']} *** root/'sub_randint_floordiv' CombineLayer output: *** root/'randint_add_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_add_unnamed_const' ConstantLayer output: *** root/'floordiv_randint_add' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['sub_randint_floordiv', 'randint_add_unnamed_const']} *** root/'floordiv_randint_add' CombineLayer output: *** root/'randint_sub_unnamed_const_2' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_sub_unnamed_const_2' ConstantLayer output: *** root/'add_randint_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['floordiv_randint_add', 'randint_sub_unnamed_const_2']} *** root/'add_randint_sub' CombineLayer output: *** root/'sub_randint_Length' layer dict: {'class': 'length', 'axis': 'B', 'from': 'data'} *** root/'sub_randint_Length' LengthLayer output: *** root/'Length_randint_Length' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** root/'Length_randint_Length' LengthLayer output: *** root/'Length_randint_Reduce_1' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Length_randint_Length'} *** root/'Length_randint_Reduce_1' 
ReduceLayer output: *** root/'randint_sub_unnamed_const_3' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_sub_unnamed_const_3' ConstantLayer output: *** root/'Reduce_randint_sub_1' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Length_randint_Reduce_1', 'randint_sub_unnamed_const_3']} *** root/'Reduce_randint_sub_1' CombineLayer output: *** root/'randint_sub_unnamed_const_4' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_sub_unnamed_const_4' ConstantLayer output: *** root/'sub_randint_sub_1' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Reduce_randint_sub_1', 'randint_sub_unnamed_const_4']} *** root/'sub_randint_sub_1' CombineLayer output: *** root/'randint_floordiv_unnamed_const_1' layer dict: {'class': 'constant', 'value': 3} *** root/'randint_floordiv_unnamed_const_1' ConstantLayer output: *** root/'sub_randint_floordiv_1' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['sub_randint_sub_1', 'randint_floordiv_unnamed_const_1']} *** root/'sub_randint_floordiv_1' CombineLayer output: *** root/'randint_add_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'randint_add_unnamed_const_1' ConstantLayer output: *** root/'floordiv_randint_add_1' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['sub_randint_floordiv_1', 'randint_add_unnamed_const_1']} *** root/'floordiv_randint_add_1' CombineLayer output: *** root/'randint_mul_unnamed_const' layer dict: {'class': 'constant', 'value': 10} *** root/'randint_mul_unnamed_const' ConstantLayer output: *** root/'add_randint_mul' layer dict: {'class': 'combine', 'kind': 'mul', 'out_shape': set(), 'from': ['randint_mul_unnamed_const', 'floordiv_randint_add_1']} *** root/'add_randint_mul' CombineLayer output: *** root/'mul_randint_Cast' layer dict: {'class': 'cast', 'from': 'add_randint_sub', 'dtype': 'int64'} *** root/'mul_randint_Cast' CastLayer output: *** root/'mul_randint' layer dict: {'class': 'rand_int', 'shape': (Dim{B}, Dim{'(10*((time:data+-2)//3))+10'[?]}), 'maxval': 'mul_randint_Cast', 'minval': 0, 'dtype': 'int64', 'from': ['data']} *** root/'mul_randint' RandIntLayer output: *** root/'Cast' layer dict: {'class': 'cast', 'from': 'Flatten', 'dtype': 'int64'} *** root/'Cast' CastLayer output: *** root/'greater_equal_ReturnnReinterpretSameSizeAs' layer dict: {'class': 'reinterpret_data', 'from': 'Cast', 'size_base': 'mul_randint'} *** root/'greater_equal_ReturnnReinterpretSameSizeAs' ReinterpretDataLayer output: *** root/'greater_equal' layer dict: {'class': 'compare', 'kind': 'greater_equal', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[B]}, Dim{B}}, 'from': ['mul_randint', 'greater_equal_ReturnnReinterpretSameSizeAs']} *** root/'greater_equal' CompareLayer output: *** root/'Cast_1' layer dict: {'class': 'cast', 'from': 'greater_equal', 'dtype': 'int32'} *** root/'Cast_1' CastLayer output: *** root/'Cast_2' layer dict: {'class': 'cast', 'from': 'Cast_1', 'dtype': 'int64'} *** root/'Cast_2' CastLayer output: *** root/'add' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[B]}, Dim{B}}, 'from': ['mul_randint', 'Cast_2']} *** root/'add' CombineLayer output: *** root/'Range_Length_1' layer dict: {'class': 'length', 'axis': 'B', 'from': 'data'} *** root/'Range_Length_1' LengthLayer output: *** root/'Range_1' layer dict: {'class': 'range_from_length', 'from': 'Range_Length_1'} *** root/'Range_1' RangeFromLengthLayer output: 
*** root/'Unflatten_Length_1' layer dict: {'class': 'length', 'axis': 'B', 'from': 'data'} *** root/'Unflatten_Length_1' LengthLayer output: *** root/'Unflatten_1' layer dict: {'class': 'split_dims', 'from': 'Range_1', 'axis': 'B', 'dims': [-1, 1]} *** root/'Unflatten_1' SplitDimsLayer output: *** root/'mul_Length' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** root/'mul_Length' LengthLayer output: *** root/'Length_mul_Reduce' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'mul_Length'} *** root/'Length_mul_Reduce' ReduceLayer output: *** root/'mul_sub_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'mul_sub_unnamed_const' ConstantLayer output: *** root/'Reduce_mul_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Length_mul_Reduce', 'mul_sub_unnamed_const']} *** root/'Reduce_mul_sub' CombineLayer output: *** root/'mul_sub_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'mul_sub_unnamed_const_1' ConstantLayer output: *** root/'sub_mul_sub' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Reduce_mul_sub', 'mul_sub_unnamed_const_1']} *** root/'sub_mul_sub' CombineLayer output: *** root/'mul_floordiv_unnamed_const' layer dict: {'class': 'constant', 'value': 3} *** root/'mul_floordiv_unnamed_const' ConstantLayer output: *** root/'sub_mul_floordiv' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['sub_mul_sub', 'mul_floordiv_unnamed_const']} *** root/'sub_mul_floordiv' CombineLayer output: *** root/'mul_add_unnamed_const' layer dict: {'class': 'constant', 'value': 1} *** root/'mul_add_unnamed_const' ConstantLayer output: *** root/'floordiv_mul_add' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['sub_mul_floordiv', 'mul_add_unnamed_const']} *** root/'floordiv_mul_add' CombineLayer output: *** root/'add_mul' layer dict: {'class': 'combine', 'kind': 'mul', 'out_shape': {Dim{'Unflatten_1_split_dims1'(1)}, Dim{B}}, 'from': ['Unflatten_1', 'floordiv_mul_add']} *** root/'add_mul' CombineLayer output: *** root/'Cast_3' layer dict: {'class': 'cast', 'from': 'add_mul', 'dtype': 'int64'} *** root/'Cast_3' CastLayer output: *** root/'add_Squeeze' layer dict: {'class': 'squeeze', 'from': 'Cast_3', 'axis': ['F']} *** root/'add_Squeeze' SqueezeLayer output: *** root/'add_1' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[B]}, Dim{B}}, 'from': ['add', 'add_Squeeze']} *** root/'add_1' CombineLayer output: *** root/'Flatten_1' layer dict: {'class': 'merge_dims', 'from': 'Transpose_1', 'axes': ['B', 'T'], 'keep_order': True} *** root/'Flatten_1' MergeDimsLayer output: *** root/'Flatten_2' layer dict: {'class': 'merge_dims', 'from': 'add_1', 'axes': ['B', 'T'], 'keep_order': True} *** root/'Flatten_2' MergeDimsLayer output: *** root/'GatherTensor' layer dict: {'class': 'gather', 'from': 'Flatten_1', 'axis': 'B', 'position': 'Flatten_2'} *** root/'GatherTensor' GatherLayer output: *** root/'Unflatten_Length_2' layer dict: {'class': 'length', 'axis': 'B', 'from': 'data'} *** root/'Unflatten_Length_2' LengthLayer output: *** root/'Unflatten_Length_3' layer dict: {'class': 'length', 'axis': 'T', 'from': 'data'} *** root/'Unflatten_Length_3' LengthLayer output: *** root/'Unflatten_Reduce_1' layer dict: {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Unflatten_Length_3'} *** root/'Unflatten_Reduce_1' ReduceLayer output: *** 
root/'Unflatten_sub_unnamed_const_2' layer dict: {'class': 'constant', 'value': 1} *** root/'Unflatten_sub_unnamed_const_2' ConstantLayer output: *** root/'Unflatten_sub_2' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Unflatten_Reduce_1', 'Unflatten_sub_unnamed_const_2']} *** root/'Unflatten_sub_2' CombineLayer output: *** root/'Unflatten_sub_unnamed_const_3' layer dict: {'class': 'constant', 'value': 1} *** root/'Unflatten_sub_unnamed_const_3' ConstantLayer output: *** root/'Unflatten_sub_3' layer dict: {'class': 'combine', 'kind': 'sub', 'out_shape': set(), 'from': ['Unflatten_sub_2', 'Unflatten_sub_unnamed_const_3']} *** root/'Unflatten_sub_3' CombineLayer output: *** root/'Unflatten_floordiv_unnamed_const_1' layer dict: {'class': 'constant', 'value': 3} *** root/'Unflatten_floordiv_unnamed_const_1' ConstantLayer output: *** root/'Unflatten_floordiv_1' layer dict: {'class': 'combine', 'kind': 'floordiv', 'out_shape': set(), 'from': ['Unflatten_sub_3', 'Unflatten_floordiv_unnamed_const_1']} *** root/'Unflatten_floordiv_1' CombineLayer output: *** root/'Unflatten_add_unnamed_const_1' layer dict: {'class': 'constant', 'value': 1} *** root/'Unflatten_add_unnamed_const_1' ConstantLayer output: *** root/'Unflatten_add_1' layer dict: {'class': 'combine', 'kind': 'add', 'out_shape': set(), 'from': ['Unflatten_floordiv_1', 'Unflatten_add_unnamed_const_1']} *** root/'Unflatten_add_1' CombineLayer output: *** root/'Unflatten_2' layer dict: {'class': 'split_dims', 'from': 'GatherTensor', 'axis': 'B', 'dims': [Dim{B}, Dim{'((time:data+-2)//3)+1'[?]}, Dim{'static-dim-2'(10)}]} *** root/'Unflatten_2' SplitDimsLayer output: *** root/'Transpose_2' layer dict: {'class': 'copy', 'from': 'Unflatten_2'} *** root/'Transpose_2' CopyLayer output: *** root/'Unflatten_Length_4' layer dict: {'class': 'length', 'axis': 'B', 'from': 'data'} *** root/'Unflatten_Length_4' LengthLayer output: *** root/'Unflatten_3' layer dict: {'class': 'split_dims', 'from': 'Transpose_1', 'axis': 'B', 'dims': [1, -1]} *** root/'Unflatten_3' SplitDimsLayer output: *** root/'Cat_ReturnnReinterpretSameSizeAs' layer dict: {'class': 'reinterpret_data', 'from': 'Transpose_2', 'size_base': 'Unflatten_3'} *** root/'Cat_ReturnnReinterpretSameSizeAs' ReinterpretDataLayer output: *** root/'Cat' layer dict: {'class': 'concat', 'from': [('Unflatten_3', 'stag:Unflatten_3_split_dims0'), ('Cat_ReturnnReinterpretSameSizeAs', 'stag:static-dim-2')]} *** root/'Cat' ConcatLayer output: *** root/'mul' layer dict: {'class': 'combine', 'kind': 'mul', 'out_shape': {Dim{'Unflatten_3_split_dims0+(static-dim-2)'(11)}, Dim{B}, Dim{'Conv1d:conv:s0'[B]}, Dim{F'Conv1d:channel'(7)}}, 'from': ['Transpose_1', 'Cat']} *** root/'mul' CombineLayer output: *** root/'Reduce' layer dict: {'class': 'reduce', 'mode': 'sum', 'axes': ['F'], 'from': 'mul'} *** root/'Reduce' ReduceLayer output: *** root/'Power' layer dict: {'class': 'eval', 'eval': 'tf.math.pow(source(0), 2)', 'from': 'Transpose_1'} *** root/'Power' EvalLayer output: *** root/'Reduce_1' layer dict: {'class': 'reduce', 'mode': 'sum', 'axes': ['F'], 'from': 'Power'} *** root/'Reduce_1' ReduceLayer output: *** root/'Power_1' layer dict: {'class': 'eval', 'eval': 'tf.math.pow(source(0), 2)', 'from': 'Cat'} *** root/'Power_1' EvalLayer output: *** root/'Reduce_2' layer dict: {'class': 'reduce', 'mode': 'sum', 'axes': ['F'], 'from': 'Power_1'} *** root/'Reduce_2' ReduceLayer output: *** root/'mul_1' layer dict: {'class': 'combine', 'kind': 'mul', 'out_shape': 
{Dim{'Unflatten_3_split_dims0+(static-dim-2)'(11)}, Dim{B}, Dim{'Conv1d:conv:s0'[B]}}, 'from': ['Reduce_1', 'Reduce_2']} *** root/'mul_1' CombineLayer output: *** root/'Rsqrt' layer dict: {'class': 'activation', 'activation': 'rsqrt', 'from': 'mul_1'} *** root/'Rsqrt' ActivationLayer output: *** root/'_unnamed_const' layer dict: {'class': 'constant', 'value': 100000000.0} *** root/'_unnamed_const' ConstantLayer output: *** root/'Minimum' layer dict: {'class': 'eval', 'eval': 'tf.minimum(source(0), source(1))', 'from': ['Rsqrt', '_unnamed_const']} *** root/'Minimum' EvalLayer output: *** root/'mul_2' layer dict: {'class': 'combine', 'kind': 'mul', 'out_shape': {Dim{'Unflatten_3_split_dims0+(static-dim-2)'(11)}, Dim{B}, Dim{'Conv1d:conv:s0'[B]}}, 'from': ['Reduce', 'Minimum']} *** root/'mul_2' CombineLayer output: *** root/'output' layer dict: {'class': 'copy', 'from': 'mul_2'} *** root/'output' CopyLayer output: Output shape: (11, 3, 5) >>>> Looks good! import numpy from returnn.tf.util.data import Dim, batch_dim, single_step_dim, SpatialDim, FeatureDim use_tensorflow = True behavior_version = 12 time_data_dim = SpatialDim('time:data') feature_data_dim = FeatureDim('feature:data', 7) _10___time_data__2___3___10_dim = SpatialDim('(10*((time:data+-2)//3))+10') Unflatten_1_split_dims1_dim = SpatialDim('Unflatten_1_split_dims1', 1) static_dim_2_dim = SpatialDim('static-dim-2', 10) Unflatten_3_split_dims0_dim = SpatialDim('Unflatten_3_split_dims0', 1) Conv1d_conv_s0_dim = SpatialDim('Conv1d:conv:s0') Conv1d_channel_dim = FeatureDim('Conv1d:channel', 7) extern_data = { 'data': { 'dim_tags': [ batch_dim, time_data_dim, feature_data_dim ], 'dtype': 'float32', 'time_dim_axis': 1, 'feature_dim_axis': 2 } } network = { 'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': { 'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,) }, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Length': {'class': 'length', 'axis': 'T', 'from': 'data'}, 'Range_Reduce': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Range_Length'}, 'Range_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Range_Reduce', 'Range_sub_unnamed_const'] }, 'Range_sub_unnamed_const': {'class': 'constant', 'value': 1}, 'Range_sub_1': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Range_sub', 'Range_sub_unnamed_const_1'] }, 'Range_sub_unnamed_const_1': {'class': 'constant', 'value': 1}, 'Range_floordiv': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['Range_sub_1', 'Range_floordiv_unnamed_const'] }, 'Range_floordiv_unnamed_const': {'class': 'constant', 'value': 3}, 'Range_add': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['Range_floordiv', 'Range_add_unnamed_const'] }, 'Range_add_unnamed_const': {'class': 'constant', 'value': 1}, 'Range': {'class': 'range_from_length', 'from': 'Range_add'}, 'Unflatten': {'class': 'split_dims', 'from': 'Range', 'axis': 'T', 'dims': [-1, 1]}, 'Unflatten_Length': {'class': 'length', 'axis': 'T', 'from': 'data'}, 'Unflatten_Reduce': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Unflatten_Length'}, 'Unflatten_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Unflatten_Reduce', 'Unflatten_sub_unnamed_const'] }, 'Unflatten_sub_unnamed_const': {'class': 'constant', 'value': 1}, 'Unflatten_sub_1': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': 
['Unflatten_sub', 'Unflatten_sub_unnamed_const_1'] }, 'Unflatten_sub_unnamed_const_1': {'class': 'constant', 'value': 1}, 'Unflatten_floordiv': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['Unflatten_sub_1', 'Unflatten_floordiv_unnamed_const'] }, 'Unflatten_floordiv_unnamed_const': {'class': 'constant', 'value': 3}, 'Unflatten_add': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['Unflatten_floordiv', 'Unflatten_add_unnamed_const'] }, 'Unflatten_add_unnamed_const': {'class': 'constant', 'value': 1}, 'Tile': {'class': 'tile', 'multiples': {'T': 1, 'F': 10}, 'from': 'Unflatten'}, 'Flatten': {'class': 'merge_dims', 'from': 'Tile', 'axes': ['T', 'F'], 'keep_order': True}, 'randint_Length': {'class': 'length', 'axis': 'T', 'from': 'data'}, 'Length_randint_Reduce': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'randint_Length'}, 'Reduce_randint_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Length_randint_Reduce', 'randint_sub_unnamed_const'] }, 'randint_sub_unnamed_const': {'class': 'constant', 'value': 1}, 'sub_randint_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Reduce_randint_sub', 'randint_sub_unnamed_const_1'] }, 'randint_sub_unnamed_const_1': {'class': 'constant', 'value': 1}, 'sub_randint_floordiv': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['sub_randint_sub', 'randint_floordiv_unnamed_const'] }, 'randint_floordiv_unnamed_const': {'class': 'constant', 'value': 3}, 'floordiv_randint_add': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['sub_randint_floordiv', 'randint_add_unnamed_const'] }, 'randint_add_unnamed_const': {'class': 'constant', 'value': 1}, 'add_randint_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['floordiv_randint_add', 'randint_sub_unnamed_const_2'] }, 'randint_sub_unnamed_const_2': {'class': 'constant', 'value': 1}, 'sub_randint_Length': {'class': 'length', 'axis': 'B', 'from': 'data'}, 'Length_randint_Length': {'class': 'length', 'axis': 'T', 'from': 'data'}, 'Length_randint_Reduce_1': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Length_randint_Length'}, 'Reduce_randint_sub_1': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Length_randint_Reduce_1', 'randint_sub_unnamed_const_3'] }, 'randint_sub_unnamed_const_3': {'class': 'constant', 'value': 1}, 'sub_randint_sub_1': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Reduce_randint_sub_1', 'randint_sub_unnamed_const_4'] }, 'randint_sub_unnamed_const_4': {'class': 'constant', 'value': 1}, 'sub_randint_floordiv_1': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['sub_randint_sub_1', 'randint_floordiv_unnamed_const_1'] }, 'randint_floordiv_unnamed_const_1': {'class': 'constant', 'value': 3}, 'floordiv_randint_add_1': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['sub_randint_floordiv_1', 'randint_add_unnamed_const_1'] }, 'randint_add_unnamed_const_1': {'class': 'constant', 'value': 1}, 'add_randint_mul': { 'class': 'combine', 'kind': 'mul', 'out_shape': {}, 'from': ['randint_mul_unnamed_const', 'floordiv_randint_add_1'] }, 'randint_mul_unnamed_const': {'class': 'constant', 'value': 10}, 'mul_randint': { 'class': 'rand_int', 'shape': ( batch_dim, 10 * time_data_dim + -2 // 3 + 10 ), 'maxval': 'mul_randint_Cast', 'minval': 0, 'dtype': 'int64', 'from': ['data'] }, 'mul_randint_Cast': {'class': 'cast', 'from': 'add_randint_sub', 'dtype': 'int64'}, 'Cast': {'class': 'cast', 'from': 
'Flatten', 'dtype': 'int64'}, 'greater_equal': { 'class': 'compare', 'kind': 'greater_equal', 'out_shape': {batch_dim, _10___time_data__2___3___10_dim}, 'from': ['mul_randint', 'greater_equal_ReturnnReinterpretSameSizeAs'] }, 'greater_equal_ReturnnReinterpretSameSizeAs': {'class': 'reinterpret_data', 'from': 'Cast', 'size_base': 'mul_randint'}, 'Cast_1': {'class': 'cast', 'from': 'greater_equal', 'dtype': 'int32'}, 'Cast_2': {'class': 'cast', 'from': 'Cast_1', 'dtype': 'int64'}, 'add': { 'class': 'combine', 'kind': 'add', 'out_shape': {batch_dim, _10___time_data__2___3___10_dim}, 'from': ['mul_randint', 'Cast_2'] }, 'Range_Length_1': {'class': 'length', 'axis': 'B', 'from': 'data'}, 'Range_1': {'class': 'range_from_length', 'from': 'Range_Length_1'}, 'Unflatten_1': {'class': 'split_dims', 'from': 'Range_1', 'axis': 'B', 'dims': [-1, 1]}, 'Unflatten_Length_1': {'class': 'length', 'axis': 'B', 'from': 'data'}, 'mul_Length': {'class': 'length', 'axis': 'T', 'from': 'data'}, 'Length_mul_Reduce': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'mul_Length'}, 'Reduce_mul_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Length_mul_Reduce', 'mul_sub_unnamed_const'] }, 'mul_sub_unnamed_const': {'class': 'constant', 'value': 1}, 'sub_mul_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Reduce_mul_sub', 'mul_sub_unnamed_const_1'] }, 'mul_sub_unnamed_const_1': {'class': 'constant', 'value': 1}, 'sub_mul_floordiv': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['sub_mul_sub', 'mul_floordiv_unnamed_const'] }, 'mul_floordiv_unnamed_const': {'class': 'constant', 'value': 3}, 'floordiv_mul_add': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['sub_mul_floordiv', 'mul_add_unnamed_const'] }, 'mul_add_unnamed_const': {'class': 'constant', 'value': 1}, 'add_mul': { 'class': 'combine', 'kind': 'mul', 'out_shape': {batch_dim, Unflatten_1_split_dims1_dim}, 'from': ['Unflatten_1', 'floordiv_mul_add'] }, 'Cast_3': {'class': 'cast', 'from': 'add_mul', 'dtype': 'int64'}, 'add_1': { 'class': 'combine', 'kind': 'add', 'out_shape': {batch_dim, _10___time_data__2___3___10_dim}, 'from': ['add', 'add_Squeeze'] }, 'add_Squeeze': {'class': 'squeeze', 'from': 'Cast_3', 'axis': ['F']}, 'Flatten_1': {'class': 'merge_dims', 'from': 'Transpose_1', 'axes': ['B', 'T'], 'keep_order': True}, 'Flatten_2': {'class': 'merge_dims', 'from': 'add_1', 'axes': ['B', 'T'], 'keep_order': True}, 'GatherTensor': {'class': 'gather', 'from': 'Flatten_1', 'axis': 'B', 'position': 'Flatten_2'}, 'Unflatten_2': { 'class': 'split_dims', 'from': 'GatherTensor', 'axis': 'B', 'dims': [ batch_dim, time_data_dim + -2 // 3 + 1, static_dim_2_dim ] }, 'Unflatten_Length_2': {'class': 'length', 'axis': 'B', 'from': 'data'}, 'Unflatten_Length_3': {'class': 'length', 'axis': 'T', 'from': 'data'}, 'Unflatten_Reduce_1': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Unflatten_Length_3'}, 'Unflatten_sub_2': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Unflatten_Reduce_1', 'Unflatten_sub_unnamed_const_2'] }, 'Unflatten_sub_unnamed_const_2': {'class': 'constant', 'value': 1}, 'Unflatten_sub_3': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Unflatten_sub_2', 'Unflatten_sub_unnamed_const_3'] }, 'Unflatten_sub_unnamed_const_3': {'class': 'constant', 'value': 1}, 'Unflatten_floordiv_1': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['Unflatten_sub_3', 'Unflatten_floordiv_unnamed_const_1'] }, 
'Unflatten_floordiv_unnamed_const_1': {'class': 'constant', 'value': 3}, 'Unflatten_add_1': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['Unflatten_floordiv_1', 'Unflatten_add_unnamed_const_1'] }, 'Unflatten_add_unnamed_const_1': {'class': 'constant', 'value': 1}, 'Transpose_2': {'class': 'copy', 'from': 'Unflatten_2'}, 'Unflatten_3': {'class': 'split_dims', 'from': 'Transpose_1', 'axis': 'B', 'dims': [1, -1]}, 'Unflatten_Length_4': {'class': 'length', 'axis': 'B', 'from': 'data'}, 'Cat': { 'class': 'concat', 'from': [('Unflatten_3', 'stag:Unflatten_3_split_dims0'), ('Cat_ReturnnReinterpretSameSizeAs', 'stag:static-dim-2')] }, 'Cat_ReturnnReinterpretSameSizeAs': {'class': 'reinterpret_data', 'from': 'Transpose_2', 'size_base': 'Unflatten_3'}, 'mul': { 'class': 'combine', 'kind': 'mul', 'out_shape': {batch_dim, Unflatten_3_split_dims0_dim + static_dim_2_dim, Conv1d_conv_s0_dim, Conv1d_channel_dim}, 'from': ['Transpose_1', 'Cat'] }, 'Reduce': {'class': 'reduce', 'mode': 'sum', 'axes': ['F'], 'from': 'mul'}, 'Power': {'class': 'eval', 'eval': 'tf.math.pow(source(0), 2)', 'from': 'Transpose_1'}, 'Reduce_1': {'class': 'reduce', 'mode': 'sum', 'axes': ['F'], 'from': 'Power'}, 'Power_1': {'class': 'eval', 'eval': 'tf.math.pow(source(0), 2)', 'from': 'Cat'}, 'Reduce_2': {'class': 'reduce', 'mode': 'sum', 'axes': ['F'], 'from': 'Power_1'}, 'mul_1': { 'class': 'combine', 'kind': 'mul', 'out_shape': {batch_dim, Conv1d_conv_s0_dim, Unflatten_3_split_dims0_dim + static_dim_2_dim}, 'from': ['Reduce_1', 'Reduce_2'] }, 'Rsqrt': {'class': 'activation', 'activation': 'rsqrt', 'from': 'mul_1'}, 'Minimum': {'class': 'eval', 'eval': 'tf.minimum(source(0), source(1))', 'from': ['Rsqrt', 'unnamed_const']}, 'unnamed_const': {'class': 'constant', 'value': 100000000.0}, 'mul_2': { 'class': 'combine', 'kind': 'mul', 'out_shape': {batch_dim, Conv1d_conv_s0_dim, Unflatten_3_split_dims0_dim + static_dim_2_dim}, 'from': ['Reduce', 'Minimum'] }, 'output': {'class': 'copy', 'from': 'mul_2'} } Exception creating layer /'greater_equal' of class CompareLayer with opts: {'_name': 'greater_equal', '_network': >, 'kind': 'greater_equal', 'name': 'greater_equal', 'network': >, 'out_shape': {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, 'output': Data{'greater_equal_output', [B,T|'10*time:data+9'[B]], dtype='bool'}, 'sources': [, ]} ```

The serialized config looks like this:

``` import numpy from returnn.tf.util.data import Dim, batch_dim, single_step_dim, SpatialDim, FeatureDim use_tensorflow = True behavior_version = 12 time_data_dim = SpatialDim('time:data') feature_data_dim = FeatureDim('feature:data', 7) _10___time_data__2___3___10_dim = SpatialDim('(10*((time:data+-2)//3))+10') Unflatten_1_split_dims1_dim = SpatialDim('Unflatten_1_split_dims1', 1) static_dim_2_dim = SpatialDim('static-dim-2', 10) Unflatten_3_split_dims0_dim = SpatialDim('Unflatten_3_split_dims0', 1) Conv1d_conv_s0_dim = SpatialDim('Conv1d:conv:s0') Conv1d_channel_dim = FeatureDim('Conv1d:channel', 7) extern_data = { 'data': { 'dim_tags': [ batch_dim, time_data_dim, feature_data_dim ], 'dtype': 'float32', 'time_dim_axis': 1, 'feature_dim_axis': 2 } } network = { 'Transpose': {'class': 'copy', 'from': 'data'}, 'Conv1d': { 'class': 'conv', 'from': 'Transpose', 'activation': None, 'with_bias': True, 'n_out': 7, 'filter_size': (2,), 'padding': 'valid', 'in_spatial_dims': ['T'], 'strides': (3,) }, 'Transpose_1': {'class': 'copy', 'from': 'Conv1d'}, 'Range_Length': {'class': 'length', 'axis': 'T', 'from': 'data'}, 'Range_Reduce': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Range_Length'}, 'Range_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Range_Reduce', 'Range_sub_unnamed_const'] }, 'Range_sub_unnamed_const': {'class': 'constant', 'value': 1}, 'Range_sub_1': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Range_sub', 'Range_sub_unnamed_const_1'] }, 'Range_sub_unnamed_const_1': {'class': 'constant', 'value': 1}, 'Range_floordiv': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['Range_sub_1', 'Range_floordiv_unnamed_const'] }, 'Range_floordiv_unnamed_const': {'class': 'constant', 'value': 3}, 'Range_add': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['Range_floordiv', 'Range_add_unnamed_const'] }, 'Range_add_unnamed_const': {'class': 'constant', 'value': 1}, 'Range': {'class': 'range_from_length', 'from': 'Range_add'}, 'Unflatten': {'class': 'split_dims', 'from': 'Range', 'axis': 'T', 'dims': [-1, 1]}, 'Unflatten_Length': {'class': 'length', 'axis': 'T', 'from': 'data'}, 'Unflatten_Reduce': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Unflatten_Length'}, 'Unflatten_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Unflatten_Reduce', 'Unflatten_sub_unnamed_const'] }, 'Unflatten_sub_unnamed_const': {'class': 'constant', 'value': 1}, 'Unflatten_sub_1': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Unflatten_sub', 'Unflatten_sub_unnamed_const_1'] }, 'Unflatten_sub_unnamed_const_1': {'class': 'constant', 'value': 1}, 'Unflatten_floordiv': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['Unflatten_sub_1', 'Unflatten_floordiv_unnamed_const'] }, 'Unflatten_floordiv_unnamed_const': {'class': 'constant', 'value': 3}, 'Unflatten_add': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['Unflatten_floordiv', 'Unflatten_add_unnamed_const'] }, 'Unflatten_add_unnamed_const': {'class': 'constant', 'value': 1}, 'Tile': {'class': 'tile', 'multiples': {'T': 1, 'F': 10}, 'from': 'Unflatten'}, 'Flatten': {'class': 'merge_dims', 'from': 'Tile', 'axes': ['T', 'F'], 'keep_order': True}, 'randint_Length': {'class': 'length', 'axis': 'T', 'from': 'data'}, 'Length_randint_Reduce': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'randint_Length'}, 'Reduce_randint_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': 
['Length_randint_Reduce', 'randint_sub_unnamed_const'] }, 'randint_sub_unnamed_const': {'class': 'constant', 'value': 1}, 'sub_randint_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Reduce_randint_sub', 'randint_sub_unnamed_const_1'] }, 'randint_sub_unnamed_const_1': {'class': 'constant', 'value': 1}, 'sub_randint_floordiv': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['sub_randint_sub', 'randint_floordiv_unnamed_const'] }, 'randint_floordiv_unnamed_const': {'class': 'constant', 'value': 3}, 'floordiv_randint_add': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['sub_randint_floordiv', 'randint_add_unnamed_const'] }, 'randint_add_unnamed_const': {'class': 'constant', 'value': 1}, 'add_randint_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['floordiv_randint_add', 'randint_sub_unnamed_const_2'] }, 'randint_sub_unnamed_const_2': {'class': 'constant', 'value': 1}, 'sub_randint_Length': {'class': 'length', 'axis': 'B', 'from': 'data'}, 'Length_randint_Length': {'class': 'length', 'axis': 'T', 'from': 'data'}, 'Length_randint_Reduce_1': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Length_randint_Length'}, 'Reduce_randint_sub_1': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Length_randint_Reduce_1', 'randint_sub_unnamed_const_3'] }, 'randint_sub_unnamed_const_3': {'class': 'constant', 'value': 1}, 'sub_randint_sub_1': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Reduce_randint_sub_1', 'randint_sub_unnamed_const_4'] }, 'randint_sub_unnamed_const_4': {'class': 'constant', 'value': 1}, 'sub_randint_floordiv_1': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['sub_randint_sub_1', 'randint_floordiv_unnamed_const_1'] }, 'randint_floordiv_unnamed_const_1': {'class': 'constant', 'value': 3}, 'floordiv_randint_add_1': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['sub_randint_floordiv_1', 'randint_add_unnamed_const_1'] }, 'randint_add_unnamed_const_1': {'class': 'constant', 'value': 1}, 'add_randint_mul': { 'class': 'combine', 'kind': 'mul', 'out_shape': {}, 'from': ['randint_mul_unnamed_const', 'floordiv_randint_add_1'] }, 'randint_mul_unnamed_const': {'class': 'constant', 'value': 10}, 'mul_randint': { 'class': 'rand_int', 'shape': ( batch_dim, 10 * time_data_dim + -2 // 3 + 10 ), 'maxval': 'mul_randint_Cast', 'minval': 0, 'dtype': 'int64', 'from': ['data'] }, 'mul_randint_Cast': {'class': 'cast', 'from': 'add_randint_sub', 'dtype': 'int64'}, 'Cast': {'class': 'cast', 'from': 'Flatten', 'dtype': 'int64'}, 'greater_equal': { 'class': 'compare', 'kind': 'greater_equal', 'out_shape': {batch_dim, _10___time_data__2___3___10_dim}, 'from': ['mul_randint', 'greater_equal_ReturnnReinterpretSameSizeAs'] }, 'greater_equal_ReturnnReinterpretSameSizeAs': {'class': 'reinterpret_data', 'from': 'Cast', 'size_base': 'mul_randint'}, 'Cast_1': {'class': 'cast', 'from': 'greater_equal', 'dtype': 'int32'}, 'Cast_2': {'class': 'cast', 'from': 'Cast_1', 'dtype': 'int64'}, 'add': { 'class': 'combine', 'kind': 'add', 'out_shape': {batch_dim, _10___time_data__2___3___10_dim}, 'from': ['mul_randint', 'Cast_2'] }, 'Range_Length_1': {'class': 'length', 'axis': 'B', 'from': 'data'}, 'Range_1': {'class': 'range_from_length', 'from': 'Range_Length_1'}, 'Unflatten_1': {'class': 'split_dims', 'from': 'Range_1', 'axis': 'B', 'dims': [-1, 1]}, 'Unflatten_Length_1': {'class': 'length', 'axis': 'B', 'from': 'data'}, 'mul_Length': {'class': 'length', 'axis': 'T', 'from': 
'data'}, 'Length_mul_Reduce': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'mul_Length'}, 'Reduce_mul_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Length_mul_Reduce', 'mul_sub_unnamed_const'] }, 'mul_sub_unnamed_const': {'class': 'constant', 'value': 1}, 'sub_mul_sub': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Reduce_mul_sub', 'mul_sub_unnamed_const_1'] }, 'mul_sub_unnamed_const_1': {'class': 'constant', 'value': 1}, 'sub_mul_floordiv': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['sub_mul_sub', 'mul_floordiv_unnamed_const'] }, 'mul_floordiv_unnamed_const': {'class': 'constant', 'value': 3}, 'floordiv_mul_add': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['sub_mul_floordiv', 'mul_add_unnamed_const'] }, 'mul_add_unnamed_const': {'class': 'constant', 'value': 1}, 'add_mul': { 'class': 'combine', 'kind': 'mul', 'out_shape': {batch_dim, Unflatten_1_split_dims1_dim}, 'from': ['Unflatten_1', 'floordiv_mul_add'] }, 'Cast_3': {'class': 'cast', 'from': 'add_mul', 'dtype': 'int64'}, 'add_1': { 'class': 'combine', 'kind': 'add', 'out_shape': {batch_dim, _10___time_data__2___3___10_dim}, 'from': ['add', 'add_Squeeze'] }, 'add_Squeeze': {'class': 'squeeze', 'from': 'Cast_3', 'axis': ['F']}, 'Flatten_1': {'class': 'merge_dims', 'from': 'Transpose_1', 'axes': ['B', 'T'], 'keep_order': True}, 'Flatten_2': {'class': 'merge_dims', 'from': 'add_1', 'axes': ['B', 'T'], 'keep_order': True}, 'GatherTensor': {'class': 'gather', 'from': 'Flatten_1', 'axis': 'B', 'position': 'Flatten_2'}, 'Unflatten_2': { 'class': 'split_dims', 'from': 'GatherTensor', 'axis': 'B', 'dims': [ batch_dim, time_data_dim + -2 // 3 + 1, static_dim_2_dim ] }, 'Unflatten_Length_2': {'class': 'length', 'axis': 'B', 'from': 'data'}, 'Unflatten_Length_3': {'class': 'length', 'axis': 'T', 'from': 'data'}, 'Unflatten_Reduce_1': {'class': 'reduce', 'mode': 'max', 'axes': ['B'], 'from': 'Unflatten_Length_3'}, 'Unflatten_sub_2': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Unflatten_Reduce_1', 'Unflatten_sub_unnamed_const_2'] }, 'Unflatten_sub_unnamed_const_2': {'class': 'constant', 'value': 1}, 'Unflatten_sub_3': { 'class': 'combine', 'kind': 'sub', 'out_shape': {}, 'from': ['Unflatten_sub_2', 'Unflatten_sub_unnamed_const_3'] }, 'Unflatten_sub_unnamed_const_3': {'class': 'constant', 'value': 1}, 'Unflatten_floordiv_1': { 'class': 'combine', 'kind': 'floordiv', 'out_shape': {}, 'from': ['Unflatten_sub_3', 'Unflatten_floordiv_unnamed_const_1'] }, 'Unflatten_floordiv_unnamed_const_1': {'class': 'constant', 'value': 3}, 'Unflatten_add_1': { 'class': 'combine', 'kind': 'add', 'out_shape': {}, 'from': ['Unflatten_floordiv_1', 'Unflatten_add_unnamed_const_1'] }, 'Unflatten_add_unnamed_const_1': {'class': 'constant', 'value': 1}, 'Transpose_2': {'class': 'copy', 'from': 'Unflatten_2'}, 'Unflatten_3': {'class': 'split_dims', 'from': 'Transpose_1', 'axis': 'B', 'dims': [1, -1]}, 'Unflatten_Length_4': {'class': 'length', 'axis': 'B', 'from': 'data'}, 'Cat': { 'class': 'concat', 'from': [('Unflatten_3', 'stag:Unflatten_3_split_dims0'), ('Cat_ReturnnReinterpretSameSizeAs', 'stag:static-dim-2')] }, 'Cat_ReturnnReinterpretSameSizeAs': {'class': 'reinterpret_data', 'from': 'Transpose_2', 'size_base': 'Unflatten_3'}, 'mul': { 'class': 'combine', 'kind': 'mul', 'out_shape': {batch_dim, Unflatten_3_split_dims0_dim + static_dim_2_dim, Conv1d_conv_s0_dim, Conv1d_channel_dim}, 'from': ['Transpose_1', 'Cat'] }, 'Reduce': {'class': 'reduce', 'mode': 
'sum', 'axes': ['F'], 'from': 'mul'}, 'Power': {'class': 'eval', 'eval': 'tf.math.pow(source(0), 2)', 'from': 'Transpose_1'}, 'Reduce_1': {'class': 'reduce', 'mode': 'sum', 'axes': ['F'], 'from': 'Power'}, 'Power_1': {'class': 'eval', 'eval': 'tf.math.pow(source(0), 2)', 'from': 'Cat'}, 'Reduce_2': {'class': 'reduce', 'mode': 'sum', 'axes': ['F'], 'from': 'Power_1'}, 'mul_1': { 'class': 'combine', 'kind': 'mul', 'out_shape': {batch_dim, Conv1d_conv_s0_dim, Unflatten_3_split_dims0_dim + static_dim_2_dim}, 'from': ['Reduce_1', 'Reduce_2'] }, 'Rsqrt': {'class': 'activation', 'activation': 'rsqrt', 'from': 'mul_1'}, 'Minimum': {'class': 'eval', 'eval': 'tf.minimum(source(0), source(1))', 'from': ['Rsqrt', 'unnamed_const']}, 'unnamed_const': {'class': 'constant', 'value': 100000000.0}, 'mul_2': { 'class': 'combine', 'kind': 'mul', 'out_shape': {batch_dim, Conv1d_conv_s0_dim, Unflatten_3_split_dims0_dim + static_dim_2_dim}, 'from': ['Reduce', 'Minimum'] }, 'output': {'class': 'copy', 'from': 'mul_2'} } ```
albertz commented 2 years ago
```
self_dim_tags = <local> {Dim{B}, Dim{'10*time:data+9'[B]}}, len = 2
out_shape = <local> {Dim{'(10*((time:data+-2)//3))+10'[?]}, Dim{B}}, len = 2
```

Yes, they don't match. Do you see where they come from, and which of those should be correct?

vieting commented 2 years ago

The dim tag in `out_shape` comes from `neg_idxs`, specifically from `n_negatives * tsz` in the shape passed to `torch.randint`. The other one is wrong. It seems this is because the serialization is wrong, see the serialized net dict entry for the layer `mul_randint`:

```
  'mul_randint': {
    'class': 'rand_int',
    'shape': (
      batch_dim,
      10 * time_data_dim + -2 // 3 + 10
    ),
    'maxval': 'mul_randint_Cast',
    'minval': 0,
    'dtype': 'int64',
    'from': ['data']
  },
```

The brackets are missing, and the `+-` should probably be `+` or `-`, right?
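To make the precedence problem concrete, here is a minimal sketch with plain Python integers rather than RETURNN dim objects (using a time length of 14, which gives the conv output length 5 seen in the log above):

```
time = 14  # example time length

intended = 10 * ((time + -2) // 3) + 10   # brackets as in the dim tag name '(10*((time:data+-2)//3))+10'
assert intended == 50                     # 10 * 4 + 10, i.e. n_negatives * conv output length

unbracketed = 10 * time + -2 // 3 + 10    # what the unbracketed serialization evaluates to
assert unbracketed == 149                 # 140 + (-1) + 10, a completely different size
```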

vieting commented 2 years ago

I added brackets for the serialization in #112. The `+-` thing is just cosmetic, I guess, but it should work like this.

Now only the batch attribute differs, `Dim{'(10*((time:data+-2)//3))+10'[B]}` vs. `Dim{'(10*((time:data+-2)//3))+10'[?]}`, see here.

albertz commented 2 years ago

> I added brackets for the serialization in #112. The `+-` thing is just cosmetic, I guess, but it should work like this.
>
> Now only the batch attribute differs, `Dim{'(10*((time:data+-2)//3))+10'[B]}` vs. `Dim{'(10*((time:data+-2)//3))+10'[?]}`, see here.

No, that is not the issue. It should still match.

The issue is that these are really different dim tags:

```
time_data_dim = SpatialDim('time:data')
_10___time_data__2___3___10_dim = SpatialDim('(10*((time:data+-2)//3))+10')
```

Normally the ReturnnDimTagProxy should not have created a separate dim here. There is code for this in dim_ref_repr.

albertz commented 2 years ago

See this commit on returnn-common: https://github.com/rwth-i6/returnn_common/commit/326af3f38aad7b57f02654f94eac16b88cce4671

Maybe try to add the same thing here and see if that fixes the problem. Otherwise, you should debug why it does not go into the `dim.derived_from_op` branch in this function.

vieting commented 2 years ago

Thanks, this fixes the issue for the time dim tag :+1:

Now we have the same error with `Dim{'Unflatten_1_split_dims1'(1)}`. The only difference is that for the dim tag in `self_dim_tags` and `out_shape`, `auto_generated=True`, whereas `dim.auto_generated=False`.

albertz commented 2 years ago

> Thanks, this fixes the issue for the time dim tag :+1:

But I still see this in the generated code:


```
time_data_dim = SpatialDim('time:data')
_10___time_data__2___3___10_dim = SpatialDim('(10*((time:data+-2)//3))+10')
```

So there is still something wrong.

Although I think this is actually not used, so it does not matter too much. I guess this just needs some cleanup then.

> Now we have the same error with `Dim{'Unflatten_1_split_dims1'(1)}`. The only difference is that for the dim tag in `self_dim_tags` and `out_shape`, `auto_generated=True`, whereas `dim.auto_generated=False`.

This is not the same problem.

The problem is that Unflatten_1_split_dims1_dim is not actually assigned properly. The dim tag is created here:

```
'Unflatten_1': {'class': 'split_dims', 'from': 'Range_1', 'axis': 'B', 'dims': [-1, 1]},
```

This should have been something like:

```
'Unflatten_1': {'class': 'split_dims', 'from': 'Range_1', 'axis': 'B', 'dims': [-1, Unflatten_1_split_dims1_dim]},
```

I'm not exactly sure yet what the best solution (one which is not too complicated) would be. I'm thinking about it.
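For reference, a sketch of how the two pieces would fit together: the dim tag declaration that the generated config already contains, plus the corrected layer entry from above. Whether this is the right general fix is exactly the open question here.

```
from returnn.tf.util.data import SpatialDim

# The dim tag is declared once, as the generated config already does ...
Unflatten_1_split_dims1_dim = SpatialDim('Unflatten_1_split_dims1', 1)

# ... and then passed into the layer that creates it, instead of the plain `1`:
network = {
    # ...
    'Unflatten_1': {
        'class': 'split_dims', 'from': 'Range_1', 'axis': 'B',
        'dims': [-1, Unflatten_1_split_dims1_dim],
    },
    # ...
}
```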

albertz commented 2 years ago

Right now, our assumption is that all dynamic dim tags come from extern data already, or are derived from those, which has also held true so far.

We did not care too much about static dims yet. However, that is the problem now: new static dim tags can also be created within the network.

If we want to make consistent use of dim tags, we need to go through all code which creates new dim tags (not derived from input dims, but really new dims); typical examples are the output dim of LinearLayer, and also SplitDimsLayer as in this case.

A simpler solution might be to not make consistent use of dim tags. Here we need it for out_shape for CombineLayer and CompareLayer. But we actually only use out_shape to make sure that broadcasting is valid and correct. This is mostly just for verification. We could also introduce another option to explicitly allow for broadcasting in the layer. Maybe this is the only issue here.

vieting commented 2 years ago

> A simpler solution might be to not make consistent use of dim tags. Here we need it for out_shape for CombineLayer and CompareLayer. But we actually only use out_shape to make sure that broadcasting is valid and correct. This is mostly just for verification. We could also introduce another option to explicitly allow for broadcasting in the layer.

This would be on the RETURNN side, right? What exactly should it do? Broadcasting is allowed anyway.

> Maybe this is the only issue here.

Seems so: if I don't set `out_shape` in the CombineLayer, the test works. I'll check if this breaks other things.

vieting commented 2 years ago

`test_broadcasting_with_lengths` breaks:

```
RequirementNotSatisfied: All inputs
 - Data{'Cast_output', [T|'Range_input_len'[]], dtype='int64'}
 - Data{'Unflatten_1_output', [B], dtype='int64'}
require broadcasting to
  Data{'less_output', [B,T|'Range_input_len'[]], dtype='int64'}.
This must be explicitly allowed, e.g. by specifying out_shape. (required since behavior_version >= 4)
```

So I guess we need that option on the RETURNN side.

albertz commented 2 years ago

> > A simpler solution might be to not make consistent use of dim tags. Here we need it for out_shape for CombineLayer and CompareLayer. But we actually only use out_shape to make sure that broadcasting is valid and correct. This is mostly just for verification. We could also introduce another option to explicitly allow for broadcasting in the layer.
>
> This would be on the RETURNN side, right? What exactly should it do? Broadcasting is allowed anyway.

This option is already there in the RETURNN code; it was just not exposed as a layer option. It is `allow_broadcast_all_sources`. I just made it an option for CombineLayer and CompareLayer. I think those are the two cases we need here.

No, it was not allowed in all cases anyway. It was only allowed to broadcast some of the inputs, but not all of them. E.g., shape `[A] + [B]` was not allowed, while shape `[A,B] + [B]` was allowed.
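To make the distinction concrete, here is a small numpy sketch (numpy itself allows both cases when written this way; the point is only to show which case produces an output larger than every input, which is what the explicit opt-in guards against):

```
import numpy as np

a = np.zeros((4,))      # shape [A]
b = np.zeros((3,))      # shape [B]
ab = np.zeros((4, 3))   # shape [A,B]

# [A,B] + [B]: only some inputs are broadcast; the output shape equals the
# shape of one of the inputs, so no explicit opt-in is needed.
print((ab + b).shape)   # (4, 3)

# [A] + [B]: every input has to be broadcast to reach the output shape [A,B].
# In RETURNN (behavior_version >= 4) this is the case that must be allowed
# explicitly, e.g. via out_shape or allow_broadcast_all_sources.
print((a[:, None] + b[None, :]).shape)  # (4, 3)
```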

> > Maybe this is the only issue here.
>
> Seems so: if I don't set `out_shape` in the CombineLayer, the test works. I'll check if this breaks other things.

Setting `out_shape` had the effect that it enabled `allow_broadcast_all_sources`. I think we introduced `out_shape` because we needed this in some cases. In any case, we will still need it sometimes.

On the pytorch-to-returnn side, it would be good if we only set `allow_broadcast_all_sources` when we really need it.

vieting commented 2 years ago

> On the pytorch-to-returnn side, it would be good if we only set `allow_broadcast_all_sources` when we really need it.

This would in general be the case if the output has more dims than any of the inputs, right? I pushed a commit in #112.
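A minimal sketch of that check as it could look on the pytorch-to-returnn side (hypothetical helper name and simplified shape tuples; the actual code in #112 works on RETURNN Data/Dim objects):

```
def needs_broadcast_all_sources(input_shapes, output_shape):
    """True if no single input already covers the output rank, i.e. every
    input would have to be broadcast (hypothetical helper)."""
    return all(len(shape) < len(output_shape) for shape in input_shapes)

# The two cases from the discussion above:
assert needs_broadcast_all_sources([("A",), ("B",)], ("A", "B")) is True
assert needs_broadcast_all_sources([("A", "B"), ("B",)], ("A", "B")) is False

# Only set the option when it is really needed, e.g. in a combine layer dict:
layer = {"class": "combine", "kind": "add", "from": ["a", "b"]}
if needs_broadcast_all_sources([("A",), ("B",)], ("A", "B")):
    layer["allow_broadcast_all_sources"] = True  # option discussed above
```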

albertz commented 2 years ago

> > On the pytorch-to-returnn side, it would be good if we only set `allow_broadcast_all_sources` when we really need it.
>
> This would in general be the case if the output has more dims than any of the inputs, right? I pushed a commit in #112.

Yes, exactly.

albertz commented 2 years ago

I guess this is fixed now via #112, right?

vieting commented 2 years ago

> I guess this is fixed now via #112, right?

Yes, thanks :+1:

vieting commented 2 years ago

Btw, the brackets for the dim tag serialization should also be added in returnn_common, right?

albertz commented 2 years ago

> Btw, the brackets for the dim tag serialization should also be added in returnn_common, right?

Yes, I will do that. But I'm thinking about whether to add the brackets only when needed, rather than always.
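A sketch of what "brackets only when needed" could mean, using a stand-in expression type and standard operator precedences (this only illustrates the idea; it is not the actual returnn_common or pytorch-to-returnn serialization code):

```
# Stand-in for a derived dim expression: either a plain name or (op, left, right).
PRECEDENCE = {"add": 1, "sub": 1, "mul": 2, "floordiv": 2}
SYMBOL = {"add": " + ", "sub": " - ", "mul": " * ", "floordiv": " // "}

def expr_repr(expr, parent_prec=0, right_child=False):
    """Serialize an expression, adding brackets only where needed (sketch)."""
    if isinstance(expr, str):
        return expr
    op, left, right = expr
    prec = PRECEDENCE[op]
    s = expr_repr(left, prec, False) + SYMBOL[op] + expr_repr(right, prec, True)
    # Brackets are needed if this sub-expression binds less tightly than its
    # parent, or if it is the right operand at equal precedence (the operators
    # here are left-associative, and // does not associate with *).
    if prec < parent_prec or (prec == parent_prec and right_child):
        return "(" + s + ")"
    return s

# (10 * ((time:data + -2) // 3)) + 10, printed with only the necessary brackets:
expr = ("add", ("mul", "10", ("floordiv", ("sub", "time_data_dim", "2"), "3")), "10")
print(expr_repr(expr))  # -> 10 * ((time_data_dim - 2) // 3) + 10
```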