rwth-i6 / pytorch-to-returnn

Make PyTorch code runnable within RETURNN

fix loading params to returnn #38

Closed vieting closed 3 years ago

vieting commented 3 years ago

Loading the params into RETURNN works correctly when we have a module mod = torch.nn.Linear(n_in, n_out) and call it directly, i.e. y = mod(x). However, if the params are accessed directly, e.g. via mod.weight, they are currently not loaded in RETURNN.
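To illustrate the two usage patterns in plain PyTorch (this is a minimal sketch of the reported scenario, not code from the PR): both compute the same linear transform, but the second one bypasses the module call and touches the parameters directly, which is the case that currently fails to load the params on the RETURNN side.

```python
import torch

n_in, n_out = 3, 5
mod = torch.nn.Linear(n_in, n_out)
x = torch.randn(2, n_in)

# Case 1: calling the module directly -- params get loaded correctly.
y1 = mod(x)

# Case 2: using the params directly (mod.weight / mod.bias) -- numerically
# equivalent, but this access pattern is the one the fix addresses.
y2 = torch.nn.functional.linear(x, mod.weight, mod.bias)

assert torch.allclose(y1, y2)
```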

vieting commented 3 years ago

Judging from the previously failing test cases, we have to check whether the layer is a temp layer, similar to the check in apply_call: https://github.com/rwth-i6/pytorch-to-returnn/blob/main/pytorch_to_returnn/naming/call.py#L160

However, the network name is root/.tmp_root..., so I modified the check. Actually, in apply_call the name is also root/.tmp_root..., so the check there never applies, at least not for any of the tests in test_layers.py. Is that intended? What do you think?
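A sketch of the point about the modified check (the function and attribute names here are hypothetical, not the actual pytorch-to-returnn internals): if the previous check tested whether the name starts with ".tmp_root", it never matches names of the form "root/.tmp_root/...", so the check has to look at the path components instead.

```python
def is_temp_layer(abs_name: str) -> bool:
    """Hypothetical temp-layer check. A plain startswith(".tmp_root") test
    would never match "root/.tmp_root/...", so we check the path components."""
    return ".tmp_root" in abs_name.split("/")

# The old-style prefix check would miss this name:
assert not "root/.tmp_root/linear".startswith(".tmp_root")
# The component-based check matches it:
assert is_temp_layer("root/.tmp_root/linear")
assert not is_temp_layer("root/linear")
```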