In the new forward pass inside the GraphExecutor, we create a new input tensor from the graph executor state (potentially summing together multiple inputs). PyTorch then sees this input as a brand-new tensor, so the comparison with the old output tensor (which may hold exactly the same values) fails.
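A minimal sketch of the problem (the variable names below are made up for illustration, not taken from the GraphExecutor code): summing the cached inputs allocates a fresh tensor object, so an identity-style comparison against the previously stored output fails even though the values are equal.

```python
import torch

# Toy illustration (hypothetical names): the executor state holds the
# upstream outputs, and the forward pass builds the next input by
# summing them together.
old_output = torch.ones(3)          # output cached from the previous node
state_inputs = [old_output]         # inputs held in the graph executor state

new_input = sum(state_inputs)       # allocates a brand-new tensor object

print(torch.equal(new_input, old_output))  # True  -- values are identical
print(new_input is old_output)             # False -- identity check fails
```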
When importing a NIR graph through NIRTorch and then exporting it back to NIR, the edges of the resulting NIR graph come out wrong:

In a simple sequential network with two nodes, the exported graph never connects them:
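A rough reproduction sketch, assuming the `nirtorch.load` / `nirtorch.extract_nir_graph` API and purely illustrative node parameters (none of the names or values below come from the original report):

```python
import numpy as np
import torch
import nir
import nirtorch

# Two-node sequential NIR graph: Linear -> LIF (parameters are illustrative).
graph = nir.NIRGraph(
    nodes={
        "linear": nir.Linear(weight=np.eye(3)),
        "lif": nir.LIF(tau=np.ones(3), r=np.ones(3),
                       v_leak=np.zeros(3), v_threshold=np.ones(3)),
    },
    edges=[("linear", "lif")],
)

# NIR -> torch: map each NIR node to a torch module (simplified; a faithful
# mapping would implement the LIF dynamics instead of using Identity).
def nir_to_module(node):
    if isinstance(node, nir.Linear):
        lin = torch.nn.Linear(*node.weight.shape[::-1], bias=False)
        lin.weight.data = torch.as_tensor(node.weight, dtype=torch.float32)
        return lin
    return torch.nn.Identity()

# API names assumed here: nirtorch.load builds the GraphExecutor,
# nirtorch.extract_nir_graph traces it back into a NIR graph.
torch_graph = nirtorch.load(graph, nir_to_module)

# torch -> NIR: map each torch module back to a NIR node.
def module_to_nir(module):
    if isinstance(module, torch.nn.Linear):
        return nir.Linear(weight=module.weight.detach().numpy())
    return nir.LIF(tau=np.ones(3), r=np.ones(3),
                   v_leak=np.zeros(3), v_threshold=np.ones(3))

roundtrip = nirtorch.extract_nir_graph(torch_graph, module_to_nir,
                                       torch.zeros(1, 3))

print(graph.edges)      # [("linear", "lif")]
print(roundtrip.edges)  # the linear -> lif edge never shows up
```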