PyTorch now enforces a check that the output of a traced function matches the output of the same function executed eagerly. This check will almost always fail in ProbFlow applications, because most models' outputs differ between two calls due to random posterior sampling.
The fix is to pass check_trace=False to torch.jit.trace_module.
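A minimal sketch of the workaround. The StochasticModel below is a hypothetical stand-in for a ProbFlow-style model whose forward pass draws a random sample, not ProbFlow's actual model class:

```python
import torch


class StochasticModel(torch.nn.Module):
    # Toy stand-in: the forward pass adds random noise each call,
    # mimicking a model that samples from its posterior.
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 1)

    def forward(self, x):
        out = self.linear(x)
        # Random "posterior sample" added on every call, so two
        # invocations with the same input give different outputs.
        return out + torch.randn_like(out)


model = StochasticModel()
example = torch.randn(2, 3)

# check_trace=False skips the traced-vs-eager output comparison,
# which would otherwise raise because of the randomness above.
traced = torch.jit.trace_module(
    model, {"forward": example}, check_trace=False
)
```

With check_trace left at its default of True, trace_module re-runs the function and compares outputs, which fails for any stochastic forward pass.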