csarofeen / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
http://pytorch.org

Type promotion does not match with eager mode #1437

Closed: zasdfgbnm closed this issue 2 years ago

zasdfgbnm commented 2 years ago

🐛 Describe the bug

import torch

# Route fusions through nvFuser instead of the TensorExpr fuser,
# with the profiling executor/mode enabled so dtype profiles are collected.
torch._C._jit_set_nvfuser_enabled(True)
torch._C._jit_set_texpr_fuser_enabled(False)
torch._C._jit_set_profiling_executor(True)
torch._C._jit_set_profiling_mode(True)

# Mixed 0-dim and 1-dim tensors of different dtypes to exercise type promotion
x = torch.tensor(0, device='cuda:0', dtype=torch.float16)    # 0-dim, half
y = torch.tensor(0, dtype=torch.float32)                     # 0-dim, float, CPU
z = torch.tensor([0], device='cuda:0', dtype=torch.float16)  # 1-dim, half
q = torch.tensor(0, device='cuda:0', dtype=torch.float)      # 0-dim, float (unused below)

print("========== case 1 ==========")
o = x * y + z
print("Eager:", o.dtype)

@torch.jit.script
def f(x, y, z):
    return x * y + z

o = f(x, y, z)
print("JIT 1:", o.dtype)
o = f(x, y, z)
print("JIT 2:", o.dtype)
o = f(x, y, z)
print("JIT 3:", o.dtype)

I am seeing:

========== case 1 ==========
Eager: torch.float16
JIT 1: torch.float16
JIT 2: torch.float32
JIT 3: torch.float32
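As far as I understand, the eager result follows PyTorch's promotion rule that zero-dim tensors participate at lower priority than dimensioned tensors of the same dtype category: `x * y` produces a 0-dim float32 tensor, and adding the 1-dim float16 `z` then promotes to float16, whereas the fuser apparently promotes everything as if all operands were dimensioned. A minimal CPU-only sketch of the expected rule using `torch.result_type`:

```python
import torch

# x * y: two 0-dim floating tensors promote normally -> float32
xy = torch.result_type(
    torch.tensor(0, dtype=torch.float16),  # 0-dim half
    torch.tensor(0, dtype=torch.float32),  # 0-dim float
)
print(xy)  # torch.float32

# (x * y) + z: a 0-dim float32 tensor loses to a 1-dim float16 tensor,
# because dimensioned tensors take priority within the same dtype category
o = torch.result_type(
    torch.tensor(0, dtype=torch.float32),    # 0-dim float (stand-in for x * y)
    torch.tensor([0], dtype=torch.float16),  # 1-dim half
)
print(o)  # torch.float16
```

So "JIT 1" (before the fusion kicks in) matches eager, and the float32 results on the later runs are the mismatch.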
zasdfgbnm commented 2 years ago

Since I am already looking at type promotion support for complex dtypes, I will look into this issue myself.