Open · Redmept1on opened this issue 3 months ago
Summary

oneflow.softmax performs differently on CPU and CUDA.

Code to reproduce bug
```python
import oneflow as flow
import numpy as np

x1 = flow.tensor(np.array([[float('inf'), 0, -1, float('nan'), 5]], dtype=np.float32))
x1 = x1.cuda()
y1 = flow.softmax(x1)
print(y1.device, y1)

x1 = flow.tensor(np.array([[float('inf'), 0, -1, float('nan'), 5]], dtype=np.float32))
x1 = x1.cpu()
y2 = flow.softmax(x1)
print(y2.device, y2)
```
pytorch (for comparison):
```python
import torch
import numpy as np

input_tensor = torch.tensor(np.array([[float('inf'), 0, -1, float('nan'), 5]], dtype=np.float32))
# other_tensor = torch.tensor(np.array([[float('nan'), 0, -1, float('nan'), 5]], dtype=np.float32))
# dim=1 added explicitly; calling softmax without dim is deprecated in PyTorch
output_tensors = torch.nn.functional.softmax(input_tensor, dim=1)
print(output_tensors)
```
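For reference, a minimal NumPy sketch of the standard max-subtraction softmax (an assumption about the reference semantics, not OneFlow's actual kernel) shows why a row containing inf or NaN is expected to produce an all-NaN output: the max reduction over such a row yields NaN, which then propagates through every element.

```python
import numpy as np

def stable_softmax(x):
    # Numerically stable softmax: subtract the row max before exponentiating.
    # np.max over a row containing NaN returns NaN, so every shifted entry
    # becomes NaN and the whole output row is NaN.
    shifted = x - np.max(x, axis=-1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=-1, keepdims=True)

x = np.array([[float('inf'), 0, -1, float('nan'), 5]], dtype=np.float32)
print(stable_softmax(x))  # every element is NaN
```

Under these semantics the CPU and CUDA results should agree (all NaN for this input), so any device-dependent difference points at a divergence in the backend kernels.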