Closed kimihailv closed 1 year ago
I tried `np.dtype('float32')` too.
As of now, `Convert` is implemented by calling `x.type(dtype)`, which as far as I can tell is only available on torch tensors, not numpy arrays. Try adding a `ToTensor` operation before the `Convert` and then using `torch.float32`.
Thanks
Actually, I don't need torch tensors; I want to get numpy arrays back. I tried rewriting the `Convert` transform, but I ran into this error:
```
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Untyped global name 'self': Cannot determine Numba type of <class '__main__.Convert'>

File "../../../tmp/ipykernel_76423/2351147030.py", line 21:
<source missing, REPL/exec in use?>
```
The code of the transform:
```python
from dataclasses import replace
from typing import Callable, Optional, Tuple

# imports assumed from FFCV's pipeline API
from ffcv.pipeline.allocation_query import AllocationQuery
from ffcv.pipeline.operation import Operation
from ffcv.pipeline.state import State


class Convert(Operation):
    """Convert to target data type.

    Parameters
    ----------
    target_dtype: numpy.dtype or torch.dtype
        Target data type.
    """
    def __init__(self, target_dtype):
        super().__init__()
        self.target_dtype = target_dtype

    def generate_code(self) -> Callable:
        def convert(inp, dst):
            # referencing `self` here is what Numba fails to type
            return inp.astype(self.target_dtype)

        convert.is_parallel = True
        return convert

    # TODO: something weird about device to allocate on
    def declare_state_and_memory(self, previous_state: State) -> Tuple[State, Optional[AllocationQuery]]:
        return replace(previous_state, dtype=self.target_dtype), None
```
# TODO: something weird about device to allocate on
def declare_state_and_memory(self, previous_state: State) -> Tuple[State, Optional[AllocationQuery]]:
return replace(previous_state, dtype=self.target_dtype), None
Hi! Try the following:
```python
class Convert(Operation):
    """Convert to target data type.

    Parameters
    ----------
    target_dtype: numpy.dtype or torch.dtype
        Target data type.
    """
    def __init__(self, target_dtype):
        super().__init__()
        self.target_dtype = target_dtype

    def generate_code(self) -> Callable:
        # capture the dtype in a local variable so the compiled
        # closure does not reference `self`
        target_dtype = self.target_dtype

        def convert(inp, dst):
            return inp.astype(target_dtype)

        convert.is_parallel = True
        return convert

    # TODO: something weird about device to allocate on
    def declare_state_and_memory(self, previous_state: State) -> Tuple[State, Optional[AllocationQuery]]:
        return replace(previous_state, dtype=self.target_dtype), None
```
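The reason this works: Numba compiles `convert` in nopython mode and has to infer a type for every free variable the closure captures. A whole Python object like `self` cannot be typed, while a captured plain value can. A minimal stdlib-only sketch (the `Converter` class here is hypothetical, just to show what each closure actually captures):

```python
class Converter:
    """Hypothetical stand-in for the FFCV operation above."""

    def __init__(self, target_dtype):
        self.target_dtype = target_dtype

    def broken(self):
        # the inner function closes over `self`, which Numba cannot type
        def convert(inp):
            return inp.astype(self.target_dtype)
        return convert

    def fixed(self):
        # capture only the plain value before defining the inner function
        target_dtype = self.target_dtype

        def convert(inp):
            return inp.astype(target_dtype)
        return convert


c = Converter('float32')
# inspect what each closure captured
print(c.broken().__code__.co_freevars)  # ('self',)
print(c.fixed().__code__.co_freevars)   # ('target_dtype',)
```

So the fix is purely about what ends up in the closure's free variables before the function is handed to the compiler.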
It works, thank you!