pytorch / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
https://pytorch.org

torch.nested.as_nested_tensor fails with tensor on device #129647

Closed clessig closed 2 months ago

clessig commented 3 months ago

🐛 Describe the bug

torch.nested.as_nested_tensor fails with tensor on device, works with tensor on CPU:

>>> import torch
>>> a = torch.ones(10, 5, 16)
>>> b = torch.nested.as_nested_tensor(a, layout=torch.jagged)  # CPU: works
>>> a = torch.ones(10, 5, 16, device='cuda')
>>> b = torch.nested.as_nested_tensor(a, layout=torch.jagged)  # CUDA: fails
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/etc/ecmwf/nfs/dh2_home_a/nacl/scratch/pytorch/torch/nested/__init__.py", line 119, in as_nested_tensor
    return nested_view_from_values_offsets(values, offsets)
  File "/etc/ecmwf/nfs/dh2_home_a/nacl/scratch/pytorch/torch/nested/_internal/nested_tensor.py", line 522, in nested_view_from_values_offsets
    return torch._nested_view_from_jagged(  # type: ignore[attr-defined]
  File "/etc/ecmwf/nfs/dh2_home_a/nacl/scratch/pytorch/torch/nested/_internal/nested_tensor.py", line 302, in __torch_function__
    return func(*args, **kwargs)
  File "/etc/ecmwf/nfs/dh2_home_a/nacl/scratch/pytorch/torch/nested/_internal/nested_tensor.py", line 286, in __torch_dispatch__
    return fn(*args, **kwargs)
  File "/etc/ecmwf/nfs/dh2_home_a/nacl/scratch/pytorch/torch/nested/_internal/ops.py", line 183, in inner
    return func(aten_op, *args, **kwargs)
  File "/etc/ecmwf/nfs/dh2_home_a/nacl/scratch/pytorch/torch/nested/_internal/ops.py", line 1123, in _nested_view_from_jagged_default
    return NestedTensor(
  File "/etc/ecmwf/nfs/dh2_home_a/nacl/scratch/pytorch/torch/nested/_internal/nested_tensor.py", line 79, in __new__
    assert values.device == offsets.device
AssertionError

Maybe one for @jbschlosser.

Versions

Collecting environment information...
PyTorch version: 2.5.0a0+gitf18beca
Is debug build: False
CUDA used to build PyTorch: 12.3
ROCM used to build PyTorch: N/A

OS: Red Hat Enterprise Linux release 8.8 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-18)
Clang version: 15.0.7 (Red Hat 15.0.7-1.module+el8.8.0+17939+b58878af)
CMake version: version 3.20.2
Libc version: glibc-2.28

Python version: 3.10.10 (main, Feb 9 2023, 14:42:48) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10)] (64-bit runtime)
Python platform: Linux-4.18.0-477.43.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.9.7
/usr/lib64/libcudnn_adv_infer.so.8.9.7
/usr/lib64/libcudnn_adv_train.so.8.9.7
/usr/lib64/libcudnn_cnn_infer.so.8.9.7
/usr/lib64/libcudnn_cnn_train.so.8.9.7
/usr/lib64/libcudnn_ops_infer.so.8.9.7
/usr/lib64/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
CPU MHz: 2250.000
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4499.73
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-31,128-159
NUMA node1 CPU(s): 32-63,160-191
NUMA node2 CPU(s): 64-95,192-223
NUMA node3 CPU(s): 96-127,224-255
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es

Versions of relevant libraries:
[pip3] flake8==7.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.5.0a0+gitf18beca
[conda] Could not collect

cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer

clessig commented 3 months ago

Suggested fix: https://github.com/pytorch/pytorch/blob/ff026f3d0a68a4533b193498adc5b5c7b045b73c/torch/nested/__init__.py#L97

Add:

if device is None:
    device = ts.device
jbschlosser commented 2 months ago

Thanks for the report, @clessig! I agree with your proposed fix. We'd accept a PR with the fix plus a unit test (or I'll get to it shortly).