lpiccinelli-eth / UniDepth

Universal Monocular Metric Depth Estimation

encountered an error while executing the demo #31

Open · JV-X opened this issue 2 months ago

JV-X commented 2 months ago
(Unidepth) hygx@DESKTOP-47Q8A9V:~/code/UniDepth$ python ./scripts/demo.py
Triton is not available, some optimizations will not be enabled.
This is just a warning: triton is not available
Torch version: 2.2.0+cu118
Instantiate: dinov2_vitl14
Traceback (most recent call last):
  File "/home/hygx/code/UniDepth/./scripts/demo.py", line 43, in <module>
    demo(model)
  File "/home/hygx/code/UniDepth/./scripts/demo.py", line 15, in demo
    predictions = model.infer(rgb_torch, intrinsics_torch)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/code/UniDepth/unidepth/models/unidepthv1/unidepthv1.py", line 208, in infer
    encoder_outputs, cls_tokens = self.pixel_encoder(rgbs)
                                  ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/code/UniDepth/unidepth/models/backbones/dinov2.py", line 354, in forward
    ret = self.forward_features(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/code/UniDepth/unidepth/models/backbones/dinov2.py", line 327, in forward_features
    x = blk(x)
        ^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/code/UniDepth/unidepth/models/backbones/metadinov2/block.py", line 277, in forward
    return super().forward(x_or_x_list)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/code/UniDepth/unidepth/models/backbones/metadinov2/block.py", line 109, in forward
    x = x + attn_residual_func(x)
            ^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/code/UniDepth/unidepth/models/backbones/metadinov2/block.py", line 88, in attn_residual_func
    return self.ls1(self.attn(self.norm1(x)))
                    ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/code/UniDepth/unidepth/models/backbones/metadinov2/attention.py", line 80, in forward
    x = memory_efficient_attention(q, k, v, attn_bias=attn_bias)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/xformers/ops/fmha/__init__.py", line 223, in memory_efficient_attention
    return _memory_efficient_attention(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/xformers/ops/fmha/__init__.py", line 321, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/xformers/ops/fmha/__init__.py", line 337, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/xformers/ops/fmha/dispatch.py", line 120, in _dispatch_fw
    return _run_priority_list(
           ^^^^^^^^^^^^^^^^^^^
  File "/home/hygx/anaconda3/envs/Unidepth/lib/python3.11/site-packages/xformers/ops/fmha/dispatch.py", line 63, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 1453, 16, 64) (torch.float32)
     key         : shape=(1, 1453, 16, 64) (torch.float32)
     value       : shape=(1, 1453, 16, 64) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    device=cpu (supported: {'cuda'})
    attn_bias type is <class 'NoneType'>
`flshattF@v2.3.6` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
`tritonflashattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
`cutlassF` is not supported because:
    device=cpu (supported: {'cuda'})
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    device=cpu (supported: {'cuda'})
    unsupported embed per head: 64
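
All of the backends listed above are rejected for the same root cause: the attention inputs are float32 tensors on the CPU, while the xformers kernels only ship CUDA implementations (and flash attention additionally requires fp16/bf16). As a rough illustration of that dispatch constraint (not UniDepth code; the shapes are copied from the error above):

import torch
from xformers.ops import memory_efficient_attention

# Shapes from the error: (batch, seq_len, heads, head_dim)
q = torch.randn(1, 1453, 16, 64)  # float32 on CPU -> every backend refuses it
k = torch.randn(1, 1453, 16, 64)
v = torch.randn(1, 1453, 16, 64)

# This call raises the NotImplementedError shown above:
# out = memory_efficient_attention(q, k, v)

# Moving the same tensors to a CUDA device (optionally in half precision)
# is what allows the cutlass/flash backends to dispatch:
# q, k, v = (t.cuda() for t in (q, k, v))
# out = memory_efficient_attention(q, k, v)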

CUDA version:

(Unidepth) hygx@DESKTOP-47Q8A9V:~/code/UniDepth$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

Python version:

(Unidepth) hygx@DESKTOP-47Q8A9V:~/code/UniDepth$ python
Python 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

I am running the code on Ubuntu 22.04 (WSL2).

lpiccinelli-eth commented 2 months ago

It looks like the model is on CPU, and xformers blocks do not support CPU. As a sanity check, you can try torch.cuda.is_available() in your script and see if torch can find your GPU.
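
A minimal sketch of that check and of the fix it implies, assuming the demo script is edited in place (the names model, rgb_torch, and intrinsics_torch come from the traceback above; the exact structure of scripts/demo.py may differ):

import torch

# If this prints False, the problem is the torch/CUDA/WSL2 setup, not UniDepth.
print(torch.cuda.is_available())
print(torch.version.cuda)  # None would mean a CPU-only torch build

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The xformers attention blocks only run on CUDA, so both the model and the
# inputs must be moved to the GPU before inference.
model = model.to(device)
rgb_torch = rgb_torch.to(device)
intrinsics_torch = intrinsics_torch.to(device)

predictions = model.infer(rgb_torch, intrinsics_torch)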

JV-X commented 2 months ago

Thanks so much for your time.

> It looks like the model is on CPU, and xformers blocks do not support CPU. As a sanity check, you can try torch.cuda.is_available() in your script and see if torch can find your GPU.