koito19960406 / ZenSVI

This package is a one-stop solution for downloading, cleaning, and analyzing street view imagery.
https://zensvi.readthedocs.io/en/latest/
Creative Commons Attribution Share Alike 4.0 International

Cuda run time error when segmenting #22

Closed koito19960406 closed 1 year ago

koito19960406 commented 1 year ago

Got this error when running the following code:

segmenter = Segmenter()
dir_input = dir_output / "panorama"
segmenter.segment(dir_input=dir_input, dir_pixel_ratio_output=dir_output, batch_size=1)
Traceback (most recent call last):
  File "d:\Koichi\delft_svi\src\download.py", line 20, in <module>
    segmenter.segment(dir_input = dir_input, dir_pixel_ratio_output = dir_output, batch_size=1)
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\zensvi\cv\segmentation\segmentation.py", line 586, in segment
    completed_future.result()
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\concurrent\futures\_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\zensvi\cv\segmentation\segmentation.py", line 516, in _process_images
    outputs, pixel_ratios = self._semantic_segmentation(images, original_img_shape)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\zensvi\cv\segmentation\segmentation.py", line 418, in _semantic_segmentation
    outputs = self.model(**inputs)
              ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\AppData\Roaming\Python\Python311\site-packages\transformers\models\mask2former\modeling_mask2former.py", line 2500, in forward
    outputs = self.model(
              ^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\AppData\Roaming\Python\Python311\site-packages\transformers\models\mask2former\modeling_mask2former.py", line 2271, in forward
    pixel_level_module_output = self.pixel_level_module(
                                ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\AppData\Roaming\Python\Python311\site-packages\transformers\models\mask2former\modeling_mask2former.py", line 1399, in forward
    backbone_features = self.encoder(pixel_values).feature_maps
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\AppData\Roaming\Python\Python311\site-packages\transformers\models\swin\modeling_swin.py", line 1325, in forward
    outputs = self.encoder(
              ^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\AppData\Roaming\Python\Python311\site-packages\transformers\models\swin\modeling_swin.py", line 839, in forward
    layer_outputs = layer_module(
                    ^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\AppData\Roaming\Python\Python311\site-packages\transformers\models\swin\modeling_swin.py", line 757, in forward
    layer_outputs = layer_module(
                    ^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\AppData\Roaming\Python\Python311\site-packages\transformers\models\swin\modeling_swin.py", line 688, in forward
    attention_outputs = self.attention(
                        ^^^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\AppData\Roaming\Python\Python311\site-packages\transformers\models\swin\modeling_swin.py", line 563, in forward
    self_outputs = self.self(hidden_states, attention_mask, head_mask, output_attentions)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\AppData\Roaming\Python\Python311\site-packages\transformers\models\swin\modeling_swin.py", line 472, in forward
    value_layer = self.transpose_for_scores(self.value(hidden_states))
                                            ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ual\.conda\envs\street_scope_test\Lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
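Since CUDA kernels launch asynchronously, a CUBLAS_STATUS_EXECUTION_FAILED traceback like the one above can point at an operation other than the one that actually failed. A common first debugging step (suggested by PyTorch itself for asynchronous CUDA errors, not specific to ZenSVI) is to rerun with synchronous kernel launches:

```shell
# Force synchronous CUDA kernel launches so the Python traceback points at
# the operation that actually failed. Shown for a POSIX shell; on Windows
# cmd the equivalent is `set CUDA_LAUNCH_BLOCKING=1`.
export CUDA_LAUNCH_BLOCKING=1
```

With the variable set, rerun the failing script in the same shell session; the error should then surface at the true failure site.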
koito19960406 commented 1 year ago

I added an optional argument max_workers: Union[int, None] = None in this commit: https://github.com/koito19960406/ZenSVI/commit/ece8735a8056d703b6d6cfdfbfaf5a367db5a421. When you hit this error, pass a smaller max_workers value to reduce the load on CUDA. This change is available in v0.1.15.
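The traceback shows ZenSVI dispatching work through concurrent.futures, so capping max_workers bounds how many batches hit the GPU at once. A minimal stdlib sketch of that pattern (process_batch is a hypothetical stand-in for the per-batch segmentation call, not ZenSVI's actual function):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_batch(batch_id: int) -> int:
    # Stand-in for the per-batch work that would otherwise hit the GPU.
    return batch_id * batch_id

batches = range(8)
# max_workers=1 serializes the batches, trading throughput for stability;
# a higher cap allows more concurrent submissions.
with ThreadPoolExecutor(max_workers=1) as pool:
    futures = [pool.submit(process_batch, b) for b in batches]
    results = sorted(f.result() for f in as_completed(futures))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

With v0.1.15 onward, the equivalent mitigation for the error above would presumably be passing max_workers=1 to segmenter.segment(), per the commit linked in the comment.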