ElpadoCan / SpotMAX

SpotMAX: a generalist framework for multi-dimensional automatic spot detection and quantification
GNU General Public License v3.0
5 stars · 1 fork

Neural network seems to accumulate data in RAM across frames #64

Open ElpadoCan opened 7 months ago

ElpadoCan commented 7 months ago
Traceback (most recent call last):
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\__init__.py", line 163, in inner_function
    result = func(self, *args, **kwargs)
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\core.py", line 3613, in spots_detection
    df_spots_coords = self._spots_detection(
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\core.py", line 3744, in _spots_detection
    labels = pipe.spots_semantic_segmentation(
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\pipe.py", line 269, in spots_semantic_segmentation
    result = filters.global_semantic_segmentation(
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\filters.py", line 352, in global_semantic_segmentation
    nnet_labels = nnet_model.segment(
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\nnet\model.py", line 298, in segment
    prediction, _ = self.model(input_data, verbose=verbose)
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\nnet\models\nd_model.py", line 137, in __call__
    predictions = model_instance.predict(data.images)
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\nnet\models\unet2d_model.py", line 375, in predict
    masks = [
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\nnet\models\unet2d_model.py", line 376, in <listcomp>
    self._predict_img(full_img=image)
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\nnet\models\unet2d_model.py", line 350, in _predict_img
    output = self.net(img)
  File "C:\Users\heroe\miniconda3\envs\acdc\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\nnet\models\unet2D\unet_2D_model.py", line 34, in forward
    x = self.up4(x, x1)
  File "C:\Users\heroe\miniconda3\envs\acdc\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\nnet\models\unet2D\unet_parts.py", line 68, in forward
    return self.conv(x)
  File "C:\Users\heroe\miniconda3\envs\acdc\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\heroe\Desktop\spotMAX\spotmax\nnet\models\unet2D\unet_parts.py", line 25, in forward
    return self.double_conv(x)
  File "C:\Users\heroe\miniconda3\envs\acdc\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\heroe\miniconda3\envs\acdc\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
    input = module(input)
  File "C:\Users\heroe\miniconda3\envs\acdc\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\heroe\miniconda3\envs\acdc\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\heroe\miniconda3\envs\acdc\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: [enforce fail at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9740943872 bytes.
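The failed allocation at the end of the trace (roughly 9.1 GiB requested inside `F.conv2d`) together with the list comprehension in `unet2d_model.py` that collects one mask per frame points at the usual cause of inference-time RAM growth in PyTorch: forward passes run with autograd enabled, so every frame's output keeps its computation graph alive until the whole list is built. Below is a minimal sketch of the standard mitigation, assuming a generic 2D network and an iterable of frames; `net`, `frames`, and `predict_frames` are hypothetical stand-ins, not the SpotMAX API.

```python
import numpy as np
import torch


def predict_frames(net, frames):
    """Run per-frame inference without letting RAM grow across frames.

    ``net`` and ``frames`` are hypothetical stand-ins: any 2D segmentation
    network and any iterable of (H, W) float32 arrays.
    """
    net.eval()
    masks = []
    # no_grad() disables autograd bookkeeping, so no computation graph
    # is retained from one frame to the next.
    with torch.no_grad():
        for image in frames:
            # Add batch and channel dimensions: (H, W) -> (1, 1, H, W)
            img = torch.from_numpy(image).unsqueeze(0).unsqueeze(0)
            output = net(img)
            # Convert to numpy immediately; keeping the raw torch output
            # in the list would pin its storage (and, without no_grad,
            # its whole graph) for the entire video.
            masks.append(output.squeeze().cpu().numpy().copy())
            del img, output
    return masks
```

If memory still climbs with gradients disabled, yielding masks from a generator instead of accumulating them in a list, or processing frames in chunks with an occasional `gc.collect()`, keeps peak usage bounded by roughly one frame's worth of tensors.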
github-actions[bot] commented 1 month ago

Hi there, I'm the spotMAX bot :wave:. This issue has had no activity for more than 180 days, so for now we have marked it as "inactive" until there is some new activity. If the issue has not been solved yet, unfortunately we either haven't had the time to address it or it requires more discussion. That doesn't mean it has been ignored, but a little reminder from your side would help :D. Feel free to reach out to us here or on our forum. If you think this issue is no longer relevant, you may close it yourself. In any case, we apologise for the inconvenience and thank you for your patience and contributions so far!