mlampros / OpenImageR

Image processing Toolkit in R
https://mlampros.github.io/OpenImageR/

bad_alloc error #16

Closed: spono closed this issue 4 years ago

spono commented 4 years ago

Hi, I'm trying to run a segmentation on a fairly large image (approx. 2 GB), but I'm getting the following error:

error: arma::memory::acquire(): out of memory
Error in interface_superpixels(input_image, method, superpixel, compactness,  : 
  std::bad_alloc

The laptop is new and has 16 GB of RAM, so I don't see how the error can happen. A quick run of memory.limit() reports a limit of 1.759219e+13, so that doesn't look like a possible source of trouble.
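(For reference, a minimal way to check how much RAM the loaded image actually occupies, assuming im is the array returned by OpenImageR::readImage:)

# sketch: inspect what is actually held in RAM before calling superpixels
# (assumes im is the array returned by OpenImageR::readImage)
dim(im)                                        # rows, columns, bands
print(utils::object.size(im), units = "auto")  # in-memory size, not file size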

Any other ideas or suggestions? Thanks in advance!

mlampros commented 4 years ago

hi @spono,

would you mind sharing more information about your task, so I can find out whether it's a bug in the OpenImageR::superpixels function? For instance, details about the input image and the code you used would be useful.

spono commented 4 years ago

Sorry; I'm interested in testing OpenImageR::superpixels for the unsupervised segmentation of aerial imagery with 15 x 15 cm pixels.

I've been playing with your example code:

# im was loaded beforehand with OpenImageR::readImage()
res_slico = superpixels(input_image = im,
                        method = "slico",
                        superpixel = 200,
                        return_slic_data = TRUE,
                        return_labels = TRUE,
                        write_slic = "",
                        verbose = TRUE)

and what I reported above is all that gets printed to the console.

BTW, after some further testing I can say it is not an issue with the function: when readImage is used, the image takes up roughly 6x its file size in memory (which seems quite weird to me...). That's how 2.2 GB turned into almost 13 GB, completely filling the available RAM as soon as processing started.
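(A back-of-the-envelope sketch of why the inflation happens, with hypothetical dimensions: R stores pixel values as 8-byte doubles, while an 8-bit image needs only 1 byte per value on disk, before compression:)

# sketch: why a loaded image is much larger than the file
# (hypothetical dimensions; R numeric arrays use 8 bytes per value,
#  while the file stores 1 byte per 8-bit value, plus compression)
w = 20000; h = 20000; bands = 3
w * h * bands          # 1.2e9 -> ~1.2 GB of raw 8-bit values
w * h * bands * 8      # 9.6e9 -> ~9.6 GB once loaded as doubles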

mlampros commented 4 years ago

hi @spono,

depending on the type of your input image (.png, .jpeg, .tif), the readImage function uses the corresponding package (png, jpeg, tiff).
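(An illustration of the dispatch, assuming a .tif input; the file name is a placeholder:)

# illustration: a .tif is read through the tiff package under the hood,
# which (like readImage) returns a numeric array of doubles in [0, 1]
im = tiff::readTIFF('sample_image.tif')
str(im)   # e.g. num [1:rows, 1:cols, 1:bands]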

Just for reference, out of curiosity (normally I avoid loading images of that size from within R), I loaded a 3-dimensional .tif sample aerial image of approx. 735 MB with the following functions (im is the name of the loaded 3-dimensional image):

OpenImageR::readImage

utils::object.size(im) returns 9910633192 bytes, which is approx. 9.91 GB

magick::image_read

it gave the following error for my image: "Magick: Unknown field ..."

imager::load.image

utils::object.size(im) returns 9910633528 bytes, which is approx. 9.91 GB

skimage (Python from within R using the reticulate package)

skim = reticulate::import('skimage.io', convert = T)
im = skim$imread('sample_image.tif')
utils::object.size(im) returns 4955316704 bytes, which is approx. 4.96 GB

skimage (directly in Python)

from skimage import io
from sys import getsizeof

im = io.imread('sample_image.tif')
getsizeof(im.tobytes()) returns 1238829160 bytes, which is approx. 1.24 GB
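(A side note on how these measurements relate, my reading of the numbers assuming an 8-bit source image: ~1.24 GB is the raw uint8 data; R doubles are 8 bytes per value, which gives the ~9.91 GB figures, and reticulate appears to convert the uint8 array to 4-byte R integers, which gives the ~4.96 GB figure:)

# plain arithmetic relating the measurements above
raw = 1238829160       # raw uint8 bytes (Python, im.tobytes())
raw * 8                # 9910633280 ~ doubles  (readImage, load.image)
raw * 4                # 4955316640 ~ integers (skimage via reticulate)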


spono commented 4 years ago

Great, thank you very much for the detailed explanation! It's my first time with such analyses and I'm still learning how to tackle the objective properly. It seems the best way will be to retile all the images into smaller chunks and/or scale up the processing power. Sorry for the false alarm!

mlampros commented 4 years ago

hi @spono,

if splitting the image into multiple tiles (for instance using 'gdal') is an option for you, then you will definitely be able to work with those tiles from within R.
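For example, a sketch of such a workflow (untested; gdal_retile.py ships with GDAL, and the tile size and paths are placeholders):

# sketch: tile the big image with GDAL, then process each tile in R
system("gdal_retile.py -ps 2048 2048 -targetDir tiles/ big_image.tif")

tile_files = list.files("tiles", pattern = "\\.tif$", full.names = TRUE)

for (f in tile_files) {
  im  = OpenImageR::readImage(f)
  res = OpenImageR::superpixels(input_image = im, method = "slico",
                                superpixel = 200, return_slic_data = TRUE,
                                return_labels = TRUE, write_slic = "",
                                verbose = FALSE)
  # ... save or aggregate res$labels for this tile
}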

stale[bot] commented 4 years ago

This is Robo-lampros because the Human-lampros is lazy. This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 7 days if no further activity occurs. Feel free to re-open a closed issue and the Human-lampros will respond.