jpchev opened this issue 7 years ago
I am new to JavaScript. I also encountered this problem. After doing some digging, I found it may be caused by the HTML5 postMessage API. The peak memory usage of this code is about 3 GB.
// Allocate a 1 GB buffer, then post a 200-byte view of it to a worker.
var testData = new Uint8Array(1024 * 1024 * 1024);
var testData2 = new Uint8Array(testData.buffer, 0, 200);
var worker = new Worker('../../js/myWorker.js');
worker.postMessage(testData2); // structured clone copies the whole underlying buffer
You need to mark it as transferable [0], otherwise it is copied.
Maybe it's related to Chrome Issue #704099 [1].
[0] https://developer.mozilla.org/en/docs/Web/API/Worker/postMessage
[1] https://bugs.chromium.org/p/chromium/issues/detail?id=704099#c2
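As a minimal sketch (reusing the hypothetical worker script from above), passing the buffer in the transfer list moves it to the worker instead of copying it:

var testData = new Uint8Array(1024 * 1024 * 1024);
var worker = new Worker('../../js/myWorker.js');
// The second argument is the transfer list: ownership of the ArrayBuffer
// moves to the worker, so no copy is made.
worker.postMessage(testData, [testData.buffer]);
// testData is now neutered (byteLength 0) in this context.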
Thanks a lot. My test image is a 555-frame uncompressed CT image (278 MB).
If the ownership of an object is transferred, it becomes unusable (neutered) in the context it was sent from and becomes available only to the worker it was sent to.
This API supports transferring pixelData, but I think it won't work, at least not in all cases.
function addDecodeTask(imageFrame, transferSyntax, pixelData, options) {
var priority = options.priority || undefined;
var transferList = options.transferPixelData ? [pixelData.buffer] : undefined;
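To illustrate why transferring the original buffer directly can break things (a hypothetical caller-side sketch): once the buffer is in the transfer list, the caller's pixelData view is detached, so any later use of it would see an empty array.

// Hypothetical sketch: transferring pixelData.buffer detaches it here.
worker.postMessage({ pixelData: pixelData }, [pixelData.buffer]);
console.log(pixelData.byteLength); // 0 — neutered in the sending context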
I suggest we make a copy of pixelData and transfer the copy to the workers. Something like this:
function addDecodeTask(imageFrame, transferSyntax, pixelData, options) {
  var priority = options.priority || undefined;

  // If the caller does not allow transferring the original pixelData,
  // copy it so the copy can be transferred instead.
  if (!options.transferPixelData) {
    pixelData = new Uint8Array(pixelData);
  }

  var transferList = [pixelData.buffer];

  return cornerstoneWADOImageLoader.webWorkerManager.addTask(
    'decodeTask',
    {
      imageFrame: imageFrame,
      transferSyntax: transferSyntax,
      pixelData: pixelData,
      options: options
    }, priority, transferList).promise;
}
I did some tests with compressed images and I found something interesting. (The original image size is 277 MB, the compressed image size is 91 MB.)
JPEG-LS Lossless Image Compression
transferSyntax "1.2.840.10008.1.2.4.80"
1. Web workers (with the JPEG-LS decoder) use too much memory: hundreds of megabytes each.
2. Copying and transferring pixelData is much easier and safer, but it uses more memory and could be optimized.
Please excuse me for my poor English.
The version of CharLS (the JPEG-LS decoder) used in this project uses a fixed 400 MB memory pool [0], so it's best not to use more than 2 concurrent workers. Alternatively, you can use a dynamic-memory build that uses only what it needs, but it is quite a bit slower.
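A minimal sketch of capping the worker count, assuming the standard webWorkerManager.initialize configuration shape (the worker script path here is a placeholder):

var config = {
  webWorkerPath: '../../dist/cornerstoneWADOImageLoaderWebWorker.js', // placeholder path
  maxWebWorkers: 2 // each CharLS instance reserves a fixed 400 MB pool
};
cornerstoneWADOImageLoader.webWorkerManager.initialize(config);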
If you want to fill a contiguous array from a collection of slices, it's better not to pass the destination array to the workers, as you would be limited to a single worker. Instead, you can set() each slice into the contiguous array in the image-loaded callback and then discard the 2D image buffer.
[0] https://github.com/chafey/charls/blob/master/emccbuild.sh
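A rough sketch of that set() approach, assuming a list of imageIds, an illustrative per-slice length, and cornerstone's loadAndCacheImage()/getPixelData() calls:

var sliceLength = 512 * 512; // illustrative slice size in pixels
var volume = new Uint8Array(sliceLength * imageIds.length);

imageIds.forEach(function (imageId, sliceIndex) {
  cornerstone.loadAndCacheImage(imageId).then(function (image) {
    // Copy this slice into the contiguous volume buffer, then let the
    // per-slice 2D buffer be garbage collected.
    volume.set(image.getPixelData(), sliceIndex * sliceLength);
  });
});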
@jpchev, I have also been getting a lot of OOM issues in Chrome lately, especially on Linux. It seems that only 1.8 GB of memory is available per process and that multiple unrelated tabs can share the same process.
I have filed the following bug reports:
https://bugs.chromium.org/p/chromium/issues/detail?id=704099
https://bugs.chromium.org/p/chromium/issues/detail?id=704521
Maybe it is related? You can limit the cache with cornerstone.imageCache.setMaximumSizeBytes(100 * 1024 * 1024);
@jpambrun thanks for your reply. I think the memory is taken by the codecs integrated into cornerstone; these settings actually help to lower the memory consumption:
loadCodecsOnStartup: false,
initializeCodecsOnStartup: false,
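For context, a sketch of where these flags sit, assuming the usual taskConfiguration shape passed to webWorkerManager.initialize (the codecs path is a placeholder):

cornerstoneWADOImageLoader.webWorkerManager.initialize({
  taskConfiguration: {
    decodeTask: {
      loadCodecsOnStartup: false,       // don't fetch the codecs until needed
      initializeCodecsOnStartup: false, // don't allocate codec memory up front
      codecsPath: '../../dist/cornerstoneWADOImageLoaderCodecs.js' // placeholder path
    }
  }
});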
I have no idea about the Google Chrome issues, but it seems we're hitting the bugs in your reports.
Any update on this?
As a workaround I use this call to free the memory
cornerstoneWADOImageLoader.webWorkerManager.terminate();
@nyacoub when do you terminate the web workers? While loading different series simultaneously, if we terminate the workers, won't the other series be interrupted? I decode the images on the server, but that puts extra load on it. I hope this will be resolved soon.
Hi, we use cornerstoneWADOImageLoader to work with several series. Images are encoded as lossy JPEG.
I'm getting an out-of-memory error while prefetching a series of compressed images. In 32-bit Google Chrome the error happens after a few series have been loaded (prefetched); in 64-bit Google Chrome it happens as soon as the first series is loaded.
Can you please give some info on how to empty the cache (I've seen you have a similar task in the backlog)?
I'm using the following configuration.