We need a more efficient Python approach to downsampling by 2.
Solution 1:
Change the Python code from the default order-3 (cubic) spline to order-1 (trilinear) interpolation and turn prefiltering off (sketch below):
result = scipy.ndimage.interpolation.zoom(array, (0.5, 0.5, 0.5), output=None, order=1, mode='constant', cval=0.0, prefilter=False)
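
For reference, a minimal self-contained sketch of Solution 1 (the random array is just a stand-in for the loaded volume; the tomviz transform wiring is not shown):

import numpy as np
import scipy.ndimage

# Stand-in volume; in tomviz this would be the float32 array of the loaded dataset.
array = np.random.rand(64, 64, 64).astype(np.float32)

# order=1 (trilinear) with prefilter=False skips the spline-coefficient
# prefiltering pass, which should cut down on temporary allocations.
result = scipy.ndimage.zoom(array, (0.5, 0.5, 0.5), order=1,
                            mode='constant', cval=0.0, prefilter=False)
print(result.shape)  # (32, 32, 32)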
Solution 2:
FFT the volume and throw away the high-frequency half of the data (sketch below).
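
A rough NumPy sketch of Solution 2 (Fourier cropping), assuming a float32 volume with even dimensions; the array is again a stand-in, and the intensity rescaling is one reasonable choice rather than the definitive one:

import numpy as np

array = np.random.rand(64, 64, 64).astype(np.float32)

# Forward FFT with the zero frequency shifted to the center of the spectrum.
f = np.fft.fftshift(np.fft.fftn(array))
nz, ny, nx = f.shape

# Keep only the central half of the spectrum along each axis, i.e. discard
# the high-frequency half of the data in every dimension.
crop = f[nz//4:nz - nz//4, ny//4:ny - ny//4, nx//4:nx - nx//4]

# Inverse transform; rescale so the mean intensity matches the original.
result = np.fft.ifftn(np.fft.ifftshift(crop)).real.astype(np.float32)
result *= crop.size / array.size
print(result.shape)  # (32, 32, 32)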
Problem:
When I load a large 1GB dataset (e.g. "LARGE_PtCu_NanoParticles.tif", also known as the tomo_5 recon), it takes up 2GB of system memory (probably the 16-bit to 32-bit cast, which is fine). However, when I run a Python transform to downsample by a factor of 2, tomviz uses 10GB of memory.
If a volume render is also in place, that takes up an additional 5GB and I run out of memory.
I think it is OK to have large memory requirements (>32GB) when users want to work with large datasets, but it is worrisome when downsampling requires 5x the memory of the original dataset.