In general, the parameter values passed to a method can affect how much memory its processing requires (padding, for example). This is why it's an intended feature of httomo to be able to pass a method's parameter values to the memory estimators in httomolibgpu. However, this functionality appears not to be working in httomo.
In the `HttomolibgpuWrapper` class, the `calc_max_slices()` method has a section that appears to be intended for this purpose: https://github.com/DiamondLightSource/httomo/blob/c62f9487e82f08ae7db06661769b3ad9794d7cb6/httomo/wrappers_class.py#L362-L373. However, this code appears to be getting the default parameter values of the function to execute via the `Signature` object stored in the `self.meta` attribute (a `MethodMeta` object) and passing those through to the memory estimator, rather than the parameter values provided in the YAML.
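To make the suspected behaviour concrete, here is a minimal, self-contained sketch. This is not httomo's actual code: `normalize` below is a stand-in with the same defaults, and both helper functions are hypothetical. It contrasts reading only the `Signature` defaults (the suspected current behaviour) with overlaying the YAML-provided values on top of them (the intended behaviour):

```python
import inspect

# Hypothetical stand-in for the real httomolibgpu function; only the
# signature's default values matter for this illustration.
def normalize(data, cutoff=10.0, minus_log=False,
              nonnegativity=False, remove_nans=False):
    return data

def kwargs_from_signature(func):
    # Suspected current behaviour: read ONLY the defaults off the
    # Signature object, so YAML-provided values never reach the estimator.
    sig = inspect.signature(func)
    return {
        name: param.default
        for name, param in sig.parameters.items()
        if param.default is not inspect.Parameter.empty
    }

def kwargs_with_config(func, yaml_params):
    # Intended behaviour: start from the defaults, then overlay the
    # values given in the YAML pipeline configuration.
    kwargs = kwargs_from_signature(func)
    kwargs.update({k: v for k, v in yaml_params.items() if k in kwargs})
    return kwargs

yaml_params = {"cutoff": 15.0, "minus_log": True}  # made-up YAML values
print(kwargs_from_signature(normalize))
# {'cutoff': 10.0, 'minus_log': False, 'nonnegativity': False, 'remove_nans': False}
print(kwargs_with_config(normalize, yaml_params))
# {'cutoff': 15.0, 'minus_log': True, 'nonnegativity': False, 'remove_nans': False}
```

The first dict matches the kwargs observed in the pdb session below, which is what suggests the overlay step is missing.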
For example, with the `normalize` function configured so that the values of `cutoff` and `minus_log` differ from their defaults of `10.0` and `False` respectively, it's expected that these updated values would be passed to the memory estimator of `normalize`.

Instead, it appears that the default values are still passed through to the memory estimator:
```
(Pdb) l .
 36     ) -> Tuple[int, np.dtype, Tuple[int, int]]:
 37         """Calculate the max chunk size it can fit in the available memory"""
 38         #print(f"In normalize memory estimator, kwargs is {kwargs}")
 39
 40         # normalize needs space to store the darks + flats and their means as a fixed cost
 41 B->     flats_mean_space = np.prod(non_slice_dims_shape) * float32().nbytes
 42         darks_mean_space = np.prod(non_slice_dims_shape) * float32().nbytes
 43         available_memory -= flats_mean_space + darks_mean_space
 44
 45         # it also needs space for data input and output (we don't care about slice_dim)
 46         # data: [x, 10, 20], dtype => other_dims = [10, 20]
(Pdb) kwargs
{'cutoff': 10.0, 'minus_log': False, 'nonnegativity': False, 'remove_nans': False}
```
This issue is somewhat connected to https://github.com/DiamondLightSource/httomolibgpu/issues/110, in the sense that a solution to that issue would be to somehow pass the darks/flats (or their shape) to the memory estimator. There's already an intention in httomo to pass a method's parameter values to its memory estimator, so one question is: is it possible to get the darks/flats from httomo to the memory estimator via this intended feature, or should it be done by some other means?
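If the parameter-passing route were used, the estimator could accept the darks/flats shapes as ordinary keyword arguments. The sketch below is hypothetical (the function name, signature, and per-slice cost model are assumptions, not the real httomolibgpu estimator API); it only illustrates how known shapes would let the fixed cost of the darks/flats stacks be subtracted from the budget, following the mean-frame accounting visible in the pdb listing above:

```python
import numpy as np

def normalize_max_slices_sketch(available_memory, non_slice_dims_shape,
                                darks_shape=None, flats_shape=None):
    # Hypothetical estimator: if httomo forwarded the darks/flats shapes
    # as parameters, their fixed memory cost could be counted here.
    itemsize = np.float32().nbytes  # 4 bytes per float32 element

    # Fixed cost: the darks/flats stacks themselves (when their shapes
    # are known)...
    fixed_cost = 0
    for shape in (darks_shape, flats_shape):
        if shape is not None:
            fixed_cost += int(np.prod(shape)) * itemsize
    # ...plus one float32 mean frame each for darks and flats, as in the
    # existing estimator code shown above.
    fixed_cost += 2 * int(np.prod(non_slice_dims_shape)) * itemsize

    # Assumed per-slice cost: one input frame and one output frame.
    per_slice = 2 * int(np.prod(non_slice_dims_shape)) * itemsize

    remaining = available_memory - fixed_cost
    return max(0, remaining // per_slice)

# e.g. a 1 MB budget, 10x20 frames, five dark and five flat frames
print(normalize_max_slices_sketch(1_000_000, (10, 20),
                                  darks_shape=(5, 10, 20),
                                  flats_shape=(5, 10, 20)))
# 619
```

Whether the shapes should travel through the intended parameter-passing feature or through a separate channel is exactly the open question above; the sketch is agnostic about how the two shape arguments arrive.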