Currently, because the dimensionality of inputs is parsed using `np.atleast_1d`, scalar inputs result in added dimensions in the outputs for all quantities. For example, if I compute the contribution function for a scalar density with a temperature array of shape `(100,)`, my contribution function will have shape `(100, 1)`. Applying `squeeze` to all outputs would force all of these length-1 dimensions to be dropped, though we would also need some extra checking to ensure that an input array of length 1 still produced an output of shape `(100, 1)`.
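A minimal sketch of the shape behavior described above; the `density`/`temperature` names and the outer-product shape are illustrative stand-ins, not the actual implementation:

```python
import numpy as np

temperature = np.linspace(1e5, 1e7, 100)  # shape (100,)
density_scalar = 1e9                      # scalar input

# np.atleast_1d promotes the scalar to shape (1,), so an output
# computed over both inputs ends up with shape (100, 1).
g = np.zeros((temperature.shape[0], np.atleast_1d(density_scalar).shape[0]))
print(g.shape)            # (100, 1)

# np.squeeze drops every length-1 axis...
print(np.squeeze(g).shape)  # (100,)

# ...but a genuine length-1 density array is squeezed identically,
# so the (100, 1) shape could not be preserved without extra checks.
density_array = np.array([1e9])  # shape (1,)
g2 = np.zeros((temperature.shape[0], density_array.shape[0]))
print(np.squeeze(g2).shape)  # (100,) -- same as the scalar case
```

This is exactly the ambiguity noted above: after `squeeze`, a scalar input and a length-1 array input become indistinguishable in the output shape.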
After thinking this through a bit, I'm against this, because it means the returned shapes would depend on the inputs and thus be inconsistent.