Open billbrod opened 3 months ago
Check out this pull request on ReviewNB to see visual diffs & provide feedback on Jupyter Notebooks.
Attention: Patch coverage is 15.52795% with 272 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| ...ptic/simulate/models/portilla_simoncelli_masked.py | 14.73% | 272 Missing :warning: |

| Files with missing lines | Coverage Δ | |
|---|---|---|
| src/plenoptic/simulate/models/__init__.py | 100.00% <100.00%> (ø) | |
| ...c/plenoptic/simulate/models/portilla_simoncelli.py | 97.76% <100.00%> (+0.35%) | :arrow_up: |
| src/plenoptic/tools/conv.py | 100.00% <100.00%> (ø) | |
| ...ptic/simulate/models/portilla_simoncelli_masked.py | 14.73% <14.73%> (ø) | |
Adding thoughts to questions/comments inline below:
Currently, both of these approaches are supported -- should they be? Or should we require the user to pass two masks that we multiply together? I can't think of any case in which someone would pass a list with three or more mask tensors, though that would work as well. Even if we allow a list of masks, should we allow more than two?
At least for me, I don't see a need for more than two mask tensors, or more than two masks generally, though for simplicity, allowing either format of mask input may be useful. I guess large images are where using the product of the two masks will be important -- I can imagine a world where I will want to measure statistics on larger images where this matters.
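For concreteness, here is a minimal sketch of the two formats as plain tensor algebra (shapes follow this PR's `(masks, height, width)` convention; the sizes are illustrative):

```python
import torch

angle = torch.rand(8, 64, 64)  # polar-angle masks: (masks, height, width)
ecc = torch.rand(4, 64, 64)    # eccentricity masks

# Format 1: a list of two mask tensors, multiplied inside the model.
as_list = [angle, ecc]

# Format 2: the user pre-multiplies them into a single mask tensor
# (outer product over the mask dimension, then flattened).
product = (angle[:, None] * ecc[None, :]).reshape(-1, 64, 64)
as_single = [product]

print(product.shape)  # torch.Size([32, 64, 64])
```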
Before moving on, another question: how to handle mask normalization? The model works best if the individual masks (as shown above) sum to approximately 1 because, otherwise, some of the statistics end up being much larger than the others and optimization is hard. Currently, we're not normalizing within the model code, but should we?
I would normalize within the model code -- it's one of the things I could imagine a first-time user forgetting, even if you make it very explicit. At the very least, warn when the masks aren't normalized, to remind the user that this is an issue.
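Something like the following minimal sketch (the helper name is hypothetical, not plenoptic's API):

```python
import torch

def normalize_masks(masks: torch.Tensor) -> torch.Tensor:
    """Rescale each mask to sum to 1 over its spatial dimensions.

    Hypothetical helper: `masks` is a 3d tensor of shape
    (masks, height, width), following this PR's convention.
    """
    sums = masks.sum(dim=(-2, -1), keepdim=True)
    if torch.any(sums < 1e-8):
        # per the discussion below, tiny window sums interact badly
        # with the model's epsilon, so flag them instead of dividing
        raise ValueError("Some masks sum to (nearly) zero.")
    return masks / sums
```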
For the pooling regions, you want to avoid aliasing. This is an interaction between the sampling distance and the function used to generate the regions, and can be checked by seeing whether the pooled outputs are equivariant to translation (if the windows are Cartesian) or to rotation / dilation (if the windows are polar). I don't think the model should check this, but we can show an example of how to check it in our documentation and point out that it is important. A rough check might look like the sketch below.
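A sketch of such a check for Cartesian windows, assuming the windows are laid out at a regular horizontal spacing, one window per index (illustrative, not the documentation example itself):

```python
import torch

def translation_equivariance_gap(masks, image, spacing):
    """Aliasing check sketch: translating the image by one window
    spacing should permute the pooled values by one window index.

    masks: (n_masks, height, width), windows at regular horizontal
    intervals of `spacing` pixels; image: (height, width).
    """
    def pool(m, im):
        # weighted sum of the image within each window
        return (m * im).sum(dim=(-2, -1))

    shifted = torch.roll(image, shifts=spacing, dims=-1)
    # compare pooled(shifted image) against pooled(image) rolled by one
    # window index; a large gap indicates aliasing
    gap = pool(masks, shifted) - torch.roll(pool(masks, image), 1, dims=0)
    return gap.abs().max()
```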
This seems reasonable to me.
The foveated pooling windows are not currently in plenoptic and are a pain to implement yourself. Pytorch implementations exist (as well as ways to get existing windows into pytorch), including one that I wrote. I can show examples making use of some of these; is that sufficient? None of them are python packages (so they can't be installed with pip), but I could at least package the one I wrote. I've hesitated because it's definitely research code, but it would simplify the process for people.
I have always found using the pooling-windows repo straightforward, so I think that is sufficient. You could package it, but cloning it has always been easy to get running for me.
This pull request adds the pooled texture statistic model from Freeman and Simoncelli, 2011 and elsewhere. In our implementation, the statistics are computed with user-defined `masks`, a list of 3d tensors of shape `(masks, height, width)`. The list may contain any number of tensors, and they will be multiplied together to create the final pooling regions. The intended use case is to define the regions as two-dimensional and separable, either in x and y, or in polar angle and eccentricity (see the sketch below).
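A minimal sketch of that separable construction, building 1d windows and broadcasting them into the 3d mask tensors described above (the cosine-squared window shape is an illustrative choice, not the windows used in this PR):

```python
import torch

def windows_1d(size, n):
    # n cosine-squared bumps tiling [0, size) -- an illustrative window
    # shape, not the one used in this PR
    x = torch.arange(size, dtype=torch.float32)
    centers = torch.linspace(0, size, n + 1)[:-1] + size / (2 * n)
    t = torch.clamp((x - centers[:, None]) * n / size, -0.5, 0.5)
    return torch.cos(t * torch.pi) ** 2  # (n, size)

height = width = 256
wy = windows_1d(height, 4)  # varies along y
wx = windows_1d(width, 4)   # varies along x

# each list entry is a 3d tensor of shape (masks, height, width)
masks = [
    wy[:, :, None].expand(-1, -1, width),
    wx[:, None, :].expand(-1, height, -1),
]

# multiplying the entries together (outer product over the mask
# dimension) yields one pooling region per (y, x) window pair
regions = (masks[0][:, None] * masks[1][None, :]).reshape(-1, height, width)
print(regions.shape)  # torch.Size([16, 256, 256])
```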
This pull request contains a notebook, `pooled_texture_model`, which demonstrates the usage of the model with different images and types of windows.

This pull request is not yet ready; the following questions still need to be addressed:
Math / model details:

- An epsilon is added so that the `sqrt` and division operations in the model don't return NaNs or very large numbers. This has worked for the models and windows I've tested, assuming the normalization above has been done (I noticed that if the window sums are too small, a larger epsilon is needed). But it feels fairly arbitrary; I can try to figure out the relationship between the window sum and the epsilon value and then do one of: use the window sum to determine epsilon; raise an error if the window sum looks wrong; or normalize the windows myself. (A small illustration of the guarded operations is sketched below.)
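A minimal illustration of the kind of epsilon-guarded operations in question (the values and expressions are illustrative, not the model's actual statistics):

```python
import torch

eps = 1e-6  # illustrative value; the right choice depends on the window sums
var = torch.tensor([4.0, 1e-12, 0.0])

std = torch.sqrt(var + eps)        # without eps, the gradient blows up at 0
normed = torch.tensor(1.0) / (std + eps)  # without eps, division can explode
print(std, normed)
```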
Software design:

- The model uses `torch.gather` at run-time. This works well on the GPU, but not on the CPU, and isn't very memory-efficient. I'm not sure if there's a better way here; a miniature example of the pattern is sketched below.
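For readers unfamiliar with the pattern, a miniature and purely illustrative example of run-time indexing with `torch.gather` (the shapes and indices do not reflect the model's actual layout):

```python
import torch

stats = torch.rand(4, 10)                       # (masks, n_stats)
idx = torch.tensor([0, 2, 5]).expand(4, -1)     # indices built at run-time
subset = torch.gather(stats, dim=1, index=idx)  # (masks, 3)
print(subset.shape)  # torch.Size([4, 3])
```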
User interface:

- The `masks` argument is a list containing some number of 3d tensors, which are all multiplied together. In practice, I think the majority of users will pass two tensors here (x/y or angle/eccentricity), and I can't imagine a use case for three or more. Should I just require two tensors as the input? Or keep the list (allowing one or two), but raise an exception if more than two are passed? Allowing a single mask lets me test the pooled version against the original (see below).
- Pytorch implementations of the foveated pooling windows exist, but none of them are python packages (so they can't be installed with `pip`). I could at least package the one I wrote; I've hesitated because it's definitely research code, but it would simplify the process for people.
Testing / relationship to other models:

- Using a single mask, I can test the pooled version against the original model, lining the statistics up via `_representation_scales`. I would like to do something similar but with a "folded image", like so:
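A rough sketch of one way to construct such a folded image and its quadrant masks (my reading of the description; the original setup may differ in detail):

```python
import torch

# Four textures tiled into a 2x2 "folded" image, with four binary
# quadrant masks so the pooled regions line up with the tiles.
h = w = 128
tiles = [torch.rand(h, w) for _ in range(4)]
folded = torch.cat([torch.cat(tiles[:2], dim=1),
                    torch.cat(tiles[2:], dim=1)], dim=0)  # (2h, 2w)

masks = torch.zeros(4, 2 * h, 2 * w)
masks[0, :h, :w] = 1.0  # top-left
masks[1, :h, w:] = 1.0  # top-right
masks[2, h:, :w] = 1.0  # bottom-left
masks[3, h:, w:] = 1.0  # bottom-right
masks = masks / masks.sum(dim=(-2, -1), keepdim=True)  # each sums to 1
```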
However, this doesn't work: in the regular model run on the "folded image" above, the steerable pyramid coefficients, their magnitudes, and the reconstructed lowpass images have zero mean within each of the four regions, whereas in the pooled version they have zero mean globally (across the whole image) but not within each region. This ends up being fine for the purposes of synthesis, but means the outputs are quite different.
Maybe I can just synthesize the two metamers and (after rearrangement) check that each version of the model agrees it's a good metamer?