rafaelorozco opened 2 years ago
Base: 88.11% // Head: 87.94% // Decreases project coverage by -0.16% :warning:
Coverage data is based on head (9fb0572) compared to base (304c778). Patch coverage: 91.74% of modified lines in pull request are covered.
:exclamation: Current head 9fb0572 differs from pull request most recent head 34a67dd. Consider uploading reports for the commit 34a67dd to get more accurate results.
- Change the IRIM block to generate multiple RBs with different dilations and different hidden channels. This is the proper Welling implementation.
- This will break some examples. If we are okay with this new block, I will go and change all examples to run properly. It is as easy as changing `NetworkIRIM(n_in, n_hidden, ....)` to `NetworkIRIM(n_in, [n_hidden], [4])`, which defines a single U-net layer with conv dilation 4, i.e. the current IRIM implementation.
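To illustrate the migration, a rough sketch of the constructor change (the trailing arguments after `n_in`/`n_hidden` are elided as in the note above, and the two-block call is a hypothetical example of the new vector arguments, not a call taken from the code):

```julia
# Old call: a single hidden-channel count (the current IRIM implementation).
G = NetworkIRIM(n_in, n_hidden, ....)

# New call: vectors of hidden channels and dilations, one entry per RB.
# A single entry with dilation 4 reproduces the old single-layer behavior.
G = NetworkIRIM(n_in, [n_hidden], [4])

# Hypothetical: two RBs with different hidden channels and dilations,
# as in the Welling implementation described above.
G = NetworkIRIM(n_in, [n_hidden, 2n_hidden], [1, 4])
```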
- Add a new network, the invertible U-net. This is basically a single unrolled loop iteration of IRIM; the name comes from the Welling code.
- It directly takes in a precomputed gradient, meant for inverse problems where your operator is too expensive to use online.
- It doesn't have aggressive memory savings such as the in-place 1x1 convolution yet, but it should work well with moderately sized 3D problems. I will be testing this.