MaxSchambach closed this pull request 5 years ago.
Hi @MaxSchambach,
It totally does make sense, and I had this issue when multi-processing some heavy datasets too. I did not go down that road because I find the manual deletion and garbage-collection invocation quite inelegant, and once you start peppering your code with it, it is hard to find a sweet spot where to stop.
Ideally, I would like the implementation to be a one-liner function call that takes care of both deletion and garbage collection while also allowing the behaviour to be deactivated. The problem is that, assuming it is possible at all, it will require some massaging because the variables are scoped in the responsible function.
I'm happy to merge the PR either way but keen to discuss this further if you can think of an elegant solution.
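To make the scoping caveat concrete, here is a minimal sketch of what such a one-liner helper could look like and why it falls short for function locals in CPython; the helper and its switch are hypothetical and not part of this PR:

```python
import gc

CLEANUP_ENABLED = True  # hypothetical switch to deactivate the behaviour


def cleanup(namespace, *names):
    """Drop the given names from *namespace* and run a collection pass."""

    if not CLEANUP_ENABLED:
        return

    for name in names:
        namespace.pop(name, None)

    gc.collect()


def caller():
    big = list(range(10 ** 6))

    # The catch: inside a function, locals() is only a snapshot in CPython,
    # so popping 'big' from that dict does not unbind the actual local
    # variable and the list above is not released here. Only an explicit
    # `del big` in the caller's own scope does that, which is why such a
    # helper needs "some massaging".
    cleanup(locals(), 'big')
```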
Yes, it is not the most elegant way in general, but the most straightforward.
Introducing a cleanup function might not be much more elegant and, in my opinion, makes things less Pythonic by being less explicit: the function would still need to be passed the variables to delete, so you are basically hiding the explicit deletion and garbage collection in a wrapper. I've had this problem in other Python applications too and am not too happy about the solution, but in this case, with only 3 files affected and only a couple of deletions, it is the most straightforward and, in that sense, the most elegant way to deal with it.
If somebody knows a more Pythonic way to deal with this, I'd be happy to learn :)
An additional thought: it might be good to benchmark how this affects performance in other use cases (i.e., calling the function often on smaller inputs). I could imagine that invoking the GC manually that often could lead to a big performance hit.
I assume just deleting the variables without manually calling the GC didn't lead to the memory being freed in a timely manner?
I could imagine that manually calling the GC that often could lead to a big performance hit.
That was the original reasoning behind having it wrapped in a function so that we could disable it easily if required. I was reading a few threads on how it could potentially affect performance.
We could also reduce the number of intermediates a bit but it would not be as efficient as the current systematic deletion + gc invocation.
Is it possible to reduce memory overhead by doing the computations in-place? Copy the incoming array once and do everything from then on without allocating new numpy arrays?
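For concreteness, a minimal sketch of what in-place updates via NumPy's out= parameter would look like; the shapes mirror the test image discussed later and the variable names are only illustrative, not the actual implementation:

```python
import numpy as np

# Illustrative buffers roughly matching a 5634 x 3753 image.
R = np.random.random((3753, 5634))
G = np.random.random((3753, 5634))

R_G = np.empty_like(R)       # allocated once, reused afterwards
np.subtract(R, G, out=R_G)   # R - G without creating a temporary array

# If R itself is no longer needed, its buffer can be overwritten directly.
np.subtract(R, G, out=R)
```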
Is it possible to reduce memory overhead by doing the computations in-place?
I don't think that's a good idea. Most of the intermediate variables are used multiple times; recalculating them in place every time would hurt performance, especially since some of those calculations are computationally heavy for large images.
I will start a thread on SO to see if anybody has a bright idea we did not think about.
I will start a thread on SO to see if anybody has a bright idea we did not think about.
It does not seem to have generated much new input...
Yes, not much food for thought unfortunately.
I ran a quick test with a 5634 × 3753 image and I get roughly a 6.3 GB memory bump when using Menon (2007):
Filename: /Users/kelsolaar/Documents/Development/colour-science/colour-demosaicing/colour_demosaicing/bayer/demosaicing/menon2007.py
Line # Mem usage Increment Line Contents
================================================
218 4353.8 MiB 4353.8 MiB @profile
219 def refining_step_Menon2007(RGB, RGB_m, M):
220 """
221 Performs the refining step on given *RGB* colourspace array.
222
223 Parameters
224 ----------
225 RGB : array_like
226 *RGB* colourspace array.
227 RGB_m : array_like
228 *Bayer* CFA red, green and blue masks.
229 M : array_like
230 Estimation for the best directional reconstruction.
231
232 Returns
233 -------
234 ndarray
235 Refined *RGB* colourspace array.
236
237 Examples
238 --------
239 >>> RGB = np.array(
240 ... [[[0.30588236, 0.35686275, 0.3764706],
241 ... [0.30980393, 0.36078432, 0.39411766],
242 ... [0.29607844, 0.36078432, 0.40784314],
243 ... [0.29803923, 0.37647060, 0.42352942]],
244 ... [[0.30588236, 0.35686275, 0.3764706],
245 ... [0.30980393, 0.36078432, 0.39411766],
246 ... [0.29607844, 0.36078432, 0.40784314],
247 ... [0.29803923, 0.37647060, 0.42352942]]])
248 >>> RGB_m = np.array(
249 ... [[[0, 0, 1],
250 ... [0, 1, 0],
251 ... [0, 0, 1],
252 ... [0, 1, 0]],
253 ... [[0, 1, 0],
254 ... [1, 0, 0],
255 ... [0, 1, 0],
256 ... [1, 0, 0]]])
257 >>> M = np.array(
258 ... [[0, 1, 0, 1],
259 ... [1, 0, 1, 0]])
260 >>> refining_step_Menon2007(RGB, RGB_m, M)
261 array([[[ 0.30588236, 0.35686275, 0.3764706 ],
262 [ 0.30980393, 0.36078432, 0.39411765],
263 [ 0.29607844, 0.36078432, 0.40784314],
264 [ 0.29803923, 0.3764706 , 0.42352942]],
265 <BLANKLINE>
266 [[ 0.30588236, 0.35686275, 0.3764706 ],
267 [ 0.30980393, 0.36078432, 0.39411766],
268 [ 0.29607844, 0.36078432, 0.40784314],
269 [ 0.29803923, 0.3764706 , 0.42352942]]])
270 """
271
272 4837.8 MiB 484.0 MiB R, G, B = tsplit(RGB)
273 5321.7 MiB 484.0 MiB R_m, G_m, B_m = tsplit(RGB_m)
274 5483.1 MiB 161.3 MiB M = as_float_array(M)
275
276 # Updating of the green component.
277 5644.4 MiB 161.3 MiB R_G = R - G
278 5805.7 MiB 161.3 MiB B_G = B - G
279
280 5805.7 MiB 0.0 MiB FIR = np.ones(3) / 3
281
282 5805.7 MiB 0.0 MiB B_G_m = np.where(
283 5805.7 MiB 0.0 MiB B_m == 1,
284 5697.6 MiB -108.1 MiB np.where(M == 1, _cnv_h(B_G, FIR), _cnv_v(B_G, FIR)),
285 5697.6 MiB 0.0 MiB 0,
286 )
287 5697.6 MiB 0.0 MiB R_G_m = np.where(
288 5697.6 MiB 0.0 MiB R_m == 1,
289 5922.3 MiB 224.6 MiB np.where(M == 1, _cnv_h(R_G, FIR), _cnv_v(R_G, FIR)),
290 5922.3 MiB 0.0 MiB 0,
291 )
292
293 6083.6 MiB 161.3 MiB G = np.where(R_m == 1, R - R_G_m, G)
294 6190.3 MiB 106.7 MiB G = np.where(B_m == 1, B - B_G_m, G)
295
296 # Updating of the red and blue components in the green locations.
297 # Red rows.
298 6351.0 MiB 160.7 MiB R_r = np.transpose(np.any(R_m == 1, axis=1)[np.newaxis]) * np.ones(R.shape)
299 # Red columns.
300 6286.3 MiB -64.7 MiB R_c = np.any(R_m == 1, axis=0)[np.newaxis] * np.ones(R.shape)
301 # Blue rows.
302 6250.1 MiB -36.3 MiB B_r = np.transpose(np.any(B_m == 1, axis=1)[np.newaxis]) * np.ones(B.shape)
303 # Blue columns
304 6216.2 MiB -33.9 MiB B_c = np.any(B_m == 1, axis=0)[np.newaxis] * np.ones(B.shape)
305
306 6278.2 MiB 62.0 MiB R_G = R - G
307 6340.2 MiB 62.0 MiB B_G = B - G
308
309 6340.2 MiB 0.0 MiB k_b = np.array([0.5, 0, 0.5])
310
311 6340.2 MiB 0.0 MiB R_G_m = np.where(
312 6500.9 MiB 160.6 MiB np.logical_and(G_m == 1, B_r == 1),
313 5971.8 MiB -529.0 MiB _cnv_v(R_G, k_b),
314 5591.4 MiB -380.4 MiB R_G_m,
315 )
316 5597.5 MiB 6.2 MiB R = np.where(np.logical_and(G_m == 1, B_r == 1), G + R_G_m, R)
317 5597.5 MiB 0.0 MiB R_G_m = np.where(
318 5597.5 MiB 0.0 MiB np.logical_and(G_m == 1, B_c == 1),
319 5729.2 MiB 131.7 MiB _cnv_h(R_G, k_b),
320 5317.0 MiB -412.2 MiB R_G_m,
321 )
322 5311.9 MiB -5.1 MiB R = np.where(np.logical_and(G_m == 1, B_c == 1), G + R_G_m, R)
323
324 5311.9 MiB 0.0 MiB B_G_m = np.where(
325 5311.9 MiB 0.0 MiB np.logical_and(G_m == 1, R_r == 1),
326 5473.2 MiB 161.3 MiB _cnv_v(B_G, k_b),
327 5311.5 MiB -161.7 MiB B_G_m,
328 )
329 4988.0 MiB -323.5 MiB B = np.where(np.logical_and(G_m == 1, R_r == 1), G + B_G_m, B)
330 4988.0 MiB 0.0 MiB B_G_m = np.where(
331 4988.0 MiB 0.0 MiB np.logical_and(G_m == 1, R_c == 1),
332 5149.3 MiB 161.3 MiB _cnv_h(B_G, k_b),
333 4988.0 MiB -161.3 MiB B_G_m,
334 )
335 4988.0 MiB 0.0 MiB B = np.where(np.logical_and(G_m == 1, R_c == 1), G + B_G_m, B)
336
337 # Updating of the red (blue) component in the blue (red) locations.
338 5149.3 MiB 161.3 MiB R_B = R - B
339 5149.3 MiB 0.0 MiB R_B_m = np.where(
340 5149.3 MiB 0.0 MiB B_m == 1,
341 5310.7 MiB 161.4 MiB np.where(M == 1, _cnv_h(R_B, FIR), _cnv_v(R_B, FIR)),
342 5310.7 MiB 0.0 MiB 0,
343 )
344 5310.7 MiB 0.0 MiB R = np.where(B_m == 1, B + R_B_m, R)
345
346 5310.7 MiB 0.0 MiB R_B_m = np.where(
347 5310.9 MiB 0.2 MiB R_m == 1,
348 5249.1 MiB -61.8 MiB np.where(M == 1, _cnv_h(R_B, FIR), _cnv_v(R_B, FIR)),
349 5087.8 MiB -161.3 MiB 0,
350 )
351 5087.8 MiB 0.0 MiB B = np.where(R_m == 1, R - R_B_m, B)
352
353 4774.8 MiB -313.0 MiB return tstack([R, G, B])
Filename: /Users/kelsolaar/Documents/Development/colour-science/colour-demosaicing/colour_demosaicing/bayer/demosaicing/menon2007.py
Line # Mem usage Increment Line Contents
================================================
54 984.6 MiB 984.6 MiB @profile
55 def demosaicing_CFA_Bayer_Menon2007(CFA, pattern='RGGB', refining_step=True):
56 """
57 Returns the demosaiced *RGB* colourspace array from given *Bayer* CFA using
58 DDFAPD - *Menon (2007)* demosaicing algorithm.
59
60 Parameters
61 ----------
62 CFA : array_like
63 *Bayer* CFA.
64 pattern : unicode, optional
65 **{'RGGB', 'BGGR', 'GRBG', 'GBRG'}**,
66 Arrangement of the colour filters on the pixel array.
67 refining_step : bool
68 Perform refining step.
69
70 Returns
71 -------
72 ndarray
73 *RGB* colourspace array.
74
75 Notes
76 -----
77 - The definition output is not clipped in range [0, 1] : this allows for
78 direct HDRI / radiance image generation on *Bayer* CFA data and post
79 demosaicing of the high dynamic range data as showcased in this
80 `Jupyter Notebook <https://github.com/colour-science/colour-hdri/\
81 blob/develop/colour_hdri/examples/\
82 examples_merge_from_raw_files_with_post_demosaicing.ipynb>`_.
83
84 References
85 ----------
86 :cite:`Menon2007c`
87
88 Examples
89 --------
90 >>> CFA = np.array(
91 ... [[ 0.30980393, 0.36078432, 0.30588236, 0.3764706 ],
92 ... [ 0.35686275, 0.39607844, 0.36078432, 0.40000001]])
93 >>> demosaicing_CFA_Bayer_Menon2007(CFA)
94 array([[[ 0.30980393, 0.35686275, 0.39215687],
95 [ 0.30980393, 0.36078432, 0.39607844],
96 [ 0.30588236, 0.36078432, 0.39019608],
97 [ 0.32156864, 0.3764706 , 0.40000001]],
98 <BLANKLINE>
99 [[ 0.30980393, 0.35686275, 0.39215687],
100 [ 0.30980393, 0.36078432, 0.39607844],
101 [ 0.30588236, 0.36078432, 0.39019609],
102 [ 0.32156864, 0.3764706 , 0.40000001]]])
103 >>> CFA = np.array(
104 ... [[ 0.3764706 , 0.36078432, 0.40784314, 0.3764706 ],
105 ... [ 0.35686275, 0.30980393, 0.36078432, 0.29803923]])
106 >>> demosaicing_CFA_Bayer_Menon2007(CFA, 'BGGR')
107 array([[[ 0.30588236, 0.35686275, 0.3764706 ],
108 [ 0.30980393, 0.36078432, 0.39411766],
109 [ 0.29607844, 0.36078432, 0.40784314],
110 [ 0.29803923, 0.3764706 , 0.42352942]],
111 <BLANKLINE>
112 [[ 0.30588236, 0.35686275, 0.3764706 ],
113 [ 0.30980393, 0.36078432, 0.39411766],
114 [ 0.29607844, 0.36078432, 0.40784314],
115 [ 0.29803923, 0.3764706 , 0.42352942]]])
116 """
117
118 984.6 MiB 0.0 MiB CFA = as_float_array(CFA)
119 1045.1 MiB 60.5 MiB R_m, G_m, B_m = masks_CFA_Bayer(CFA.shape, pattern)
120
121 1045.1 MiB 0.0 MiB h_0 = np.array([0, 0.5, 0, 0.5, 0])
122 1045.1 MiB 0.0 MiB h_1 = np.array([-0.25, 0, 0.5, 0, -0.25])
123
124 1206.5 MiB 161.4 MiB R = CFA * R_m
125 1367.9 MiB 161.3 MiB G = CFA * G_m
126 1529.2 MiB 161.3 MiB B = CFA * B_m
127
128 1711.5 MiB 182.4 MiB G_H = np.where(G_m == 0, _cnv_h(CFA, h_0) + _cnv_h(CFA, h_1), G)
129 1873.3 MiB 161.8 MiB G_V = np.where(G_m == 0, _cnv_v(CFA, h_0) + _cnv_v(CFA, h_1), G)
130
131 2034.6 MiB 161.3 MiB C_H = np.where(R_m == 1, R - G_H, 0)
132 2034.6 MiB 0.0 MiB C_H = np.where(B_m == 1, B - G_H, C_H)
133
134 2196.0 MiB 161.4 MiB C_V = np.where(R_m == 1, R - G_V, 0)
135 2196.0 MiB 0.0 MiB C_V = np.where(B_m == 1, B - G_V, C_V)
136
137 2196.0 MiB 0.0 MiB D_H = np.abs(C_H - np.pad(C_H, ((0, 0),
138 2357.4 MiB 161.4 MiB (0, 2)), mode=str('reflect'))[:, 2:])
139 2357.4 MiB 0.0 MiB D_V = np.abs(C_V - np.pad(C_V, ((0, 2),
140 2518.7 MiB 161.3 MiB (0, 0)), mode=str('reflect'))[2:, :])
141
142 2518.7 MiB 0.0 MiB k = np.array(
143 2518.7 MiB 0.0 MiB [[0, 0, 1, 0, 1],
144 2518.7 MiB 0.0 MiB [0, 0, 0, 1, 0],
145 2518.7 MiB 0.0 MiB [0, 0, 3, 0, 3],
146 2518.7 MiB 0.0 MiB [0, 0, 0, 1, 0],
147 2518.7 MiB 0.0 MiB [0, 0, 1, 0, 1]]) # yapf: disable
148
149 2680.0 MiB 161.3 MiB d_H = convolve(D_H, k, mode='constant')
150 2841.4 MiB 161.3 MiB d_V = convolve(D_V, np.transpose(k), mode='constant')
151
152 2841.4 MiB 0.0 MiB mask = d_V >= d_H
153 2841.4 MiB 0.0 MiB G = np.where(mask, G_H, G_V)
154 3002.7 MiB 161.3 MiB M = np.where(mask, 1, 0)
155
156 # Red rows.
157 3184.2 MiB 181.6 MiB R_r = np.transpose(np.any(R_m == 1, axis=1)[np.newaxis]) * np.ones(R.shape)
158 # Blue rows.
159 3345.6 MiB 161.3 MiB B_r = np.transpose(np.any(B_m == 1, axis=1)[np.newaxis]) * np.ones(B.shape)
160
161 3345.6 MiB 0.0 MiB k_b = np.array([0.5, 0, 0.5])
162
163 3345.6 MiB 0.0 MiB R = np.where(
164 3385.9 MiB 40.3 MiB np.logical_and(G_m == 1, R_r == 1),
165 3547.2 MiB 161.3 MiB G + _cnv_h(R, k_b) - _cnv_h(G, k_b),
166 3385.9 MiB -161.3 MiB R,
167 )
168
169 3385.9 MiB 0.0 MiB R = np.where(
170 3385.9 MiB 0.0 MiB np.logical_and(G_m == 1, B_r == 1) == 1,
171 3547.2 MiB 161.3 MiB G + _cnv_v(R, k_b) - _cnv_v(G, k_b),
172 3385.9 MiB -161.3 MiB R,
173 )
174
175 3385.9 MiB 0.0 MiB B = np.where(
176 3385.9 MiB 0.0 MiB np.logical_and(G_m == 1, B_r == 1),
177 3547.2 MiB 161.3 MiB G + _cnv_h(B, k_b) - _cnv_h(G, k_b),
178 3385.9 MiB -161.3 MiB B,
179 )
180
181 3385.9 MiB 0.0 MiB B = np.where(
182 3385.9 MiB 0.0 MiB np.logical_and(G_m == 1, R_r == 1) == 1,
183 3547.2 MiB 161.3 MiB G + _cnv_v(B, k_b) - _cnv_v(G, k_b),
184 3385.9 MiB -161.3 MiB B,
185 )
186
187 3385.9 MiB 0.0 MiB R = np.where(
188 3385.9 MiB 0.0 MiB np.logical_and(B_r == 1, B_m == 1),
189 3385.9 MiB 0.0 MiB np.where(
190 3385.9 MiB 0.0 MiB M == 1,
191 3547.2 MiB 161.3 MiB B + _cnv_h(R, k_b) - _cnv_h(B, k_b),
192 3547.2 MiB 0.0 MiB B + _cnv_v(R, k_b) - _cnv_v(B, k_b),
193 ),
194 3385.9 MiB -161.3 MiB R,
195 )
196
197 3385.9 MiB 0.0 MiB B = np.where(
198 3385.9 MiB 0.0 MiB np.logical_and(R_r == 1, R_m == 1),
199 3385.9 MiB 0.0 MiB np.where(
200 3385.9 MiB 0.0 MiB M == 1,
201 3547.2 MiB 161.3 MiB R + _cnv_h(B, k_b) - _cnv_h(R, k_b),
202 3547.2 MiB 0.0 MiB R + _cnv_v(B, k_b) - _cnv_v(R, k_b),
203 ),
204 3385.9 MiB -161.3 MiB B,
205 )
206
207 3869.9 MiB 484.0 MiB RGB = tstack([R, G, B])
208
209 3869.9 MiB 0.0 MiB if refining_step:
210 2028.5 MiB -1841.3 MiB RGB = refining_step_Menon2007(RGB, tstack([R_m, G_m, B_m]), M)
211
212 2028.5 MiB 0.0 MiB return RGB
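For reference, line-by-line reports like the one above are produced by the memory_profiler package, with @profile applied to the definitions in menon2007.py as shown in the output. A minimal driver script to reproduce something similar might look like this; the randomly generated CFA is a stand-in for the actual test image:

```python
import numpy as np

# Assumes the definitions in menon2007.py have been decorated with
# memory_profiler's @profile, as in the report above, so that running
# this script prints the line-by-line memory usage.
from colour_demosaicing import demosaicing_CFA_Bayer_Menon2007


def main():
    # Random stand-in for the 5634 x 3753 test image mentioned above.
    CFA = np.random.random((3753, 5634))

    demosaicing_CFA_Bayer_Menon2007(CFA)


if __name__ == '__main__':
    main()
```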
I will dig a bit to see if there is a less BFG-like variant of the memory reduction because, having thought about it more, my gut feeling is that this is opening a Pandora's box.
And a test with your systematic deletion:
Filename: /Users/kelsolaar/Documents/Development/colour-science/colour-demosaicing/colour_demosaicing/bayer/demosaicing/menon2007.py
Line # Mem usage Increment Line Contents
================================================
214 1785.4 MiB 1785.4 MiB @profile
215 def refining_step_Menon2007(RGB, RGB_m, M):
216 """
217 Performs the refining step on given *RGB* colourspace array.
218
219 Parameters
220 ----------
221 RGB : array_like
222 *RGB* colourspace array.
223 RGB_m : array_like
224 *Bayer* CFA red, green and blue masks.
225 M : array_like
226 Estimation for the best directional reconstruction.
227
228 Returns
229 -------
230 ndarray
231 Refined *RGB* colourspace array.
232
233 Examples
234 --------
235 >>> RGB = np.array(
236 ... [[[0.30588236, 0.35686275, 0.3764706],
237 ... [0.30980393, 0.36078432, 0.39411766],
238 ... [0.29607844, 0.36078432, 0.40784314],
239 ... [0.29803923, 0.37647060, 0.42352942]],
240 ... [[0.30588236, 0.35686275, 0.3764706],
241 ... [0.30980393, 0.36078432, 0.39411766],
242 ... [0.29607844, 0.36078432, 0.40784314],
243 ... [0.29803923, 0.37647060, 0.42352942]]])
244 >>> RGB_m = np.array(
245 ... [[[0, 0, 1],
246 ... [0, 1, 0],
247 ... [0, 0, 1],
248 ... [0, 1, 0]],
249 ... [[0, 1, 0],
250 ... [1, 0, 0],
251 ... [0, 1, 0],
252 ... [1, 0, 0]]])
253 >>> M = np.array(
254 ... [[0, 1, 0, 1],
255 ... [1, 0, 1, 0]])
256 >>> refining_step_Menon2007(RGB, RGB_m, M)
257 array([[[ 0.30588236, 0.35686275, 0.3764706 ],
258 [ 0.30980393, 0.36078432, 0.39411765],
259 [ 0.29607844, 0.36078432, 0.40784314],
260 [ 0.29803923, 0.3764706 , 0.42352942]],
261 <BLANKLINE>
262 [[ 0.30588236, 0.35686275, 0.3764706 ],
263 [ 0.30980393, 0.36078432, 0.39411766],
264 [ 0.29607844, 0.36078432, 0.40784314],
265 [ 0.29803923, 0.3764706 , 0.42352942]]])
266 """
267
268 2269.4 MiB 484.0 MiB R, G, B = tsplit(RGB)
269 2753.3 MiB 484.0 MiB R_m, G_m, B_m = tsplit(RGB_m)
270 2753.3 MiB 0.0 MiB M = np.asarray(M)
271
272 2753.3 MiB 0.0 MiB del RGB, RGB_m
273 2753.3 MiB 0.0 MiB gc.collect()
274
275 # Updating of the green component.
276 2914.6 MiB 161.3 MiB R_G = R - G
277 3076.0 MiB 161.3 MiB B_G = B - G
278
279 3076.0 MiB 0.0 MiB FIR = np.ones(3) / 3
280
281 3076.0 MiB 0.0 MiB B_G_m = np.where(B_m == 1,
282 3237.3 MiB 161.3 MiB np.where(M == 1, _cnv_h(B_G, FIR), _cnv_v(B_G, FIR)), 0)
283 3237.3 MiB 0.0 MiB R_G_m = np.where(R_m == 1,
284 3398.6 MiB 161.3 MiB np.where(M == 1, _cnv_h(R_G, FIR), _cnv_v(R_G, FIR)), 0)
285
286 3076.0 MiB -322.6 MiB del B_G, R_G
287 3076.0 MiB 0.0 MiB gc.collect()
288
289 3237.3 MiB 161.3 MiB G = np.where(R_m == 1, R - R_G_m, G)
290 3237.3 MiB 0.0 MiB G = np.where(B_m == 1, B - B_G_m, G)
291
292 # Updating of the red and blue components in the green locations.
293 # Red rows.
294 3398.6 MiB 161.3 MiB R_r = np.transpose(np.any(R_m == 1, axis=1)[np.newaxis]) * np.ones(R.shape)
295 # Red columns.
296 3559.9 MiB 161.3 MiB R_c = np.any(R_m == 1, axis=0)[np.newaxis] * np.ones(R.shape)
297 # Blue rows.
298 3721.3 MiB 161.4 MiB B_r = np.transpose(np.any(B_m == 1, axis=1)[np.newaxis]) * np.ones(B.shape)
299 # Blue columns.
300 3882.6 MiB 161.3 MiB B_c = np.any(B_m == 1, axis=0)[np.newaxis] * np.ones(B.shape)
301
302 4043.9 MiB 161.3 MiB R_G = R - G
303 4111.0 MiB 67.1 MiB B_G = B - G
304
305 4111.0 MiB 0.0 MiB k_b = np.array([0.5, 0, 0.5])
306
307 4111.0 MiB 0.0 MiB R_G_m = np.where(
308 3771.1 MiB -339.9 MiB np.logical_and(G_m == 1, B_r == 1), _cnv_v(R_G, k_b), R_G_m)
309 3932.6 MiB 161.5 MiB R = np.where(np.logical_and(G_m == 1, B_r == 1), G + R_G_m, R)
310 3932.6 MiB 0.0 MiB R_G_m = np.where(
311 3884.2 MiB -48.4 MiB np.logical_and(G_m == 1, B_c == 1), _cnv_h(R_G, k_b), R_G_m)
312 3884.2 MiB 0.0 MiB R = np.where(np.logical_and(G_m == 1, B_c == 1), G + R_G_m, R)
313
314 3238.9 MiB -645.3 MiB del B_r, R_G_m, B_c, R_G
315 3238.9 MiB 0.0 MiB gc.collect()
316
317 3238.9 MiB 0.0 MiB B_G_m = np.where(
318 3560.3 MiB 321.4 MiB np.logical_and(G_m == 1, R_r == 1), _cnv_v(B_G, k_b), B_G_m)
319 3237.7 MiB -322.6 MiB B = np.where(np.logical_and(G_m == 1, R_r == 1), G + B_G_m, B)
320 3237.7 MiB 0.0 MiB B_G_m = np.where(
321 3398.3 MiB 160.6 MiB np.logical_and(G_m == 1, R_c == 1), _cnv_h(B_G, k_b), B_G_m)
322 3398.3 MiB 0.0 MiB B = np.where(np.logical_and(G_m == 1, R_c == 1), G + B_G_m, B)
323
324 2753.0 MiB -645.3 MiB del B_G_m, R_r, R_c, G_m, B_G
325 2753.0 MiB 0.0 MiB gc.collect()
326
327 # Updating of the red (blue) component in the blue (red) locations.
328 2914.3 MiB 161.3 MiB R_B = R - B
329 2914.6 MiB 0.3 MiB R_B_m = np.where(B_m == 1,
330 3076.0 MiB 161.3 MiB np.where(M == 1, _cnv_h(R_B, FIR), _cnv_v(R_B, FIR)), 0)
331 3076.0 MiB 0.0 MiB R = np.where(B_m == 1, B + R_B_m, R)
332
333 3076.0 MiB 0.0 MiB R_B_m = np.where(R_m == 1,
334 3076.0 MiB 0.0 MiB np.where(M == 1, _cnv_h(R_B, FIR), _cnv_v(R_B, FIR)), 0)
335 3076.0 MiB 0.0 MiB B = np.where(R_m == 1, R - R_B_m, B)
336
337 2753.3 MiB -322.6 MiB del R_B, R_B_m, R_m
338 2753.3 MiB 0.0 MiB gc.collect()
339
340 3237.3 MiB 484.0 MiB return tstack((R, G, B))
Filename: /Users/kelsolaar/Documents/Development/colour-science/colour-demosaicing/colour_demosaicing/bayer/demosaicing/menon2007.py
Line # Mem usage Increment Line Contents
================================================
55 984.6 MiB 984.6 MiB @profile
56 def demosaicing_CFA_Bayer_Menon2007(CFA, pattern='RGGB', refining_step=True):
57 """
58 Returns the demosaiced *RGB* colourspace array from given *Bayer* CFA using
59 DDFAPD - *Menon (2007)* demosaicing algorithm.
60
61 Parameters
62 ----------
63 CFA : array_like
64 *Bayer* CFA.
65 pattern : unicode, optional
66 **{'RGGB', 'BGGR', 'GRBG', 'GBRG'}**,
67 Arrangement of the colour filters on the pixel array.
68 refining_step : bool
69 Perform refining step.
70
71 Returns
72 -------
73 ndarray
74 *RGB* colourspace array.
75
76 Notes
77 -----
78 - The definition output is not clipped in range [0, 1] : this allows for
79 direct HDRI / radiance image generation on *Bayer* CFA data and post
80 demosaicing of the high dynamic range data as showcased in this
81 `Jupyter Notebook <https://github.com/colour-science/colour-hdri/\
82 blob/develop/colour_hdri/examples/\
83 examples_merge_from_raw_files_with_post_demosaicing.ipynb>`_.
84
85 References
86 ----------
87 :cite:`Menon2007c`
88
89 Examples
90 --------
91 >>> CFA = np.array(
92 ... [[ 0.30980393, 0.36078432, 0.30588236, 0.3764706 ],
93 ... [ 0.35686275, 0.39607844, 0.36078432, 0.40000001]])
94 >>> demosaicing_CFA_Bayer_Menon2007(CFA)
95 array([[[ 0.30980393, 0.35686275, 0.39215687],
96 [ 0.30980393, 0.36078432, 0.39607844],
97 [ 0.30588236, 0.36078432, 0.39019608],
98 [ 0.32156864, 0.3764706 , 0.40000001]],
99 <BLANKLINE>
100 [[ 0.30980393, 0.35686275, 0.39215687],
101 [ 0.30980393, 0.36078432, 0.39607844],
102 [ 0.30588236, 0.36078432, 0.39019609],
103 [ 0.32156864, 0.3764706 , 0.40000001]]])
104 >>> CFA = np.array(
105 ... [[ 0.3764706 , 0.36078432, 0.40784314, 0.3764706 ],
106 ... [ 0.35686275, 0.30980393, 0.36078432, 0.29803923]])
107 >>> demosaicing_CFA_Bayer_Menon2007(CFA, 'BGGR')
108 array([[[ 0.30588236, 0.35686275, 0.3764706 ],
109 [ 0.30980393, 0.36078432, 0.39411766],
110 [ 0.29607844, 0.36078432, 0.40784314],
111 [ 0.29803923, 0.3764706 , 0.42352942]],
112 <BLANKLINE>
113 [[ 0.30588236, 0.35686275, 0.3764706 ],
114 [ 0.30980393, 0.36078432, 0.39411766],
115 [ 0.29607844, 0.36078432, 0.40784314],
116 [ 0.29803923, 0.3764706 , 0.42352942]]])
117 """
118
119 984.6 MiB 0.0 MiB CFA = np.asarray(CFA)
120 1045.1 MiB 60.5 MiB R_m, G_m, B_m = masks_CFA_Bayer(CFA.shape, pattern)
121
122 1045.1 MiB 0.0 MiB h_0 = np.array([0, 0.5, 0, 0.5, 0])
123 1045.1 MiB 0.0 MiB h_1 = np.array([-0.25, 0, 0.5, 0, -0.25])
124
125 1206.4 MiB 161.3 MiB R = CFA * R_m
126 1367.7 MiB 161.3 MiB G = CFA * G_m
127 1529.1 MiB 161.3 MiB B = CFA * B_m
128
129 1711.4 MiB 182.4 MiB G_H = np.where(G_m == 0, _cnv_h(CFA, h_0) + _cnv_h(CFA, h_1), G)
130 1873.2 MiB 161.8 MiB G_V = np.where(G_m == 0, _cnv_v(CFA, h_0) + _cnv_v(CFA, h_1), G)
131
132 2034.5 MiB 161.3 MiB C_H = np.where(R_m == 1, R - G_H, 0)
133 2034.5 MiB 0.0 MiB C_H = np.where(B_m == 1, B - G_H, C_H)
134
135 2195.8 MiB 161.3 MiB C_V = np.where(R_m == 1, R - G_V, 0)
136 2195.8 MiB 0.0 MiB C_V = np.where(B_m == 1, B - G_V, C_V)
137
138 2195.8 MiB 0.0 MiB D_H = np.abs(C_H - np.pad(C_H, ((0, 0), (0, 2)),
139 2357.2 MiB 161.3 MiB mode=str('reflect'))[:, 2:])
140 2357.2 MiB 0.0 MiB D_V = np.abs(C_V - np.pad(C_V, ((0, 2), (0, 0)),
141 2518.5 MiB 161.3 MiB mode=str('reflect'))[2:, :])
142
143 2195.8 MiB -322.6 MiB del h_0, h_1, CFA, C_V, C_H
144
145 2195.8 MiB 0.0 MiB k = np.array(
146 2195.8 MiB 0.0 MiB [[0, 0, 1, 0, 1],
147 2195.8 MiB 0.0 MiB [0, 0, 0, 1, 0],
148 2195.8 MiB 0.0 MiB [0, 0, 3, 0, 3],
149 2195.8 MiB 0.0 MiB [0, 0, 0, 1, 0],
150 2195.8 MiB 0.0 MiB [0, 0, 1, 0, 1]]) # yapf: disable
151
152 2357.2 MiB 161.3 MiB d_H = convolve(D_H, k, mode='constant')
153 2518.5 MiB 161.3 MiB d_V = convolve(D_V, np.transpose(k), mode='constant')
154
155 2195.8 MiB -322.6 MiB del D_H, D_V
156 1724.8 MiB -471.0 MiB gc.collect()
157
158 1724.8 MiB 0.0 MiB mask = d_V >= d_H
159 1724.8 MiB 0.0 MiB G = np.where(mask, G_H, G_V)
160 1886.2 MiB 161.3 MiB M = np.where(mask, 1, 0)
161
162 1240.9 MiB -645.3 MiB del d_H, d_V, G_H, G_V
163 1240.9 MiB 0.0 MiB gc.collect()
164
165 # Red rows.
166 1422.4 MiB 181.5 MiB R_r = np.transpose(np.any(R_m == 1, axis=1)[np.newaxis]) * np.ones(R.shape)
167 # Blue rows.
168 1583.7 MiB 161.3 MiB B_r = np.transpose(np.any(B_m == 1, axis=1)[np.newaxis]) * np.ones(B.shape)
169
170 1583.7 MiB 0.0 MiB k_b = np.array([0.5, 0, 0.5])
171
172 1583.7 MiB 0.0 MiB R = np.where(
173 1624.0 MiB 40.3 MiB np.logical_and(G_m == 1, R_r == 1),
174 1624.0 MiB 0.0 MiB G + _cnv_h(R, k_b) - _cnv_h(G, k_b), R)
175
176 1624.0 MiB 0.0 MiB R = np.where(
177 1624.0 MiB 0.0 MiB np.logical_and(G_m == 1, B_r == 1) == 1,
178 1624.0 MiB 0.0 MiB G + _cnv_v(R, k_b) - _cnv_v(G, k_b), R)
179
180 1624.0 MiB 0.0 MiB B = np.where(
181 1624.0 MiB 0.0 MiB np.logical_and(G_m == 1, B_r == 1),
182 1624.0 MiB 0.0 MiB G + _cnv_h(B, k_b) - _cnv_h(G, k_b), B)
183
184 1624.0 MiB 0.0 MiB B = np.where(
185 1624.1 MiB 0.1 MiB np.logical_and(G_m == 1, R_r == 1) == 1,
186 1624.1 MiB 0.0 MiB G + _cnv_v(B, k_b) - _cnv_v(G, k_b), B)
187
188 1624.1 MiB 0.0 MiB R = np.where(
189 1624.1 MiB 0.0 MiB np.logical_and(B_r == 1, B_m == 1),
190 1785.4 MiB 161.3 MiB np.where(M == 1, B + _cnv_h(R, k_b) - _cnv_h(B, k_b),
191 1624.1 MiB -161.3 MiB B + _cnv_v(R, k_b) - _cnv_v(B, k_b)), R)
192
193 1624.1 MiB 0.0 MiB B = np.where(
194 1624.1 MiB 0.0 MiB np.logical_and(R_r == 1, R_m == 1),
195 1785.4 MiB 161.3 MiB np.where(M == 1, R + _cnv_h(B, k_b) - _cnv_h(R, k_b),
196 1624.1 MiB -161.3 MiB R + _cnv_v(B, k_b) - _cnv_v(R, k_b)), B)
197
198 2108.0 MiB 484.0 MiB RGB = tstack((R, G, B))
199
200 1301.4 MiB -806.6 MiB del R, G, B, k_b, R_r, B_r
201 1301.4 MiB 0.0 MiB gc.collect()
202
203 1301.4 MiB 0.0 MiB if refining_step:
204 1301.4 MiB 0.0 MiB RGB = refining_step_Menon2007(RGB, tstack((R_m, G_m, B_m)), M)
205
206 1139.4 MiB -162.1 MiB del M, R_m, G_m, B_m
207 1139.4 MiB 0.0 MiB gc.collect()
208
209 1139.4 MiB 0.0 MiB return RGB
So the interesting thing is that the gc.collect() statements do not seem to be doing much: most of the time, the memory has already been reclaimed as soon as the del statements occur. I will try commenting the former out to confirm. If true, this would be ideal!
I also ran some timings and runtime is not much affected by the deletion + forced garbage collection.
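That is consistent with CPython's reference counting: a NumPy array's buffer is released as soon as the last reference to it disappears, so del alone suffices unless reference cycles are involved; gc.collect() only helps with cycles. A small sketch, with the array size chosen to match the ~161.3 MiB increments in the profiles above:

```python
import gc

import numpy as np

a = np.ones((3753, 5634))  # float64 buffer of roughly 161.3 MiB
b = a                      # a second reference to the same buffer

del a                      # reference count drops to 1, nothing is freed yet
del b                      # count drops to 0, CPython frees the buffer immediately

gc.collect()               # only collects objects kept alive by reference
                           # cycles; a plain ndarray is not part of one, so
                           # this is effectively a no-op here
```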
So, just to confirm that the gc.collect() statements are not really required:
Filename: /Users/kelsolaar/Documents/Development/colour-science/colour-demosaicing/colour_demosaicing/bayer/demosaicing/menon2007.py
Line # Mem usage Increment Line Contents
================================================
214 2256.2 MiB 2256.2 MiB @profile
215 def refining_step_Menon2007(RGB, RGB_m, M):
216 """
217 Performs the refining step on given *RGB* colourspace array.
218
219 Parameters
220 ----------
221 RGB : array_like
222 *RGB* colourspace array.
223 RGB_m : array_like
224 *Bayer* CFA red, green and blue masks.
225 M : array_like
226 Estimation for the best directional reconstruction.
227
228 Returns
229 -------
230 ndarray
231 Refined *RGB* colourspace array.
232
233 Examples
234 --------
235 >>> RGB = np.array(
236 ... [[[0.30588236, 0.35686275, 0.3764706],
237 ... [0.30980393, 0.36078432, 0.39411766],
238 ... [0.29607844, 0.36078432, 0.40784314],
239 ... [0.29803923, 0.37647060, 0.42352942]],
240 ... [[0.30588236, 0.35686275, 0.3764706],
241 ... [0.30980393, 0.36078432, 0.39411766],
242 ... [0.29607844, 0.36078432, 0.40784314],
243 ... [0.29803923, 0.37647060, 0.42352942]]])
244 >>> RGB_m = np.array(
245 ... [[[0, 0, 1],
246 ... [0, 1, 0],
247 ... [0, 0, 1],
248 ... [0, 1, 0]],
249 ... [[0, 1, 0],
250 ... [1, 0, 0],
251 ... [0, 1, 0],
252 ... [1, 0, 0]]])
253 >>> M = np.array(
254 ... [[0, 1, 0, 1],
255 ... [1, 0, 1, 0]])
256 >>> refining_step_Menon2007(RGB, RGB_m, M)
257 array([[[ 0.30588236, 0.35686275, 0.3764706 ],
258 [ 0.30980393, 0.36078432, 0.39411765],
259 [ 0.29607844, 0.36078432, 0.40784314],
260 [ 0.29803923, 0.3764706 , 0.42352942]],
261 <BLANKLINE>
262 [[ 0.30588236, 0.35686275, 0.3764706 ],
263 [ 0.30980393, 0.36078432, 0.39411766],
264 [ 0.29607844, 0.36078432, 0.40784314],
265 [ 0.29803923, 0.3764706 , 0.42352942]]])
266 """
267
268 2740.1 MiB 484.0 MiB R, G, B = tsplit(RGB)
269 3224.1 MiB 484.0 MiB R_m, G_m, B_m = tsplit(RGB_m)
270 3224.1 MiB 0.0 MiB M = np.asarray(M)
271
272 3224.1 MiB 0.0 MiB del RGB, RGB_m
273 # gc.collect()
274
275 # Updating of the green component.
276 3385.4 MiB 161.3 MiB R_G = R - G
277 3546.7 MiB 161.3 MiB B_G = B - G
278
279 3546.7 MiB 0.0 MiB FIR = np.ones(3) / 3
280
281 3546.7 MiB 0.0 MiB B_G_m = np.where(B_m == 1,
282 3708.1 MiB 161.3 MiB np.where(M == 1, _cnv_h(B_G, FIR), _cnv_v(B_G, FIR)), 0)
283 3708.1 MiB 0.0 MiB R_G_m = np.where(R_m == 1,
284 3869.4 MiB 161.3 MiB np.where(M == 1, _cnv_h(R_G, FIR), _cnv_v(R_G, FIR)), 0)
285
286 3546.7 MiB -322.6 MiB del B_G, R_G
287 # gc.collect()
288
289 3708.1 MiB 161.3 MiB G = np.where(R_m == 1, R - R_G_m, G)
290 3708.1 MiB 0.0 MiB G = np.where(B_m == 1, B - B_G_m, G)
291
292 # Updating of the red and blue components in the green locations.
293 # Red rows.
294 3869.4 MiB 161.3 MiB R_r = np.transpose(np.any(R_m == 1, axis=1)[np.newaxis]) * np.ones(R.shape)
295 # Red columns.
296 4030.7 MiB 161.3 MiB R_c = np.any(R_m == 1, axis=0)[np.newaxis] * np.ones(R.shape)
297 # Blue rows.
298 4192.0 MiB 161.3 MiB B_r = np.transpose(np.any(B_m == 1, axis=1)[np.newaxis]) * np.ones(B.shape)
299 # Blue columns.
300 4353.3 MiB 161.3 MiB B_c = np.any(B_m == 1, axis=0)[np.newaxis] * np.ones(B.shape)
301
302 4514.7 MiB 161.3 MiB R_G = R - G
303 4676.0 MiB 161.3 MiB B_G = B - G
304
305 4676.0 MiB 0.0 MiB k_b = np.array([0.5, 0, 0.5])
306
307 4676.0 MiB 0.0 MiB R_G_m = np.where(
308 4379.5 MiB -296.5 MiB np.logical_and(G_m == 1, B_r == 1), _cnv_v(R_G, k_b), R_G_m)
309 4573.1 MiB 193.6 MiB R = np.where(np.logical_and(G_m == 1, B_r == 1), G + R_G_m, R)
310 4573.1 MiB 0.0 MiB R_G_m = np.where(
311 4598.9 MiB 25.8 MiB np.logical_and(G_m == 1, B_c == 1), _cnv_h(R_G, k_b), R_G_m)
312 4599.1 MiB 0.2 MiB R = np.where(np.logical_and(G_m == 1, B_c == 1), G + R_G_m, R)
313
314 3954.0 MiB -645.1 MiB del B_r, R_G_m, B_c, R_G
315 # gc.collect()
316
317 3954.0 MiB 0.0 MiB B_G_m = np.where(
318 4114.6 MiB 160.6 MiB np.logical_and(G_m == 1, R_r == 1), _cnv_v(B_G, k_b), B_G_m)
319 3792.0 MiB -322.6 MiB B = np.where(np.logical_and(G_m == 1, R_r == 1), G + B_G_m, B)
320 3792.0 MiB 0.0 MiB B_G_m = np.where(
321 3869.4 MiB 77.4 MiB np.logical_and(G_m == 1, R_c == 1), _cnv_h(B_G, k_b), B_G_m)
322 3869.4 MiB 0.0 MiB B = np.where(np.logical_and(G_m == 1, R_c == 1), G + B_G_m, B)
323
324 3224.1 MiB -645.3 MiB del B_G_m, R_r, R_c, G_m, B_G
325 # gc.collect()
326
327 # Updating of the red (blue) component in the blue (red) locations.
328 3385.4 MiB 161.3 MiB R_B = R - B
329 3385.4 MiB 0.0 MiB R_B_m = np.where(B_m == 1,
330 3546.7 MiB 161.3 MiB np.where(M == 1, _cnv_h(R_B, FIR), _cnv_v(R_B, FIR)), 0)
331 3546.7 MiB 0.0 MiB R = np.where(B_m == 1, B + R_B_m, R)
332
333 3546.7 MiB 0.0 MiB R_B_m = np.where(R_m == 1,
334 3546.7 MiB 0.0 MiB np.where(M == 1, _cnv_h(R_B, FIR), _cnv_v(R_B, FIR)), 0)
335 3546.7 MiB 0.0 MiB B = np.where(R_m == 1, R - R_B_m, B)
336
337 3224.1 MiB -322.6 MiB del R_B, R_B_m, R_m
338 # gc.collect()
339
340 3708.1 MiB 484.0 MiB return tstack((R, G, B))
Filename: /Users/kelsolaar/Documents/Development/colour-science/colour-demosaicing/colour_demosaicing/bayer/demosaicing/menon2007.py
Line # Mem usage Increment Line Contents
================================================
55 984.2 MiB 984.2 MiB @profile
56 def demosaicing_CFA_Bayer_Menon2007(CFA, pattern='RGGB', refining_step=True):
57 """
58 Returns the demosaiced *RGB* colourspace array from given *Bayer* CFA using
59 DDFAPD - *Menon (2007)* demosaicing algorithm.
60
61 Parameters
62 ----------
63 CFA : array_like
64 *Bayer* CFA.
65 pattern : unicode, optional
66 **{'RGGB', 'BGGR', 'GRBG', 'GBRG'}**,
67 Arrangement of the colour filters on the pixel array.
68 refining_step : bool
69 Perform refining step.
70
71 Returns
72 -------
73 ndarray
74 *RGB* colourspace array.
75
76 Notes
77 -----
78 - The definition output is not clipped in range [0, 1] : this allows for
79 direct HDRI / radiance image generation on *Bayer* CFA data and post
80 demosaicing of the high dynamic range data as showcased in this
81 `Jupyter Notebook <https://github.com/colour-science/colour-hdri/\
82 blob/develop/colour_hdri/examples/\
83 examples_merge_from_raw_files_with_post_demosaicing.ipynb>`_.
84
85 References
86 ----------
87 :cite:`Menon2007c`
88
89 Examples
90 --------
91 >>> CFA = np.array(
92 ... [[ 0.30980393, 0.36078432, 0.30588236, 0.3764706 ],
93 ... [ 0.35686275, 0.39607844, 0.36078432, 0.40000001]])
94 >>> demosaicing_CFA_Bayer_Menon2007(CFA)
95 array([[[ 0.30980393, 0.35686275, 0.39215687],
96 [ 0.30980393, 0.36078432, 0.39607844],
97 [ 0.30588236, 0.36078432, 0.39019608],
98 [ 0.32156864, 0.3764706 , 0.40000001]],
99 <BLANKLINE>
100 [[ 0.30980393, 0.35686275, 0.39215687],
101 [ 0.30980393, 0.36078432, 0.39607844],
102 [ 0.30588236, 0.36078432, 0.39019609],
103 [ 0.32156864, 0.3764706 , 0.40000001]]])
104 >>> CFA = np.array(
105 ... [[ 0.3764706 , 0.36078432, 0.40784314, 0.3764706 ],
106 ... [ 0.35686275, 0.30980393, 0.36078432, 0.29803923]])
107 >>> demosaicing_CFA_Bayer_Menon2007(CFA, 'BGGR')
108 array([[[ 0.30588236, 0.35686275, 0.3764706 ],
109 [ 0.30980393, 0.36078432, 0.39411766],
110 [ 0.29607844, 0.36078432, 0.40784314],
111 [ 0.29803923, 0.3764706 , 0.42352942]],
112 <BLANKLINE>
113 [[ 0.30588236, 0.35686275, 0.3764706 ],
114 [ 0.30980393, 0.36078432, 0.39411766],
115 [ 0.29607844, 0.36078432, 0.40784314],
116 [ 0.29803923, 0.3764706 , 0.42352942]]])
117 """
118
119 984.2 MiB 0.0 MiB CFA = np.asarray(CFA)
120 1044.7 MiB 60.5 MiB R_m, G_m, B_m = masks_CFA_Bayer(CFA.shape, pattern)
121
122 1044.7 MiB 0.0 MiB h_0 = np.array([0, 0.5, 0, 0.5, 0])
123 1044.7 MiB 0.0 MiB h_1 = np.array([-0.25, 0, 0.5, 0, -0.25])
124
125 1206.0 MiB 161.3 MiB R = CFA * R_m
126 1367.3 MiB 161.3 MiB G = CFA * G_m
127 1528.7 MiB 161.3 MiB B = CFA * B_m
128
129 1711.0 MiB 182.4 MiB G_H = np.where(G_m == 0, _cnv_h(CFA, h_0) + _cnv_h(CFA, h_1), G)
130 1872.9 MiB 161.8 MiB G_V = np.where(G_m == 0, _cnv_v(CFA, h_0) + _cnv_v(CFA, h_1), G)
131
132 2034.2 MiB 161.4 MiB C_H = np.where(R_m == 1, R - G_H, 0)
133 2034.3 MiB 0.0 MiB C_H = np.where(B_m == 1, B - G_H, C_H)
134
135 2195.6 MiB 161.3 MiB C_V = np.where(R_m == 1, R - G_V, 0)
136 2195.7 MiB 0.1 MiB C_V = np.where(B_m == 1, B - G_V, C_V)
137
138 2195.7 MiB 0.0 MiB D_H = np.abs(C_H - np.pad(C_H, ((0, 0), (0, 2)),
139 2357.0 MiB 161.3 MiB mode=str('reflect'))[:, 2:])
140 2357.0 MiB 0.0 MiB D_V = np.abs(C_V - np.pad(C_V, ((0, 2), (0, 0)),
141 2518.3 MiB 161.3 MiB mode=str('reflect'))[2:, :])
142
143 2195.7 MiB -322.6 MiB del h_0, h_1, CFA, C_V, C_H
144
145 2195.7 MiB 0.0 MiB k = np.array(
146 2195.7 MiB 0.0 MiB [[0, 0, 1, 0, 1],
147 2195.7 MiB 0.0 MiB [0, 0, 0, 1, 0],
148 2195.7 MiB 0.0 MiB [0, 0, 3, 0, 3],
149 2195.7 MiB 0.0 MiB [0, 0, 0, 1, 0],
150 2195.7 MiB 0.0 MiB [0, 0, 1, 0, 1]]) # yapf: disable
151
152 2357.0 MiB 161.3 MiB d_H = convolve(D_H, k, mode='constant')
153 2518.3 MiB 161.3 MiB d_V = convolve(D_V, np.transpose(k), mode='constant')
154
155 2195.7 MiB -322.6 MiB del D_H, D_V
156 # gc.collect()
157
158 2195.7 MiB 0.0 MiB mask = d_V >= d_H
159 2195.7 MiB 0.0 MiB G = np.where(mask, G_H, G_V)
160 2357.0 MiB 161.3 MiB M = np.where(mask, 1, 0)
161
162 1711.7 MiB -645.3 MiB del d_H, d_V, G_H, G_V
163 # gc.collect()
164
165 # Red rows.
166 1893.2 MiB 181.5 MiB R_r = np.transpose(np.any(R_m == 1, axis=1)[np.newaxis]) * np.ones(R.shape)
167 # Blue rows.
168 2054.5 MiB 161.3 MiB B_r = np.transpose(np.any(B_m == 1, axis=1)[np.newaxis]) * np.ones(B.shape)
169
170 2054.5 MiB 0.0 MiB k_b = np.array([0.5, 0, 0.5])
171
172 2054.5 MiB 0.0 MiB R = np.where(
173 2094.9 MiB 40.3 MiB np.logical_and(G_m == 1, R_r == 1),
174 2094.9 MiB 0.0 MiB G + _cnv_h(R, k_b) - _cnv_h(G, k_b), R)
175
176 2094.9 MiB 0.0 MiB R = np.where(
177 2094.9 MiB 0.0 MiB np.logical_and(G_m == 1, B_r == 1) == 1,
178 2094.9 MiB 0.0 MiB G + _cnv_v(R, k_b) - _cnv_v(G, k_b), R)
179
180 2094.9 MiB 0.0 MiB B = np.where(
181 2094.9 MiB 0.0 MiB np.logical_and(G_m == 1, B_r == 1),
182 2094.9 MiB 0.0 MiB G + _cnv_h(B, k_b) - _cnv_h(G, k_b), B)
183
184 2094.9 MiB 0.0 MiB B = np.where(
185 2094.9 MiB 0.0 MiB np.logical_and(G_m == 1, R_r == 1) == 1,
186 2094.9 MiB 0.0 MiB G + _cnv_v(B, k_b) - _cnv_v(G, k_b), B)
187
188 2094.9 MiB 0.0 MiB R = np.where(
189 2094.9 MiB 0.0 MiB np.logical_and(B_r == 1, B_m == 1),
190 2256.2 MiB 161.3 MiB np.where(M == 1, B + _cnv_h(R, k_b) - _cnv_h(B, k_b),
191 2094.9 MiB -161.3 MiB B + _cnv_v(R, k_b) - _cnv_v(B, k_b)), R)
192
193 2094.9 MiB 0.0 MiB B = np.where(
194 2094.9 MiB 0.0 MiB np.logical_and(R_r == 1, R_m == 1),
195 2256.2 MiB 161.3 MiB np.where(M == 1, R + _cnv_h(B, k_b) - _cnv_h(R, k_b),
196 2094.9 MiB -161.3 MiB R + _cnv_v(B, k_b) - _cnv_v(R, k_b)), B)
197
198 2578.8 MiB 484.0 MiB RGB = tstack((R, G, B))
199
200 1772.2 MiB -806.6 MiB del R, G, B, k_b, R_r, B_r
201 # gc.collect()
202
203 1772.2 MiB 0.0 MiB if refining_step:
204 1772.2 MiB 0.0 MiB RGB = refining_step_Menon2007(RGB, tstack((R_m, G_m, B_m)), M)
205
206 1610.2 MiB -162.1 MiB del M, R_m, G_m, B_m
207 # gc.collect()
208
209 1610.2 MiB 0.0 MiB return RGB
It seems like a better place to be! @MichaelMauderer what do you think?
I ran a benchmark to assess the impact of the additional garbage collector runs. It seems there is only a performance impact for very small array sizes, and the difference gets smaller the larger the input, probably because the GC becomes more and more aggressive about freeing the memory in the first place.
------------------------------------------------------------------------------------------------------------ benchmark: 14 tests ------------------------------------------------------------------------------------------------------------
Name (time in us) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_Menon2007_10 820.4458 (1.0) 1,222.0216 (1.0) 875.0249 (1.0) 78.7858 (2.40) 851.6632 (1.0) 29.8983 (2.79) 9;10 1,142.8246 (1.0) 100 1
test_Menon2007_100 1,025.9236 (1.25) 1,343.9599 (1.10) 1,048.4939 (1.20) 32.8225 (1.0) 1,042.9246 (1.22) 10.6989 (1.0) 4;11 953.7490 (0.83) 100 1
test_Menon2007_1000 2,123.6619 (2.59) 2,954.6600 (2.42) 2,247.0365 (2.57) 127.8322 (3.89) 2,207.2014 (2.59) 74.0131 (6.92) 9;9 445.0306 (0.39) 100 1
test_Menon2007_10000 19,172.4607 (23.37) 28,499.8652 (23.32) 21,058.9437 (24.07) 1,159.5001 (35.33) 20,868.7521 (24.50) 1,255.4374 (117.34) 25;1 47.4858 (0.04) 100 1
test_Menon2007_gc_10 157,546.1079 (192.03) 198,387.2481 (162.34) 178,483.7758 (203.98) 8,961.1096 (273.02) 179,554.9543 (210.83) 10,294.5573 (962.21) 33;1 5.6028 (0.00) 100 1
test_Menon2007_gc_100 158,351.0183 (193.01) 194,411.0612 (159.09) 175,251.7939 (200.28) 7,781.0778 (237.07) 175,691.0328 (206.29) 11,186.2315 (>1000.0) 31;0 5.7061 (0.00) 100 1
test_Menon2007_gc_1000 158,551.2200 (193.25) 190,447.7717 (155.85) 173,024.2792 (197.74) 7,654.3789 (233.21) 172,899.9343 (203.01) 10,712.5479 (>1000.0) 39;0 5.7795 (0.01) 100 1
test_Menon2007_gc_10000 190,708.9425 (232.45) 220,928.5488 (180.79) 203,541.0780 (232.61) 5,633.1978 (171.63) 203,000.2404 (238.36) 6,994.1610 (653.73) 34;1 4.9130 (0.00) 100 1
test_Menon2007_100000 383,404.3663 (467.31) 455,564.3166 (372.80) 414,261.7952 (473.43) 11,750.0035 (357.99) 416,267.9244 (488.77) 14,339.3345 (>1000.0) 29;2 2.4139 (0.00) 100 1
test_Menon2007_gc_100000 553,528.2979 (674.67) 612,579.5824 (501.28) 589,381.8253 (673.56) 12,475.3440 (380.09) 591,376.0860 (694.38) 20,012.2525 (>1000.0) 41;0 1.6967 (0.00) 100 1
test_Menon2007_1000000 4,315,950.7674 (>1000.0) 4,724,885.6828 (>1000.0) 4,541,078.9111 (>1000.0) 78,864.8458 (>1000.0) 4,556,112.3064 (>1000.0) 95,044.0561 (>1000.0) 27;2 0.2202 (0.00) 100 1
test_Menon2007_gc_1000000 4,433,835.5474 (>1000.0) 4,758,905.0171 (>1000.0) 4,681,090.0351 (>1000.0) 44,917.3294 (>1000.0) 4,689,296.8319 (>1000.0) 40,901.6696 (>1000.0) 18;5 0.2136 (0.00) 100 1
test_Menon2007_10000000 43,523,913.6934 (>1000.0) 46,318,977.8283 (>1000.0) 44,814,870.3438 (>1000.0) 471,639.1777 (>1000.0) 44,862,246.4503 (>1000.0) 533,658.3557 (>1000.0) 31;4 0.0223 (0.00) 100 1
test_Menon2007_gc_10000000 43,815,474.4457 (>1000.0) 46,419,028.8314 (>1000.0) 44,975,783.0699 (>1000.0) 494,251.3337 (>1000.0) 44,988,255.3729 (>1000.0) 660,520.5536 (>1000.0) 28;1 0.0222 (0.00) 100 1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
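Benchmarks in this format are typically produced with pytest-benchmark; a minimal sketch of how such parametrised test cases could be written is shown below. The parametrisation, sizing, and test naming are assumptions, not the actual benchmark suite:

```python
import numpy as np
import pytest

from colour_demosaicing import demosaicing_CFA_Bayer_Menon2007


@pytest.mark.parametrize('samples', [10, 100, 1000, 10000])
def test_Menon2007(benchmark, samples):
    # Roughly square, even-sided CFA with about `samples` values;
    # purely illustrative input data.
    side = max(2 * (int(np.sqrt(samples)) // 2), 2)
    CFA = np.random.random((side, side))

    benchmark(demosaicing_CFA_Bayer_Menon2007, CFA)
```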
@MaxSchambach Considering this and the data from Thomas, is adding just the del statements actually solving the problem for you on your machine?
[...] is adding just the del statements actually solving the problem for you on your machine?
Yes, that seems to already do the job! I've updated this PR accordingly and removed the gc.collect() statements.
Thanks @MaxSchambach, this is merged!
Very nice, thanks!
I will add you to the contributors file when I merge the current working branch into develop. Thanks again!
Thanks! And no problem.
Summary
Added explicit deletion of temporary variables to reduce the memory consumption of the demosaicing methods.
Description
In particular for the Menon (2007) demosaicing method, I noticed a huge memory consumption. In one case I was not able to demosaic a large image (around 7000 x 5000 pixels) with 14 GB of available memory. By introducing explicit deletion, I was able to bring it down to about 5 GB in that case. I've added the garbage collection to the other demosaicing methods as well, even though the gain in memory usage is not as big there.
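In schematic form, the change amounts to dropping references to large intermediates as soon as they are no longer needed; the function and variables below are purely illustrative, not the actual diff:

```python
import gc

import numpy as np


def demosaic_sketch(CFA):
    # Purely illustrative: build a few large intermediates, then release
    # them as soon as they are no longer needed so their buffers can be
    # reclaimed before the next allocations.
    A = CFA * 2.0
    B = CFA + A
    C = A - B

    del A, B
    gc.collect()  # as originally proposed; later dropped in favour of del alone

    return C * 0.5
```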