Closed nikzasel closed 5 years ago
Thanks for the bug report, and sorry for the slow response - for some reason I didn't receive any notification that it had been submitted. I'll get back to you once I've had time to investigate.
From your code I understand that if the dictionary does not have channels, then every dimension after the spatial dimensions is treated as a signal (K) dimension (i.e. the channel and signal dimensions are reshaped into the signal dimension)?
More generally, the policy is that a multi-channel signal can be represented using a multi-channel dictionary and a single-channel sparse representation array, or using a single-channel dictionary and a multi-channel sparse representation array. When doing dictionary learning in the latter case, since a training array consisting of K signals of C channels each is equivalent to a training array consisting of KC single-channel signals, the CDL code reshapes the input as described to reduce the number of alternative training array configurations that need to be handled.
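The equivalence described above can be illustrated with a plain NumPy sketch (the array shapes and axis order here are illustrative assumptions, not the exact internal layout used by sporco):

```python
import numpy as np

# Hypothetical training array: 8x8 spatial dims, C = 3 channels, K = 5 signals
S = np.random.randn(8, 8, 3, 5)

# With a single-channel dictionary, the channel axis and the signal axis can
# be merged: K signals of C channels each are equivalent to K*C
# single-channel signals, so the channel axis collapses to size 1
S_single = S.reshape(8, 8, 1, 3 * 5)

print(S_single.shape)  # (8, 8, 1, 15)
```

The reshape only relabels axes; no data is copied or reordered, so the fifteen single-channel signals contain exactly the same samples as the original five three-channel signals.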
If so, then even if my input data is multi-channel but my dictionary is single-channel, my data will be treated as multi-signal? And in that configuration, must I supply dimK=1?
I think the paragraph above addresses the first part of this question. The answer to the second part is "no". The role of the dimK parameter is to specify whether the input array has an axis allocated to multiple input signals. It's only required in cases where the input array has dimN + 1 dimensions, making it difficult to infer whether the extra dimension represents multiple channels (i.e. multi-channel, single signal) or multiple signals (i.e. single-channel, multiple signals). This is discussed in more detail in the docs for sporco.cnvrep.CSC_ConvRepIndexing.
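A toy sketch of why the ambiguity arises and how a dimK hint resolves it (the function name and branch logic here are my own illustration, not the sporco implementation):

```python
import numpy as np

def infer_channels_signals(S, dimK, dimN=2):
    """Toy sketch (not sporco's code) of disambiguating an input array."""
    extra = S.ndim - dimN
    if extra == 0:
        # Exactly dimN dims: single-channel, single signal
        return 1, 1
    if extra == 1:
        # dimN + 1 dims: the extra axis is ambiguous, so dimK decides
        # whether it holds signals (dimK == 1) or channels (dimK == 0)
        return (1, S.shape[-1]) if dimK == 1 else (S.shape[-1], 1)
    # dimN + 2 dims: channel axis then signal axis, no ambiguity
    return S.shape[-2], S.shape[-1]

S = np.random.randn(64, 64, 3)            # dimN + 1 dims: ambiguous
print(infer_channels_signals(S, dimK=1))  # (1, 3): single-channel, 3 signals
print(infer_channels_signals(S, dimK=0))  # (3, 1): 3 channels, 1 signal
```

With dimN + 2 dimensions the roles of the trailing axes are unambiguous, which is why dimK is only needed in the dimN + 1 case.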
And if my dictionary and data are both multi-channel, then dimK=0?
Not necessarily - see above.
Those were my questions; now, about the bug.
If you supply a SINGLE-CHANNEL dictionary with a MULTI-CHANNEL, MULTI-SIGNAL image, the algorithm fails, but it does not fail if you provide a MULTI-CHANNEL dictionary with the same data.
It fails in OnlineConvBPDNDictLearn.setcoef(self, Z) at line 301. I think the problem is in the statement on lines 288-289, which reshapes Z: it reshapes only the input Z but does not reshape the initial Z.
Thanks for providing a pull request with your bug report. The proposed changes do address the specific problem you identify, but there's a deeper underlying problem that also needs to be fixed. I'm about to push some changes that apply this alternative solution.
I'm going to close this issue with the push, but if it doesn't resolve all of the problems you encountered, or if you have any further questions, please feel free to re-open this issue.
Hello, I think I've stumbled on a bug in your code, but before getting to it, could you please clarify a few things? Thank you.
These questions concern the OnlineConvBPDNDictLearn algorithm using the cupy extension.

- Sporco version: 0.1.11 (conda-forge)
- Sporco-cuda version: 0.0.3 (pip)
- Python version: 3.7.3 (conda-forge)
From your code I understand that if the dictionary does not have channels, then every dimension after the spatial dimensions is treated as a signal (K) dimension (i.e. the channel and signal dimensions are reshaped into the signal dimension)? If so, then even if my input data is multi-channel but my dictionary is single-channel, my data will be treated as multi-signal? And in that configuration, must I supply dimK=1? And if my dictionary and data are both multi-channel, then dimK=0?
Those are my questions; now, about the bug.
If you supply a SINGLE-CHANNEL dictionary with a MULTI-CHANNEL, MULTI-SIGNAL image, the algorithm fails, but it does not fail if you provide a MULTI-CHANNEL dictionary with the same data.
It fails in OnlineConvBPDNDictLearn.setcoef(self, Z) at line 301. I think the problem is in the statement on lines 288-289, which reshapes Z: it reshapes only the input Z but does not reshape the initial Z.