Open · jamona opened this issue 9 years ago
I'm not sure if I understand you correctly, but self.transform(X) will give you the features detected in X when the learned feature detectors are applied.
Thanks for the answer. I already tried this, but somehow I can't make sense of the result. Sorry.
I selected 50 samples, max. 5 patches, and 5 features.
My original data consists of 50 samples of shape (10, 22050) each, which leads to a list of patches of shape (5, 10, 22050) per sample. After your reshaping step I have patches of shape (250, 220500).
Using fit_transform I get a feature array of shape (250, 5). How can I interpret this? I have trouble getting it back into the original dimensional space; basically it should have the dimensions (10, 22050), which I also initially chose as patch_width and patch_height.
Could you please clarify how the dimensions of the features must be interpreted? As far as I can see, the array has the dimensions n_samples * n_max_patches * n_features, but how can I revert this back into the original dimensional space?
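For reference, the shape bookkeeping described above can be sketched with NumPy. The patch size below is a scaled-down stand-in for (10, 22050) so the arrays stay small; the shape pattern is the same:

```python
import numpy as np

# Scaled-down stand-ins for the shapes in the post: in the thread,
# patch_height, patch_width = 10, 22050 (so n_pixels = 220500);
# here we shrink the patch so the demo stays lightweight.
n_samples, n_patches = 50, 5          # 50 samples, max. 5 patches each
patch_height, patch_width = 10, 20    # stand-in for (10, 22050)
n_pixels = patch_height * patch_width

# One list entry of shape (n_patches, patch_height, patch_width) per sample.
patches = [np.zeros((n_patches, patch_height, patch_width))
           for _ in range(n_samples)]

# The reshaping step: stack all patches and flatten each into a row vector.
X = np.vstack(patches).reshape(n_samples * n_patches, n_pixels)
print(X.shape)  # (250, 200) -- would be (250, 220500) with the real patch size

# fit_transform maps each flattened patch row to n_features activations, so
# a (250, 5) output holds one 5-dim feature vector per patch; it does not
# live in the original (patch_height, patch_width) pixel space.
```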
Both the input and output of SparseFiltering.transform() are 2D: the first dimension is the number of data points (n_samples), and the second is the number of pixels (for the input) or features (n_features, for the output). The extraction of patches is not part of SparseFiltering; you can adapt it for your application. What is shown in the example is basically fitting on random subpatches. For an actual application, you would want to apply the learned feature extractors to all subpatches of an input with a given stride, basically as in a convolutional neural network. This gives you a 4D output (n_samples, n_patches_horizontal, n_patches_vertical, n_features), where the second and third dimensions depend on the size of your input, the stride, etc.
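A minimal sketch of that convolution-style application. The sliding-window extractor and the fixed linear map standing in for a fitted estimator are illustrative assumptions, not part of the library; the point is only the shape flow from 2D patch rows to a 4D feature volume:

```python
import numpy as np

def sliding_patches(img, ph, pw, stride):
    """Extract all (ph, pw) subpatches of a 2D image with the given
    stride, flattened to rows, plus the patch-grid shape."""
    H, W = img.shape
    nh = (H - ph) // stride + 1
    nw = (W - pw) // stride + 1
    rows = np.empty((nh * nw, ph * pw))
    k = 0
    for i in range(nh):
        for j in range(nw):
            rows[k] = img[i * stride:i * stride + ph,
                          j * stride:j * stride + pw].ravel()
            k += 1
    return rows, (nh, nw)

# Stand-in for a fitted estimator: a fixed linear map from n_pixels to
# n_features, playing the role of SparseFiltering.transform on 2D input.
rng = np.random.default_rng(0)
ph, pw, n_features = 4, 4, 5
W_ = rng.standard_normal((n_features, ph * pw))
transform = lambda X: X @ W_.T             # (n_patches, n_features)

images = rng.standard_normal((3, 12, 12))  # n_samples = 3 toy images
feats = []
for img in images:
    rows, (nh, nw) = sliding_patches(img, ph, pw, stride=2)
    feats.append(transform(rows).reshape(nh, nw, n_features))

# (n_samples, n_patches_horizontal, n_patches_vertical, n_features)
features = np.stack(feats)
print(features.shape)  # (3, 5, 5, 5)
```

The second and third dimensions here come directly from the image size, patch size, and stride: (12 - 4) // 2 + 1 = 5 patches along each axis.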
Hello,
Thanks for the cool implementation of the sparse-filtering method. I just have one request: Is it possible to illustrate the features instead of the feature detectors?
As I understood the code, estimator.w_[i].reshape(patch_width, patch_height) gives me the feature detector. In order to visualize the real feature detected (e.g. an "edge" in an image), what can I do? Is something like an inverse transformation possible?
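One crude option, sketched here as an assumption rather than anything the library provides: if the nonlinearity of sparse filtering is ignored and the feature map is treated as purely linear, a single feature activation can be back-projected into pixel space with the pseudo-inverse of the weight matrix. Here w_ is a random stand-in for estimator.w_, so the result is only a rough visualization:

```python
import numpy as np

rng = np.random.default_rng(1)
patch_height, patch_width, n_features = 8, 8, 5
n_pixels = patch_height * patch_width

# Random stand-in for estimator.w_ (shape: n_features x n_pixels).
w_ = rng.standard_normal((n_features, n_pixels))

# Feature detector i, viewable directly as an image:
detector_0 = w_[0].reshape(patch_height, patch_width)

# Crude linear "inverse": back-project an activation vector into pixel
# space via the pseudo-inverse. This ignores the soft-absolute
# nonlinearity, so it only hints at what feature 0 responds to.
f = np.zeros(n_features)
f[0] = 1.0                      # activate feature 0 alone
x_hat = np.linalg.pinv(w_) @ f  # shape (n_pixels,)
print(x_hat.reshape(patch_height, patch_width).shape)  # (8, 8)
```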