pyxem / kikuchipy

Toolbox for analysis of electron backscatter diffraction (EBSD) patterns
https://kikuchipy.org
GNU General Public License v3.0

Add PCA Dictionary Indexing #638

Open · ZacharyVarley opened 1 year ago

ZacharyVarley commented 1 year ago

Description of the change

Add fast PCA dictionary indexing


hakonanes commented 1 year ago

Hi @ZacharyVarley,

thank you for opening this PR! I modified the tutorial you added to compare DI with PCA DI on my six-year-old laptop. PCA DI is about 2x faster than standard DI, although it uses much more memory, which I hope we can look into.

Please let me know when you'd like feedback on the PR!

hakonanes commented 1 year ago

Please also let me know if you have any questions regarding tests, docs, dependency control, etc. You might already be aware of our contributing guide.

ZacharyVarley commented 1 year ago

@hakonanes

The memory footprint is expected to be 4x larger than holding the uint8 dictionary patterns, as the dictionary patterns must be cast to 32-bit floats in order to calculate the covariance matrix using LAPACK for the subsequent eigenvector decomposition. I realize now that this should be avoided by chunking the dictionary covariance matrix calculation. Further, this approach is most beneficial when the dictionary and/or the pattern dimensions are large (roughly more than 100,000 patterns and/or larger than 60 x 60 pixels), which might be why only a 2x speedup was observed. I hope to have time to patch everything up in the PR this week.
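
A minimal sketch of that chunking idea, assuming the dictionary is a (n_patterns, n_pixels) uint8 array (the function name and default chunk size are illustrative, not the code in this PR):

```python
import numpy as np

def chunked_covariance(dictionary_u8, chunk_size=10_000):
    """Covariance of a uint8 dictionary, casting one chunk at a time."""
    n, d = dictionary_u8.shape  # (n_patterns, n_pixels)
    col_sum = np.zeros(d, dtype=np.float64)
    gram = np.zeros((d, d), dtype=np.float64)
    for start in range(0, n, chunk_size):
        chunk = dictionary_u8[start : start + chunk_size].astype(np.float32)
        col_sum += chunk.sum(axis=0)
        gram += chunk.T @ chunk  # accumulate X^T X chunk by chunk
    mean = col_sum / n
    # Population covariance: E[x x^T] - mean mean^T
    return gram / n - np.outer(mean, mean)

# The PCA basis then comes from an eigendecomposition of the (d, d) covariance:
# eigenvalues, eigenvectors = np.linalg.eigh(chunked_covariance(dictionary_u8))
```

This keeps the peak float32 footprint at one chunk plus the (d, d) accumulator, instead of a float32 copy of the full dictionary.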

hakonanes commented 1 year ago

I realize now that this should be avoided by chunking the dictionary covariance matrix calculation.

I haven't looked at the implementation in detail, but just assumed that the dictionary had to be in memory for PCA DI. If it doesn't, that's great! You should be able to map the calculation across chunks of the dictionary using dask.array.map_blocks(). Dask gives the user the option to control the number of workers and the amount of memory used for this parallelized computation outside of kikuchipy, which is one of its greatest benefits (kikuchipy doesn't have to consider these things).
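
For illustration, a sketch of what mapping the PCA projection over dictionary chunks could look like (shapes, chunk sizes, and the number of components are hypothetical, and the PCA basis is assumed to be precomputed; this is not the PR's implementation):

```python
import dask.array as da
import numpy as np

# Hypothetical sizes: 100,000 dictionary patterns of 60 x 60 pixels,
# flattened to rows, in chunks of 10,000 patterns
dictionary = da.zeros((100_000, 60 * 60), dtype=np.uint8, chunks=(10_000, -1))
basis = np.zeros((60 * 60, 50), dtype=np.float32)  # (n_pixels, n_components)

def project_chunk(chunk, basis):
    # Cast one chunk at a time to float32 and project onto the PCA basis
    return chunk.astype(np.float32) @ basis

# Each block is projected independently, so the full float32 dictionary
# never has to be in memory at once
scores = dictionary.map_blocks(
    project_chunk, basis=basis, dtype=np.float32, chunks=(10_000, 50)
)
scores = scores.compute(num_workers=4)  # worker/memory control stays with the user
```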

From my experience, there is no single optimum between speed and memory use of DI across different platforms and sizes of dataset and dictionary. We therefore give the user the option to control how many experimental patterns (n_per_iteration) and dictionary patterns (the chunk size of the dictionary if it is a Dask array, controlled by chunk_shape=n in calls to get_patterns()) are compared in each iteration. It would be good if similar options could be given to the user for PCA DI as well. Larger chunks are faster, but use more memory.
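
For reference, a sketch of where these two knobs sit in a standard DI workflow, using kikuchipy's small nickel demo data (the sampling resolution, chunk size, and iteration size are arbitrary picks for illustration, and exact loader signatures may differ between versions):

```python
import kikuchipy as kp
from orix.quaternion import symmetry
from orix.sampling import get_sample_fundamental

s = kp.data.nickel_ebsd_small()  # small Ni dataset shipped with kikuchipy
mp = kp.data.nickel_ebsd_master_pattern_small(projection="lambert", energy=20)

detector = kp.detectors.EBSDDetector(
    shape=s.axes_manager.signal_shape[::-1], sample_tilt=70
)
rotations = get_sample_fundamental(resolution=4, point_group=symmetry.Oh)

# chunk_shape: number of dictionary patterns simulated per Dask chunk
dictionary = mp.get_patterns(rotations, detector, energy=20, chunk_shape=2000)

# n_per_iteration: number of experimental patterns compared per iteration
xmap = s.dictionary_indexing(dictionary, n_per_iteration=500)
```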
