Since we use 256 MB for a memory chunk, people are reporting exceeding main memory during computation of pairwise distances. The number of pairs in these cases was massive (e.g., hundreds of thousands). We should incorporate the memory requirements of feature computation when setting the chunksize automatically. At the moment only the dimension of the output is used.
Either we solve this by setting a smaller chunk (e.g., 8 MB, which is still sufficient for NumPy's OpenMP parallelization), or we estimate the memory requirement per frame.
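The per-frame approach could look something like the sketch below: pick the chunksize so that input coordinates, the pairwise-distance output, and a same-size temporary all fit in the budget. The function name and the exact cost model are assumptions for illustration, not the library's actual API.

```python
def estimate_chunksize(n_atoms, n_pairs, max_bytes=8 * 1024**2, itemsize=8):
    """Return how many frames fit into max_bytes when both the raw
    coordinates and the pairwise-distance intermediates are counted.

    Hypothetical helper: the cost model (one temporary the size of the
    output) is an assumption about how the feature is computed.
    """
    # per-frame cost: xyz coordinates (n_atoms * 3) plus the distance
    # output (n_pairs) plus one same-size scratch array
    bytes_per_frame = itemsize * (n_atoms * 3 + 2 * n_pairs)
    # always process at least one frame, even if it exceeds the budget
    return max(1, max_bytes // bytes_per_frame)
```

With hundreds of thousands of pairs, this yields a chunksize of only a few frames, which is exactly the behavior the current output-dimension-only heuristic misses.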
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.