Thanks for the question!
The memory issue is indeed a problem when dealing with large datasets, and it is actually a general problem for dataset-processing methods such as coreset selection and clustering. It can be addressed from both optimization and system perspectives. There is quite a lot of work on efficient clustering for large-scale datasets that you can look to for insights. For example, the original demanding 10Mx10M matrix can be approximated through smaller matrices distributed across multiple nodes.
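As a rough illustration (not code from this repo, and the shapes and function name below are just placeholders), pairwise similarities can also be computed in row chunks so that the full N x N matrix is never materialized at once:

```python
# Rough sketch: compute cosine similarities in row chunks so the
# full N x N matrix is never held in memory at once.
import numpy as np

def chunked_cosine_similarity(features, chunk_size=2048):
    """Yield (row_start, sims) where sims has shape (chunk_size, N)."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normalized = features / np.clip(norms, 1e-12, None)
    n = normalized.shape[0]
    for start in range(0, n, chunk_size):
        chunk = normalized[start:start + chunk_size]
        # Peak memory is chunk_size * N floats instead of N * N.
        yield start, chunk @ normalized.T

# Reduce each chunk immediately (e.g., nearest-neighbor indices)
# instead of keeping the whole similarity matrix around.
features = np.random.randn(20_000, 256).astype(np.float32)
nearest = np.empty(len(features), dtype=np.int64)
for start, sims in chunked_cosine_similarity(features):
    rows = np.arange(sims.shape[0])
    sims[rows, start + rows] = -np.inf  # exclude self-similarity
    nearest[start:start + sims.shape[0]] = sims.argmax(axis=1)
```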
In this work, we mainly deal with CIFAR and ImageNet, where the dataset sizes are no larger than about 1M examples. Extending the method to larger scales such as ImageNet-21K would also be significantly meaningful.
Thank you @vimar-gu! I'll look through works on these (1) optimization and (2) system perspectives. Do you also have some papers / repos in mind that you like for these topics?
As I'm not very familiar with this area, I can only give limited advice. You can refer to papers like:
There is also a Python package you can use for dealing with huge-scale data: Vaex.
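A minimal sketch of how Vaex handles out-of-core data (the file and column names below are just placeholders):

```python
# Minimal Vaex sketch; file and column names are hypothetical.
import vaex

# vaex.open memory-maps HDF5/Arrow files, so the table does not
# need to fit in RAM.
df = vaex.open('features.hdf5')

# Expressions are evaluated lazily, in chunks, when a result is requested.
print(df['embedding_norm'].mean())

# Filtering is also lazy; export streams the selection back to disk.
subset = df[df['embedding_norm'] > 1.0]
subset.export_hdf5('filtered.hdf5')
```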
Thanks for the great work and code!
I was going over the code and realized that the bin creation relies on an N x N similarity matrix, where N is the number of examples (code line).
That would lead to memory issues when scaling to large datasets with 10M or 100M examples, because it would need a matrix of size 10Mx10M or 100Mx100M.
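For reference, a rough back-of-the-envelope estimate of what a dense matrix like that would cost, assuming float32 entries:

```python
# Back-of-the-envelope memory for a dense float32 similarity matrix.
def dense_sim_matrix_bytes(n, bytes_per_entry=4):
    return n * n * bytes_per_entry

for n in (1_000_000, 10_000_000, 100_000_000):
    print(f"N = {n:>11,}: {dense_sim_matrix_bytes(n) / 1e12:,.0f} TB")
# N =   1,000,000: 4 TB
# N =  10,000,000: 400 TB
# N = 100,000,000: 40,000 TB
```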
Have you thought about ways to address those use cases?