gsuberland opened this issue 2 years ago
Would it be possible to parallelise the indexing process, or at least parts of it, to improve the overall speed?
Running this over a 6.4GB repository containing 275,000 files, on Windows, the indexing takes over an hour, yet the process is bottlenecked on neither CPU nor disk I/O. Running two index commands against two repos on the same NVMe SSD in parallel results in disk I/O of around 20% and barely taxes one core, with memory usage of only around 400MB per process.
I suspect that sequentially opening each file, reading and processing its contents, storing the results, and only then moving on to the next file causes heavy throughput limitations when there are many thousands of small files.
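To illustrate the kind of overlap I mean (just a sketch, not cindex's actual code; the `process` function and file list are placeholders), a pool of goroutines could keep several reads in flight at once:

```go
package main

import (
	"log"
	"os"
	"sync"
)

// process stands in for the per-file indexing work (hypothetical).
func process(path string, data []byte) {
	log.Printf("indexed %s (%d bytes)", path, len(data))
}

func main() {
	paths := make(chan string)

	// A handful of workers keeps multiple small-file reads in flight,
	// instead of waiting on one open/read/process cycle at a time.
	var wg sync.WaitGroup
	const workers = 8
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range paths {
				data, err := os.ReadFile(p)
				if err != nil {
					log.Printf("read %s: %v", p, err)
					continue
				}
				process(p, data)
			}
		}()
	}

	// Placeholder file list; in practice this would be fed by the
	// directory walk the indexer already performs.
	for _, p := range []string{"a.txt", "b.txt"} {
		paths <- p
	}
	close(paths)
	wg.Wait()
}
```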
I don't know enough Go to implement this myself, unfortunately. Is this something you could potentially investigate?
I was thinking about this a bit. One potential solution without much code change is to make it embarrassingly parallel, i.e. run many instances of the cindex executable on subsets of the repo (each with a separate output index). It's a bit gross, but it would be a way to do it without touching the Go code.
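As a rough sketch of that approach (the subtree paths here are made up, and it assumes cindex honours the CSEARCHINDEX environment variable for the index file location, which google/codesearch reads before falling back to ~/.csearchindex):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"sync"
)

func main() {
	// Hypothetical subtrees of the repository, indexed independently.
	subsets := []string{`C:\repo\src`, `C:\repo\docs`, `C:\repo\vendor`}

	var wg sync.WaitGroup
	for i, dir := range subsets {
		wg.Add(1)
		go func(i int, dir string) {
			defer wg.Done()
			cmd := exec.Command("cindex", dir)
			// Give each cindex instance its own output index file via
			// the CSEARCHINDEX environment variable.
			cmd.Env = append(os.Environ(), fmt.Sprintf("CSEARCHINDEX=csearchindex-%d", i))
			cmd.Stdout = os.Stdout
			cmd.Stderr = os.Stderr
			if err := cmd.Run(); err != nil {
				log.Printf("cindex %s: %v", dir, err)
			}
		}(i, dir)
	}
	wg.Wait()
}
```

The gross part is the query side: csearch would then have to be run once per index (again via CSEARCHINDEX) and the results merged.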