yarondbb closed this issue 10 years ago
@genadyp it is all yours :)
Using this version under extreme conditions (150K files; monitor/index/write-to-file running all the time), process memory holds stable at 195M over days.
Good work. Thank you! A few general comments:
Memory refactor:
Problem: memory kept rising during monitoring/indexing/write-to-file.
Investigation: Ruby created a lot of temporary objects, which made the internally allocated heap arrays grow. Not all temporary heaps used while monitoring/indexing/writing to file were freed, probably because some still-living objects on those arrays prevented releasing them.
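To make that growth visible, here is a minimal probe sketch (not project code) that watches heap pages via GC.stat while churning temporary objects. The GC.stat key names vary by Ruby version; the ones used below exist in Ruby 2.2 and later.

```ruby
# Hypothetical probe, not from the project: watch Ruby's heap while
# creating many short-lived objects.
def report_heap(label)
  stats = GC.stat
  printf("%-12s pages=%-6d live=%-9d total_allocated=%d\n",
         label,
         stats[:heap_allocated_pages],    # pages the VM has claimed
         stats[:heap_live_slots],         # currently live objects
         stats[:total_allocated_objects]) # cumulative allocations
end

report_heap('before')
# Churn: each iteration builds throwaway strings and arrays, the kind of
# temporaries that made the heap arrays grow during monitor/index/write.
100_000.times { |i| "tmp-#{i}".split('-') }
report_heap('after churn')
GC.start
report_heap('after GC')  # pages typically stay claimed even after a full GC
```

Pages usually stay claimed after a full GC when a few live objects are scattered across them, which matches the observation that not all heaps were freed.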
Solution: write code that is much more memory-frugal, so that a minimum of temporary objects is created:
a. Separate monitoring from indexing. Doing these two tasks in parallel uses more memory and more time.
b. Replace all each loops with Enumerator loops. The each loops created a temporary input array for each iteration, which can cause a huge memory load (see the first sketch after this list).
c. Synchronize monitoring and write-to-file: after each monitor cycle, if ContentData changed, write it to file. This prevents writing in parallel, which increases memory load. The legacy write-to-file thread is kept as well.
d. Write to file in chunks and force GC in between (see the second sketch after this list).
e. Force GC after each directory is monitored.
f. Force GC after each directory is indexed.
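For point (b), a minimal sketch of the idea, assuming the hot path listed whole directory trees into arrays. The names walk_eager and walk_lazy are hypothetical, not the project's API:

```ruby
# Eager: Dir.glob returns the entire tree as one array, and each chained
# select/map call allocates another full-size temporary array.
def walk_eager(root)
  Dir.glob(File.join(root, '**', '*'))
     .select { |path| File.file?(path) }
     .map    { |path| [path, File.size(path)] }
end

# Lazy: an Enumerator streams entries, so only one path and one pair are
# live at a time; nothing accumulates unless the caller collects it.
def walk_lazy(root)
  Enumerator.new do |yielder|
    stack = [root]
    until stack.empty?
      dir = stack.pop
      Dir.foreach(dir) do |name|  # yields names one by one, no full array
        next if name == '.' || name == '..'
        path = File.join(dir, name)
        if File.directory?(path)
          stack << path
        else
          yielder << [path, File.size(path)]
        end
      end
    end
  end
end

walk_lazy(Dir.pwd).first(5).each { |path, size| puts "#{path}: #{size}" }
```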
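For point (d), a minimal sketch of chunked writing with a forced GC between chunks. CHUNK_SIZE and write_in_chunks are hypothetical names, and the real ContentData serialization format surely differs:

```ruby
# Hypothetical sketch: persist a large snapshot in slices, forcing a
# collection between slices so each slice's temporaries are reclaimed
# before the next one is built.
CHUNK_SIZE = 10_000

def write_in_chunks(lines, path)
  File.open(path, 'w') do |file|
    lines.each_slice(CHUNK_SIZE) do |chunk|
      file.write(chunk.join("\n"))
      file.write("\n")
      GC.start  # point (d): reclaim this chunk's temporaries now
    end
  end
end

# Stand-in for a serialized ContentData snapshot (the real format differs).
lines = Array.new(100_000) { |i| "path-#{i},size-#{i}" }
write_in_chunks(lines, 'content_data.txt')
```

Since each_slice also works on an Enumerator, the lazy walk from the previous sketch could feed this writer directly without ever materializing the full listing.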