Open vladak opened 4 months ago
Rather than trying to terminate and clean up the `writer.addDocument()` call, it would be easier to check the term count in certain fields and skip the document altogether if the count is above a given threshold.
Another idea is to trim the number of terms.
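Both ideas above can be sketched as one guard in front of `writer.addDocument()`: count the tokens produced for a field, then either skip the document or keep only a prefix of the tokens. This is a minimal illustration, not OpenGrok code; the method name, threshold, and `List<String>` token representation are all assumptions. (For the trimming variant, Lucene also ships `LimitTokenCountFilter`/`LimitTokenCountAnalyzer` in the analysis-common module, which cap the token count inside the analysis chain itself.)

```java
import java.util.Collections;
import java.util.List;

public class TermCountGuard {
    /**
     * Decide what to index for a field: the tokens as-is when under the
     * limit, a truncated prefix when trimming is allowed, or nothing
     * (meaning "skip the document") otherwise.
     */
    static List<String> guard(List<String> tokens, int maxTerms, boolean truncate) {
        if (tokens.size() <= maxTerms) {
            return tokens; // under the threshold, index normally
        }
        // Over the threshold: either keep the first maxTerms tokens
        // or drop the field entirely so addDocument() is never called.
        return truncate ? tokens.subList(0, maxTerms) : Collections.emptyList();
    }
}
```

The skip variant is cheaper (no truncated, possibly misleading index entries), while the trim variant at least keeps the file searchable by its leading content.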
Review the code of the `JavaScriptSymbolTokenizer` class to understand what it is doing and where the time is being spent, looking for any bugs or pathological behavior that could be causing the issue.
When indexing a bunch of projects, the indexer (based on 1.13.4) had a trailing worker thread that was spinning on the CPU for an inordinate amount of time (minutes) after all the other files were finished. The stack trace looked like this (it comes from my branch while experimenting with the fix for #4549, so the thread name and line numbers do not match):
Specifically, the file in question came from https://hg.mozilla.org/mozilla-central ; it was
`testing/modules/sinon-7.2.7.js`
which contains long lists of numbers. Such files are known to cause problems. Ideally the analyzers should skip/truncate files with a huge number of terms. The indexer eventually finished, however there should be some time limit for `addFile()` or `writer.addDocument()` processing. There is already a timeout enforced for xref production via the `XrefWork` class.
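A time limit of that kind is usually implemented by submitting the work to an executor and bounding the wait on the resulting future, which is the general pattern a timeout like `XrefWork`'s follows. Below is a minimal, self-contained sketch of that pattern using only `java.util.concurrent`; the method name and the decision to interrupt the worker on timeout are assumptions, not the actual OpenGrok implementation.

```java
import java.util.concurrent.*;

public class TimedWork {
    /**
     * Run the task, returning true if it completed within the limit and
     * false if it timed out and was cancelled. A hypothetical analogue of
     * bounding addFile()/writer.addDocument() the way xref production is
     * bounded.
     */
    static boolean runWithTimeout(Runnable task, long millis) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<?> future = executor.submit(task);
            try {
                future.get(millis, TimeUnit.MILLISECONDS);
                return true;
            } catch (TimeoutException e) {
                // Interrupt the worker; the analyzer must check the
                // interrupt flag for this to actually stop the spinning.
                future.cancel(true);
                return false;
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        } finally {
            executor.shutdownNow();
        }
    }
}
```

Note the caveat in the comment: cancellation only helps if the tokenizer loop is interruptible, otherwise the thread keeps spinning even after the future is cancelled.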