
Alter handling of huge text files #3097

Open idodeclare opened 4 years ago

idodeclare commented 4 years ago

Issue #3090 asks whether matches might be excerpted in results from the search API to avoid a performance-killing situation such as returning a line that is a gigabyte in length. There is also the open #2732 to convert SearchEngine to use the modern Lucene unified highlighter. With that PR's new HitFormatter, it would be fairly straightforward to refactor to use the same excerpting that LineHighlight applies for UI search.

Huge text files present additional problems, however, for OpenGrok.

The Lucene uhighlight API ultimately makes it impossible to avoid loading the full, indexed source content into memory. In some places the API permits content to be represented as a CharSequence, which would allow (with a bit of work) lazily loading source content; but the final formatting via Lucene's PassageFormatter is done with a method, format(Passage[] passages, String content), which demands a String.
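
For illustration, here is a minimal sketch of a custom PassageFormatter (the class name and context-window size are my own, not from any PR); the signature alone shows why the whole source content must already be in memory as a String before formatting can happen:

```java
import org.apache.lucene.search.uhighlight.Passage;
import org.apache.lucene.search.uhighlight.PassageFormatter;

/**
 * Illustrative excerpting formatter. Lucene hands the complete indexed
 * content to format() as a String, so the full text is in memory no
 * matter how small the excerpt we produce is.
 */
public class ExcerptingFormatter extends PassageFormatter {
    private static final int CONTEXT_CHARS = 80; // illustrative context window

    @Override
    public Object format(Passage[] passages, String content) {
        StringBuilder sb = new StringBuilder();
        for (Passage p : passages) {
            int start = Math.max(0, p.getStartOffset() - CONTEXT_CHARS);
            int end = Math.min(content.length(), p.getEndOffset() + CONTEXT_CHARS);
            sb.append(content, start, end).append(" ... ");
        }
        return sb.toString();
    }
}
```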

Keep in mind as well that Lucene postings have an int offset datatype, so content past an offset of 2,147,483,647 cannot be indexed in a way that lets OpenGrok present context, since OpenGrok chooses to store postings-with-offsets precisely so that later context presentation does not re-analyze files. (Currently OpenGrok does not limit the number of characters read, which results in issues like #2560. The latest JFlex 1.8.x has revised its yychar to be a long, but Lucene would still have an int limit for offsets.)
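
As a hedged illustration of what capping the characters read could look like (this is not existing OpenGrok code; the wrapper and its cap are hypothetical), a bounded reader could stop producing characters before token offsets can overflow Lucene's int limit:

```java
import java.io.FilterReader;
import java.io.IOException;
import java.io.Reader;

/**
 * Illustrative reader wrapper that reports EOF after a fixed number of
 * characters, so downstream tokenizers never produce offsets past the cap.
 */
public class BoundedReader extends FilterReader {
    private final long limit;
    private long count;

    public BoundedReader(Reader in, long limit) {
        super(in);
        this.limit = limit;
    }

    @Override
    public int read() throws IOException {
        if (count >= limit) {
            return -1; // pretend EOF once the cap is reached
        }
        int c = super.read();
        if (c != -1) {
            count++;
        }
        return c;
    }

    @Override
    public int read(char[] cbuf, int off, int len) throws IOException {
        if (count >= limit) {
            return -1;
        }
        int n = super.read(cbuf, off, (int) Math.min((long) len, limit - count));
        if (n > 0) {
            count += n;
        }
        return n;
    }
}
```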

For huge text files, then, I can think of a few possible choices:

or

or

I generally think the second option might be satisfactory. Is there truly much utility to excerpting from a 1GB JSON file? What does "context" mean within such a file? I don't expect implementing that option would be too difficult. I suppose it could be done by reclassifying huge Genre.PLAIN files as Genre.DATA, while still using the plain-text analyzer and, where applicable, a language-specific symbol tokenizer, and also avoiding XREF generation (by virtue of being Genre.DATA).
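
A rough, self-contained sketch of that idea (the class, the local Genre enum, and the size threshold are illustrative stand-ins, not OpenGrok's actual API):

```java
/**
 * Illustrative policy: a plain-text file over a size threshold is treated
 * as data, so no xref is generated, while symbol tokenization could still
 * be applied elsewhere in the pipeline.
 */
public class HugeTextPolicy {
    /** Mirrors the idea of OpenGrok's analyzer Genre, reduced for this sketch. */
    enum Genre { PLAIN, DATA }

    // Illustrative threshold; a real cutoff would presumably be configurable.
    static final long HUGE_TEXT_BYTES = 100L * 1024 * 1024;

    static Genre classify(Genre detected, long fileSizeBytes) {
        if (detected == Genre.PLAIN && fileSizeBytes > HUGE_TEXT_BYTES) {
            return Genre.DATA; // skip xref generation for huge text files
        }
        return detected;
    }
}
```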

tarzanek commented 4 years ago

I vote for the second option too. Such huge files are no good for humans anyway (humans would filter them out regardless), so why bother with them? No gain, very narrow use case; they should generally just get out of the way (exactly as stated in the https://github.com/oracle/opengrok/issues/1646#issuecomment-613583802 comment). This is a source code engine, not a heuristics engine, so we need to keep our focus in mind. Big files are data, not source code (so another option is to have a size limit on the source code analyzers and degrade the analyzer if the limit is hit). So I agree with the PLAIN -> DATA option too.

idodeclare commented 4 years ago

OK, I'm glad to get agreement.