steveyen opened this issue 9 years ago
Would like to know more. In the past it seemed this didn't make sense for us because all of our indexes are essentially real-time indexes (whether or not that is good is a separate question). Whereas in traditional Lucene, data that has been indexed may not become searchable for some period of time, until the in-memory segment is flushed (and thus the poor man's real-time approach was to call Flush() often).
Further, all of our attempts to do an in-memory index with the correct semantics have been slower than LevelDB/RocksDB.
I was talking to Ritesh M. and he was describing the so-called "real-time indexes" in Solr (or the lack thereof).
The thought I had was to consider yet another KV storage engine for bleve, but memory-only, with cache-like semantics (it throws away entries) to try to keep memory usage bounded. A rough sketch of that eviction idea is below.
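A minimal sketch of the "cache-like semantics" part, not tied to bleve's actual KVStore interface: a memory-only map with LRU eviction that drops the oldest entries once a byte budget is exceeded. All names here (boundedStore, maxBytes, etc.) are hypothetical.

```go
package memstore

import (
	"container/list"
	"sync"
)

type entry struct {
	key, val []byte
}

// boundedStore is a memory-only KV layer that evicts least-recently-used
// entries once curBytes exceeds maxBytes.
type boundedStore struct {
	mu       sync.Mutex
	maxBytes int
	curBytes int
	order    *list.List               // LRU order, front = most recently used
	items    map[string]*list.Element // key -> element holding *entry
}

func newBoundedStore(maxBytes int) *boundedStore {
	return &boundedStore{
		maxBytes: maxBytes,
		order:    list.New(),
		items:    make(map[string]*list.Element),
	}
}

func (s *boundedStore) Set(key, val []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if el, ok := s.items[string(key)]; ok {
		e := el.Value.(*entry)
		s.curBytes += len(val) - len(e.val)
		e.val = val
		s.order.MoveToFront(el)
	} else {
		el := s.order.PushFront(&entry{key: key, val: val})
		s.items[string(key)] = el
		s.curBytes += len(key) + len(val)
	}
	// cache-like semantics: throw away the oldest entries to stay under budget
	for s.curBytes > s.maxBytes && s.order.Len() > 0 {
		oldest := s.order.Back()
		e := oldest.Value.(*entry)
		s.order.Remove(oldest)
		delete(s.items, string(e.key))
		s.curBytes -= len(e.key) + len(e.val)
	}
}

func (s *boundedStore) Get(key []byte) ([]byte, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if el, ok := s.items[string(key)]; ok {
		s.order.MoveToFront(el)
		return el.Value.(*entry).val, true
	}
	return nil, false
}
```

The trade-off, of course, is that an evicted entry means the index no longer covers that data, which is why it would need to be paired with the feed behavior described next.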
It would probably also need to be matched up with a datasource (or Feed) that knows how to favor fetching the latest, newest data first (as opposed to starting from zero with a full backfill). And/or, the (imaginary) Feed knows how to ask for only the data that the datasource (like the KV engine) has in memory already; a sketch of what that contract might look like is below.
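A rough sketch of what that imaginary Feed contract could look like, under the assumptions above; the names (Feed, StartFromLatest, ResidentKeys, Doc) are made up for illustration and don't correspond to any existing bleve API.

```go
package feed

// Doc is a minimal stand-in for a document coming off the datasource.
type Doc struct {
	ID   string
	Body []byte
}

// Feed describes a datasource that can prioritize fresh data instead of
// replaying everything from sequence zero.
type Feed interface {
	// StartFromLatest streams documents newest-first, skipping a full
	// backfill from the beginning of the datasource.
	StartFromLatest(out chan<- Doc) error

	// ResidentKeys reports which document IDs the backing datasource
	// (e.g. the memory-only KV engine) can serve without hitting disk,
	// so the index only asks for data that is already cheap to fetch.
	ResidentKeys() ([]string, error)
}
```

The design choice this expresses is that the index never forces a backfill: it either consumes new mutations as they arrive or restricts itself to whatever the datasource already holds in memory, which keeps the memory-only engine's bounded footprint meaningful.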