Kentico / xperience-algolia

Enables the creation of Algolia search indexes and the indexing of Xperience content tree pages using a code-first approach.
https://www.kentico.com
MIT License

Feature: Processing Large Documents in Chunks #19

Closed · seangwright closed this issue 1 year ago

seangwright commented 2 years ago

Motivation

Algolia's documentation recommends breaking large documents into chunks, both to improve the relevancy of search results and to avoid hitting the record size limit of a plan.

If we use the Xperience crawler for a page, crack a PDF, or have many structured content fields with blocks of text, we might want to chunk this content into multiple records.

Currently, this library creates one Algolia search record for each Page in the Content Tree, so chunking would have to be a fully custom solution outside of this integration.

Proposed solution

I'm not sure of the best way to introduce this feature at the moment, but my initial idea would be to start where the JObject is created from the page's content.

That JObject could be changed to an IList<JObject>, and everything that populates the collection could be updated to work with sets/lists instead of single items.
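A minimal sketch of that idea, assuming a hypothetical GetJObjects method in place of the current single-object creation (the method name, field names, and splitting logic here are illustrative, not the library's actual API):

```csharp
using System;
using System.Collections.Generic;
using Newtonsoft.Json.Linq;

public class ChunkedRecordBuilder
{
    // Hypothetical replacement for the current single-JObject creation:
    // one page can now produce multiple Algolia records.
    public IList<JObject> GetJObjects(string pageUrl, string content, int maxChunkLength = 5000)
    {
        var records = new List<JObject>();
        var order = 0;

        foreach (var chunk in SplitIntoChunks(content, maxChunkLength))
        {
            records.Add(new JObject
            {
                // A shared attribute ties each chunk back to its source page
                // and can later serve as the index's attributeForDistinct.
                ["pageUrl"] = pageUrl,
                ["chunkOrder"] = order++,
                ["content"] = chunk
            });
        }

        return records;
    }

    // Naive fixed-length splitter; a real implementation would likely
    // split on sentence or paragraph boundaries instead.
    private static IEnumerable<string> SplitIntoChunks(string text, int maxLength)
    {
        for (var i = 0; i < text.Length; i += maxLength)
        {
            yield return text.Substring(i, Math.Min(maxLength, text.Length - i));
        }
    }
}
```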

Or, it might be better to add a new indexing attribute to indicate a field should be chunked and then provide a method for creating the chunks.
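If the attribute route is preferable, it could look something like this rough sketch (the [Chunked] attribute is invented for illustration and does not exist in this library):

```csharp
using System;

// Hypothetical marker attribute; the indexing pipeline would detect it
// and emit one search record per chunk of the decorated property.
[AttributeUsage(AttributeTargets.Property)]
public class ChunkedAttribute : Attribute
{
    public int MaxChunkLength { get; }

    public ChunkedAttribute(int maxChunkLength = 5000)
    {
        MaxChunkLength = maxChunkLength;
    }
}

public class ArticleSearchModel
{
    public string Title { get; set; }

    // Long crawled or cracked content is split into multiple records.
    [Chunked(maxChunkLength: 3000)]
    public string Content { get; set; }
}
```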

Additional context

This would also require configuring an attribute for Algolia's distinct feature on the index, so that search results return only the best-matching chunk per page.
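For example, if each chunk record carries a shared pageUrl attribute (as in the sketch above), the raw index settings would use Algolia's documented attributeForDistinct and distinct settings:

```json
{
  "attributeForDistinct": "pageUrl",
  "distinct": true
}
```

With distinct enabled, Algolia deduplicates hits on the attributeForDistinct value and returns one chunk per page.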

kentico-ericd commented 2 years ago

@seangwright I've pushed a potential solution for this in https://github.com/Kentico/xperience-algolia/pull/26. Can you please take a look and provide some feedback?

Currently, the entire "splitting" process is left up to developers to implement, and there is no new attribute to indicate which properties should be split. Is that acceptable, or should there be some default behavior?

seangwright commented 2 years ago

Thanks! Yup - I'll take a look on Thursday when I'm back to working on the site using Algolia.