Closed: SepidehAlassi closed this 5 years ago
@flavens Thank you for reviewing this. I asked @tobiasschweizer to also check the computation time. If it turns out to be critical, I might need to optimise the code a bit.
The ontology cache does not work correctly when several async search requests are made: the result is not fully written back to the cache before new requests for the same ontology are issued. This leads to unnecessary calls to the Knora ontology route.
The easiest way to solve this problem would be to cache all relevant ontologies upfront, at the very beginning. But I would really prefer a solution where the ontology cache handles this by itself.
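One common way for a cache to handle this by itself is to store the pending `Promise` rather than the resolved value, so that concurrent requests for the same ontology share a single in-flight call. A minimal sketch, with `fetchOntology`, the `Ontology` shape, and the counter all hypothetical stand-ins rather than the actual Knora API:

```typescript
type Ontology = { iri: string; classes: string[] };

// counts actual fetches, to illustrate that concurrent callers share one request
let fetchCount = 0;

// stand-in for the real HTTP call to the Knora ontology route
function fetchOntology(iri: string): Promise<Ontology> {
    fetchCount++;
    return Promise.resolve({ iri, classes: [] });
}

// cache the pending Promise itself, not the resolved value, so a second
// request arriving before the first response reuses the in-flight call
const ontologyCache = new Map<string, Promise<Ontology>>();

function getOntology(iri: string): Promise<Ontology> {
    let pending = ontologyCache.get(iri);
    if (pending === undefined) {
        pending = fetchOntology(iri);
        ontologyCache.set(iri, pending);
    }
    return pending;
}
```

With this pattern, a second call to `getOntology` issued before the first response arrives resolves from the same Promise instead of hitting the ontology route again.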
@tobiasschweizer can you please review this again? Now that the text search is faster, I believe we can safely skip adding a checkbox.
@tobiasschweizer I applied your suggestions, can you please check again?
[x] forward full text search to the connected repositories (Newton & Leibniz)
[x] receive search results from 3rd party repositories and concatenate them with the BEOL results
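The two checklist items above could be sketched as a concurrent fan-out followed by concatenation. The repository names come from the thread; `searchRepo`, its signature, and the result shape are assumptions for illustration, not the actual BEOL client:

```typescript
type SearchResult = { repo: string; label: string };

// stand-in for a per-repository full-text search; the real version would
// issue an HTTP request to the repository's search endpoint
async function searchRepo(repo: string, term: string): Promise<SearchResult[]> {
    return [{ repo, label: `${repo} match for "${term}"` }];
}

// forward the search to BEOL and the connected repositories concurrently,
// then concatenate the 3rd-party results onto the BEOL results
async function fullTextSearch(term: string): Promise<SearchResult[]> {
    const [beol, newton, leibniz] = await Promise.all([
        searchRepo("BEOL", term),
        searchRepo("Newton", term),
        searchRepo("Leibniz", term),
    ]);
    return beol.concat(newton, leibniz);
}
```

Running the searches via `Promise.all` keeps the total latency close to that of the slowest repository rather than the sum of all three.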