We need to show, for each dataset and facet, which of the selected codelists are in use, and to provide some example codes for context. The draft query isn't guaranteed to return all examples (indeed we only want some). Worse, it could give the impression that a selected code was not present when it was: the collapsed top hits per dataset (e.g. the top 3) could all be for the first code.
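For reference, a minimal sketch of the kind of draft query being discussed, assuming an observation index with a `dataset` field and a `<dimension>` field holding the code (both are placeholders, not the real mapping):

```json
{
  "query": {
    "terms": { "<dimension>": ["code-1", "code-2"] }
  },
  "collapse": {
    "field": "dataset",
    "inner_hits": { "name": "examples", "size": 3 }
  }
}
```

The 3 inner hits per dataset are just the top-scoring observations, so they can all match the same code, hence the false impression described above.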
We can enumerate the dimension values using aggregations instead of `collapse`. This returns, for each dataset and each dimension, the count of observations by dimension-value; for each dataset, typically only one dimension would have results. Although it will be better at enumerating codes than the `collapse` version (since it's grouping), it still won't guarantee to find example codes in all of the codelists: we'd take the top hits per dimension (e.g. 10), which could all come from the first codelist. We could set the `size` to an exhaustively high value and take e.g. the first 3 codes by codelist in Clojure, but this could be quite slow.
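A sketch of the aggregation approach, again with placeholder field names; in practice there would be one sub-aggregation per dimension:

```json
{
  "size": 0,
  "query": {
    "terms": { "<dimension>": ["code-1", "code-2"] }
  },
  "aggs": {
    "by_dataset": {
      "terms": { "field": "dataset" },
      "aggs": {
        "by_dimension_value": {
          "terms": { "field": "<dimension>", "size": 10 }
        }
      }
    }
  }
}
```

Each `by_dimension_value` bucket carries a `doc_count`, i.e. the observation count per dimension-value; raising the inner `size` is what trades completeness against speed.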
Alternatively, we could denormalise the codelist onto the dimension-values in the observation index. This would allow us to add a second level of `collapse` (on the `inner_hits` of the dataset collapse) by e.g. `"field": "<dimension>.scheme"`, yielding the top X codes by codelist by dataset.
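Assuming the codelist URI is denormalised into a `<dimension>.scheme` field (and the code itself into an assumed `<dimension>.code` sub-field), the second-level collapse could look like this, since Elasticsearch allows collapsing again inside `inner_hits`:

```json
{
  "query": {
    "terms": { "<dimension>.code": ["code-1", "code-2"] }
  },
  "collapse": {
    "field": "dataset",
    "inner_hits": {
      "name": "by_codelist",
      "collapse": { "field": "<dimension>.scheme" },
      "size": 3
    }
  }
}
```

Note that the second-level collapse returns one representative observation per codelist (Elasticsearch does not support `inner_hits` at the second level), so this gives up to 3 distinct codelists per dataset with one example code each.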