At the UI side there is a problem. In order to make pairs of bibliographic data elements and Solr fields, we need a list of the Solr fields. Solr offers two methods to retrieve it, and each has a drawback.
method 1: /select/?q=*:*&wt=csv&rows=0. It is fast, but it returns only those fields that are stored in the index. The fields we use for the advanced search and for counting the number of occurrences are not stored, so they are not part of the list.
method 2 (via the Luke request handler): /admin/luke?wt=json. It returns every field, but it is very slow for a large index (such as the one for K10plus).
Solution: call the Luke request handler after the optimization of the index, extract the field names, and store them in the OUTPUT_DIR along with the other results of the analyses. The user interface will check whether this file exists, and call Luke only if the file is not available.