Python-based open source ETL tools for file crawling, document processing (text extraction, OCR), content analysis (entity extraction & named entity recognition) & data enrichment (annotation) pipelines & ingestor to a Solr or Elasticsearch index & linked data graph database
When indexing a file where e.g. Tika extracts overly long metadata entries, the current exception handling for the resulting HTTP 400 error from Solr is not very helpful:

https://github.com/opensemanticsearch/open-semantic-etl/blob/f51efea6c18f2862328e68db4e90653fea13cb22/src/opensemanticetl/export_solr.py#L156

Printing the response text in addition makes it far easier to understand where the issue comes from, so I would propose adding a print statement in case of status_code >= 400:
# if bad status code, raise exception
if r.status_code >= 400:
    print('Solr {} error: {}'.format(r.status_code, r.text))
    r.raise_for_status()
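As a variation on the same idea (a sketch only, not code from export_solr.py), the response text could instead be attached to the raised exception, so it surfaces in tracebacks and log files rather than only on stdout. The helper name is hypothetical; `r` is assumed to be the requests response object from the snippet above:

import requests

def raise_solr_error(r):
    # Hypothetical helper: raise an HTTPError that carries Solr's
    # response body instead of the generic raise_for_status() message
    if r.status_code >= 400:
        raise requests.exceptions.HTTPError(
            'Solr {} error: {}'.format(r.status_code, r.text),
            response=r)

Either way, the key point is the same: include r.text in whatever the error path emits.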
"msg":"Exception writing document id /media/text_document.docx to the index; possible analysis error: Document contains at least one immense term in field=\"Text_TextEntry_ss\" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[107, 101, 121, 119, 111, 114, 100, 61, 88, 77, 76, 58, 99, 111, 109, 46, 97, 100, 111, 98, 101, 46, 120, 109, 112, 44, 32, 118, 97, 108]...', original message: bytes can be at most 32766 in length; got 38935. Perhaps the document has an indexed string field (solr.StrField) which is too large",
"code":400}}
See the previous error output (just the generic HTTPError raised by r.raise_for_status(), without Solr's response body) vs. the new error output, which includes Solr's explanation:

"msg":"Exception writing document id /media/text_document.docx to the index; possible analysis error: Document contains at least one immense term in field=\"Text_TextEntry_ss\" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[107, 101, 121, 119, 111, 114, 100, 61, 88, 77, 76, 58, 99, 111, 109, 46, 97, 100, 111, 98, 101, 46, 120, 109, 112, 44, 32, 118, 97, 108]...', original message: bytes can be at most 32766 in length; got 38935. Perhaps the document has an indexed string field (solr.StrField) which is too large",
"code":400}}
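The root cause in this example is Solr's 32766-byte limit on a single indexed term in a solr.StrField. A minimal sketch of one possible mitigation on the ETL side, assuming document fields arrive as a plain dict of strings or lists of strings (the function name is hypothetical, not part of open-semantic-etl):

MAX_TERM_BYTES = 32766  # Solr's hard limit for a single indexed term

def truncate_immense_terms(data, max_bytes=MAX_TERM_BYTES):
    # Trim string values whose UTF-8 encoding exceeds Solr's term limit,
    # so indexing does not fail with the HTTP 400 error shown above
    for field, value in data.items():
        values = value if isinstance(value, list) else [value]
        trimmed = []
        for v in values:
            if isinstance(v, str) and len(v.encode('utf-8')) > max_bytes:
                # decoding with errors='ignore' drops a character split by the cut
                v = v.encode('utf-8')[:max_bytes].decode('utf-8', 'ignore')
            trimmed.append(v)
        data[field] = trimmed if isinstance(value, list) else trimmed[0]
    return data

For instance, truncate_immense_terms({'Text_TextEntry_ss': ['x' * 40000]}) would shorten the entry to fit under the limit. Whether truncation is acceptable depends on the use case; the improved error message above at least makes the affected field visible.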