Unstructured-IO / unstructured-api

Apache License 2.0
528 stars · 110 forks

chore(deps): Bump unstructured[local-inference] from 0.10.14 to 0.10.15 in /requirements #242

Closed · dependabot[bot] closed this 1 year ago

dependabot[bot] commented 1 year ago

Bumps unstructured[local-inference] from 0.10.14 to 0.10.15.

Release notes

Sourced from unstructured[local-inference]'s releases.

0.10.15

Enhancements

  • Support for better element categories from the next-generation image-to-text model ("chipper"). Previously, not all of the classifications from Chipper were mapped to proper unstructured element categories, so consumers of the library would see many UncategorizedText elements. This fix improves the granularity of the element category outputs for better downstream processing and chunking. The mapping update is:
    • "Threading": NarrativeText
    • "Form": NarrativeText
    • "Field-Name": Title
    • "Value": NarrativeText
    • "Link": NarrativeText
    • "Headline": Title (with category_depth=1)
    • "Subheadline": Title (with category_depth=2)
    • "Abstract": NarrativeText
  • Better ListItem grouping for PDFs (fast strategy). partition_pdf with the fast strategy previously broke some numbered list items into separate elements. This enhancement uses the x,y coordinates and bounding-box sizes to decide whether a chunk of text is a continuation of the immediately preceding ListItem element, rather than detecting it as its own non-ListItem element.
  • Fall back to text-based classification for uncategorized layout elements in images and PDFs. Improves element classification by running the existing text-based rules on elements that would previously have been UncategorizedText.
  • Adds table partitioning for many doc types, including .html, .epub, .md, .rst, .odt, and .msg. At the core of this change is the .html partitioning functionality, which is leveraged by the other affected doc types. Table elements are now properly extracted in many scenarios where they previously were not.
  • Adds the add_chunking_strategy decorator to partition functions. Previously, users were responsible for their own chunking after partitioning elements, which is often required for downstream applications. Now, individual elements may be combined into right-sized chunks, with min and max character sizes specifiable when chunking_strategy=by_title. Related elements are grouped together for better downstream results. This lets users use partitioned results immediately in downstream applications (e.g. RAG architecture apps) without any additional post-processing.
  • Adds languages as an input parameter and marks ocr_languages kwarg for deprecation in pdf, image, and auto partitioning functions. Previously, language information was only being used for Tesseract OCR for image-based documents and was in a Tesseract specific string format, but by refactoring into a list of standard language codes independent of Tesseract, the unstructured library will better support languages for other non-image pipelines and/or support for other OCR engines.
  • Removes UNSTRUCTURED_LANGUAGE env var usage and replaces language with languages as an input parameter to unstructured-partition-text_type functions. The previous parameter/input setup was not user-friendly or scalable to the variety of elements being processed. By refactoring the inputted language information into a list of standard language codes, we can support future applications of the element language such as detection, metadata, and multi-language elements. Now, to skip English specific checks, set the languages parameter to any non-English language(s).
  • Adds the xlsx and xls filetype extensions to the skip_infer_table_types default list in partition. With these file types in the list, such files will not go through table extraction. Users can still extract tables from these filetypes, but must set skip_infer_table_types so that it no longer includes the relevant extension. This avoids misrepresenting complex spreadsheets, which may contain multiple sub-tables and other content.
  • Better debug output related to sentence counting internals. Clarifies the message when a sentence is not counted toward the sentence count because it doesn't have enough words; relevant for developers focused on unstructured's NLP internals.
  • Faster ocr_only speed when partitioning PDFs and images. Uses the unstructured_pytesseract.run_and_get_multiple_output function to halve the number of calls to Tesseract when partitioning a PDF or image with Tesseract.
  • Adds data source properties to fsspec connectors. These properties (date_created, date_modified, version, source_url, record_locator) are written to element metadata during ingest, mapping elements to information about the source document from which they derive. This enables downstream applications to link back to the source document, e.g. a GDrive doc, Salesforce record, etc.
  • Adds a delta table destination connector. A new Delta Table destination connector has been added to the ingest CLI. Users may now use unstructured-ingest to write partitioned data from over 20 data sources (so far) to a Delta Table.
  • Renames to Source and Destination Connectors in the documentation. Maintains naming consistency between the connectors codebase and the documentation, following the first addition of a destination connector.
  • Non-HTML text files now return unstructured elements as opposed to HTML elements. Previously, text-based files routed through partition_html would return HTML elements; the input format is now preserved via the source_format argument in the partition call.
  • Adds PaddleOCR as an optional alternative to Tesseract for OCR when processing PDF or image files; it is installable via the makefile command install-paddleocr. For experimental purposes only.
  • Bump unstructured-inference to 0.5.28. This version bump markedly improves the output of table data, rendered as metadata.text_as_html in an element. These changes include:
    • add env variable ENTIRE_PAGE_OCR to specify using paddle or tesseract on entire page OCR
    • table structure detection now pads the input image by 25 pixels in all 4 directions to improve its recall (0.5.27)
    • support paddle with both CPU and GPU; it is assumed to be pre-installed (0.5.26)
    • fix a bug where cells_to_html doesn't handle cells spanning multiple rows properly (0.5.25)
    • remove cv2 preprocessing step before OCR step in table transformer (0.5.24)
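The Chipper category mapping listed above can be pictured as a simple lookup table. This is an illustrative sketch only; the names below are hypothetical and are not unstructured's actual internals:

```python
# Hypothetical sketch of the Chipper -> unstructured element-category mapping
# described in the release notes. Values are (element_category, category_depth).
CHIPPER_TO_ELEMENT = {
    "Threading": ("NarrativeText", None),
    "Form": ("NarrativeText", None),
    "Field-Name": ("Title", None),
    "Value": ("NarrativeText", None),
    "Link": ("NarrativeText", None),
    "Headline": ("Title", 1),      # category_depth=1
    "Subheadline": ("Title", 2),   # category_depth=2
    "Abstract": ("NarrativeText", None),
}

def map_chipper_category(label: str):
    """Return (element_category, category_depth); unmapped labels fall back
    to UncategorizedText, which is the behavior this release reduces."""
    return CHIPPER_TO_ELEMENT.get(label, ("UncategorizedText", None))
```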
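The ocr_languages deprecation above essentially replaces Tesseract's "+"-joined code string with a plain list of language codes. A hedged sketch of what such a shim could look like (the helper name is hypothetical, not unstructured's API):

```python
import warnings

# Illustrative sketch only: convert a Tesseract-style ocr_languages string
# (e.g. "eng+deu") into the list-of-codes form a languages parameter expects.
def ocr_languages_to_languages(ocr_languages: str) -> list[str]:
    warnings.warn(
        "ocr_languages is deprecated; pass languages=[...] instead",
        DeprecationWarning,
        stacklevel=2,
    )
    # Split on Tesseract's "+" separator and drop empty fragments.
    return [code for code in ocr_languages.split("+") if code]
```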

Features

  • Adds element metadata via category_depth with default value None.
    • This additional metadata is useful for vectordb/LLM, chunking strategies, and retrieval applications.
  • Adds a naive hierarchy for elements via a parent_id on the element's metadata
    • Users will now have more metadata for implementing vectordb/LLM chunking strategies. For example, text elements could be queried by their preceding title element.
    • Title elements created from HTML headings will properly nest

Fixes

  • add_pytesseract_bboxes_to_elements no longer returns nan values. The function logic is now broken into the new methods _get_element_box and convert_multiple_coordinates_to_new_system.
  • Selecting a different model wasn't being respected when calling partition_image. Problem: partition_pdf allows for passing a model_name parameter. Given the similarity between the image and PDF pipelines, the expected behavior is that partition_image should support the same parameter, but partition_image was unintentionally not passing along its kwargs. This was corrected by adding the kwargs to the downstream call.
  • Fixes a chunking issue by dropping the field "coordinates" from metadata comparison. Problem: the chunk_by_title function was placing each element in its own chunk instead of grouping elements into fewer chunks. This traced back to the metadata-matching logic in chunk_by_title: elements with different metadata can't be placed in the same chunk, and any element carrying "coordinates" effectively had unique metadata, since each element sits at a different position on the page. Fix: the key "coordinates" is now included in the list of metadata keys excluded from the metadata_matches comparison. Importance: this change is crucial for chunking by title on documents whose elements include "coordinates" metadata.
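The fix above boils down to ignoring volatile keys when comparing element metadata. A sketch, assuming hypothetical names (the actual comparison lives inside unstructured's chunk_by_title):

```python
# Keys that vary per element and would otherwise make every element's
# metadata look unique, defeating chunk grouping.
EXCLUDED_METADATA_KEYS = {"coordinates"}

def metadata_matches(meta_a: dict, meta_b: dict,
                     excluded: set = EXCLUDED_METADATA_KEYS) -> bool:
    """Compare two metadata dicts while ignoring excluded keys."""
    strip = lambda m: {k: v for k, v in m.items() if k not in excluded}
    return strip(meta_a) == strip(meta_b)
```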
Changelog

Sourced from unstructured[local-inference]'s changelog.

0.10.15 (entries identical to the release notes above)
Commits


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
awalker4 commented 1 year ago

FYI the process right now will be to check out these branches and recompile. I'm getting a test failure locally related to the new parent_id not being returned in parallel mode. Need to investigate.