lobehub / lobe-chat

🤯 Lobe Chat - an open-source, modern-design AI chat framework. Supports multiple AI providers (OpenAI / Claude 3 / Gemini / Ollama / Azure / DeepSeek), Knowledge Base (file upload / knowledge management / RAG), Multi-Modals (Vision / TTS), and a plugin system. One-click FREE deployment of your private ChatGPT / Claude application.
https://chat-preview.lobehub.com

[Bug] Chunking fails for some uploaded PDF files #3551

Open havelhuang opened 3 weeks ago

havelhuang commented 3 weeks ago

📦 Deployment environment

Vercel

📌 Software version

1.12.3

💻 System environment

Windows

🌐 Browser

Chrome

🐛 Problem description

Some PDF files cannot be chunked after upload; the error shown in the screenshot appears. Screenshot 2024-08-22 165740

📷 Steps to reproduce

No response

🚦 Expected results

No response

📝 Supplementary information

No response


lobehubbot commented 3 weeks ago

👀 @havelhuang

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible. Please make sure you have given us as much context as possible.

arvinxx commented 3 weeks ago

Can you attach a file so I can take a look?


havelhuang commented 3 weeks ago

Can you attach a file so I can take a look?

synergy.pdf, this paper for example


gaarry commented 3 weeks ago
image

same

Xiaokai6880 commented 3 weeks ago

+1


havelhuang commented 3 weeks ago

Can you attach a file so I can take a look?

synergy.pdf, this paper for example

Is it possible that there is a problem with your API service? For example, test whether the text-embedding-3-small model can be called normally. I tested the first three pages of your paper and chunking worked fine. image

Other PDF files can be chunked; only a small number of PDF files fail.


lobehubbot commented 3 weeks ago

Can you attach a file so I can take a look?

synergy.pdf, this paper for example

Is it possible that there is a problem with your API service? For example, test whether the text-embedding-3-small model can be called normally. I tested the first three pages of your paper and chunking worked fine. image

Other PDF files can be chunked; only a small number of PDF files fail.

(If you use Windows) You can try using the Print function to output this PDF as a new PDF, then upload the new file for chunking.

Sun-drenched commented 3 weeks ago

Can you attach a file so I can take a look?

synergy.pdf, this paper for example

Is it possible that there is a problem with your API service? For example, test whether the text-embedding-3-small model can be called normally. I tested the first three pages of your paper and chunking worked fine. image

Other PDF files can be chunked; only a small number of PDF files fail.

Yes, it is indeed a "problem" with this document. I have traced it to page 4 of your paper (complex figures mixed with mathematical formulas), which fails to parse. As I recall, this kind of page is hard to handle. image


arvinxx commented 3 weeks ago

@Sun-drenched For cases like this, you can try again after I enable the Unstructured.io variables. When I tried it before, Unstructured had no problem parsing files with this kind of complex format.


Sun-drenched commented 3 weeks ago

@Sun-drenched For cases like this, you can try again after I enable the Unstructured.io variables. When I tried it before, Unstructured had no problem parsing files with this kind of complex format.

I hope so. When I tested some science and engineering papers before, none of the mainstream open-source solutions handled them well (for example, the page shown here, with multiple complex legends whose spatial relationships affect interpretation, and flowcharts expressing complex mathematical relationships).

In addition, the error "[error]: invalid byte sequence for encoding "UTF8": 0x00" seems to be related to the PostgreSQL database.

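For context on that error: PostgreSQL's text types cannot store the NUL character (0x00), so an extracted chunk containing one is rejected with exactly this message. A minimal sketch of a pre-insert workaround (the helper name is hypothetical, not LobeChat's actual code): strip NUL characters from extracted text before writing it to the database.

```typescript
// Hypothetical helper: PostgreSQL TEXT columns reject the NUL (0x00)
// character, so strip it from extracted chunk text before inserting.
function sanitizeForPostgres(text: string): string {
  // Remove every NUL character; this is the byte that triggers
  // `invalid byte sequence for encoding "UTF8": 0x00`.
  return text.replace(/\u0000/g, '');
}

// Example: text extracted from a malformed PDF with embedded NULs.
const extracted = 'Synergy\u0000 between agents';
console.log(sanitizeForPostgres(extracted)); // Synergy between agents
```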

alphandbelt1 commented 3 weeks ago

My PDF has the same problem. image hackingciphers.pdf

ProblemFactory commented 3 weeks ago

Could you provide an option to embed a PDF directly into the chat message via text extraction or page screenshots, instead of vectorizing it?


arvinxx commented 3 weeks ago

@ProblemFactory That will come in the future.

nexusdiscord commented 2 weeks ago

I got the same thing, using Vercel, Neon DB, and MinIO. image

[Chunking Error] { body: { detail: 'invalid byte sequence for encoding "UTF8": 0x00' }, name: 'error' }

nextjs-docs.pdf

dosubot[bot] commented 2 weeks ago

Hey @nexusdiscord! I'm here to help you with any bugs, questions, or contributions you have while waiting for a human maintainer. Let's tackle this together!

The error message "invalid byte sequence for encoding 'UTF8': 0x00" indicates that the file contains invalid UTF-8 byte sequences, which is causing the chunking process to fail.

To address this issue, you can try the following steps:

  1. Check File Encoding: Ensure that the file is properly encoded in UTF-8. You can use tools like iconv or text editors that support encoding conversion to re-encode the file.

  2. Handle Non-UTF-8 Encodings: Modify the chunking process to handle files with different encodings. This might involve detecting the file encoding and converting it to UTF-8 before processing.

  3. Update Chunking Logic: If the chunking logic does not currently handle non-UTF-8 encodings, you may need to update it. This can be done in the ContentChunk class, specifically in the chunkContent method, to include encoding detection and conversion.

Here is a relevant part of the ContentChunk class where you might need to make changes:

import iconv from 'iconv-lite';
import chardet from 'chardet';

async chunkContent(params: ChunkContentParams): Promise<ChunkResult> {
  // Detect and convert encoding to UTF-8 if necessary
  const content = convertToUTF8(params.content);

  if (this.isUsingUnstructured(params))
    return await this.chunkByUnstructured(params.filename, content);

  return await this.chunkByLangChain(params.filename, content);
}

function convertToUTF8(content: Uint8Array): Uint8Array {
  // Detect the source encoding (here with the 'chardet' library) and
  // re-encode to UTF-8 with 'iconv-lite'; fall back to UTF-8 if unknown.
  const detectedEncoding = chardet.detect(content) ?? 'utf-8';
  return iconv.encode(iconv.decode(Buffer.from(content), detectedEncoding), 'utf-8');
}

By implementing encoding detection and conversion, you can ensure that the chunking process handles files with different encodings correctly.

For more details on the chunking process, you can refer to the ContentChunk class in the src/server/modules/ContentChunk/index.ts file [1].

To continue talking to Dosu, mention @dosu.

CXwudi commented 2 weeks ago

Some PDFs can be chunked if you first split them into several smaller PDFs and then chunk each part separately. For example, Head First Java is 45 MB and chunking it directly reports an error, but if you split the 45 MB file into 5 smaller PDFs, each under 15 MB, LobeChat can chunk them. image
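The splitting workaround can be scripted. As a minimal sketch (the helper name is hypothetical), compute the 0-based page-index ranges for parts of at most N pages; each range can then be copied into a new document with any PDF library that supports copying page ranges:

```typescript
// Hypothetical helper: compute 0-based page-index ranges so a large PDF
// can be split into parts of at most `pagesPerPart` pages each.
function pageRanges(totalPages: number, pagesPerPart: number): number[][] {
  const ranges: number[][] = [];
  for (let start = 0; start < totalPages; start += pagesPerPart) {
    const length = Math.min(pagesPerPart, totalPages - start);
    ranges.push(Array.from({ length }, (_, i) => start + i));
  }
  return ranges;
}

// A 10-page document split into parts of at most 4 pages:
console.log(pageRanges(10, 4)); // [ [ 0, 1, 2, 3 ], [ 4, 5, 6, 7 ], [ 8, 9 ] ]
```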

nexusdiscord commented 2 weeks ago

When I split the PDF, I encountered a PDF page that caused an 'invalid byte sequence for encoding "UTF8": 0x00'. nextjs-docs-1-100-71-100-2-4-6-1.pdf