marella / chatdocs

Chat with your documents offline using AI.
MIT License

When trained on PDF I get results other than the content of the PDF #29

Open ManalIrfan opened 1 year ago

ManalIrfan commented 1 year ago

I uploaded a PDF to be trained, and then I asked who Captain America was, and it gave me an answer. How can I make it specific to the document only?

marella commented 1 year ago

You can try different prompts like:

Here I'm referring to the document text as "above text", which is passed to the LLM using a prompt template. But it can be hard to make models answer only from documents, especially if the documents don't provide enough context.
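For illustration, a restrictive template along those lines might look like the sketch below. The placeholder names ({context}, {question}) and the exact wording are illustrative, not chatdocs' actual template:

```python
# Sketch of a restrictive prompt template; the placeholder names and
# wording are illustrative, not the exact template chatdocs ships.
TEMPLATE = """\
{context}

Answer the question using only the above text. If the above text does
not contain the answer, say "I don't know."

Question: {question}
Answer:"""

def build_prompt(context: str, question: str) -> str:
    # The retrieved document text becomes the "above text" the comment
    # refers to; the model is told to refuse anything outside it.
    return TEMPLATE.format(context=context, question=question)
```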

Ananderz commented 1 year ago

I think what he means is that the pretrained LLM has its own knowledge outside of his document. He probably has not uploaded a document about Captain America; however, the LLM knows who Captain America is.

He is trying to sandbox the LLM from anything other than the document he uploaded. I'm not sure that's possible.

I have only been able to do it by using OpenAI embeddings over the API and GPT-3.5 Turbo.

ManalIrfan commented 1 year ago

> I think what he means is that the pretrained LLM has its own knowledge outside of his document. He probably has not uploaded a document about Captain America; however, the LLM knows who Captain America is.
>
> He is trying to sandbox the LLM from anything other than the document he uploaded. I'm not sure that's possible.
>
> I have only been able to do it by using OpenAI embeddings over the API and GPT-3.5 Turbo.

Yes, this is exactly what I mean: the document I uploaded had nothing to do with Captain America, and it still told me who Captain America is.

ccelik97 commented 1 year ago

> I think what he means is that the pretrained LLM has its own knowledge outside of his document. He probably has not uploaded a document about Captain America; however, the LLM knows who Captain America is. He is trying to sandbox the LLM from anything other than the document he uploaded. I'm not sure that's possible. I have only been able to do it by using OpenAI embeddings over the API and GPT-3.5 Turbo.

> Yes, this is exactly what I mean: the document I uploaded had nothing to do with Captain America, and it still told me who Captain America is.

The first thing you shouldn't disregard is that a ~4GB model file is indeed large (the first "L" in "LLM") because it was trained on a huge amount (as in terabytes) of text data. So it will indeed know a thing or two about pretty much anything, not to mention massively popular pop-culture figures like Captain America. If it were a hyper-focused model built from just a few PDFs, nothing could justify it taking up gigabytes of storage/memory.

And the second thing not to mix up is what this project does (so far): it ingests your documents so that the model has an easier time looking up all that data in the vector database when coming up with somewhat more educated answers to your questions than it could give as an isolated snapshot of a mind. In other words, you aren't training the model. The application is simply helping the model take notes (to look up later).
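A rough sketch of that ingest-then-look-up flow (retrieval-augmented generation) is below; `vector_db` and `llm` are illustrative stand-ins, not chatdocs' actual objects:

```python
# Rough sketch of retrieval-augmented generation (RAG), the pattern
# described above. vector_db and llm are illustrative stand-ins, not
# chatdocs' actual API.

def answer(question, vector_db, llm, k=4):
    # 1. Look up the k document chunks most similar to the question.
    #    This is the "taking notes and looking them up" step; the
    #    model's weights are never updated.
    chunks = vector_db.similarity_search(question, k=k)
    context = "\n\n".join(chunk.page_content for chunk in chunks)

    # 2. Paste those chunks into the prompt so the model reads them at
    #    answer time, alongside whatever it already knows from training.
    prompt = f"{context}\n\nAnswer using only the above text.\nQuestion: {question}\nAnswer:"
    return llm(prompt)
```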

manal-irfan commented 1 year ago


Is there any way we can sandbox this, or maybe use different prompts, so the output is based only on the document?

Ciaranwuk commented 1 year ago

This usually comes down to prompting. If you tell it not to answer anything it can't find in the documents, it is less likely to use its previously learned knowledge. To emphasise: it is LESS LIKELY to. There isn't a generally accepted way to completely prevent an LLM from using knowledge it was trained on.
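One common mitigation along those lines, sketched below, is to pair the restrictive prompt with a retrieval-score gate that refuses up front when nothing in the documents matches the question well. This is an illustration, not something this project ships; the `similarity_search_with_relevance_scores` call and the 0.3 threshold are assumptions:

```python
# Illustrative sketch: pair a restrictive prompt with a retrieval-score
# gate. The scoring method and the 0.3 threshold are assumptions for
# illustration, not chatdocs' actual API or tuning.

REFUSAL = "I can't find that in the uploaded documents."

def guarded_answer(question, vector_db, llm, min_score=0.3):
    # Fetch (chunk, relevance_score) pairs; refuse before calling the
    # LLM if no chunk clears the threshold, so the model never gets a
    # chance to fall back on its pretrained knowledge.
    results = vector_db.similarity_search_with_relevance_scores(question, k=4)
    relevant = [chunk for chunk, score in results if score >= min_score]
    if not relevant:
        return REFUSAL

    context = "\n\n".join(chunk.page_content for chunk in relevant)
    prompt = (
        f"{context}\n\n"
        f'Answer using only the above text. If the answer is not there, reply exactly: "{REFUSAL}"\n'
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)
```

Even with a gate like this, the second step still relies on the model following instructions, which is why it only reduces, rather than eliminates, answers from pretrained knowledge.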