Open pjbaron opened 4 months ago
Thank you for this. I also noticed that the embeddings hung and didn't know why; good to know it was the embeddings model's behaviour and not something I did!
Thank you, this fixed my problem.
Traceback (most recent call last):
File "localrag.py", line 157, in
Hey @vitorcalvi, I see the same error as you. I haven't really looked into it much, but I found that if you locate the line that reads:
response = ollama.embeddings(model='mxbai-embed-large', prompt=content)
If I add a call with a blank (empty) prompt before it, it works. So change that one line in localrag.py to:
response = ollama.embeddings(model='mxbai-embed-large', prompt='')
response = ollama.embeddings(model='mxbai-embed-large', prompt=content)
Like I say, I have no idea why and I haven't looked into it. I was just debugging, and it worked :)
WSL2, Windows 10 Pro, Ubuntu 22.04
The localrag.py script will hang indefinitely when processing the supplied vault.txt file.
The problem appears to be that the mxbai-embed-large model hangs when supplied with an empty line as a prompt. Empty lines separate each of the first ~15 sentences in vault.txt; after those lines, the sentence separators switch from CRLF CRLF to just LF, and there are no more empty lines.
I have hack-patched it here with the code:
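(The patch body isn't quoted above. As a rough sketch of the same idea, under the assumption that localrag.py splits vault.txt line by line before embedding, a guard that normalizes line endings and skips empty lines might look like this; the `chunk_vault` helper name and structure are illustrative, not taken from localrag.py:)

```python
# Hypothetical sketch: make sure mxbai-embed-large is never handed an
# empty prompt (the condition that appears to cause the hang).
# chunk_vault is an illustrative helper, not a function in localrag.py.

def chunk_vault(text: str) -> list[str]:
    """Normalize CRLF to LF, split into lines, and drop blank lines."""
    lines = text.replace("\r\n", "\n").split("\n")
    return [line.strip() for line in lines if line.strip()]

# Each non-empty chunk would then be embedded as in the original script:
# for content in chunk_vault(vault_text):
#     response = ollama.embeddings(model='mxbai-embed-large', prompt=content)
```

This avoids the need for the blank-prompt "warm-up" call, since no empty prompt ever reaches the model.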