vemonet / libre-chat

🦙 Free and Open Source Large Language Model (LLM) chatbot web UI and API. Self-hosted, offline capable and easy to set up. Powered by LangChain.
https://vemonet.github.io/libre-chat
MIT License

raise TemplateNotFound(template) jinja2.exceptions.TemplateNotFound: index.html #2

de-illusionist commented 1 year ago

Describe the bug

I pip installed your package, placed my local GGML Llama 2 7B chat model in the models folder, set up chat.yml, and put the sentence transformer and vector store in their respective folders, all under one directory. I then ran the test code from the end of the README.md inside a conda environment, and it raised a "TemplateNotFound: index.html" error. However, the console shows the llama model had already executed in the terminal before the error was thrown. I figured it might be searching for index.html in the "~/.local/lib/python3.10/site-packages/libre_chat/templates" folder, so I copied the file there, but it still throws the same error. Can you please help?
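A quick way to check whether the templates folder actually made it into the pip-installed package is to look next to the installed module. This is a hypothetical diagnostic helper, not part of libre-chat:

```python
import importlib
import os

def find_templates(package_name: str):
    """Return the templates directory shipped alongside an installed package,
    or None if the package did not ship one."""
    mod = importlib.import_module(package_name)
    candidate = os.path.join(os.path.dirname(mod.__file__), "templates")
    return candidate if os.path.isdir(candidate) else None

# e.g. print(find_templates("libre_chat")) — None means index.html
# was never packaged into the wheel in the first place
```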

A more complete error message:

(LLama-2) <ur-pc-username>@<ur-pc-username>:~/Documents/AI_test_bed/LLama/Version_2/libre-chat-app$ python3 libre_chat_test.py
INFO:     [2023-08-10 11:01:07,012] ⚡ Using GPU: cuda:0 [llm:__init__]
INFO:     [2023-08-10 11:01:07,013] ♻️  Reusing existing vectorstore at ./vectorstore/db_faiss, skip building the vectorstore [llm:build_vectorstore]
INFO:     [2023-08-10 11:01:07,013] 🤖 Loading CTransformers model from ./models/llama-2-7b-chat.ggmlv3.q2_K.bin [llm:setup_dbqa]
INFO:     [2023-08-10 11:01:10,590] 💫 Loading vector database at ./vectorstore/db_faiss, with embeddings from ./sentence-transformer/all-MiniLM-L6-v2 [llm:setup_dbqa]
INFO:sentence_transformers.SentenceTransformer:Load pretrained SentenceTransformer: ./sentence-transformer/all-MiniLM-L6-v2
INFO:torch.distributed.nn.jit.instantiator:Created a temporary directory at /tmp/tmpv2qk1qgr
INFO:torch.distributed.nn.jit.instantiator:Writing /tmp/tmpv2qk1qgr/_remote_module_non_scriptable.py
INFO:faiss.loader:Loading faiss with AVX2 support.
INFO:faiss.loader:Successfully loaded faiss with AVX2 support.
INFO:     [2023-08-10 11:01:11,817] 💬 Querying the LLM with prompt: What is the capital of the Netherlands? [llm:query]
Batches: 100%|███████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  6.11it/s]
{'query': 'What is the capital of the Netherlands?', 'result': 'The capital of the Netherlands is Amsterdam.', 'source_documents': [Document(page_content='LLC which is incorporated in thestate of Delaware in the United States.', metadata={'source': 'data\\manu-20f-2022-09-24.pdf', 'page': 66}), Document(page_content='decrease of £7.1 million from net expenditure of £92.2 million for the year ended 30 June 2021. Net capitalexpenditure for the year ended 30 June 2022 was mainly comprised of payments made for the acquisitions ofSancho, Varane and van de Beek and the extension of Fernandes, less payments received relating to thedisposal of Lukaku.\nNet capital expenditure for the year ended 30 June 2021 was mainly comprised of payments made for', metadata={'source': 'data\\manu-20f-2022-09-24.pdf', 'page': 78})]}
INFO:     Started server process [46981]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     127.0.0.1:54624 - "GET / HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/<ur-pc-username>/.local/lib/python3.10/site-packages/anyio/streams/memory.py", line 98, in receive
    return self.receive_nowait()
  File "/home/<ur-pc-username>/.local/lib/python3.10/site-packages/anyio/streams/memory.py", line 93, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/<ur-pc-username>/.local/lib/python3.10/site-packages/starlette/middleware/base.py", line 78, in call_next
    message = await recv_stream.receive()
  File "/home/<ur-pc-username>/.local/lib/python3.10/site-packages/anyio/streams/memory.py", line 118, in receive
    raise EndOfStream
anyio.EndOfStream

....
....
....
  File "/home/<ur-pc-username>/.local/lib/python3.10/site-packages/jinja2/environment.py", line 969, in _load_template
    template = self.loader.load(self, name, self.make_globals(globals))
  File "/home/<ur-pc-username>/.local/lib/python3.10/site-packages/jinja2/loaders.py", line 126, in load
    source, filename, uptodate = self.get_source(environment, name)
  File "/home/<ur-pc-username>/.local/lib/python3.10/site-packages/jinja2/loaders.py", line 218, in get_source
    raise TemplateNotFound(template)
jinja2.exceptions.TemplateNotFound: index.html

Reproduction

No response

Operating System

Ubuntu 22.04

Libre Chat version

0.0.4

Additional context

No response

vemonet commented 1 year ago

Hi @de-illusionist, thanks for the report, and sorry for the delayed answer: I was away from keyboard all of August.

The problem comes from jinja2, which does not find the index.html file in the templates folder (which serves the chat web UI) when you use libre-chat after installing it directly with pip install libre-chat.

This is surprising, because I used pkg_resources to pass the path, and it works without issue when the package is installed from the source code (in dev, test, and docker).
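For reference, the stdlib importlib.resources resolves packaged data paths in a way that works for both source and wheel installs (pkg_resources is deprecated in favor of it). A minimal sketch: the get_templates_dir helper is illustrative only, not libre-chat's actual code:

```python
from importlib import resources

def get_templates_dir(package: str) -> str:
    # resources.files() returns a Traversable rooted at the package,
    # valid whether the package was installed from source or from a wheel
    return str(resources.files(package) / "templates")

# e.g. hand the result to the templating engine:
# Jinja2Templates(directory=get_templates_dir("libre_chat"))
```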

It is not a big problem; I did not want to keep using jinja2 in the long term anyway.

I updated the main chatbot web UI: it is now implemented in Python with gradio, which is less complex and easier to maintain.

The fix will be included in the next release (0.0.6). In the meantime you can get it sooner by installing from the main branch with pip install git+https://github.com/vemonet/libre-chat