ParisNeo / lollms-webui

Lord of Large Language Models Web User Interface
https://parisneo.github.io/lollms-webui/
Apache License 2.0
4.11k stars 522 forks

With every update it gets more errors? #495

Open WiNeTel opened 5 months ago

WiNeTel commented 5 months ago

Expected Behavior

With every update you make, my lollms works less. It started with it barely accepting my GPUs: in balanced or auto mode it only uses ~2 GB VRAM of one GPU, the rest goes into system memory, and it gives up trying to load with the GPU; the CPU was still working. After the next update I got only one answer in the chat; with the second answer the program terminates ("press any key..."). After your last update the model loads, but I don't get any answer at all, only this (the thread number changes):

```
Exception in thread Thread-XX (run_llmodel_prompt):
Traceback (most recent call last):
  File "G:\AI\lollms\installer_files\lollms_env\Lib\threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "G:\AI\lollms\installer_files\lollms_env\Lib\threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\gpt4all\_pyllmodel.py", line 373, in run_llmodel_prompt
    self.prompt_model(prompt, callback, **kwargs)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\gpt4all\_pyllmodel.py", line 344, in prompt_model
    llmodel.llmodel_prompt(
OSError: exception: access violation writing 0x0000000000000140
Drücken Sie eine beliebige Taste . . .
```

("Drücken Sie eine beliebige Taste" is German for "Press any key".) Inside the console I can see that it starts to generate an answer, but that answer never finishes and the program freezes!

When I try to restart from the WebUI I get this error:

```
ERROR:    [Errno 10048] error while attempting to bind on address ('::1', 9600, 0, 0): normalerweise darf jede socketadresse (protokoll, netzwerkadresse oder anschluss) nur jeweils einmal verwendet werden
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 412, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\engineio\async_drivers\asgi.py", line 67, in __call__
    await self.other_asgi_app(scope, receive, send)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\starlette\applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\starlette\middleware\exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\starlette\routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\starlette\routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\starlette\routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\starlette\routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\starlette\routing.py", line 74, in app
    response = await func(request)
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\fastapi\routing.py", line 294, in app
    raw_response = await run_endpoint_function(
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\fastapi\routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "G:\AI\lollms\lollms-webui\endpoints\lollms_webui_infos.py", line 60, in restart_program
    lollmsElfServer.run_restart_script(lollmsElfServer.args)
  File "G:\AI\lollms\lollms-webui\lollms_webui.py", line 351, in run_restart_script
    sys.exit(0)
SystemExit: 0
INFO:     127.0.0.1:50412 - "GET /restart_program HTTP/1.1" 500 Internal Server Error
Task exception was never retrieved
future: <Task finished name='Task-199' coro=<AsyncServer.shutdown() done, defined at G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\socketio\async_server.py:431> exception=TypeError("object NoneType can't be used in 'await' expression")>
Traceback (most recent call last):
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\socketio\async_server.py", line 438, in shutdown
    await self.eio.shutdown()
  File "G:\AI\lollms\installer_files\lollms_env\Lib\site-packages\engineio\async_server.py", line 345, in shutdown
    await self.service_task_handle
TypeError: object NoneType can't be used in 'await' expression
```

(The German part of the first line means: "normally, each socket address (protocol, network address or port) may only be used once".) But I can still use the WebUI; it looks like it never shuts down!

When I start completely fresh, I don't get an error, but as I said, I can't use it!

Sometimes it writes in the console that it is missing `\DATA\user_infos\default_user.svg`. I have checked: the folder is empty.
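As a stopgap for that warning, one could drop a tiny placeholder avatar into the folder the console complains about. This is only a sketch, not part of lollms: the helper name `ensure_default_avatar` and the SVG content are mine, and the data root you pass in depends on your install.

```python
from pathlib import Path

def ensure_default_avatar(data_root: Path) -> Path:
    """Create a placeholder default_user.svg under <data_root>/user_infos if missing."""
    user_infos = data_root / "user_infos"
    user_infos.mkdir(parents=True, exist_ok=True)
    svg = user_infos / "default_user.svg"
    if not svg.exists():
        # minimal grey-circle avatar as a stand-in
        svg.write_text(
            '<svg xmlns="http://www.w3.org/2000/svg" width="64" height="64">'
            '<circle cx="32" cy="32" r="30" fill="#888"/></svg>',
            encoding="utf-8",
        )
    return svg

# e.g. ensure_default_avatar(Path(r"G:\AI\lollms\DATA"))  # root path is an assumption
```

Whether lollms accepts an arbitrary SVG here is untested; this only silences the missing-file complaint.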

When I activate long-term memory in the config, I get this message in the console: "Couldn't add long term memory information to the context. Please verify the vector database"

Even a new installation does not solve it; it starts with the same problems. I tried different models and different bindings, always the same error. I tried models in GGUF, GGML and GPTQ format. The bindings I tried were gpt4all, Hugging Face, exllama v2 and Python llama-cpp; I get the same error as in issue #492.

Supermicro dual CPU (2x 14 cores), 64 GB memory, 1x Tesla P40 24 GB VRAM, 1x GTX 1060 6 GB VRAM (only used to install programs locally).

Current Behavior

Please describe the behavior you are currently experiencing.

Steps to Reproduce

Install the program, choose the binding and the model, and the trouble begins.

Possible Solution

No idea; there are no log files. lollms detects my video cards (they are shown in System Status), but it never gets the driver running. I tried both Nvidia options, with and without tensor cores, in the main config, and in the binding I tried Auto, Balanced and GPU; nothing helped. Tomorrow (well, later today) I will try a new installation again, but this time I will try the manual way, without win_install.bat.
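Since there are no log files, one workaround is to capture the console output to a file yourself when launching. A minimal sketch, assuming nothing about lollms itself: `run_with_log` is a hypothetical helper, and `app.py` is a placeholder for whatever entry script your install uses.

```python
import subprocess
import sys

def run_with_log(cmd: list[str], logfile: str = "lollms_console.log") -> int:
    """Run a command with stdout and stderr redirected to a log file; return its exit code."""
    with open(logfile, "w", encoding="utf-8") as log:
        return subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT).returncode

# e.g. run_with_log([sys.executable, "app.py"])  # "app.py" is an assumption, adjust to your install
```

The same effect is available from a plain console with output redirection (`> lollms_console.log 2>&1`); the point is just to have a persistent record of the crash.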

ba2512005 commented 5 months ago

Based on the ERROR: [Errno 10048] error while attempting to bind on address ('::1', 9600, 0, 0): normalerweise darf jede socketadresse (protokoll, netzwerkadresse oder anschluss) nur jeweils einmal verwendet werden (German for: "normally, each socket address (protocol, network address or port) may only be used once")

It seems like your port 9600 is already being used. Perhaps you didn't shut down the application correctly last time. Please restart your computer and try again. Also, browse to your Documents folder and delete the file called local_config.yaml in the config folder.

The newer versions have issues with using old config files.
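One can confirm the port diagnosis before rebooting with a quick probe. A minimal sketch (the port number 9600 comes from the error above; `port_in_use` is my own helper name):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 on a successful connection, an errno otherwise
        return s.connect_ex((host, port)) == 0

print(port_in_use(9600))  # True means a stale process still holds the port
```

If it prints True after lollms has supposedly exited, some leftover process is still bound to the port and a reboot (or killing that process) is needed before relaunching.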

WiNeTel commented 5 months ago

> Based on the ERROR: [Errno 10048] error while attempting to bind on address ('::1', 9600, 0, 0): normalerweise darf jede socketadresse (protokoll, netzwerkadresse oder anschluss) nur jeweils einmal verwendet werden
>
> It seems like your port 9600 is already being used. Perhaps you didn't shut down the application correctly last time. Please restart your computer and try again. Also browse to your Documents folder and delete the file in the config folder called local_config.yaml
>
> The newer versions have issues with using old config files.

This error I get when I try to restart via the WebUI. The config I have already deleted a few times yesterday.

WiNeTel commented 5 months ago

OK, I have tried the manual installation and have the same problems. Now I removed all my video cards and am only using my son's card, an RTX 3060 12 GB. Made a new installation, again! I found this error when installing with win_install.bat: `fatal: not a git repository (or any of the parent directories): .git`

This is only a part of the installation output, but I think it's enough to find it:

```
Resolving deltas: 100% (3995/3995), done.
Submodule path 'lollms_core': checked out '7363a4bfdc309391873853a75323ee9a4d7719f5'
Submodule path 'utilities/safe_store': checked out '3dd1693b1900228eee4a314ae303180f73ed0256'
Submodule path 'zoos/bindings_zoo': checked out 'e28341e8147b457a0439e7a0f7d4bbacf95ca4f3'
Submodule path 'zoos/extensions_zoo': checked out '4107557398450addb7500e06d80f26b37f572805'
Submodule path 'zoos/models_zoo': checked out '061716fec92c79795c357f96b6dc3a72832cc3e0'
Submodule path 'zoos/personalities_zoo': checked out '9deb009cb55489796571bd3627c313e7b10e8e23'
fatal: not a git repository (or any of the parent directories): .git
Obtaining file:///G:/AI/Lollms/lollms-webui/lollms_core
  Preparing metadata (setup.py) ... done
Requirement already satisfied: tqdm in g:\ai\lollms\installer_files\lollms_env\lib\site-packages (from lollms==7.3.0) (4.65.0)
Collecting pyyaml (from lollms==7.3.0)
  Using cached PyYAML-6.0.1-cp311-cp311-win_amd64.whl.metadata (2.1 kB)
Collecting Pillow (from lollms==7.3.0)
  Using cached pillow-10.2.0-cp311-cp311-win_amd64.whl.metadata (9.9 kB)
Collecting wget (from lollms==7.3.0)
```

But now I notice something different from a week ago (don't ask me what I was doing): it started to load the model again and needed as much time as a week ago, and then it worked for a while; after a restart it was not working any more. When I now start the UI, it loads very fast, including the models. A week ago it drove me crazy waiting for the model to load, then after the update waiting again for the model to load.
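The `fatal: not a git repository` line in that log suggests one of the install steps ran git in a directory whose `.git` metadata was missing or not yet checked out. A common recovery is to re-run the submodule checkout from the clone's root; here is a sketch of that from Python (the helper name is mine, and the repo path shown is the one from the log above, to be adjusted to your install):

```python
import subprocess

def update_submodules(repo_path: str) -> None:
    """Re-initialize and update all git submodules of the given clone."""
    subprocess.run(
        ["git", "-C", repo_path, "submodule", "update", "--init", "--recursive"],
        check=True,  # raise if git reports an error
    )

# e.g. update_submodules(r"G:\AI\Lollms\lollms-webui")  # path taken from the log above
```

This is equivalent to running `git submodule update --init --recursive` in the repo folder; whether it actually fixes this particular installer failure is untested, but it is the standard way to repair missing submodule checkouts.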