darabos / ai-alchemy

An element merging game powered by AI

Error with the installer #2

Closed · iChristGit closed 2 months ago

iChristGit commented 7 months ago

Traceback (most recent call last):
  File "C:\Python\lib\site-packages\transformers\utils\hub.py", line 398, in cached_file
    resolved_file = hf_hub_download(
  File "C:\Python\lib\site-packages\huggingface_hub\utils\_validators.py", line 119, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Python\lib\site-packages\huggingface_hub\file_download.py", line 1403, in hf_hub_download
    raise head_call_error
  File "C:\Python\lib\site-packages\huggingface_hub\file_download.py", line 1261, in hf_hub_download
    metadata = get_hf_file_metadata(
  File "C:\Python\lib\site-packages\huggingface_hub\utils\_validators.py", line 119, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Python\lib\site-packages\huggingface_hub\file_download.py", line 1674, in get_hf_file_metadata
    r = _request_wrapper(
  File "C:\Python\lib\site-packages\huggingface_hub\file_download.py", line 369, in _request_wrapper
    response = _request_wrapper(
  File "C:\Python\lib\site-packages\huggingface_hub\file_download.py", line 393, in _request_wrapper
    hf_raise_for_status(response)
  File "C:\Python\lib\site-packages\huggingface_hub\utils\_errors.py", line 321, in hf_raise_for_status
    raise GatedRepoError(message, response) from e
huggingface_hub.utils._errors.GatedRepoError: 401 Client Error. (Request ID: Root=1-662a8d56-4d9eedf0441b26a22ecaac7b;4c0ae22c-1392-499b-8a7d-04ea30efce7a)

Cannot access gated repo for url https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1/resolve/main/config.json.
Access to model mistralai/Mistral-7B-Instruct-v0.1 is restricted. You must be authenticated to access it.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Python\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Python\lib\site-packages\uvicorn\_subprocess.py", line 78, in subprocess_started
    target(sockets=sockets)
  File "C:\Python\lib\site-packages\uvicorn\server.py", line 65, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "C:\Python\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Python\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "C:\Python\lib\site-packages\uvicorn\server.py", line 69, in serve
    await self._serve(sockets)
  File "C:\Python\lib\site-packages\uvicorn\server.py", line 76, in _serve
    config.load()
  File "C:\Python\lib\site-packages\uvicorn\config.py", line 433, in load
    self.loaded_app = import_from_string(self.app)
  File "C:\Python\lib\site-packages\uvicorn\importer.py", line 19, in import_from_string
    module = importlib.import_module(module_str)
  File "C:\Python\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "E:\AI-Apps\ai-alchemy\server.py", line 171, in <module>
    load_model()  # Avoid accidentally loading it twice.
  File "E:\AI-Apps\ai-alchemy\server.py", line 35, in load_model
    tokenizer = transformers.AutoTokenizer.from_pretrained(
  File "C:\Python\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 819, in from_pretrained
    config = AutoConfig.from_pretrained(
  File "C:\Python\lib\site-packages\transformers\models\auto\configuration_auto.py", line 928, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "C:\Python\lib\site-packages\transformers\configuration_utils.py", line 631, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "C:\Python\lib\site-packages\transformers\configuration_utils.py", line 686, in _get_config_dict
    resolved_config_file = cached_file(
  File "C:\Python\lib\site-packages\transformers\utils\hub.py", line 416, in cached_file
    raise EnvironmentError(
OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1.
401 Client Error. (Request ID: Root=1-662a8d56-4d9eedf0441b26a22ecaac7b;4c0ae22c-1392-499b-8a7d-04ea30efce7a)

Cannot access gated repo for url https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1/resolve/main/config.json.
Access to model mistralai/Mistral-7B-Instruct-v0.1 is restricted. You must be authenticated to access it.

I get this error when installing fresh on Windows. It seems the repo has changed and you now need to accept its terms before downloading. I can do that manually, but where do I put the files from Hugging Face?

darabos commented 7 months ago

Ugh. When did that happen? I'll try to look into it on the weekend. Until then maybe try a different language model that's not gated.
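If you'd rather keep Mistral, the gate can also be passed by accepting the license on the model page and then authenticating. A minimal sketch, assuming you have a Hugging Face access token:

```python
# Sketch: log in so transformers can download the gated repo.
# Requires first accepting the license at
# https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
from huggingface_hub import login

login(token="hf_...")  # your personal access token
```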

iChristGit commented 7 months ago

> Ugh. When did that happen? I'll try to look into it on the weekend. Until then maybe try a different language model that's not gated.

I actually have the files already, I just don't know where to put them. Where does it store the Mistral model? I'll just copy the files over manually.

iChristGit commented 7 months ago

I managed to fix the first issue by changing the model to MaziyarPanahi/Mistral-7B-Instruct-v0.1.
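The change is just the model id passed to the loader in server.py; a sketch of what it looks like (variable names assumed, not copied from the repo):

```python
# server.py (sketch): point the tokenizer at the ungated mirror
# instead of mistralai's gated repo.
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "MaziyarPanahi/Mistral-7B-Instruct-v0.1"
)
```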

But now I'm getting this error after the web UI loads (ComfyUI is open alongside):

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\Python\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\Python\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "C:\Python\lib\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
    raise exc
  File "C:\Python\lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "C:\Python\lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\Python\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\Python\lib\site-packages\starlette\routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\Python\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\Python\lib\site-packages\starlette\routing.py", line 72, in app
    response = await func(request)
  File "C:\Python\lib\site-packages\fastapi\routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "C:\Python\lib\site-packages\fastapi\routing.py", line 193, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "C:\Python\lib\site-packages\starlette\concurrency.py", line 42, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "C:\Python\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Python\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "C:\Python\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "E:\AI-Apps\ai-alchemy\server.py", line 269, in image
    description = get_image_description(cfg, card)
  File "E:\AI-Apps\ai-alchemy\server.py", line 90, in get_image_description
    return generate(messages, max_length=100)
  File "E:\AI-Apps\ai-alchemy\server.py", line 48, in generate
    generated_ids = generator.generate_batch(
RuntimeError: Library cublas64_12.dll is not found or cannot be loaded
INFO:     127.0.0.1:51196 - "GET /detective/image/Ms%20Scarlet HTTP/1.1" 500 Internal Server Error
(The identical traceback repeats for the next request.)
{'role': 'user', 'content': 'We are playing a game about a detective solving a case. What do we get if we combine Knife and Victim?'}
{'role': 'assistant', 'content': 'Stabbing.'}
{'role': 'user', 'content': 'What do we get if we combine Secret and Witness?'}
{'role': 'assistant', 'content': 'Blackmail.'}
{'role': 'user', 'content': 'What do we get if we combine Blood and Detective?'}
{'role': 'assistant', 'content': 'Clue.'}
{'role': 'user', 'content': 'What do we get if we combine Mr Jones and Mr James?'}
{'role': 'assistant', 'content': 'Enemies.'}
{'role': 'user', 'content': 'What do we get if we combine Detective and Ms Scarlet?'}
INFO:     127.0.0.1:51201 - "GET /elemental/info HTTP/1.1" 200 OK
{'role': 'user', 'content': 'We are playing a game about merging things. What do we get if we combine Fire and Water?'}
{'role': 'assistant', 'content': 'Steam.'}
{'role': 'user', 'content': 'What do we get if we combine Fire and City?'}
{'role': 'assistant', 'content': 'Fire station.'}
{'role': 'user', 'content': 'What do we get if we combine Superman and Batman?'}
{'role': 'assistant', 'content': 'Superbatman.'}
{'role': 'user', 'content': 'What do we get if we combine Human and Stone?'}
{'role': 'assistant', 'content': 'Dwarf.'}
{'role': 'user', 'content': 'What do we get if we combine Sea and Life?'}
{'role': 'assistant', 'content': 'Fish.'}
{'role': 'user', 'content': 'What do we get if we combine Blacksmith and Metal?'}
{'role': 'assistant', 'content': 'Axe.'}
{'role': 'user', 'content': 'What do we get if we combine Fire and Water?'}
darabos commented 7 months ago

For the model path: Maybe https://stackoverflow.com/questions/61798573/where-does-hugging-faces-transformers-save-models helps.
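In short, transformers stores downloads in the Hugging Face hub cache; a sketch of the default layout and the local-folder alternative (the paths here are illustrative):

```python
# Default cache location on Windows (when HF_HOME is not set):
#   C:\Users\<you>\.cache\huggingface\hub\models--mistralai--Mistral-7B-Instruct-v0.1
# Alternatively, bypass the cache and load straight from a local folder:
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained(
    r"E:\models\Mistral-7B-Instruct-v0.1"  # hypothetical path to the downloaded files
)
```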

For cublas64_12.dll: Maybe you're missing https://developer.nvidia.com/cuda-downloads?
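A quick way to check (a sketch, assuming the text generator is CTranslate2, whose recent Windows builds link against CUDA 12):

```python
# If this reports 0 or raises, the CUDA 12 runtime that provides
# cublas64_12.dll is probably missing or not on PATH.
import ctranslate2

print(ctranslate2.get_cuda_device_count())
```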

iChristGit commented 7 months ago

> For the model path: Maybe https://stackoverflow.com/questions/61798573/where-does-hugging-faces-transformers-save-models helps.
>
> For cublas64_12.dll: Maybe you're missing https://developer.nvidia.com/cuda-downloads?

I managed to fix the first issue by changing the code to fetch MaziyarPanahi/Mistral-7B-Instruct-v0.1. I also managed to fix the cublas issue: I had CUDA 11.8, and after installing CUDA 12 it works.

But... now I get this error:

INFO:     127.0.0.1:51785 - "GET /troops/image/Training HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\Python\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\Python\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "C:\Python\lib\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
    raise exc
  File "C:\Python\lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "C:\Python\lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\Python\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\Python\lib\site-packages\starlette\routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "C:\Python\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\Python\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\Python\lib\site-packages\starlette\routing.py", line 72, in app
    response = await func(request)
  File "C:\Python\lib\site-packages\fastapi\routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "C:\Python\lib\site-packages\fastapi\routing.py", line 193, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "C:\Python\lib\site-packages\starlette\concurrency.py", line 42, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "C:\Python\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Python\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "C:\Python\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "E:\AI-Apps\ai-alchemy\server.py", line 270, in image
    imagedata = comfy.generate_image(cfg, f"{card}: {description}")
  File "E:\AI-Apps\ai-alchemy\server.py", line 139, in generate_image
    prompt_id = self.queue_prompt(prompt, client_id)["prompt_id"]
  File "E:\AI-Apps\ai-alchemy\server.py", line 112, in queue_prompt
    return json.loads(urllib.request.urlopen(req).read())
  File "C:\Python\lib\urllib\request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Python\lib\urllib\request.py", line 525, in open
    response = meth(req, response)
  File "C:\Python\lib\urllib\request.py", line 634, in http_response
    response = self.parent.error(
  File "C:\Python\lib\urllib\request.py", line 563, in error
    return self._call_chain(*args)
  File "C:\Python\lib\urllib\request.py", line 496, in _call_chain
    result = func(*args)
  File "C:\Python\lib\urllib\request.py", line 643, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request

I can see this error in ComfyUI:

Failed to validate prompt for output 25:
* SDTurboScheduler 22:
  - Required input is missing: denoise
Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
darabos commented 7 months ago

Awesome! I'm not sure about prompt_outputs_failed_validation. From what I can find, people seem to hit this error when the ComfyUI workflow tries to use a model checkpoint that hasn't been downloaded. To check for that, look at the ComfyUI UI and try loading the workflow from https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/. If you can get that working, the copy in AI Alchemy should work too; it's basically the same.
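If it is the checkpoint, a missing file is easy to confirm; a sketch (the directory and filename below are assumptions based on the sdturbo example, not verified against your install):

```python
# ComfyUI loads checkpoints from its models/checkpoints directory.
import os

ckpt = r"E:\AI-Apps\ComfyUI\models\checkpoints\sd_xl_turbo_1.0_fp16.safetensors"  # hypothetical path
print(os.path.exists(ckpt))
```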

iChristGit commented 7 months ago

> Awesome! I'm not sure about prompt_outputs_failed_validation. From what I can find, people seem to hit this error when the ComfyUI workflow tries to use a model checkpoint that hasn't been downloaded. To check for that, look at the ComfyUI UI and try loading the workflow from https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/. If you can get that working, the copy in AI Alchemy should work too; it's basically the same.

I downloaded the workflow and made sure I can actually generate images using sdxl-turbo; it works fine in ComfyUI. But I still get this error when running ai-alchemy, and the images won't generate:

Failed to validate prompt for output 25:
* SDTurboScheduler 22:
  - Required input is missing: denoise
Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}

Maybe the way ComfyUI processes the prompt has changed, and we need to adjust the comfyui_workflow.json file?
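If so, one workaround might be to fill in the missing input before the prompt is queued. A hypothetical sketch (node id "22" and the field name are guesses taken from the error message, not a verified fix):

```python
# Sketch: patch the workflow dict in server.py before queue_prompt() runs.
# Newer ComfyUI builds apparently made "denoise" a required input
# on SDTurboScheduler.
prompt["22"]["inputs"]["denoise"] = 1.0
```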

iChristGit commented 7 months ago

Just to confirm: I did a clean install, and the gated repo can be fixed with MaziyarPanahi/Mistral-7B-Instruct-v0.1, but the issue with ComfyUI asking for the denoise parameter is still present. I would love to find a fix for it, as my kid is obsessed with this game haha

darabos commented 7 months ago

Thanks for the summary! I've managed to reproduce the issue with a fresh ComfyUI. I'll try to figure out a fix.

darabos commented 7 months ago

It's working!

[screenshot]

You only really need this one line: https://github.com/darabos/ai-alchemy/commit/b2c3c957846f1d24eeb0a09f9b1f2ff67e61ce71#diff-dfce5e18b7315f6ad7d0c77f3224eccf23d4836afb6dc3eda674f66e2d5c351bR111

Or you can just update the whole repo. (I also switched to a model that can be used without login. I went with Mistral-7B v0.2; maybe it's smarter than v0.1? I also tried Phi-3, but it kept writing me an essay instead of giving a straight answer. 😄)

I hope it will work for you now!

iChristGit commented 7 months ago

Can confirm it's fixed with the latest git pull! Thank you so much!

iChristGit commented 7 months ago

One thing I noticed is that it's now not role-playing, and it sometimes doesn't comply with the prompt, e.g.: [screenshots]

darabos commented 7 months ago

Oh! Probably best to go back to Mistral-7B v0.1 then...

iChristGit commented 7 months ago

Changing to Praise2112/Mistral-7B-Instruct-v0.1-int8-ct2 seems to perform better; however, a lot of regeneration is needed, because it often generates the name with a lot of `</s>` tokens: [screenshot]
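A possible workaround (a sketch; `raw_name` is a stand-in for whatever the generate call returns, not a name from the repo):

```python
# Strip stray end-of-sequence markers from the generated name.
name = raw_name.replace("</s>", "").strip()
```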

Edit: I tried Llama-3-8B-Instruct (jncraton/Meta-Llama-3-8B-Instruct-ct2-int8) and it works perfectly!
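For anyone else swapping models: the *-ct2-int8 repos load with CTranslate2 directly. A minimal sketch, assuming CUDA is available (names and arguments are illustrative, not the repo's exact code):

```python
import ctranslate2
import transformers
from huggingface_hub import snapshot_download

# Download (or reuse from the cache) the pre-converted CTranslate2 model.
model_path = snapshot_download("jncraton/Meta-Llama-3-8B-Instruct-ct2-int8")

generator = ctranslate2.Generator(model_path, device="cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained(model_path)
```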