HVision-NKU / StoryDiffusion

Create Magic Story!
Apache License 2.0
5.45k stars · 519 forks

WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.0.1+cpu)
    Python 3.10.11 (you have 3.10.1)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details #75
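For context on why the warning fires: an xFormers wheel's compiled C++/CUDA extension only loads when the installed PyTorch matches the exact build it was linked against, including the local build tag after the `+` (here `cu118` vs `cpu`). A minimal sketch of that comparison (a hypothetical helper for illustration, not xFormers' actual code):

```python
def same_torch_build(built_for: str, installed: str) -> bool:
    """Compare PyTorch version strings including the local build tag
    (the part after '+', e.g. 'cu118' or 'cpu')."""
    def parts(version: str):
        base, _, local = version.partition("+")
        return base, local
    return parts(built_for) == parts(installed)

# Values quoted from the warning above:
print(same_torch_build("2.0.1+cu118", "2.0.1+cpu"))    # False -> extension refuses to load
print(same_torch_build("2.0.1+cu118", "2.0.1+cu118"))  # True
```

Since the installed wheel is `2.0.1+cpu`, the mismatch is in the backend tag, not the base version, so upgrading PyTorch alone would not help; the build variant has to change.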

Closed qingxiao-jie closed 1 month ago

qingxiao-jie commented 1 month ago

[Screenshot attached: 1715418156621]

grossmetal commented 1 month ago

How did you solve this issue?

tomy2502 commented 1 month ago

Same problem here: the app keeps running and the web service starts,

but clicking the "Generate" button throws an error.


OS: Windows 11

GPU: NVIDIA RTX 3080, 10 GB

start_merge_step:7
['[Bob] at home, read new paper ', '[Bob] on the road, near the forest', '[Alice] is make a call at home ', 'A tiger appeared in the forest, at night ', ' The car on the road, near the forest ', '[Bob] very frightened, open mouth, in the forest, at night', '[Alice] very frightened, open mouth, in the forest, at night', '[Bob] and [Alice] running very fast, in the forest, at night', ' A house in the forest, at night ', '[Bob] and [Alice] in the house filled with treasure, laughing, at night ']
{'[Bob]': [0, 1, 5, 7, 9], '[Alice]': [2, 6, 7, 9]}
{'[Bob]': ' A man, wearing a black suit', '[Alice]': 'a woman, wearing a white shirt'}
{'[Bob]': [0, 1, 5, 7, 9], '[Alice]': [2, 6, 7, 9]}
{0: ['[Bob]'], 1: ['[Bob]'], 2: ['[Alice]'], 5: ['[Bob]'], 6: ['[Alice]'], 7: ['[Bob]', '[Alice]'], 9: ['[Bob]', '[Alice]']}
[Bob] [0, 1] [' A man, wearing a black suit at home, read new paper ', ' A man, wearing a black suit on the road, near the forest']
Traceback (most recent call last):
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\gradio\queueing.py", line 501, in call_prediction
    output = await route_utils.call_process_api(
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\gradio\route_utils.py", line 258, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\gradio\blocks.py", line 1710, in process_api
    result = await self.call_function(
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\gradio\blocks.py", line 1262, in call_function
    prediction = await utils.async_iteration(iterator)
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\gradio\utils.py", line 517, in async_iteration
    return await iterator.__anext__()
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\gradio\utils.py", line 510, in __anext__
    return await anyio.to_thread.run_sync(
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\gradio\utils.py", line 493, in run_sync_iterator_async
    return next(iterator)
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\gradio\utils.py", line 676, in gen_wrapper
    response = next(iterator)
  File "c:\storydiffusion\gradio_app_sdxl_specific_id_low_vram.py", line 855, in process_generation
    id_images = pipe(
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl.py", line 1096, in __call__
    ) = self.encode_prompt(
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl.py", line 415, in encode_prompt
    prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
  File "C:\Users\asus\.conda\envs\story\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
  File "C:\Users\asus\.conda\envs\story\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\asus\.conda\envs\story\lib\asyncio\proactor_events.py", line 165, in _call_connection_lost
    self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.

tomy2502 commented 1 month ago

How did you solve this issue?

https://github.com/HVision-NKU/StoryDiffusion/issues/8#issuecomment-2094952892

Followed this and it's solved.
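For readers hitting the same error: the usual remedy for this combination is to replace the CPU-only wheel with the CUDA 11.8 build that xFormers was compiled against. The exact steps in the linked comment may differ; the version pins below are assumptions inferred from the warning at the top of the thread, not taken from that comment.

```shell
# Remove the CPU-only wheels, then install the cu118 builds.
# Pins are assumptions based on the warning text; adjust to your CUDA toolkit.
pip uninstall -y torch torchvision torchaudio
pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2+cu118 \
    --index-url https://download.pytorch.org/whl/cu118
pip install xformers==0.0.20  # an xFormers release built against torch 2.0.1
# Verify: should print True on a machine with a CUDA GPU and driver installed.
python -c "import torch; print(torch.cuda.is_available())"
```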