HansonChan opened this issue 9 months ago
@HansonChan Are you using a Mac?
@ka1tte Yes. Mac with M2. I tried moving the models to "mps", but another error appeared.
Traceback (most recent call last):
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1434, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1335, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/components/gallery.py", line 205, in postprocess
raise ValueError(f"Cannot process type as image: {type(img)}")
ValueError: Cannot process type as image: <class 'NoneType'>
@HansonChan I provide a parameter that makes the extension use the CPU for inference. You can configure it through the settings page or the API.
{
"cleaner_use_gpu": true
}
On my Mac, it works normally.
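For the API route, a minimal sketch of flipping that setting from a script. This assumes the webui was launched with `--api`, runs on the default local address, and that `cleaner_use_gpu` is registered as a regular webui option reachable through the standard options endpoint; adjust the URL for your setup.

```python
import json
import urllib.request


def build_options_payload(use_gpu: bool) -> dict:
    """Build the settings payload for the cleaner extension."""
    return {"cleaner_use_gpu": use_gpu}


def set_cleaner_device(use_gpu: bool, base_url: str = "http://127.0.0.1:7860") -> int:
    """POST the setting to the webui options endpoint and return the HTTP status.

    Requires the webui to be started with --api.
    """
    data = json.dumps(build_options_payload(use_gpu)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/sdapi/v1/options",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Usage (with a running webui): set_cleaner_device(use_gpu=False) to fall back to CPU.
```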
I solved my issue by downloading the model at https://huggingface.co/smartywu/big-lama and replacing it.
I have the same error on Windows:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\stable-diffusion-webui\extensions\sd-webui-cleaner\scripts\lama.py", line 14, in clean_object_init_img_with_mask
return clean_object(init_img_with_mask['image'],init_img_with_mask['mask'])
File "D:\stable-diffusion-webui\extensions\sd-webui-cleaner\scripts\lama.py", line 19, in clean_object
Lama = LiteLama2()
File "D:\stable-diffusion-webui\extensions\sd-webui-cleaner\scripts\lama.py", line 69, in __init__
self.load(location="cpu")
File "D:\stable-diffusion-webui\venv\lib\site-packages\litelama\litelama.py", line 19, in load
self._model = load_model(config_path=self._config_path, checkpoint_path=self._checkpoint_path, use_safetensors=use_safetensors)
File "D:\stable-diffusion-webui\venv\lib\site-packages\litelama\model.py", line 60, in load_model
with safetensors.safe_open(checkpoint_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
And I have changed the model as Keny25 did: https://huggingface.co/anyisalin/big-lama/tree/main
It works well.
But now I get the error in "Clean Up Upload" ...
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1335, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\components\gallery.py", line 205, in postprocess
raise ValueError(f"Cannot process type as image: {type(img)}")
ValueError: Cannot process type as image: <class 'NoneType'>
@zopi4k Do you have a GPU? I have only tested it on Linux and Mac, and I have not run it on Windows.
Yes @ka1tte ^^) I have an NVIDIA GeForce RTX 3070 Laptop GPU. If you just change the model, it works.
And thanks for this extension! I've been looking for one since February xDD
@zopi4k I was able to replicate those errors when trying to use the extension while generating an image; otherwise it's working as intended.
Same error. How do we fix it?
I also encounter the error: Error while deserializing header: HeaderTooLarge
It works pretty well on macOS, but on my Windows machine I encounter that error. I also tried replacing the safetensors files; it did not work.
Is this project still maintained?
*** Error loading script: clean_up_tab.py
Traceback (most recent call last):
File "D:\ Stable Diffusion\sd-webui-aki-v4\modules\scripts.py", line 319, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\ Stable Diffusion\sd-webui-aki-v4\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "
How do you fix this error?
A possibly helpful tip: the mask and the image must be the same size for the API to work. Remember to call mask.resize(image.size) before base64-encoding the images.
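The tip above can be sketched as a small helper using Pillow. The payload field names ("image", "mask") are illustrative assumptions; check the extension's API for the exact names it expects.

```python
import base64
import io

from PIL import Image


def encode_image(img: Image.Image) -> str:
    """PNG-encode a PIL image and return it as a base64 string."""
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode("ascii")


def prepare_payload(image: Image.Image, mask: Image.Image) -> dict:
    """Resize the mask to match the image, then base64-encode both.

    The key names "image" and "mask" are hypothetical; substitute
    whatever fields the cleaner API actually expects.
    """
    if mask.size != image.size:
        mask = mask.resize(image.size)
    return {"image": encode_image(image), "mask": encode_image(mask)}
```

Resizing before encoding avoids the size-mismatch failure entirely, regardless of where the mask came from.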
{ "cleaner_use_gpu": true } @ka1tte May I ask which file this parameter should be added to? My Mac fails with an error message:
ValueError(f"Cannot process type as image: {type(img)}") ValueError: Cannot process type as image: <class 'NoneType'>
Hello, my Mac has the same problem:
Traceback (most recent call last):
File "/Users/apple/Documents/AI/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/Users/apple/Documents/AI/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1434, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "/Users/apple/Documents/AI/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1335, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/Users/apple/Documents/AI/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/components/gallery.py", line 205, in postprocess
raise ValueError(f"Cannot process type as image: {type(img)}")
ValueError: Cannot process type as image: <class 'NoneType'>
How can I solve this? Thank you!
It seems it hasn't got the right device type?