@wikeeyang you can try the following code. The crash happens because the plain `str(datetime.now())` used for the filename contains `:` characters, which Windows does not allow in filenames; formatting the timestamp with `strftime` avoids them:

```python
import os
from datetime import datetime

# Get the current date and time
now = datetime.now()
# Format the timestamp for the filename using only characters
# that are valid in Windows filenames (no ':')
current_date = now.strftime("%Y-%m-%d_%H-%M-%S")

output_dir = os.path.join(script_directory, "images", "gradio_outputs")
os.makedirs(output_dir, exist_ok=True)

output_path = os.path.join(output_dir, f"{current_date}-{seed}.jpg")
images.save(output_path)
return output_path
```
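If you would rather keep the microsecond-precision timestamp, another option is to strip the characters Windows rejects before building the path. A minimal sketch; the `sanitize_filename` helper is hypothetical, not part of ConsistentID:

```python
import re
from datetime import datetime

def sanitize_filename(name: str) -> str:
    """Replace characters that Windows does not allow in filenames."""
    # NTFS rejects < > : " / \ | ? * ; swap each one for '-'
    return re.sub(r'[<>:"/\\|?*]', "-", name)

# e.g. '2024-05-11 17:41:15.170376' -> '2024-05-11 17-41-15.170376'
current_date = sanitize_filename(str(datetime.now()))
```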
Windows 11 x64, Python 3.10.11 + torch 2.0.2 + cu11.8.

```
Running on local URL:  http://127.0.0.1:8888

To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 4.16.0, however version 4.29.0 is available, please upgrade.
The config attributes {'image_encoder': [None, None]} were passed to ConsistentIDStableDiffusionPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'image_encoder': [None, None]} are not expected by ConsistentIDStableDiffusionPipeline and will be ignored.
Loading pipeline components...:   0%|          | 0/7 [00:00<?, ?it/s]
D:\AITest\ConsistentID\Python310\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|██████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.33it/s]
D:\AITest\ConsistentID\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names. Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
  warnings.warn(
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\Administrator/.insightface\models\buffalo_l\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\Administrator/.insightface\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\Administrator/.insightface\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\Administrator/.insightface\models\buffalo_l\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\Administrator/.insightface\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
Successfully loaded weights from checkpoint
D:\AITest\ConsistentID\Python310\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times `max(M, N)` where M and N are the input matrix dimensions. To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
D:\AITest\ConsistentID\Python310\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py:242: FutureWarning: `_encode_prompt()` is deprecated and it will be removed in a future version. Use `encode_prompt()` instead. Also, be aware that the output format changed from a concatenated tensor to a tuple.
  deprecate("_encode_prompt()", "1.0.0", deprecation_message, standard_warn=False)
D:\AITest\ConsistentID\pipline_StableDiffusion_ConsistentID.py:512: FutureWarning: Accessing config attribute `in_channels` directly via 'UNet2DConditionModel' object attribute is deprecated. Please access 'in_channels' over 'UNet2DConditionModel's config object instead, e.g. 'unet.config.in_channels'.
  num_channels_latents = self.unet.in_channels
100%|████████████████████████████████████████████████████████████████████████████████| 50/50 [00:20<00:00, 2.45it/s]
D:\AITest\ConsistentID\Python310\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py:458: FutureWarning: The decode_latents method is deprecated and will be removed in 1.0.0. Please use VaeImageProcessor.postprocess(...) instead
  deprecate("decode_latents", "1.0.0", deprecation_message, standard_warn=False)
Traceback (most recent call last):
  File "D:\AITest\ConsistentID\Python310\lib\site-packages\gradio\queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "D:\AITest\ConsistentID\Python310\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\AITest\ConsistentID\Python310\lib\site-packages\gradio\blocks.py", line 1561, in process_api
    result = await self.call_function(
  File "D:\AITest\ConsistentID\Python310\lib\site-packages\gradio\blocks.py", line 1179, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\AITest\ConsistentID\Python310\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\AITest\ConsistentID\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\AITest\ConsistentID\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\AITest\ConsistentID\Python310\lib\site-packages\gradio\utils.py", line 695, in wrapper
    response = f(*args, **kwargs)
  File "D:\AITest\ConsistentID\app-me.py", line 79, in process
    images.save(os.path.join(output_dir, f"{current_date}-{seed}.jpg"))
  File "D:\AITest\ConsistentID\Python310\lib\site-packages\PIL\Image.py", line 2410, in save
    fp = builtins.open(filename, "w+b")
OSError: [Errno 22] Invalid argument: 'D:\AITest\ConsistentID/images/gradio_outputs\2024-05-11 17:41:15.170376-614.jpg'
```

My app.py contains lines like the `images.save(...)` call shown at line 79 of the traceback.
How do I modify the `output_dir` value to solve this error?
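For reference, the failure seems reproducible on Windows without Gradio or PIL at all; a minimal sketch (the seed value 614 is just the one from the log above):

```python
from datetime import datetime

# str(datetime.now()) contains ':' characters, e.g.
# '2024-05-11 17:41:15.170376', and ':' is not allowed in
# Windows filenames, so open() raises
# OSError: [Errno 22] Invalid argument.
filename = f"{datetime.now()}-614.jpg"
with open(filename, "w+b") as fp:  # "w+b" is the mode PIL uses in Image.save()
    pass
```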