BBird6 opened 4 months ago
Just ran into this issue myself. It's genuinely bizarre.
In my EmbeddingMerge extension I am using this function internally to create a template embedding file for SD1 models.
Two users are telling me that they also have this "unsafe pickle" error in my extension: one of them decided to add --disable-safe-unpickle, while the other is asking me to change the .pt format to .safetensors to fix the error.
Strange thing is that for many other people there are no errors whatsoever, including for myself!
I even tried adding torch.load(…, weights_only=True) when loading a properly saved embedding, but it didn't help.
Although the error says something about byteorder, that user has a pretty standard Intel machine running Windows 10.
Since the same issue happens with vanilla WebUI after creating a new embedding in the Train tab, I believe it should be fixed upstream rather than in my own extension. Right?
(Putting aside the fact that "nobody uses TI train in WebUI anymore", and that the Train tab also throws a completely different error for SDXL models, since no training support exists for them.)
The core problem might be deeper than just unsafe pickles, because it doesn't even happen for the majority of users. Does anybody have a clue what might be causing this?
Yeah, it is the byteorder tag that makes it different from older embeddings.
Yesterday I installed version 1.7 of Automatic1111 and created the .pt file there, and it didn't have the byteorder entry in it.
Then I took the file over to the 1.8 embeddings folder and started training there.
On the first step of writing the .pt after training, the 1.8 version re-added the byteorder entry, making the file unusable again.
It has nothing to do with your extension, as I don't use it. --disable-safe-unpickle is no option for me, as v1.8 won't even read the file as a valid embedding.
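For anyone who wants to check which format a given .pt file is in, here's a minimal sketch (the helper name is mine, not anything from webui or PyTorch): new-format files are zip archives, and the ones written by newer PyTorch carry a "byteorder" record, while old-format files are raw pickle streams.

```python
# Minimal sketch (helper name is mine): detect whether a .pt file is the
# new zip-based format carrying the "byteorder" record that v1.8's
# pickle check chokes on. Old-format .pt files are plain pickle streams.
import io
import zipfile

def has_byteorder_record(path_or_file):
    if not zipfile.is_zipfile(path_or_file):
        return False  # old (pre-1.6) format: raw pickle, no zip members
    with zipfile.ZipFile(path_or_file) as zf:
        return any(name.split("/")[-1] == "byteorder" for name in zf.namelist())

# Demo on an in-memory stand-in for a new-format archive:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("archive/byteorder", "little")
    zf.writestr("archive/data.pkl", b"")
print(has_byteorder_record(buf))  # True
```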
I am running into the same thing. I have a very loose grasp of what is happening in training - I just follow a tutorial - and the tutorial I used on 1.7.x doesn't work on 1.8.x because of this exact same issue. My old embeddings still work fine, but I can't create any new ones. Really annoying.
Two users are telling me that they also have this "unsafe pickle" error in my extension: one of them decided to add --disable-safe-unpickle, while another one is asking me to change .pt to .safetensors format to fix this error.
--disable-safe-unpickle just suppresses the error messages; if you created an embedding for publishing, everyone else would run into the error with that embedding.
Converting to .safetensors doesn't work either; the script goes straight to a callback error due to the corrupted file.
The byteorder entry seems to be the difference, but the .data folder also looks odd compared to my working files.
I'll try to fix this. If you are in a hurry, try editing modules/textual_inversion/textual_inversion.py at lines 64 and 71, like this.
After doing that, restart the WebUI and recreate the embedding.
This could cause some performance issues, but training and inference should both be correct now.
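In case the exact edit isn't visible here: as I understand it, the change forces the legacy (pre-1.6) pickle format on the torch.save calls at those lines. A sketch of the patch (variable names approximated, not copied from the actual file; exact line numbers vary by version):

```diff
-    torch.save(embedding_data, path)
+    # _use_new_zipfile_serialization=False forces the old pickle format,
+    # which the safe-unpickle check in v1.8 still accepts
+    torch.save(embedding_data, path, _use_new_zipfile_serialization=False)
```

The kwarg is torch.save's documented escape hatch back to the old format, so only the on-disk container changes, not the embedding contents.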
I'm seeing the same problem using...
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.0-62-gddb28b33
Commit hash: ddb28b33a3561a360b429c76f28f7ff1ffe282a0
CUDA 12.1
Launching Web UI with arguments: --data-dir=S:\GenAI\ImageGen\data --clip-models-path=S:\GenAI\ImageGen\data\models\clip --ckpt-dir=S:\GenAI\ImageGen\data\models\checkpoints --embeddings-dir=S:\GenAI\ImageGen\data\models\embeddings --vae-dir=S:\GenAI\ImageGen\data\models\vaeE --styles-file=S:\GenAI\ImageGen\data\*.csv --enable-insecure-extension-access --listen --theme=dark --no-half-vae --xformers
Another symptom I didn't see mentioned was that when trying to create an embedding, the UI doesn't give any hint of a problem but when you switch from Create embedding to the Train sub-tab, the new embedding isn't present in the list of available embeddings.
So, the PyTorch tutorial for saving and loading models mentions that the zip format changed in PyTorch 1.6+:
The 1.6 release of PyTorch switched torch.save to use a new zip file-based format. torch.load still retains the ability to load files in the old format. If for any reason you want torch.save to use the old format, pass the kwarg parameter _use_new_zipfile_serialization=False.
... and I can confirm that the workaround above by @LingXuanYin works. At least, I'm now able to create an embedding file without triggering this error and the new embedding DOES show on the list of Embeddings in the Train tab.
I'm unclear whether https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15774 is a fix (rather than just a workaround). Wouldn't a better solution be for the pickle check to accept .pt files in the new serialization format? I can only assume that's much more work.
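To make that idea concrete, here's a toy sketch (this is NOT the actual webui safety code; the function name and layout are mine) of how an archive scan could tolerate the metadata records newer PyTorch writes ("byteorder", "version") instead of rejecting the whole file, while still flagging genuinely unexpected members:

```python
# Toy sketch, NOT the actual webui safety code: an allow-list that tolerates
# the metadata records newer PyTorch writes into .pt zip archives while
# still flagging anything unknown.
ALLOWED_METADATA = {"byteorder", "version"}

def unexpected_entries(names):
    """Return archive member names a stricter check should still flag."""
    flagged = []
    for name in names:
        parts = name.split("/")
        base = parts[-1]
        if base in ALLOWED_METADATA:  # harmless metadata, no pickled code
            continue
        if base.endswith(".pkl"):     # pickle payload: scanned separately,
            continue                  # not rejected for merely existing
        if "data" in parts[:-1]:      # raw tensor storage: data/0, data/1, ...
            continue
        flagged.append(name)
    return flagged

print(unexpected_entries(["archive/data.pkl", "archive/byteorder",
                          "archive/version", "archive/data/0"]))    # []
print(unexpected_entries(["archive/injected.py"]))  # ['archive/injected.py']
```

The point being: the byteorder record carries no pickled code, so tolerating it shouldn't weaken the check.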
Checklist
What happened?
Whenever I create a new embedding, the pickle check fails to verify the newly created file. Old embeddings are read without any problem. (This is my first new TI training since the 1.8.0 update.)
Steps to reproduce the problem
What should have happened?
The .pt file should pass the pickle check.
What browsers do you use to access the UI ?
Google Chrome
Sysinfo
sysinfo-2024-03-11-06-37.json
Console logs
Additional information
No response