Open ToddCool opened 12 months ago
Hey @ToddCool, could you please confirm that you have `VAEEncoder.mlmodelc` in your `--resource-path`? If not, you will need to run with the `--convert-vae-encoder` flag to generate it during the PyTorch to Core ML conversion phase.
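For reference, a conversion invocation that includes the VAE encoder might look like the following sketch. The model version and output directory are placeholders, so substitute your own values:

```shell
# Hypothetical example: regenerate the Core ML bundles, including
# VAEEncoder.mlmodelc, which image2image requires.
# --convert-vae-encoder is the flag that emits the encoder model.
python -m python_coreml_stable_diffusion.torch2coreml \
  --convert-unet --convert-text-encoder --convert-vae-decoder \
  --convert-vae-encoder \
  --model-version <your-model-version> \
  -o <output-directory>
```

The resulting output directory is what you would then pass to `StableDiffusionSample` as `--resource-path`.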
Does the current `ml-stable-diffusion` support image2image? I'm getting the following error when trying to use it. Any help/comment is appreciated.

`ml-stable-diffusion` version: the latest

```shell
swift run StableDiffusionSample --resource-path /Users/myUserName/ml-stable-diffusion/checkPoints/coreml-Deliberate/split-einsum/deliberate_v2_split-einsum --step-count 50 --compute-units cpuAndNeuralEngine --disable-safety --output-path ~/Downloads "A test description" --image-count 5 --image /Users/myUserName/Downloads/test\.png
```