apple / ml-stable-diffusion

Stable Diffusion with Core ML on Apple Silicon
MIT License

Can't specify `--image xx` (`image2image`) #209

Open ToddCool opened 12 months ago

ToddCool commented 12 months ago

Does the current ml-stable-diffusion support image2image? I get the following error when I try to use it. Any help or comments are appreciated.

  1. ml-stable-diffusion version: the latest
  2. Command entered:

     ```sh
     swift run StableDiffusionSample --resource-path /Users/myUserName/ml-stable-diffusion/checkPoints/coreml-Deliberate/split-einsum/deliberate_v2_split-einsum --step-count 50 --compute-units cpuAndNeuralEngine --disable-safety --output-path ~/Downloads "A test description" --image-count 5 --image /Users/myUserName/Downloads/test\.png
     ```
  3. Output with error:

     ```
     Build complete! (0.08s)
     Loading resources and creating pipeline
     (Note: This can take a while the first time using these resources)
     Sampling ...
     StableDiffusion/Encoder.swift:96: Fatal error: Unexpectedly found nil while unwrapping an Optional value
     [1]    20995 trace trap  swift run StableDiffusionSample --resource-path  --step-count 50        5
     ```

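For readers hitting the same trace: a fatal nil-unwrap inside `StableDiffusion/Encoder.swift` during an `--image` run generally means a required compiled model failed to load from `--resource-path`. A quick sanity check is to list what the resource directory actually contains (the path below is the reporter's; substitute your own):

```sh
# List the compiled Core ML bundles in the resource directory.
# Text-to-image needs TextEncoder.mlmodelc, Unet.mlmodelc and VAEDecoder.mlmodelc;
# image2image (--image) additionally needs VAEEncoder.mlmodelc, and the encoder
# load returns nil (and then traps) if that bundle is absent.
ls /Users/myUserName/ml-stable-diffusion/checkPoints/coreml-Deliberate/split-einsum/deliberate_v2_split-einsum
```
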
atiorh commented 12 months ago

Hey @ToddCool, could you please confirm that you have `VAEEncoder.mlmodelc` in your `--resource-path`? If not, you will need to re-run the PyTorch to Core ML conversion with `--convert-vae-encoder` to generate it.
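
For reference, a conversion invocation along these lines regenerates the resources with the encoder included. This is a minimal sketch: `<model-id>` and `<output-dir>` are placeholders, and the flag set assumes the repo's `torch2coreml` script as documented in its README:

```sh
# Re-run the PyTorch -> Core ML conversion, adding --convert-vae-encoder
# so that VAEEncoder.mlmodelc is emitted alongside the other bundles.
python -m python_coreml_stable_diffusion.torch2coreml \
    --model-version <model-id> \
    --convert-unet --convert-text-encoder --convert-vae-decoder \
    --convert-vae-encoder \
    --attention-implementation SPLIT_EINSUM \
    --bundle-resources-for-swift-cli \
    -o <output-dir>
```

Once the conversion finishes, pointing `--resource-path` at the newly bundled resources should let the `--image` run from the report get past the encoder load.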