pribeh opened this issue 1 month ago
I think your issue is that you're not loading the image in the ComfyUI workflow; you're working from an empty latent image instead. In my workflow, I have a node like this:
"1277": {
inputs: {
image: "image.png",
upload: "image",
},
class_type: "LoadImage",
_meta: {
title: "Load Image",
},
},
which refers to the uploaded image; the node's output is the loaded image.
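For img2img you then typically encode that image into latent space before sampling. A minimal sketch of the wiring (node IDs and the VAE reference here are placeholders, not from my actual workflow) might look like:

"1278": {
  inputs: {
    pixels: ["1277", 0], // IMAGE output of the LoadImage node above
    vae: ["1279", 0],    // a VAELoader node (placeholder ID)
  },
  class_type: "VAEEncode",
  _meta: {
    title: "VAE Encode",
  },
},

The LATENT output of this VAEEncode node is what the sampler's latent_image input should point at.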
Thanks for sharing @billyberkouwer! I can't seem to get that working in my workflow, though. I'm new to ComfyUI and not sure how to export the workflow JSON for things like image-to-image. Could you or anyone share a simple image-to-image workflow?
If I try something like this:
"5": {
inputs: {
image: selectedImages[0]?.name || 'image_0.png', // Use the first selected image
upload: selectedImages[0]?.base64
? selectedImages[0].base64.replace(/^data:image\/\w+;base64,/, "") // Strip the base64 prefix
: "", // Fallback in case there's no image
},
class_type: "LoadImage",
_meta: { title: "Load Image" }
},
I always get this error:

invalid prompt: {'type': 'invalid_prompt', 'message': 'Cannot execute because a node is missing the class_type property.', 'details': "Node ID '#5'", 'extra_info': {}}
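As a side note, that message comes from ComfyUI's prompt validation: some entry in the workflow object isn't a plain object carrying a class_type field. A small client-side check (a hypothetical helper, not part of any API) can surface the offending node before the request is sent:

// Hypothetical pre-flight check: every workflow entry must be a single
// object with a class_type, or ComfyUI rejects the whole prompt.
function assertValidWorkflow(workflow) {
  for (const [id, node] of Object.entries(workflow)) {
    if (!node || Array.isArray(node) || typeof node !== "object" || !node.class_type) {
      throw new Error(`Node ID '#${id}' is missing the class_type property`);
    }
  }
}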
Here's a more complete version:
const filteredPrompt = customFilter.clean(aiTextInput);
const randomSeed = Math.floor(Math.random() * Number.MAX_SAFE_INTEGER); // Generate a random seed

const requestRunPod = {
  input: {
    workflow: {
      5: selectedImages.length > 0
        ? selectedImages.map((image, index) => ({
            inputs: {
              image: image.name || `image_${index}.png`,
              upload: image.base64 ? image.base64.replace(/^data:image\/\w+;base64,/, "") : "",
            },
            class_type: "LoadImage",
            _meta: { title: "Load Image" }
          }))
        : {
            inputs: {
              width: requestBody.input.aspect_ratios_selection.split('*')[0],
              height: requestBody.input.aspect_ratios_selection.split('*')[1],
              batch_size: 1
            },
            class_type: "EmptyLatentImage",
            _meta: { title: "Empty Latent Image" }
          },
      6: {
        inputs: {
          samples: ["5", 0], // Pass image or latent samples to the encoder
        },
        class_type: "ImageToLatent", // New node for encoding image into latent space
        _meta: { title: "Encode to Latent" }
      },
      7: {
        inputs: {
          samples: ["6", 0], // Now passing encoded latent samples instead of direct images
          vae: ["10", 0]
        },
        class_type: "VAEDecode",
        _meta: { title: "VAE Decode" }
      },
      8: {
        inputs: {
          filename_prefix: "ComfyUI",
          images: ["7", 0] // Resulting images from the VAE decode
        },
        class_type: "SaveImage",
        _meta: { title: "Save Image" }
      },
      10: {
        inputs: {
          vae_name: "ae.safetensors"
        },
        class_type: "VAELoader",
        _meta: { title: "Load VAE" }
      },
      11: {
        inputs: {
          clip_name1: "t5xxl_fp8_e4m3fn.safetensors",
          clip_name2: "clip_l.safetensors",
          type: "flux"
        },
        class_type: "DualCLIPLoader",
        _meta: { title: "DualCLIPLoader" }
      },
      12: {
        inputs: {
          unet_name: "flux1-schnell.safetensors",
          weight_dtype: "fp8_e4m3fn"
        },
        class_type: "UNETLoader",
        _meta: { title: "Load Diffusion Model" }
      },
      13: {
        inputs: {
          noise: ["25", 0],
          guider: ["22", 0],
          sampler: ["16", 0],
          sigmas: ["17", 0],
          latent_image: ["5", 0] // Latent image comes from either loaded image or latent space
        },
        class_type: "SamplerCustomAdvanced",
        _meta: { title: "SamplerCustomAdvanced" },
        advanced_params: {
          disable_intermediate_results: true, // Disable intermediate results
          disable_preview: true // Turn off preview generation
        }
      },
      16: {
        inputs: {
          sampler_name: "euler"
        },
        class_type: "KSamplerSelect",
        _meta: { title: "KSamplerSelect" }
      },
      17: {
        inputs: {
          scheduler: "sgm_uniform",
          steps: 4,
          denoise: selectedImages.length > 0 ? 0.6 : 1, // Use lower denoise for img2img
          model: ["12", 0]
        },
        class_type: "BasicScheduler",
        _meta: { title: "BasicScheduler" }
      },
      22: {
        inputs: {
          model: ["12", 0],
          conditioning: ["6", 0]
        },
        class_type: "BasicGuider",
        _meta: { title: "BasicGuider" }
      },
      25: {
        inputs: {
          noise_seed: randomSeed
        },
        class_type: "RandomNoise",
        _meta: { title: "RandomNoise" }
      }
    }
  }
};
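Two things worth flagging in this version (assumptions on my part, worth verifying): when selectedImages is non-empty, node 5 is built with .map(), so its value is an array of node objects rather than a single object, which is exactly the kind of shape that trips the class_type validation above; and "ImageToLatent" is not a class_type I can find in stock ComfyUI; the built-in encoder is VAEEncode, which takes pixels and vae. A sketch of nodes 5 and 6 that avoids both problems (using only the first selected image) might look like:

// Sketch: node 5 is always a single object; stock VAEEncode handles img2img.
5: selectedImages.length > 0
  ? {
      inputs: {
        image: selectedImages[0].name || "image_0.png",
        upload: "image",
      },
      class_type: "LoadImage",
      _meta: { title: "Load Image" }
    }
  : {
      inputs: {
        width: requestBody.input.aspect_ratios_selection.split('*')[0],
        height: requestBody.input.aspect_ratios_selection.split('*')[1],
        batch_size: 1
      },
      class_type: "EmptyLatentImage",
      _meta: { title: "Empty Latent Image" }
    },
6: {
  inputs: {
    pixels: ["5", 0], // IMAGE output of LoadImage (only valid in the img2img branch)
    vae: ["10", 0]    // the VAELoader node
  },
  class_type: "VAEEncode", // built-in ComfyUI encoder; replaces "ImageToLatent"
  _meta: { title: "VAE Encode" }
},

With this wiring, node 13's latent_image would point at ["6", 0] in the img2img case (and at ["5", 0] only when node 5 is an EmptyLatentImage); note also that BasicGuider's conditioning input expects CONDITIONING, e.g. from a CLIPTextEncode node, not a latent.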
Describe the bug
I'm running into an issue when passing images to ComfyUI with the Flux.1 Schnell model. When I pass the following request, all indicators are that the image is indeed uploaded to the Comfy worker, but the resulting output does not reflect the uploaded image at all. Am I doing something wrong with the configuration below, or does the image-to-image workflow not work properly with the Flux model?
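One more thing worth checking: if this is running on the runpod-worker-comfy handler, my reading of that worker's README is that image data goes in a separate input.images array (base64, referenced by name from the LoadImage node), rather than being stuffed into LoadImage's upload field. A sketch of that request shape (field names per my reading of the README; double-check against the worker version you're deploying):

{
  "input": {
    "workflow": { /* nodes as above; LoadImage's image field set to "image_0.png" */ },
    "images": [
      {
        "name": "image_0.png",          // must match the LoadImage node's image input
        "image": "<base64-encoded PNG>" // raw base64, without the data:image/... prefix
      }
    ]
  }
}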