Open jdc4429 opened 1 year ago
In the UI you can click the gear button beside "Queue Size" and enable "dev mode options"; then you can use the new button to save workflows in API format, which you can use with the API.
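For anyone following along, here is a minimal sketch of how that saved API-format file can be submitted. It assumes ComfyUI's standard POST `/prompt` endpoint on the default `127.0.0.1:8188` address; the file name `workflow_api.json` and the helper names `build_payload`/`queue_workflow` are my own, not part of ComfyUI.

```python
import json
import urllib.request

SERVER_ADDRESS = "127.0.0.1:8188"  # assumed default ComfyUI host:port

def build_payload(workflow: dict, client_id: str) -> bytes:
    # The /prompt endpoint expects {"prompt": <workflow>, "client_id": <id>}
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_workflow(workflow: dict, client_id: str = "example-client") -> dict:
    # POST the API-format workflow; the server responds with a prompt_id
    req = urllib.request.Request(
        "http://{}/prompt".format(SERVER_ADDRESS),
        data=build_payload(workflow, client_id),
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Usage (assumes you saved the workflow with the dev-mode button):
# with open("workflow_api.json") as f:
#     workflow = json.load(f)
# print(queue_workflow(workflow))
```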
Oh cool, I can rule out whether it's a setting this way. Quite frankly, there are so many! :)
Thank you for the quick response. I love your interface.
I get an error when I try to use the created api workflow...
Traceback (most recent call last):
File "c:\inetpub\wwwroot\comfy.py", line 276, in
It doesn't seem to like the format... I was sure to paste everything from """ to """.
It doesn't match the format I had previously, which had numbers for each section. This one, when I look at the Save Image node, doesn't have a '9', for example.
Here is the workflow that was created...
FYI, it is one of the lines from 15 up that is causing it to fail... Up to 12 it's good. Never mind, there were \n characters in my output for section 15 messing it up.
I have added the API as instructed. I still get the same lousy output from the API...
Also, could you fix it so the API queues the requests? You can queue from the web interface but not from the API...
I might have figured out the issue. It seems my code was putting %20 and %2C codes in the prompt, and I believe that may have been hurting its ability to read the prompt correctly.
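If the prompt text really is arriving percent-encoded, the standard-library `urllib.parse.unquote` will decode it before it goes into the workflow JSON. The example string below is hypothetical:

```python
from urllib.parse import unquote

# Hypothetical prompt string that was URL-encoded somewhere upstream
encoded = "a%20photo%20of%20a%20castle%2C%20highly%20detailed"
decoded = unquote(encoded)  # %20 -> space, %2C -> comma
print(decoded)  # a photo of a castle, highly detailed
```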
Nope... It's still doing it... For some reason, even when copying all the API settings from the web interface, the image is just not the same coming from the API. It's like 2D versus 3D...
I believe the LoRA (WowifierXL) is not being applied, even though it is listed in the API config.
You might want to add log output inside LoraLoader (or other nodes) to verify it is actually being applied...
Also, could you fix it so the API queues the requests? You can queue from the web interface but not from the API...
I've just completed an API integration and can confirm that API requests are indeed queued.
However, like you, I initially encountered issues because the API wasn't accepting my submitted prompt.
I resolved this by adding code to parse my full prompt into a JSON object before submitting it to the API.
Once I verified that my prompt was valid JSON, everything started working as expected! It's a bit tricky, but it's definitely achievable.
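A minimal sketch of that validation step, assuming the workflow arrives as a string: parse it with `json.loads` and fail fast with a clear error before anything is sent to the server. The function name `parse_prompt` is my own.

```python
import json

def parse_prompt(prompt_text: str) -> dict:
    """Validate that prompt_text is a JSON object before submitting it."""
    try:
        prompt = json.loads(prompt_text)
    except json.JSONDecodeError as exc:
        # Surface the parse error locally instead of getting a server-side failure
        raise ValueError("prompt is not valid JSON: {}".format(exc)) from exc
    if not isinstance(prompt, dict):
        raise ValueError("an API-format workflow must be a JSON object")
    return prompt
```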
I am using the full JSON...
import json
import websocket  # from the websocket-client package

# prompt_text, smodel, iheight, iwidth, rprompt, nprompt, random_number,
# server_address, client_id and get_images are defined elsewhere in my script
prompt = json.loads(prompt_text)
# Set the text prompt for our positive CLIPTextEncode
prompt["4"]["inputs"]["ckpt_name"] = smodel
prompt["5"]["inputs"]["height"] = iheight
prompt["5"]["inputs"]["width"] = iwidth
prompt["6"]["inputs"]["text"] = rprompt
prompt["7"]["inputs"]["text"] = nprompt
# Seed must always be different
rnumber = random_number()
prompt["3"]["inputs"]["seed"] = rnumber
ws = websocket.WebSocket()
ws.connect("ws://{}/ws?clientId={}".format(server_address, client_id))
images = get_images(ws, prompt)
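On the queueing question: you can also inspect the server's queue directly. The sketch below assumes ComfyUI's GET `/queue` endpoint, which returns the running and pending prompt lists; the helper names `get_queue` and `pending_count` are my own.

```python
import json
import urllib.request

def get_queue(server_address: str = "127.0.0.1:8188") -> dict:
    # GET /queue returns JSON with "queue_running" and "queue_pending" lists
    with urllib.request.urlopen("http://{}/queue".format(server_address)) as resp:
        return json.loads(resp.read())

def pending_count(queue_data: dict) -> int:
    # Number of prompts still waiting to run
    return len(queue_data.get("queue_pending", []))

# Usage, against a running server:
# print(pending_count(get_queue()))
```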
Could the quality issue be due to this line: the compress-on-send line in server.py?
Hello again,
I found another issue... For some reason, the quality of the image generated from the API is much worse than when it is generated from the web interface with the same settings... I can't figure out why. I believe I am matching all the parameters; denoise is set to 1, for example.
If you look at the two images, same prompt, but one is from the API and the other is from the web interface... I'm stumped.