Closed LightTemplar closed 11 months ago
I will add a way to set it dynamically in the settings, but for now the only way is to manually change the IP in scripts/if_prompt_mkr.py
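The manual change described above could look roughly like this. This is only a hedged sketch: the function name, the default port, and the URL layout are assumptions for illustration, not the actual code in scripts/if_prompt_mkr.py — the point is just to swap the hard-coded localhost address for the LAN IP of the machine running oobabooga.

```python
# Hypothetical sketch (names and port are assumptions, not the extension's real code):
# replace the hard-coded localhost address with the remote machine's LAN IP.

def api_base_url(host: str = "127.0.0.1", port: int = 5000) -> str:
    """Build the base URL used to reach the oobabooga text-generation-webui API."""
    return f"http://{host}:{port}/api"

# Default (same-machine) endpoint:
local_url = api_base_url()

# Remote endpoint, e.g. a second PC on the local network:
remote_url = api_base_url("192.168.1.50")
```

Remember that oobabooga must also be started with its API enabled and listening on the network interface (not just localhost) for a remote machine to reach it.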
I added the feature to use another IP. Check the video later; it's uploading now.
From the logs it looks like the connection is working! Though, I still didn't get that beautiful result from your images :)
What prompt did you get? Could you take a screenshot of the image + prompt?
Thanks to your video, I got it working =) I only had to use TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GGML with Transformers because I don't have enough VRAM for the 13B GPTQ model with ExLlama. My result for the prompt "(CatGirl warrior:1.2), legendary flower" was:
That's fantastic thank you so much for showing your result.
Did it work with the server, or only locally?
I will run my own checks this weekend, but it would be nice to know. So far I have only been able to run it locally, so it's hard for me to tell if something doesn't work unless people tell me.
On Tue, Aug 8, 2023, 6:35 AM Light Templar @.***> wrote:
Closed #3 https://github.com/if-ai/IF_prompt_MKR/issues/3 as completed.
I understand! They are two VMs on two PCs in local network. For oobabooga I got 5GB VRAM (Quadro P2200), and for AUTO1111 - 12 GB VRAM (RTX 3060)
How do I set it up if I have oobabooga installed on another machine?