Closed: wikeeyang closed this issue 5 months ago
OK, I tried the RealVisXL_V4.0 checkpoint, but the output is very different from what I get from your Spaces demo. I also can't use USE_TORCH_COMPILE=1; it fails with the error below:
'StableDiffusionXLPipeline' object has no attribute 'compile'
My output image from example prompt 1 is below:
My environment is Windows 11 x64; maybe torch.compile doesn't support the Windows platform.
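For context, torch.compile is normally applied to the pipeline's UNet rather than through a pipe.compile() method (StableDiffusionXLPipeline has no such attribute), and the default Inductor backend relies on Triton, which at the time had no official Windows build. A minimal sketch of that pattern, assuming the USE_TORCH_COMPILE environment variable and the RealVisXL_V4.0 checkpoint mentioned in the thread (this is a generic example, not the project's exact script):

```python
import os
import sys

import torch
from diffusers import StableDiffusionXLPipeline

# Generic example, not the project's exact script.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# torch.compile targets the UNet; the pipeline object itself has no .compile() method.
# Skip compilation on Windows, where the Inductor/Triton backend is generally unavailable.
if os.getenv("USE_TORCH_COMPILE", "0") == "1" and sys.platform != "win32":
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```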
What are the special features of this project?
Did you get the error while loading the components with the PyPI version of diffusers? For example, if you are using diffusers <= 0.23.0, you should use == 0.29.0 instead. Are you running it locally or in a space like Colab, a Jupyter notebook, DataSpell, or VS Code? Kindly share the environment you are working with.
So PyTorch doesn't fully support your setup; it sounds like you are facing an offloading kind of problem.
May I know whether the shards and base components load as shown in the image, instead of being compiled into the environment/workspace?
Yes, you can try RealVis V4, Turbo, or epiCRealism as the model to get better results.
> OK, I tried the RealVisXL_V4.0 checkpoint, but the output is very different from what I get from your Spaces demo. I can't use USE_TORCH_COMPILE=1; it fails with: 'StableDiffusionXLPipeline' object has no attribute 'compile'
> My output image from example prompt 1 is below:
This means the compile method might not be available, or it simply isn't required. Instead, focus on ensuring your pipeline is set up correctly without it.
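A compile-free setup might look roughly like the sketch below; the scheduler, prompt, and sampling parameters are illustrative assumptions rather than this repo's exact configuration:

```python
import torch
from diffusers import EulerAncestralDiscreteScheduler, StableDiffusionXLPipeline

# Illustrative compile-free setup; the scheduler and parameters are assumptions.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

image = pipe(
    prompt="a photo of an astronaut riding a horse on the beach, highly detailed",
    negative_prompt="blurry, low quality, deformed",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("output.png")
```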
Thanks a lot for your rapid reply! Yes, last week I tried this project on your HF Spaces demo; the speed and quality were very good, so I wanted to deploy it locally to learn and test. My environment is a local Windows platform: Windows 11 x64, Python 3.10.13, torch 2.1.2+cu118, and an old P40 24GB GPU. At first I installed diffusers 0.25.0, which doesn't support torch.compile(); then I tried diffusers==0.28.2, but it also doesn't work in my environment. Below is my pip list (`D:\AITest\Hamster>pip list`):
```
Package Version
accelerate 0.30.1 addict 2.4.0 aiofiles 23.2.1 aiohttp 3.9.5 aiosignal 1.3.1 aliyun-python-sdk-core 2.15.1 aliyun-python-sdk-kms 2.16.3 altair 5.3.0 annotated-types 0.6.0 anyio 4.3.0 async-timeout 4.0.3 attrs 23.2.0
beautifulsoup4 4.12.3 bitsandbytes 0.43.0 blis 0.7.11 catalogue 2.0.10 certifi 2024.2.2 cffi 1.16.0 charset-normalizer 3.3.2 click 8.1.7 cloudpathlib 0.16.0 colorama 0.4.6 confection 0.1.4 contourpy 1.2.1 crcmod 1.7 cryptography 42.0.7 cycler 0.12.1 cymem 2.0.8
datasets 2.18.0 diffusers 0.28.2 dill 0.3.8 dnspython 2.6.1 editdistance 0.6.2 einops 0.7.0 email_validator 2.1.1 et-xmlfile 1.1.0 exceptiongroup 1.2.1 fairscale 0.4.0 fastapi 0.110.3 ffmpy 0.3.2 filelock 3.14.0 fonttools 4.51.0 frozenlist 1.4.1 fsspec 2024.2.0
gast 0.5.4 gdown 4.6.0 gradio 4.25.0 gradio_client 0.15.0 h11 0.14.0 hf_transfer 0.1.6 httpcore 1.0.5 httptools 0.6.1 httpx 0.27.0 huggingface-hub 0.23.0 idna 3.7 importlib_metadata 7.1.0 importlib_resources 6.4.0
Jinja2 3.1.4 jmespath 0.10.0 joblib 1.4.2 jsonlines 4.0.0 jsonschema 4.22.0 jsonschema-specifications 2023.12.1 kiwisolver 1.4.5 langcodes 3.4.0 language_data 1.2.0 lxml 5.2.2 marisa-trie 1.1.1 markdown-it-py 3.0.0 markdown2 2.4.10 MarkupSafe 2.1.5 matplotlib 3.7.4 mdurl 0.1.2
modelscope 1.14.0 more-itertools 10.1.0 mpmath 1.3.0 multidict 6.0.5 multiprocess 0.70.16 murmurhash 1.0.10 networkx 3.3 nltk 3.8.1 numpy 1.24.4 opencv-python-headless 4.5.5.64 openpyxl 3.1.2 orjson 3.10.3 oss2 2.18.5
packaging 23.2 pandas 2.2.2 peft 0.4.0 Pillow 10.1.0 pip 24.0 pipdeptree 2.20.0 pipeline 0.1.0 platformdirs 4.2.2 portalocker 2.8.2 preshed 3.0.9 protobuf 4.25.0 psutil 5.9.8 pyarrow 16.1.0 pyarrow-hotfix 0.6 pycparser 2.22 pycryptodome 3.20.0
pydantic 2.7.1 pydantic_core 2.18.2 pydub 0.25.1 Pygments 2.18.0 pyparsing 3.1.2 PySocks 1.7.1 python-dateutil 2.9.0.post0 python-dotenv 1.0.1 python-multipart 0.0.9 pytz 2024.1 pywin32 306 PyYAML 6.0.1
referencing 0.35.1 regex 2024.5.15 requests 2.31.0 rich 13.7.1 rpds-py 0.18.1 ruff 0.4.4 sacrebleu 2.3.2 safetensors 0.4.3 scipy 1.13.1 seaborn 0.13.0 semantic-version 2.10.0 sentencepiece 0.1.99 setuptools 65.5.0 shellingham 1.5.4 shortuuid 1.0.11 simplejson 3.19.2 six 1.16.0
smart-open 6.4.0 sniffio 1.3.1 sortedcontainers 2.4.0 soupsieve 2.5 spacy 3.7.2 spacy-legacy 3.0.12 spacy-loggers 1.0.5 srsly 2.4.8 starlette 0.37.2 sympy 1.12 tabulate 0.9.0 thinc 8.2.3 timm 0.9.10 tokenizers 0.19.1 tomli 2.0.1 tomlkit 0.12.0 toolz 0.12.1
torch 2.1.2+cu118 torchaudio 2.1.2+cu118 torchvision 0.16.2+cu118 tqdm 4.66.1 transformers 4.40.0 triton 2.1.0 typer 0.9.4 typing_extensions 4.8.0 tzdata 2024.1 ujson 5.10.0 urllib3 2.2.1 uvicorn 0.29.0
wasabi 1.1.2 watchfiles 0.21.0 weasel 0.3.4 websockets 11.0.3 xxhash 3.4.1 yapf 0.40.2 yarl 1.9.4 zipp 3.18.2
```
I will try diffusers==0.29.0. Thanks again for your help!
I have both VS 2019 and VS 2022, CUDA toolkits from 11.6 to 12.5, CMake, Ninja, and so on. In my environment I can install and run projects that need local compilation, such as TencentARC/InstantMesh.
Everything is fine on your side. I always encourage everyone working in computer vision modeling/development to keep an eye on the compute units: A100 / T4 GPUs have been the better choice over time, whether for the terminal of the virtual site, the integrated pipelines, compilation, torch, the scheduler, or main() getting up to produce good-quality images via GPU acceleration.
You can also try demo Spaces like this one for a better understanding: https://huggingface.co/spaces/prithivMLmods/Midjourney
Thanks a lot for your help, @PRITHIVSAKTHIUR! I will try your idea and script on my Ubuntu + 4090 24GB server. Yes, your script helps me understand better how to get better images. Thank you again for all the help.
Hey @wikeeyang, once again, sorry to interrupt you!
You asked for high-resolution, high-quality images with fast computation. For your case, I came up with the idea of using a T4 GPU for acceleration. Yes, we know the NVIDIA A100 Tensor Core GPU is unmatched in its power and highest-performing elastic computation, but apart from that you can use a T4 as the hardware accelerator. You asked me how to run this outside of Hugging Face, right? Use a T4 in Google Colab or any other workspace that supports it.
Just have your HF token ready to pass for login:
```python
# Authenticate with Hugging Face
from huggingface_hub import login

# Log in to Hugging Face using the provided token
hf_token = "---pass your Hugging Face access token---"
login(hf_token)
```
Visit my Colab notebook for an instance of running outside of the HF hardware. Accelerator: T4 GPU. As we know, you can get an A100 or L4 there in Colab on the premium/paid tiers; the T4 is free for a certain amount of computation, so I went with it. On local hardware, it's up to you what to do.
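As a quick sanity check in Colab (or locally) before loading the pipeline, you might verify which GPU you got and pick the dtype accordingly; this is a generic snippet, not part of the repo:

```python
import torch

# Generic sanity check; not part of the repo's script.
if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU: {name}, VRAM: {vram_gb:.1f} GB")
    dtype = torch.float16  # fp16 is the usual choice on a T4
else:
    dtype = torch.float32
    print("No CUDA GPU detected; generation will be very slow on CPU.")
```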
Second thing: the amount of detail in your prompt also affects the results. See the higher-end, detailed prompts at https://huggingface.co/spaces/prithivMLmods/Top-Prompt-Collection, or on freeflo.ai and PromptHero, for more detailed results.
Colab link (example using stabilityai/sdxl-turbo): https://colab.research.google.com/drive/1zYj5w0howOT3kiuASjn8PnBUXGh_MvhJ#scrollTo=Ok9PcD_kVwUI
** After passing the access token, remove your token from the notebook before sharing it with others.
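One way to avoid having the token in the notebook at all is to read it from an environment variable; the HF_TOKEN name below is just a common convention, not something this project requires:

```python
import os

from huggingface_hub import login

# Read the token from an environment variable instead of hardcoding it.
hf_token = os.getenv("HF_TOKEN")
if hf_token:
    login(hf_token)
else:
    print("HF_TOKEN is not set; skipping Hugging Face login.")
```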
Hi @PRITHIVSAKTHIUR, thanks a lot again! This message is a great help for me. I will try it. Many thanks again.
For MODEL_ID = os.getenv("MODEL_REPO"), what is the model name? Is it stabilityai/sdxl-turbo or SG161222/RealVisXL_V4.0? I took a look at the scripts, and only one image model is loaded in the script.
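If the script really does read the model from a MODEL_REPO environment variable, either repo ID could be supplied there. A hedged sketch, with the fallback value being an assumption rather than the repo's actual default:

```python
import os

import torch
from diffusers import StableDiffusionXLPipeline

# The fallback below is an assumption, not necessarily the repo's actual default.
MODEL_ID = os.getenv("MODEL_REPO", "SG161222/RealVisXL_V4.0")

pipe = StableDiffusionXLPipeline.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")
```

On Windows you would set the variable with `set MODEL_REPO=stabilityai/sdxl-turbo` before launching the script, or with `export MODEL_REPO=...` on Linux.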