-
Hunyuan-DiT is a new image-generation model. Benchmarks show that it exceeds SD3 overall.
However, the model is relatively complex and uses a lot of VRAM for training, so I thought it would be nice to b…
-
Hi, great work! I succeeded in reproducing the VAE adaptation from SD2's to SDXL's, as discussed in PixArt-Sigma. However, the adaptation to SDXL's VAE was not successful. After 10k steps of fine-tuning on SAM, …
-
I understand if you guys are busy with other things, but right now would be the best opportunity to release any tooling for this model, as SD3 is such a mess. I strongly believe that the community could…
-
Hi,
Could you please tell me which mouse you extracted the 3360_0x81 SROM from? Is it the latest version of the SROM? AFAIK, I can't find anything newer than the 0x05 version for the 3360. Also, is there a aw…
-
Hey, did you get the results from fine-tuning only the UNet part with a fixed T5 or Llama?
-
Hello, when I use the PixArt Training Tutorial to train the model, the original checkpoint "PixArt-Sigma-XL-2-256x256.pth" is only 2.9 GB, but my training result is 4.19 GB. How do I use the trained result? Tha…
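In case it helps with the size gap: a common cause is the training checkpoint bundling optimizer (and sometimes EMA) state alongside the model weights. Below is a minimal sketch under that assumption; the `state_dict` key and the file paths are guesses, not confirmed details of the PixArt save format.

```python
import torch

def strip_training_state(in_path: str, out_path: str) -> None:
    """Keep only the model weights from a training checkpoint.

    Assumes the common layout {"state_dict": ..., "optimizer": ...};
    falls back to treating the file as a bare state dict otherwise.
    """
    ckpt = torch.load(in_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    # Re-save only the weights, which should roughly match the original size.
    torch.save({"state_dict": state_dict}, out_path)

# Hypothetical usage (paths are placeholders):
# strip_training_state("output/checkpoints/latest.pth", "model_only.pth")
```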
-
#81
Hi,
The error still persists, and it only works after commenting out the flagged lines of code.
After commenting those out, multi-GPU and batch size work for all parallelism modes except PipeFusion.
F…
-
Would I be correct in saying that, to fine-tune on a topic the models were not trained on, I could simply fine-tune PixArt on the subject?
I just noticed `train_t2iv_lora.sh`; that should do!
-
FID is computed with the code from https://github.com/mseitzer/pytorch-fid/tree/master.
The 30k prompts are randomly sampled from the COCO 2014 captions val split.
After generation using these randomly s…
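For reference, the prompt-sampling step described above can be sketched as follows. This assumes the standard COCO captions annotation layout (a top-level `"annotations"` list with `"caption"` fields); the file path is a placeholder, and FID itself is computed afterwards with the pytorch-fid CLI as noted in the comment.

```python
import json
import random

def sample_coco_prompts(annotation_path: str, n: int = 30000, seed: int = 0):
    """Randomly sample n captions from a COCO captions annotation file."""
    with open(annotation_path) as f:
        captions = [a["caption"] for a in json.load(f)["annotations"]]
    rng = random.Random(seed)  # fixed seed so the prompt set is reproducible
    return rng.sample(captions, n)

# Hypothetical usage (path is an assumption):
# prompts = sample_coco_prompts("annotations/captions_val2014.json")
# After generating one image per prompt, FID can be computed with, e.g.:
#   python -m pytorch_fid real_images/ generated_images/
```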
-
Is it possible to use the PixArt models with a hires fix as part of the workflow? I'm not sure if special adjustments are necessary, but doing it the usual way of upscaling the image then running it t…