-
> Copy/paste `autoencoder_kl.py` and `vae.py` into your environment (e.g. /home/user/miniconda3/envs/pixelsmith/lib/python3.11/site-packages/diffusers/models/autoencoders/)
concepts such as this should b…
-
Hi! I noticed that the VQ-VAE used in LAPO is not quite standard, at least compared to the popular implementations from https://github.com/lucidrains/vector-quantize-pytorch and https://github.com/Mish…
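For reference, the "standard" quantization step being compared against is a nearest-neighbour codebook lookup. Below is a minimal NumPy sketch of just that lookup (the lucidrains implementation additionally applies a straight-through gradient estimator and EMA codebook updates, which are omitted here); all shapes and values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of standard VQ-VAE quantization: map each encoder
# output vector to its nearest codebook entry under squared L2 distance.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # (codebook_size, dim), assumed values
z = rng.normal(size=(5, 4))          # (num_vectors, dim) encoder outputs

# Squared L2 distance from every vector to every code
d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
indices = d2.argmin(axis=1)          # nearest code per vector
quantized = codebook[indices]        # (num_vectors, dim)
```

In a real VQ-VAE the gradient is passed straight through `quantized` back to `z`, which this NumPy sketch does not model.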
-
Hi, I'm interested in your work and am trying to reproduce it, but there are some details that need to be confirmed.
The first one is the implementation of MVAE. The paper says:
> We copy the network archi…
-
OmniGen is a new image generation model built by tuning an existing Phi-3 model into a transformer for diffusion tasks. It appears to have next-level multi-modal capability, like incorporating …
-
Hi,
Looking at running various models with various inputs, it seems that a lot of time during the initial runs is spent benchmarking potential kernels, including the naive ones (e.g. `naive_conv_n…
-
Add an option to force `--ntilts` for `cryodrgn backproject_voxel` and/or `cryodrgn train_vae`.
It would be useful to have an option to exclude particles with fewer than `--ntilts` tilts. For example, if a si…
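The requested filtering logic can be sketched in NumPy. Everything here is hypothetical (the array names and the way tilt counts are derived are assumptions, not cryodrgn's actual internals): each particle appears once per tilt image, and particles with fewer than `ntilts` images are dropped.

```python
import numpy as np

# Hypothetical per-image particle IDs from a tilt-series dataset:
# particle 0 has 3 tilts, particle 1 has 2, particle 2 has 4.
particle_ids = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])
ntilts = 3

# Count tilts per particle and keep only images belonging to
# particles that reach the --ntilts threshold.
ids, counts = np.unique(particle_ids, return_counts=True)
keep = np.isin(particle_ids, ids[counts >= ntilts])

filtered = particle_ids[keep]  # particles 0 and 2 survive
```

This is only a sketch of the selection mask; wiring it into cryodrgn's dataset loading would be a separate change.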
-
Since we are not training the VAE or any of the text encoders anyway, we can cache the VAE and text-embedding latents; this leads to big speed-ups and reduced memory usage. I have made a crude i…
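The idea can be sketched framework-agnostically: since the encoders are frozen, each input's latent is deterministic (up to sampling) and can be computed once and reused. The class and `fake_encode` stand-in below are hypothetical illustrations, not the crude implementation mentioned above; a real version would store VAE/text-encoder tensors on disk or in RAM.

```python
# Minimal sketch of latent caching for frozen encoders. In a real
# pipeline, encode_fn would be e.g. vae.encode(x).latent_dist.sample()
# or the text encoder, and the cache would hold tensors keyed by sample id.
class LatentCache:
    def __init__(self, encode_fn):
        self.encode_fn = encode_fn
        self._cache = {}
        self.misses = 0

    def __call__(self, key, x):
        # Encode only on the first request for this key; reuse afterwards.
        if key not in self._cache:
            self._cache[key] = self.encode_fn(x)
            self.misses += 1
        return self._cache[key]


encode_calls = []

def fake_encode(x):
    # Stand-in for the expensive frozen encoder forward pass.
    encode_calls.append(x)
    return x * 2

cache = LatentCache(fake_encode)
z1 = cache("img_0", 21)
z2 = cache("img_0", 21)  # served from cache; encoder is not called again
```

During training, the dataloader would then yield cached latents directly, so the VAE and text encoders never need to stay resident on the GPU.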
-
Hello,
I’ve been learning various AI/ML-related algorithms recently, and my notes are quite similar to the content of your repository. Also, this excellent work has helped me understand some of the …
-
Trying to run Nvidia v4.1 implementation for stable diffusion on RTX 4090.
```
(mlperf) arjun@mlperf-inference-arjun-x86-64-24944:/work$ make generate_engines RUN_ARGS="--benchmarks=stable-diffus…
```
-
```
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler

# Checkpoint name is an example; substitute the model actually in use.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "a beautiful landscape photograph"
pipe.enable_vae_tiling()
```