-
In the paper, the VAE latent dimension is 8 or 16 and the experiments cover MLPs of 6-12 blocks. I experimented with an 8-block MLP learning 64- and 1024-dim audio data, but the model struggled to lear…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do?
https://github.com/YangLing0818/RPG-Diffusio…
-
We should add a feature for diffusion dynamics, which would be useful to allow, e.g., calcium diffusion throughout a cell. I am thinking of a syntax like:
```python
cell = jx.read_swc(fname)
cell.in…
```
-
How is the pre-training process implemented? The loss in the code seems to be only the diffusion loss, but as described in the article, there should also be a feature loss from the feature encoder.
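For reference, a combined objective of that shape could look like the following NumPy sketch. The MSE form of both terms, the `lam` weight, and all function names here are assumptions for illustration, not the repository's actual code:

```python
import numpy as np

def diffusion_loss(pred_noise, true_noise):
    # standard denoising objective: MSE between predicted and true noise
    return np.mean((pred_noise - true_noise) ** 2)

def feature_loss(student_feats, teacher_feats):
    # feature-matching term against the (frozen) feature encoder
    return np.mean((student_feats - teacher_feats) ** 2)

def total_loss(pred_noise, true_noise, student_feats, teacher_feats, lam=1.0):
    # hypothetical combination; the weighting `lam` is an assumption
    return diffusion_loss(pred_noise, true_noise) + lam * feature_loss(
        student_feats, teacher_feats
    )

rng = np.random.default_rng(0)
noise = rng.normal(size=(4, 64))
feats = rng.normal(size=(4, 128))
print(total_loss(noise, noise, feats, feats))  # identical inputs -> 0.0
```

If only the diffusion term is being optimized in the released code, the question is whether the feature term was dropped intentionally or applied in a separate pre-training stage.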
-
Hello, I would like to ask what features are extracted from the trained UNet (shape (1, 256, 26, 64, 64)). How are the 256 and 26 dimensions converted to 6656? Is it through a reshape?
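Since 256 × 26 = 6656, collapsing those two axes into one is exactly what a reshape would produce. A minimal NumPy sketch of that assumption (the axis interpretation is a guess):

```python
import numpy as np

# hypothetical UNet feature tensor: (batch, channels, frames, height, width)
feats = np.zeros((1, 256, 26, 64, 64))

# merge channels and frames: 256 * 26 = 6656
merged = feats.reshape(1, 256 * 26, 64, 64)
print(merged.shape)  # (1, 6656, 64, 64)
```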
-
Hi, I get this error/warning:
```
stable-diffusion-webui-1 | /sd-webui/modules_forge/patch_basic.py:38: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default va…
```
-
https://platform.stability.ai/docs/getting-started
view-source:https://baojingyu.github.io/stable-diffusion-3/
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do?
q-diffusion implementation would give a speed…
-
**Is your feature request related to a problem? Please describe.**
in https://github.com/huggingface/dif…
-
We currently have things like Ollama and InvokeAI available in ujust. It would be really awesome to include a ujust command for creating a quadlet to run Stable Diffusion and ComfyUI.
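As a sketch, such a quadlet could be a `.container` unit under `~/.config/containers/systemd/` that Podman generates a systemd service from. The image reference, port, and volume paths below are placeholders, not a tested configuration:

```ini
# ~/.config/containers/systemd/comfyui.container
[Unit]
Description=ComfyUI (rootless Podman quadlet)

[Container]
# placeholder image; substitute a real ComfyUI image
Image=ghcr.io/example/comfyui:latest
PublishPort=8188:8188
# persist models on the host; path is a placeholder
Volume=%h/comfyui/models:/app/models:Z

[Service]
Restart=on-failure

[Install]
WantedBy=default.target
```

A ujust recipe would mainly need to write this file and run `systemctl --user daemon-reload` followed by `systemctl --user start comfyui`.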