-
I am implementing Stable Diffusion on my dataset (weather images of size (200, 320)). I found that the images reconstructed by the vanilla VAE - used by Stable Diffusion by default - are very blurry.…
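One way to make "blurry" concrete is to measure the PSNR of the VAE round-trip (encode then decode) against the input image. A minimal sketch — the `psnr` helper is my own, and the arrays stand in for an input image and its VAE reconstruction scaled to [0, 1]:

```python
import numpy as np

def psnr(original, reconstruction, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between two images in [0, data_range]."""
    mse = np.mean((original.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

# Example: a reconstruction that is off by 0.1 everywhere scores 20 dB.
original = np.zeros((200, 320, 3))
reconstruction = np.full((200, 320, 3), 0.1)
print(round(psnr(original, reconstruction), 2))  # → 20.0
```

Comparing this number before and after fine-tuning the VAE on the weather images would show whether the reconstructions actually improve.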
-
### Checklist
- [X] I have searched for [similar issues](https://github.com/isl-org/Open3D/issues).
- [X] For Python issues, I have tested with the [latest development wheel](http://www.open3d.org/do…
-
I came across this paper in a Web of Science alert that may be helpful in writing our grant:
Shojaie, A., & Sedaghat, N. (2017). How Different Are Estimated Genetic Networks of Cancer Subtypes?. In…
-
## Abstract
- propose Vector Quantised Variational AutoEncoder (VQ-VAE)
- generative model that learns discrete representations
- prior is learnt rather than static
- solves the issue of "po…
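The discrete-representation step can be sketched as a nearest-neighbour lookup into a learned codebook (the straight-through gradient estimator and commitment loss from the paper are omitted here; shapes and values are illustrative):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each latent vector to its nearest codebook entry by L2 distance.

    z:        (N, D) encoder outputs
    codebook: (K, D) embedding vectors
    returns:  (indices of shape (N,), quantized vectors of shape (N, D))
    """
    # Squared distance between every latent and every codebook entry.
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2]])
idx, z_q = vector_quantize(z, codebook)
print(idx)  # → [0 1]
```

The prior is then learned over these discrete indices (e.g. with an autoregressive model) rather than fixed to a Gaussian.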
-
Thank you for sharing your code! I noticed that the loss-function hyperparameters (lamda_reconstruction and lamda_low_frequency) in your code differ from the ones in the paper. Which ones should I use?
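For context, the two hyperparameters presumably weight the corresponding terms of the total loss; a minimal sketch with placeholder values (neither the paper's nor the repo's actual settings):

```python
def total_loss(l_reconstruction, l_low_frequency,
               lamda_reconstruction=1.0, lamda_low_frequency=1.0):
    """Weighted sum of the two loss terms; the weights are the
    hyperparameters asked about above (defaults here are placeholders)."""
    return (lamda_reconstruction * l_reconstruction
            + lamda_low_frequency * l_low_frequency)

print(total_loss(0.5, 0.2, lamda_reconstruction=2.0, lamda_low_frequency=0.5))  # → 1.1
```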
-
I enjoyed reading your preprint ["DynaMight: estimating molecular motions with improved reconstruction from cryo-EM images"](https://www.biorxiv.org/content/10.1101/2023.10.18.562877v1). Although I co…
-
Hello,
While trying to run the composer with the same code, I get a good reconstruction, but the albedo image has shading artifacts and the shading image is mostly white a…
-
Hello,
How is the train-test split done in Instant NGP? And where is it in the code?
Thank you.
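(Not an authoritative answer, but for comparison: a common convention in NeRF-style codebases is the LLFF hold-out, where every 8th image goes to the test set. Whether Instant NGP uses this exact scheme is an assumption here; the sketch below just shows the convention.)

```python
def llff_style_split(n_images, holdout=8):
    """Hold out every `holdout`-th image as the test set, a common
    NeRF convention (hypothetical for Instant NGP specifically)."""
    test = list(range(0, n_images, holdout))
    train = [i for i in range(n_images) if i % holdout != 0]
    return train, test

train, test = llff_style_split(20)
print(test)  # → [0, 8, 16]
```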
-
Hello!
First, I'd like to thank all contributors for this great resource. Now onto my issue:
I'm using parametric umap with the following parameters
```reducer = umap.parametric_umap.Parametr…
erl-j updated 3 years ago
-
The fully connected layers added on top of the capsule network consist of ~1.6m parameters, whereas the capsules only have roughly 60k trainable parameters in the _small_ configuration. As the matrix …
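The imbalance is easy to verify by counting dense-layer parameters directly. A sketch with placeholder shapes (the repo's actual layer sizes for the _small_ configuration are not given here):

```python
def fc_params(n_in, n_out, bias=True):
    """Parameter count of a fully connected layer: weights + optional bias."""
    return n_in * n_out + (n_out if bias else 0)

# Placeholder shapes chosen only to show the ~1.6M scale of a single dense layer.
print(fc_params(1024, 1600))  # → 1640000
```

A single dense layer at that scale dwarfs the ~60k capsule parameters, which is why the FC head dominates the model's capacity.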