-
Bring all code to a single standard; optimize with improved algorithms and the use of pointers instead of managed arrays.
-
I propose that we drop some serialization errors in favour of producing a fallback representation of the supplied value.
The rationale is that (a) serialization is often used in contexts like xsl:m…
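A minimal sketch of the proposed behaviour (Python; `SerializationError`, `serialize_value`, and the choice of `repr` as the fallback are placeholders of mine, not the library's actual API):

```python
class SerializationError(Exception):
    """Stand-in for whatever error the real serializer raises."""

def serialize_value(value) -> str:
    # Hypothetical strict serializer: only accepts plain strings here,
    # purely so the demo has something that can fail.
    if not isinstance(value, str):
        raise SerializationError(f"cannot serialize {type(value).__name__}")
    return value

def serialize_or_fallback(value) -> str:
    """Proposed behaviour: instead of surfacing the serialization
    error, emit a lossy fallback representation of the value."""
    try:
        return serialize_value(value)
    except SerializationError:
        return repr(value)  # the fallback representation

print(serialize_or_fallback("ok"))      # -> ok
print(serialize_or_fallback(object()))  # -> <object object at 0x...>
```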
-
Hi, thanks for the great work!
I just noticed that your paper is actually concurrent work with LDM (exactly the same conference publication!). Just wondering, what's the main difference between th…
-
Using Dropout in child_model works great for preventing overfitting; however, it also causes the final model performance to change significantly between training runs with the same hyper-params. It is too …
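For what it's worth, some run-to-run variance is inherent to dropout, since each run samples different random masks. A minimal PyTorch sketch (the tiny net below is illustrative, not the actual child_model): fixing the seed makes the masks reproducible across runs, and `model.eval()` disables dropout at evaluation time.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # fix the RNG so dropout masks repeat across runs

model = nn.Sequential(   # illustrative stand-in for child_model
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # each forward pass in train mode samples a new mask
    nn.Linear(32, 1),
)

x = torch.randn(4, 16)

model.train()
print(model(x))  # stochastic: depends on the sampled dropout mask

model.eval()     # dropout disabled; output is deterministic given the weights
print(model(x))
```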
-
When calculating the affine parameters, Z_y is fed into three fully-connected layers, which then output the mean and standard deviation. Why do we do this? And how are the fully-connected layers trained?
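As a rough illustration of the pattern being asked about (an AdaIN-style setup; every dimension and layer size below is invented, not the paper's configuration): the affine transform is differentiable, so the task loss backpropagates through it and the fully-connected layers train by ordinary gradient descent, like any other layer.

```python
import torch
import torch.nn as nn

class AffineParamNet(nn.Module):
    """Maps a condition code z_y to per-channel mean and std
    (illustrative sizes only)."""
    def __init__(self, z_dim=64, n_channels=128):
        super().__init__()
        self.mlp = nn.Sequential(            # the "three fully-connected layers"
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 2 * n_channels),  # -> [mean | std] per channel
        )

    def forward(self, z_y):
        mean, std = self.mlp(z_y).chunk(2, dim=-1)
        return mean, std

net = AffineParamNet()
z_y = torch.randn(8, 64)
mean, std = net(z_y)

x = torch.randn(8, 128)
out = x * std + mean   # AdaIN-style modulation of the features
out.sum().backward()   # gradients flow back into net.mlp's parameters
```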
-
I initially felt the topic below was better suited to a discussion, but I have the impression that discussions get less attention than issues, and the topic is quite crucial IMHO.
### Discussed in htt…
-
# Few-Shot Unsupervised Image-to-Image Translation #
- Authors: Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, Jan Kautz
- Origin: https://arxiv.org/abs/1905.01723
-…
-
In #14, adaptive thresholds were introduced, which modify the actual warning and critical thresholds according to a "magic factor". This is very useful, but I think the calculation in the code is wron…
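For comparison, the classic "magic factor" level adjustment (as popularized by Checkmk-style filesystem checks) is usually written as below. This is a sketch of that reference formula, not necessarily this project's code; `adjusted_level` and `norm_gb` are my names.

```python
def adjusted_level(level_pct, size_gb, magic, norm_gb=20.0):
    """Classic 'magic factor' scaling: at the reference size (norm_gb)
    the level is unchanged; the closer magic is to 1.0, the less the
    level moves; larger volumes get a more relaxed level."""
    relative = size_gb / norm_gb          # size relative to the reference volume
    scale = relative ** magic / relative  # == relative ** (magic - 1)
    return 100.0 - (100.0 - level_pct) * scale

print(adjusted_level(80.0, 20.0, 0.8))   # 80.0: unchanged at the reference size
print(adjusted_level(80.0, 500.0, 0.8))  # ~89.5: relaxed for a large volume
```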
-
In train.py, I found that the structure loss is not built with the adaptive coloring algorithm; it is the same as the content loss. Why wasn't the adaptive coloring algorithm used here?
-
I'm using the Leptonica API [1.81.1] below for orientation detection, to rotate the file according to the text.
// The two out parameters receive the text-up and text-left orientation confidences;
// the last two arguments are mincount (0 = use the default) and debug (0 = off).
pixOrientDetect(new HandleRef(pix, pixconv), out pupconf, out pleftconf, 0, 0);
For a par…