MokkeMeguru / TFGENZOO

A construction-helper library for generative models, e.g. flow-based models, with TensorFlow 2.x.
https://mokkemeguru.github.io/TFGENZOO/

Safer Inv1x1Conv #88

Open MokkeMeguru opened 4 years ago

MokkeMeguru commented 4 years ago

TensorFlow Probability's inv1x1conv implementation is more stable than the original one,

but I don't know whether its formula is correct or not,

and the generated images are not good...

TODO: check the formula more carefully.
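For reference, here is a minimal NumPy sketch of the invertible 1x1 convolution math from the Glow paper (an illustration of the formula under discussion, not this repo's or TFP's actual code): the forward pass multiplies each pixel's channel vector by a weight matrix W, and the log-det-Jacobian contribution is H * W * log|det W| per sample.

```python
import numpy as np

def inv1x1_forward(x, W):
    """Invertible 1x1 conv on an NHWC tensor: z = x @ W^T per pixel.

    Returns the transformed tensor and the log-det-Jacobian, which for
    Glow's 1x1 conv is height * width * log|det W| per sample.
    """
    n, h, w, c = x.shape
    z = x @ W.T                               # mixes the channel axis only
    _, logabsdet = np.linalg.slogdet(W)       # stable log|det W|
    log_det = h * w * logabsdet * np.ones(n)
    return z, log_det

def inv1x1_inverse(z, W):
    """Exact inverse: x = z @ (W^{-1})^T per pixel."""
    return z @ np.linalg.inv(W).T

# round-trip check with a random rotation (orthogonal init, as in Glow)
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.normal(size=(4, 4)))
x = rng.normal(size=(2, 8, 8, 4))
z, log_det = inv1x1_forward(x, W)
assert np.allclose(inv1x1_inverse(z, W), x)
# an orthogonal W has |det W| = 1, so the log-det contribution is ~0
assert np.allclose(log_det, 0.0, atol=1e-8)
```

A quick numerical check like this (round-trip invertibility plus a known log-det for an orthogonal W) is one way to compare the TFP and original-Glow formulas experimentally.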

MokkeMeguru commented 4 years ago

I have a feeling the TensorFlow Probability implementation differs from the original Glow one...

It looks like I need to verify this with proper numerical experiments.

gitlabspy commented 4 years ago

The sampled images are a bit dark as iterations increase... Do you think this is a problem caused by conv1x1?

MokkeMeguru commented 4 years ago

It's a preprocessing problem... I'm working on a better repo now.

MokkeMeguru commented 4 years ago

@gitlabspy sorry, it took me two weeks to generate the best example...

Can you check this simple packaged example?

This is the result of training for 2048 epochs, but you can get the same result with 1024 epochs. You can try:

python task.py --epochs=1024

https://drive.google.com/file/d/1Zyo7DEA7fX5VNf9MhRkDZ8q9n9AxRFjR/view?usp=sharing

MokkeMeguru commented 4 years ago

And then you can check the result using TensorBoard. (If you have checked it, please leave me a comment.)

tensorboard --logdir outputs/

gitlabspy commented 3 years ago

@MokkeMeguru Sorry my friend, I didn't notice that you replied. The thing is, I am having trouble downloading things from Google Drive, so I cannot test it for you... I am so sorry...

I found a line of code that looks like a little bug to me: https://github.com/MokkeMeguru/TFGENZOO/blob/3dc42af0fea028e74b78dda989d6da77c8854379/TFGENZOO/flows/factor_out.py#L77 I think it should be mean 0 and std 1, right? std 0 means all data points are identical. So I think it might have been caused by copy-paste, right? 😄

MokkeMeguru commented 3 years ago

It's not the std, it's the "log std": https://github.com/MokkeMeguru/TFGENZOO/blob/3dc42af0fea028e74b78dda989d6da77c8854379/TFGENZOO/flows/utils/gaussianize.py#L6

so... if logstd = 0, then std = exp(logstd) = exp(0) = 1
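To illustrate the point, here is a tiny sketch of the log-std parameterization (a generic Gaussian log-density, not the repo's exact `gaussianize.py` code): the network outputs logstd, the density uses std = exp(logstd), so logstd = 0 corresponds to a standard normal rather than a degenerate one.

```python
import math

def gaussian_log_prob(x, mean=0.0, logstd=0.0):
    """Log-density of N(mean, exp(logstd)^2) at x.

    Parameterizing by log-std keeps the std strictly positive with no
    clamping; logstd = 0 gives std = exp(0) = 1, i.e. a standard normal.
    """
    std = math.exp(logstd)
    return -0.5 * math.log(2 * math.pi) - logstd - 0.5 * ((x - mean) / std) ** 2

# logstd = 0 reproduces the standard normal density at 0: -0.5 * log(2*pi)
assert abs(gaussian_log_prob(0.0) - (-0.5 * math.log(2 * math.pi))) < 1e-12
```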

And sorry, I don't know how to share my results... can you give me any advice? I need a way to share the experimental results for this problem, and for other problems as well.

gitlabspy commented 3 years ago

You mean results from your trained model? Maybe randomly sample some images from the model and display them in a single figure? Or calculate the bits/dim?
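For reference, bits/dim is the standard quantitative metric here: the per-sample negative log-likelihood in nats, divided by log(2) times the data dimensionality. A minimal sketch (the function name is illustrative, not from this repo):

```python
import math

def bits_per_dim(nll_nats, num_dims):
    """Convert a per-sample negative log-likelihood (in nats) to bits/dim.

    num_dims is the total data dimensionality, e.g. H * W * C for images
    (32 * 32 * 3 = 3072 for CIFAR-10-sized inputs).
    """
    return nll_nats / (math.log(2.0) * num_dims)

# an NLL of 3072 * log(2) nats on a 32x32x3 image is exactly 1 bit/dim
assert abs(bits_per_dim(3072 * math.log(2.0), 3072) - 1.0) < 1e-12
```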

MokkeMeguru commented 3 years ago

So I should write better docs in this repo. Please wait a moment... I'm working hard at my new company right now...

gitlabspy commented 3 years ago

😂 I heard of some stories about how serious overtime is in Japan... Take care my friend!

gitlabspy commented 3 years ago

@MokkeMeguru Hi, recently I found that using a bigger Glow (level=4 or more, K=18 or so) causes large allocations in CPU physical memory, like 20 GB and more, and the memory leaks slowly. With a bigger dataset, like 600K+ images, it allocates 180 GB of RAM... GPU memory usage is fine though. Do you know what causes this? Thanks!

MokkeMeguru commented 3 years ago

I think:

  1. the dataset's preloading causes the problem: try removing tf.data.Dataset's prefetch()
  2. tensors saved by the metrics (tf.keras.metrics) cause the problem: reduce the loss to a scalar first, e.g. loss = tf.reduce_mean(...)

or... I don't know what happened...
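One way the second point can bite, sketched in plain Python rather than TensorFlow (an illustration of the failure mode, not TFGENZOO's actual training loop): appending full per-batch loss tensors to a history keeps every tensor alive for the whole run, while a running scalar mean stays O(1) in memory.

```python
import numpy as np

class RunningMean:
    """O(1)-memory running mean, in the spirit of tf.keras.metrics.Mean."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, batch_loss):
        # reduce to a Python float immediately, so the batch tensor
        # becomes unreferenced and can be freed
        self.total += float(np.mean(batch_loss))
        self.count += 1

    def result(self):
        return self.total / self.count

history = []            # leaky pattern: every loss tensor stays referenced
metric = RunningMean()  # safe pattern: only two scalars survive each step
for step in range(100):
    batch_loss = np.full(1024, float(step))  # stand-in for a per-sample loss
    history.append(batch_loss)               # grows without bound
    metric.update(batch_loss)                # stays constant-size

assert metric.result() == 49.5
# 100 steps x 1024 float64 losses already pinned in memory, and climbing
assert sum(b.nbytes for b in history) == 100 * 1024 * 8
```

The same shape of leak appears in TF if per-batch loss tensors are accumulated in a Python list or logged without reduction, which is why reducing with tf.reduce_mean before storing anything is worth trying.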

gitlabspy commented 3 years ago

Hmm... I think it might be something related to the AutoGraph compilation. What do you think about this?