Closed samedii closed 2 years ago
I looked at it and this is just caused by `order=3`, `clip=False`, and then me converting to uint8, which makes it look white but it's actually overflowed (negative). I don't think this is an issue for you.
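The exact resize call isn't shown in the thread (`order=3, clip=False` suggests something like `skimage.transform.resize`); as a rough sketch, `scipy.ndimage.zoom`, which never clips its output, reproduces the same effect: cubic-spline resampling rings around a hard edge, and casting the unclipped result to uint8 wraps the negative undershoot to near-white values.

```python
import numpy as np
from scipy.ndimage import zoom

# A hard black-to-white edge, as floats in [0, 1].
edge = np.zeros(16, dtype=np.float64)
edge[8:] = 1.0

# Cubic spline resampling (order=3) rings near the edge, so the
# output leaves [0, 1]: it undershoots below 0 and overshoots
# above 1 (scipy.ndimage.zoom does not clip).
up = zoom(edge, 4, order=3)
print(up.min() < 0, up.max() > 1)  # True True

# Scaling to [0, 255] and casting the unclipped values to uint8
# wraps negative undershoot around to large values, which render
# as near-white pixels (e.g. -12 -> 244).
wrapped = np.array([-12], dtype=np.int64).astype(np.uint8)
print(wrapped)  # [244]

# The fix for visualization: clamp before quantizing.
u8 = (np.clip(up, 0.0, 1.0) * 255).round().astype(np.uint8)
```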
Ah yes, it's because I don't clamp: I feed it into the training pipeline as floating point. :)
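A minimal sketch of that setup (the helper name is hypothetical, not from the repo): the unclamped floats go to training as-is, and clamping happens only when converting to uint8 for viewing.

```python
import numpy as np

def to_uint8_for_viewing(img: np.ndarray) -> np.ndarray:
    """Clamp a float image to [0, 1] only at visualization time.

    The training pipeline keeps the raw, unclamped floats; the
    interpolation overshoot is harmless there, it only looks
    wrong once cast to uint8 without clamping.
    """
    return (np.clip(img, 0.0, 1.0) * 255).round().astype(np.uint8)

print(to_uint8_for_viewing(np.array([-0.05, 0.5, 1.2])))  # [  0 128 255]
```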
I've only seen degradations with the non-leaky augmentations so far, but I saw that you've made some updates to them, so maybe I'll try again.
Thank you for open-sourcing this! I tried out your implementation of the non-leaky augmentations. In case it's helpful to you: I noticed that the augmentation pipeline seems to create some artifacts that probably won't help training.
Left is normal; right has an added white line.
(I have a different implementation that I built a couple of weeks ago, but I hadn't done the non-leaky augmentations then. For what it's worth, I've also gotten better results with these techniques than with, for example, v-diff on small real-world datasets.)