Open JunZhan2000 opened 1 year ago
@junzhan18 I have the same question. Have you solved it yet?
Replacing the model code with the official repo solved the problem.
Can you share the official repo? @aa1234241
githuboflk
@aa1234241 Thanks for your reply. Did you just replace the model code and leave the rest unchanged?
I replaced all these components with the official code and it works.
Thanks for the pointer. I ran into some problems while making the changes; could you share your code?
@aa1234241 Thanks for the tip. How many epochs did you train for? I replaced the model and trained for 200 epochs on 800 images, but I still only get contours.
Here is my result after training for 145 epochs on the flowers dataset. It's not perfect and I'm still training. You can double-check the code; I've replaced the VQGAN model, the discriminator model, and the perceptual loss.
@aa1234241 The results look very good. I changed the code as you described, but my results are still poor, so I think I made a mistake somewhere. Could you share your code? Thank you very much.
Sorry, I can't upload the code since it violates company policy. I recommend that you debug both the official VQGAN code and your own, and identify where the outputs diverge. In my case, I directly replaced the VQGAN model, the discriminator model, and the perceptual loss. I also addressed the visualization issue while leaving everything else unchanged. You could also first disable the GAN loss, treat the model as a VQ-VAE, and check the results.
These are the VQ-VAE results after 50 epochs; make sure this step is visually coherent. After that you can add the GAN loss, and the results will improve.
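The two-phase recipe above can be sketched as follows. This is only an illustration of the idea, assuming the usual VQGAN objective; `generator_loss`, `disc_factor`, and the other names are placeholders, not identifiers from any particular repo:

```python
# Minimal sketch of the two-phase training advice above. All names
# (generator_loss, disc_factor, ...) are illustrative placeholders.
def generator_loss(recon_loss, perceptual_loss, codebook_loss,
                   g_loss, disc_factor=1.0):
    """Total generator loss; disc_factor=0.0 disables the adversarial
    term, which reduces the model to a plain VQ-VAE."""
    return recon_loss + perceptual_loss + codebook_loss + disc_factor * g_loss

# Phase 1: train as a VQ-VAE (no GAN loss) until reconstructions are coherent.
vqvae_loss = generator_loss(0.12, 0.30, 0.05, g_loss=0.80, disc_factor=0.0)

# Phase 2: re-enable the adversarial term to sharpen the results.
full_loss = generator_loss(0.12, 0.30, 0.05, g_loss=0.80, disc_factor=1.0)
```

With `disc_factor=0.0` the GAN term contributes nothing, so the first phase optimizes only reconstruction, perceptual, and codebook losses.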
@aa1234241 Thank you very much, I will try.
Update: here is the result of the VQGAN trained for 300 epochs and the miniGPT trained for 500 epochs, along with the sampling result.
Hi, did you solve your problem? Does it work after the replacement?
Has anyone else here managed to get results as good as @aa1234241's? I am trying the VQ-VAE approach (removing the GAN, replacing the model & LPIPS with code from the original repo). I got these results after 150 epochs:
Hello everyone, I've made my changes publicly available at https://github.com/aa1234241/vqgan.
Seems like a bug in the LPIPS implementation.
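The exact bug isn't stated above, but a common LPIPS pitfall is the input range: the perceptual network expects images scaled to [-1, 1], while dataloaders often yield [0, 1] tensors, which silently degrades the loss. A minimal sketch of the rescaling, with a hypothetical helper name:

```python
# Hedged sketch of one common LPIPS pitfall: the perceptual network
# expects inputs in [-1, 1], but dataloaders often produce [0, 1].
# to_lpips_range is a hypothetical helper name, not from any repo.
def to_lpips_range(x01):
    """Rescale a pixel value (or tensor of them) from [0, 1] to [-1, 1]."""
    return 2.0 * x01 - 1.0
```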
Hello, thank you very much for your code and videos! I'm using this code repository to train on the flowers dataset with a batch size of 32 for 200 epochs, but the reconstructed images still only show rough outlines without any fine details. Did I do something wrong somewhere?