swagzhang opened this issue 5 years ago
The examples shown in the paper, as in most papers, are the best generations from the model, obtained after several rounds of fine-tuning, so you can't expect the model to generalize in all cases. You can train the model further to get better results, but since the training code isn't provided, you're out of luck there. I've been working on recreating the code in Keras, but due to lack of time I've been stuck, so fingers crossed that I can get it finished.
@lz2470 Hi, have you run and tested this code?
Can you help me? I get this error when trying to run !python demo.py. I am using Google Colab.
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.
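A likely cause: Google Colab runs headless, with no X server, so Qt's default xcb plugin cannot connect to a display. Since offscreen appears in the list of available plugins in the error output above, one workaround (a minimal sketch, assuming demo.py can run without an interactive window, e.g. it only writes results to disk) is to force Qt onto the offscreen platform before it initializes:

```python
# Sketch of a headless-Colab workaround (assumption: demo.py does not
# strictly need an on-screen window). Setting QT_QPA_PLATFORM before any
# Qt code runs tells Qt to use the "offscreen" plugin, which the error
# message above lists as available.
import os
os.environ["QT_QPA_PLATFORM"] = "offscreen"
```

Equivalently, in a Colab cell the variable can be set inline: !QT_QPA_PLATFORM=offscreen python demo.py. If the demo genuinely needs to display windows, a virtual display such as Xvfb would be needed instead.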
Hi, I tried this demo with the model you provided, but the results are not good. However, the results you showed below are quite good. I wonder whether this is a problem on my end, or whether the released model is not the same as the one you used to generate these samples?