Hi,
I am using your nice repo on a video instead of a webcam, so I don't want to sacrifice quality for speed. Is there a way to keep the generative part of the pix2pix model without reducing it? I see a big quality gap between my validation set and what I generate with the reduced model, and I'd like to avoid that. Is it easy to bypass this step in reduce_model.py while keeping the rest of the pipeline intact?
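To make the question concrete, this is roughly what I have in mind: loading the full frozen generator directly in the inference script instead of the reduced graph. It's only a sketch under my assumptions about the pipeline (a TensorFlow 1.x frozen .pb graph); the file name and tensor names below are placeholders, not necessarily the ones your repo actually uses.

```python
import numpy as np
import tensorflow as tf  # assuming TF 1.x, as in the original pipeline

# Placeholders -- I don't know the exact names used in the repo.
FULL_MODEL_PATH = "frozen_model.pb"           # the full (non-reduced) frozen graph
INPUT_TENSOR = "image_tensor:0"               # assumed input tensor name
OUTPUT_TENSOR = "generate_output/output:0"    # assumed output tensor name


def load_graph(frozen_graph_path):
    """Load a frozen TensorFlow graph from a .pb file."""
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(frozen_graph_path, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")
    return graph


graph = load_graph(FULL_MODEL_PATH)
sess = tf.Session(graph=graph)
image_in = graph.get_tensor_by_name(INPUT_TENSOR)
image_out = graph.get_tensor_by_name(OUTPUT_TENSOR)

# Feed video frames (instead of webcam frames) through the full generator.
frame = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for a real video frame
generated = sess.run(image_out, feed_dict={image_in: frame})
```

Would skipping reduce_model.py and pointing the inference script at the full frozen graph like this break anything else downstream?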
Thanks in advance