CJHFUTURE opened this issue 7 years ago
Well, the script doesn't have an explicit option for making features bigger, but you can try:
a) training with a bigger --image_size, like 728 or 1024, which will require 6 or 12 GB of GPU memory and take 2x or 4x the time, respectively;
b) processing a smaller version of the input image and upscaling it back to the original resolution afterwards with any super-resolution technique, like waifu2x. This is what I do to process full-HD 1080p frames, which otherwise wouldn't fit in my 4 GB of memory, so my input images are 960x540.
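The downscale → stylize → upscale workflow from (b) can be sketched roughly as below. This is a minimal illustration, not this repo's actual code: `stylize()` is a hypothetical placeholder for the model's feed-forward pass, and the nearest-neighbor upscale stands in for waifu2x or another super-resolution tool, which would give far better results in practice.

```python
import numpy as np

def stylize(img: np.ndarray) -> np.ndarray:
    # Placeholder for the feed-forward style-transfer pass
    # (hypothetical; substitute your model's inference call here).
    return img

def process_large_frame(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Downscale a frame so it fits in GPU memory, stylize it,
    then upscale back to the original resolution."""
    small = frame[::factor, ::factor]  # e.g. 1920x1080 -> 960x540 for factor=2
    styled = stylize(small)
    # Crude nearest-neighbor upscale back to the input resolution;
    # a super-resolution tool such as waifu2x belongs here instead.
    return np.repeat(np.repeat(styled, factor, axis=0), factor, axis=1)

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # a dummy full-HD frame
out = process_large_frame(frame)
assert out.shape == frame.shape
```

The point is only that the network ever sees the half-size frame, so peak GPU memory is roughly a quarter of what the full 1080p frame would need.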
Regarding transferring more abstract and complex features: this is a well-known limitation of this algorithm. Fast implementations like this one, with a single feed-forward pass, sacrifice high-level features for speed, as opposed to optimization-based techniques. I discuss this issue in more detail here. One thing you may try is increasing --lambda_style to something like 20.0 before training, but it might not give the desired result.
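For context, a style weight like --lambda_style typically scales the style term of the training objective, so raising it pushes the network toward stronger stylization at the expense of content fidelity. A minimal sketch of how such a weight usually enters the loss (the names here are illustrative, not this repo's actual code):

```python
def total_loss(content_loss: float, style_loss: float,
               lambda_style: float = 10.0) -> float:
    # The style weight scales only the style term, so doubling it
    # doubles the penalty for deviating from the style statistics.
    return content_loss + lambda_style * style_loss

# With the same raw per-term losses, a higher weight shifts the balance
# of the objective toward matching the style image:
default = total_loss(1.0, 0.5, lambda_style=10.0)   # 1.0 + 5.0 = 6.0
stronger = total_loss(1.0, 0.5, lambda_style=20.0)  # 1.0 + 10.0 = 11.0
```

Whether that actually yields larger brush strokes depends on the style image and training data, which is why the result isn't guaranteed.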
Any advice on training settings to achieve larger or more abstract features? For example, seeing more detail in large brush strokes, etc.?