kirilledelman opened this issue 8 years ago
The style image is run through the net once at the beginning to get a "measure" of the style, against which the intermediate results are then compared. This initial measurement of the style is quite fast, so I don't think storing it for reuse would make any noticeable difference.
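For what it's worth, the "measure" in question is essentially the set of Gram matrices computed from the style image's feature maps, and those could in principle be pickled to disk and keyed on the style image. Here's a minimal sketch of that idea in Python/NumPy; the function names (`gram_matrix`, `style_targets`) and the caching scheme are my own illustration, not part of neural-style itself:

```python
import hashlib
import os
import pickle

import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) activations from one layer.
    # The Gram matrix captures correlations between channels,
    # which is what style losses compare against.
    c, n = features.shape
    return features @ features.T / (c * n)

def style_targets(style_feats, cache_dir):
    # Hypothetical caching wrapper: key the cache on a hash of the
    # style image's feature data, so a repeat run with the same
    # style image loads the stored Gram matrices instead of
    # recomputing them.
    key = hashlib.sha1(b"".join(f.tobytes() for f in style_feats)).hexdigest()
    path = os.path.join(cache_dir, key + ".pkl")
    if os.path.exists(path):
        with open(path, "rb") as fh:
            return pickle.load(fh)
    grams = [gram_matrix(f) for f in style_feats]
    with open(path, "wb") as fh:
        pickle.dump(grams, fh)
    return grams
```

In practice the savings would be tiny, as noted above: extracting features from one style image is a single forward pass, while the iterative optimization over the content image dominates the runtime.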
The situation is completely different in recent developments which, instead of iterating to produce the image, generate it in a single pass through a feed-forward network pretrained on a specific style. The downside of this approach is that training a model takes a long time. See https://arxiv.org/abs/1603.08155 and https://github.com/yusuketomoto/chainer-fast-neuralstyle.
@htoyryla Yeah, I tried chainer-fast-neuralstyle, and it's prohibitively slow: training takes days for a single style, the results are very grainy, and you don't know what you're going to get until it's done. Thanks for your response, though.
Just curious - is there a way this algorithm could cache, or store to a file, the intermediate result for the style image, and reuse that cached data when invoked again with the same style image but different content?