xpeng opened 8 years ago
Wow, it's great! What settings did you use to get this result?
training parameters: -model johnson -image_size 512 -style_size 512 -content_weight 1 -style_weight 50 -learning_rate 0.001 -normalize_gradients true -tv_weight 0.000085
The dataset is MS COCO.
Thank you! And is this the style you used?
@SantaFlamel yes, exactly
Great!
original:
result:
parameters:
th train.lua -data dataset -model johnson -image_size 512 -style_size 512 -content_weight 1 -style_weight 50 -learning_rate 0.001 -normalize_gradients true -tv_weight 0.000085 -style_image data/textures/Composition.jpg -num_iterations 20000 -batch_size 1
Is something wrong?
Make style_weight smaller, such as 20 or 10. I also trained for 30000+ iterations.
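For intuition, the objective is just a weighted sum of a content loss and a style loss, so lowering style_weight directly reduces how much the style term dominates. A toy sketch with made-up loss values (texture_nets actually computes these from VGG feature maps):

```python
# Toy illustration of the content_weight / style_weight trade-off.
# The raw loss values are made up for illustration.
content_loss = 4.0
style_loss = 2.5

def total_loss(content_weight, style_weight):
    return content_weight * content_loss + style_weight * style_loss

# With -style_weight 50 the style term dominates the objective:
heavy = total_loss(1, 50)   # 1*4.0 + 50*2.5 = 129.0
# Dropping it to 20 gives the content term relatively more influence:
light = total_loss(1, 20)   # 1*4.0 + 20*2.5 = 54.0

print(heavy, light)
```

The absolute numbers are meaningless; the point is the ratio of style to content contribution, which drops from ~31:1 to ~12:1 here.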
@xpeng How were you able to process a 4272 x 2848 resolution image? It seems I can't process any image bigger than 1024 due to a CUDA memory issue, and I already use a GTX 1080 GPU. BTW, I think the second one looks better than the first one.
@markz-nyc Use CPU mode, with a huge amount of RAM.
@xpeng Thanks for the tip. By huge memory, can you be more specific? Is 32 GB of memory enough to do the trick? Also, does the CPU matter? I am currently on an i5 6500 CPU with 16 GB of memory...
@markz-nyc A 4K-size image takes me more than 70 GB of memory.
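A back-of-the-envelope calculation shows why 4K blows up memory: the feature maps kept at or near the input resolution dominate. This sketch assumes illustrative channel counts for a Johnson-style generator (not the exact texture_nets architecture), float32 activations, and a rough 2x factor for the extra buffers the framework keeps around:

```python
# Rough activation-memory estimate for a 4272x2848 input.
# Channel counts below are illustrative, not the exact architecture.
H, W = 2848, 4272
bytes_per_float = 4

# (channels, downsampling factor) for a handful of stored activations
layers = ([(3, 1), (32, 1), (64, 2), (128, 4)]
          + [(128, 4)] * 10              # residual blocks at 1/4 resolution
          + [(64, 2), (32, 1), (3, 1)])  # upsampling path

activations = sum(c * (H // s) * (W // s) * bytes_per_float
                  for c, s in layers)
# Framework temporaries / gradient buffers roughly double it:
total_gb = 2 * activations / 1024**3
print(f"~{total_gb:.0f} GB")
```

Even this conservative sketch lands in the tens of GB; with more stored buffers it is easy to see how a real run reaches 70+ GB, far beyond any consumer GPU.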
@xpeng Thanks, that's a lot of memory, but a nice result though.
Hi @xpeng, have you solved this? I'm facing the same situation.
@0000sir Not yet. I tried training with a larger style size (such as 1080) and got a somewhat larger stroke scale, but the results are not really satisfactory. Also, you can't get enough video memory to train a large style on content of the same size.
I'm getting wrong results because of a mistake in the code. [https://github.com/DmitryUlyanov/texture_nets/issues/56]
Can you send me the code you used to generate these images? Thank you @xpeng
0000sir#gmail.com
@0000sir I did not change any code in the repo; I used the following parameters to train:
th train.lua -data ./datasets -style_image input.jpg -model johnson -image_size 512 -style_size 512 -content_weight 1 -style_weight 20 -learning_rate 0.001
I was so disappointed after trying so many times. I wanted to make a large image with this, but there are too many fine details in it, not like something painted with a big brush. Maybe it's time to give up. Thank you @xpeng, and thank you @DmitryUlyanov. It's a perfect project for small-size images.
@xpeng Here is a recent work which discusses the stroke size problem and also tries to exploit one single model to achieve continuous stroke size control after training: http://yongchengjing.com/StrokeControllable Demo video: https://youtu.be/UNG38tdMSMg
Wow, @xpeng how do you tune the brush size? Is it possible to control the brush size in any image?
Hi, @xpeng
Our code, as well as pre-trained models that can scale the style at the testing stage, is finally ready:
https://github.com/LouieYang/stroke-controllable-fast-style-transfer
We have also updated our paper correspondingly at: https://arxiv.org/abs/1802.07101
Pull requests are welcome! Thanks. ^_^
@ycjing Great work! And what is the performance of your implementation? I noticed it is very fast in your demo video.
@xpeng Thanks! On a single NVIDIA Quadro M6000, it takes 0.09s on average to stylize an image of size 1024*1024. Our code supports flexible control and real-time stylization, and it has already been adopted by the AI team of Alibaba Group.
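For reference, 0.09s per image works out to roughly 11 images per second at 1024*1024, which is why the demo video looks real-time (a quick arithmetic check):

```python
# Throughput implied by 0.09 s per 1024x1024 image.
seconds_per_image = 0.09
fps = 1 / seconds_per_image
print(f"{fps:.1f} images/sec")
```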
@ycjing Cool, I'm sure this will be a very promising project. Where will Alibaba use this tech? In some application like Prisma?
Hi, I find I can't tune the style scale in the result using test.lua, and I can't find any option for it. Is it possible, in theory, to scale the style after the training procedure?
Recently I trained a style and created the following results on a photo at size 668 × 448 and at its original size 4272 × 2848:
668 x 448
4272 x 2848
I understand that both results are correct and that the style is applied properly. But if I could tune the style scale on the large image, the way an artist uses a larger brush, it would be better for users to see consistency between the thumbnail (which they get quickly) and the full-size result (which they get after a long time).
Sorry, I don't know if this is mentioned in the paper, or whether there has been any progress in recent research?
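One common workaround for this brush-scale mismatch (not something test.lua supports directly, as far as I know) is to stylize a downscaled copy of the photo and then upsample the result, so the strokes appear proportionally as large at full resolution as they do in the thumbnail. A minimal numpy sketch, where `stylize` is a placeholder for whatever trained model you actually run:

```python
import numpy as np

def stylize(img):
    # Placeholder for the real network; here it just returns the input.
    return img

def stylize_with_big_strokes(photo, work_width=668):
    """Stylize at a small working width, then upsample to roughly the
    original size, so stroke scale matches the thumbnail result."""
    h, w = photo.shape[:2]
    scale = max(1, w // work_width)
    small = photo[::scale, ::scale]          # naive strided downsample
    styled = stylize(small)
    # naive nearest-neighbour upsample back to ~original size
    big = np.repeat(np.repeat(styled, scale, axis=0), scale, axis=1)
    return big

photo = np.zeros((2848, 4272, 3), dtype=np.uint8)
out = stylize_with_big_strokes(photo)
print(out.shape)
```

The output size is only approximately the original (off by a few pixels when the dimensions don't divide evenly), and nearest-neighbour resampling loses fine detail; in practice you would use proper interpolation, but the idea is the same.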