DmitryUlyanov / texture_nets

Code for "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images" paper.
Apache License 2.0

Is it possible to scale the style when generating an image after training (using test.lua)? #41

Open xpeng opened 8 years ago

xpeng commented 8 years ago

Hi, I find that I can't tune the style scale in the result using test.lua, and I can't find any option for it. Is it possible, in theory, to scale the style after the training procedure?

Recently I trained a style and created the following results on a photo, once at 668 × 448 and once at its original size of 4272 × 2848:

[stylized result at 668 × 448]

[stylized result at 4272 × 2848]

I understand that both results are correct and that the style is applied properly. But if I could tune the style scale on the large image, the way an artist would use a larger brush, it would be better for the user: the thumbnail (which they get quickly) would then be consistent with the full-size result (which they get after a long wait).

Sorry, I don't know whether this is mentioned in the paper, or whether there has been any progress on it in recent research?
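For reference, both results above came from plain test.lua runs on the two differently sized copies of the photo, roughly like the following (the flag names are the ones I remember from the repo README, and the file names are placeholders):

th test.lua -input_image photo_668x448.jpg -model_t7 data/checkpoints/model.t7
th test.lua -input_image photo_4272x2848.jpg -model_t7 data/checkpoints/model.t7

The only thing that differs is the input resolution, which is why the strokes look relatively larger on the small image; at least I could not find a flag that changes the style scale itself.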

SantaFlamel commented 8 years ago

Wow, it's great! What settings did you use to get this result?

xpeng commented 8 years ago

training parameters: -model johnson -image_size 512 -style_size 512 -content_weight 1 -style_weight 50 -learning_rate 0.001 -normalize_gradients true -tv_weight 0.000085

The dataset is MS COCO.
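Put together, the full invocation was roughly the following (the -data and -style_image paths are placeholders for my local MS COCO folder and style image; the rest are exactly the flags above):

th train.lua -data /path/to/mscoco -style_image style.jpg -model johnson -image_size 512 -style_size 512 -content_weight 1 -style_weight 50 -learning_rate 0.001 -normalize_gradients true -tv_weight 0.000085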

SantaFlamel commented 8 years ago

Thank you! And is this the style image you used? [attached style image]

xpeng commented 8 years ago

@SantaFlamel yes, exactly

lvjadey commented 8 years ago

Great!

lvjadey commented 8 years ago

Original: [house photo] Result: [stylized output]

Parameters: th train.lua -data dataset -model johnson -image_size 512 -style_size 512 -content_weight 1 -style_weight 50 -learning_rate 0.001 -normalize_gradients true -tv_weight 0.000085 -style_image data/textures/Composition.jpg -num_iterations 20000 -batch_size 1

lvjadey commented 8 years ago

Is something wrong here?

xpeng commented 8 years ago

Make style_weight smaller, e.g. 20 or 10. Also, I trained for 30000+ iterations.

markzhong88 commented 8 years ago

@xpeng how were you able to process a 4272 × 2848 image? It seems I can't process any image bigger than 1024 due to a CUDA memory issue, and I already use a GTX 1080 GPU. BTW, I think the second one looks better than the first one.

xpeng commented 8 years ago

@markz-nyc use CPU mode, with a huge amount of memory.

markzhong88 commented 8 years ago

@xpeng thanks for the tip. By huge memory, can you be more specific: is 32 GB of memory enough to do the trick? Also, does the CPU itself matter? I am currently on an i5 6500 CPU with 16 GB of memory...

xpeng commented 8 years ago

@markz-nyc a 4K-sized image takes me more than 70 GB of memory.

markzhong88 commented 8 years ago

@xpeng thanks, that's a lot of memory, but a nice result though.

0000sir commented 8 years ago

Hi @xpeng, have you solved this? I'm facing the same situation.

xpeng commented 8 years ago

@0000sir not yet. I tried training with a larger style size (such as 1080) and got a somewhat larger scale, but not a really satisfactory one. Besides, you can't get enough video memory to train a large style on content of the same size.
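Roughly what I tried, for reference (same flags as my usual runs, only with -style_size raised; paths are placeholders, so treat this as a sketch rather than an exact command):

th train.lua -data ./datasets -style_image input.jpg -model johnson -image_size 512 -style_size 1080 -content_weight 1 -style_weight 20 -learning_rate 0.001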

0000sir commented 8 years ago

I'm getting wrong results because of a mistake in the code. [https://github.com/DmitryUlyanov/texture_nets/issues/56]

Can you send me the code you used to generate these images? Thank you @xpeng

0000sir#gmail.com

xpeng commented 8 years ago

@0000sir I did not change any code in the repo. I used the following parameters to train: th train.lua -data ./datasets -style_image input.jpg -model johnson -image_size 512 -style_size 512 -content_weight 1 -style_weight 20 -learning_rate 0.001
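Then, to generate the images themselves, it is just the stock test script on the trained checkpoint, something like this (flag names are from the repo README as far as I remember; paths are placeholders):

th test.lua -input_image your_photo.jpg -model_t7 data/checkpoints/model.t7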

0000sir commented 8 years ago

I was so disappointed after trying so many times. I wished to make a large image with this, but there is too much detail in it, not like something painted with a big brush. Maybe it's time to give up. Thank you @xpeng, thank you @DmitryUlyanov. It's a perfect project for small images.

ycjing commented 6 years ago

@xpeng Here is a recent work that discusses the stroke size problem and also tries to use one single model to achieve continuous stroke size control after training: http://yongchengjing.com/StrokeControllable Demo video: https://youtu.be/UNG38tdMSMg

suke27 commented 6 years ago

Wow, @xpeng how do you tune the brush size? Is it possible to control the brush size in any image?

ycjing commented 6 years ago

Hi @xpeng,
Our code as well as pre-trained models that can scale the style at the testing stage are finally ready: https://github.com/LouieYang/stroke-controllable-fast-style-transfer We have also updated our paper accordingly: https://arxiv.org/abs/1802.07101 Pull requests are welcome! Thanks. ^_^

xpeng commented 6 years ago

@ycjing great work! And what is the performance of your implementation? I noticed that it is very fast in your demo video.

ycjing commented 6 years ago

@xpeng Thanks! On a single NVIDIA Quadro M6000, it takes 0.09 s on average to stylize an image of size 1024 × 1024. Our code supports flexible control and real-time stylization, and it has already been adopted by the AI team at Alibaba Group.

xpeng commented 6 years ago

@ycjing cool, I'm sure this project has a lot of potential. Where will Alibaba use this tech? In some application like Prisma?

ycjing commented 6 years ago

@xpeng Yes. They have applied this technique to one of their products, Pailitao, to provide users with some interesting image processing tools.