Lasagne / Recipes

Lasagne recipes: examples, IPython notebooks, ...

Questions about Neural Style implementation #24

Open alexjc opened 9 years ago

alexjc commented 9 years ago

I have a few questions about the notebook with the implementation of "A Neural Algorithm of Artistic Style". Hopefully this is the right place for them? It'd be easier to work with and make pull requests if this were a script rather than a notebook, but it's up to you.

Overall I think this is by far the prettiest implementation I've seen of the algorithm, and it's been a pleasure to work with. My questions:

1. Memory: is there a way to force it to just allocate the buffers it needs once, then keep them in memory throughout the process?

2. Optimization: scipy's L-BFGS-B occasionally aborts with output like the following:

Bad direction in the line search;
   refresh the lbfgs memory and restart the iteration.

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
*****    211    259      2     0     0   1.204D-03   2.472D+03
  F =   2472.1531027869614

ABNORMAL_TERMINATION_IN_LNSRCH

 Line search cannot locate an adequate point after 20 function
  and gradient evaluations.  Previous x, f and g restored.
 Possible causes: 1 error in function or gradient evaluation;
                  2 rounding error dominate computation.

3. Borders: I'm also seeing some edge and image-size effects in the output; have you looked into those?

Thanks again for the code, I've been very impressed with Lasagne because of it!

alexjc commented 9 years ago

I'm wondering if the second and third issues in my list are caused by software versions... I had to upgrade Theano to the version in the Git repository, since the pip version did not have the pooling modes for convolution required by the latest Lasagne; it raises an error otherwise. I believe things were more reliable before the upgrade (though I had to hack the average pooling out for it to work).

Which combination of versions/revisions were you using for Lasagne and Theano?

f0k commented 9 years ago

> It'd be easier to work with and make pull requests if this were a script rather than a notebook, but it's up to you.

It's mostly a notebook because notebooks can be browsed and rendered directly on GitHub: https://github.com/Lasagne/Recipes/blob/master/examples/styletransfer/Art%20Style%20Transfer.ipynb. We'd lose that if it were a script.

> Is there a way to force it to just allocate the buffers it needs once, then keep them in memory throughout the process?

Yes, just set THEANO_FLAGS=lib.cnmem=.5 to have it allocate 50% of GPU memory from the start, or THEANO_FLAGS=lib.cnmem=600 to have it allocate 600 MiB. In addition, you can tell it not to release memory in between via THEANO_FLAGS=allow_gc=0, or even combine both: THEANO_FLAGS=lib.cnmem=600,allow_gc=0 (that's faster than either of those alone). You can also make those permanent in your ~/.theanorc:

[global]
floatX = float32
device = gpu
allow_gc = False
[lib]
cnmem = 600
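
If you'd rather keep it in the script itself, the same flags can also be set from Python before Theano is first imported (a minimal sketch; lib.cnmem only takes effect if it is set before the first import of theano):

import os

# Assumption: this runs at the very top of the script, because lib.cnmem
# is read when Theano initializes the GPU on its first import.
os.environ.setdefault("THEANO_FLAGS",
                      "floatX=float32,device=gpu,lib.cnmem=600,allow_gc=0")

import theano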

> I had to upgrade Theano to the version in the Git repository, since the pip version did not have the pooling modes for convolution required by the latest Lasagne

Yes, we've tried to prominently mention this in the install instructions: http://lasagne.readthedocs.org/en/latest/user/installation.html#stable-lasagne-release
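
For reference, the bleeding-edge install those instructions describe boils down to the following (check the linked page for the current form):

pip install --upgrade https://github.com/Theano/Theano/archive/master.zip
pip install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip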

> Which combination of versions/revisions were you using for Lasagne and Theano?

I'm working with the bleeding-edge versions of both (that's required for lib.cnmem), but I don't know which versions @ebenolson used for the notebook. Maybe we should include that information in the notebook? Since the release of Lasagne we haven't changed anything that would break backwards compatibility, though, so I'd expect it to work the same no matter which version you're using.

I'll leave the other technical questions up to Eben!

ebenolson commented 9 years ago

Hi @alexjc, thanks for the questions.

I don't know what versions I was using, but they were likely the current master when the notebook was committed. I think it is probably fine with the current versions, but I'll try to rerun later and confirm.

I haven't seen that particular L-BFGS error, but depending on scipy is definitely a weak point; I'd like to find an alternative. Perhaps I'll see whether the Torch optimizer can be wrapped or converted easily.
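
For anyone comparing implementations, the scipy coupling in question is essentially the following (a minimal sketch, not the notebook's exact code; f_loss, f_grad and x_init are hypothetical names standing in for the compiled Theano loss/gradient functions and the initial image):

import numpy as np
import scipy.optimize

# f_loss/f_grad: hypothetical compiled Theano functions mapping a flat
# float32 parameter vector to the scalar loss / flat gradient.
def eval_loss(x):
    return float(f_loss(x.astype(np.float32)))

def eval_grad(x):
    return np.array(f_grad(x.astype(np.float32))).ravel().astype(np.float64)

# L-BFGS-B works in float64, so the wrappers convert in both directions.
x_opt, loss, info = scipy.optimize.fmin_l_bfgs_b(
    eval_loss, x_init.astype(np.float64).ravel(), fprime=eval_grad, maxfun=40)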

As for the edge/image-size effects, I haven't really investigated them yet; I'll have to get back to you on that.

alexjc commented 9 years ago

Many thanks @f0k and @ebenolson.

I've traced the major problems down to optimizer=fast_compile and exception_verbosity=high, which I had enabled for testing algorithm changes. With those two flags set, L-BFGS fails randomly, and I presume any form of gradient descent would also fail if the function is compiled incorrectly.

I will report back on the other two issues, which seem minor in comparison!
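
In case it helps anyone hitting the same thing, the effective values of those two flags are easy to check at runtime:

import theano
print(theano.config.optimizer)            # default is 'fast_run'; 'fast_compile' skips graph optimizations
print(theano.config.exception_verbosity)  # default is 'low'; 'high' is meant for debugging only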

christopher-beckham commented 8 years ago

I have also seen weird border effects when using this example for my own work. Have we figured out a reason for this? :)

alexjc commented 8 years ago

@christopher-beckham Try using an image size that's a multiple of 16 or 32, depending on which layers you use.
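
For reference, a hypothetical helper to enforce that. This sketch assumes the artifacts come from the 2x pooling stages not dividing the input evenly: using VGG layers up to pool4 needs a multiple of 16, anything deeper a multiple of 32.

def snap_to_multiple(size, multiple=32):
    # Round a dimension down so every 2x pooling stage divides it evenly
    # (5 pooling stages -> multiple of 32, 4 stages -> multiple of 16).
    return max(multiple, (size // multiple) * multiple)

width, height = snap_to_multiple(1024), snap_to_multiple(768)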