Open alexjc opened 9 years ago
I'm wondering if the second and third issues in my list are caused by software versions... I had to upgrade Theano to the version in the Git repository since the PIP version did not have pooling modes for convolution required by the latest Lasagne—and it causes an error otherwise. I believe things were more reliable before the upgrade (though I had to hack the average pooling out for it to work).
Which combination of versions/revisions were you using for Lasagne and Theano?
It'd be easier to work with and make pull requests if this was a script rather than a notebook, but it's up to you.
It's mostly a notebook because they can be browsed and rendered directly on github: https://github.com/Lasagne/Recipes/blob/master/examples/styletransfer/Art%20Style%20Transfer.ipynb We'd lose that if it was a script.
Is there a way to force it to just allocate the buffers it needs once then keep them in memory throughout the process?
Yes, just set THEANO_FLAGS=lib.cnmem=.5 to have it allocate 50% of GPU memory from the start, or THEANO_FLAGS=lib.cnmem=600 to have it allocate 600 MiB. In addition, you can tell it not to release memory in between via THEANO_FLAGS=allow_gc=0, or even combine both: THEANO_FLAGS=lib.cnmem=600,allow_gc=0 (that's faster than either of those alone). You can also make those settings permanent in your ~/.theanorc:
[global]
floatX = float32
device = gpu
allow_gc = False
[lib]
cnmem = 600
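If you'd rather set the flags per process than in ~/.theanorc, keep in mind that Theano reads THEANO_FLAGS from the environment when it is first imported, so the variable has to be set beforehand. A minimal sketch (the flag values are just the examples from above, not recommendations):

```python
import os

# THEANO_FLAGS is read when theano is first imported, so set it before that.
os.environ["THEANO_FLAGS"] = "lib.cnmem=600,allow_gc=0"

# import theano  # would now pick up the flags above
```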
I had to upgrade Theano to the version in the Git repository since the PIP version did not have pooling modes for convolution required by the latest Lasagne
Yes, we've tried to prominently mention this in the install instructions: http://lasagne.readthedocs.org/en/latest/user/installation.html#stable-lasagne-release
Which combination of versions/revisions were you using for Lasagne and Theano?
I'm working with the bleeding-edge version of both (that's required for lib.cnmem), but I don't know which version @ebenolson used for the notebook. Maybe we should include that information in the notebook? Since the release of Lasagne we haven't changed anything that would break backwards compatibility, though, so I'd expect it to work the same no matter which version you're using.
I'll leave the other technical questions up to Eben!
Hi @alexjc, thanks for the questions.
I don't know what versions I was using, but they were likely the current master of each when the notebook was committed. I think it is probably fine with the current versions, but I'll try to rerun later and confirm.
I haven't seen that particular LBFGS error, but using scipy is definitely a weak point, and I'd like to find an alternative. Perhaps I'll see whether the Torch optimizer can be wrapped or converted easily.
As for edge/image size effects I haven't really investigated, I'll have to get back to you on that.
Many thanks @f0k and @ebenolson.
I've traced the major problems down to optimizer=fast_compile and exception_verbosity=high, which I had enabled for testing algorithm changes. With those two flags set, LBFGS fails randomly, and I presume any form of gradient descent would also fail if the function compiles incorrectly.
I will report back on the other two issues, which seem minor in comparison!
I have also seen weird border effects when I have used this example for my own work. Have we figured out a reason for this? :)
@christopher-beckham Try using an image size that's a multiple of 16 or 32, depending on which layers you use.
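To make that concrete, here's a tiny helper (my own sketch, not something from the notebook) that snaps a dimension down to the nearest multiple:

```python
def snap_to_multiple(size, multiple=32):
    """Round an image dimension down to the nearest multiple (e.g. of 16 or 32)."""
    return (size // multiple) * multiple
```

For example, snap_to_multiple(600) gives 576 and snap_to_multiple(451) gives 448, so a 600x451 input would be resized or cropped to 576x448 before running it through the network.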
I have a few questions about the notebook with the implementation of "A Neural Algorithm for Artistic Style". Hopefully this is the right place for them? It'd be easier to work with and make pull requests if this was a script rather than a notebook, but it's up to you.
Overall I think this is by far the prettiest implementation I've seen of the algorithm, and it's been a pleasure to work with. My questions:
- Is there a way to force it to just allocate the buffers it needs once, then keep them in memory throughout the process?
- Why is +2 added to width and height?
- I've found lbfgs in scipy to be quite unstable (compared to the one in Torch used by Justin's implementation), as it often returns the error below. This seems to be quite random depending on image size/parameters, and adding new features to the algorithm isn't helping. Any ideas?

Thanks again for the code, I've been very impressed with Lasagne because of it!
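For context, the scipy optimizer in question is scipy.optimize.fmin_l_bfgs_b, which expects an objective returning a float64 loss and gradient; a float32/float64 mismatch from the Theano side is one common source of trouble with it. The toy quadratic below just stands in for the actual style-transfer loss:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def loss_and_grad(x):
    # Toy quadratic standing in for the compiled Theano loss/grad function.
    # fmin_l_bfgs_b wants float64, so cast explicitly.
    x = x.astype(np.float64)
    loss = np.sum((x - 3.0) ** 2)
    grad = 2.0 * (x - 3.0)
    return loss, grad

x0 = np.zeros(4)
x_opt, loss_opt, info = fmin_l_bfgs_b(loss_and_grad, x0, maxfun=40)
```

On this toy problem it converges to x = 3.0 everywhere; the instability in the thread shows up with the much larger, float32-backed style-transfer objective.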