rdeits closed this pull request 7 years ago
I've also fixed all deprecations on julia v0.6 so we can test this properly. Note that this adds a Compat.jl requirement.
Hmm. My memory-allocation tests pass locally on OSX and Ubuntu with Julia 0.5 and 0.6, but I'm seeing lots of memory allocation with Julia 0.5 on Travis. I'm not sure what's up with that yet.
@tkoolen figured out that the memory allocation issues on Travis are due to code coverage being turned on there. Mystery solved!
Thanks so much for the huge effort in bringing this package up to date! Would you like push access to this repo to help me maintain it? Are you ready for me to review and merge?
I looked through your commits pretty quickly - everything looks good, though I agree it would be ideal to have a better solution than `@__dot__` everywhere.
I've made a few more improvements over at https://github.com/rdeits/nnls.jl that I'd like to bring in. Should be ready in a day or two. I don't think I need push access, but I'm happy to keep helping out :smile:
Also, I think I just missed the correct combination of parentheses: `(!).(x)` works without warnings on v0.5 and v0.6. I'll switch to that.
Any idea why travis is failing?
Merging since the problem seems to be with ADMM and can be addressed in future PR.
Thanks!
Closes #2
This PR brings in my implementation of the NNLS algorithm, which is a straightforward port of the original Fortran code by Lawson and Hanson.
It will probably need some cleanup and changes before merging, but I wanted to show what I've got so far. The all-caps comments are taken directly from the original Fortran code.
The code is substantially longer than the current implementation, but it's also several hundred times faster (and can be run with no memory allocation if desired). The primary speed difference comes from the fact that this implementation uses Method 3 from Chapter 24 of Lawson & Hanson. That method lets us update the QR factorization of A in-place and incrementally, whereas the current implementation computes `pinv(A)` at every iteration. A neat side-effect of this change is that we now have exactly the same procedure as `scipy.optimize.nnls`, so the results for a given problem are bit-for-bit identical to those in scipy.

Other changes:

- Removed the `tol` argument to `nnls()`, since it doesn't actually seem to be necessary and isn't implemented in Lawson & Hanson
- `nnls()` is now generic, so it can handle any scalar type
- Replaced the `@pyimport`s in `runtests.jl` with `pyimport_conda`, which is provided by PyCall and uses Conda to install dependencies. This should fix the PackageEvaluator tests: http://pkg.julialang.org/detail/NonNegLeastSquares.html
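For reference, here is a minimal sketch of the scipy routine this port is matched against. The toy problem below is hypothetical (not taken from the PR's test suite); it just shows how `scipy.optimize.nnls` clamps a component that would go negative in the unconstrained least-squares solution:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 3x2 problem: the unconstrained least-squares solution
# has a negative second component, so NNLS must clamp it to zero.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, -1.0, 1.0])

# Returns the nonnegative solution x and the residual norm ||A x - b||.
x, rnorm = nnls(A, b)
# x -> approximately [1.5, 0.0]
```

A port following the same active-set procedure (Lawson & Hanson's Algorithm NNLS) should reproduce these values exactly, since both implementations perform the same sequence of floating-point operations.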