JuliaLang / julia

The Julia Programming Language
https://julialang.org/
MIT License

generic linear-algebra functions for Number types (e.g. BigFloat) #3014

Closed: stevengj closed this 8 years ago

stevengj commented 11 years ago

It would be nice to have linear-algebra functions for BigFloat (possibly as an external package?) and other generic Number types.

Currently, \ on a BigFloat matrix converts to Float64; eig and qr don't work at all.

The good news is that LINPACK-style triple-loop textbook implementations (straight out of Trefethen or Golub & Van Loan) should mostly suffice. Since BigFloat operations are CPU-limited rather than memory-limited, there is no reason to implement a LAPACK/BLAS-style blocked algorithm.
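As a rough sketch of what such a routine could look like (illustrative, not a proposed API), an unblocked textbook LU with partial pivoting needs nothing from the element type beyond +, -, *, / and abs:

function lu_textbook!(A::AbstractMatrix)
    # Unblocked textbook LU with partial pivoting. Overwrites A with L
    # (strictly below the diagonal, unit diagonal implied) and U (on and
    # above the diagonal); returns A and the pivot permutation p.
    n = size(A, 1)
    p = collect(1:n)
    for k in 1:n-1
        m = k                              # row of largest pivot in column k
        for i in k+1:n
            abs(A[i, k]) > abs(A[m, k]) && (m = i)
        end
        if m != k                          # swap rows k and m
            A[k, :], A[m, :] = A[m, :], A[k, :]
            p[k], p[m] = p[m], p[k]
        end
        for i in k+1:n                     # eliminate below the pivot
            A[i, k] /= A[k, k]
            for j in k+1:n
                A[i, j] -= A[i, k] * A[k, j]
            end
        end
    end
    return A, p
end

Triangular forward and back substitution on the factors then gives a generic \ for BigFloat, Rational, and friends.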

cc: @andrioni

Updated 21 August 2015 by @andreasnoack

mlubin commented 11 years ago

This would also help for #1629.

ViralBShah commented 11 years ago

On a wacky note: I have often wondered if we could translate LAPACK into Julia code and make it type-independent.

Realistically, just writing a few loops out of the textbooks may be the quickest way forward. Marking this up for grabs.

JeffBezanson commented 11 years ago

Yes, that would be especially good for rational matrices.

stevengj commented 11 years ago

@ViralBShah, realize that the whole structure of LAPACK is dictated by the assumption that the performance of non-blocked algorithms is memory-limited, not arithmetic-limited. It makes no sense to use this structure, with all of the complications that memory-optimization implies, in situations that are arithmetic-limited (i.e. as soon as you depart from hardware-accelerated numeric types).

ViralBShah commented 11 years ago

Yes, LAPACK is certainly designed to optimize memory access. My comment was largely tongue-in-cheek: if LAPACK were translated to Julia, we would have fast linear algebra for the hardware-accelerated numeric types, and linear algebra for user-defined numeric types for free.

Undoubtedly, the right thing here would be to do the trivial implementations.

Matrix multiplication of BigFloat matrices should be easy to test, since we already have Julia implementations.
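(For concreteness, a sketch of what that generic fallback amounts to; the function name is illustrative:)

function matmul(A::AbstractMatrix, B::AbstractMatrix)
    # Textbook triple loop, C = A*B, for any Number element type.
    size(A, 2) == size(B, 1) || throw(DimensionMismatch())
    T = promote_type(eltype(A), eltype(B))
    C = zeros(T, size(A, 1), size(B, 2))
    for j in 1:size(B, 2), k in 1:size(A, 2), i in 1:size(A, 1)
        # j-k-i order walks A and C column-major; for BigFloat the MPFR
        # arithmetic cost dwarfs memory traffic, so loop order barely matters.
        C[i, j] += A[i, k] * B[k, j]
    end
    return C
end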

andrioni commented 11 years ago

I intend to start working on this soon, especially since I already have some MPFR matrix code in C, but a generic Number implementation will be extremely slow on BigFloats, thanks to immutability plus the immense number of temporary variables created.

raichu commented 11 years ago

Are BigFloats immutable? I haven't seen that in the docs. If that's the case, this should be related to #4169.

andrioni commented 11 years ago

For all purposes and functions in the standard library they are effectively immutable, but you can mutate them by calling MPFR directly.

raichu commented 11 years ago

How can I operate on BigFloats using MPFR directly?

Also, this sounds horrible. How can Julia be efficient if they are immutable? I thought something was immutable only when you explicitly define it to be immutable. It would result in the generation of endless garbage and constructor, destructor, and copy calls (which I believe is what is responsible for #4169).

Is this going to be fixed? Otherwise, I feel like I'm better off continuing to use C++ for bignum operations.

andrioni commented 11 years ago

Just use ccall; here's the manual section about it. Every function that takes an mpfr_t will take a Ptr{BigFloat}.
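To make that concrete, here is a minimal sketch of an in-place addition (the rounding-mode argument and the Ptr-versus-Ref convention vary across Julia versions, so take this as an assumption-laden example rather than exact API):

function add!(z::BigFloat, x::BigFloat, y::BigFloat)
    # z <- x + y, writing into z's existing MPFR storage and so avoiding
    # the temporary that z = x + y would allocate. The final 0 is
    # MPFR_RNDN (round to nearest).
    ccall((:mpfr_add, :libmpfr), Int32,
          (Ref{BigFloat}, Ref{BigFloat}, Ref{BigFloat}, Int32),
          z, x, y, 0)
    return z
end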

lsorber commented 11 years ago

+1 for this feature. I would love to be able to define my own Number types and use them to compute matrix factorizations. This is really hard to do in other languages such as MATLAB.

raichu commented 11 years ago

As I mentioned in #3864, mplapack could solve this problem.

ViralBShah commented 11 years ago

That page was last updated in 2011!

aviks commented 11 years ago

I don't think that page footer is correct, since that page also contains a release announcement from Dec 2012 ... unless he's invented a time machine :)

paulanalyst commented 10 years ago

Do you see a chance of handling BigFloat in eig and other matrix-algebra calculations?

A = rand(100,100)
R = convert(Matrix{BigFloat}, cor(A))
m, v = eig(R)
ERROR: no method eigfact!(Array{Any,1}, Array{BigFloat,2})
 in eigfact at linalg/factorization.jl:669
 in eig at linalg/factorization.jl:673

StefanKarpinski commented 10 years ago

It's actively being worked on by @andreasnoackjensen, but it's a fairly non-trivial endeavor. You can already do LU and QR, I believe.

andreasnoack commented 10 years ago

Yes. This functionality will come, but it still needs some work before it is ready.

paulanalyst commented 10 years ago

Thx, we wait! Paul

raichu commented 10 years ago

@ViralBShah How about this? http://eigen.tuxfamily.org/index.php?title=Main_Page

paulanalyst commented 10 years ago

Is there any way to get from LU or QR to eigenvectors? Paul

raichu commented 10 years ago

Yes, and with C++ templates so you can use gmp++ with it for instance.

It's also a very active project that has been gaining popularity recently.

ViralBShah commented 10 years ago

For Base, we certainly want pure julia code that will be maintainable. Hooking up C++/GMP++ can certainly be done as a Julia package.

raichu commented 10 years ago

For Base, we certainly want pure julia code that will be maintainable.

I don't understand what this means. Aren't you already using *BLAS for Float64 (which is in C, not Julia)?

ivarne commented 10 years ago

I think it is about shrinking the Base library (https://github.com/JuliaLang/julia/issues/5155) and moving functionality out to packages.

An overly large standard library makes it hard to run Julia in memory-limited environments and is scarier for security-conscious system administrators. Introducing more complex functionality in Base is a step in the wrong direction. BLAS is a different story, because it is already in Base and lots of people use it there.

raichu commented 10 years ago

BLAS is a different story

Then you should be happy with it, given that what we are talking about is a BLAS for BigFloat.

ivarne commented 10 years ago

Okay, if you need it spelled out: "Float64 BLAS is a different story", since it has always been there and is used a lot in many applications. It is also functionality that users expect when they come from MATLAB, NumPy, and other technical platforms. A pure Julia implementation would make things work for any user-defined type (that supports the required methods). A C++ version will only support BigFloat.
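(To illustrate that point with a toy sketch, not an existing package: a minimal user-defined dual number, carrying derivatives through any generic factorization code that only asks for arithmetic and comparisons.)

struct Dual <: Number
    val::Float64    # value
    der::Float64    # derivative
end
Dual(x::Real) = Dual(Float64(x), 0.0)
Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:-(a::Dual, b::Dual) = Dual(a.val - b.val, a.der - b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.val * b.der + a.der * b.val)
Base.:/(a::Dual, b::Dual) = Dual(a.val / b.val, (a.der * b.val - a.val * b.der) / b.val^2)
Base.abs(a::Dual) = a.val < 0 ? Dual(-a.val, -a.der) : a
Base.isless(a::Dual, b::Dual) = isless(a.val, b.val)
Base.convert(::Type{Dual}, x::Real) = Dual(x)
Base.promote_rule(::Type{Dual}, ::Type{<:Real}) = Dual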

A discussion about what functionality belongs in Base and which packages we should include in a standard distribution is probably not the most constructive way of attacking this issue anyway. A reorganization will happen before 1.0 is released. Pull requests with quality code might make Viral change his mind, and there are also others who might raise their voices if it fits poorly in a package.

raichu commented 10 years ago

A C++ version will only support BigFloat.

Eigen uses templates, so it should work with all built-in types. It will also be faster than a Julia implementation, which is preferable assuming speed is important. But the argument here is essentially "having a native Julia implementation is more important than speed", right?

Am I understanding correctly that you eventually want to replace OpenBLAS with a Julia implementation? If not, the argument is "having a native Julia implementation is more important than speed, but Float64 is a special case".

And the question becomes: why does Float64 get speed as the top priority, whereas it's OK to have a slower BigFloat implementation in a probably-distant future?

Because of the size of the user-base of each type?

Keno commented 10 years ago

Replacing OpenBLAS with a pure Julia implementation would be great. Unfortunately it's not very practical: Float64 arithmetic operations are so fast, being implemented in hardware, that you really need all the crazy contortions that OpenBLAS goes through. This is not the case for BigFloat. Nobody is stopping you from wrapping a library and putting it in a package (in fact, that's what we're recommending), though I warn you that it will be rather painful at the moment. If you have convincing code and make a convincing argument that it should be in Base, people may reconsider. At the moment I am not convinced that the C++ version will be significantly faster or less effort than a Julia version, but if you feel strongly, I'd be happy to be proven wrong.

tknopp commented 10 years ago

I am very much against using Eigen (which is an awesome library) in Base. It would mean creating C++ wrapping code that is very slow to compile, and this is additional code that has to be maintained.

My take on BLAS is that it would be great if we could build it "on top" of a generic Julia implementation. It's OK for it to be enabled by default, but it should be easy to disable. We have seen quite a few bugs recently that are due to OpenBLAS, and for small matrix/vector sizes the Julia implementation is actually faster. Ideally this should be fixed in OpenBLAS, but I can also see the benefits of being less dependent on external projects.

lindahua commented 10 years ago

Replacing OpenBLAS (or MKL) with a pure Julia implementation is not a feasible solution.

tknopp commented 10 years ago

Looking at https://github.com/JuliaLang/julia/blob/master/base/linalg/blas.jl, I count 28 functions, each of which should have a trivial ten-line implementation. Am I missing something here?
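(A sketch of what one such ten-liner could look like, following the classic gemv contract; my example, not an existing Base function:)

function gemv!(alpha, A::AbstractMatrix, x::AbstractVector, beta, y::AbstractVector)
    # Generic y <- alpha*A*x + beta*y for any Number element type.
    m, n = size(A)
    for i in 1:m
        acc = zero(alpha * A[i, 1] * x[1])   # accumulator of the right type
        for j in 1:n
            acc += A[i, j] * x[j]
        end
        y[i] = alpha * acc + beta * y[i]
    end
    return y
end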

I did not say to optimize it like OpenBLAS, or that it should compete with it on performance. I furthermore said to make BLAS the default, with an easy way to opt out.

However, #6951 does not support the claim that OpenBLAS is extremely well optimized for small sizes. When we design the fixed-size vectors/matrices, I am not so certain that calling OpenBLAS will be the best idea.

lindahua commented 10 years ago

@tknopp OpenBLAS (and other BLAS vendors) is not ideal for small arrays, and I agree that a pure Julia implementation can be useful in that domain. I welcome the idea of a pure Julia implementation for small arrays.

It's just that for larger arrays, nothing can replace OpenBLAS (or MKL) yet.

tknopp commented 10 years ago

And small is 2^11 according to #6951 ;-) ... Just kidding. I think we actually have rather similar opinions. I just wanted to point out that whether OpenBLAS is faster than Julia is quite size-dependent. Not everybody is using large arrays.

Looking at Julia from an embedding perspective, it would be very attractive to make Julia's Base more modular and less dependent on binary packages (again: optionally!).

northbymidwest commented 9 years ago

Is anyone actively working on this? If not, I wouldn't mind implementing some simple (textbook?) algorithms to at least get things rolling. I'm kind of tired of having to go into Mathematica to get high-precision linear algebra functions.

andreasnoack commented 9 years ago

It is not that visible from this thread, but you can already do some operations, so let me do some advertising, e.g.

julia> A = big(randn(3,3));b = A*ones(3);

julia> A\b # by LU
3-element Array{BigFloat,1}:
 9.999999999999999999999999999999999999999999999999999999999999999999999999999741e-01
 1.000000000000000000000000000000000000000000000000000000000000000000000000000017e+00
 1e+00                                                                               

julia> qrfact(A)\b
3-element Array{BigFloat,1}:
 1.000000000000000000000000000000000000000000000000000000000000000000000000000035e+00
 9.999999999999999999999999999999999999999999999999999999999999999999999999999827e-01
 1e+00                                                                               

julia> cholfact(A'A)\(A'b)
3-element Array{BigFloat,1}:
 9.999999999999999999999999999999999999999999999999999999999999999999999999999827e-01
 9.999999999999999999999999999999999999999999999999999999999999999999999999999827e-01
 9.999999999999999999999999999999999999999999999999999999999999999999999999999827e-01

I'm experimenting with more functions in https://github.com/andreasnoack/LinearAlgebra.jl. It is still quite α, but you can try it out if you are impatient. Feedback is very welcome. With the package you can get eigenvalues,

julia> vr, vi, v3 = big(randn(3)), big(randn(3)), big(randn(3));

julia> V = [vr + vi*im vr - vi*im  v3];

julia> A = real(V*diagm([1 + im, 1 - im, 1])/V);

julia> LinearAlgebra.EigenGeneral.eigvals!(copy(A))
3-element Array{Complex{BigFloat},1}:
 1.000000000000000000000000000000000000000000000000000000000000000000000000000035e+00+9.999999999999999999999999999999999999999999999999999999999999999999999999999827e-01im
 1.000000000000000000000000000000000000000000000000000000000000000000000000000035e+00-9.999999999999999999999999999999999999999999999999999999999999999999999999999827e-01im
 9.999999999999999999999999999999999999999999999999999999999999999999999999999914e-01+0e+00im

for the general problem and

julia> A = qr(big(randn(5,5)))[1] |> t -> t*diagm([1e-40, 1e-30, 1e-20, 1e-10, 1])/t;

julia> LinearAlgebra.EigenHermitian.eigvals!(Hermitian(copy(A), :L))
5-element Array{BigFloat,1}:
 9.999999999999999292928793998801450023472206281233081156701303631711651819658446e-41
 1.000000000000000083336420607585985350931336026866344769362363531395456743464033e-30
 9.999999999999999451532714542095716517295037027873924471078714115834551975538631e-21
 1.0000000000000000364321973154977415791655470655996396089904010295867919921875e-10  
 9.999999999999999999999999999999999999999999999999999999999999999999999999999482e-01

for the symmetric.

I'm still considering how to store the eigenvectors, so you must do shift-invert yourself for now.
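(A rough sketch of the shift-invert step, using only the generic solve; note the name clash: using LinearAlgebra here refers to the stdlib on Julia >= 0.7, not the package above.)

using LinearAlgebra: I, norm

function inverse_iteration(A, sigma; iters = 5)
    # Inverse iteration: given an approximate eigenvalue sigma, repeatedly
    # solve (A - sigma*I) x = x and renormalize. x converges to the
    # eigenvector whose eigenvalue lies closest to sigma; a shift very
    # close to the true eigenvalue converges in one or two passes.
    x = ones(typeof(sigma), size(A, 1))
    B = A - sigma * I       # nearly singular on purpose
    for _ in 1:iters
        x = B \ x           # re-factorizes every pass; fine for a sketch
        x /= norm(x)
    end
    return x
end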

By the way, the symmetric tridiagonal solver underneath the Hermitian solver shows that Julia can indeed be fast. I believe my implementation has the same precision (at least for these simple matrices)

julia> A = SymTridiagonal(randn(10000), randn(9999));

julia> @time LinearAlgebra.EigenHermitian.eigvalsPWK!(copy(A));
elapsed time: 1.226747285 seconds (160280 bytes allocated)

julia> @time LinearAlgebra.LAPACK2.sterf!(copy(A.dv), copy(A.ev)); # same algorithm in LAPACK
elapsed time: 2.035283376 seconds (160240 bytes allocated)

julia> @time eigvals(A); # Our default algorithm (which we might want to reconsider)
elapsed time: 3.130386049 seconds (2322992 bytes allocated)

paulanalyst commented 9 years ago

Andreas, big work, THX. Do you see some future for eig on BigFloat in Julia? Like this:

A = rand(100,100)
R = convert(Matrix{BigFloat}, cor(A))
m, v = eig(R)
ERROR: no method eigfact!(Array{Any,1}, Array{BigFloat,2})
 in eigfact at linalg/factorization.jl:669
 in eig at linalg/factorization.jl:673

Paul

andreasnoack commented 9 years ago

I'm not sure it will end up in Base, but it will certainly be a real package at some point. However, you should be warned that the precision of the eigenvalues depends critically on the precision of the data. Compare the results above to this calculation, where the data is transferred back and forth between BigFloat and Float64.

julia> float64(LinearAlgebra.EigenHermitian.eigvals!(Hermitian(big(float64(A)), :L))./[1e-40,1e-30,1e-20,1e-10,1]-1)
5-element Array{Float64,1}:
  -5.37474e22 
  -2.74808e12 
 257.307      
  -2.25512e-8 
  -2.72992e-17

julia> eigvals(Hermitian(float64(A), :L))./[1e-40,1e-30,1e-20,1e-10,1]-1
5-element Array{Float64,1}:
   -1.49372e23 
   -6.78702e12 
 1305.47       
   -2.92874e-7 
    4.44089e-16

so the relative precision is about the same.

paulanalyst commented 9 years ago

Thx, I know. I have data in real BIG precision ;) I'm waiting, and I miss it! ;) Paul

andreasnoack commented 9 years ago

You can try it out now if you'd like with Pkg.clone("https://github.com/andreasnoack/LinearAlgebra.jl"), but you'd have to calculate the vectors by shift-invert for now.

ViralBShah commented 9 years ago

Can we close this, and have specific issues filed as required? I think the larger point of having generic linear algebra functions has been addressed.

andreasnoack commented 9 years ago

Yes, the basics are covered, but eigenproblems and the SVD are still missing. I'm okay either with having separate issues for them or with renaming this issue to be more precise now that some progress has been made.

paulanalyst commented 9 years ago

What's wrong? In the last line: ERROR: EigenGeneral not defined

[Julia startup banner]
Version 0.4.0-dev+1984 (2014-12-07 23:20 UTC)
Commit 052f54e (24 days old master)
x86_64-w64-mingw32

julia> using LinearLeastSquares
ERROR: LinearLeastSquares not found
 in require at loading.jl:47

julia> A = big(randn(3,3)); b = A*ones(3)
3-element Array{Base.MPFR.BigFloat,1}:
 -2.9446960791946807933783247790415771305561065673828125e-01
  3.236218918047749248945166300472919829189777374267578125e+00
 -2.56092452427787808932890811774996109306812286376953125e+00

julia> A\b # by LU
3-element Array{Base.MPFR.BigFloat,1}:
 1.000000000000000000000000000000000000000000000000000000000000000000000000000017e+00
 1.000000000000000000000000000000000000000000000000000000000000000000000000000017e+00
 9.999999999999999999999999999999999999999999999999999999999999999999999999999827e-01

julia> qrfact(A)\b
3-element Array{Base.MPFR.BigFloat,1}:
 1.000000000000000000000000000000000000000000000000000000000000000000000000000017e+00
 1.000000000000000000000000000000000000000000000000000000000000000000000000000035e+00
 9.999999999999999999999999999999999999999999999999999999999999999999999999999741e-01

julia> cholfact(A'A)\(A'b)
3-element Array{Base.MPFR.BigFloat,1}:
 1.000000000000000000000000000000000000000000000000000000000000000000000000000017e+00
 1.000000000000000000000000000000000000000000000000000000000000000000000000000017e+00
 1e+00

julia> vr, vi, v3 = big(randn(3)), big(randn(3)), big(randn(3));

julia> V = [vr + vi*im vr - vi*im v3];

julia> V = [vr + vi*im vr - vi*im v3]
3x3 Array{Complex{Base.MPFR.BigFloat},2}:
 -2.92480246431090773473471244869870133697986602783203125e-01-1.2330174177370926968944075952094863168895244598388671875e-01im
 -6.248288026283737028876430485979653894901275634765625e-01+0e+00im
  1.2486111443976664059363201886299066245555877685546875e+00-4.78039082448679597536767005294677801430225372314453125e-01im
 -1.437786704485576105838617877452634274959564208984375e+00+0e+00im
  4.5285454219578136214607866349979303777217864990234375e-02-7.687322873674572176749819618635228835046291351318359375e-02im
 -8.374534723584889928105212675291113555431365966796875e-01+0e+00im

julia> A = real(V*diagm([1 + im, 1 - im, 1])/V)
3x3 Array{Base.MPFR.BigFloat,2}:
  2.576802034195741440834721530934809151596626451040076198981114681613706386162393e+00
 -2.111621855823541931575384837701543463270383359650712117854830826270375426030325e+00
 -8.559677601741757504896778861974182932186714773709164352191451088403852069777578e+00
  9.7805221988661941195911689523504923366255761234952124648193339633989847339366e+00
 -4.188140715194974510238856704548191419461734044485174519602972737487350197843463e-01
  1.400124836299659985910674299516564706036794287389179472732815371667614610927464e+00

julia> LinearAlgebra.EigenGeneral.eigvals!(copy(A))
ERROR: LinearAlgebra not defined

julia> Pkg.clone("https://github.com/andreasnoack/LinearAlgebra.jl")
INFO: Cloning LinearAlgebra from https://github.com/andreasnoack/LinearAlgebra.jl
INFO: Computing changes...

julia> LinearAlgebra.EigenGeneral.eigvals!(copy(A))
ERROR: LinearAlgebra not defined

julia> A
3x3 Array{Base.MPFR.BigFloat,2}:
  2.576802034195741440834721530934809151596626451040076198981114681613706386162393e+00
 -2.111621855823541931575384837701543463270383359650712117854830826270375426030325e+00
 -8.559677601741757504896778861974182932186714773709164352191451088403852069777578e+00
  9.7805221988661941195911689523504923366255761234952124648193339633989847339366e+00
 -4.188140715194974510238856704548191419461734044485174519602972737487350197843463e-01
  1.400124836299659985910674299516564706036794287389179472732815371667614610927464e+00

julia> using LinearAlgebra
ERROR: ArrayViews not found
 in require at loading.jl:47
 in include at boot.jl:242
 in include_from_node1 at loading.jl:128
 in include at boot.jl:242
 in include_from_node1 at loading.jl:128
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:51
while loading C:\Users\SAMSUNG2\.julia\v0.4\LinearAlgebra\src\cholesky.jl, in expression starting on line 6
while loading C:\Users\SAMSUNG2\.julia\v0.4\LinearAlgebra\src\LinearAlgebra.jl, in expression starting on line 11

julia> Pkg.update()
INFO: Updating METADATA...
INFO: Updating LinearAlgebra...
INFO: Computing changes...
INFO: No packages to install, update or remove

julia> Pkg.add("ArrayVievs")
ERROR: unknown package ArrayVievs
 in wait at task.jl:51
 in sync_end at task.jl:311
 in add at pkg/entry.jl:319
 in add at pkg/entry.jl:71
 in anonymous at pkg/dir.jl:28
 in cd at file.jl:30
 in cd at pkg/dir.jl:28
 in add at pkg.jl:20

julia> Pkg.add("ArrayViews")
INFO: Cloning cache of ArrayViews from git://github.com/JuliaLang/ArrayViews.jl.git
INFO: Installing ArrayViews v0.4.8
INFO: Package database updated

julia> Pkg.update()
INFO: Updating METADATA...
INFO: Updating LinearAlgebra...
INFO: Computing changes...
INFO: No packages to install, update or remove

julia> using LinearAlgebra

julia> LinearAlgebra.EigenGeneral.eigvals!(copy(A))
ERROR: EigenGeneral not defined

Paul

andreasnoack commented 9 years ago

@paulanalyst This should be reported at https://github.com/andreasnoack/LinearAlgebra.jl

andreasnoack commented 8 years ago

Closing in favor of #5429