Open amfaber opened 1 year ago
Hi,
apologies for the late reply! Thanks for the kind words, I am always happy to hear when people find argmin useful :)
Admittedly, I'm not an expert on nalgebra. Most work on the nalgebra backend was done by collaborators, but from a first glance most traits were implemented for the seemingly most general `Matrix` type, therefore `SVector` and `SMatrix` should be covered. If not, I consider this a bug. Some are implemented for `OMatrix` and I'm unsure if that covers the statically allocated arrays as well.
The error message you get does seem a bit strange though. I'm not sure if this is related, but there are some inconsistencies regarding the use of `f32` and `f64` which may cause problems, in particular:

```rust
type Param = Vector2<f64>;
type Output = f64;
```

and

```rust
let init_hess = Matrix2::<f32>::identity();
```

It would be interesting to see if changing the latter to `Matrix2::<f64>::identity()` has an effect on the error message.
EDIT: Also, `.inv_hess(init_hess)` should be `.inv_hessian(init_hess)`.
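Putting both suggestions together, the relevant lines would look roughly like this (a sketch based on the BFGS example's setup; `cost`, `solver`, and `init_param` are assumed to be defined as in the surrounding code):

```rust
// Sketch only: use f64 to match `type Param = Vector2<f64>` ...
let init_hess = Matrix2::<f64>::identity();

// ... and call `inv_hessian` (not `inv_hess`) when configuring the executor.
let res = Executor::new(cost, solver)
    .configure(|state| state.param(init_param).inv_hessian(init_hess))
    .run()?;
```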
Thanks for that, I hadn't spotted those mistakes while messing around trying some different things. Unfortunately, the issue arises before the Hessian is even checked for trait bounds. I can comment out the lines defining `init_hess` and calling `.inv_hessian(...)` and the compiler error still persists. Actually, the error still shows if I just shorten the last line to

```rust
let res = Executor::new(cost, solver);
```
I did some detective work but just ended up being confused.
The type of the Hessian is inferred from whatever one passes into `inv_hessian` of `IterState` (inside the closure passed to `configure`). However, when `Vector2` is used, the Hessian type `H` is for some reason inferred as `f64`, even without calling `configure` (as you stated before). I wasn't able to figure out why yet.
Interestingly, I can't even get it to work with `DVector`, but I think that's a different and probably unrelated problem.
I will have to do a more in-depth investigation of this, but I'm afraid that this may take a while because my time currently is pretty limited. I will do my best though. If you or someone else wants to have a go at this, feel free to do so.
Thanks for reporting this issue! :)
It did feel like something was fundamentally off. Thanks for looking into it! My problem is actually just least squares fitting, so I'll probably end up going with a Gauss-Newton like method. Although I have an awful lot of spots to fit, so I might start looking into implementing one of the algorithms on a GPU with wgpu instead. Once again thanks for the very readable source files :))
To possibly add to the confusion (sorry!), I am using `na::Vector3<f64>` as my parameter vector and everything is happy.
Typically, when I've had trait bound issues with nalgebra, it's due to conflicting versions. You need to be sure to match the `argmin-math` dependency version with your crate's nalgebra dependency version. What version of nalgebra do you have in your Cargo.toml?
For example, I have:
```toml
[dependencies]
nalgebra = { version = "0.30" }
argmin = { version = "0.8" }
argmin-math = { version = "0.3", features = ["nalgebra_v0_30-serde"] }
```
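One quick way to check for conflicting versions (assuming a standard Cargo project) is to ask Cargo which versions of nalgebra actually end up in the dependency graph:

```shell
# Show every version of nalgebra in the tree and what pulls each one in.
# Two different versions here usually explain "trait not implemented" errors.
cargo tree -i nalgebra

# Or list all duplicated crates at once:
cargo tree --duplicates
```

If `argmin-math`'s `nalgebra_v0_30` feature pulls in nalgebra 0.30 while your crate depends on a different nalgebra, the trait impls apply to types from the "other" version and the bounds won't be satisfied.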
Hope this helps!
I am running into the same problem with `Vector3`. I think I have boiled it down to this constraint in the `Solver` impl of `BFGS`:

```rust
P: /* ... */ + ArgminDot<G, H> + ArgminDot<P, H>,
```
In the nalgebra case, `ArgminDot` is implemented for the case `type(<vec1, vec2>) = scalar` here:

and for `type(<vec1, vec2>) = matrix{dim1 = dim1(vec1), dim2 = dim2(vec2)}` here:
The latter is the one that should be used, but if both vectors are either row- or column-vectors, it resorts to the former. In the ndarray impl, it just assumes that the first is a row- and the second a column-vector when a matrix is requested as output.
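For illustration of the two shapes (plain Rust, not the actual argmin trait machinery): a "dot" of two vectors can mean either the scalar inner product or the matrix-valued outer product, and BFGS needs the latter for its rank-one Hessian updates. The names below are hypothetical.

```rust
/// Scalar inner product: <a, b> = sum_i a_i * b_i.
/// This is the impl that gets picked when both arguments
/// have the same (row or column) orientation.
fn inner(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// Outer product: column vector times row vector, giving a matrix
/// with m[i][j] = a_i * b_j. This is the shape BFGS actually needs.
fn outer(a: &[f64], b: &[f64]) -> Vec<Vec<f64>> {
    a.iter()
        .map(|&x| b.iter().map(|&y| x * y).collect())
        .collect()
}

fn main() {
    let a = [1.0, 2.0];
    let b = [3.0, 4.0];
    println!("{}", inner(&a, &b)); // 11 (a scalar)
    println!("{:?}", outer(&a, &b)); // [[3.0, 4.0], [6.0, 8.0]] (a 2x2 matrix)
}
```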
I'm surprised it works for `DVector` though.
I'm not that familiar with nalgebra unfortunately. Could anyone with nalgebra experience help?
> In the nalgebra case, `ArgminDot` is implemented for the case `type(<vec1, vec2>) = scalar`, here:
Is this right? The signature is:

```rust
impl<...> ArgminDot<N, OMatrix<...>> for Matrix<...>
```

so doesn't this implement `type(vec * scalar) = vec{dim = dim(vec)}`, i.e. multiplication with a scalar?
Oh yes, that's correct. I must have meant this part: https://github.com/argmin-rs/argmin/blob/e9bebb21d99d2ccad1c36d7373e7f4f53eec1539/crates/argmin-math/src/nalgebra_m/dot.rs#L23-L39
Hi, amazing crate! As someone looking to get further into numerical optimization, just the fact that all of these algorithms are gathered in one place, with trait bounds explaining what each algorithm needs is just amazing, so thanks for that :)
I've been trying to get the BFGS example to work on my system and have rewritten it in an attempt to make it work with nalgebra's `Vector2` (importantly not `DVector`, as I would like the performance benefits of stack-allocating my params). This is my current code:
However, I'm getting the following compiler error:
It's unclear to me why `ArgminEye` should be required for `f64` in this case. What can I do to get BFGS to work with the statically sized types of nalgebra for that sweet, sweet stack allocation?
Once again, thanks for the amazing work and the time to help me with my issue :)