sunsided closed this 5 months ago
Seeing #937 now, I'm not entirely sure whether I implemented it in the right place, but I'm happy to move it around.
Hmm. In my personal opinion, I don't think we should "automatically" convert from a matrix to a slice. I think this might be acceptable for vectors, but for matrices the underlying slice representation is ambiguous. For example, `nalgebra` mainly uses column-major matrices, but other libraries might use row-major matrices. This sounds like it could easily lead to bugs if users just casually pass a matrix into something that accepts a slice. For this situation I'd prefer that they explicitly opt in by way of `.as_slice()`, as you suggested.
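To make the layout issue concrete, here is a quick sketch (using an arbitrary 2x3 example; `as_slice` and `transpose` are existing `nalgebra` methods): the same matrix flattens to two different slices depending on the convention.

```rust
use nalgebra::Matrix2x3;

fn main() {
    // `new` takes the elements in reading (row-by-row) order.
    let m = Matrix2x3::new(
        1.0, 2.0, 3.0,
        4.0, 5.0, 6.0,
    );
    // nalgebra's column-major flattening:
    assert_eq!(m.as_slice(), &[1.0, 4.0, 2.0, 5.0, 3.0, 6.0]);
    // The row-major flattening a row-major library would expect for the same matrix:
    assert_eq!(m.transpose().as_slice(), &[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]);
}
```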
Yeah, that's a fair point. It won't solve my problem exactly, but I can provide auto-impls only for vectors. Do you see value in that?
Thinking about it, the `ArrayStorage` is defined as owning a `[[T; R]; C]`. That is a row-major matrix (all row elements are contiguous), so `AsRef` would be consistent. I also just noticed this definition:

```rust
unsafe impl<T, const R: usize, const C: usize> RawStorage<T, Const<R>, Const<C>>
    for ArrayStorage<T, R, C>
{
    type RStride = Const<1>;
    type CStride = Const<R>;
    // ...
}
```
Similarly, the `RawStorage` for `VecStorage` is

```rust
unsafe impl<T, C: Dim> RawStorage<T, Dyn, C> for VecStorage<T, Dyn, C> {
    type RStride = U1;
    type CStride = Dyn;
    // ...
}
```

which also seems to indicate a row-major order by design, if I am reading it correctly.
In any case, I could also specialize the `AsRef` for `Matrix` only to specific storages.
> I don't think we should "automatically" convert from a matrix to a slice.
Are you thinking of `Deref`? `AsRef`/`AsMut` don't make anything happen automatically.
@sunsided: virtually everything in `nalgebra` is column-major. You can get row-major views if you manually specify strides, but otherwise you'll always get column-major data. A row stride of 1 indicates column-major storage, because it means that you only have to move one element in memory to get to the next row.
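A small sketch of what that means in practice (assuming I'm reading the `strides` and `as_slice` APIs right):

```rust
use nalgebra::DMatrix;

fn main() {
    // Build a 2x3 matrix from row-major input data.
    let m = DMatrix::from_row_slice(2, 3, &[
        1.0, 2.0, 3.0,
        4.0, 5.0, 6.0,
    ]);
    // `strides()` returns (row stride, column stride) in elements. A row stride
    // of 1 means the next row is one element away in memory, i.e. the backing
    // storage is laid out column by column.
    assert_eq!(m.strides(), (1, 2));
    assert_eq!(m.as_slice(), &[1.0, 4.0, 2.0, 5.0, 3.0, 6.0]);
}
```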
I'm not sure what you mean by "so `AsRef` would be consistent". Consistent in what sense?
Yep, just noticed my mistake. Apologies for the confusion! 😅
> Are you thinking of `Deref`? `AsRef`/`AsMut` don't make anything happen automatically.
@Ralith: sorry, that was sloppy. What I meant is that if you have something like
```rust
fn foo<T>(slice: impl AsRef<[T]>) {
    // ...
}
```
then I don't think it would be good if you could just casually pass in a matrix to `foo`, since the concept of a matrix does not unambiguously map to a slice. By "automatic" I meant that the matrix could be passed off as a slice without any explicit "conversion" that indicates how the matrix maps to a slice. In my opinion, forcing users to be explicit about how a matrix is mapped to a slice, e.g. using `as_slice`, whose documentation makes it clear how the matrix is laid out as a slice, is more desirable.
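For example, with `foo` from above, the call I'd like to see is the explicit one (a sketch; `Matrix2x3` is just an arbitrary example type):

```rust
use nalgebra::Matrix2x3;

fn foo<T>(slice: impl AsRef<[T]>) {
    let _ = slice.as_ref().len();
}

fn main() {
    let m = Matrix2x3::new(
        1.0, 2.0, 3.0,
        4.0, 5.0, 6.0,
    );
    // Explicit: the `as_slice` documentation pins down the (column-major) layout.
    foo(m.as_slice());
    // Implicit: `foo(m)` would say nothing about how the matrix became a slice.
}
```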
Did you mean to close this, @sunsided? I think it might be a good idea to implement `AsRef` and `AsMut` for column and row vectors, since there is no ambiguity there.
@Andlon That's no bueno, sadly. I would have to constrain one dimension to be strictly greater than 1, because `Vector<T, D, S>` and `RowVector<T, D, S>` are aliases for `Matrix<T, D, U1, S>` and `Matrix<T, U1, D, S>` respectively, but since `D` might be `U1` as well, I get a conflicting implementation.
If I add an implementation for only one of them, it feels weirder than before due to the lack of symmetry.
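A minimal sketch of why the two blanket impls collide (simplified stand-in types, not the actual `nalgebra` definitions):

```rust
use std::marker::PhantomData;

// Simplified stand-ins for nalgebra's dimension and matrix types.
struct U1;
struct Matrix<R, C>(PhantomData<(R, C)>);

trait AsSliceLike {}

// "Column vector" impl: Matrix<D, U1> for any D...
impl<D> AsSliceLike for Matrix<D, U1> {}

// ...and "row vector" impl: Matrix<U1, D> for any D. With D = U1 both impls
// cover Matrix<U1, U1>, so rustc rejects the pair as conflicting (E0119).
// impl<D> AsSliceLike for Matrix<U1, D> {}

fn main() {}
```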
This adds `AsRef<[T]>` and `AsMut<[T]>` for `ArrayStorage`, `VecStorage` and `Matrix` types with `IsContiguous` storage implementations. I understand it is a bit of a duplication given the `as_slice` and `as_mut_slice` implementations, but I am hoping that, given the `core::convert` types, this makes adoption even easier. I stumbled over the lack of these implementations while attempting to implement `nalgebra` support in my `no_std`-targeted Kalman filtering library (https://github.com/sunsided/minikalman-rs/issues/13).