Tehforsch opened this issue 9 months ago (status: Open)
I'm fearful of the explosion this would cause, and wonder if we could do something based on structs. For example, if `tests/gas` made `Primitive` and `Conservative` indexable, then we could have conversion routines to `Vector`, with `impl AsVector for Primitive` defining the length and the conversion between the struct and array representations. The intent here is that adding a `Vector` to a `Vector` would name those types in its error message, rather than verbosely listing each entry. This is a rough sketch and may have serious issues -- it feels like a messy problem.
That is an interesting idea. If I understand you correctly, we'd have some sort of derive macro for structs with heterogeneous dimensions that automatically implements `AsVector`, where the return value isn't just a vector but some "safe" wrapper around a vector that tracks where that vector came from (i.e. whether it came from `Primitive` or `Conservative`). The trick would be that we simplify the problem by not expressing every possible heterogeneous vector, but only the specific ones the user needs. This should allow us to enumerate them at compile time and verify that operations between them are safe.
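To make the idea concrete, here is a rough sketch of what such a wrapper could look like. Everything here is hypothetical: the names `TypedVector` and `AsVector`, the trait shape, and the `Primitive` fields are assumptions for illustration, not an existing API.

```rust
use std::marker::PhantomData;

/// Hypothetical wrapper: a flat array tagged with the struct it came from.
/// Adding vectors with different tags is a compile error that names the
/// structs (e.g. `Primitive` vs `Conservative`) instead of every entry.
struct TypedVector<Source, const N: usize> {
    data: [f64; N],
    _source: PhantomData<Source>,
}

impl<Source, const N: usize> std::ops::Add for TypedVector<Source, N> {
    type Output = Self;
    fn add(self, rhs: Self) -> Self {
        let mut data = self.data;
        for (a, b) in data.iter_mut().zip(rhs.data) {
            *a += b;
        }
        TypedVector { data, _source: PhantomData }
    }
}

/// What a derive macro could implement for each user struct.
trait AsVector<const N: usize>: Sized {
    fn as_vector(&self) -> TypedVector<Self, N>;
    fn from_vector(v: TypedVector<Self, N>) -> Self;
}

/// Example user struct; the field names are placeholders.
struct Primitive {
    density: f64,
    velocity: f64,
    pressure: f64,
}

impl AsVector<3> for Primitive {
    fn as_vector(&self) -> TypedVector<Self, 3> {
        TypedVector {
            data: [self.density, self.velocity, self.pressure],
            _source: PhantomData,
        }
    }
    fn from_vector(v: TypedVector<Self, 3>) -> Self {
        Primitive { density: v.data[0], velocity: v.data[1], pressure: v.data[2] }
    }
}

fn main() {
    let a = Primitive { density: 1.0, velocity: 2.0, pressure: 3.0 };
    let b = Primitive { density: 0.5, velocity: 0.5, pressure: 0.5 };
    // Both operands carry the `Primitive` tag, so the addition type-checks;
    // a `Conservative`-tagged operand here would fail to compile.
    let sum = Primitive::from_vector(a.as_vector() + b.as_vector());
    assert_eq!((sum.density, sum.velocity, sum.pressure), (1.5, 2.5, 3.5));
}
```

Since each user struct gets its own `impl`, the set of heterogeneous vector types stays finite and enumerable at compile time, as described above.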
All of that should be possible to autogenerate, but then we'd still need support for transformations between the two representations, i.e. heterogeneous matrices with some kind of `AsMatrix` implementation that turns them into something that can be passed on to a linear algebra library.
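One way such an `AsMatrix` could look, sketched under the same caveats as before (the names `TypedMatrix`, `AsMatrix`, `apply`, and the marker structs are all hypothetical):

```rust
use std::marker::PhantomData;

// Marker types standing in for the two state representations.
struct Primitive;
struct Conservative;

/// A matrix tagged with source and destination representations, e.g. the
/// Jacobian of a change of variables from `Primitive` to `Conservative`.
struct TypedMatrix<Src, Dst, const R: usize, const C: usize> {
    data: [[f64; C]; R],
    _marker: PhantomData<(Src, Dst)>,
}

/// What a derive macro could emit: erase the tags so the raw data can be
/// handed to a linear algebra library such as nalgebra or faer.
trait AsMatrix<const R: usize, const C: usize> {
    fn as_matrix(&self) -> [[f64; C]; R];
}

impl<Src, Dst, const R: usize, const C: usize> AsMatrix<R, C>
    for TypedMatrix<Src, Dst, R, C>
{
    fn as_matrix(&self) -> [[f64; C]; R] {
        self.data
    }
}

/// Apply the transformation; the `Src`/`Dst` tags document its direction,
/// so composing matrices with mismatched tags would not type-check.
fn apply<Src, Dst, const R: usize, const C: usize>(
    m: &TypedMatrix<Src, Dst, R, C>,
    v: &[f64; C],
) -> [f64; R] {
    let mut out = [0.0; R];
    for i in 0..R {
        for j in 0..C {
            out[i] += m.data[i][j] * v[j];
        }
    }
    out
}

fn main() {
    // 2x2 identity as a stand-in transformation.
    let t = TypedMatrix::<Primitive, Conservative, 2, 2> {
        data: [[1.0, 0.0], [0.0, 1.0]],
        _marker: PhantomData,
    };
    let out = apply(&t, &[3.0, 4.0]);
    assert_eq!(out, [3.0, 4.0]);
    assert_eq!(t.as_matrix()[1][1], 1.0);
}
```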
In my applications, the small matrices/vectors are assembled into large ones (dimension in the millions), with some rows/columns eliminated for boundary conditions, so there is no trivial pattern identifying the dimension of any given entry (and in any case, the size is not known at compile time). As for libraries suitable for the large sizes, `faer` is by far the most interesting; I think it's not super relevant here, though, because typing likely only applies to statically-dimensioned matrices. `nalgebra` is probably the most popular for statically sized matrices.
One thought on this: it might be possible to do some dimensional analysis at run time too. For example, if we had some kind of mutable matrix, we could define

```rust
struct DimensionedMatrix {
    m: Matrix,
    row_dimensions: Vec<Dimension>,
    column_dimensions: Vec<Dimension>,
}
```

where `Matrix` is some dynamically-sized type from a library of our choice. We could then fill the matrix via something like
```rust
fn add_entry<const D: Dimension>(matrix: &mut DimensionedMatrix, i: usize, j: usize, q: Quantity<f64, D>) {
    matrix.m[(i, j)] = q.value_unchecked();
    match matrix.row_dimension(i) {
        Some(dimension) => assert_eq!(dimension, D),
        None => matrix.store_row_dimension(i, D),
    }
    match matrix.column_dimension(j) {
        Some(dimension) => assert_eq!(dimension, D),
        None => matrix.store_column_dimension(j, D),
    }
}
```
Of course, this comes at a performance and memory cost, so it might be something you only want to enable in test runs to verify the code and then disable for actual runs. Then again, the runtime cost would be O(nm) with O(n+m) memory, so it might even be tolerable in some applications. I am leaving many implementation details to the imagination here, but I am mainly wondering whether this is something that would be interesting in applications?
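A caveat on the sketch above: using a user-defined `Dimension` as a `const` generic parameter requires unstable Rust features, but the same check works with runtime values. A self-contained sketch along those lines, where the `Dimension` representation, the `Vec<Vec<f64>>` storage, and all method names are placeholders rather than any crate's actual API:

```rust
/// Placeholder runtime dimension: exponents of (mass, length, time).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Dimension {
    mass: i8,
    length: i8,
    time: i8,
}

/// Dense matrix plus per-row/per-column dimensions, discovered lazily
/// as entries are inserted.
struct DimensionedMatrix {
    data: Vec<Vec<f64>>,
    row_dimensions: Vec<Option<Dimension>>,
    column_dimensions: Vec<Option<Dimension>>,
}

impl DimensionedMatrix {
    fn zeros(n: usize, m: usize) -> Self {
        Self {
            data: vec![vec![0.0; m]; n],
            row_dimensions: vec![None; n],
            column_dimensions: vec![None; m],
        }
    }

    /// Store an entry, asserting that its dimension agrees with whatever
    /// dimension was already recorded for its row and column.
    fn add_entry(&mut self, i: usize, j: usize, value: f64, d: Dimension) {
        self.data[i][j] = value;
        match self.row_dimensions[i] {
            Some(existing) => assert_eq!(existing, d, "row {i} dimension mismatch"),
            None => self.row_dimensions[i] = Some(d),
        }
        match self.column_dimensions[j] {
            Some(existing) => assert_eq!(existing, d, "column {j} dimension mismatch"),
            None => self.column_dimensions[j] = Some(d),
        }
    }
}

fn main() {
    let velocity = Dimension { mass: 0, length: 1, time: -1 };
    let mut m = DimensionedMatrix::zeros(2, 2);
    m.add_entry(0, 0, 3.0, velocity);
    // Same row as the first entry, so the dimensions must (and do) match.
    m.add_entry(0, 1, 4.0, velocity);
    assert_eq!(m.row_dimensions[0], Some(velocity));
    assert_eq!(m.data[0][1], 4.0);
}
```

The check is O(1) per entry, so filling an n-by-m matrix costs O(nm) time with O(n+m) extra memory, matching the estimate above.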
Extracted from the discussion in #36. Comment authors: @jedbrown, @Tehforsch, @jedbrown.