Closed: YuhanLiin closed this issue 2 years ago
@YuhanLiin You should be able to store the values in a `Vec<Value>` or `DenseSlotMap<KeyType, Value>` separately from the data structure itself. Then you can store keys to this vector or slotmap in the data structure. Does this work for you?
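A minimal sketch of that layout, assuming the `slotmap` crate (`ValueKey` and `Landmark` are illustrative names, not from the library):

```rust
use slotmap::{new_key_type, DenseSlotMap};

new_key_type! { struct ValueKey; }

// Illustrative value type; in practice this might be a frame,
// landmark, or observation.
struct Landmark {
    descriptor: Vec<f64>,
}

fn main() {
    // Values live in the slotmap, outside the KNN structure.
    let mut values: DenseSlotMap<ValueKey, Landmark> = DenseSlotMap::with_key();
    let key = values.insert(Landmark {
        descriptor: vec![0.1, 0.2],
    });

    // The KNN structure stores only `key`; when a query returns keys,
    // the values are looked up here.
    if let Some(landmark) = values.get(key) {
        println!("{:?}", landmark.descriptor);
    }
}
```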
I think it will, but I'll need to do some experimenting. By the way, why are those methods necessary for KNN functionality?
I've encountered another obstacle caused by `Metric` not supporting different input param types. This requires GATs to solve, so I think the `linfa` integration will need to wait until then.
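For reference, what GATs would make possible is roughly the following shape, where each call to `distance` can borrow its arguments for independent lifetimes. This is a hypothetical sketch, not the crate's actual trait:

```rust
// Hypothetical GAT-based metric: the point type is parameterized by a
// lifetime, so the two arguments to `distance` may borrow for
// different (arbitrarily short) durations.
trait GatMetric {
    type Point<'a>;
    type Unit: Ord;

    fn distance<'a, 'b>(&self, a: &Self::Point<'a>, b: &Self::Point<'b>) -> Self::Unit;
}
```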
You might want to see if you can use an enumeration as the key type and store references to the data in the metric itself. That might allow you to work with one key type, but internally it is an enum. For instance:
```rust
enum InternalOrExternal<'a> {
    Internal(usize),
    External(&'a [f64]),
}
```
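To make the idea concrete, here is a rough sketch of a metric resolving such keys against data it holds a reference to. The `Metric` trait below is a simplified stand-in (returning `u64` directly), not the crate's exact signature:

```rust
// Simplified stand-in for the Metric trait; the real trait differs
// (e.g. it has an associated Unit type).
trait Metric<P> {
    fn distance(&self, a: &P, b: &P) -> u64;
}

enum InternalOrExternal<'a> {
    Internal(usize),
    External(&'a [f64]),
}

// The metric holds a reference to the internally stored points, so an
// `Internal` key can be resolved to its slice before measuring.
struct Euclidean<'d> {
    data: &'d [Vec<f64>],
}

impl<'d, 'a> Metric<InternalOrExternal<'a>> for Euclidean<'d> {
    fn distance(&self, a: &InternalOrExternal<'a>, b: &InternalOrExternal<'a>) -> u64 {
        let a = match a {
            InternalOrExternal::Internal(ix) => self.data[*ix].as_slice(),
            InternalOrExternal::External(s) => *s,
        };
        let b = match b {
            InternalOrExternal::Internal(ix) => self.data[*ix].as_slice(),
            InternalOrExternal::External(s) => *s,
        };
        let d2: f64 = a.iter().zip(b).map(|(x, y)| (x - y).powi(2)).sum();
        // Non-negative floats keep their ordering under to_bits, which
        // gives an Ord-friendly integer distance.
        d2.sqrt().to_bits()
    }
}
```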
> I think it will, but I'll need to do some experimenting. By the way, why are those methods necessary for KNN functionality?
In code I write, I typically need to get access to the points returned by `knn` to compare some of them in addition to the values, which typically correspond to frames, landmarks, or observations in images.
The `Metric` param types I'm using differ in their lifetimes, and those lifetimes also need to be arbitrarily short relative to the lifetime of the data structure. I don't think enums work well with that.
> In code I write, I typically need to get access to the points returned by `knn` to compare some of them in addition to the values, which typically correspond to frames, landmarks, or observations in images.
Since the KNN traits return points and values already, do we still need additional accessors?
> Since the KNN traits return points and values already, do we still need additional accessors?
The main reason this was done is so that a default implementation can be provided that retrieves them, but this is limited by the lack of GATs (it must allocate a vector). I don't think any of my code relies on the ability to later retrieve a key or value using an index, so we should be able to remove them, but then implementors will need to supply implementations of all the trait methods, which is probably fine.
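A hypothetical sketch of that trade-off, with simplified signatures (the real trait's types differ): given `point` and `value` accessors, the trait can supply a default `knn` built on them, but without GATs that default must collect into an allocated `Vec` rather than return a borrowing iterator.

```rust
// Hypothetical, simplified Knn trait; names and signatures are
// illustrative only.
trait Knn {
    type Point;
    type Value;

    /// Keys of the `num` nearest neighbors to `query`.
    fn nearest_keys(&self, query: &Self::Point, num: usize) -> Vec<usize>;
    fn point(&self, key: usize) -> &Self::Point;
    fn value(&self, key: usize) -> &Self::Value;

    /// Default implementation built on the accessors. Without GATs the
    /// return type cannot be a borrowing iterator associated with
    /// `Self`, so it must collect into an allocated Vec. Removing
    /// `point`/`value` would force each implementor to write this
    /// method itself.
    fn knn(&self, query: &Self::Point, num: usize) -> Vec<(&Self::Point, &Self::Value)> {
        self.nearest_keys(query, num)
            .into_iter()
            .map(|key| (self.point(key), self.value(key)))
            .collect()
    }
}
```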
The `point` and `value` methods in `Knn` require values in a KNN structure to be queried using the index type as a key. This is actually not possible for tree-based algorithms, since the tree structure layout is based on the distances between points. The only way to efficiently query for a specific datum is to use the point as the key, which is effectively a nearest-neighbor search.
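To illustrate with a toy example (a hypothetical kd-tree, not this crate's internals): locating a specific datum means descending by coordinate comparisons against the query point, i.e. searching by the point itself rather than by an index.

```rust
// Toy 2-D kd-tree node; the layout is determined by the points'
// coordinates, so there is no index to look a value up by.
struct KdNode {
    point: [f64; 2],
    value: u32,
    left: Option<Box<KdNode>>,
    right: Option<Box<KdNode>>,
}

// Retrieving the value for a known point is itself a tree search
// keyed on the point's coordinates.
fn find<'a>(node: &'a KdNode, query: &[f64; 2], depth: usize) -> Option<&'a u32> {
    if &node.point == query {
        return Some(&node.value);
    }
    let axis = depth % 2;
    let child = if query[axis] < node.point[axis] {
        node.left.as_ref()
    } else {
        node.right.as_ref()
    };
    child.and_then(|c| find(c, query, depth + 1))
}
```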