The user ought, at any point, to be able to get a list of all devices (not necessarily handles to all ifaces), routes, etc. There are two forms this could take:
1) a series of callbacks encompassing the space
2) a function that returns all the data into a buffer (requiring streaming for very large return sets)
I think 2 is the better one here, especially because other events could be interleaved with the series of callbacks, leading to confusion (they're also less performant). So if we're returning data, there are three things we could return:
1) A set of shared pointers to actual objects, or
2) A set of copied objects, or
3) A set of summaries
In all cases, it's probably necessary that we have a streaming solution allowing for multiple calls. I don't think such a solution needs to (or should) guarantee atomicity.
In the end, I'm reminded very much of the readdir(3) library function, which is trying to solve the same problem. A space of entries, which might be modified at any time, is to be enumerated over multiple calls. The complete inodes describing these files are fairly large, and we're usually interested in only a small amount of the information.
This points to a pretty solid solution IMHO:
We will return copies of the data, rather than sharing pointers,
except for our streaming fulcrum, which will be an opaque pointer to a (now shared) object
By returning this pointer, we can restart our stream (details below)
Copy all the data that's reasonable to copy. Are these objects really that big? Look at e.g. IFLA_STATS
That seems reasonable. There are two potential problems, both related to streaming:
hnext pointers currently form a finite list within each hash bucket, so they can't be used to walk the entire space. this isn't a big deal: either make them flow across buckets, or just recalculate our hash slot and move on to the next slot when we exhaust our list. the latter solution requires less bookkeeping, but requires walking all of our slots, whereas a complete cross-bucket list walks only those slots which are occupied.
If our streaming fulcrum is removed, what happens? it is no longer reachable from any object in the hash, but can still point into the hash. that would suffice, except that the fulcrum's target can itself be removed after the fulcrum is, and the fulcrum would not be updated. note that a series of additions and removals could lead to multiple deleted objects referring to the same target. there can furthermore be several fulcra per netstack. i see no solution that doesn't require either O(n) space or O(lg n) time (the latter requiring O(lg n) bookkeeping at fulcrum removal time to work: a sorted list of removed objects) :/.
this latter seems to have no obvious exit, but.....it is unlikely that there will ever be a great many objects pending deletion, nor more than a few active enumerations. maybe when we remove an object from the hash, check the active enumerations list (whose entries might be active and/or deleted objects), see if the removed object is hnext for any of them, and fix up hnext (O(1)) in such cases. so that would be O(n) on every delete, where n is the number of active enumerations, but n ought almost always to be very small...