Open sleeepyjack opened 4 months ago
Few more implementation details come to my mind:

`insert`: With perfect hashing, we don't need to check the content of the slot first. Instead, we can directly issue the store instruction. This is handled outside the probing iterator. I think we have to add a `constexpr` switch to trigger this specialized code path in the `insert` device function.

As mentioned in Slack, I would like to work on this issue.
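A minimal host-side sketch of that `constexpr` switch (illustrative names only; this is not cuco's actual `insert` implementation): a compile-time flag selects the perfect-hashing path, which stores directly without inspecting the slot first.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <optional>
#include <vector>

// Toy map illustrating the compile-time fast-path switch for insert.
template <class Key, class Hash>
struct toy_map {
  std::vector<std::optional<Key>> slots;
  Hash hash;

  explicit toy_map(std::size_t capacity) : slots(capacity) {}

  template <bool UsesPerfectHashing>
  bool insert(Key const& key) {
    if constexpr (UsesPerfectHashing) {
      // Perfect hashing: the slot belongs to this key by construction,
      // so we issue the store directly without checking the slot first.
      slots[hash(key) % slots.size()] = key;
      return true;
    } else {
      // Generic path: inspect the slot before storing.
      auto& slot = slots[hash(key) % slots.size()];
      if (slot.has_value()) return false;  // occupied slot
      slot = key;
      return true;
    }
  }
};
```

The switch would be wired through the probing scheme type at compile time, so the generic probing loop is never instantiated for the perfect-hashing path.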
Is your feature request related to a problem? Please describe.
Perfect hash functions describe an injective mapping from the input key domain into the hash map's slot index domain. In other words, each distinct key hashes to a distinct slot in the map.
This setup allows for a set of optimizations.
Describe the solution you'd like
Add a new class `cuco::perfect_hashing<class Hash>` to our probing scheme zoo which behaves as follows: when the dereferencing operator of the probing iterator is called for the first time (at the initial probing position), return `slots + hash(key)`. After incrementing the iterator, always return `end()`, meaning that there is at most one probing step.

A user must ensure that the `Hash` function in combination with the input key set actually forms a perfect hash function, and that the maximum hash value is smaller than the map's capacity. Otherwise, the behavior is undefined.

Notes on the implementation:
- `probing_iterator` class: This new probing scheme doesn't fit into the logic of the existing iterator. Thus I propose to let each probing scheme define its own `probing_iterator` as a member class.
- `KeyEqual` operator.
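The single-step iterator idea above can be sketched on the host like this (hypothetical names and interface; not cuco's actual probing-scheme contract): the scheme defines its own `probing_iterator` member class, whose first dereference yields the hashed slot index and which equals `end()` after one increment.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>

// Sketch of a probing scheme that owns its probing_iterator member class.
template <class Hash>
struct perfect_hashing {
  Hash hash;

  struct probing_iterator {
    std::size_t pos;  // slot index of the single probing position
    bool done;        // true once advanced past the single step

    std::size_t operator*() const { return pos; }
    probing_iterator& operator++() {
      done = true;  // there is at most one probing step
      return *this;
    }
    bool operator==(probing_iterator const& other) const {
      return done == other.done;
    }
  };

  template <class Key>
  probing_iterator begin(Key const& key, std::size_t /*capacity*/) const {
    // Precondition (user's responsibility): Hash is perfect over the key
    // set and hash(key) < capacity; otherwise behavior is undefined.
    return probing_iterator{static_cast<std::size_t>(hash(key)), false};
  }
  probing_iterator end() const { return probing_iterator{0, true}; }
};
```

Letting each scheme carry its own iterator type keeps the generic probing loop unchanged while allowing this degenerate one-step traversal.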
operator.Describe alternatives you've considered
There is one more optimization we could additionally apply, but I would vote against it due to technical reasons:

Perfect hashing guarantees that there are no collisions. Thus, we could `insert` keys using non-atomic `STG` instructions, which have proven to be significantly faster than atomic CAS operations. However, this leads to some undesirable side effects due to the relaxed memory ordering of the GPU, which ultimately lead to implausible return values from some of our APIs (`insert_and_find` and also bulk `insert`; see the example in the bottom paragraph of https://github.com/NVIDIA/cuCollections/issues/475#issuecomment-2113437463).

If this optimization is desired, it can still be enabled by specifying `cuda::thread_scope_thread` when instantiating the map type. This is a bit hacky, but I think it's better than breaking the existing logic and introducing spurious errors in the aforementioned return values.

Additional context
See discussion #475
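As a host-side illustration of the `cuda::thread_scope_thread` escape hatch discussed above (hypothetical names; not cuco's internals), a scope tag chosen at instantiation could select plain relaxed stores over CAS:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative scope tag standing in for cuda::thread_scope.
enum class thread_scope { thread, device };

template <thread_scope Scope>
struct slot_storage {
  static constexpr std::int64_t empty_sentinel = -1;
  std::vector<std::atomic<std::int64_t>> slots;

  explicit slot_storage(std::size_t n) : slots(n) {
    for (auto& s : slots) s.store(empty_sentinel, std::memory_order_relaxed);
  }

  bool insert_at(std::size_t idx, std::int64_t key) {
    if constexpr (Scope == thread_scope::thread) {
      // Thread scope: no visible concurrency, so a relaxed store suffices.
      // This models why the non-atomic STG optimization would be legal
      // under perfect hashing, at the cost of meaningless return values.
      slots[idx].store(key, std::memory_order_relaxed);
      return true;
    } else {
      // Device scope: CAS against the empty sentinel so that concurrent
      // inserters agree on which one won the slot.
      auto expected = empty_sentinel;
      return slots[idx].compare_exchange_strong(expected, key);
    }
  }
};
```

The `thread`-scope path always reports success, which mirrors the "implausible return values" caveat: the store cannot observe whether the slot was already taken.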