crossbeam-rs / crossbeam

Tools for concurrent programming in Rust

`crossbeam-skiplist`: allows lookup to be customized. #1132

Open al8n opened 4 weeks ago

al8n commented 4 weeks ago

Hi, this PR solves #1133 by changing the lookup API from

where
  K: Borrow<Q>,
  Q: ?Sized + Ord,

to

where
  Q: ?Sized + Ord + equivalent::Comparable<K>,

equivalent is the crate that indexmap uses for its lookup APIs.
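
For context, the two traits from the equivalent crate look roughly like this (a paraphrase of its documentation, not the verbatim source); the crate also ships blanket impls for any Q that K can Borrow as, which is what keeps today's Borrow-style lookups working:

use core::borrow::Borrow;
use core::cmp::Ordering;

// A custom notion of equality against the stored key type K.
pub trait Equivalent<K: ?Sized> {
    fn equivalent(&self, key: &K) -> bool;
}

// A custom ordering against the stored key type K; this is what the new
// lookup bound asks for.
pub trait Comparable<K: ?Sized>: Equivalent<K> {
    fn compare(&self, key: &K) -> Ordering;
}

// Blanket impls (simplified): anything usable through Borrow today is
// automatically Equivalent/Comparable, so the new bound is not more restrictive.
impl<Q: ?Sized + Eq, K: ?Sized + Borrow<Q>> Equivalent<K> for Q {
    fn equivalent(&self, key: &K) -> bool {
        PartialEq::eq(self, key.borrow())
    }
}

impl<Q: ?Sized + Ord, K: ?Sized + Borrow<Q>> Comparable<K> for Q {
    fn compare(&self, key: &K) -> Ordering {
        Ord::cmp(self, key.borrow())
    }
}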

I would like to suggest this change because, when using SkipMap, I find the lookup API is limited by the Borrow trait. For example, with the Borrow bound the code below cannot be implemented: we cannot write a working impl<'a> core::borrow::Borrow<FooRef<'a>> for Foo, because Borrow::borrow must return a reference and Foo stores no FooRef to borrow from.

use crossbeam_skiplist::SkipMap;

#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct Foo {
    a: u64,
    b: u32,
}

#[derive(PartialEq, Eq)]
struct FooRef<'a> {
    data: &'a [u8],
}

impl PartialOrd for FooRef<'_> {
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for FooRef<'_> {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
        let a = u64::from_be_bytes(self.data[..8].try_into().unwrap());
        let b = u32::from_be_bytes(self.data[8..].try_into().unwrap());
        let other_a = u64::from_be_bytes(other.data[..8].try_into().unwrap());
        let other_b = u32::from_be_bytes(other.data[8..].try_into().unwrap());
        Foo { a, b }.cmp(&Foo {
            a: other_a,
            b: other_b,
        })
    }
}

impl<'a> core::borrow::Borrow<FooRef<'a>> for Foo {
    fn borrow(&self) -> &FooRef<'a> {
        // we cannot return a reference to a local variable
        todo!()
    }
}

fn lookup() {
    let s = SkipMap::new();
    let foo = Foo { a: 1, b: 2 };
    s.insert(foo, 12);

    let buf = 1u64
        .to_be_bytes()
        .iter()
        .chain(2u32.to_be_bytes().iter())
        .copied()
        .collect::<Vec<_>>();

    let foo_ref = FooRef { data: &buf };
    let ent = s.get(&foo_ref).unwrap();
    assert_eq!(ent.key().a, 1);
    assert_eq!(ent.key().b, 2);
    assert_eq!(*ent.value(), 12);
}
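
For comparison, the workaround the current Borrow-based bound forces is to decode the bytes back into an owned Foo before every lookup. A minimal sketch (lookup_owned is a hypothetical helper, reusing the Foo type above); it happens to be cheap for this toy key, but for keys that own heap data it means an allocation per lookup:

use crossbeam_skiplist::SkipMap;

// Hypothetical helper: rebuild an owned key from the 12-byte big-endian
// encoding, then look it up. Foo: Borrow<Foo> always holds, so this compiles
// with the existing Borrow-based API.
fn lookup_owned(s: &SkipMap<Foo, i32>, data: &[u8]) -> Option<i32> {
    let a = u64::from_be_bytes(data[..8].try_into().unwrap());
    let b = u32::from_be_bytes(data[8..12].try_into().unwrap());
    let key = Foo { a, b }; // owned key rebuilt for every lookup
    s.get(&key).map(|ent| *ent.value())
}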

After changing the bound to Q: ?Sized + Ord + Comparable<K>, just as indexmap does in its lookup API, the above example can be implemented:

use crossbeam_skiplist::SkipMap;
use equivalent::{Comparable, Equivalent};

#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct Foo {
    a: u64,
    b: u32,
}

#[derive(PartialEq, Eq)]
struct FooRef<'a> {
    data: &'a [u8],
}

impl PartialOrd for FooRef<'_> {
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for FooRef<'_> {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
        let a = u64::from_be_bytes(self.data[..8].try_into().unwrap());
        let b = u32::from_be_bytes(self.data[8..].try_into().unwrap());
        let other_a = u64::from_be_bytes(other.data[..8].try_into().unwrap());
        let other_b = u32::from_be_bytes(other.data[8..].try_into().unwrap());
        Foo { a, b }.cmp(&Foo {
            a: other_a,
            b: other_b,
        })
    }
}

impl Equivalent<Foo> for FooRef<'_> {
    fn equivalent(&self, key: &Foo) -> bool {
        let a = u64::from_be_bytes(self.data[..8].try_into().unwrap());
        let b = u32::from_be_bytes(self.data[8..].try_into().unwrap());
        a == key.a && b == key.b
    }
}

impl Comparable<Foo> for FooRef<'_> {
    fn compare(&self, key: &Foo) -> std::cmp::Ordering {
        let a = u64::from_be_bytes(self.data[..8].try_into().unwrap());
        let b = u32::from_be_bytes(self.data[8..].try_into().unwrap());
        Foo { a, b }.cmp(key)
    }
}

fn lookup() {
    let s = SkipMap::new();
    let foo = Foo { a: 1, b: 2 };

    s.insert(foo, 12);

    let buf = 1u64
        .to_be_bytes()
        .iter()
        .chain(2u32.to_be_bytes().iter())
        .copied()
        .collect::<Vec<_>>();
    let foo_ref = FooRef { data: &buf };

    let ent = s.get(&foo_ref).unwrap();
    assert_eq!(ent.key().a, 1);
    assert_eq!(ent.key().b, 2);
    assert_eq!(*ent.value(), 12);
}
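
One more note: because of the blanket impls in equivalent, lookups that rely on Borrow today should keep compiling unchanged after the bound switches to Comparable<K>. A quick sketch (assuming get keeps its current &Q parameter shape):

use crossbeam_skiplist::SkipMap;

fn borrow_style_lookup_still_works() {
    let s: SkipMap<String, i32> = SkipMap::new();
    s.insert("alpha".to_string(), 1);

    // &str already works today via String: Borrow<str>; under the proposed
    // bound it keeps working because the blanket impl gives
    // str: Comparable<String>.
    assert_eq!(*s.get("alpha").unwrap().value(), 1);
}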

This feature is quite important when using SkipMap with types from zero-copy deserialization frameworks such as rkyv, because it lets lookups use a borrowed view of the key instead of allocating an owned one.