paulchernoch / hilbert

Hilbert Transformation and inverse for Rust
MIT License

Using a different number of bits per dimension? #3

Open rustyconover opened 4 years ago

rustyconover commented 4 years ago

Hi Paul,

Reading your code it seems that the same number of bits are used for each dimension that is transformed into a Hilbert curve index.

pub fn hilbert_transform(&self, bits_per_dimension: usize) -> BigUint
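
For illustration (a rough sketch; I'm assuming the Point::new(id, &coords) constructor and import path shown in the README), the single bits_per_dimension argument applies to every coordinate:

use hilbert::Point;

// Hypothetical example: X and Y need 16 bits of precision and Z only needs 8,
// but the current API makes all three coordinates use the same width.
let point = Point::new(0, &[12_345u32, 678, 42]);
let index = point.hilbert_transform(16); // 16 bits for X, Y, *and* Z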

Would it be reasonable to use a different number of bits in each dimension?

Thank you,

Rusty

DoubleHyphen commented 3 years ago

As it just so happens, I was having pretty much the same discussion with mr Blake Hawkins over at his crate zdex.

Before I start copy-pasting, do you have any specific use-cases in mind for doing this?

rustyconover commented 3 years ago

Hi @DoubleHyphen,

Yes, my intended use for this is multidimensional indexing. Sometimes a dimension doesn't have the same amount of precision as the others. For example, if I am indexing X, Y, Z coordinates there may be less precision in Z than in X or Y. It may make sense not to spend the bits there.

Rusty

DoubleHyphen commented 3 years ago

1) Mr Chernoch appears to be AWOL with regard to this repository. Apart from the issues and PRs being inactive for the better part of a year, the e-mail I sent him remains unanswered. I strongly suspect that we'll need to resort to forking this and doing our own thing.

2) Leaving the issue of "different bits per dimension" aside, would it be useful for your use-case to use nalgebra's Point data-type? I'm asking because I've implemented z-order indexing and hilbert-order indexing for them¹ in this repository right here, and am currently discussing whether it'd serve better as part of nalgebra per se or as a separate crate. I've only implemented it for Hilbert keys that can fit within a primitive unsigned integral data type, which means you're currently confined to 128 bits, but if you need fewer then it automatically selects a u64, a u32, or even a u16 depending on context.

¹The latter ends up needing the former! Z-order indexing is what mr Chernoch calls "interleaving" here, but I've implemented it more efficiently.
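
For anyone following along, here's a minimal sketch of what "interleaving"/z-ordering means for two coordinates (the naive bit-by-bit version; neither mr Chernoch's code nor mine):

// Naive Morton (z-order) encoding of two 32-bit coordinates:
// bit i of x lands at bit 2*i of the key, bit i of y at bit 2*i + 1.
fn morton_2d(x: u32, y: u32) -> u64 {
    let mut key = 0u64;
    for i in 0..32 {
        key |= (((x >> i) & 1) as u64) << (2 * i);
        key |= (((y >> i) & 1) as u64) << (2 * i + 1);
    }
    key
}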

3) Well, "multi-dimensional indexing" is more vague than I'd hoped; after all, "multi-dimensional indexing" is what Hilbert ordering is all about. In any event, allow me to give more context.

First things first, it appears that you aren't attempting to encode heterogeneous data, as mr Hawkins was. This simplifies things a bit. Again, however, trying to implement the feature you're requesting turns out to raise very complicated questions as to how and even more complicated questions as to why.

In particular: the kernel of the Hilbert encoding itself wouldn't be all that complicated to implement with variable bit widths, and if the compiler folds the constants then I suppose it wouldn't lose performance either. The problems begin afterwards, when you take the bits, scattered as they are across the various coordinates, and try to interleave them. That requires a much more general operation known as "bit scattering" or "bit depositing". With that said:

  1. My own code can't be used for that; it can only interleave each bit with the same amount of zeroes.
  2. Thanks to the bitwise crate (which doesn't even compile on my machine but gave me some excellent inspiration) I found out about the core::arch::x86::_pdep_u32 and core::arch::x86_64::_pdep_u64 compiler intrinsics, which can do exactly what you're after, with only a very small performance hit. They are not as performant as my own specialised code, but the difference is small (~10%) and they can handle this use-case with comical ease (see the sketch after this list). But... that immediately means that your code won't be portable to other CPU architectures.
  3. bitwise also contains a portable implementation. Yes, it's more portable, but no, it's about 4x less performant; but yes, if you're sorting lots of Hilbert keys then it might not matter so much, because the key extraction is O(N) and the sorting is O(N log N); but no, if you're not using a radix sort but a comparison-based sort, then the amount of bits won't make a difference unless you move to a smaller data-type.
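
As promised, a rough sketch of what such a bit deposit looks like (my assumptions: x86_64 only, with runtime BMI2 detection; the fallback branch is purely illustrative and is not bitwise's actual code):

// Deposit the low bits of value into the positions marked by mask,
// e.g. into one coordinate's (possibly non-uniform) slots of a Morton key.
#[cfg(target_arch = "x86_64")]
fn scatter_bits(value: u64, mask: u64) -> u64 {
    if is_x86_feature_detected!("bmi2") {
        // Safety: guarded by the runtime feature check above.
        unsafe { core::arch::x86_64::_pdep_u64(value, mask) }
    } else {
        // Portable fallback: walk the mask and place the bits one at a time.
        let mut result = 0u64;
        let mut src = value;
        let mut m = mask;
        while m != 0 {
            let bit = m & m.wrapping_neg(); // lowest set bit of the mask
            if src & 1 != 0 {
                result |= bit;
            }
            src >>= 1;
            m &= m - 1; // clear that mask bit and move on
        }
        result
    }
}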

But the question remains... 4) is that really what you're after? Are you sure that the initial bits are equally important, so as to put them in equally-important positions? Can you be certain about the exact amount of bits that each co-ordinate has to offer? I'm not yet convinced that there's a use-case that legitimately calls for a different amount of bits for each co-ordinate, but if you have one such use-case then I'd really like to know about it.

rustyconover commented 3 years ago

Hi @DoubleHyphen

You can see more of my thoughts about this here: https://rusty.today/posts/multi-dimensional-indexing.

I'm not overly concerned about performance compared to correctness at this point. Overall, I'd be intensely interested in implementing https://github.com/davidmoten/sparse-hilbert-index in Rust.

Rusty

DoubleHyphen commented 3 years ago

Hah! Your immediate example is, again, latitude and longitude, just like the one described in zdex. In case you didn't see the comment I linked you, I analyse this further there. (Also, seeing as you and its creator want the exact same things, maybe it'd be worth communicating with each other?)

Even then, however, there is discussion to be had. You mentioned "sensors". Are those going to be spread all around the globe? How many bits of accuracy are you going to need for each of your co-ordinates, and how exactly are you going to normalise them?

Anyway, the exact thing you're asking for isn't really all that hard to implement, particularly if the coordinates are ordered from "most bits" to "least bits". Look at mr Chernoch's implementation (or, even better, the refactored version I linked above) and you'll see a loop that runs from the most-significant bit of each coordinate to the second-least-significant one. All it'd take is this:

1) Give the amount of bits per coordinate as an input, together with the coordinates themselves.
2) Keep 2 separate masks for each coordinate, instead of having the same 2 masks for all of them.
3) Make sure that, at each point, the loop only checks the coordinates for which there are still bits left to check.
4) At the end, when you perform the Morton encoding of the changed coordinates, ensure that you use a method that can support different bits per coordinate.

and voilà! it's done. Would you be interested in doing it as an exercise, or should I do it when I find the time? (Which won't be any time very soon, sadly.)

PS: I think that the biggest challenge would be to decide exactly how one is going to "swap" the lowest bits of two co-ordinates when they don't have the same amount of bits. I suppose that it'd be enough to only swap the bits that exist in both.
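
To make that last idea concrete, here's a tiny illustrative helper (entirely hypothetical; u32 coordinates and explicit per-coordinate bit widths assumed):

// Illustrative only: swap just the low bits that exist in *both* coordinates.
// bits_a and bits_b are the bit widths of a and b respectively.
fn swap_shared_low_bits(a: &mut u32, b: &mut u32, bits_a: u32, bits_b: u32) {
    let shared = bits_a.min(bits_b);
    let mask = if shared >= 32 { u32::MAX } else { (1 << shared) - 1 };
    let diff = (*a ^ *b) & mask; // positions where the shared bits differ
    *a ^= diff;
    *b ^= diff;
}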

tavianator commented 3 years ago

I have a simple implementation here if it helps anyone: https://github.com/tavianator/kd-forest/blob/main/src/hilbert.rs

DoubleHyphen commented 3 years ago

A new challenger appears!

Mad props; while I haven't had the pleasure to seriously study this yet, it looks very interesting. Just two questions: 1) Have you published this to Cargo? 2) Do you have documentation somewhere we could take a look at?

tavianator commented 3 years ago

  1. It's not on Cargo; it's just part of a toy project I converted from C. I might as well just contribute it here rather than publish a separate crate, I think. (Or someone else can, if I don't get to it first.)

  2. What I based this on years ago was this paper. I don't have any other documentation.

DoubleHyphen commented 3 years ago

This paper looks fantastic. It definitely merits close study. Mad thanks. (Sci-hub didn't have it, so it was a very pleasant surprise to see it just posted here.)

I think there's quite a bit of material from your code that could be part of a separate crate. If I end up making a lindel crate for linearisation and delinearisation, could I include its relevant parts with a note of thanks?

PS: If I'm not being too nosy, how did you end up discovering this discussion?

rustyconover commented 3 years ago

@tavianator this paper does indeed look quite interesting! Thank you for posting it.

tavianator commented 3 years ago

@DoubleHyphen Sure, the code is under a very permissive license.

I actually don't remember how I found this. I think I probably starred it earlier this year while I was searching for an existing Rust implementation, hoping I wouldn't have to port the old code.

@rustyconover No problem!

DoubleHyphen commented 3 years ago

@tavianator I gave your code a quick glance. It appears to contain functions computing the inverse Hilbert index (key -> coordinates) but none for the forward one (coordinates -> key). Is that function indeed missing, or did I misread your code?

tavianator commented 3 years ago

@DoubleHyphen Yeah, I only needed the conversion in that direction.

rustyconover commented 3 years ago

I'd certainly be appreciative to have the conversion go both ways. Is it hard to write the inverse?

tavianator commented 3 years ago

@rustyconover No, it actually looks easier than the other direction. It's algorithm 7, CompactHilbertIndex, on page 25 of the linked paper.
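
// Note: the helper functions rotate_right, extract_mask, entry_point and
// intra_direction used below are defined in the linked hilbert.rs; they are
// not repeated here.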

/// T transformation: XOR with the entry point e, then rotate right by the direction d
fn t(dims: u32, e: usize, d: u32, b: usize) -> usize {
    rotate_right(b ^ e, d, dims)
}

/// GrayCodeInverse: recover i from its Gray code g by prefix XOR
fn gray_code_inverse(mut g: usize) -> usize {
    let mut i = 0;

    while g != 0 {
        i ^= g;
        g >>= 1;
    }

    i
}

/// GrayCodeRank: pack together the bits of i selected by the mask mu
fn gray_code_rank(dims: u32, mu: usize, i: usize) -> usize {
    let mut r = 0;

    for k in (0..dims).rev() {
        if mu & (1 << k) != 0 {
            r <<= 1;
            r |= (i >> k) & 1;
        }
    }

    r
}

/// CompactHilbertIndex: map a point to its compact Hilbert key, using bits[k] bits for dimension k
pub fn hilbert_index(bits: &[u32], point: &[usize]) -> usize {
    let dims = bits.len() as u32;
    let max = *bits.iter().max().unwrap();

    let mut h = 0;
    let mut e = 0;

    // Next direction; we use d instead of d + 1 everywhere
    let mut d = 1;

    for i in (0..max).rev() {
        let (mut mu, free_bits) = extract_mask(bits, i);
        mu = rotate_right(mu, d, dims);

        let mut l = 0;
        for x in point.iter().rev() {
            l <<= 1;
            l |= (x >> i) & 1;
        }
        l = t(dims, e, d, l);

        let w = gray_code_inverse(l);
        let r = gray_code_rank(dims, mu, w);

        e ^= rotate_right(entry_point(w), d, dims);
        d = (d + intra_direction(w) + 1) % dims;
        h = (h << free_bits) | r;
    }

    h
}
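
A hypothetical call, just to show the shape of the inputs (values chosen arbitrarily; the resulting key isn't verified here):

// Three dimensions with 5, 5 and 3 bits of precision respectively;
// the compact key packs 5 + 5 + 3 = 13 bits in total.
let key = hilbert_index(&[5, 5, 3], &[17, 30, 6]);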

DoubleHyphen commented 3 years ago

You da man, man.

DoubleHyphen commented 3 years ago

@tavianator @rustyconover Gentlemen, I am pleased to announce that the lindel crate is now live, powered by the new release of min-const-generics. The code that mr Barnes has so graciously provided us has been copy-pasted there, under the compact_encoding module.

Speaking of which, mr Barnes: To what extent can I hope for your assistance with regards to this module? To keep it closer to the rest of the crate, it'll probably need

  1. Documentation with examples
  2. A bit of testing (if not a lot)
  3. Generalisation to make it usable with various unsigned data-types
  4. Some refactoring here and there

Is there any chance that you might help with one or more of these? I'd be particularly interested in points 1. and 2., because they're the ones I can't do without understanding the code in depth.

tavianator commented 3 years ago

Awesome! I can help with that for sure. Why don't you create some issues on the repo for those 4 things and we can discuss there?

DoubleHyphen commented 3 years ago

Splendid! Done.

paulchernoch commented 2 years ago

A very late answer to the initial query. Indeed, all dimensions must employ the same number of bits. Skilling's algorithm is clever and I am not sure how to modify it to work for dimensions of different scales. The Hilbert Transform itself assumes that all dimensions have the same number of twists and turns. Coming up with a relaxed definition for the curve that permits what you desire would be nice, but I have not been able to think up an acceptable algorithm for that.