Closed: lgarron closed this issue 10 months ago
This was done in 9f77f299a9f671c53b62828ac16a64b258dfeb3a behind the `orientation_packer` feature flag. It does not seem to have a significant impact on performance, but it does allow `num_orientations` values significantly over 16.
The valid values of `orientation_mod` and `orientation` for a given piece are:

- `orientation_mod`: `0` or a proper divisor of `num_orientations` for the orbit.
- `orientation`: any value less than `orientation_mod` (treating `0` as `num_orientations`).

We currently pack these into the higher and lower nibbles of a single byte: https://github.com/cubing/twsearch/blob/a98834f47fb458dc5a2598578b75cb5bdef17c1f/examples/cpp_port/packed/packed_kstate.rs#L113
This limits us to `num_orientations` ≤ 16. But if we use a more efficient encoding, we can fit `num_orientations` up to 107 into a single byte (as well as some composite values up to 221 and primes up to 251). If we use a lookup table (https://github.com/cubing/twsearch/issues/24) there won't even be a performance hit for most calculations.
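One way to see where 107 comes from: under the rules above, `orientation_mod = 0` allows `n` orientations and each proper divisor `d` of `n` allows `d` orientations, so the total number of distinct (`orientation_mod`, `orientation`) pairs must fit in the 256 values of a byte. A quick counting sketch (my reconstruction, not code from the repo):

```rust
// Count distinct (orientation_mod, orientation) pairs for num_orientations `n`:
//   orientation_mod = 0                -> n possible orientation values
//   orientation_mod = d (d | n, d < n) -> d possible orientation values
// The encoding fits in one byte exactly when this total is <= 256.
fn num_valid_pairs(n: u32) -> u32 {
    let proper_divisor_sum: u32 = (1..n).filter(|d| n % d == 0).sum();
    n + proper_divisor_sum
}

fn main() {
    // Every n up to 107 fits in one byte...
    assert!((1..=107).all(|n| num_valid_pairs(n) <= 256));
    // ...and 108 is the first n that does not (280 pairs).
    assert_eq!(num_valid_pairs(108), 280);
    // The prime 251 still fits: 251 + 1 = 252 pairs.
    assert_eq!(num_valid_pairs(251), 252);
}
```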
For the curious, all possible `num_orientations` values that can fit in one byte:
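(The list itself appears to have been truncated here, but it can be regenerated from the counting rule implied by the valid-value constraints above. A sketch, assuming a byte gives 256 distinct packed values:)

```rust
// Regenerate the list of num_orientations values whose valid
// (orientation_mod, orientation) pairs all fit into one byte.
fn num_valid_pairs(n: u32) -> u32 {
    let proper_divisor_sum: u32 = (1..n).filter(|d| n % d == 0).sum();
    n + proper_divisor_sum
}

fn main() {
    let fits: Vec<u32> = (1..=255).filter(|&n| num_valid_pairs(n) <= 256).collect();
    println!("{fits:?}");
    // Spot checks consistent with the issue text:
    assert!(fits.contains(&107) && !fits.contains(&108));
    assert!(fits.contains(&221) && fits.contains(&251));
    assert_eq!(*fits.last().unwrap(), 251);
}
```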