Closed rj00a closed 1 year ago
> If it murders compile times, we might have to come up with another solution.
I bet it would be super helpful for compile times if we started separating stuff out into their own crates. Currently, for Cargo, each crate is its own compilation unit, and the Rust compiler is still largely single-threaded. Bevy has so many subcrates for this exact reason. I think we should put the generated code in its own crate and import it. If we can get `protocol` into its own crate as well, I think that would be even better for compile times.
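As a rough sketch of what that split could look like, here is a hypothetical workspace manifest; `valence_generated` is the name floated in this thread, while the other member names are illustrative, not the actual Valence crate layout:

```toml
# Hypothetical Cargo workspace; each member is its own compilation
# unit, so Cargo can build them in parallel and recompile less.
[workspace]
members = [
    "valence",            # the main server library
    "valence_protocol",   # packet definitions, usable standalone
    "valence_generated",  # generated block/item tables
]
```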
> I bet it would be super helpful for compile times if we started separating stuff out into their own crates.
I agree. Perhaps we can make a `valence_generated` crate. I avoided doing this until now because it wasn't completely clear to me where the division between crates should be.
Regardless of compile times, I think moving the protocol stuff to a separate public crate is a good idea. People who just want access to Minecraft's protocol can use it without dragging in the rest of Valence. The `packet_inspector` can use this.
However, this does have some implications worth considering. For instance, some fields on packets are considered `Text`. Should the `text` module be part of the protocol crate or part of Valence? If it's part of Valence, then those fields would have to become `String`s and be refined to a more accurate type later. Should `ByteAngle` be part of the protocol lib or Valence? The distinction isn't clear to me. I don't think it's a big problem, but it's something I thought I'd mention.
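To make the `Text` question concrete, here is a minimal sketch of the two layers, assuming the `text` module stays in Valence: the protocol crate would carry the raw wire string, and Valence would refine it into a structured type afterwards. All type definitions here are illustrative stand-ins, not the real Valence or protocol APIs:

```rust
// Hypothetical protocol-crate side: the chat field is just the raw
// JSON string as it appears on the wire.
struct ChatMessagePacket {
    message_json: String,
}

// Hypothetical Valence side: a structured text type refined from
// that string. Real text-component parsing is much richer; this is
// a stub for illustration.
#[derive(Debug, PartialEq)]
struct Text {
    content: String,
}

impl Text {
    // Stand-in for real JSON text-component parsing: just strips
    // the surrounding quotes of a bare JSON string.
    fn from_json(json: &str) -> Self {
        Text {
            content: json.trim_matches('"').to_string(),
        }
    }
}

fn main() {
    // The protocol crate hands over the raw string...
    let pkt = ChatMessagePacket {
        message_json: "\"hello\"".to_string(),
    };
    // ...and Valence refines it into the more accurate type later.
    let text = Text::from_json(&pkt.message_json);
    assert_eq!(text, Text { content: "hello".to_string() });
}
```

The cost of this split is exactly the one raised above: the protocol crate's packet definitions become less precisely typed on their own.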
Turns out the huge match expressions aren't being optimized the way I would have hoped. Notice how much time `to_kind` is using in this flamegraph (note: the particular issue in this flamegraph has been fixed by #142; the terrain example is much faster now). I believe the fix is to rewrite the matches on ranges of `BlockState`s into matches on every possible `BlockState`. This should turn the O(n) searches (or O(log n) searches, if it's doing a binary search?) into an O(1) lookup table. However, I'm a bit worried about the explosion of generated code this would cause. If it murders compile times, we might have to come up with another solution. Benchmarks should be done to find which functions are affected.
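The range-match vs. lookup-table distinction can be sketched like this. The names `BlockState` and `to_kind` follow the discussion, but the real Valence versions are generated code over thousands of states; the tiny tables here are purely illustrative:

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct BlockState(u16);

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum BlockKind {
    Air,
    Stone,
    GrassBlock,
}

// Range-based match: the compiler may lower this to a chain of
// comparisons, so cost grows with the number of arms (O(n), or
// O(log n) if it manages a binary search).
fn to_kind_ranges(state: BlockState) -> BlockKind {
    match state.0 {
        0 => BlockKind::Air,
        1..=6 => BlockKind::Stone,
        7..=8 => BlockKind::GrassBlock,
        _ => BlockKind::Air,
    }
}

// Per-state table (equivalently, a match with one arm per state):
// a single O(1) indexed load, at the cost of much more generated
// code when the state space is large.
static KIND_TABLE: [BlockKind; 9] = [
    BlockKind::Air,        // 0
    BlockKind::Stone,      // 1..=6
    BlockKind::Stone,
    BlockKind::Stone,
    BlockKind::Stone,
    BlockKind::Stone,
    BlockKind::Stone,
    BlockKind::GrassBlock, // 7..=8
    BlockKind::GrassBlock,
];

fn to_kind_table(state: BlockState) -> BlockKind {
    KIND_TABLE[state.0 as usize]
}

fn main() {
    // Both strategies agree on every state; only the lookup cost
    // and the amount of generated code differ.
    for id in 0..9 {
        assert_eq!(
            to_kind_ranges(BlockState(id)),
            to_kind_table(BlockState(id))
        );
    }
}
```

This is the trade-off described above: the table form is O(1) per lookup, but with a real block-state space the generated code (and thus compile time) grows accordingly.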