What's happening here is that when we pick the niche to represent `None` of `Option<T>`, we pick the largest (or "roomiest") of the niches in `T`, so that further enums around `Option<T>` have more optimization opportunities themselves.
(Why just one niche? That has to do with the fact that in an `enum`, the only part of the representation guaranteed to always be initialized and valid is the tag / niche, not any of the other variant fields.)
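For instance (a small illustration of my own, not from the issue): `char` occupies four bytes but only `0..=0x10FFFF` are valid scalar values, so its niche can absorb many layers of wrapping:

```rust
use std::mem::size_of;

fn main() {
    // Each `Option` layer claims one more invalid `char` bit-pattern
    // (e.g. 0x110000, 0x110001, ...) as its `None`, so the size never grows.
    assert_eq!(size_of::<char>(), 4);
    assert_eq!(size_of::<Option<char>>(), 4);
    assert_eq!(size_of::<Option<Option<char>>>(), 4);
}
```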
For `Option2<A, B>`, however, we just pick the best niche in `A` and only look at `B` if `A` has no niches at all, so if `B` has a better niche, we miss it.
This is a small oversight in the code itself, and the code will arguably be slightly nicer once this is fixed, but I wanted to open this issue first, before I have a chance to fix it myself (if someone else doesn't get there first).
On x86_64, this prints the values in the comments:
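(The snippet below is a minimal sketch of such a reproduction; the `Option2<NonZeroU8, char>` instantiation is my own choice, and the commented sizes follow from the niche selection described above.)

```rust
use std::mem::size_of;
use std::num::NonZeroU8;

#[allow(dead_code)]
pub enum Option2<A, B> {
    Some(A, B),
    None,
}

fn main() {
    // `NonZeroU8` has a single-value niche (0); `char`'s niche is huge
    // (0x110000..=0xFFFFFFFF). Because `NonZeroU8` is in `A`, its niche
    // gets picked, and the one spare value is spent on `Option2::None`.
    dbg!(size_of::<Option2<NonZeroU8, char>>()); // 8

    // No niche is left for the outer `Option`, so it falls back to a
    // separate tag (1 byte, padded to the payload's 4-byte alignment).
    // Had `char`'s niche been picked, this would stay at 8.
    dbg!(size_of::<Option<Option2<NonZeroU8, char>>>()); // 12
}
```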
cc @Gankra @nox