Open zx2c4 opened 2 years ago
I think that at a high level you are looking for something like C++ template specialization: a way to write an implementation of a generic function to use for a specific type. The current generics support in Go does not provide a way to do that, which is intentional (https://go.googlesource.com/proposal/+/refs/heads/master/design/43651-type-parameters.md#omissions).
Please let me know if I misunderstand.
That said, if using `any(ip1).(type)` in a type switch does introduce runtime overhead (I haven't checked), we can consider compiler optimizations for that case. For example, perhaps a type switch on a generic argument with a small number of cases should be taken as an indication that we should stencil out those cases rather than using a dictionary. (And of course, more generally, if the constraint(s) only permit a couple of types, we could consider always stenciling out those types.)
That is, perhaps we can approach this as a compiler optimization issue rather than as a language issue.
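For concreteness, the dispatch pattern under discussion looks something like the sketch below. The function name `bitLen` and its body are illustrative assumptions, not code from this thread; only the `ipArray` constraint and the `any(...).(type)` switch come from the discussion above:

```go
package main

import "fmt"

type ipArray interface {
	[4]byte | [16]byte
}

// bitLen dispatches on the concrete type of a generic argument.
// Today this type switch may involve a runtime interface conversion;
// the optimization discussed here would stencil out the two cases at
// compile time instead, since the constraint permits only two types.
func bitLen[B ipArray](ip B) int {
	switch any(ip).(type) {
	case [4]byte:
		return 32
	case [16]byte:
		return 128
	}
	panic("unreachable")
}

func main() {
	fmt.Println(bitLen([4]byte{}))  // 32
	fmt.Println(bitLen([16]byte{})) // 128
}
```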
The other funny aspect of this is that you can't slice the type, even though you can call `len(x)` on it.
```go
type ipArray interface {
	[4]byte | [16]byte
}

func legal[B ipArray](x B) {
	_ = len(x) // fine
}

func illegal[B ipArray](x B) {
	_ = x[:] // compile error
}
```
I'm not sure I understand why `len(x)` would be okay but `x[:]` would not. It's using the same information in both cases to derive the result. I wind up hacking around that using the `unsafe.Slice` trick above, but I don't quite see why that should be necessary?
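The trick itself isn't quoted in this excerpt. A hedged sketch of the kind of `unsafe.Slice` workaround being described, assuming the goal is simply to view the array's storage as a `[]byte` (the name `asSlice` is illustrative), could be:

```go
package main

import (
	"fmt"
	"unsafe"
)

type ipArray interface {
	[4]byte | [16]byte
}

// asSlice views a generic array value as a byte slice. Since the
// compiler rejects x[:] on a type-parameter array, we take the array's
// address and rebuild the slice header with unsafe.Slice (Go 1.17+).
// The slice aliases the array's memory, so writes through it are
// visible in the original array.
func asSlice[B ipArray](x *B) []byte {
	return unsafe.Slice((*byte)(unsafe.Pointer(x)), unsafe.Sizeof(*x))
}

func main() {
	ip := [4]byte{192, 168, 0, 1}
	fmt.Println(asSlice(&ip)) // [192 168 0 1]
}
```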
The inability to slice is a deficiency of the current implementation, not a part of the language design. There are a number of similar infelicities in 1.18.
Oh that's good news. So it sounds like we've identified two compiler improvements that might actually be actionable without having to have a complicated language proposal:
1) Optimizing `switch x := any(t).(type)` so that it requires no runtime overhead for reasonably contoured cases.
2) Allowing slicing of array type parameters without having to resort to `unsafe.Slice` to force it.
The first would help with a sort of poorman's type specialization via a dispatcher that could disappear at compile time. This would help with the generic case.
The second would help with array specialization, among other things, which could also disappear at compile time depending on the context.
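To make the poor man's specialization concrete: the dispatcher pattern pairs a generic front door with concrete per-type implementations. The helper names and prefix-counting bodies below are illustrative assumptions, not code from this issue; only the dispatch-via-type-switch shape comes from the discussion:

```go
package main

import (
	"fmt"
	"math/bits"
)

type ipArray interface {
	[4]byte | [16]byte
}

// commonPrefixLen4 counts the leading bits shared by two IPv4 addresses.
func commonPrefixLen4(a, b [4]byte) int {
	n := 0
	for i := range a {
		if x := a[i] ^ b[i]; x != 0 {
			return n + bits.LeadingZeros8(x)
		}
		n += 8
	}
	return n
}

// commonPrefixLen16 does the same for IPv6 addresses.
func commonPrefixLen16(a, b [16]byte) int {
	n := 0
	for i := range a {
		if x := a[i] ^ b[i]; x != 0 {
			return n + bits.LeadingZeros8(x)
		}
		n += 8
	}
	return n
}

// commonPrefixLen is the generic dispatcher. If the compiler stenciled
// the type switch per instantiation, each call site would collapse to a
// direct call of the matching concrete function, with no runtime cost.
func commonPrefixLen[B ipArray](a, b B) int {
	switch a := any(a).(type) {
	case [4]byte:
		return commonPrefixLen4(a, any(b).([4]byte))
	case [16]byte:
		return commonPrefixLen16(a, any(b).([16]byte))
	}
	panic("unreachable")
}

func main() {
	fmt.Println(commonPrefixLen([4]byte{10, 0, 0, 1}, [4]byte{10, 0, 0, 2})) // 30
}
```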
That seems like a straightforward way to address this, right?
Sounds plausible to me.
This may be a bit naive of me, but if this proposal was accepted: https://github.com/golang/go/issues/45380 (type switch on parametric types) wouldn't this solve this issue?
That's similar but it's a language proposal, right? The above is just about adding some small compiler optimization to accomplish the same thing.
Kind ping: any hope of getting this fixed/implemented? Please see the commented-out line for what should work but doesn't.
I've got a trie structure that works over 4-byte arrays and 16-byte arrays, for IPv4 and IPv6 respectively. I used to just use a `[]byte` slice for this, and adjust accordingly based on `len(x)` when it mattered. Actually, pretty much the only place it mattered was here:
So in converting this all away from slices and toward static array sizes, I made this new type constraint:
Then I broke out those two if clauses into their own functions:
So far, so good, but what is the implementation of `commonBits`? If you try to convert the array to a slice, the compiler will bark at you. You can use `any(ip1).(type)` in a switch, but then you get runtime overhead. I've figured out a truly horrific trick that combines two Go 1.17 features into an unholy mess: This... works, amazingly. Similarly, when I needed to adjust my randomized unit tests, I wound up going with code that looks like this:
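The snippets referenced above are elided from this excerpt. A hedged reconstruction of the two patterns being described, an `unsafe.Slice` byte view plus a randomized-test helper built on top of it (all names here are assumptions, not the original code), might look like:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"unsafe"
)

type ipArray interface {
	[4]byte | [16]byte
}

// byteView reinterprets a generic array as a []byte, sidestepping the
// compiler's refusal to allow x[:] on a type-parameter array.
func byteView[B ipArray](x *B) []byte {
	return unsafe.Slice((*byte)(unsafe.Pointer(x)), unsafe.Sizeof(*x))
}

// randomIP fills a fresh address of either size with random bytes,
// the sort of helper useful in randomized unit tests.
func randomIP[B ipArray]() B {
	var ip B
	rand.Read(byteView(&ip))
	return ip
}

func main() {
	v4 := randomIP[[4]byte]()
	v6 := randomIP[[16]byte]()
	fmt.Println(len(v4), len(v6)) // 4 16
}
```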
I asked some Go experts if there was a better way, and the answer I got was that generics aren't yet well suited for arrays. So, I'm opening this rather vague report in hopes that it can turn into a proposal for something useful.
CC @danderson @sebsebmc @josharian