DeedleFake opened 2 years ago
Is this meaningfully different from #41148?
As far as channels specifically go, no I don't think so, but I think the generic usage is the most important part of this one. I was primarily inspired by #56102. I would be fine, given #41148, with removing everything except for the generic-specific part of this proposal.
Edit: I updated the original post to explain that the parts of this proposal that are not related to generics should be ignored.
If I understand correctly, this is called Variadic Generics in other languages?

Yes, I believe so. See https://en.wikipedia.org/wiki/Variadic_template. The Go draft proposal for generics also lists:

> No variadic type parameters. There is no support for variadic type parameters, which would permit writing a single generic function that takes different numbers of both type parameters and regular parameters.
At least to me, Go's experience since generics were added has shown that the way the language handles multiple returns, and their use in the error-handling model in particular, makes the addition of variadic generics quite attractive.
I think it could be pretty confusing to have a construct like `var val T` in your example code create multiple variables.

We also need to specify exactly what is permitted with `T` defined as `[T ...]`. It seems quite limiting. In C++, variadic function templates are quite a powerful construct, but that power relies on features that Go does not have, such as overloading and SFINAE. We probably don't want to be as powerful as C++, but we probably want to be able to do a little bit more than just calling a function that returns multiple results.
I've been thinking about what functionality would be required at a minimum for `[T ...]` to be useful. The list I have currently is

- `a = b` where `a` and `b` are both `T`.
- `func (e E[T]) M() T`, with actual usage working as though it had the same number of returns as types represented by `T`. In other words, if `T` was defined as `(int, string, error)`, the user would have to do `i, s, err := e.M()` the same as if those were `M()`'s directly defined return types.
- `func (e E[T]) M(v T)`. Both this and the previous one would apply equally to top-level functions, not just methods.
- `type E struct { v T }`. I think it makes sense to limit struct fields to unexported only. This would reduce the ability to assign directly from the field to something outside. `i, s, err := e.v` wouldn't be possible except for inside the same package, though maybe that could be limited, too.
- `func NewE[T ...]() E[T]`.

Also, if a `T` was used in a list of other types, i.e. something like `func E[T ...]() (T, error)`, it would be treated as though the types had been written out regularly, meaning that `E[(int, string)]` would be equivalent to `func E() (int, string, error)`.
In terms of access to struct fields, I don't think it would ever be an issue outside of `reflect`, which could do other things, such as simply ignoring those fields or treating them as autogenerated separate fields. I can't think of any scenario, barring `reflect` usage, that would allow access to the field without having `T` defined. You can't define a function that takes an `E` directly, and since you need to define an `E[T]`, an equivalent `T` must already be defined as well.

For `reflect`, it might be possible to treat the 'field' of the struct as being multiple fields with autogenerated, otherwise impossible names. Something like `v[0]`, `v[1]`, etc. in the example above.
Edit: Let me try to explain what I'm saying above about struct fields more clearly. Given

```go
type Example[T ...] struct {
	v T
}
```

the following would be allowed

```go
func Something[T ...](e Example[T]) {
	a := e.v // a would be of type T.
	// ...
}
```

but the following would not

```go
func Something(e Example[(int, string)]) {
	i, str := e.v // Can't destructure from a variable. Has to be multiple returns from a function or method.
	// ...
}
```

Instead, you'd have to use a wrapper function

```go
func Get[T ...](e Example[T]) T {
	return e.v // Legal because it's just an assignment from T to T with no destructuring.
}
```
This is why I think it should be limited to unexported fields only. It wouldn't actually be a problem in terms of usage by other packages, but it could be confusing that in some unlikely situations, you wouldn't actually be able to access the values of the field directly and would instead have to write a wrapper function to decompose it. I think that avoiding the confusion outweighs the oddity of disallowing exporting of variadic generic fields.
> I think it could be pretty confusing to have a construct like `var val T` in your example code create multiple variables.

It never would, though. In all cases outside of function arguments and returns, and possibly `reflect` via struct fields, it would act exactly like a single variable. For example,
```go
type Optional[T ...] struct {
	v  T // Looks like a single, normal struct field.
	ok bool
}

func NewOptional[T ...](v T, ok bool) Optional[T] {
	return Optional[T]{
		v:  v, // Acts like a single variable here.
		ok: ok,
	}
}

func (opt Optional[T]) Or(v T) T {
	if opt.ok {
		return opt.v // Again, acts like a single variable.
	}
	return v
}
```

But then in usage, it could do something like this:

```go
opt := NewOptional[(int, string)](3, "example", true) // At this end, it acts exactly like the T-typed stuff is more than one argument.
i, str := opt.Or(1, "something") // And the same here in the returns.
```

So inside of the definition, stuff of 'type' `T` looks and acts exactly as though it was a single variable with a single type, but outside, where it's actually used, it does exactly the opposite.
Edit: Another example, based on #56102:
```go
package sync

// Adding in arguments just for fun.
func Lazy[A ..., R ...](f func(A) R) func(A) R {
	var once Once
	var r R
	return func(args A) R {
		once.Do(func() { r = f(args) })
		return r
	}
}
```
I can think of a number of functions of general forms like `func Thunk[In ..., Out ...](func(In) Out, In) func(In) Out` or `func FoldMap[Init ..., Args ...](func(Init) Args, ...func(Args) Args) Args` which would be useful. Rather, I get the sense that this feature would be particularly powerful for many kinds of generic higher-order functions. But I think just writing these signatures is enough to convince me that I don't want this in Go.
The `...` syntax is a placeholder. It reads just fine with a predefined identifier:

```go
func Thunk[I various, O various](func(I) O) func(I) O
func FoldMap[I various, A various](func(I) A, ...func(A) A) A
```

There's nothing particularly strange about these function signatures. They're not long or particularly complex.
Just to check, would this permit writing a single `Metric` type for the example outlined at https://go.googlesource.com/proposal/+/refs/heads/master/design/43651-type-parameters.md#metrics ? Thanks.
In the current design, yes, but only the variant mentioned that uses a comparison function, not the one with the `comparable` constraint. For example,

```go
type Metric[T ...] struct {
	mu sync.Mutex
	eq func(T, T) bool
	// ...
}

func NewMetric[T ...](eq func(T, T) bool) *Metric[T] {
	return &Metric[T]{
		eq: eq,
		// ...
	}
}

func (m *Metric[T]) Add(vals T) {
	m.mu.Lock()
	defer m.mu.Unlock()
	// Do whatever is necessary to add the data, calling m.eq to compare.
}
```
This does have some weirdness, though, including the fact that the comparison function will wind up with two arguments for each type in `T`. For example, `NewMetric[(int, string)](func(i1 int, s1 string, i2 int, s2 string) bool { ... })` would be correct.

It might be possible to allow constraints on variadics, but it would be quite limited. Essentially, it comes down to whether or not it makes sense to define operations on variadics. For example, comparability could be defined much like it is for a struct:

```go
func Example[T ...comparable]() {
	var v1, v2 T
	v1 == v2 // This just makes sense.
}
```

In this case, a more broad variadic would be written as `[T ...any]` instead, which does have some nice parity. The issue I see is that I can't think of many other operations besides comparison that aren't available for `any` and make sense with variadics. Calling methods makes no sense, so essentially every `interface` that doesn't have a type list would make no sense to constrain a variadic with. You could, but it wouldn't gain you anything. Similarly, basic operators like `+`, `-`, and so on wouldn't necessarily make sense, although they could certainly be allowed. For example, something like

```go
func Add[T ...constraints.Integer](v1, v2 T) T {
	return v1 + v2
}

func main() {
	i1, i2 := Add[(int, int64)](1, 2, 3, 4)
}
```

makes sense, but I'm not sure if it makes sense to allow it or not.
makes sense, but I'm not sure if it makes sense to allow it or not.
Edit: The constrained version also provides the benefit of allowing structs to maintain comparability. For example, `type Example[T ...comparable] struct { v T }` would itself satisfy a `comparable` constraint.

Edit 2: I'm looking over the metrics example some more and I'm wondering if this is a good fit for variadic generics in the first place. It seems to make more sense to me to just define a `type Metric[T comparable] struct { ... }` and then use comparable struct types for each metric. That approach would result in quite poor ergonomics for something like #56102, but in this case I don't think it would make much difference. For example, instead of `httpRequests.Add(1, "GET", 404)`, it could use `httpRequests.Add(1, requestResult{"GET", 404})`.
Edit 3: I realized that I didn't demonstrate how the metric example would work with constraints. Here's the definition working for any number of types:
```go
type Metric[T ...comparable] struct {
	mu sync.Mutex
	m  map[key[T]]int
}

func (m *Metric[T]) Add(vals T) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.m == nil {
		m.m = make(map[key[T]]int)
	}
	m.m[key[T]{v: vals}]++
}

// Intermediary type necessary for use as a map key.
type key[T ...comparable] struct {
	v T
}
```
Putting on hold for discussion of future generics changes.
This would be so incredibly helpful for designing more typesafe middleware, so each middleware function could attach data as a new parameter instead of just shoving everything into a `Context` and having to do type assertions further down the line.

In order for it to actually be useful in this context though, you do need the ability to pop/push values from/to the parameter list - preferably from/to either side. E.g.:

```go
func PanicOnError[T ...](f func() (T, error)) T {
	...t, err := f()
	if err != nil {
		panic(err)
	}
	return t
}
```

(`...a, b := ...` just being some temporary syntax analogous to `*a, b = ...` in Python to deal with the syntax ambiguity mentioned further up.)
I'm not sure you'd need full tuple types, though, since as far as I can tell the only "variable-like" operations needed are:

- `v, ...vs := f()`
- `return v, vs`
- `var zero T; return zero`

which is a far cry from what was proposed in e.g. #64457, and these are all things that should be possible to support without a dedicated tuple type. For example, a very basic solution could be to have this:

```go
func A[T ...](f func() (int, T, string)) (float64, T) {
	i, ...t, s := f()
	fmt.Println(s)
	return float64(i) / 2, t
}
```

be expanded to this when instantiating `A` with `T = (X, Y, Z)`:

```go
func A(f func() (int, X, Y, Z, string)) (float64, X, Y, Z) {
	i, __x, __y, __z, s := f()
	fmt.Println(s)
	return float64(i) / 2, __x, __y, __z
}
```

and since you can't do anything with the individual values without explicitly naming the types of the values to pop, there's no inference or bounds checking to be done for `T`.
As a motivating example of what I mentioned at the start, using this you would be able to write a handler like
```go
func MyHandler(
	r *http.Request,
	dbConn *Connection,
	user *User,
) (result *Result, err error) {
	// ...
}
```

which could be transformed into a standard `http.HandlerFunc` using something like:

```go
var Handler http.HandlerFunc = LogError(WriteResult(WithUser(WithDB(MyHandler))))
```
where
```go
type GenericHandler[In ..., Out ...] func(In) Out

// ErrorHandler is any function that can return an error.
type ErrorHandler[In ..., Out ...] GenericHandler[In, (Out, error)]

// RequestHandler is any function that takes an *http.Request
// and can return an error.
type RequestHandler[In ..., Out ...] ErrorHandler[(*http.Request, In), Out]

func WithDB[In ..., Out ...](
	inner RequestHandler[(*Connection, In), Out],
) RequestHandler[In, Out] {
	return func(r *http.Request, params In) (Out, error) {
		conn, err := OpenConnection()
		if err != nil {
			var zero Out
			return zero, err
		}
		defer conn.Close()
		return inner(r, conn, params)
	}
}

func WithUser[In ..., Out ...](
	inner RequestHandler[(*User, In), Out],
) RequestHandler[In, Out] {
	return func(r *http.Request, params In) (Out, error) {
		user, err := Authenticate(r)
		if err != nil {
			var zero Out
			return zero, err
		}
		return inner(r, user, params)
	}
}

func WriteResult[Result any, In ..., Out ...](
	inner RequestHandler[In, (Result, Out)],
) ErrorHandler[(http.ResponseWriter, *http.Request, In), Out] {
	return func(w http.ResponseWriter, r *http.Request, params In) (Out, error) {
		result, ...out, err := inner(r, params)
		if err != nil {
			w.WriteHeader(http.StatusInternalServerError)
			return out, err
		}
		err = json.NewEncoder(w).Encode(result)
		return out, err
	}
}

func LogError[In ..., Out ...](
	inner ErrorHandler[In, Out],
) GenericHandler[In, Out] {
	return func(params In) Out {
		...out, err := inner(params)
		if err != nil {
			log.WithError(err).Error("error while handling request")
		}
		return out
	}
}
```
allowing for amazing composability without losing any type safety.
Right now I think the closest you could really get to something like this is keeping a stack of all pushed data using something like
```go
// Bottom represents the bottom of the stack.
type Bottom struct{}

type Stack[ThisType, ParentType any] struct {
	Value  *ThisType
	Parent ParentType
}

func MyHandler(
	r *http.Request,
	dbConn Stack[Connection, Stack[User, Bottom]],
) (*Result, error) {
	user := dbConn.Parent.Value
	// ...
}

func WithDB[In any, Out any](
	inner func(*http.Request, Stack[Connection, In]) (Out, error),
) func(*http.Request, In) (Out, error) {
	return func(r *http.Request, params In) (Out, error) {
		conn, err := OpenConnection()
		if err != nil {
			var zero Out
			return zero, err
		}
		defer conn.Close()
		wrapped := Stack[Connection, In]{Value: conn, Parent: params}
		return inner(r, wrapped)
	}
}
```

...which isn't particularly pretty.
@arvidfm
> since as far as I can tell the only "variable-like" operations needed are:

On what basis? Because AFAICT it would be pretty useful to also have a `len` operation - e.g. to be able to recursively apply a function with variadic arguments (in particular: what would your "pop" operation do, if there are zero type arguments in the list?). It's not obvious how to do that without extra syntactical and semantic rules, but if we are adding some anyways, why draw those specific lines?
@Merovius
> On what basis? Because AFAICT it would be pretty useful to also have a `len` operation

Absolutely, I'm sure it would be useful for a lot of different use cases! But I was trying to think of what the minimum requirements would be specifically for the kind of function wrapping that I'd like to see, as in my middleware example. (If it wasn't obvious already, it's heavily inspired by decorators and how `ParamSpec`/`Concatenate` work in Python.)

I was also trying to keep the requirements as simple as possible to avoid accidentally veering into tuple-type territory. With the rules I listed, it's relatively straightforward to see how the parameter list could simply be expanded to multiple parameters at function instantiation time (assuming that the Go compiler creates a separate copy for each new combination of generic arguments, which I'm not actually sure is the case). I'm not sure `len` is as straightforward, but I suppose simply statically resolving the length at compile time as a special case would work.
> What would your "pop" operation do, if there are zero type arguments in the list?

"Pop" would only work if you've explicitly assigned the elements you're popping to a separate, non-variadic type parameter. So this would work:

```go
func GetTail[Head any, Tail ...](f() (Head, Tail)) Tail {
	_, ...tail := f()
	return tail
}
```

in which case calling `GetTail(func() {})` would be a compilation error because a 0-length return type doesn't match the signature `(Head, Tail)`, which requires at least one return value.
Meanwhile this would not work:

```go
func GetTail[Tail ...](f() Tail) ???? {
	_, ...tail := f()
	return tail
}
```

Note how, if this was allowed, there's no way to even specify the return type without some additional special syntax.

Although your note about recursively applying functions with variadic arguments did get me thinking about different edge cases...
```go
func A[Head any, Tail ...](head Head, tail Tail) Tail {
	return A(tail)
}
```

This will fail because the recursive call will have a different return type, so not worried.
```go
func PrintAll[Head any, Tail ...](head Head, tail Tail) {
	fmt.Println(head)
	PrintAll(tail)
}
```

Scary! This would lead to a call chain like `PrintAll[int, (int, int)](1, 2, 3) -> PrintAll[int, (int)](2, 3) -> PrintAll[int, ()](3) -> PrintAll[???]()` with the last one being a compilation error since `Head` can't be matched.

I'm happy for this to not work at this stage since this is starting to veer into pattern matching territory instead, which would be cool to have, to be clear, but certainly not within scope of this feature! But how would you present a sensible error to the user?
```go
func AddOneArg[T ...](items T) {
	AddOneArg(1, items)
}
```

Oops. Now we're suddenly statically generating infinitely many function copies. Well that's no good. Either my idea of statically generating a new copy for each instantiation is a no-go, or we'd have to disallow recursive calls like this.
```go
func AddOneReturnValue[T ...](f func() T) func() (int, T) {
	return func() (int, T) {
		return 1, f()
	}
}
```

Well, I couldn't come up with a similar way of generating an infinitely long return value list, so maybe it's only an issue with variadically-typed function arguments, not return values?[*] Or maybe I'm just not creative enough.

Limiting variadic types to only be used as return values wouldn't be a huge loss as far as my use case is concerned, but I'm sure there are others who would like to see it. For one, it would mean you couldn't write functions like:
```go
func RunWithLogs[Args ..., Rets ...](f func(Args) Rets, args Args) Rets {
	fmt.Println("Running function")
	return f(args)
}

result, err := RunWithLogs(myFunc, 1, 2, "test")
```

though you could still write:
```go
func RunWithLogs[Rets ...](f func() Rets) Rets {
	fmt.Println("Running function")
	return f()
}

result, err := RunWithLogs(func() (int, error) {
	return myFunc(1, 2, "test")
})
```

Would be interested to see how other languages with variadic types solve this issue.
[*] Edit: OK yeah that's just me not being creative enough:

```go
func AddOneReturnValue[T ...](f func() T) {
	AddOneReturnValue(func() (int, T) {
		return 1, f()
	})
}
```

Presumably just forbidding (mutual) recursion in variadically-typed functions would be enough to fix this. Are there any major use cases that would break that would be otherwise supported?
> But I was trying to think of what the minimum requirements would be specifically for the kind of function wrapping that I'd like to see,

ISTM you are trying to side-step my question. Why is "the kind of function wrapping you'd like to see" the right notion of "the kind of function wrapping we'd like to support"? Why is this the right level of support for this feature?
> in which case calling `GetTail(func() {})` would be a compilation error because a 0-length return type doesn't match the signature `(Head, Tail)` which requires at least one return value.

My question was what would happen if you called it with e.g. `func() int {}`. What if the number of variadic type arguments is empty?
> Although your note about recursively applying functions with variadic arguments did get me thinking about different edge cases...

That was, if I understand you correctly, the "edge case" I was talking about.
> I'm happy for this to not work at this stage since this is starting to veer into pattern matching territory instead, which would be cool to have, to be clear, but certainly not within scope of this feature!

I don't find anything about this statement obvious. Why do you use "certainly"? Note that the title of this issue is "multi-type type parameters". That seems certainly open enough to encompass recursive generic code. And note that even without recursion, you get into issues:

```go
func F[T ...](f() T) {
	...x := f()
	fmt.Printf("%T", x)
}

func main() {
	F(f())
}
```

I can't assign meaningful behavior to this program. So it should probably not be allowed? But that seems a pretty severe restriction. For example, one of the main use-cases I would see for this concept is to allow something like this:
```go
func Compose[A, B, C ...](f func(context.Context, A) (B, error), g func(context.Context, B) (C, error)) func(context.Context, A) (C, error) {
	return func(ctx context.Context, a A) (C, error) {
		b, err := f(ctx, a)
		if err != nil {
			return *new(C), err
		}
		return g(ctx, b)
	}
}
```

It seems unfortunate to disallow calling this with an empty `A` or `C` argument list (and an artificial restriction to not allow it with an empty `B` list, though that would be less obviously useful).

I think this sort of "working with arbitrary function types" stuff is exactly what this is about, and adding an "as long as the function types have some arguments and returns" restriction seems - to me - to require some justification.
> ISTM you are trying to side-step my question. Why is "the kind of function wrapping you'd like to see" the right notion of "the kind of function wrapping we'd like to support"? Why is this the right level of support for this feature?

It's literally just my subjective and intuitive interpretation of the intention of this proposal. For instance, there are no examples of recursive variadically-typed functions prior to my comment, which I took to mean that it's not something people are primarily concerned with here. Other basic operations (like `len`) should definitely be considered if there are any concrete use cases they unlock; I'd just suggest being mindful that they don't significantly increase the complexity of implementing the feature.
> My question was what would happen if you called it with e.g. `func() int {}`. What if the number of variadic type arguments is empty?

Calling

```go
func GetTail[Head any, Tail ...](f() (Head, Tail)) Tail {
	_, ...tail := f()
	return tail
}
```

as `GetTail(f() int {})` would yield the following function:

```go
func GetTail(f() int) {
	_ = f()
	return
}
```

which, looking at this, I now realise might require special handling of `:=` if the parameter list is empty, in that we might end up assigning to a variable list with no new variables, requiring `=` instead of `:=` (I'd suggest just always shadowing the original name(s) regardless). Other than that, though, I don't see any complications arising from allowing a variadic type parameter to be inferred as an empty list of types, but maybe there's something I'm missing.
> I don't find anything about this statement obvious. Why do you use "certainly"?

I might have been a bit quick to dismiss it, but it does feel like opening up a huge can of worms, because the only way I can see recursive variadically-typed functions being usable - at least with recursive calls that change the number of type parameters - is allowing multiple function definitions so you can specify a base case:
```go
func PrintAll() {}

func PrintAll[Head any, Tail ...](head Head, tail Tail) {
	fmt.Println(head)
	PrintAll(tail)
}
```
My reasoning is that it would not be possible to compile something like the following:
```go
func PrintAll[Head any, Tail ...](head Head, tail Tail) {
	fmt.Println(head)
	if len(tail) > 0 {
		PrintAll(tail)
	}
}
```

because when called as `PrintAll(1)` the call `PrintAll()` (which doesn't match the signature) would still exist in the function body, albeit inside an `if` statement whose condition will evaluate to `false`.

(Or at least it would not be possible to compile it unless the `if` statement is evaluated statically and the block eliminated entirely for the instantiation of the function when called with a single argument, but then this would still break if the condition contains any expression that can't be evaluated statically.)
And if we allow multiple function definitions for different type parameters we're basically heading into Haskell-style pattern matching territory. Which is a laudable goal to be sure, but not something I'm sure we'd want to tackle here!
> And note that even without recursion, you get into issues:

For the record, this already works in Go:
```go
func GetNumbers() (int, int, int) {
	return 1, 2, 3
}

func PrintValues(values ...any) {
	fmt.Printf("%T", values...)
}

func main() {
	PrintValues(GetNumbers())
}
```

(It prints `int%!(EXTRA int=2, int=3)`.)
So I don't think there's a need to disallow your example, even if it's not necessarily useful. If we wanted to make it more useful for the sake of this feature, perhaps a new format verb could be added that repeats itself (separated by spaces) for every unprocessed argument, or something along those lines. But either way the user could just write their own auxiliary function to do this:
```go
func F[T ...](f() T) {
	...x := f()
	PrintTypes(x)
}

func PrintTypes(vs ...any) {
	for _, v := range vs {
		fmt.Printf("%T\n", v)
	}
}
```
> For example, one of the main use-cases I would see for this concept is to allow something like this:

Maybe I'm missing something, but shouldn't this be fine? (Using my ad-hoc `...b` notation.)
```go
func Compose[A, B, C ...](
	f func(context.Context, A) (B, error),
	g func(context.Context, B) (C, error),
) func(context.Context, A) (C, error) {
	return func(ctx context.Context, a A) (C, error) {
		...b, err := f(ctx, a)
		if err != nil {
			return *new(C), err
		}
		return g(ctx, b)
	}
}

func F(ctx context.Context) (string, int, error) {
	return "test", 0, nil
}

func G(ctx context.Context, _ string, _ int) error {
	return nil
}

var f func(context.Context) error = Compose(F, G)
```

Note that this doesn't break my suggested "pop" rule, because the argument being popped (`err`) doesn't actually belong to the variadic type parameter list. `Compose[(), (string, int), ()]` would expand to:
```go
func Compose(
	f func(context.Context) (string, int, error),
	g func(context.Context, string, int) error,
) func(context.Context) error {
	return func(ctx context.Context) error {
		__b1, __b2, err := f(ctx)
		if err != nil {
			// special handling needed if new(C) should be
			// supported; with C = (), *new(C) expands to
			// no values at all
			return err
		}
		return g(ctx, __b1, __b2)
	}
}
```
> For the record, this already works in Go:

Yes. Note that the `...` you added into the `Printf` call significantly changes the meaning. I intentionally didn't put it into my example.
My question was "what is the type of a variable that is derived from a variadic type parameter list?". Making variadic type parameters useful requires us to be able to declare such variables. And probably from lists derived from them by various means (like your "push" and "pop" operations).
> Yes. Note that the `...` you added into the `Printf` call significantly changes the meaning. I intentionally didn't put it into my example.

Yes, I didn't mean for the `fmt.Printf("%T", values...)` call to be analogous to your example, but rather the `PrintValues(GetNumbers())` call, since the aim of this proposal is for variadic type parameters to behave similarly to or the same as multiple return values. (Though I realise that I've been playing a bit fast and loose with the existing multiple-return-values rules - e.g. `fmt.Printf("%T", GetNumbers())` doesn't currently work directly in Go, hence why I had to refactor it a bit - but I don't think variadic generics would be useful without loosening this restriction.)
Essentially my suggestion is that a variadic type parameter list doesn't have a type of its own. I.e. if we have:
```go
func F[T ...](f() T) {
	...x := f()
	fmt.Printf("%T", x)
}
```

then we get:
```go
// Printf("%T") -> %!T(MISSING)
F(f() {})
// where F[()] is...
func F(f()) {
	f()
	fmt.Printf("%T")
}

// Printf("%T", 0) -> int
F(f() (int) { return 0 })
// where F[(int)] is...
func F(f() int) {
	__x1 := f()
	fmt.Printf("%T", __x1)
}

// Printf("%T", 0, "test", true) -> int%!(EXTRA string=test, bool=true)
F(f() (int, string, bool) { return 0, "test", true })
// where F[(int, string, bool)] is...
func F(f() (int, string, bool)) {
	__x1, __x2, __x3 := f()
	fmt.Printf("%T", __x1, __x2, __x3)
}
```
> My question was "what is the type of a variable that is derived from a variadic type parameter list?". Making variadic type parameters useful requires us to be able to declare such variables. And probably from lists derived from them by various means (like your "push" and "pop" operations).

The way that I'm picturing it, the type of the variable is `T`. For example,
```go
func UselessExample[T ...](v T) T {
	v2 := v
	// v2 is T. The actual types are determined at compile-time and have no impact inside of the function.
	return v2
}

r1, r2 := UselessExample(3, "example")
// r1 and r2 are int and string. The pseudo-type T is inaccessible outside of the function.

type UsefulExample[T ...] struct {
	v T // Again, is type T.
}

func NewUsefulExample[T ...](vals T) UsefulExample[T] {
	return UsefulExample[T]{v: vals} // Treated like a single value inside of the function.
}

func (ex UsefulExample[T]) Get() T {
	return ex.v
}

r := usefulExample.Get() // Accessible via method call.
```
Because of that, something like `fmt.Printf("%v\n", v)` would not be permitted, since `fmt.Printf()` requires `any` and `T` doesn't satisfy the `any` interface. A `...`-constrained type is more restricted than anything else in the language. This also leaves it open to be potentially expanded later.
One thing that would be incredibly cool is if you could do something like:
```go
type Wrapper[T any] struct {
	Value T
}

// here the return value would be a variadic type list, i.e.
// WrapAll[(int, string)] has the type:
// (int, string) -> (Wrapper[int], Wrapper[string])
func WrapAll[T ...](values T) Wrapper[T] {
	// very jank ad-hoc syntax, but the idea is to be able to
	// define a transformation that's run for each individual value
	// where the transformation is passed the type of the value
	// currently operated on as a type parameter (here U)
	return map(values, func[U](value U) Wrapper[U] {
		return Wrapper[U]{Value: value}
	})
}

// could be compiled to something like:
func WrapAll(__values1 int, __values2 string) (Wrapper[int], Wrapper[string]) {
	__map1 := Wrapper[int]{Value: __values1}
	__map2 := Wrapper[string]{Value: __values2}
	return __map1, __map2
}
```
I doubt this is realistic (and it would definitely need a better syntax proposal), but if it was possible I bet you could create all sorts of very cool, expressive and ergonomic APIs using this. Especially useful for more declarative APIs, like builders and DSLs. Imagine being able to pass in a list of column definitions to a SQL `Scan` function and have it return typed values.
Sorry, I completely forgot about this one. I wrote a similar but more precise version at #66651.
Update

As pointed out by @seankhliao, the pieces of this involving channels were already covered in #41148. However, I think that the generics-related code is still worth discussing. Therefore, this proposal is now only for the `...` constraint detailed below.

By limiting the proposal to this constraint, that also partially fixes the `reflect` issue. A variable of a type constrained by `...`, such as `T ...`, would not be allowed to be assigned to anything not also of that exact type `T`, meaning that even passing it to something that takes an `any`, such as `reflect.ValueOf()`, would be illegal.

However, this doesn't fix it completely. In order for this proposal to be useful, things like `type Future[T ...] struct { val T }` would have to be allowed, which would mean that `reflect` could still gain access to the 'variable' via the struct. I'm unsure of how to handle this, though I have a few ideas. One of them is touched on briefly in the original proposal below.

Original Proposal
Author background
Related proposals
Proposal
In Go, functions can return multiple values. This is handled directly, rather than via tuples, which is how a lot of other languages, such as Python and Rust, handle it. Because of this, some expressions in which it would be useful are not capable of yielding multiple 'returns' directly. For example, a user can declare a `chan int`, but not a `chan (int, error)`. Due to the convention of returning `error` from functions directly as their own value, this can lead to some complication when trying to handle errors in other cases, such as when sent from a channel. The common solutions are to either use two channels, one for values and one for errors, or to define a top-level type that can hold the value for you. The first solution is better suited to specific situations, while the second solution mostly solves the problem, but can lead to namespace pollution and other annoyances, especially when the channel operation is happening across package boundaries.

This proposal is not a proposal for general tuples. Instead, this proposal is to extend the usage of multiple 'returns' to other places in which types are specified. These 'tuples' would not be possible to store in a single variable except for in one specific situation, but would instead have to be destructured immediately, just like multiple returns from a function. Unfortunately, this can lead to some ambiguities, so a small new syntax change would have to be introduced to fix them, but it would only apply in those cases.
The biggest complication that I can think of with these changes is how to handle them with `reflect`. I think the simplest approach is to introduce the idea directly into `reflect.Type` with some new methods that would allow you to pull out the types one by one, much like how it currently works for the multiple returns of a function. This is potentially a backwards-compatibility issue, as existing code that didn't expect, for example, the element type of a channel to itself have multiple types would break when given a channel that does have multiple types.

It would be quite useful for a number of recent proposals. For example, in #56102 the current thinking is to add both `Lazy[T any]` and `Lazy2[T1, T2 any]` just to allow people to use either functions that do or functions that do not return an error. This has happened in a number of other places as well. This proposal would allow that to be replaced with `Lazy[T ...]` instead, handling not only those cases but any number of returns from the wrapped function. #56461 is another that would benefit. See also the `reflect` issue detailed above.

Costs

`gopls` and `gofmt` would definitely be affected because they would need to support the new syntax, but the impact should be relatively minimal.