Closed · rodcorsi closed this 4 years ago
Why not just make one type a field of the other type?
We can embed one struct in another to reduce this problem, but you don't always have full control of the application: for example, if you use a generator to create your model, if you need to pass configuration to a library, or if you are connecting one part of the application to another.
To understand this better, I searched for the pattern Field: any.Field in some projects using this command:

grep -rnP --include \*.go "\s(\w+):\s+\w+\.\1[,}]" . | wc -l
prometheus 1527 occurrences
kubernetes 6982 occurrences
docker 1731 occurrences
minio 228 occurrences
Thanks for looking at real code. Can you link to a couple of examples? That would help see what is going on there. Thanks.
I want to show some real code examples:
Copy fields to different types of structs
return &Scheduler{
    SchedulerCache:      config.SchedulerCache,
    Algorithm:           config.Algorithm,
    GetBinder:           config.GetBinder,
    PodConditionUpdater: config.PodConditionUpdater,
    PodPreemptor:        config.PodPreemptor,
    Framework:           config.Framework,
    NextPod:             config.NextPod,
    WaitForCacheSync:    config.WaitForCacheSync,
    Error:               config.Error,
    Recorder:            config.Recorder,
    StopEverything:      config.StopEverything,
    VolumeBinder:        config.VolumeBinder,
    DisablePreemption:   config.DisablePreemption,
    SchedulingQueue:     config.SchedulingQueue,
}
Both types have exactly the same fields, so with the proposed syntax this becomes:

return &Scheduler{config...}
Copy a struct of the same type while changing some fields
newEv := &evaluator{
    endTimestamp:        ev.endTimestamp - offsetMillis,
    interval:            ev.defaultEvalInterval,
    ctx:                 ev.ctx,
    currentSamples:      ev.currentSamples,
    maxSamples:          ev.maxSamples,
    defaultEvalInterval: ev.defaultEvalInterval,
    logger:              ev.logger,
}
IMO this is shorter and keeps the expressiveness:
newEv := &evaluator{
    endTimestamp: ev.endTimestamp - offsetMillis,
    ev...,
}
Copy fields from type *SDConfig to gophercloud.AuthOptions
opts = gophercloud.AuthOptions{
    IdentityEndpoint:            conf.IdentityEndpoint,
    Username:                    conf.Username,
    UserID:                      conf.UserID,
    Password:                    string(conf.Password),
    TenantName:                  conf.ProjectName,
    TenantID:                    conf.ProjectID,
    DomainName:                  conf.DomainName,
    DomainID:                    conf.DomainID,
    ApplicationCredentialID:     conf.ApplicationCredentialID,
    ApplicationCredentialName:   conf.ApplicationCredentialName,
    ApplicationCredentialSecret: string(conf.ApplicationCredentialSecret),
}
Here we need to explicitly ignore some fields of SDConfig. IMO this alternative is better because if fields are created, updated, or removed, we are forced to handle the change here:
*SDConfig{
    Password:                    _,
    DomainName:                  _,
    DomainID:                    _,
    ApplicationCredentialSecret: _,
    Role:                        _,
    Region:                      _,
    RefreshInterval:             _,
    Port:                        _,
    AllTenants:                  _,
    TLSConfig:                   _,
    ...gophercloud.AuthOptions{
        Password:                    string(conf.Password),
        ApplicationCredentialSecret: string(conf.ApplicationCredentialSecret),
    }: opts, // the rest of the fields assign a new instance of gophercloud.AuthOptions
} := conf
I like this idea, but how should this work for embedded structs? Does everything get copied recursively?
I think embedded types need to be copied recursively, unlike a regular child struct field:
type Color struct {
    R byte
    G byte
    B byte
}

type Wheel struct {
    Size int
}

type Car struct {
    Wheel
    Color Color
}

type Vehicle struct {
    Size  int
    Color Color
}
func VehicleFromCar(car Car) Vehicle {
    // Expands all "Car" fields, including the promoted "Wheel" fields,
    // but the "Color" field won't be expanded:
    // return Vehicle{
    //     Size:  car.Size,
    //     Color: car.Color,
    // }
    return Vehicle{car...}
}
func CarFromVehicle(vehicle Vehicle) Car {
    // Expands all "Vehicle" fields to match the "Car" fields;
    // a new instance of "Wheel" will be required:
    // return Car{
    //     Wheel: Wheel{
    //         Size: vehicle.Size,
    //     },
    //     Color: vehicle.Color,
    // }
    return Car{vehicle...}
}
If I understand this proposal correctly, changing a field name in one package can cause a silent behavior change in a different package. It could mean that the ... operation would stop, or start, assigning a field value. The same might be true for changing a field type. That seems like a problem. As a general guideline, supporting programming at scale means that changes to one package should not affect other packages, or they should affect other packages in an "obvious" way, or they should cause an error.
The ... operation needs to match all fields; fields that differ need to be explicitly ignored with _. Changing a field in any package will cause a compile error.
Given the previous example, changing Vehicle to:
type Vehicle struct {
    Size  int
    Color Color
    Price float64 // new field
}
means this function no longer compiles:
func CarFromVehicle(vehicle Vehicle) Car {
    return Car{vehicle...} // vehicle now has a Price field that doesn't exist in Car
}

// now it is required to ignore the Price field
func CarFromVehicle(vehicle Vehicle) Car {
    Vehicle{Price: _, ...Car{}: rest} := vehicle
    return rest
}
Under the current proposal this function continues to compile normally, but I agree with @ianlancetaylor that this could introduce bad behavior:
func VehicleFromCar(car Car) Vehicle {
    return Vehicle{car...}
}
I think we need to change the current proposal to allow the use of the ... operator only if all fields are assigned.
With this change, this function no longer compiles:
func VehicleFromCar(car Car) Vehicle {
    return Vehicle{car...}
}

// now it is required to add a value for Price
func VehicleFromCar(car Car) Vehicle {
    return Vehicle{
        Price: 0,
        car...,
    }
}
Thanks, I managed to miss that.
I just want to note that the newEv := &evaluator{ example in https://github.com/golang/go/issues/33957#issuecomment-527183815 is not all that convincing, as it could be done with a simple assignment followed by a field change.
The case data2 := DataModel{Value: "bar", data...} in the original comment seems potentially confusing. If I understand correctly, the presence of Value in the composite literal disables copying the Value field from data. That seems tricky.
Separately, consider
type S1 struct{ i int }
type T struct{ S1 }
type S2 struct{ i int }

var v = S2{T{S1{0}}...}
Here the question is how the promoted field of T should be handled when used with .... Does it correspond to the normal field of S2?
I understand that the promoted fields of T would correspond to the normal fields of S2.
We have similar behavior today:

type S1 struct{ i int }
type T struct{ S1 }
type S2 struct{ i int }

t := T{S1{0}}
// var v = S2{t...}
v := S2{}
b, _ := json.Marshal(&t)
json.Unmarshal(b, &v)
> I just want to note that the newEv := &evaluator{ example in #33957 (comment) is not all that convincing, as it could be done with a simple assignment followed by a field change.
You're right, but I found many similar cases and thought it was important to show.
Could we imagine doing the same thing for slices?
This code:
s3 := []int{9, 7, 10}
s2 := []int{8, 7, 3}
s1 := []int{3, 5, 4, 3, 5}
s1 = append(s1, append(s2, s3...)...)
would become:
s3 := []int{9, 7, 10}
s2 := []int{8, 7, 3}
s1 := []int{3, 5, 4, 3, 5, s2..., s3...}
It's a little more concise; I've already found myself in cases where I had to do this:
c.counters = append(before, middle[0][:]...)
c.counters = append(c.counters, middle[1][:]...)
c.counters = append(c.counters, middle[2][:]...)
c.counters = append(c.counters, after...)
whereas it could have been written as:
c.counters = []int{
    before...,
    middle[0][:]...,
    middle[1][:]...,
    middle[2][:]...,
    after...,
}
And since the language supports multiple assignment, it would be natural to be able to assign by destructuring a slice:

key, _, value := []string{"jean", "jacque", "pierre"}
@alexisvisco The slice idea is approximately #4096. Let's not confuse it with the struct idea. Thanks.
@rodcorsi When the fields match exactly then the conversion can already be shortened.
https://github.com/kubernetes/kubernetes/blob/d114f33768e4a5874a9d23f611b96219b7f20741/pkg/scheduler/scheduler.go#L294 could be written as:
scheduler := Scheduler(*config)
return &scheduler
A playground demonstration https://play.golang.org/p/uMN1YyZSYQ9 with a similar concept.
Within the scope of this proposal, in this case both syntaxes give the same result:

scheduler := Scheduler(*config)
scheduler := Scheduler{config...}

But if one of the types has an embedded struct, the conversion isn't possible.
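To make the comparison concrete, here is a minimal sketch of the conversion that already works today (Config and Scheduler are simplified stand-ins, not the kubernetes types): Go permits converting between struct types whose fields have identical names and types, and has ignored differing struct tags in such conversions since Go 1.8.

```go
package main

import "fmt"

// Hypothetical stand-ins for the kubernetes Config/Scheduler pair.
type Config struct {
    Name string
    Size int
}

type Scheduler struct {
    Name string
    Size int
}

func main() {
    config := &Config{Name: "default", Size: 3}

    // Legal today: identical field names and types allow a direct conversion.
    scheduler := Scheduler(*config)
    fmt.Println(scheduler.Name, scheduler.Size) // default 3

    // If Scheduler gained an extra field, or embedded one of its
    // fields in another struct, this conversion would stop compiling.
}
```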
As noted in the original comment, this can be done using reflect. Before changing the language let's consider whether there is some way to make that efficient enough. For example, it might be possible to write a function that takes two reflect.Type values and returns a function that assigns fields from a value of one type to a value of another type. This would be somewhat similar to reflect.Swapper, although it wouldn't need any special support from the reflect package.
Nobody has replied to the previous comment, but it seems workable, and there's no clear reason why that wouldn't work. On that basis, this is a likely decline. Leaving open for four weeks for final comments.
@ianlancetaylor While reflection can work, and there are packages that sort of do that already (I believe github.com/mitchellh/mapstructure can cover that case), such a solution will not support "programming at scale", as you put it. These solutions are flaky and produce runtime errors or weird behavior when either one of the structs changes. With compiler-checked destructuring, you can do that confidently without having to worry about an upstream struct you depend on changing, because your program will stop compiling.
Also, the language technically already supports destructuring slices into varargs with the same operator. This just expands the scope and makes the world a little bit safer.
@ianlancetaylor Just to build on what has already been stated: there are already ways to work around this without implementing the feature, but you could say the same about the destructuring of slices to fill varargs, mentioned by @urandom.

What would be the point of the slice-destructuring ability if we're all able to just loop through the arguments and build temporary slices? It exists because the alternative is more boilerplate with little substance. Making the best of the little things is where Go shines, and this is definitely something that could help with the boilerplate in code that has to fill out large structures.
Every proposal could help. There's no question about that.
The question is whether the benefit of the change is worth the cost of additional syntax and everybody learning what it means.
Varargs is used by lots of different code, including, obviously, the very widely-used fmt package. It doesn't seem comparable to this proposal, which would be used by orders of magnitude less code.
While it's true that using the reflect package produces errors at run time rather than at compile time, the point is that this can be done today, albeit imperfectly. So overall the earlier comments still apply. Closing.
It is normal to have a situation where we need to assign a struct using fields from another struct, an example is when the request or response type is different from the storage type. In these situations we can simply write:
The problems start when we have a struct with many fields, or when new fields are added and we forget to handle them. We could use reflection to copy fields with the same name and type from one struct to another, but it is slow and the code a little complex.
I'm trying to propose a language change to safely copy fields from one struct to another. The idea is to use the ... operator to copy the fields with the same name and type to another struct. Example:
Uses the RequestData fields to create a new instance of DataModel. It is important to observe that all fields need to be handled when using the ... operator. In this case, Other and AnyID need to be set.

This example won't compile because DataModel has different fields from RequestData. The fields Other and AnyID don't exist in RequestData.

It creates a new instance of DataModel but changes the field Value.
Embedded structs

When you use the ... operator, the embedded struct fields will be handled as normal fields.

Destructuring fields
Destructuring with rest: the field RequestID will be ignored (_) and the rest of the fields will be assigned to the new DataModel instance, but it is still required to set Other and AnyID.