Closed: belak closed this issue 4 years ago.
By default, the option requireUnimplementedServers should default to false.
If this defaults to false, nobody will use it. Note that we wouldn't even have created a flag if we were starting from scratch. The only reason for the flag is to enable backward compatibility without requiring code changes.
It was always supposed to be a requirement (undocumented, most likely, so I have no pointers for you) that the addition of new methods to a proto would be a backward-compatible change. The initial implementation of the grpc-go proto codegen that didn't allow for this broke that requirement.
This issue is covered in #2318 and https://github.com/golang/protobuf/issues/784.
If you have suggestions for other ways to implement this requirement, then we're willing to discuss -- we still haven't released 1.0 yet, so things can be changed. This is the best approach we found.
Note that C++ and Java both have similar implementations. In C++, there is a base class that you use and then override methods for the handlers, and in Java, the service is generated as an abstract class that your implementation must extend.
By default, the option requireUnimplementedServers should default to false.
If this defaults to false, nobody will use it. Note that we wouldn't even have created a flag if we were starting from scratch. The only reason for the flag is to enable backward compatibility without requiring code changes.
It seems useful, especially in a language where you don't have to explicitly implement an interface or override methods, to make this optional because, as mentioned before, this removes some of the compile-time checks that you're doing things correctly. Even before this point, I'm fairly sure you could still embed the unimplemented version if you wanted to.
It was always supposed to be a requirement (undocumented, most likely, so I have no pointers for you) that the addition of new methods to a proto would be a backward-compatible change. The initial implementation of the grpc-go proto codegen that didn't allow for this broke that requirement.
It makes perfect sense for clients to not break when things are added, but if you're dealing with the canonical implementation of a service, isn't it more logical to realize you're not implementing an interface at compile time rather than runtime? You're not meeting the contract specified by the service you're implementing. I do understand that for services which are implemented in multiple places this doesn't make as much sense and the changes are an improvement.
If you have suggestions for other ways to implement this requirement, then we're willing to discuss -- we still haven't released 1.0 yet, so things can be changed. This is the best approach we found.
Would you consider making the generated method provided by the UnimplementedSomethingServer public? Something like MustProvideForwardCompatibility
This way you could still opt out if you absolutely wanted to and the generated code wouldn't need to be different between versions.
Alternatively, a second generated struct could be provided allowing you to opt out of the forward compatibility in services when it's embedded - both UnimplementedSomethingServer and ManualSomethingServer could be provided as embed options, with a strong recommendation to use the unimplemented version unless yours is the canonical implementation of the RPC service.
Note that C++ and Java both have similar implementations. In C++, there is a base class that you use and then override methods for the handlers, and in Java, the service is generated as an abstract class that your implementation must extend.
The nice thing about Java though is that you at least get a compile-time guarantee that you didn't misspell a method with @Override. It's been a while since I've written C++, but I assume there would be compile-time checks against the header file. With Go and implicit interfaces it's not as safe to do that.
Thanks for explaining your reasoning - I wasn't aware other languages already did this.
The nice thing about Java though is that you at least get a compile-time guarantee that you didn't misspell a method with @Override. It's been a while since I've written C++, but I assume there would be compile-time checks against the header file. With Go and implicit interfaces it's not as safe to do that.
That's a good point. OOP languages have a way of distinguishing adding methods vs. overriding methods, which prevents typos like this. Go isn't OO (it lacks inheritance), and so the type system isn't capable of this kind of feat.
Brainstorming some other options:
- Make RegisterFooServer accept interface{}. Applications could then use var _ FooServer = (*myFooServer)(nil) for the check you wanted, above. For applications that embed UnimplementedFooServer, half of the value of the type check is already lost, so maybe this is not that bad.
- Alter UnimplementedFooServer so it is essentially a dispatching implementation instead:
type UnimplementedFooServer struct { ... }
func (s *UnimplementedFooServer) MyMethod(...) ... { return s.myMethod(...) }
func (s *UnimplementedFooServer) SetMyMethod(m func(...)...) { s.myMethod = m }
func UnsafeWrapFooServer(s UnsafeFooServer) FooServer {
    // Return a simple wrapper that adds a magic method to s.
    // Breaks if a new method is added (due to the UnsafeFooServer parameter type).
}

// Usage:
func main() {
    s := &myFooServer{}
    u := &pb.UnimplementedFooServer{}
    u.SetMyMethod(s.MyMethod) // etc...
    pb.RegisterFooServer(grpcServer, u)

    // -- or --
    pb.RegisterFooServer(grpcServer, pb.UnsafeWrapFooServer(&myFooServer{}))
}
I always appreciate the work of the grpc-go maintainers, but I do think this issue deserves some speaking up, so my two cents:
It was always supposed to be a requirement (undocumented, most likely, so I have no pointers for you) that the addition of new methods to a proto would be a backward-compatible change. The initial implementation of the grpc-go proto codegen that didn't allow for this broke that requirement.
It wasn't a requirement though - there's a lot of things I do in my code that I wish I hadn't done, but unless I'm willing to v2 the code, that's just how it is. Changing that on a minor will cause more harm than the problem it solves.
If you have suggestions for other ways to implement this requirement, then we're willing to discuss -- we still haven't released 1.0 yet, so things can be changed. This is the best approach we found.
The issue is that grpc-go as a whole IS 1.0, and has been for a while. More to the point, in practice, this generator is a port from another 1.0'ed repository, where people are being asked to move to this.
With setting the generator as unreleased and adding this requirement based on granular and/or documented compatibility guarantees, perhaps this doesn't violate the letter of SemVer, but it certainly violates the spirit. It also doesn't follow the principle of least surprise - I happen to follow these issues closely, but the vast majority of gRPC users do not, and adding a (especially runtime) requirement to embed this could quite literally break the internet. I sympathize with the intent here, but in practice, this generator should be considered 1.0 and not have breaking changes. Opt-in to this feature, while not optimal, is better than lots of users being broken, which will result in users leaving the Protobuf/gRPC ecosystem out of a lack of trust.
It wasn't a requirement though - there's a lot of things I do in my code that I wish I hadn't done, but unless I'm willing to v2 the code, that's just how it is. Changing that on a minor will cause more harm than the problem it solves.
Adding methods to services was always intended to be backward compatible. It is a backward compatible change in every other language as far as I'm aware. It is also backward compatible at a wire protocol level. Asking users to do a major version bump of their protocol for every new method added, just because of Go, is unacceptable. In a monorepo, this problem can be mitigated by updating every service implementation atomically with the addition of the method, but that doesn't work in OSS.
The issue is that grpc-go as a whole IS 1.0, and has been for a while. More to the point, in practice, this generator is a port from another 1.0'ed repository, where people are being asked to move to this.
Flags will be provided in protoc-gen-go-grpc to enable backward-compatible behavior. If you can't tolerate the new version of the codegen, set the flag and you get the old codegen's code. By default we want to encourage the new behavior, which will allow the addition of methods to be backward-compatible. If it helps avoid confusion, we could name the first release of protoc-gen-go-grpc v2.0 to follow the "spirit" of semver more closely, though I fail to see how this would help, practically speaking.
adding a (especially runtime) requirement to embed this could quite literally break the internet.
The current implementation of this functionality makes it a compile-time requirement, not anything checked at runtime. We are aware of pros and cons of various alternatives and will do everything we can to minimize any disruption.
It is also backward compatible at a wire protocol level. Asking users to do a major version bump of their protocol for every new method added, just because of Go, is unacceptable.
This isn't what's happening though - users' Go code is being broken with this change. It's up to users to decide what compatibility requirements they want for their generated code - for example, we wouldn't expose generated gRPC code as part of SemVer.
In a monorepo, this problem can be mitigated by updating every service implementation atomically with the addition of the method, but that doesn't work in OSS.
This is the key thing though: 99% of gRPC users are not in a monorepo, and SemVer matters as a part of the Golang toolchain at this point.
Flags will be provided in protoc-gen-go-grpc to enable backward-compatible behavior. If you can't tolerate the new version of the codegen, set the flag and you get the old codegen's code.
This in itself is not backwards-compatible. Making breaking changes that are opt-out is not a backwards-compatible change, it has to be opt-in.
This change will further erode trust in the Protobuf/gRPC community and have the opposite effect of its intent. I strongly recommend reconsidering this, and perhaps just making this flag opt-out for Google-internal use? More to the point, set this flag to opt-in for everyone else, and then opt into it Google-wide?
Making breaking changes that are opt-out is not a backwards-compatible change, it has to be opt-in.
Note that there is no change to existing tools. The gRPC service generator in github.com/golang/protobuf/protoc-gen-go behaves as it has in the past. Someone changing from github.com/golang/protobuf/protoc-gen-go to google.golang.org/grpc/cmd/protoc-gen-go-grpc will need to choose between updating their code or setting --go-grpc_opt=requireUnimplementedServers=false, but they're already making an explicit choice to change generators.
What was the problem with the original approach? Before you could choose to embed the UnimplementedXXXServer and you would get forward compatibility, but it wasn't required. Requiring that it be embedded makes it so when projects update protobufs, they may miss implementing new methods, which seems worse.
What if there were two structs provided that could be embedded, one which provides forward compatibility and one which doesn't? It wouldn't require a wrapper, the default method name could point to the Unimplemented version rather than the one which doesn't provide forward compatibility, and you would have to explicitly choose to not have forward compatibility for service definitions.
Someone changing from github.com/golang/protobuf/protoc-gen-go to google.golang.org/grpc/cmd/protoc-gen-go-grpc will need to choose between updating their code or setting --go-grpc_opt=requireUnimplementedServers=false, but they're already making an explicit choice to change generators.
Yea but it's a "choice" in the same way people have the choice to not upgrade to golang/protobuf 1.4+ - people are being strongly suggested to come over here, and plugins=grpc is being deprecated. As I said, you can make the argument that this is OK per compatibility guarantees/granular versioning, but this definitely violates the spirit of backwards compatibility.
What was the problem with the original approach?
The problem is that the grpc-go codegen missed the requirement that the addition of new methods should be backward compatible.
For many services, this isn't actually a problem, because there is only one service implementer, and they also own the definition, and can update the two in lockstep. There are many other services that are more generic in nature, however, and have many implementers. E.g. the gRPC health service. Here, we added a watch-based health check, and consequently broke everyone attempting to implement their own health service and importing our canonical service definition. Note that in OSS, protos cannot be registered multiple times, so anyone referencing our health protos in code should be using our .pb.go, or they risk running into init-time panics.
when projects update protobufs, they may miss implementing new methods, which seems worse.
This is the case in literally every other language, and is a desired feature, not a deficiency. If you forget to implement a method, your clients will get UNIMPLEMENTED, which is perfectly appropriate.
Making breaking changes that are opt-out is not a backwards-compatible change, it has to be opt-in.
It is not a requirement of this tool to be backward-compatible with the old tool. Our goal is to minimize any such differences while also bringing in the feature of future-proofing service registration. We will provide an escape valve for people looking to get on the new stack without the need to change any code, but it seems pretty reasonable to expect that some things may be different when migrating to a brand new tool.
@bufdev come on... this has nothing to do with the spirit of semver. Use the old tool if you prefer the old behavior and update all your code when ready. That much doesn't seem to be the pressing issue. (edit: although I admit now that your point about how confusing and unclean/dishonest this transition technically is remains valid especially considering how people are being pushed through the transition by all relevant documentation and example code without clear guidance on why they're encountering a breaking change and what to do about it and how to opt out of it, etc.)
For many services, this isn't actually a problem, because there is only one service implementer, and they also own the definition, and can update the two in lockstep. There are many other services that are more generic in nature, however, and have many implementers. E.g. the gRPC health service. Here, we added a watch-based health check, and consequently broke everyone attempting to implement their own health service and importing our canonical service definition. Note that in OSS, protos cannot be registered multiple times, so anyone referencing our health protos in code should be using our .pb.go, or they risk running into init-time panics.
@dfawley people didn't have to update to your new service definition until they were ready to add a 3 line "unimplemented" method that takes all of 15 seconds to write, though. I understand how this can be confusing because other grpc implementations wouldn't have required code changes to silently add a new server method and I agree it makes sense to try and close that gap. However, the whole idea of base classes in the first place is native to the referenced languages. In the case of golang, switching everyone to a default-embedded "Unimplemented" server is at least a worse out-of-the-box developer experience for us golanders, but more importantly it makes the generated code less flexible and more cumbersome to use.
For example, my grpc server stubs in my tests embed the preexisting unimplemented "base" type. It's not a secret, it's part of the public API for the generated code. However, I choose not to embed the unimplemented type in my actual code for all of the reasons @belak has outlined. I well understand that an addition to the service definition will result in a compile time failure, it's a feature. Forcing me to chose one way over the other regresses the flexibility of the generated code. Golang package ecosystem has always been more about providing primitives that enable other people to stitch things together as desired for their applications without having to entirely reinvent things, not necessarily providing bullet proof never-think-about-it-again-until-it-is-deployed-in-your-cluster-and-you-start-seeing-500s frameworks aimed at satisfying all the uber-ly unproductive dev-monsters that lurk around the intertubes.
Anyway. I haven't looked at the new codegen yet (about to do so), but here's one vote for hoping the API remains mostly as before and the frivolous desire to force default adoption of the new behavior throughout the community by introducing a gen-time option won't break projects that rely on the ability to choose whether or not to embed the unimplemented server on a site-specific basis.
Honestly though I don't see why the community needs to be pushed around at all. Is this really actually a big deal? More plugin flags are just more confusing. Can this not be solved with some rework of the server on-boarding examples/docs to better educate package users on the implications of working with grpc-go code, without the addition of a default that mutilates the server development experience by removing all of the compiler's support for interface conformance?

In fact, one might argue that this is a "failure" of the Go language, and it should not be the task of the grpc-go project to pave over things with idiosyncratic code. The only reason adding a new field to a protobuf type doesn't cause a compile-time issue is because Go lets you omit fields from struct instantiation and assigns default values to any omitted fields. There is simply no analog for interface conformance. Idiomatically, you would introduce two separate interfaces for each basic unit of functionality and expect the type to implement both. Throwing up our arms, saying we can't have nice things, and converting the issue to a runtime problem is silly.

In reality there probably should have been a new type of health check service added, and then existing health checkers could choose to conform to both the old and new health check interfaces if zero compile-time breakage was paramount. That wouldn't have broken anything, and you avoid this silly base-class hierarchical OOP-style inheritance-themed bull**** in favor of granular composition. But I'm not sold on the whole zero compile-time breakage in the first place.
I'm skeptical about the "adding service methods to an rpc service should mirror adding fields to the end of a protobuf type and not result in a compile-time issue in any language impl" requirement in the first place. The service definitions already are backwards compatible. If you codegen off an old service definition on a client but use a newer definition to gen, build, and run a server, the client will still be happily served, it just won't know about the new rpc (and vice versa). The service IDL is already backwards compatible--just like the basic protobuf types. The idea that "a project using grpc codegen should not benefit from its language's compile-time type checking and isolate developers from dependency updates by only exhibiting lazy runtime failure when interfaces are not satisfied" is totally different and simply orthogonal and is not the concern of the IDL whatsoever. The former is important to the protobuf specification. The latter is simply an observation that different languages have different idiomatic ways to describe types and interfaces and what works in one case might not work in another.
I came to protobuf/grpc so I could have a nice IDL and codegen without sacrificing my language's type system. If it wanted something else I'd use http+json. Like, what's the point of any of the codegen if not to ensure type and interface conformance at compile time?
For some transparency, since this seems like a popular and controversial issue: at this point we are leaning toward something like option 2 from https://github.com/grpc/grpc-go/issues/3669#issuecomment-642156144, which would provide both static type check protection and backward compatibility w.r.t. new methods added to services. This may mean some changes are needed to perform service registration by default, or maybe not depending on how the implementation ends up. We will share more details when they're available. In the meantime, if anyone has any concrete alternatives that also provide those features, we are happy to hear them.
Maybe I wrote too much text. My point is, "why"? I think the existing defaults are just fine and are in line with the language, idiomatically. Why require the majority of use cases to now add an "unsafe" wrapper if they want the safety of compile-time interface conformance checking? Respectfully, I think this project is disproportionately responding to a very minor nuisance by proposing to make the server registration API worse in the majority of use cases. I think striving to maintain source compatibility of generated code for interface changes literally makes no sense, because by definition an interface has changed. It should really be the responsibility of whoever is publishing a package to make sure that any changes they make are able to be consumed with as little friction as possible for any given scenario. In the case of the health check server, the people publishing the updates could simply learn from the lesson and devise a slightly more tactical approach next time they want to change an interface consumed by many dependents. Could have been as simple as providing a snippet people could paste in when updating, or something. Anyway, I've said my piece. Thanks for the color.
Why require the majority of use cases to now add an "unsafe" wrapper if they want the safety of compile time interface conformance checking?
Maybe I wasn't clear enough in my response? :smile: It is a goal to remove the "unsafe" wrapper so that type safety can be maintained.
Interfaces can't add methods and remain backward compatible per SemVer. This means you can't publish generated code for your service unless you do a major release whenever new methods are added. This is often infeasible (as in our case), because we don't want to do a major release of gRPC itself when adding a new method to a service exposed by the package (or ever, really). We could split each proto into its own module, but that's more difficult to manage, and ideally, versioning of the generated code should be able to match versioning of the proto definition itself.
Let's wait until we have some concrete proposals to discuss further. Keeping things the way they were before is still an option, but it's not the number one option we're considering at the moment.
So let's assume we want to apply sem-ver to published IDL interfaces. What a library traditionally does when it wants to change an interface is add a new "v2" interface (not requiring a major version bump) and ask people to migrate to the new one by deprecating the old interface and phasing it out over time so it can be removed when the project is ready for a version bump. You can't really have it both ways. You can't create interfaces that don't break contracts when they're changed unless you create the notion of optional method and build a way for consumers to understand that a method may not be available on all supposedly conforming implementations. In other words, without changing the definition of contract. If that's how protobuf services are supposed to work or have always been intended to work, it's news to me.
An interface is a pure contract between two parties that outlines an agreement about how the two parties can operate together. Protobuf does not specify service methods as optional. For proto3 messages, yes, all fields are optional and have default values (an interesting retreat from proto2 to proto3 because people didn't want to version their server implementations). Should gRPC take the stance that all proto service methods are actually optional, it would suggest to me something is fundamentally changing about the semantics of the word "interface".
To try and add something more concrete: it really feels like the addition of an optional prefix for service rpcs to the protobuf spec is in order, followed by, perhaps, a feature request in golang for optional interface methods. If not, at least explicit documentation explaining that when using gRPC, the implementation of protobuf interfaces by servers is, by default, purely optional. Since there's no way for a client to know what version of a server they're talking to when sending an rpc (they only know about the interface as it exists in the application's proto package they depend upon), clients will need to design for the possibility of receiving "not implemented" from their peer. Maybe this is only a grpc-go problem and the rest of the grpc world has been operating this way, but this is enough of a fundamental shift in my book that it warrants discussion of additional syntax.
Anyway, all of this discussion about the semantics of interfaces is probably only happening because the default behavior is changing. I feel like this would be much less contentious if grpc-go maintained the default behavior of requiring services to implement all the specified rpc methods and optionally allowed people to opt into a forward-compatible base server if that's important to them, much like the original proposal in the initial PR. Then perhaps the philosophical change could be eased in more gradually, with more guidance and ideally supporting proto syntax/specification that helps solve the forward interface compatibility issue, if it's important for the protobuf community to solve.
Looking forward to the upcoming reveal (;
This is the idea I'm considering right now. It is similar to the preferred approach mentioned above (option #2 in this comment), but is more fully fleshed out and would be simpler/easier to use.
To accommodate per-method registration capabilities and provide forward compatibility for new methods when they are added, we would create a new <ServiceName>Server struct. Example (based on Echo):
type EchoServer struct {
    // These functions are exactly what the application would implement
    // in the service interface previously
    UnaryEcho           func(context.Context, *EchoRequest) (*EchoResponse, error)
    ServerStreamingEcho func(*EchoRequest, Echo_ServerStreamingEchoServer) error
    ...
}
Users would initialize this struct, which would then work with gRPC as follows:
// This implements the function signature required by gRPC service handlers
// and dispatches to the user's handler (if present)
func (s *EchoServer) unaryEcho(_ interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
    if s.UnaryEcho == nil {
        return nil, status.Errorf(codes.Unimplemented, "method UnaryEcho not implemented")
    }
    in := new(EchoRequest)
    if err := dec(in); err != nil {
        return nil, err
    }
    // ... the old contents of _Echo_UnaryEcho_Handler (interceptor wiring, etc.)
    return s.UnaryEcho(ctx, in)
}
// This registers the service on the gRPC server. Note an interface is used
// to allow alternative server implementations, similar to Clients wrapping
// grpc.ClientConns.
func RegisterEchoService(r grpc.ServiceRegistrar, s *EchoServer) {
    sd := &grpc.ServiceDesc{
        ServiceName: "grpc.examples.echo.Echo",
        Methods: []grpc.MethodDesc{
            {
                MethodName: "UnaryEcho",
                Handler:    s.unaryEcho,
            },
            // ...
        },
        Metadata: "examples/features/proto/echo/echo.proto",
    }
    r.RegisterService(sd, nil)
    // Note: we don't actually need to pass s to RegisterService since the Handler fields
    // will reference the proper method on the service implementation directly.
}
Usage would be as follows:
func main() {
    // Create the gRPC server as before
    grpcServer := grpc.NewServer()

    // Initialize your implementation as before
    srv := myEchoServiceImpl{ /* ... */ }

    // New: add implementation methods to the EchoServer struct
    es := &pb.EchoServer{UnaryEcho: srv.UnaryEcho /* etc. for each implemented method */}

    // Changed: register your service with the gRPC server
    pb.RegisterEchoService(grpcServer, es)
}
If you would like to guarantee all functions are implemented by your service, we would provide the following. Note that this is unsafe, as newly added methods will break, so "Unsafe" is added to all symbols:
// UnsafeEchoService is the server API for the Echo service.
type UnsafeEchoService interface {
    // UnaryEcho is unary echo.
    UnaryEcho(context.Context, *EchoRequest) (*EchoResponse, error)
    // ServerStreamingEcho is server side streaming.
    ServerStreamingEcho(*EchoRequest, Echo_ServerStreamingEchoServer) error
    ...
}

func UnsafeRegisterEchoService(r grpc.ServiceRegistrar, srv UnsafeEchoService) {
    RegisterEchoService(r, &EchoServer{
        UnaryEcho:           srv.UnaryEcho,
        ServerStreamingEcho: srv.ServerStreamingEcho,
        ...
    })
}
These symbols would be generated only if an option is set, since additions to the interface would violate semantic versioning.
Please feel free to comment on this proposal.
This is interesting... switching to a struct rather than an interface for the common case allows for compile time safety of any methods you do implement, but would allow for automatically dispatching missing methods as unimplemented.
Additionally, still exposing the UnsafeSomethingService allows people who want the old behavior to get the compile-time interface checks.
This works with my use case and seems fairly well thought out. Thanks for taking the time to look at this in a bit more depth!
I think it might be worth questioning whether compile-time type safety in service registration is worth the limitations.
In the Go Stubby API (a Google-internal RPC system), we initially had a service registration mechanism similar to what gRPC uses today: each service has an interface definition, and you register a type which implements the interface.
At a later point, we added an alternate, optional mechanism for registering service handlers, which looks something like this:
rpc.RegisterMethod(method, func(ctx context.Context, req *somepb.Request) (*somepb.Response, error) {
    // method handler
})
One advantage of this is that it avoids the unimplemented-method problem. (Unregistered methods return a not-implemented error.) But there are other benefits. One big one is that since the method parameter to RegisterMethod is just an interface{}, we can support variable method signatures:
rpc.RegisterMethod(method, func(ctx context.Context, req proto.Message) (proto.Message, error) {
    // Generic handler receiving and returning a proto.Message.
})

rpc.RegisterMethod(method, func(ctx context.Context, req []byte) ([]byte, error) {
    // Handler receiving a raw []byte slice.
})
The downside is that RegisterMethod isn't compile-time type safe. However, in practice service registration generally only happens at startup time, so errors surface as a binary that panics immediately at startup. On balance, I think the benefits gained from not being tied to a single handler function signature have outweighed the loss of type safety.
@neild I think the gRPC analog to what you're describing is the grpc.ServiceDesc
. The Methods
in it contain a Handler
which is defined pretty generically as:
func(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor UnaryServerInterceptor) (interface{}, error)
(Streams
are similar but have differences due to being a streaming API.) This is how you would use gRPC on a server without the proto codegen.
gRPC could support a RegisterMethod function that had a similar definition.
Then the way users would use it through the generated code might look something like:
pb.RegisterEchoUnaryEcho(myEchoHandler)
pb.RegisterEchoServerStreamingEcho(myEchoServerStreamingHandler)
Compared to the proposal above, which would look like:
es := pb.EchoServer{UnaryEcho: myEchoHandler, ServerStreamingEcho: myEchoServerStreamingHandler}
pb.RegisterEchoService(grpcServer, es)
In either case, the unimplemented methods would return UNIMPLEMENTED (as all unknown endpoints do).
Neat rework! I agree with @belak.
I dissent only semantically on the choice of Unsafe to signal the concept of "source compatibility". In golang, Unsafe means "does not fall under the domain of the language's memory safety guarantees". In gRPC, Unsafe is further used to indicate a transport method that does not implement security. IMO the Unsafe behavior should be the unlabeled behavior, and the forward-compatible call should be labeled like RegisterDefaultEchoServer or RegisterCompatEchoService (per your example). If this is not negotiable, then alternates for Unsafe might be RegisterEchoServiceChecked, or RegisterEchoServiceStatic, or something. Meh.. nothing really succinct is coming to mind right now, though.
Regarding the names:
I would like to strongly encourage the use of the future-proof mechanism and strongly discourage the use of the mechanism that breaks compatibility when new methods are added to services. "Unsafe" was chosen because it implies that what one is doing is dangerous. In this case, if you expose a package and you generate these symbols, then there are two possibilities:
1. You can never update the pb.go file once a new method is added to a service without doing a major release. Or,
2. You regenerate pb.go files within a major release, and break anyone using these symbols in violation of the Go import compatibility rule.
Go has no formal notion of an "experimental" concept, but proper documentation around symbols that may break in the future is the closest we can get. Including a disclaimer directly in the name of the symbol is the most effective way to communicate this. There would also be a comment better explaining the weakness of this method. I understand that "unsafe" can have many different specific meanings, but so far this is my top choice. We need a name that properly conveys "what you're doing may lead to compile time breakages in the future".
Another option would be to give them less-scary names, and make the flag used to generate them something especially onerous like IPromiseToDoMajorReleasesWhenNewMethodsAreAddedToServices=true.
If Go had proper support for experimental features in stable packages, then we could generate them by default and give them normal names.
I think the challenge here is that there are multiple types of forward compatibility.
The problem arises when we debate the last two types there. I think many people would argue that the common case (protobufs shipped mostly so clients can be generated, and the canonical server is tied to the protos) is made less safe by the original change, and calling it Unsafe when it's the common case seems strange, even if you do want to discourage it.
That's still why this change seems strange to me, even though it now would again work for both use cases. In my mind, this always worked for both use cases, but the default case (implement everything) was made simpler, and if you needed forward compatibility in your servers, you could always embed the UnimplementedSomethingService. Reversing the direction makes what I assume will be the common case harder ("unsafe") to implement, and the less common case easier.
Exactly. I was just about to add:
It seems the severity of adding a method to a service definition outside of a major version update contextually depends on a lot of factors. The most important being whether or not you are publishing generated sources that others depend on followed perhaps by the maturity of your project and whether or not you adhere strictly to a semver versioning strategy. In many use cases, projects fall under the following category (also from the linked go modules blog):
For projects that are still experimental — at major version v0 — occasional breaking changes are expected by users
In reality, people commonly:
In both of these cases, requiring developers of existing code to make modifications in order to opt out of "safe" behavior that is not in fact wanted in favor of "unsafe" behavior they've been using everywhere and for valid reason feels coercive and backwards. Unsafe means, semantically "you should probably not be doing this". Is that really true? And if so why go through all the work to support the unsafe flow?
Because of the variety of use cases, I innately resist the coercion towards adopting one strategy as blessed. I understand that it solves for a scenario that specifically bit the maintainers of some of the common grpc code, but there are plenty of other valid ways to integrate generated protobuf/grpc sources such that the dogma feels.. undue. It really feels like this project is trying to solve a downstream problem upstream when the source is actually downstream.
I stumbled here after accidentally breaking my tools by upgrading protoc-gen-go, and then figuring out how to get this new setup working.
Just want to throw my 2 cents in here. We're building a pretty extensive system with embedded hardware (raspberry pi/nvidia xavier nx), dart/flutter/golang/c++/gcloud and interfacing everything with grpc, and after having a brief discussion with my team on whether we will use requireUnimplementedServers=false and keep using the automatic error reporting for unimplemented methods, or move to the new "safe" system, we unanimously agreed that the old way is significantly better from a product development point of view.
We see very few cases where it would be preferred to be able to keep developing servers that are missing methods. I see it as much more an edge case in actual product development, and hence it, to me, would make practical sense to have the old requireUnimplementedServers=false way be the default.
We know we will be using that flag set to false as long as it is available and doesn't break anything else.
Some extra background for context:
I believe all of gRPC's other supported languages (I checked C++, Java, C#, and Python) have the backward-compatible behavior, with no option to check at compile-time that all methods are implemented by a service. As far as I'm aware, nobody has requested such a feature in the other languages. Inside Google, it has been implemented the same way for years and for thousands of services, and it was never a problem. The main reason we are including it at all is because that behavior was previously available.
This is also a pretty old issue - the oldest reference I can find is https://github.com/grpc/grpc/issues/5371 about making the same kind of change in C# in 2016, before 1.0 was released - it seems like Go just missed the memo on this.
This was one of my earlier points:
It seems useful, especially in a language where you don't have to explicitly implement an interface or override methods, to make this optional because, as mentioned before, this removes some of the compile-time checks that you're doing things correctly.
You mentioned C++, Java, and C#, which all have compile-time checking of overridden methods. Because Go interfaces are implemented implicitly, this check never happens in Go, leading to potential unexpected runtime failures when you require the Unimplemented version of the service to be embedded.
I actually considered this a feature of the Go version, one which doesn't exist in other versions. You can get the same forward compatibility if you need it by embedding the unimplemented struct, and that way you're explicitly choosing that you want the forward compatibility for servers.
Can you point to where in the gRPC (or protobuf) documentation it says forward compatibility is maintained for servers? I can find where it says the protocol itself is forward compatible, but haven't been able to find claims about implementation. It makes sense to maintain forward compatibility for clients, as existing functionality should continue to work for clients when a new method is added, but for a server, failing to implement a method specified in a proto service definition is usually an error.
At least for me, the question ends up being "is the common case that there are multiple implementers of a service, or that there is only one?" As far as I'm aware, at least outside Google, the common case is that there is a single implementation... and in that case, this change makes that case harder to do correctly.
Thinking about it, I guess this is a number of disagreements... let me see if I can sum them up.
It sounds like there is a disconnect between #1 and #3.
If grpc-go needs to be changed in a backwards incompatible way, I honestly think requiring each user to embed something makes sense... and it would be a trivial change in the generated code to also provide another struct to embed which can provide the private method, but explicitly throws away the forward compatibility. Is there a reason that solution wasn't used? It would allow both methods to work without having to generate separate code for each use case, it reduces the number of random options which fundamentally change the generated code, and it still provides all the proper checks.
I actually considered this a feature of the Go version, one which doesn't exist in other versions.
Unfortunately, even though you and some others prefer it, many people also consider this undesirable. Note that the other languages could have chosen to implement it this way, requiring all methods to be implemented, but they did not do so intentionally. I also can't find anywhere official this is documented, but many things aren't, unfortunately.
I have a working change that I'll send out in the next day or two that includes the proposal above, as well as two additional things:
1. An UnstableFooService interface that includes all method handlers. To make your code break at compile time if you really want, you can do:
var _ UnstableFooService = myFooServiceImpl{}
I am not sure whether this symbol should be produced by default, but by putting "Unstable" in the name, I am less concerned about the problems it could cause.
2. A NewFooService constructor for easier migration from the old codegen:
func NewFooService(interface{}) *FooService
This will return a *FooService that is initialized using the handlers defined in the object passed in. Any missing methods will not be noticed. This would not be recommended for new users, since it does not detect typos. Current users can migrate by changing:
pb.RegisterFooService(grpcServer, &myFooService{})
to
pb.RegisterFooService(grpcServer, pb.NewFooService(&myFooService{}))
The PR will include migration of the examples to the new generated code, and will document the usage of the new service registration options.
Is there a reason providing two embeddable structs was not used? My alternate proposal would cause the generated code to look somewhat like this:
type FooService interface {
	SomeMethod(context.Context, *SomeRequest) (*SomeResponse, error)
	mustEmbedBaseFooService()
}

// UnstableFooService is to be embedded when forward compatibility for the server
// is not important. It simply satisfies the private requirement for the interface.
type UnstableFooService struct{}

func (s *UnstableFooService) mustEmbedBaseFooService() {}

// BaseFooService would be renamed from UnimplementedFooService to make it clearer
// most people should be using this, while allowing FooService to be used for the interface
// name. This lines up somewhat with the C# naming.
type BaseFooService struct{}

func (s *BaseFooService) SomeMethod(ctx context.Context, req *SomeRequest) (*SomeResponse, error) {
	// Note: I realize this may not be exactly right, but it should get the point across
	return nil, status.Errorf(codes.Unimplemented, "method SomeMethod not implemented")
}

func (s *BaseFooService) mustEmbedBaseFooService() {}
At least in my opinion, this is still preferable to what we have now. It makes dropping forward compatibility an explicit decision and seems to meet all the requirements I've been able to see.
It supports both use cases (while still pointing users towards the safer forward-compatible one), requires you to be explicit when dropping forward compatibility, is simpler (NewFooService doesn't need to exist, there doesn't need to be a separate interface definition for only the unstable variant, etc.), ensures that in both cases (forward compatible and not) there isn't another level of method calls wrapping the service, and still uses clear names for the embeddable structs so it's clear to new users they probably shouldn't be using the Unstable variant.
@belak, the downside of that approach is that embedding BaseFooService automatically causes the user's service implementation to implement every method - meaning if they have a typo in their real method handler's name, everything will compile but the method will return an unimplemented error when called. Using the struct and requiring an explicit assignment for each method handler adds static type checking for the methods the user wishes to implement, while also providing backward compatibility when new methods are added in the future:
pb.RegisterFooService(grpcServer, &pb.FooService{SomeMethod: myService.SomeMethod})
Alright, that's a fair reason, though it would be no worse than it is right now.
Is NewFooService meant for migrating code implementing some methods, or is it meant for migrating an UnstableFooService? If that's not recommended for new users, maybe it would be good to prefix it with something?
Also, you could drop pb.RegisterUnstableFooService if something like this was provided: func UnstableNewFooService(UnstableFooService) *FooService
Is NewFooService meant for migrating code implementing some methods, or is it meant for migrating an UnstableFooService? If that's not recommended for new users, maybe it would be good to prefix it with something?
It was created for people migrating from the previous version of the codegen tool, regardless of whether they previously embedded UnimplementedFooServer. If you have any good naming ideas to discourage its use, I'm happy to hear them. I'm okay with a not-so-scary name (and the comments indicating it isn't recommended) since it is future-proof.
Also, you could drop pb.RegisterUnstableFooService
I noticed that, too, and removed it. And even UnstableNewFooService is unnecessary, since you can accomplish that part with a type conversion + NewFooService:
// does not compile if myFooService does not implement pb.UnstableFooService:
pb.RegisterFooService(grpcServer, pb.NewFooService(pb.UnstableFooService(myFooService{})))
I sent PR #3828 for review.
FYI #3828 is now merged; please try the new tool and let us know if you run into any problems or have any concerns.
And #3828 now seems to generate the structs (and also other functions dealing with that service) with stutter...
service ContractService {...} generates type ContractServiceService struct {...}
Suffixing with Server before was better, also because the client part has a Client suffix.
IMHO this should be fixed before releasing 1.32.0
A server is a collection of services, which in the gRPC architecture is the grpc.Server type. In this case, it's more correct for the code generator to annotate your service with Service. The stuttering comes about because you suffixed your service with Service, which is similar to:
type PersonStruct struct {
	NameString string
	AgeInt     int
}
We have done the same thing in a few of our protos as well; it's not ideal but also not a deal breaker.
In addition, using Service for these symbols instead of Server allows for a nicer migration path. (See the README.)
Looking at the other languages:
- Java: <ServiceName>ImplBase abstract class
- C++: <ServiceName>::Service interface
- C#: <ServiceName>.<ServiceName>Base abstract class
- Python: <ServiceName>Servicer class
- Ruby: <PackageName>::<ServiceName>::Service class
classI think appending "Service" follows this pattern as best as possible in Go. The other option would be to not suffix it at all. (We don't convert message FooRequest
into a FooRequestMessage
struct, after all.) However, I think this would lead to some awkward symbols, e.g. pb.Health
for the gRPC health service.
Suffixing with Server before was better, also because the client part has a Client suffix.
Users of services are typically called clients so this is still appropriate.
IMHO this should be fixed before releasing 1.32.0
protoc-gen-go-grpc is in its own module and will be v0 (unstable) until we officially release it (as v1.0).
@dfawley, the main downside I see in the new registration API is that there is no type-safety -- the only registration function takes interface{}. Even if you could add some warning (like maybe a runtime error if the given object implements none of the service's methods), all compile-time checking is lost.
I understand why, since it's a side effect of being able to only provide a subset of the interface's methods, and the Go language doesn't have other language features to aid with that feature and still provide strong compile-time type-checking.
But for those that are willing to use the Unstable* interfaces (to make sure all RPCs are implemented at compile-time, regardless of compatibility impact), it would also be nice to have a RegisterUnstable* method generated, whose parameter type is the unstable interface, instead of interface{}.
WDYT?
the only registration function takes interface{}.
This isn't quite accurate:
But I can see the confusion:
The constructor for that accepts interface{}. The idea is that the constructor should not be used much, and the struct should be initialized manually. The godoc for that function pretty clearly states: "For this reason, this function should be used with great care and is not recommended to be used by most users." But maybe the design can be further improved.
But for those that are willing to use the Unstable interfaces (to make sure all RPCs are implemented at compile-time, regardless of compatibility impact), it would also be nice to have a RegisterUnstable method generated, whose type is the unstable interface, instead of interface{}.
For users who want compile-time checks, there are a few options. E.g.
var _ pb.UnstableFooService = (*myFooService)(nil)
// or at the call site:
pb.RegisterFooService(s, pb.NewFooService(pb.UnstableFooService(&myFooService{})))
This is mentioned in the migration section of the README.
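For reference, the compile-time assertion idiom mentioned above works like this in isolation; the types below are invented stand-ins for the generated UnstableFooService interface and a user's implementation.

```go
package main

import "fmt"

// Invented stand-in for the generated unstable interface.
type UnstableFooService interface {
	SomeMethod() string
}

// Invented stand-in for a user's service implementation.
type myFooService struct{}

func (myFooService) SomeMethod() string { return "ok" }

// This declaration compiles only if *myFooService implements every method of
// UnstableFooService; deleting SomeMethod above turns it into a compile error.
// It costs nothing at runtime: the value is discarded.
var _ UnstableFooService = (*myFooService)(nil)

func main() {
	fmt.Println(myFooService{}.SomeMethod()) // ok
}
```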
Maybe we need to rename NewFooService or do something else to make it obvious it's not the preferred way to create a FooService struct.
The idea is that the constructor should not be used much, and the struct should be initialized manually.
Oof, I didn't realize that. That sounds distinctly worse -- you may have type-safety, but there's nothing at compile-time that helps make sure you've initialized the struct "correctly". It's a lot of boiler-plate that is very mechanical and, let's be honest, best taken care of with code gen.
var _ pb.UnstableFooService = (*myFooService)(nil) // or at the call site: pb.RegisterFooService(s, pb.NewFooService(pb.UnstableFooService(&myFooService{})))
Right, but that is not a very satisfactory solution since you only get type-safety if you remember to use those conventions. I guess I'm slightly bothered that there is no "pit of success" here that can lean more on the compiler.
Basically, the old registration APIs were perfect for many users of gRPC. If you aren't putting the generated protos into an exported/shared/public package or library, then you probably don't care whether the generated code is backwards compatible or not. And, for many of those same users, they would actually prefer to see a compile error when a new method is added but not implemented. I like the idea of naming and documenting this alternate set of APIs as Unstable*, so the compatibility issue is clear. I just wish they more thoroughly matched what was previously provided, for stronger type-safety end-to-end vs. "opt in" type-safety.
To be clear, I do really like the newer APIs -- they provide much greater flexibility, which is awesome for some use cases. And I also like the idea of that being the "root" API, that others are built on top of. I just think the unstable-yet-more-type-safe form could go a little further...
If it's too late to consider including this in the new plugin, I guess we'll have to add this to our own set of custom protoc plugins to generate it...
That sounds distinctly worse -- you may have type-safety, but there's nothing at compile-time that helps make sure you've initialized the struct "correctly".
I don't understand. The compiler makes sure you set all the handlers correctly because of the function signatures. This is how we imagine it will be used:
Or for something fairly trivial:
Right, but that is not a very satisfactory solution since you only get type-safety if you remember to use those conventions. I guess I'm slightly bothered that there is no "pit of success" here that can lean more on the compiler.
If you do it the recommended way, then I think that's supposed to be the "pit of success". The NewFooService function is intended only for making the migration of existing code to the new scheme easier. We could have an UnsafeNewFooService, but it's not necessary (it provides no value over the optional type checking options). We could also outright replace NewFooService with UnsafeNewFooService.
If it's too late to consider including this in the new plugin, I guess we'll have to add this to our own set of custom protoc plugins to generate it...
It's not too late. We're still v0 and open to feedback.
What about providing RegisterUnstableFooService which still provides the type safety rather than having to pass it through a function which takes an interface{} to build a wrapper? Maybe building the wrapper could still happen internally, but it would avoid exposing an interface{} in the public interface for pb packages.
pb.RegisterFooService would be used with a concrete *FooService and pb.RegisterUnstableFooService would be used with an UnstableFooService.
Also, by convention "NewX" is the recommended way in Go to create an object of type X, so if NewFooService(interface{}) isn't the recommended way of creating a FooService, maybe another name would be better.
This is how we imagine it will be used:
We have services with literally dozens of methods. That will be a significant source of boiler-plate (which is why I suggested it could/should be code-gen'ed.)
If you do it the recommended way, then I think that's supposed to be the "pit of success".
A "pit of success" is a situation where the easy thing to do is also the right thing -- even if you know nothing about best practices, you basically "fall into" the right/recommended way to do a thing. And the easy thing to do with these APIs is to use the reflection construction because it will be annoying to manually wire up lots of methods (for the services with a wide number of endpoints). Also, this isn't the right kind of type safety -- you can use a type assertion to make sure your type implements all of the RPC endpoints, but then accidentally fail to wire one up in the manually constructed service struct. So the compiler's type-checking is still unable to help you.
Having to know the best practices and follow them (vs. them being the most obvious/intuitive way to do something) requires some knowledge and discipline, which is the opposite of a pit of success.
+1 to UnsafeNewFooService. My primary feedback on the current design is that I'd like to see a type-safe way of getting the old behavior.
I want to add another piece of feedback to this already long thread. I had read it quite early on at its inception but it seems the discussions have just continued.
Our use-case at my place of work is a monorepo with 10s - 100s of services defined across its proto files with anywhere between 2 and 10 methods. These are generally implemented only in one place, maybe two if there's a "fake" equivalent for use in tests by other services / clients. Such a single implementation is however frequently instantiated many times over in various servers, test suites, individual tests, etc.
The current change in the registration API is going to create multiple months of busy-work for our engineers to write boiler-plate code for each of those service instantiations. And the resulting code will be inherently less safe than what we had with the previous API, despite any of the compile-time guarantees added as part of the ongoing discussion, because the service struct might be accidentally missing one of the required methods.
I can't emphasise enough the point of the new API being 100% boiler-plate in 99% of use-cases with not a single element of meaningful syntax, which is the polar opposite of a "pit of success" or any other form of good API or developer workflow. Registering a service via the instantiation of a new generated struct instead of simply providing my own struct makes me repeat the same tedious code, every, single, time that is almost a copy-paste of the definition of my own struct:
type myServiceImpl struct{}

func (s *myServiceImpl) MethodBar(...) {}
func (s *myServiceImpl) MethodFoo(...) {}

func main() {
	impl := &myServiceImpl{}
	pb.RegisterMyService(server, &pb.MyService{
		MethodBar: impl.MethodBar,
		MethodFoo: impl.MethodFoo,
	})
}

func TestMyServiceHandlesSituationA() {
	impl := &myServiceImpl{}
	pb.RegisterMyService(server, &pb.MyService{
		MethodBar: impl.MethodBar,
		MethodFoo: impl.MethodFoo,
	})
}

func TestMyServiceHandlesSituationB() {
	impl := &myServiceImpl{}
	pb.RegisterMyService(server, &pb.MyService{
		MethodBar: impl.MethodBar,
		// ... ooops I just forgot to register my MethodFoo here but everything still compiles fine;
		// it might luckily just fail at test time, thereby only lengthening my development iteration.
	})
}

// ... etc
EDIT: I am perfectly aware of the Unstable* methods that could help out here, but my point is that those are becoming the weird-looking ugly duckling, with the term "Unstable" being an extremely strong deterrent to any sane engineer. The current new API chooses the default way to be the hardest, most brittle path, and the "easiest" way (i.e. intuitive from a Go idioms perspective, type-safe, fewest lines of code) to be the non-recommended detour.
I have read all of the previous comments and the entire discussion about this fixing the discrepancy about forward-compatibility with respect to the generated gRPC code for other languages. I understand it, truly. And I also agree that consistency is generally a good and laudable goal.
However the concept of inter-language consistency is an exception to that principle. As a developer I don't simultaneously use both C++ and Go to write a single implementation of a service. I don't write C++ the same way I write Go and I would write Java differently again, etc. This extends also to the way I use third-party libraries even when those apply to the same dependency but in different languages. I explicitly DO NOT assume or expect the same library to have identical APIs or even semantics in different languages because there's a likely need to work with each language's specificities and paradigms. As a result the discrepancy that purportedly is being fixed is not something that has ever caused me pain or confusion (and neither has it for the 10s of other devs & projects that I know that develop with gRPC in multiple languages).
As it stands from the entire above thread it appears (emphasis on this as the actual truth might be different but not yet appropriately communicated) that the only real reason for the API change is to fix a discrepancy. I have however not seen anyone mentioning or linking any past GitHub issues, blog posts or other sources of users complaining about this discrepancy (outside of the gRPC project itself). It thus seems that this change is being made for mostly aesthetic purposes despite a pretty vocal opposition from multiple well-established members of the community of Go gRPC users and an apparent lack of an actual well-documented user issue to solve.
the resulting code will be inherently less safe than what we had with the previous API ... because the service struct might lack one of the required methods.
For every other language gRPC supports, there are no compile-time or runtime checks that a service implementation includes every method defined by the service in the proto. I have never seen a request for this feature in those languages, either. This is something that is unique to the way the original Go developers chose to implement this, and should be seen as a bug, not a feature. We will continue to offer this in Go as an optional feature, since it was a part of our API in the past, and people here seem vehemently in support of it.
I have however not seen anyone mentioning or linking any past GitHub issues, blog posts or other sources of users complaining about this discrepancy (outside of the gRPC project itself).
To cite an example: #2318. We added a new method to the grpc health service, which is not supposed to be a breaking change, but it was for Go. This meant we had a choice:
The requirement behind this change is not up for debate. The mechanism by which it works is still flexible. Thank you for the feedback; we're trying to find something that will satisfy this requirement, be as easy to use as possible, and not effectively throw away the type system, which the embed of Unimplemented services does.
This is something that is unique to the way the original Go developers chose to implement this, and should be seen as a bug, not a feature.
It is easy to say that idealistically it should be seen as a bug, not a feature, but this is how an entire ecosystem of gRPC services in Go have been built, so to say it should not be considered a feature is fairly ~disingenuous~ user hostile.
This feature made it much easier to have a canonical implementation of a service. The new API makes it harder to use this common pattern correctly and easier to mess up if you choose to implement all methods.
Previously, if you registered a service, you could know it implemented a service without the additional type check. Under the new system, either all methods need to be manually registered or you need to use the unsafe convenience wrapper with a manual type check.
we're trying to find something that will satisfy this requirement, be as easy to use as possible, and not effectively throw away the type system, which the embed of Unimplemented services does.
The new system also throws away type safety (using interface{}) with the convenience methods that are named as the canonical NewFooService constructors.
This builds on the idea of using a struct for the blessed path, but also allows fully implementing the service if that is needed.
What about something like this for the generated code:
type FooServiceImpl interface {
	MyMethod(...)
}

type FooService struct {
	MyMethodImpl func(...)
}

func (s *FooService) MyMethod(...) {
	if s.MyMethodImpl == nil {
		return status.Errorf(codes.Unimplemented, "unimplemented")
	}
	return s.MyMethodImpl(...)
}
Names could be changed, but I think the idea comes across.
This would allow using RegisterFooService with either a full FooServiceImpl or a FooService. Using the blessed path of FooService wouldn't throw away type safety and would be forward compatible, but the full implementation could also be used with the same registration methods, also without throwing out type safety.
Additionally, what would you think of renaming NewFooService to NewPartialFooService (or something similar)? It makes it a little clearer that you aren't required to have all the methods and that this isn't necessarily the canonical way to make a new FooService.
Previously, if you registered a service, you could know it implemented a service without the additional type check. Under the new system, either all methods need to be manually registered or you need to use the unsafe convenience wrapper with a manual type check.
...and in every other language supported by gRPC, this is not possible at all, nor has it been requested as a feature to the best of my knowledge.
interface FooServiceImpl { MyMethod(...) }
We cannot generate an interface for a service unless we call it Unstable. It will break backward compatibility to add methods, which violates a primary requirement of the design.
Aside from that discrepancy, this looks almost exactly like what the current code does, once we remove func NewFooService(interface{}) and add func UnstableRegisterFooService(UnstableFooService).
...and in every other language supported by gRPC, this is not possible at all, nor has it been requested as a feature to the best of my knowledge.
I understand. I simply disagree with categorizing this feature of the Go version as a bug, though it feels like on this point we're just arguing semantics.
Aside from that discrepancy, this looks almost exactly like what the current code does, once we remove func NewFooService(interface{}) and add func UnstableRegisterFooService(UnstableFooService).
I wasn't aware that was planned. It seems like that would meet the requirements I'm hoping for as well. Thanks for taking a look!
I wasn't aware that was planned. It seems like that would meet the requirements I'm hoping for as well. Thanks for taking a look!
Yes, sorry, I'm working on a couple other things at the moment, but I will try to make that change this week.
I have never seen a request for this feature in those languages, either. This is something that is unique to the way the original Go developers chose to implement this, and should be seen as a bug, not a feature. We will continue to offer this in Go as an optional feature, since it was a part of our API in the past, and people here seem vehemently in support of it.
It is indeed very likely that this was not an intentional feature, but as @belak already commented, saying that it's a bug, not a feature is a fallacious argument considering that it has been part of the core workflow of using gRPC in Go for over 5 years, with all your users having based their implementations on this pattern. It is not some edge-case undefined behaviour. Pretending otherwise may be theoretically correct, but it completely dismisses the experience of your entire user base.
I have however not seen anyone mentioning or linking any past GitHub issues, blog posts or other sources of users complaining about this discrepancy (outside of the gRPC project itself).
To cite an example: #2318. We added a new method to the grpc health service, which is not supposed to be a breaking change, but it was for Go. This meant we had a choice:
I don't want to break this down into a meta-debate but quoting that exact point I made followed by a link to an issue from the gRPC project itself, closed and 2 years old at that, reads as if you have not read my message or are dismissive of anything that's not the core project.
If there's a range of projects using gRPC that have filed issues about the lack of forward-compatibility of the generated Go code then I'll hold my peace and just focus on the implementation details. But as it stands this seems like the gRPC project has shot itself in the foot and wants to address this by disregarding any users other than gRPC itself. The changed service registration API is a significant step back in the day-to-day developer experience. It still appears to just be to solve an edge-case that has only bothered gRPC itself at one point in the past even though there are other options available to address exactly that situation.
More to the technical details of how this situation has come to be and how it can be addressed:
The core issue that underpins this whole discussion, and which sets apart the gRPC team from the users in this thread, is that gRPC is trying to transpose the semantics of breaking changes from one language into another and treat multiple languages as following the same paradigms. The gRPC project treats itself as one single gigantic API surface that mixes the protobuf IDL and the 10+ supported languages. It equates a breaking change in any of these languages to a breaking change for all of them, and uses the exact same release cadence and versioning scheme across the many repositories that implement this very large API.
This might be a valid product decision, but combining such a decision with an unwillingness to bump the major version (which means you can't make breaking changes) is driving yourself into a corner. The only outcome is that you'll be creating awkward API compromises and are required to make sub-optimal and non-idiomatic designs in any of the languages that disagree with each other in their paradigms. The highlighted issue of the added method to the health service should in my eyes have been addressed by releasing a minor version of grpc and a major version of grpc-go, which is akin to one of the potential solutions that you highlighted yourself. But I disagree that it would have been a cause for "major" pains if the health service change had been the only breaking one.
It appears from the contributions of the various participants in this thread that we all tend to accept that a backwards-compatible change in a proto definition can perfectly constitute a breaking change in the generated code. We do not expect semantic versioning to cross language barriers, and I can't quote any well-known projects with APIs in multiple languages that provide such a guarantee. At best they happen to have something that seems to keep to such a rule but it's never a hard guarantee. Their APIs in the different languages are also not necessarily tied to the same versioning scheme, and why should they? Dependency management systems are mostly specific to one language, not cross-language.
My proposal is to revert to the old interface-based approach as it has served everyone well for many years. My suspicion, although I don't know if there's a documented trace of this (the commit adding it does not have any comments), is that the original authors wrote it that way because it's the natural way of declaring a service API in Go that needs to be implemented by a user. They might have not thought explicitly about the breaking-changes angle but they did think about what the idiomatic way in Go would be to achieve their goals.
The new default / recommended way of registering a service, based on filling a struct with function pointers, is deeply unnatural from a pure Go perspective. Mostly because it comes down to declaring your own struct with attached methods (i.e. implementing a given interface, whether it's explicitly declared or not) and then throwing that contract away and copy-pasting the methods attached to a specific instance into another struct. As such, the default workflow's design is a red flag from a language perspective. You are rejecting the tool that Go provides to achieve your goal (i.e. an interface type) and instead are creating tedious and brittle workarounds.
is deeply unnatural from a pure Go perspective.
The same pattern is used in the Terraform SDK. Setting functions on structs is an accepted way to avoid the backward compatibility issues with type Interface.
The issue is being forced to create a new major version just to add a gRPC method. Accepting this as Ok does not solve the proposed goal, it just says you see the goal as invalid. The proposed solution is not removing the old way, just renaming it and making it possible for proto maintainers to not break people who care about not having a broken build when a method is added in a minor version.
A linter might help, though, by identifying methods which were accidentally not set, turning the missed compile-time check into a warning. Could this be included somehow?
Accepting this as Ok does not solve the proposed goal, it just says you see the goal as invalid.
To be very clear that is exactly what I am trying to convey with my paragraph around the fact that trying to force the semantics of one language onto another for versioning purposes is not something that should be aimed for.
EDIT below
The same pattern is used in the Terraform SDK. Setting functions on structs is an accepted way to avoid the backward compatibility issues with type Interface.
The case of the Terraform SDK and the current one for gRPC are alike in their aim to allow for backwards compatibility, but the comparison stops there for me. It might be a tolerable approach for Terraform, where there's a single "interface" which any given project might implement and instantiate once or twice, and where the trade-off for backwards compatibility is favourable over the cumbersomeness of copy-pasting methods of a struct. In the case of gRPC we instead have many interfaces within a single project that are all likely to be instantiated many times over, meaning that the trade-off of workflow cumbersomeness vs. backwards compatibility is completely flipped around.
There were some changes in #3657 that make it harder to develop gRPC services and harder to find new unimplemented methods - I wanted to start a discussion around the new default and figure out why the change was made. I do understand this is in an unreleased version, so I figured a discussion would be better than a bug report or feature request.
From my perspective, this is a number of steps backwards for reasons I will outline below.
When implementing a gRPC service in Go, I often start with a blank slate - the service has been defined in proto files, the go and gRPC protobuf definitions have been generated, all that's left to do is write the code. I often use something like the following so the compiler will help me along, telling me about missing methods, incorrect signatures, things like that.
This can alternatively be done with var _ pb.ChatIngestServer = &Server{} but that theoretically leaves a little bit more memory around at runtime. After this, I add all the missing methods myself (returning the unimplemented status) and start adding implementations to them until I have a concrete implementation for all the methods.
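The compile-time assertion mentioned above can be sketched as follows; ChatIngestServer here is a hypothetical stand-in for the real generated interface, whose methods differ. The nil-pointer-conversion form avoids keeping a real value alive at runtime.

```go
package main

import "fmt"

// Stand-in for the generated pb.ChatIngestServer interface.
type ChatIngestServer interface {
	SendMessage(msg string) error
}

type Server struct{}

func (s *Server) SendMessage(msg string) error {
	fmt.Println("got:", msg)
	return nil
}

// Compile-time check: the build fails here if *Server stops satisfying
// the interface (e.g. after a method is added to the proto). The nil
// pointer conversion costs nothing at runtime, unlike &Server{}.
var _ ChatIngestServer = (*Server)(nil)

func main() {
	var s ChatIngestServer = &Server{}
	_ = s.SendMessage("hello")
}
```

If a method is later added to the interface, this declaration turns the gap into a build error at the exact line of the assertion, which is the compiler assistance described above.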
Problems with the new approach
If methods are defined on a struct (such as Server) but never wired up, they may not be detected as an unused method. I generally prefer compile-time guarantees that all methods are implemented over runtime checks.
Benefits of the new approach
Proposal
By default, the option requireUnimplementedServers should default to false. This option is more valuable when dealing with external protobufs which are not versioned (maybe there should be a recommendation to embed the unimplemented struct in this instance) and makes it harder to catch mistakes if you are developing a canonical implementation of a service which should implement all the available methods. At least for me, the problems with the new approach vastly outweigh the benefits I've seen so far.