Closed bbassingthwaite closed 6 years ago
@jba we could put all our test clients for our own tests into test subpackages, eg. fakeDatastoreClient => go/datastore/test/
Alternatively, @bbassingthwaite-va, you could just copy-paste our clients and fakes for our unit tests into your own pkg.
@adams-sarah This isn't a scalable solution for us and I can't commit to us maintaining the fakes between our many projects with this library.
I would love to see this. Bigtable is a great example of a good test package.
@jba @pongad thoughts on adding test interfaces + impls to some client libs to aid unit testing? ^
@jba and I have a sync-up tomorrow. Since this is not drop-everything urgent, I plan to discuss this then.
Just want to chime in on this as well: the areas that have emulators aren't as bad, but GCS is a particular pain point I'm experiencing right now. I'm still struggling to figure out the myriad interfaces and wrapping functions I'll have to write to make my code testable without calling GCS directly.
The client libraries do seem to work really well, but this is a pretty huge change to them. Are there examples of how you (i.e. Google) are using these libraries and testing them without calling GCS?
One of the nice things about the GCS library is that it implements the io interfaces built into the Go standard library. For example, code that reads from GCS can depend only on the `io.ReadCloser` interface and not have to deal with mocking GCS. Likewise, code that writes to a GCS object can depend on `io.Writer`.
```go
// Reading example: *storage.Reader satisfies io.ReadCloser.
object := storageClient.Bucket(BUCKET_NAME).Object(folder)
reader, err := object.NewReader(context.TODO())
if err != nil {
	return nil, err
}
var _ io.ReadCloser = reader // a *storage.Reader can be used as an io.ReadCloser

// Writing example: *storage.Writer satisfies io.Writer.
o := storageClient.Bucket(BUCKET_NAME).Object("file")
w := o.NewWriter(context.Background())
var _ io.Writer = w // a *storage.Writer can be used as an io.Writer
```
Thanks, all, for your comments. We understand that this is a pain point. Unfortunately, there's no simple solution that will work for everything.
First, as @adams-sarah mentioned, if there is any test code we have that you'd like to use but cannot for visibility reasons, let us know and we'll try to refactor.
As a general rule, the best way to write unit tests with these clients is to use a fake server. The in-memory fakes for bigtable and logging are our best examples. It's easy to set up an in-memory gRPC server, and we even have some helpers for that, which you can copy or we can expose (but they are trivial). But building and maintaining a fake for a complex API like BigTable or Spanner is expensive. The Cloud Bigtable team supports their fake, but the Spanner team does not intend to provide a fake, and we (the team maintaining these libraries) do not have the bandwidth to write one. It may be feasible to write a fake for Storage, since that API isn't too large. If that would be of interest to you, please open another bug and we'll look into it.
The next best approach is a mock server. You'd use the same in-memory server technology, but provide the server with expected request-response pairs instead of having it simulate the service. I don't think we have any full-fledged mocks at present, although we do use limited mocking throughout for some tests (as in these pubsub and storage tests).
A third option is to use RPC replay. The idea is that you write an integration test, then run it against the real service and capture the server traffic. Future calls to the integration test (with an appropriate flag) run against the stored server responses. This is essentially an automated mock. We've just started considering this ourselves, to deal with the same problem you all have: quick, non-flaky tests. We haven't even looked at it enough to know whether there is a gRPC capture framework out there. If not, we may write one. Let us know if you'd find such a thing valuable for your own testing.
@bbassingthwaite-va, your code is an example of why we recommend putting the test double at the API level. Although things aren't quite as bad as you present them: when you embed, you get the embedded type's methods for free, so you don't need to write most of the adapter methods. That doesn't work, though, for methods that return other test interfaces.
Why don't we make the Spanner methods return interfaces? For example, ReadOnlyTransaction could be an interface instead of a concrete type, and then your test wrappers would be much simpler. The problem with returning interfaces instead of concrete types is that it makes adding a method a breaking change. Adding a method to the concrete ReadOnlyTransaction type will not break anyone. Adding a method to a ReadOnlyTransaction interface would break any type that attempted to implement that interface.
We're also generally averse to adding interfaces to the client surface solely for testability. They clutter the surface with abstractions that are only interesting to testers. We try really hard to make the godoc for our clients as clean as possible.
That said, we could consider adding something like your collection of interfaces and adapters to a separate package, say spanner/test. I'm not sure that would improve your life. True, you wouldn't have to write and maintain that code. But if we added a method to the client, we would (naturally) add it to the test interface, and that would break your test implementations. You might be better off using your own test interfaces, and adding methods only when your system actually used them.
@jba Thanks for the in-depth explanation. Option 3 sounds very compelling to run with a cache flag for unit testing and a live flag for integration testing.
The adapter methods are required, because I have to propagate the interfaces/adapters down the chain.
If I have a struct `BookRepository` that depends on the `*spanner.Client`:
```go
type BookRepository struct {
	*spanner.Client
}

func (br *BookRepository) ListBooks(ctx context.Context, author string) ([]string, error) {
	ri := br.Client.ReadOnlyTransaction().Read(ctx, "books", spanner.PrefixRange(spanner.Key{author}), []string{"title"})
	titles := []string{}
	for {
		row, err := ri.Next()
		if err != nil {
			if err == iterator.Done {
				break
			}
			return nil, err
		}
		var title string
		if err := row.ColumnByName("title", &title); err != nil {
			return nil, err
		}
		titles = append(titles, title)
	}
	return titles, nil
}
```
If I changed it to take my custom interface, which is ultimately implemented by `*spanner.Client`:
```go
type ReadOnly interface {
	ReadOnlyTransaction() *spanner.ReadOnlyTransaction
}

type BookRepository struct {
	ReadOnly
}
```
I can now write a mock implementation of `ReadOnly` that provides its own behavior. But this is useless, because I cannot provide my own implementation of `*spanner.ReadOnlyTransaction`. So now I have to provide an interface for `*spanner.ReadOnlyTransaction`:
```go
type ReadOnlyTransaction interface {
	Read(ctx context.Context, table string, keys spanner.KeySet, columns []string) spanner.RowIterator
}
```
This goes down the chain of method calls, because I will also want to provide my own implementation of `spanner.RowIterator`, which is yet another interface. The adapters are necessary because although `*spanner.ReadOnlyTransaction` technically has the right method, the compiler rejects it since the signatures differ:

```go
ReadOnlyTransaction() *spanner.ReadOnlyTransaction // is not the same as
ReadOnlyTransaction() ReadOnlyTransaction
```
I don't understand the point about adding a method to an interface being a breaking change. You guys control both the interface and the struct that implements the interface. I, as a user of an interface that you guys provide, will not be broken if you add a new method. I would hope that if you guys added a method to an interface, you would also provide the implementation of that method.
Another example in addition to the ETCD golang SDK example is the kubernetes golang SDK. Their whole SDK is fronted by interfaces. For example, if I want to create a new service, I build a new client and do something like:
```go
k.clientset.Core().Services("my-namespace").Create(myService)
```
which if you were to look at the implementation is done through interfaces:
```go
type ServicesGetter interface {
	Services(namespace string) ServiceInterface
}

type ServiceInterface interface {
	Create(*v1.Service) (*v1.Service, error)
	Update(*v1.Service) (*v1.Service, error)
	UpdateStatus(*v1.Service) (*v1.Service, error)
	Delete(name string, options *v1.DeleteOptions) error
	DeleteCollection(options *v1.DeleteOptions, listOptions v1.ListOptions) error
	Get(name string) (*v1.Service, error)
	List(opts v1.ListOptions) (*v1.ServiceList, error)
	Watch(opts v1.ListOptions) (watch.Interface, error)
	Patch(name string, pt api.PatchType, data []byte, subresources ...string) (result *v1.Service, err error)
	ServiceExpansion
}
```
Which can be found here https://github.com/kubernetes/client-go/blob/204f12b1f3125cc660f989655af4b5bd83cbd9dd/kubernetes/typed/core/v1/service.go#L34
They also provide fakes for the underlying interfaces https://github.com/kubernetes/client-go/blob/204f12b1f3125cc660f989655af4b5bd83cbd9dd/kubernetes/typed/core/v1/fake/fake_service.go
This is a common pattern among Go libraries, and it makes using these libraries so much easier. I don't even need you guys to provide the fakes, but please provide the interfaces.
Another great example of providing mockable API clients is the AWS SDK for Go. See the subdirectory structure at https://godoc.org/github.com/aws/aws-sdk-go/service to get an idea of how this works.
And when you embed the interface in all your mocks, the code keeps compiling forever, even after lots of methods are added. So the Go proverb of accepting interfaces and returning structs might not be the best dogma for API clients.
I built similar adapters, like those in the AWS SDK, around the BigQuery and Datastore clients, and kept all code using them compiling without any changes to the adapters. With a bit more effort, I even abstracted away the differences between the App Engine and Cloud API flavors of the Datastore API.
@bbassingthwaite-va:
The adapter methods are required, because I have to propagate the interfaces/adapters down the chain.
Not always. For example, your function
```go
func (sa *SpannerAdapter) Apply(ctx context.Context, ms []*spanner.Mutation, opts ...spanner.ApplyOption) (time.Time, error) {
	return sa.Client.Apply(ctx, ms, opts...)
}
```
is identical to the one that you get for free. I agree you need the adapters in general, but my point was that you can write a bit less code than you actually did.
I don't understand the point where adding a method to an interface is a breaking change? You guys control both the interface and the struct that implements the interface. I as a user of an interface that you guys provide will not be broken if you add a new method.
If I write

```go
type Iface interface{ M1() }
```

and you write an implementation

```go
type Impl int

func (Impl) M1() {}

var i Iface = Impl(0)
```

and then I add a method:

```go
type Iface interface {
	M1()
	M2()
}
```

then your variable assignment no longer compiles. `Impl` in this case would be your mock implementation.
As @nightlyone points out, you can use embedding to mitigate this problem. But that doesn't change the fact that adding an interface method is a potentially breaking change, and we want to promise that stable clients don't introduce breaking changes. Maybe we should rethink that.
We could certainly do what AWS does, and provide an interface for the client in a separate package. (They do a good job of pointing out the compatibility pitfall and showing how embedding can prevent it.) But AFAICT the AWS clients are "flat," without embedded types that themselves need to be stubbed out. That means the actual client isn't "infected" with the test interfaces. For example, if their AWS Certificate Manager client actually had a `Certificate` type, it would need to be an interface solely to support testing. In other words, their actual client would have both a `Certificate` interface, used in all method signatures, and some concrete (unexported) type implementing that interface. The client would be cluttered with two types, adding a method to `Certificate` would be a breaking change, tools that take you from uses to definitions would mostly fail to work, and probably some other bad things I'm missing. That's why that sort of thing is an antipattern in Go.
If I write my own implementation of an interface you guys ship with the SDK, it is solely for unit testing purposes. I as a user would be 100% fine with dealing with those breaking changes.
At the end of the day, the libraries in their current form are hard to unit test. I personally like the "interface" way, and we've had success doing that internally, as well as integrating with both the etcd v3 and Kubernetes Go SDKs. But even if you guys decide not to go that route, there ultimately needs to be a solution that makes this possible.
Thanks!
For fault injection during testing, see also google/google-api-go-client#232 (still open, but has a change pending) for how to determine what method is being used over HTTP.
In my long comment above I alluded to a gRPC replay tool we were thinking about. That tool is now available for experimentation at cloud.google.com/go/rpcreplay.
To see how you might use it, look at our Datastore integration tests. Like all our packages, running `go test -short` invokes just unit tests, while `go test` runs unit tests as well as integration tests that hit the actual Datastore service. Running `go test -record` runs the full test suite and records the interaction with the service to a file using the `rpcreplay` package. Once that file is present, `go test -short` uses it to replay the integration tests, effectively turning them into unit tests.
@jba I'm super stoked for that package, and so are the few people I've already shared it with. I think it has a lot of use beyond just these cloud libraries. It seems to me like it would be great for discoverability for it to be part of the main gRPC ecosystem as it continues to mature.
I've written a light fake for PubSub. It runs in-process and is suitable for unit tests. I realize it's just a start; let me know how we can improve it.
We've added some important testing improvements: interface packages for storage and bigquery, and an HTTP replay tool, also for storage and bigquery. The full set of tools mentioned in this thread is now:
If you're participating in this thread and these tools don't satisfy you, please let us know. If I hear nothing (or just praise :) I'll close this issue in a week.
The provided storage and bigquery interfaces look really great so far!
@jba would you welcome a similar set of mocks for datastore, too?
@nightlyone Yes! I should have mentioned that. We welcome contributions of interface packages to the google-cloud-go-testing repo. To make it easier, you can submit a GitHub PR directly to that repo. No need to use gerrit.
I also want to emphasize that an interface package is good for more than just writing mocks. It lets you stub out the client for any sort of test double, including a fake. Best practice at Google is to mock only in special circumstances. Tests against fakes are less brittle and bring you closer to the real behavior. Don't overuse mocks.
(Apologies for the sermon.)
Closing. If you still have testing woes, please open an issue specific to the offending package.
Hi,
We are now using the Google Cloud Go SDK for the BigQuery, Bigtable, Datastore, Spanner, and Storage APIs. We love that they are now hand-rolled and consistent across the different products. But at the same time, they are all consistently hard to unit test. By that I mean it's hard to test the code that we write that uses these libraries.
For example, in order for me to properly test the code that integrates spanner, I have to write an adapter that looks like:
This isn't too bad, but I've had to write one for Bigtable, Datastore, etc.
Or you guys could just provide interfaces that make these libraries easy to test. On top of making testing easy, this abstracts away implementation details that I don't really care about.
I think a perfect example of this is the etcd v3 Go API, which returns a Client that composes the multiple interfaces etcd exposes, such as `KV`.
I understand that most of the products ship with stubs and emulators but these are too heavy for writing a unit test suite that runs in a timely manner.
Thanks, Braden