The main.tf is optional. For blob, we've found it easier to create the bucket once, in our shared project (AWS + GCP), and hard-code the name of the bucket in the test file:
https://github.com/google/go-cloud/blob/master/blob/gcsblob/gcsblob_test.go#L37
Issue #300 captures automating this with Terraform, but it has been low priority. It would make it easier for external contributors to -record tests. On the other hand, the advantage of always using the same resources when possible is that the golden files don't have to change: if I use a different resource than you, then any time I run the tests and check in results, all of the golden files have to be updated. There's some discussion of this here.
TL;DR, for now it is probably fine to just create the resources we need and hard-code whatever data is needed in the test to access them.
Using a fixed resource means tests can never be run concurrently.
The right way to handle changes to tests with record/replay is to record some initial state along with the RPC traces. Both the httpreplay and rpcreplay packages do this. The tests that use them create random, unique resources on every run, and record those resources (or the random seeds) so replay can use the same names.
"Using a fixed resource means tests can never be run concurrently"

Right -- they cannot be run in -record mode concurrently.

"record some initial state along with the RPC traces"

That is what #300 is about, using Terraform to create the random/unique resources (and saving the Terraform output file to record them).
For blob at least, there are actually two levels of resources:
-- The "bucket". This is what is hard-coded today, and Terraform could replace. The down side of this is that every RPC request/response has the bucket in it, so if you change the bucket, you have to re-record all of the tests (this makes code reviews harder and is annoying, but not necessarily a deal breaker).
-- The blobs in the bucket. For blob today, each test uses different blob names. This could also be changed to include some random prefix during -record, and save the prefix for use during -replay, but IMHO it'd be better to do that at the bucket level. An additional complication is that some providers only have eventual consistency for List; the List tests today "solve" this by not cleaning up the blobs after the test, so the test might fail the first time you run it against a bucket, but should eventually succeed.
Not sure how this maps onto pubsub though.
(and I would vote for hand-creating the needed resources, hard-coding the names for the tests, and making progress on the tests that way; we can tackle the larger problem later with #300).
I suggest doing this sooner rather than later, because I suspect it will suss out a bunch of places where it's hard to make tests repeatable for record/replay, i.e., anything that may do things out of order because it creates a goroutine.
Doesn't the record/replay tool match by request, so that order doesn't matter?
"Doesn't the record/replay tool match by request, so that order doesn't matter?"
No, in our version order matters.
This should include a main.tf to create a GCP PubSub topic and subscription. See https://www.terraform.io/docs/providers/google/r/pubsub_topic.html, https://www.terraform.io/docs/providers/google/r/pubsub_subscription.html.
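A minimal sketch of what that main.tf could look like, assuming Terraform 0.12+ syntax; the topic and subscription names and the project variable are placeholders rather than whatever names the tests end up using:

```hcl
provider "google" {
  project = var.project
}

variable "project" {
  type        = string
  description = "GCP project in which to create the test topic and subscription (placeholder)."
}

resource "google_pubsub_topic" "test_topic" {
  name = "go-cloud-pubsub-test-topic"
}

resource "google_pubsub_subscription" "test_subscription" {
  name  = "go-cloud-pubsub-test-subscription"
  topic = google_pubsub_topic.test_topic.name
}

# Tests would read these outputs to know which resources to talk to.
output "topic" {
  value = google_pubsub_topic.test_topic.name
}

output "subscription" {
  value = google_pubsub_subscription.test_subscription.name
}
```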