t0yv0 opened 1 month ago
I agree that this would be a good idea.
I'd encourage us to move away from the gRPC regression tests as the default, as these can be quite sensitive and tend to overfit on bridge internals. There was quite a bit of noise from them during the PRC work.
I've recently gotten a lot of mileage out of the schema + YAML program + pulumitest tests in the SDK: https://github.com/pulumi/pulumi-terraform-bridge/blob/f317dc4a503d3430fbe13f3af964207f0a17616e/pkg/tests/schema_pulumi_test.go#L19
We now have similar tests in PF: https://github.com/pulumi/pulumi-terraform-bridge/blob/67c3b5765ae629a67e9ab69815c09a8b3a015098/pf/tests/schema_and_program_test.go#L18
Is this a reasonable place to point contributors? It requires them to copy in the relevant TF schema and the YAML program used. I'd be happy to write up a short guide to using these.
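For contributors who haven't seen these tests, the YAML program half of a repro is a small self-contained Pulumi program exercising the bridged resource. A minimal sketch of what gets copied in (the `prov:index:Test` token and the property names here are illustrative placeholders, not a real provider schema):

```yaml
# Minimal Pulumi YAML program for a bridge regression repro.
# "prov" is a stand-in provider name; the resource and properties
# must match the copied TF schema under test.
name: test
runtime: yaml
resources:
  mainRes:
    type: prov:index:Test
    properties:
      tags:
        key: value
```

The accompanying Go test then feeds this program, together with the copied TF schema, through the bridged provider and asserts on the resulting plan or state.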
Yes indeed, gRPC-level tests are not the right form for the common regression-testing use case; we abused them a bit. The tests you point to are about ideal for the purpose. That's a great starting point.
Per @corymhall's feedback, it would be very good to centralize the flow of turning bridged provider tests into bridge regression tests and adding them. We start with a provider such as pulumi-aws, a Pulumi program, and a scenario, and we need to turn this into a minimal repro that mimics the outer provider definition and reproduces the issue (with bonus points for extras such as making a debugger attachable or cross-testing this against the TF CLI). There are now a couple of ways to do this, which can be confusing for contributors.