dikhan / terraform-provider-openapi

OpenAPI Terraform Provider that configures itself at runtime with the resources exposed by the service provider (defined in a swagger file)
Apache License 2.0
275 stars 48 forks

[FeatureRequest: Issue #311] Support Terraform versions greater than or equal to 1.0.0 #312

Closed bradthebuilder closed 3 years ago

bradthebuilder commented 3 years ago

What problem does this Pull Request solve?

Please link to the issue number here (issue will be closed when PR is merged): Closes #311

Type of change

What type of change does your code introduce to the provider? Please put an x (w/o leading/trailing white spaces) in the boxes that apply:

Checklist

Please put an x (w/o leading/trailing white spaces) in the boxes that apply:

Commentary

All unit and integration tests pass when using TF versions 1.0.1 and 1.0.7 on my system. I'm also able to successfully run make build. I've tested the built provider against my Antsle and it successfully created Antlets using my TF configs.

Targeting the "scratch" branch you made for my previous PR so that you're free to roll it into a future release and squash commits to your liking.

There is also a single commit addressing what I thought was a typo in some documentation.

Checklist for Admins

bradthebuilder commented 3 years ago

Side note: Need to figure out what would be the best way to make the pipeline work for forks. At the moment the pipeline logs into the docker registry to be able to pull images and run the tests but Travis CI prevents the use of secrets in the pipeline if the build is triggered by a fork.

The external PR pipeline matter is tricky. This suggests conditionally disabling portions of the pipeline if a fork is detected, but that's not exactly a desirable "solution".
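To make the "conditionally disable" idea concrete, a fork-detection guard in Travis could look roughly like this. This is only a sketch, assuming the login currently lives in a `before_install` step and that `DOCKER_USERNAME`/`DOCKER_PASSWORD` are the secret names in use; I haven't checked this repo's actual `.travis.yml`:

```yaml
# Hypothetical .travis.yml fragment: only log in to the registry when the
# build was NOT triggered by a fork, so secret-less fork builds can still
# run the parts of the pipeline that don't need authenticated pulls.
before_install:
  - |
    if [ -z "$TRAVIS_PULL_REQUEST_SLUG" ] || [ "$TRAVIS_PULL_REQUEST_SLUG" = "$TRAVIS_REPO_SLUG" ]; then
      echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
    else
      echo "Fork build detected: skipping docker login"
    fi
```

`TRAVIS_PULL_REQUEST_SLUG` and `TRAVIS_REPO_SLUG` are Travis's own predefined variables; comparing them is the documented way to tell a fork PR from a same-repo PR.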

[This has some commentary on the matter](https://travis-ci.community/t/allow-external-pull-requests-to-use-secret-variables-from-the-forked-repo/10654/5)... In Gitlab, there are various options to play with like some predefined environment variables that exist for the lifetime of a job (like CI_JOB_TOKEN) to limit authentication to temporary creds created on a per-job basis. If your Docker images are stored in GitHub's Container registry, maybe external forks can use GITHUB_TOKEN?
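For the GitLab case, the per-job credential pattern looks roughly like this, going off GitLab's documented `CI_JOB_TOKEN`/`CI_REGISTRY` variables rather than anything in this project's pipeline:

```yaml
# Hypothetical .gitlab-ci.yml fragment: CI_JOB_TOKEN is minted per job and
# expires when the job ends, so no long-lived secret is exposed to the build.
integration-tests:
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE/example-service:latest"
```

The image name here is made up for illustration; the point is that the token's scope and lifetime are bound to the job rather than to a shared secret.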

Just thinking out-loud: you want to stop unknown external code from being able to take action on your secret(s), but if the secret is authentication to retrieve a docker image, we just need the job to be able to authenticate with a temporary cred good for the lifetime of the job. That would eliminate the need to create logins per external collaborator and limit the scope of the cred to only authorizing docker pull actions... but that cred could still be pilfered from the job and then re-used during the job's lifetime outside of the job's context to pull the docker image down to an untrusted machine. So you'd need the CI job/context to be the recognized client instead of a rogue machine...

Do the docker images used actually contain anything sensitive, or can they possibly be served/retrieved from Dockerhub to sidestep the authentication piece altogether?

dikhan commented 3 years ago

> Side note: Need to figure out what would be the best way to make the pipeline work for forks. At the moment the pipeline logs into the docker registry to be able to pull images and run the tests but Travis CI prevents the use of secrets in the pipeline if the build is triggered by a fork.
>
> The external PR pipeline matter is tricky. This suggests conditionally disabling portions of the pipeline if a fork is detected, but that's not exactly a desirable "solution".
>
> [This has some commentary on the matter](https://travis-ci.community/t/allow-external-pull-requests-to-use-secret-variables-from-the-forked-repo/10654/5)... In Gitlab, there are various options to play with like some predefined environment variables that exist for the lifetime of a job (like CI_JOB_TOKEN) to limit authentication to temporary creds created on a per-job basis. If your Docker images are stored in GitHub's Container registry, maybe external forks can use GITHUB_TOKEN?
>
> Just thinking out-loud: you want to stop unknown external code from being able to take action on your secret(s), but if the secret is authentication to retrieve a docker image, we just need the job to be able to authenticate with a temporary cred good for the lifetime of the job. That would eliminate the need to create logins per external collaborator and limit the scope of the cred to only authorizing docker pull actions... but that cred could still be pilfered from the job and then re-used during the job's lifetime outside of the job's context to pull the docker image down to an untrusted machine. So you'd need the CI job/context to be the recognized client instead of a rogue machine...
>
> Do the docker images used actually contain anything sensitive, or can they possibly be served/retrieved from Dockerhub to sidestep the authentication piece altogether?

The images don't have any sensitive information, but retrieving the images from DockerHub would still raise the issue of the rate limits that DockerHub put in place in November 2020. More context here: https://www.docker.com/increase-rate-limits

The reason I added the docker login is that the build jobs started failing randomly after that date: TravisCI jobs (from any customer) share the same outbound IPs, which meant the daily DockerHub limits were hit easily and jobs failed because of that.

Some of the integration tests require the example docker containers to be running locally, but the most recent e2e tests actually moved away from needing the example containers and make use of a local API server that only runs for the lifetime of the test. I was planning on porting all the integration tests to use this model so there is no dependency on running containers to execute the integration tests. That might also help get rid of the rate-limit problem, and forks would be able to run the tests, since these will be purely Go tests that don't share any secrets.

Anyways, that's some food for thought but not necessarily blocking this change set. For now, to get unblocked, I can cherry-pick your changes and create a PR so the build will work that way.

Alternatively, we can run the tests manually and add the output to the PR as a reference but I'd rather have the pipeline feedback to keep the process consistent.

Another thing I was considering is migrating the pipeline to GitHub Actions, but I am not sure if we would have the same issue with secrets and forks there too.
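If the images did move to GitHub's Container registry, the Actions side of the login would look something like this (based on the documented `docker/login-action` usage; whether the per-run `GITHUB_TOKEN` carries enough permissions on fork-triggered PRs would still need verifying):

```yaml
# Hypothetical GitHub Actions step: GITHUB_TOKEN is generated fresh for each
# workflow run, so no repository secret has to be shared with fork builds.
- name: Log in to GitHub Container Registry
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}
```

Note that on fork PRs the token is read-only by default, which should still be enough for pulling public packages.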

bradthebuilder commented 3 years ago

^^ Good context, thanks. My colleague and I got around the Dockerhub rate limits once before by configuring our daemon.json files to point to Google's registry mirror instead of Dockerhub directly. If you wanted to do something like that, I think it'd be your Docker dind service container that needs the configuration change. I've never attempted that with Travis, although it seems possible with Gitlab with the --registry-mirror option.
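For reference, the daemon-level change we made was roughly this in `/etc/docker/daemon.json` (`mirror.gcr.io` is Google's public pull-through mirror of Docker Hub):

```json
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}
```

For a dind service container, the equivalent is passing `--registry-mirror=https://mirror.gcr.io` as a command-line flag to the daemon instead of editing a file inside the container.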

Just opened a fresh PR with your requested modification, now targeting your master branch directly. I went ahead and added screenshots, but your cherry-pick workaround to achieve the results in the pipeline is a good idea. I reckon this PR can die now if we're using the new one.