Closed cjpatton closed 3 years ago
I'm not so sure about this. Maintaining the interop table in the README seems unsustainable as the list of endpoints and testcases grows, and I was thinking more along the lines of setting up CI along with a website like this one that we can link to in the README.
With respect to pre-generated artifacts, I envision having a master Golang runner that goes through each testcase, generates the artifacts (calling, for example, `cert-tool` with the necessary arguments), and then calls `docker-compose`, outputting the interop data, which the website then picks up. I see the ability to generate individual artifacts and run individual testcases as secondary, something that's just there for debugging purposes.
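The runner described above might be sketched roughly as follows. This is a hypothetical outline, not the actual implementation: the testcase fields and the `cert-tool` flags are assumptions, and the commands are printed rather than executed so the sketch stands alone.

```go
// Hypothetical sketch of the master runner: loop over testcases,
// regenerate artifacts with cert-tool, then bring up the endpoints
// with docker-compose. Tool flags and testcase fields are assumed,
// not taken from the real project.
package main

import (
	"fmt"
	"os/exec"
)

// Testcase names one interop scenario; the fields are illustrative.
type Testcase struct {
	Name   string
	Client string
	Server string
}

// artifactCmd builds a (hypothetical) cert-tool invocation that
// regenerates this testcase's cryptographic artifacts.
func artifactCmd(tc Testcase) *exec.Cmd {
	return exec.Command("cert-tool", "-out", "testdata/"+tc.Name)
}

// runCmd builds the docker-compose invocation that runs the
// client/server pair for this testcase.
func runCmd(tc Testcase) *exec.Cmd {
	return exec.Command("docker-compose", "up",
		"--abort-on-container-exit", tc.Client, tc.Server)
}

func main() {
	testcases := []Testcase{
		{Name: "dc-basic", Client: "boringssl", Server: "nss"},
	}
	for _, tc := range testcases {
		// Dry run: print the commands instead of executing them,
		// since cert-tool and docker-compose may not be installed.
		fmt.Println(artifactCmd(tc).Args)
		fmt.Println(runCmd(tc).Args)
	}
}
```

The real runner would execute these commands (e.g. via `cmd.Run()`) and collect each testcase's output for the interop site.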
> I'm not so sure about this. Maintaining the interop table in the README seems unsustainable as the list of endpoints and testcases grows, and I was thinking more along the lines of setting up CI along with a website like this one that we can link to in the README.
Agreed. I'll remove it from the PR.
> With respect to pre-generated artifacts, I envision having a master Golang runner that goes through each testcase, generates the artifacts (calling, for example, `cert-tool` with the necessary arguments), and then calls `docker-compose`, outputting the interop data, which the website then picks up. I see the ability to generate individual artifacts and run individual testcases as secondary, something that's just there for debugging purposes.
What's the upside of generating the artifacts on-the-fly? One downside is that it requires users to have Go installed to run the tests. IMO it would be better to use a common scripting language like Python for automating the tests rather than Go.
Using Python to automate the tests would be nice.
I wrote `cert-tool` in Go, since that meant I could pull code from places like mkcert and the BoGo runner. And because `cert-tool` is in Go, I thought we might as well implement the main runner in Go to avoid pulling in another dependency, not to mention that would also let us pull more code from the BoGo runner.
As for artifact generation, one concern is having to maintain different sets of artifacts for different test cases: it'd be nice to run tests where, for example, we randomly or deterministically malform certificates/other artifacts and see how the implementations respond. I'm nervous about how maintaining artifacts would scale with adding more tests and more varieties of tests.
Also, DCs generated today would be invalid a week later unless we adjust the endpoint clocks or regularly generate new credentials; generating on the fly makes it easier to avoid issues like that.
Closing pending further discussion.
This adds a directory called `testdata/` that contains all of the cryptographic artifacts used in interop tests. This way users don't have to bother generating these before they can run tests locally. ~~This also adds an interop table to README.md.~~