Closed: jackdelahunt closed this pull request 6 months ago
/hold
@MarianMacik this PR is a copy of an older one where @grdryn and I were still having a discussion about more changes to this. I just want to hold this until then.
I have a few questions about supporting more than one pipeline run and want to get feedback.
/unhold
> I have a few questions about supporting more than one pipeline run and want to get feedback.
>
> 1. Should all of the configuration come from the test params from the make target? I think it makes it easier but means tests can be less varied
The tests don't necessarily need to be super configurable in my opinion: they're in charge of setting up the scenario that they want to test. IMHO the things that need to be configurable are things that are external dependencies, so things like locations of image repos or git repos or object storage buckets, unless the test setup prepares its own versions of these in-cluster/namespace. Those are essentially the things that are configurable in the tests now; do you think they shouldn't be, or are you thinking of other things that could be configurable but aren't yet?
Do you have an example of something in particular that would mean that the tests could be less varied?
> 2. This pushes both pipeline runs and waits for them in a single test; maybe we want one per test. I didn't do this because we would mostly just be duplicating the test. But maybe the answer to Q1 will change this
I think it depends on what the purpose of each of the tests is. For a given test, what pipeline scenario is it trying to verify? Don't get too hung up on just testing the examples that we happen to have as pipelinerun files.
> 3. The tensorflow housing pipeline run is getting the s3 bucket name set even though it uses git to fetch
I'd guess that can be removed; my understanding is that it's being ignored unless the fetch-model parameter is changed to s3, right?
> or are you thinking of other things that could be configurable but aren't yet?
Not anything in particular, just that currently we enforce that all of the models come from the same bucket, or that we push to the same image repo. I guess it doesn't really matter, as we are testing the pipelines themselves.
> what pipeline scenario is it trying to verify?
I was thinking each test would be a pipeline run, but when I did that it really just came down to changing which file we read, so it could be done in a for loop, which is what I did. I think leaving it as is until we want to expand more is probably good; no need to start thinking about scaling when we just have two runs being tested.
> I'd guess that can be removed, my understanding is that it's being ignored unless the fetch-model parameter is changed to s3, right?
Well, because they are all in the same test it gets set anyway, but yes, it doesn't have any effect in the end.
> > or are you thinking of other things that could be configurable but aren't yet?
>
> Not anything in particular, just that currently we enforce that all of the models come from the same bucket, or that we push to the same image repo. I guess it doesn't really matter as we are testing the pipelines themselves
For the image registry/repo, this comment on my PR might have an alternative.
> > what pipeline scenario is it trying to verify?
>
> I was thinking each test would be a pipeline run, but when I did that it really just came down to changing which file we read, so it could be done in a for loop, which is what I did. I think leaving it as is until we want to expand more is probably good; no need to start thinking about scaling when we just have two runs being tested.
Yeah, we can always improve on this in future.
> For the image registry/repo, this comment on my PR might have an alternative.
That is interesting, where does the context come from there?
And if there is nothing more on this PR, maybe it is ready for a LGTM. I think Prow is failing, but not because of my changes.
> > For the image registry/repo, this comment on my PR might have an alternative.
>
> That is interesting, where does the context come from there?
It's just listed here in the table of variables available in a Pipeline: https://tekton.dev/docs/pipelines/variables/
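As a sketch of how one of those context variables could replace a configurable image repo param, a Pipeline might substitute `$(context.pipelineRun.namespace)` into the target image reference. The pipeline name, task name, and registry host below are illustrative assumptions, not the repo's actual manifests:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline        # illustrative name
spec:
  tasks:
    - name: build-image
      taskRef:
        name: build-task        # illustrative task reference
      params:
        # Tekton substitutes the namespace the PipelineRun executes in,
        # so each test namespace pushes to its own repo path without
        # needing an image repo param passed in from the make target.
        - name: target-image
          value: image-registry.openshift-image-registry.svc:5000/$(context.pipelineRun.namespace)/model-image
```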
> And if there is nothing more on this PR, maybe it is ready for a LGTM. I think Prow is failing, but not because of my changes.
Yeah, I'm just testing it locally now, then I should hopefully be able to get it merged :+1:
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: grdryn, jackdelahunt, MarianMacik
The full list of commands accepted by this bot can be found here.
The pull request process is described here
/override ci/prow/test-ai-edge
@grdryn: Overrode contexts on behalf of grdryn: ci/prow/test-ai-edge
Description
Adds the tensorflow housing pipeline to the Go e2e test suite.
Verify steps
Requires oc and the go-test and go-test-setup make targets. All of the required fields are listed here and here.

oc new-project test-namespace
make AWS_SECRET=... AWS_ACCESS=... ... go-test-setup
make S3_BUCKET=... ... go-test
How Has This Been Tested?