opendatahub-io / ai-edge

ODH integration with AI at the Edge use cases

Add tensorflow housing to go e2e-tests #212

Closed: jackdelahunt closed this pull request 6 months ago

jackdelahunt commented 6 months ago

Description

Adds the tensorflow housing pipeline to the Go e2e test suite.


jackdelahunt commented 6 months ago

/hold

jackdelahunt commented 6 months ago

@MarianMacik this PR is a copy of an older one where @grdryn and I were still discussing further changes. I just want to hold this until that's resolved.

jackdelahunt commented 6 months ago

I have a few questions about supporting more than one pipeline run and would like some feedback.

  1. Should all of the configuration come from the test params passed by the make target? I think that makes things easier, but it means the tests can be less varied.
  2. This pushes both pipeline runs and waits for them in a single test; maybe we want one per test. I didn't do that because we would mostly just be duplicating the test, but maybe the answer to Q1 will change this.
  3. The tensorflow housing pipeline run gets the S3 bucket name set even though it uses Git to fetch the model.

jackdelahunt commented 6 months ago

/unhold

grdryn commented 6 months ago

I have a few questions about supporting more than one pipeline run and would like some feedback.

1. Should all of the configuration come from the test params passed by the make target? I think that makes things easier, but it means the tests can be less varied.

The tests don't necessarily need to be super configurable, in my opinion: they're in charge of setting up the scenario that they want to test. IMHO, the things that need to be configurable are the external dependencies, such as the locations of image repos, git repos, or object storage buckets, unless the test setup prepares its own versions of these in-cluster or in-namespace. Those are essentially the things that are configurable in the tests now; do you think they shouldn't be, or are you thinking of other things that could be configurable but aren't yet?
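To sketch what I mean about keeping only the external dependencies configurable (the variable and field names here are made up, not necessarily what the suite actually uses):

```go
package e2e

import (
	"os"
	"testing"
)

// testConfig holds only the external dependencies that the tests can't
// create themselves: image repos, git repos, object storage buckets.
type testConfig struct {
	GitRepo      string
	ImageRepo    string
	S3BucketName string
}

// configFromEnv reads the values passed in by the make target and fails
// fast if a required one is missing; everything else stays hard-coded
// inside the scenario each test sets up.
func configFromEnv(t *testing.T) testConfig {
	t.Helper()
	cfg := testConfig{
		GitRepo:      os.Getenv("GIT_REPO"),
		ImageRepo:    os.Getenv("IMAGE_REPO"),
		S3BucketName: os.Getenv("S3_BUCKET_NAME"),
	}
	for name, value := range map[string]string{
		"GIT_REPO":       cfg.GitRepo,
		"IMAGE_REPO":     cfg.ImageRepo,
		"S3_BUCKET_NAME": cfg.S3BucketName,
	} {
		if value == "" {
			t.Fatalf("required test parameter %s is not set", name)
		}
	}
	return cfg
}
```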

Do you have an example of something in particular that would mean that the tests could be less varied?

2. This pushes both pipeline runs and waits for them in a single test; maybe we want one per test. I didn't do that because we would mostly just be duplicating the test, but maybe the answer to Q1 will change this.

I think it depends on what the purpose of each of the tests is. For a given test, what pipeline scenario is it trying to verify? Don't get too hung up on just testing the examples that we happen to have as pipelinerun files.

3. The tensorflow housing pipeline run gets the S3 bucket name set even though it uses Git to fetch the model.

I'd guess that can be removed; my understanding is that it's ignored unless the fetch-model parameter is changed to s3, right?

jackdelahunt commented 6 months ago

or are you thinking of other things that could be configurable but aren't yet?

Nothing in particular, just that currently we enforce that all of the models come from the same bucket and that we push to the same image repo. I guess it doesn't really matter, since we're testing the pipelines themselves.

what pipeline scenario is it trying to verify?

I was thinking each test would be one pipeline run, but when I did that it really just came down to changing which file we read, so it could be done in a for loop, and that's what I did. I think leaving it as is until we want to expand is probably good; no need to start thinking about scaling when we only have two runs being tested.
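Roughly like this, as a simplified sketch (the helper names and file paths are stand-ins, not the exact ones in the suite):

```go
package e2e

import (
	"context"
	"testing"
	"time"
)

// Stand-in helpers: the real suite has its own versions of these.
// applyPipelineRunFromFile would create the PipelineRun from a YAML
// manifest and return its generated name.
func applyPipelineRunFromFile(ctx context.Context, path string) (string, error) { return "", nil }

// waitForPipelineRun would poll the PipelineRun until it succeeds,
// fails, or the timeout expires.
func waitForPipelineRun(ctx context.Context, name string, timeout time.Duration) error { return nil }

// One test drives every example pipeline run in a loop, since the steps
// are identical apart from which file is read.
func TestExamplePipelineRuns(t *testing.T) {
	ctx := context.Background()
	files := []string{
		"pipelines/bike-rentals.pipelinerun.yaml",
		"pipelines/tensorflow-housing.pipelinerun.yaml",
	}
	for _, file := range files {
		name, err := applyPipelineRunFromFile(ctx, file)
		if err != nil {
			t.Fatalf("applying %s: %v", file, err)
		}
		if err := waitForPipelineRun(ctx, name, 30*time.Minute); err != nil {
			t.Fatalf("pipeline run %s (from %s) did not succeed: %v", name, file, err)
		}
	}
}
```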

I'd guess that can be removed; my understanding is that it's ignored unless the fetch-model parameter is changed to s3, right?

Well, because they are all in the same test it gets set anyway, but yes, it doesn't have any effect in the end.
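In pipelinerun terms it ends up something like this (an illustrative fragment; the exact param names in the file may differ):

```yaml
# Illustrative fragment: fetch-model is "git", so the pipeline takes the
# Git path and the s3-bucket-name value below is never read.
params:
  - name: fetch-model
    value: git
  - name: s3-bucket-name
    value: example-bucket # set by the shared test setup, but ignored here
```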

grdryn commented 6 months ago

or are you thinking of other things that could be configurable but aren't yet?

Nothing in particular, just that currently we enforce that all of the models come from the same bucket and that we push to the same image repo. I guess it doesn't really matter, since we're testing the pipelines themselves.

For the image registry/repo, this comment on my PR might have an alternative.

what pipeline scenario is it trying to verify?

I was thinking each test would be one pipeline run, but when I did that it really just came down to changing which file we read, so it could be done in a for loop, and that's what I did. I think leaving it as is until we want to expand is probably good; no need to start thinking about scaling when we only have two runs being tested.

Yeah, we can always improve on this in future.

jackdelahunt commented 6 months ago

For the image registry/repo, this comment on my PR might have an alternative.

That is interesting; where does the context come from there?

And if there's nothing more needed on this PR, maybe it's ready for an LGTM. I think Prow is failing, but not because of my changes.

grdryn commented 6 months ago

For the image registry/repo, this comment on my PR might have an alternative.

That is interesting; where does the context come from there?

It's just listed here in the table of variables available in a Pipeline: https://tekton.dev/docs/pipelines/variables/
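For example (an illustrative snippet, not the exact param from that comment), a pipeline can build the target image reference from whatever namespace the run executes in:

```yaml
# Illustrative only: $(context.pipelineRun.namespace) is one of Tekton's
# built-in variables, substituted at runtime with the namespace of the
# PipelineRun, so each test run can push to its own in-cluster repo.
params:
  - name: target-image
    value: image-registry.openshift-image-registry.svc:5000/$(context.pipelineRun.namespace)/model-image
```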

And if there's nothing more needed on this PR, maybe it's ready for an LGTM. I think Prow is failing, but not because of my changes.

Yeah, I'm just testing it locally now; then I should hopefully be able to get it merged :+1:

openshift-ci[bot] commented 6 months ago

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: grdryn, jackdelahunt, MarianMacik

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

- ~~[OWNERS](https://github.com/opendatahub-io/ai-edge/blob/main/OWNERS)~~ [MarianMacik,grdryn]

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.

grdryn commented 6 months ago

/override ci/prow/test-ai-edge

openshift-ci[bot] commented 6 months ago

@grdryn: Overrode contexts on behalf of grdryn: ci/prow/test-ai-edge

In response to [this](https://github.com/opendatahub-io/ai-edge/pull/212#issuecomment-1956571593):

> /override ci/prow/test-ai-edge

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.