githubtraining / using-github-actions-for-ci

Course repository for "GitHub Actions: Continuous Integration"
https://lab.github.com/githubtraining/github-actions:-continuous-integration
Creative Commons Attribution 4.0 International

Need test job to fail until it receives build artifacts #14

Open · hectorsector opened this issue 4 years ago

hectorsector commented 4 years ago

Currently, Step 11 asks the learner to upload a job's webpack-generated files to artifact storage after the build job, and then expects the test job to fail when it runs npm test if those build artifacts aren't accessible. However, the test job isn't actually failing.
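For context, here is a rough sketch of the build job Step 11 describes. The `webpack artifacts` name and `public` path match the workflow quoted later in this thread; the other step details are illustrative:

```yaml
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      # webpack writes its output to public/
      - run: npm run build
      # Step 11 adds this: store the webpack output so later jobs can use it
      - uses: actions/upload-artifact@main
        with:
          name: webpack artifacts
          path: public
```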

mattdavis0351 commented 4 years ago

@hectorsector shouldn't the proper CI flow be the other way around?

Why would we store a broken artifact and use our storage space? Artifacts should be usable.

@lmkeston does that sound right to you? I see a very limited use-case for broken artifacts, but maybe I'm missing something.

@hectorsector we can also accomplish holding a job from running by using an if expression at the test job level, or we can make it a dependency.
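For example, `needs` already holds the test job until the build job succeeds, and a job-level `if` expression can gate it further. A minimal sketch (the branch condition is just an illustration):

```yaml
  test:
    # dependency: wait for build, and skip entirely if build fails
    needs: build
    # if expression at the job level: only run when the condition holds
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: |
          npm install
          npm test
```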

I'm not sure how relevant this issue currently is because it's so old, but it made it to the board 🤷‍♂️

hectorsector commented 4 years ago

> Why would we store a broken artifact and use our storage space? Artifacts should be usable.

@mattdavis0351 I wish I could tell ya why I thought this would need to be the case, but my memory from 10 months ago ain't great 😉. My only guess is I opened this because maybe some response suggested that it should occur as I proposed.

I think the right course of action here is to take the course, looking out for this step in particular. If the course steps make sense as is, and there are no responses contradicting them, close it out.

lmkeston commented 4 years ago

> @hectorsector shouldn't the proper CI flow be the other way around?

* Webpack builds the code

* Tests run (see the sketch below)

  * Pass: upload artifact
  * Fail: throw error, no artifact

> Why would we store a broken artifact and use our storage space? Artifacts should be usable.
>
> @lmkeston does that sound right to you? I see a very limited use-case for broken artifacts, but maybe I'm missing something.
>
> @hectorsector we can also accomplish holding a job from running by using an if expression at the test job level, or we can make it a dependency.
>
> I'm not sure how relevant this issue currently is because it's so old, but it made it to the board 🤷‍♂️

@mattdavis0351 your logic sounds correct to me.
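In workflow terms, the flow above might look something like this sketch (a single combined job, where step order does the gating; step names are illustrative). The upload step only runs if `npm test` exits cleanly, so a failing run never stores an artifact:

```yaml
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      # Webpack builds the code
      - run: npm run build
      # Tests run; a failure stops the job here...
      - run: npm test
      # ...so an artifact is only uploaded when the tests pass
      - uses: actions/upload-artifact@v2
        with:
          name: webpack artifacts
          path: public
```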

brianamarie commented 3 years ago

If anyone would like to contribute, our first step is reproducing this issue.

bgottula commented 3 years ago

I think I have stumbled onto this. After pushing the commit associated with Step 10, "run multiple jobs," the jobs run and the bot gives new instructions at the beginning of Step 11 explaining why the test jobs failed:

[screenshot: the bot's Step 11 instructions explaining the test job failure]

But the test jobs did not actually fail:

[screenshot: the Actions run with all test jobs passing]

So no matter what the proper workflow ought to be, at the very least the bot's instructions at the start of Step 11 do not match the actual results at the end of Step 10.

brianamarie commented 3 years ago

Thank you, @bgottula! ✨

I have two main questions right now in order to prioritize this work, both aimed at understanding how long a fix would take and how much this bug detracts from the course:

  1. Is the fact that these tests pass the result of a change in the framework mentioned in the content ("This is due to the design of the virtual environments themselves"?), or is there a change in how the tests interact with the code?
  2. How much does this affect the learner's experience? Does this specific bug primarily act as a distraction, or does it fundamentally take away from the learning objectives?

@bgottula @mattdavis0351 @hectorsector (or other community members 😇 ) do you have any thoughts here?

mattdavis0351 commented 3 years ago

@brianamarie From looking at the bot responses and trying to piece together what the workflow is, as far as I can tell these tests will always pass. They have absolutely nothing to do with the artifact.

There is a step to download the artifact, but we never do anything with it. When we run the test matrix, we are running it against the source code of the app in the repo, not against the downloaded artifact.

Time to fix could be minimal if we focus on updating the responses the bot gives rather than rewriting a more compelling workflow.


> Is the fact that these tests pass the result of a change in the framework mentioned in the content ("This is due to the design of the virtual environments themselves"?), or is there a change in how the tests interact with the code?

They are passing because they have nothing to do with the artifact.

> How much does this affect the learner's experience? Does this specific bug primarily act as a distraction, or does it fundamentally take away from the learning objectives?

It is mostly a distraction; a better bot response removes that distraction.


Here is what the test job in the workflow looks like:

```yaml
  test:
    needs: build
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-2016]
        node-version: [12.x, 14.x]
    steps:
      - uses: actions/checkout@v2

      # We download the artifact from the previous job in this next step.
      - uses: actions/download-artifact@main
        with:
          name: webpack artifacts
          path: public

      # Here we set up the versions of Node we will test against.
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}

      # Here we run the tests using the matrix above against the source code
      # and NOT the artifact. (We wouldn't run tests against the artifact for
      # this app, ever; it doesn't make sense to create an untested artifact
      # only to test it later.)
      - name: npm install, and test
        run: |
          npm install
          npm test
        env:
          CI: true
```

See the annotations in the block above for better clarity.

Because of this, there will be no failure.


npm test by default runs its tests against /__tests__/ in the repo, not against GITHUB_WORKSPACE/public/ (which is where the artifact is downloaded to).
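In other words, nothing in the job ever reads `public/`, so nothing can fail when it is missing. To get the failure the original issue describes, the job would need a step that actually consumes the artifact. A hypothetical check might look like this (the step is my own illustration, not part of the course; `shell: bash` is there so the same script works on the Windows runners in the matrix):

```yaml
      # Hypothetical step: fail the test job if the downloaded artifact is absent
      - name: Verify webpack artifacts are present
        shell: bash
        run: |
          if [ ! -d public ] || [ -z "$(ls -A public)" ]; then
            echo "No webpack artifacts found in public/"
            exit 1
          fi
```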

bgottula commented 3 years ago

> It is mostly a distraction; a better bot response removes that distraction.

I suppose so, but I think it would be pretty unsatisfying for the bot to just say "Well the workflow already works just fine, but let's upload and then download an artifact to a place where it won't be used just because."