thedave42 closed this pull request 4 years ago.
The title of this pull request isn't what I expected!
To rename your pull request, click Edit next to the title at the top of the page, enter the new title, and click Save.
I'll respond when I detect this pull request has been renamed.
Great job adding the templated workflow. Adding that file to this branch is enough for GitHub Actions to begin running CI on your repository. This takes a couple of minutes, so let's take this opportunity to learn about some of the components of the workflow file you just added. We'll dive deeper into some of the key elements of this file in future steps of the course.
I'll respond when GitHub Actions finishes running the workflow. You can follow along in the Actions tab, or by clicking Details on the pending status below.
The workflow ran, but it failed :sob:. That's OK, though. Every time CI fails, it's an opportunity to learn from what caused it. By running CI with GitHub Actions, we have access to the logs for the attempted build. You can find them in the Actions tab, or by clicking Details next to the failed check in the merge box below.
If you navigate over to the build logs, you may notice that the error is "No tests found".
Learning how to read build logs and isolate the cause of a problem is an art of its own. We'll cover some of the basics here. In our case, the source of the error is the `npm test` command, which looks for a testing framework. We want to use Jest, as we mentioned earlier. Jest expects unit tests in a directory named `__test__`, and a `__test__` directory doesn't exist on this branch.
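As a rough, hypothetical illustration (the real test files arrive with the Add Jest tests pull request mentioned below, so the file name and test body here are assumptions), a Jest test under `__test__/` looks something like this:

```js
// __test__/example.test.js (hypothetical file, for illustration only)
// `npm test` runs the "test" script from package.json (assumed here to be "jest"),
// and Jest then collects *.test.js files such as this one.

describe('example', () => {
  it('adds two numbers', () => {
    expect(1 + 2).toBe(3)
  })
})
```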
Not to worry, I've got you covered! Navigate to the open pull request titled Add Jest tests and merge it into this branch. That'll get us the test files we need. I'll respond when you merge the Add Jest tests pull request into this branch.
Great! Now that the testing framework is properly configured, we should get a response from it soon. This time, you'll practice reading the logs on your own. Just like before, you can follow along as GitHub Actions runs your job by going to the Actions tab or by clicking on "Details" in the merge box below.
When the tests finish, you'll see a red X :x: or a green check mark :heavy_check_mark: in the merge box. At that point, you'll have access to logs for the build job and its associated steps.
By looking at the logs, can you identify which tests failed? To find out, go to one of the failed builds and scroll through the log. Look for a section that lists all the unit tests. We're looking for the name of a test with an "x" next to it.
I'll respond when you enter the name of at least one failing test. You can either copy and paste that portion of the log directly, or type the name of the test as a comment.
Contains the compiled JavaScript
That wasn't the test name I expected, but that's alright. If you typed something slightly different from what I was looking for, that may explain it.
I expected one of the following test names:
Let's keep going anyway!
One of the failing tests is `Initializes with two players`. If you dig deeper into the logs, you may notice these results in particular:
● Game › Game › Initializes with two players
expect(received).toBe(expected) // Object.is equality
Expected: "Nate"
Received: "Bananas"
12 | it('Initializes with two players', async () => {
13 | expect(game.p1).toBe('Salem')
> 14 | expect(game.p2).toBe('Nate')
| ^
15 | })
16 |
17 | it('Initializes with an empty board', async () => {
at Object.toBe (__test__/game.test.js:14:23)
This tells us that a unit test has been written that names the two players Salem and Nate, and then checks that those names stick. However, we get :banana: Bananas instead of Nate! How did this happen?
To find out, it helps to know that it's common practice to name a test file after the code file it tests, but with a `.test.js` extension. Therefore, we can assume that the failing result from `game.test.js` is caused by a problem in `game.js`. I'll point it out below.
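Here is a rough sketch of what that problem and its fix probably look like. The contents of `game.js` aren't shown in this thread, so the class shape below is an assumption reconstructed from the failing assertion above:

```js
// game.js (hypothetical sketch, reconstructed from the test output above)
class Game {
  constructor(p1, p2) {
    this.p1 = p1
    // Bug: the second player is hardcoded, so the test receives 'Bananas'
    // where it expected 'Nate'.
    this.p2 = 'Bananas'
    // Fix: use the constructor argument instead, e.g. this.p2 = p2
  }
}

module.exports = Game
```

Once `this.p2` comes from the constructor argument, the `Initializes with two players` assertion should pass.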
Make the changes suggested below. I'll respond when the workflow runs.
Let's go to the next step.
Initialize node.js workflow file