I have broken things down into multiple categories and have tried to keep notes along every step of the way as I took this course. Please forgive me for not breaking these out into steps directly; sometimes it's hard to tell which Learning Lab step I am in.
| Category | Description | Icon |
| --- | --- | --- |
| Flow | Changes that I think would improve the learner's ability to navigate through the course along our intended path | :droplet: |
| Concern | Something jumped out at me about this and I wanted to bring it up for discussion | :thinking: |
| Bug | Something broke while I was following the steps for the course as intended | :bug: |
| Vulnerability | Something broke because of an action I took that wasn't intended. This is labeled as a vulnerability in the sense that the course is vulnerable to this type of action happening and breaks because of it. | :warning: |
So let's get started!
:droplet:
Severity: Low
Problem:
After joining the course I have no real indication of what the first step is. The only thing I am presented with when I visit my repository is this README.md file.
I know from experience that there is either an issue or a pull request open for me, but it's unfair to expect our learners to have that experience.
When I click the link on the course steps page I am taken to the issue, as expected, to begin taking this course. ⬇️
Suggestions:
Add a link here in the README.md that also points me to the first issue.
:thinking:
Severity: Low
The desired name for the first pull request is CI for Node, and it is highly case sensitive. As you can see, if I use a lowercase version of the same name, ci for node, the validation step fails.
Can we standardize the input we are collecting from the user to be less case sensitive? Consider how form input on a webpage might be collected and then transformed to all lowercase on the backend for logic consistency.
JavaScript can do this by using the toLowerCase() function:
const userInput = "My User Does SiLLy ThInGs";
const ourInput = userInput.toLowerCase();
console.log(ourInput);
output:
my user does silly things
Having something like this would allow the learner to make minor mistakes without impeding the flow of the course.
This could also improve the speed at which a course progresses, since we wouldn't always be waiting on a Learning Lab response explaining to the user that they didn't use the proper case when defining the text for something.
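As a rough sketch, the bot-side comparison could normalize both strings before checking them. The expectedTitle constant and the titleMatches helper below are names I made up for illustration, not anything Learning Lab provides:

```js
// Sketch: normalize both the expected title and what the learner typed
// before comparing, so "ci for node" and "CI for Node" both pass.
const expectedTitle = "CI for Node"; // the title the course asks for

function titleMatches(learnerTitle) {
  return learnerTitle.trim().toLowerCase() === expectedTitle.toLowerCase();
}

console.log(titleMatches("ci for node"));    // true
console.log(titleMatches("  CI for Node ")); // true
console.log(titleMatches("CI for Ruby"));    // false
```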
:bug:
Severity: Moderate (prevents course progress)
Problem:
If the user names the first pull request incorrectly they can end up in an infinite loop of being told to name it correctly. This prevents the course from progressing.
Steps to reproduce:
Continue naming the pull request incorrectly
This will continue forever as long as the user provides invalid names
Suggestions:
Consider having the bot name the pull request if the user fails to get this step correct after n number of attempts.
Any time we let the bot correct the user's behavior we should also provide an explanation of what we did and why.
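A minimal sketch of that fallback, assuming the bot is already authenticated with @octokit/rest. The MAX_ATTEMPTS threshold and the in-memory attempts Map are illustrative only; they are not how Learning Lab actually tracks state:

```js
const MAX_ATTEMPTS = 3;     // assumed threshold, tune as needed
const attempts = new Map(); // illustrative in-memory counter, keyed by PR number

async function handleBadTitle(octokit, { owner, repo, pullNumber }) {
  const tries = (attempts.get(pullNumber) || 0) + 1;
  attempts.set(pullNumber, tries);

  if (tries < MAX_ATTEMPTS) {
    // Keep nudging the learner toward the expected title.
    await octokit.issues.createComment({
      owner,
      repo,
      issue_number: pullNumber,
      body: "Almost! Please rename this pull request to **CI for Node**.",
    });
    return;
  }

  // After n failed attempts, fix the title for them and explain why.
  await octokit.pulls.update({ owner, repo, pull_number: pullNumber, title: "CI for Node" });
  await octokit.issues.createComment({
    owner,
    repo,
    issue_number: pullNumber,
    body: "I renamed this pull request to **CI for Node** so the course can continue. The title is how I detect this step, which is why it needs to match exactly.",
  });
}
```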
:bug:
Severity: Moderate (prevents course from progressing)
Problem:
If the check_suite finishes before the bot has a chance to listen for the payload the course does not continue unless the user manually triggers the check_suite again.
Steps to reproduce:
Name the first pull request incorrectly (see other bug :point_up:)
Wait for the check_suite to finish running
Name the first pull request to the correct value
Once the bot finishes explaining the workflow file it responds with the following:
Refresh the page as you are instructed to by the bot
Refreshing does not trigger the check_suite again
The course has officially stalled
Probable Cause:
There is no webhook event firing to tell the bot that the check_suite has finished its run. Because of this, Learning Lab never knows when to respond with an explanation of the CI logs.
This is fixable if the learner navigates to the Actions tab and manually triggers the check_suite by clicking Re-run checks.
Once the check_suite finishes, the Learning Lab bot will do its job and trigger the desired explanation.
Suggestions:
If the learner has incorrectly named the pull request, we can safely assume they may also be slow to rename it correctly, which will trigger this bug.
We can run some sort of check that asks:
if the pull request was named improperly:
    then: serve a response that indicates they may need to do more than refresh the browser to trigger the CI explanation
    else: serve the standard response that only mentions a browser refresh
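Roughly, in bot terms, that branch could look like the sketch below. The wasRenamed flag, the response wording, and the idea of re-requesting the suite through the Checks API instead of relying on the learner are all my assumptions about how this could be wired up:

```js
async function respondAfterWorkflowExplanation(
  octokit,
  { owner, repo, pullNumber, checkSuiteId, wasRenamed }
) {
  if (wasRenamed) {
    // The learner named the PR incorrectly at first, so the check_suite may already
    // have finished before we started listening. A refresh will not re-fire it.
    await octokit.issues.createComment({
      owner,
      repo,
      issue_number: pullNumber,
      body: "Your checks may have already finished. If nothing happens after refreshing, open the **Actions** tab and click **Re-run checks**.",
    });

    // Alternatively, re-request the suite ourselves instead of asking the learner to.
    await octokit.checks.rerequestSuite({ owner, repo, check_suite_id: checkSuiteId });
    return;
  }

  // Standard path: the check_suite is still running and we will receive its completion event.
  await octokit.issues.createComment({
    owner,
    repo,
    issue_number: pullNumber,
    body: "Refresh this page once the checks finish to see what the CI run found.",
  });
}
```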
:warning:
Severity: High (prevents course progress, creates rework for the learner)
When following the steps outlined in the first issue:
Go to the Actions tab.
Choose the template Node.js workflow.
Commit the workflow to a new branch.
Create a pull request titled CI for Node.
It is entirely possible to break this course by selecting the incorrect workflow template.
Steps to reproduce:
Navigate to the Actions tab
Select any workflow other than the one intended; in this case I chose the Node.js Package template
Commit the workflow to a new branch
Create a pull request titled CI for Node
The Learning Lab bot will comment on our pull request with the intended response, but the course will never continue, even after following the suggestion to refresh:
The following error is generated by Learning Lab:
HttpError: {"message":"Validation Failed","errors":[{"resource":"PullRequestReviewComment","code":"invalid","field":"path"}],"documentation_url":"https://developer.github.com/v3/pulls/comments/#create-a-comment"}
at /app/node_modules/@octokit/rest/lib/request/request.js:72:19
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at async Context.runActions (/app/lib/context.js:216:24)
at async Course.runHandler (/app/lib/course.js:184:32)
Probable Cause:
The config.yml for this course is expecting a very specific file path after the templated workflow is committed. When the wrong template is used, the expected filename changes and thus breaks at this step:
As we can see, the file parameter is looking for .github/workflows/nodejs.yml. Because our learner selected a different template workflow, as seen in step 2 above, the actual file that exists is .github/workflows/npmpublish.yml.
Further Findings:
Once the error gets thrown, renaming the file to nodejs.yml does not allow the course to progress.
Closing the existing pull request and creating a new one that follows the instructions allows the course to progress.
Suggestions:
Before the Learning Lab bot tries to run this action there should be some sort of validation to make sure the proper workflow file was selected. This can most likely be accomplished by implementing a gate action to check for the proper file, which can then be used to prompt the learner to double check that they selected the proper workflow (see the sketch after this list).
Instead of asking the learner to fix the workflow file, we could potentially overwrite it for them. The .github/workflows path is protected when the repository is initialized, but once they create a file in this path the Learning Lab bot can write changes to that file. So maybe we give them a chance to fix it, and if they get it wrong twice we overwrite the file and explain what we did and why we did it.
Edit this step to be more explicit by making Node.js bold:
Before: Choose the template Node.js workflow.
After: Choose the template **Node.js** workflow.
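For the gate idea above, a sketch of what the file check could look like, assuming the bot can list the pull request's files with @octokit/rest. The function names and the response wording are illustrative, not part of the course config today:

```js
const EXPECTED_WORKFLOW = ".github/workflows/nodejs.yml";

async function workflowFileIsCorrect(octokit, { owner, repo, pullNumber }) {
  // List every file the pull request touches and look for the path the course expects.
  const { data: files } = await octokit.pulls.listFiles({ owner, repo, pull_number: pullNumber });
  return files.some((file) => file.filename === EXPECTED_WORKFLOW);
}

// Usage inside the step handler: only run the review-comment action when the
// expected file exists; otherwise ask the learner to double check the template.
async function gateOnWorkflowFile(octokit, params) {
  if (await workflowFileIsCorrect(octokit, params)) {
    return true; // safe to post the line-level comments on nodejs.yml
  }
  await octokit.issues.createComment({
    owner: params.owner,
    repo: params.repo,
    issue_number: params.pullNumber,
    body: `I couldn't find \`${EXPECTED_WORKFLOW}\` in this pull request. Double check that you chose the **Node.js** workflow template, not one of the others.`,
  });
  return false;
}
```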
:bug:
Severity: Low (course progresses, feels untested and rushed)
Problem:
In the CI for Node pull request the user is prompted to enter the name of a failing test to have the course progress. The course progresses regardless of what is typed.
No actual learning is reinforced
Steps to reproduce:
When prompted to enter the name of a failing test, type something other than what is asked. See below:
I typed the word bread and the course progressed
Suggestions:
Parse the body of the comment and validate what has been typed.
Have the Learning Lab bot respond accordingly based on what is typed.
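A sketch of that validation, assuming we know which test actually failed. The expectedFailingTest value below is a placeholder, not the real test name from the course:

```js
const expectedFailingTest = "name of the failing test"; // placeholder, not the real test name

function checkFailingTestAnswer(commentBody) {
  // Compare the learner's answer to the failing test name, ignoring case and extra whitespace.
  const answer = commentBody.trim().toLowerCase();
  return answer.includes(expectedFailingTest.toLowerCase());
}

// The bot can then branch on the result instead of progressing unconditionally.
function buildResponse(commentBody) {
  return checkFailingTestAnswer(commentBody)
    ? "That's the one! Let's look at why it failed."
    : "That doesn't match any failing test in the log. Take another look at the CI output and try again.";
}
```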
:bug:
Severity: Low (can break progression, easily avoidable)
Problem:
Continual bad changes to game.js keep the learner in an endless loop of identical responses.
Steps to reproduce:
Ignore the suggested changes from the Learning Lab bot
Change the value of this.p2 to anything you want
Watch the bot respond with help
Change the value of this.p2 to another improper value
Watch the bot respond with help
Continue this loop forever
Suggestions:
If the learner has failed to change the value to be correct for n number of attempts we should prompt them with a more helpful and direct response. This can be hard since we can't anticipate what they are going to change, but what we can see is the diff of the game.js file.
Consider giving them the suggested changes again if they fail to edit the file by themselves.
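Roughly what checking that diff could look like, assuming the suggestion sets this.p2 to a known string and that game.js lives at src/game.js. Both the expected value and the path are placeholders on my part:

```js
const GAME_FILE = "src/game.js";               // path assumed; adjust to wherever game.js lives
const EXPECTED_P2 = 'this.p2 = "Bananas"';     // placeholder for whatever the suggestion sets this.p2 to

async function gameJsLooksRight(octokit, { owner, repo, pullNumber }) {
  const { data: files } = await octokit.pulls.listFiles({ owner, repo, pull_number: pullNumber });
  const gameFile = files.find((file) => file.filename === GAME_FILE);

  // The patch is the unified diff for this file; check that an added line contains the expected value.
  if (!gameFile || !gameFile.patch) return false;
  return gameFile.patch
    .split("\n")
    .some((line) => line.startsWith("+") && line.includes(EXPECTED_P2));
}
```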
:thinking:
Severity: Low (not tested, just guessing)
Problem:
I am guessing that the Improve CI pull request will suffer from all of the same pitfalls the CI for Node pull request did.
Infinite response loops, offering no new guidance, if proper changes aren't made
Lack of any real input validation
What happens if we change the workflow filename?
What happens if we change a value in the workflow to call an extra step?
What if we try to use a different GitHub Action in our workflow?
We are only validating that the name of the workflow file matches what we expect; we are ignoring its contents.
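If we wanted to look at the contents too, a rough sketch using the repository contents endpoint is below. The snippets we check for are only examples of what the course might care about, and the ref is assumed to come from the pull request payload:

```js
async function workflowContainsExpectedSteps(octokit, { owner, repo, ref }) {
  // Fetch the workflow file from the learner's branch and decode it.
  const { data } = await octokit.repos.getContent({
    owner,
    repo,
    path: ".github/workflows/nodejs.yml",
    ref,
  });
  const workflow = Buffer.from(data.content, "base64").toString("utf8");

  // Example checks only: make sure the pieces the course cares about are still there.
  const mustInclude = ["npm ci", "npm test"];
  return mustInclude.every((snippet) => workflow.includes(snippet));
}
```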
:warning:
Severity: High (course progresses when it shouldn't, eventually breaks)
Problem:
When editing the nodejs.yml workflow file in the Improve CI pull request, any change made to the file triggers a 'success' style response from the bot.
Steps to reproduce:
Ignore the suggested changes to .github/workflows/nodejs.yml
Edit .github/workflows/nodejs.yml by making any change you wish to
Commit the changes to the current branch the PR is for
Watch the bot respond with success style message
Finally, accept the suggested changes, ignoring what is currently being asked of you.
The course is now entirely out of sync, since we are only listening for the completion of check_suites. We can no longer accept suggestions.
Further findings:
The edited workflow file executes, which is something we expect; however, it is something we are not accounting for. Our learners could end up running out of usage quota without realizing it by making unexpected changes to these files. Although that's a risk to them, it's one we should do our best to mitigate by handling the body of these files better than we are doing.
Take a look at the workflow running.
As a new learner I might not expect this behavior, especially since the Learning Lab bot didn't inform me that the changes that I made were incorrect.
This poses a new challenge for us as course authors. We haven't had to think about how Actions and Learning Lab impact one another. For every good thing we can do when these two features are married together there are ten things we need to account for to provide a better experience.
I have gotten this far in the course, and now my best option is to restart the entire thing.
Suggestions:
We can access what is changed in any given file; we should be taking care to make sure the contents are what we are looking for.
We may want to consider having the bot cancel check_suites when their content isn't what the course expects.
The bot did not respond properly in this instance, can we somehow enforce accepting the suggestion?
if the change was made by accepting the suggestion
    then: respond with success
    else: check the contents of the change and act accordingly
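A sketch of that branch, assuming we can compare the commit's patch for nodejs.yml against the suggested change and that we know the run id from the Actions payload. The suggestedSnippet parameter and the response wording are my own placeholders:

```js
async function handleWorkflowEdit(
  octokit,
  { owner, repo, pullNumber, runId, suggestedSnippet }
) {
  const { data: files } = await octokit.pulls.listFiles({ owner, repo, pull_number: pullNumber });
  const workflow = files.find((f) => f.filename === ".github/workflows/nodejs.yml");

  // Did the latest change actually apply what the suggestion asked for?
  const acceptedSuggestion = Boolean(
    workflow && workflow.patch && workflow.patch.includes(suggestedSnippet)
  );

  if (acceptedSuggestion) {
    await octokit.issues.createComment({
      owner,
      repo,
      issue_number: pullNumber,
      body: "Nice, that's exactly the change we were looking for. Let's watch this run.",
    });
    return;
  }

  // The change isn't what the course expects: say so, and stop the run it kicked off
  // so the learner isn't burning Actions minutes on something we know is wrong.
  await octokit.actions.cancelWorkflowRun({ owner, repo, run_id: runId });
  await octokit.issues.createComment({
    owner,
    repo,
    issue_number: pullNumber,
    body: "That edit doesn't match the suggested change, so I've cancelled the run it started. Try accepting the suggestion in the review comment above.",
  });
}
```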
Once this course breaks it would be quite the task for the learner to try and fix it, especially if their changes are what broke it. They may also continue triggering Learning Lab Actions by trying to fix it.
Biggest Takeaways
There are many things we need to consider from the course design perspective. Working to incorporate more thorough validation of inputs and exit strategies for infinite loops will improve the quality of the courses we create in the future.
Addressing these two issues alone will help us maintain the confidence people have in the Learning Lab platform. These changes will also dramatically improve the quality of our courses.
I keep asking the question: how can we not only guide the learner as they progress through the course, but also help them dislodge themselves when they get stuck on something they don't understand?
We cannot assume the learner knows anything about GitHub, writing code, working with branches, issues or pull requests. The moment we assume that anything is familiar to them is the moment we fail them.
The exception to that mindset is when we explicitly guide them down a learning path, without allowing access to assets before we know for certain they have the required familiarity.