github-learning-lab[bot] closed this 3 years ago
It is very important to choose events that correlate as directly as possible to what you're asking the learner to do. If you're not using gates* or other tests, the event should be what you're asking the learner to do.
One way to do this is to structure events using dot notation. For example, you could use an event called `issue_comment`. This means Learning Lab would be looking for any event related to an issue comment, like created, edited, or deleted. But you could also be more specific in the event description with dot notation. If you use `issue_comment.created`, Learning Lab will only move on if the event is a newly created issue comment.
*Gates are actions within Learning Lab that let you use logic to verify the learner's behavior. We'll learn more about this later!
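As a sketch of dot notation in practice (the step title and response file name below are hypothetical, not from this course), a step in `config.yml` that waits for a newly created issue comment might look like:

```yaml
steps:
  - title: Leave a comment # hypothetical step title
    event: issue_comment.created # only a newly created comment advances the course;
                                 # a bare issue_comment would also match edited and deleted
    actions:
      - type: respond
        with: 01_nice-work.md # hypothetical response file
```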
Go ahead and put in the events for the rest of the learning objectives. You can use this time to re-order them in a way that makes sense to you for the flow of your course.
:keyboard: Activity: Edit the `config.yml` file on lines 36, 43, 50, and 57.

Have you ever thought about what teaching is? What are the behaviors that a good teacher has to make it easy to learn? Maybe you have - and maybe you haven't.
Let's break down teaching into three steps:

1. Share the information the learner needs.
2. Watch the learner try the behavior.
3. Give feedback based on what we observed.

Repeat! That may sound simple, but it's the basis of the fast feedback that is learning. :rainbow:
Just like we broke down your teaching goal into smaller steps, let's break down teaching in the same way. Let's focus on the smallest possible unit of behavior we can identify.
For starters, let's choose writing unordered lists in Markdown. That's what we want the user to know how to do. Let's apply those three phases.
What does the learner need to be able to exhibit the behavior that we want? Well, they'd need to know about Markdown, and it'd be nice if they had a computer with a keyboard and a place to type the text. Let's assume those contextual things are taken care of. The main information a learner would need is: what is an unordered list, and how is it written in Markdown? Then, we'd ask the user to do that.
There's an important part of this step. It's not just the learner doing it, but it's how we are going to watch and observe if they did it correctly or not. In Learning Lab, this is usually an issue comment or a commit changing a file. We give them the space to try it out, and we watch via webhooks. We use gates to "check" if they did what we asked them to.
Based on the observation in the second phase, we can give them the feedback they need. We either confirm that they learned it, or let them know that they didn't do it right and should try again. It's important to give feedback that is as specific as possible. This is like unit tests - if they're vague, they're not helpful. The more personalized and exact the feedback can be, the better the learner will understand what they did right and/or wrong.
This is how all learning happens, through feedback, whether it's from a teacher in a classroom, a bot like me, or a stovetop that gives you the feedback "if you touch me, it HURTS!". Faster and more exact feedback is always a better teacher.
This is the process that we are going to use for each of the learning objectives you've written.
Before we start writing some for this course, let's practice identifying this three phase process. There are four examples below - some of which are examples of this three phase process, and some of which aren't. For each example, there is a label. For each example that is a good example of the three phases, add the corresponding label to this pull request. Once all of them match what I expect, I'll give you the next instructions.
If you get stuck, add the issue label "help" and I'll give you some more detail.
Add the `help` label.

Can't figure out which are the right labels? No worries! Here are the reasons and the answers:
If you'd like to know the reasons each option was or wasn't a good example, click the drop-down below.
Let's do one together before we start going into the others. Say the first learning objective is based on issues - specifically, how to open one. What do we need to do in phase one to prompt the user to demonstrate that behavior in phase two?
Now is a good time to learn about responses. The `responses` directory is where you store the files containing what the bot says.
To practice, I've got a file ready for you in this pull request. Go ahead and write your instructions (phase 1 for this objective) in the file in this pull request and commit.
You may notice the file name and structure. They represent the best practices we've found to make things clear for our users.
:keyboard: Activity: Edit the `01_first-response.md` file and write the instructions for your first step.
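As a sketch of how these pieces fit together, a course repository's layout typically looks something like this (`01_nice-work.md` and `01_try-again.md` are the feedback files this course adds later):

```
config.yml                 # course definition: steps, events, actions
responses/
  01_first-response.md     # instructions for the first step
  01_nice-work.md          # happy-path feedback
  01_try-again.md          # redirect feedback
```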
Awesome! You're probably thinking - how is Learning Lab validating that I did the thing just now!? And you have a point. With text, it's tricky - so I checked to make sure you wrote more than 5 words, but I'm not checking to see if you wrote anything that makes sense. For this type of step, it would be difficult to provide great feedback on what you wrote.
Every comment that I make is an example of a real, human-proofed answer. Compare your answer and notice - is yours similar? Is it very different? What would you change?
`type: respond`
Now that you've written a response, we need to figure out how to say it with Learning Lab. Now is the time to learn about Learning Lab's actions. Actions are reusable modules that each Learning Lab course has access to. They are each designed to do very specific things, and nothing more. This is to optimize for reusability and simplicity.
There are all kinds of actions. Learning Lab can do different things like responding, opening pull requests, merging, and more. You can see all of the available actions in Learning Lab's documentation.
You've got the response file, and now it's time to edit the config file with the proper action: `respond`. Because this is the first instruction, it belongs in the `before` step. That way, the learner knows what we are waiting for them to do when they first enter the course. It will look like this:
```yaml
before:
  - type: respond
    with: 01_first-response.md
```
:keyboard: Activity: Edit the `config.yml` file in this pull request on lines 17 and 18 to add a respond type, referencing the file that you created for the response.

I noticed that your commit to the `config.yml` file doesn't have what I am expecting.
I'm using a gate with regular expressions to check that you have committed something that looks like:

```yaml
- type: respond
  with: 01_first-response.md
```
Try again by committing to the `config.yml` file on this branch, and make sure your changes match my example above.
Phase 2 is where we watch to see if the learner did what we asked them to do. With Learning Lab, we are watching for the event to be sent by GitHub and then checking to see if it was the event we were expecting.
The events that are sent to Learning Lab alert it that something has happened. But to create a good learning experience, we should validate that the learner did the right thing. For example, if we ask a learner to commit a function to a file, we'll get an event when they've committed to a branch - but we would receive that same trigger even if they hadn't committed to the correct file! In these cases, use a gate action for validation.
A :book: gate is a Learning Lab action. Gates are conditionals, and they behave much like a conditional in JavaScript.
You can also get creative here - maybe you want to include tests in the template repository. When the tests are run, the status could be the event that you check.
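For instance, here is a hedged sketch of gating on a commit status (this assumes the `status` event's payload carries a `state` field of `success`, `failure`, `error`, or `pending`, and uses a hypothetical response file name):

```yaml
event: status
actions:
  - type: gate
    left: '%payload.state%' # the commit status state reported by the tests
    operator: ===
    right: success
    else:
      - type: respond
        with: 01_tests-failed.md # hypothetical: shown when the tests aren't passing
```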
As an example of how gates work, let's validate the learner's pull request title. This information is accessible to us :book: from the payload that is sent with the `pull_request.opened` event.
You can see an example of all the information sent in the GitHub Developer docs.
We'll add the :book: `left:` option to the gate, and compare its value to the expected pull request title. A completed example would look as follows, with comments on the right starting with a hash (`#`):
```yaml
actions:
  - type: gate # using the gate action
    left: '%payload.pull_request.title%' # get the title from the pull request object inside of the payload
    operator: === # check for strict equality, see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Comparison_Operators#Identity
    right: Add name to README # this is the expected value
```
:keyboard: Activity: Edit the `config.yml` file on this branch around line 32.

- Find the `type: gate` action on line 32.
- Add the `left:` option to the gate.
- Set a value for the `left:` option. This could be the pull request's title (`'%payload.pull_request.title%'`) or some other information from the payload, based on the event trigger.
- Set the `operator:`, usually to `===`.
- Set `right:` to the value we expect, like the title of the pull request, a regex for what is expected from the commit contents, or any other value that makes sense in your case.

Aren't sure what event and gate to use? No worries - you can borrow these:
```yaml
- title: Assign yourself
  description: Assign the first issue to yourself.
  event: issues.assigned
  link: 'https://github.com/piton182/lab-starter/issues/1'
  actions:
    - type: gate
      left: '%payload.pull_request.title%'
      operator: ===
      right: Add name to README
```
Nice, now you've got phases one and two covered. It's time for phase 3.
What you do in Phase 3 will be based on what you saw in phase 2. Let's keep it simple for now and have only two possibilities: either the user did it right, or they didn't. This structure is pretty basic, but if you use your imagination, you can probably envision more interesting possibilities.
I just added two response files in this pull request - one for the "happy path" where the user did it right, and one to redirect them and give them help to get back on track. Fill in those response files now.
- `responses/01_nice-work.md`
- `responses/01_try-again.md`
done
I noticed that though the correct file is edited, there isn't a lot of substance. It's okay to come back and edit these responses later to make them longer or more complete. But, in the meantime, you should write enough so that you can go through the course as a learner and remember what is expected for each step.
Try again - edit the `01_nice-work.md` file with at least one sentence of instruction for yourself, then comment in this pull request.
done
I noticed that though the correct file is edited, there isn't a lot of substance. It's okay to come back and edit these responses later to make them longer or more complete. But, in the meantime, you should write enough so that you can go through the course as a learner and remember what is expected for each step.
Try again - edit the `01_try-again.md` file with at least one sentence of instruction for yourself, then comment in this pull request.
done
To put this response in the config, it will be very similar to phase 1, but where the action goes is different. We need to be careful with the gate here. If the gate fails, we can have special logic for the "unhappy path" response. The "happy path" response is a regular response triggered when the gate succeeds, like:
```yaml
- type: gate
  left: '%payload.pull_request.base.ref%'
  operator: ===
  right: main
  else:
    - type: respond
      with: 01_try-again.md
- type: respond
  with: 01_nice-work.md
```
Go ahead and edit the config to add the unhappy path and the happy path response.
Are you noticing that we're asking a bit more of you now? Since you've already added a response before, we're now asking you to do two at a time. This is on purpose - it's important to balance how much you're asking learners to do. It's bad to bore them, but it's also really bad to overwhelm them. Every learner is different, so try to pick a "middle of the road" solution. This is ours. What do you think?
Awesome work so far! Now, you've officially got your first step written. It's a good time to try this course out. Before we do, we need to pay some attention to the metadata in the config file, so that Learning Lab knows what to do with it.
The parts that we need now are the title, description, and the name of the learner's repository. Learning Lab also needs more detail around each step. The information is there in detail in comments in the config file.
Here are a few examples:
```yaml
title: Introduction to GitHub
description: If you are looking for a quick and fun introduction to GitHub, you've found it. This class will get you started using GitHub in less than an hour.
template:
  name: github-slideshow
  repo: caption-this-template
  description: 'A robot powered training repository :robot:'
```

```yaml
title: "Communicating using Markdown"
description: "This course will walk you through everything you need to start organizing ideas and collaborating using Markdown, a lightweight language for text formatting."
template:
  name: "markdown-portfolio"
  repo: "communicating-using-md-template"
```

```yaml
title: Write a Learning Lab course
description: Use Learning Lab's strengths for fast feedback to author your own course.
template:
  name: lab-starter
  repo: write-a-ll-course-template
```
Nice job! I'll merge this pull request for you. Your next steps can be found in your next issue.
Events
Alright - you've chosen a project, and you've laid out the steps for your learners. Next, we're going to get into something new with Learning Lab: events! (You can learn more about events in the documentation.)
An **event** is the webhook that is triggered when the learner does something in their repository. Every webhook for the learner's repository is sent to Learning Lab. These events are "read" by Learning Lab. If it is the event the bot has been waiting for, the bot will do what you command. Otherwise, it will ignore the event. You can see all of the events in GitHub's documentation. Some of the most common examples are `pull_request.synchronize` or `issue_comment.created`.

Map behaviors to events
How can each step translate to a GitHub event? Having too many of the same event may be a bad signal. Make sure that events represent things that you're trying to teach.
For example, you may want to show a lot of information to the learner, and then have them close the issue to signify they've read it. That may make sense for one or two steps in your course. But, imagine going through a whole course like that. It isn't actually checking if the learner read - it's checking if the learner knows how to close issues!
Choose the right events
Try to choose events that correspond directly to what you want the learner to do. If you're trying to teach the learner to import an `npm` module into a `package.json` file, that commit should be the event. Some events you might use:

- `pull_request.synchronize`
- `push`
- `pull_request`
- `status`
- `issue_comment.created`
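Putting the advice above together, here is a sketch of mapping objectives to their most direct events (the objective titles are hypothetical examples - use your own):

```yaml
steps:
  - title: Open an issue # hypothetical objective
    event: issues.opened # fires exactly when an issue is opened
  - title: Comment on the issue # hypothetical objective
    event: issue_comment.created # a new comment, not an edit or deletion
  - title: Open a pull request # hypothetical objective
    event: pull_request.opened # more direct than bare pull_request
```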
Step 5: Map learning objectives to events
Next, your job is to map your learning objectives to events.
Remember the steps you wrote earlier? Let's find the corresponding events. You'll see some are already done for the examples, but you can focus on your own.
:keyboard: Activity: Map the learning objectives you wrote to specific events from GitHub webhooks
:keyboard: Activity: Edit the `config.yml` file on line 29 and make note of the event trigger that matches your first objective.

I'll respond below when I detect a commit on this branch.