@danmichaelo Thanks so much for this. You've done a wonderful job at making the lesson more universal (that is, less like it was written by me!)
A few things:
@jt14den: are you happy with these changes?
Thanks for the feedback, James!
'E02: Skipped the introduction to all the different wc flags since there's no conceptual learning from this, and introducing a flag that is different on Mac (unless you upgrade Bash) seemed like a bad idea.'
Can you say a little more about what you mean here? I'm keen we retain a sense of potential use cases throughout the lessons.
Since I added more material to episode 2, I felt like I had to remove some to avoid it growing too large (mental overload), trying to keep the focus on redirection and pipes. And since we already spend quite a lot of time exploring the different flags of two other commands (`ls` and `grep`), I felt like exploring the flags of `wc` could be left out.
One idea could be to move it to an exercise (learning to read documentation is important, so perhaps "Use the man page of `wc` to find out how to print just the number of words"). But I'm also open to other solutions.
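For what it's worth, a minimal sketch of what a solution to that exercise could look like (the filename is just a placeholder, not necessarily one of the ebooks in the lesson dataset):

```bash
# Read the documentation for wc (press q to quit the pager)
man wc

# Default output: lines, words and bytes
wc gulliver.txt

# The exercise answer: -w prints just the number of words
wc -w gulliver.txt
```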
'E02: Downplayed the "research data" focus in episode 2 a bit, since the dataset (a table of article metadata) isn't a very typical example of research data'.
The idea here is that a list of books is an example of the kind of thing a researcher and a librarian might collaborate on as a point of entry to a dataset. Again, it is a potential use case, and one I've seen a lot working with people in the humanities.
Right, someone's metadata is someone else's data or research data :) I like the idea, so my intention wasn't to remove it altogether, but I changed the heading "Counting and mining research data" to just "Counting and mining data", since the dataset doesn't seem to be created for the purpose of research even though it can be used for it. And I think it's a good idea to keep the library focus in the lesson to distinguish it from the SWC shell-novice lesson that also covers working with research data.
What's lacking is perhaps a better introduction. Do you think you could write one, since you are closer to the original idea? Then we could keep the "data" heading, but let the introduction explain the potential for using the dataset in research?
'gallic.txt': Is the lesson even using this file in the PR? If not, I suggest we remove it.
Ah, yeah, I thought about removing that, but forgot. It was used with `grep` in the original lesson, but I altered the exercise to take the regexp from an argument instead of from a file.
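In case it helps the discussion, a rough sketch of the difference (I'm guessing at the exact invocations; the filenames are placeholders, not the actual lesson files):

```bash
# Revised lesson: the pattern is given directly as an argument
grep -i 'gallic' articles.tsv

# Original lesson (roughly): pattern(s) read from a file with -f
grep -i -f gallic.txt articles.tsv
```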
1. I like the idea of adding "Use the man page of `wc` to find out how to print just the number of words". Are you happy to do this?
2. On 'research data', this makes sense. I am happy to write a new introduction once this PR has gone through.
3. gallic.txt: okay, remove. Are you happy to do this?
Good, so I updated d1334d832c6f1c745901535d52465ee4b57945b0 to include removal of gallic.txt (trying to keep the number of commits involving the quite large zip file low – by the way, I went back and forth a bit on whether to keep it in the repo or not; the size makes it non-optimal, but at the same time it's good that the repo is self-contained).
I also added a word count challenge.
Thanks so much for your hard work on this @danmichaelo I'll clear some time (probably next week) to write a proper intro, as discussed above. I hope you can stay involved in LC!
@drjwbaker These are great changes. Thanks @danmichaelo for all of the hard work. Hope the workshop went well! I esp. love the exercises you added.
For the E02 `wc` flags issue, we could also have a callout section in-line explaining the different `wc` flag usages first and then reinforce with an exercise later. This approach is used in some SWC lessons.
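Something like this could go in the callout (sticking to the portable flags; the filename is a placeholder):

```bash
wc gulliver.txt      # default: lines, words and bytes
wc -l gulliver.txt   # number of lines only
wc -w gulliver.txt   # number of words only
wc -c gulliver.txt   # number of bytes only
```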
@jt14den: True, there's room for more callouts, and also for connecting callouts to exercises, perhaps something to work on on another occasion. Another part that could gain from some more love is the learning objectives.
Note that one issue I left dangling is what to do with episode 3. As mentioned, I didn't teach it, since it took 3 hours to get through episodes 1 and 2, but I wasn't bold enough to remove it from the material either, meaning the whole lesson is now closer to 4 hours than 3. Perhaps it can just be left as an extra bonus episode for those who have time for it, not sure.
@danmichaelo On episode 3, I'm teaching a version of it this month (forked for humanities PhDs). My intention was to use that session as an excuse to significantly reduce episode 3, as well as to add different pathways for different types of free text (to expand the potential to chime with use cases librarians have). I suspect I'll do this when I do the new intro to episode one (so, soonish). Sound good?
:+1:
Hi! When preparing for the Library Carpentry workshop at UiO on February 2-3rd, we ended up making quite a lot of changes to the material (also to the SQL lesson, but that's for another PR :)).
Overall, we extended episode 1 and 2 to about 90 minutes each, and skipped episode 3 altogether.
You can see our fork rendered here. I'm opening a pull request for everything now just to get the discussion started about what to do with the changes. It's possible that you find the changes to be too radical. It's also possible that you would like to pull in some of the changes, but not all – in that case I'll see if I can do some rebasing and submit another PR. But let's discuss.
In general, the main changes were as follows (in an ideal world it would all be crystal clear from the commit messages, but there was a time crunch and so on...):
Prep: Moved all data files into a zip file that we had the participants download and extract to their Desktop at the beginning of the lesson.
E01: Removed the taster in episode 1. I just couldn't get my head around how to introduce it without scaring people off and possibly leaving them totally disheartened, so I found it better to just say a few sentences about what you can do with the shell and why it's important, and then go on teaching the first commands for navigating around in the shell.
E01: Added another ebook to the dataset in order to provide for a more natural introduction to using the up arrow, and to the important concept of wildcards.
E01: Moved the introduction to redirection (`>`) into E02, since I found it easier to introduce together with pipes (see the redirection and pipes sketch after this list).
E02: Skipped the introduction to all the different `wc` flags since there's no conceptual learning from this, and introducing a flag that is different on Mac (unless you upgrade Bash) seemed like a bad idea.
E02: Added a more thorough introduction to redirection and pipes, mostly using material from the SWC shell-novice lesson.
E02: Downplayed the "research data" focus in episode 2 a bit, since the dataset (a table of article metadata) isn't a very typical example of research data (even though most/all data could potentially be research data). And it felt a little bit like overselling to call `grep` a "powerful research tool", even though it is powerful :)
E03: We ended up skipping episode 3 altogether. It's still in the material, but we didn't teach it. While `sed` and `tr` are powerful commands, they can also easily mess up your life (at least your files), and they are not two commands I would recommend beginners to start with (before learning version control and diffing and such). And generally it's more convenient to use a text editor where you can preview the result first, or something like an interactive Python session (a rough illustration of the `grep` versus `sed`/`tr` point follows after this list).