SAS-DHRH / dhcc-toolkit

Digital Humanities and the Climate Toolkit (draft), Digital Humanities Climate Coalition (DHCC)
https://www.cdcs.ed.ac.uk/digital-humanities-climate-coalition

Maximal Computing — Building Resources and Learning to Fish #33

Open sbutler-gh opened 1 year ago

sbutler-gh commented 1 year ago

https://sas-dhrh.github.io/dhcc-toolkit/toolkit/maximal-computing.html#some-easy-wins

From anecdotal experience, I would add another Easy Win like this:

Learn to fish, build resources, don't repeat yourself (DRY)

Often, when working on something that AI (e.g. ChatGPT) can assist us with, our first instinct may be to prompt the model to complete a task for us. However, a prompt framed around "complete this task" leaves us no more capable of completing that task ourselves in the future. We'll have to return and spend more processing from ChatGPT to accomplish something similar, because we didn't learn a generalizable way to approach that kind of task.

For example, say you frequently need to reformat data in a CSV into a certain configuration. You could write ChatGPT a prompt asking it to do the formatting for you, a request you would then have to repeat every time you wished to format the data.

Or you could ask ChatGPT for code or formulas that you can plug into a tool of your choice (e.g. Google Sheets, Excel, JavaScript, a Jupyter notebook), which will do the formatting for you. This shows you how it works, rather than just handing you an answer, and leaves you with a resource you can use in the future. You won't need to come back to ChatGPT for help with the same task; you can complete it with the resource ChatGPT produced.
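To make the "resource" idea concrete, here is a minimal sketch of the kind of reusable script such a prompt might yield. The column names (`name`, `date`, `value`) and the pivot shape are my own assumptions for illustration, not an example from the toolkit:

```python
import csv

# Hypothetical reusable script: pivot a "long" CSV of (name, date, value)
# rows into one row per name, with one column per date. Saved once, it
# handles every future batch of data in this shape without re-prompting.
def pivot_csv(in_path, out_path):
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))
    dates = sorted({r["date"] for r in rows})
    names = sorted({r["name"] for r in rows})
    # Look up each (name, date) cell; missing combinations stay blank.
    cell = {(r["name"], r["date"]): r["value"] for r in rows}
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name"] + dates)
        for n in names:
            writer.writerow([n] + [cell.get((n, d), "") for d in dates])
```

The point is that the prompt bought a tool rather than a single answer: the next time the same formatting job comes up, running this script costs no further model processing.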

Instead of asking for a fish, you are asking to learn how to fish, and every time you need a fish thereafter, you can catch it yourself.

As another example, think about all the people in education (from K-12 on up) who will frequently use ChatGPT to do bullshit assignments. One way of approaching this, which I feel makes more sense, is to construct assignments so that they are less bullshittable, which would probably result in better ways of learning as well. (E.g. if we can't rely on traditional homework assignments and essays to assess learning and comprehension, due to ChatGPT, what would the ramifications be for the activities and exercises we create for those who are learning?)

And another way, in the meantime, is to build a resource for bullshitting — instead of asking for more bullshit from ChatGPT with every assignment.

So instead of prompting, "write me a 750-word essay on Max Ajl's A People's Green New Deal", a person could construct the prompt so that the AI creates a resource for them, helping them learn to fish in a way they can reuse without access to ChatGPT. For example: "write me a template for an essay of varying length in humanities-related fields, showing me where to insert certain types of information, facts, and arguments to present a cohesive argument." Formulaic structures like this already exist from K-12 education on up; and yet, if people insist on using the new shiny thing (or doing less manual work), building a resource like this will let them create more outcomes like the 750-word essay in the future, by building on the resource and learning created in this one prompt/interaction.

And these are not the only two options: (a) prompting ChatGPT to complete a task, and (b) prompting ChatGPT to build a resource (e.g. a template, code, Google Sheets functions, scripts, regular expressions) that helps me complete this task today and similar tasks in the future.
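As a small, hypothetical instance of option (b) in the "scripts and regular expressions" sense (the pattern and function name below are my own illustration), a single prompt could yield a saved snippet like this rather than a one-off answer:

```python
import re

# Hypothetical reusable regex resource: once saved, this normalises dates
# written like "3 March 2021" into ISO form "2021-03-03" in any text,
# with no further AI requests needed.
MONTHS = {m: i for i, m in enumerate(
    ["January", "February", "March", "April", "May", "June", "July",
     "August", "September", "October", "November", "December"], start=1)}
DATE_RE = re.compile(r"(\d{1,2}) (" + "|".join(MONTHS) + r") (\d{4})")

def to_iso(text):
    # Rewrite each matched date as YYYY-MM-DD, zero-padding day and month.
    return DATE_RE.sub(
        lambda m: f"{m.group(3)}-{MONTHS[m.group(2)]:02d}-{int(m.group(1)):02d}",
        text)
```

A resource like this is exactly the "learn to fish" trade: slightly more effort to ask for, indefinitely reusable afterwards.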

Other options exist, such as (c) manually researching and learning how to complete a task oneself (which, if it involves dozens of Google searches and page loads, could be more energy intensive than a ChatGPT prompt) and (d) asking other people for help with the same task, whether in person or online (e.g. Stack Overflow, Reddit, Twitter). (I'm not sure how the energy intensity of a Stack Overflow request, with a few thousand views and a dozen comments, compares to a ChatGPT request.)

However, energy intensity isn't the only consideration here. There is also the aspect of social interaction: communicating, doing things together, learning and exchanging with one another. And of course there is the publicness, visibility, and indexability of asking for help in places like Stack Overflow, Reddit, or Twitter compared to ChatGPT. These are places where more people can encounter and engage with what we're trying to do (potentially bringing more perspectives and approaches than ChatGPT could), and where others who run into the same problem can find the exchange later.

The latter part can be addressed if a person publishes all of their resources/learnings from ChatGPT online, so others can find them in Google searches (which sounds tedious, but perhaps there's a way to make it less so). The former part, solving things by asking for help, receiving help, communicating with others, and passing that learning on, does not seem to be something that can be addressed when using ChatGPT.

This bleeds into the "Then it gets complicated" section of Maximal Computing, and around questions like "can it be deferred? what are the benefits?" etc.

For example, in most cases a well-defined prompt/question to a given online community, whether Stack Overflow, a subreddit, a Discord, or Twitter, will get a helpful answer, crowdsourced from other people. It may just take a few hours.

^ This is even true for building resources, as discussed above. Somebody might not be willing to write you a 750-word essay on a given topic (then again, it's the Internet, who knows). But it feels much more likely that somebody would be willing to write a template for essays that you could bullshit from in the future: a resource like that. And written by an actual person, or a community of people thinking about a similar problem, the outcome could be better than what ChatGPT offers.

There are probably benefits to this in terms of social interaction, wellbeing, and community that crosses from our digital into our non-digital lives: interdependence, asking for help and getting help and giving help, patience, learning, all of those good things.

In light of that, even asking ChatGPT to build resources you can reuse in the future seems like one more epitome of the instantaneous, isolated, "one-click" world we live in today. Instead of interacting with others, asking for help, talking, being social, we just engage one-on-one with a screen.

... Wrapping this up, there are some concrete opinions this could point to. For example: