p-kuckertz opened 1 year ago
I think it is an interesting perspective, but it is difficult to answer these questions without more detail in each question.
For example, "Where?" can ask for the location of the text (e.g., the publisher's website), but it can also ask for the location where the study was carried out. So it would make sense to elaborate the question types in more detail, see how many questions we find, and consider how we can address them with existing and new predefined commands.
Some thoughts on the mapping between the questions and predefined commands:
I will think a bit more about this idea.
I had to reopen the issue; something went wrong while making a comment.
Monica Gonzalez-Marquez (FZJ / Central Library) introduced me to the concept of "science reading". In short, this means that we write narratives of our scientific work (e.g., articles), because that is how our brains ingest information best. But sometimes it is hard to extract the key information from a text, and often much of this key information was not explained in the text to begin with.
To extract this information from a text while reading it, "science reading" means asking the following questions over and over again until you have an answer to them or are sure that the information is not there:
I think some of those questions are already answered by your predefined commands (e.g., "What is this text about?" : "research problem"; "How was the analysis performed?" : "method"; ...). I suggest taking a closer look at "science reading" (e.g., here: https://upstream.force11.org/science-reading-how-cognitive-narrative-can-help-us-understand-science-better/) and integrating/mapping more of those key questions/facts into the predefined commands.