
Identifying Task-Relevant Knowledge in Natural Language Software Engineering Artifacts

Principal Investigator:

Dr. Gail Murphy, Dept. of Computer Science, University of British Columbia (murphy@cs.ubc.ca, +1 604 822 5169)


Co-Investigators:

Arthur Marques, graduate student, Dept. of Computer Science, University of British Columbia (msarthur@cs.ubc.ca)


Study Purpose:

The purpose of this study is to design a non-intrusive approach for identifying which parts of a natural language artifact are useful during a software development task so as to aid developers in finding information relevant to their tasks.

To accomplish this objective, we seek to investigate which text in non-source-code artifacts software developers deem relevant to a software task, and whether automatically identified and highlighted text assists task completion.


What you will be asked to do:

This is an online study, performed in a single session that will take no longer than 2 hours to complete. The study uses a web browser plugin that allows you to select text deemed relevant to a software task and that automatically highlights text in certain web pages. The study is composed of an introductory session, a period in which you are asked to work on a set of tasks, and a follow-up survey.

If you agree to participate in this study, we will provide you with a link to the web browser plugin and installation instructions. After installation, you will have the opportunity to practice the experimental procedures on a sample task.

In the second part, you will be asked to provide a solution, in the form of written code, for two software tasks. The tasks are drawn from online Python programming problems and require you to inspect reference documentation in order to write your solution.

For each task, you will read the related documents in any order you wish and write your solution for that task. In the first task, we ask you to use the plugin to manually select text that you consider relevant and that helped you reach your solution. In the second task, the plugin will automatically highlight text and, after completing the task, you will rate how helpful the highlighted text was.

Completing a task is not mandatory, and you will be allowed to work on each task for at most 50 minutes. You will proceed to the next task when you declare that you have finished your current task or after the 50-minute mark. Please note that neither the quality of your code nor your completion time is a determining factor, and we encourage you to work at your normal pace.

When you finish your last assigned task, the final step in the experiment is an online survey that allows you to submit the text you highlighted in the first task and asks you to rate how helpful the automatically highlighted text was in each document of the second task.


Consent

Your participation in this study is entirely voluntary. You are free to withdraw your participation at any point during the study, without needing to provide any reason. Any information you contributed up to your withdrawal will be retained and used in this study unless you request otherwise.

You will be provided with a link to an electronic consent survey. By consenting to participate in this study, you confirm the statements presented in that survey.


A full copy of the consent form is available on Qualtrics.

February, 2022

Ethics ID: H19-04054