opencert / workshop-2021

Working repository of the 10th International Workshop on Open Community approaches to Education, Research and Technology (OpenCERT 2021) *** towards "Open community approaches" CERTification processes ***

Submission 5 (Education area - Tool paper) #7

Open AntonioCerone opened 3 years ago

AntonioCerone commented 3 years ago

DrPython-WEB: a tool to help teaching well-written Python programs

Abstract Many students learning to program for the first time in a higher-education course write very poor code: difficult to read, badly organized, and uncommented.

Writing inelegant code reduces a student's professional opportunities.

In this paper we present DrPython-WEB, a web application that automatically extracts linguistic, structural, and style features from students' programs and grades them against a teacher-defined assessment template. With DrPython-WEB we want to accustom students to better stylistic features and better coding practices.

The novelty of DrPython-WEB, with respect to other Python linters, is that it analyzes linguistic and stylistic features besides the usual code-quality measures.
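
As a rough illustration of this kind of analysis, the sketch below extracts a few linguistic, structural, and style indicators from a Python source file using only the standard library. The feature names and formulas are hypothetical, chosen for illustration; they are not DrPython-WEB's actual ones.

```python
# Minimal sketch of linguistic/structural/style feature extraction of the
# kind the paper describes (hypothetical features, not the tool's own).
import ast
import io
import tokenize


def extract_features(source: str) -> dict:
    """Return simple linguistic and structural features of a Python program."""
    tree = ast.parse(source)

    # Structural features: function count and average body length.
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    avg_body = sum(len(f.body) for f in funcs) / len(funcs) if funcs else 0.0

    # Linguistic feature: average identifier length.
    names = [n.id for n in ast.walk(tree) if isinstance(n, ast.Name)]
    avg_name_len = sum(map(len, names)) / len(names) if names else 0.0

    # Style feature: comment density, counted from the token stream.
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    comments = sum(1 for t in tokens if t.type == tokenize.COMMENT)
    lines = max(1, source.count("\n") + 1)

    return {
        "num_functions": len(funcs),
        "avg_function_body_len": avg_body,
        "avg_identifier_len": avg_name_len,
        "comment_density": comments / lines,
    }


if __name__ == "__main__":
    sample = "def add(a, b):\n    # sum two numbers\n    return a + b\n"
    print(extract_features(sample))
```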

Taniadimascio commented 3 years ago

I will review this submission. Tania Di Mascio

Donatella-Persico commented 3 years ago

I will review this submission. Donatella Persico

Donatella-Persico commented 3 years ago

Hi! I read the paper and I think it is a very well-written, neat and focused paper. I enjoyed it a lot! I only have one general comment: could you try to make the connection with the workshop theme even more explicit? In addition, I have a few very specific comments that you can read in the attached annotated document. Thank you for this interesting piece of work. Donatella

OPENCERT paper 5 by 19-11-2021.pdf

AntonioCerone commented 3 years ago

The first three reviewers for this paper are:

Two more reviewers will be added tomorrow.

The PDF of the paper is: OpenCERT_2021_paper_5.pdf

Hermioni commented 3 years ago

I'm happy to review this paper

AntonioCerone commented 3 years ago

Dear 'Hermioni',

Could you please identify yourself? Otherwise it is not possible to assign the paper to you. Thank you.

Kind regards,

Antonio

AntonioCerone commented 3 years ago

The final reviewer assignment for this paper is:

Hermioni commented 3 years ago

Hi Antonio,

apologies, I hadn't realised I had used my anonymous login :-)

I’ll create a new login…

Best, Lucia


Lucia Rapanotti PhD MSc BSc(Hons) SFHEA FBCS CEng | Senior Lecturer, School of Computing and Communications, Faculty of Science, Technology, Engineering and Mathematics, The Open University, Walton Hall, Milton Keynes, MK7 6AA


asterbini commented 3 years ago

(Quoting Donatella-Persico's review above.)

Thanks for your comments; I have tried to address them in this new version: DrPython_2021-v2.pdf

Donatella-Persico commented 3 years ago

Hi! I am happy with the new version...let us see anyway what the other reviewers have to say. All the best Donatella

opencert commented 3 years ago

Thank you Donatella for your comments, and thank you Andrea for promptly incorporating the feedback into a revised version of the paper. While waiting for comments from the other reviewers, we remind all reviewers that this submission is a Tool paper, intended to present a new tool, a new tool component, or novel extensions to an existing tool aimed at supporting open community approaches, or the use/customisation of an existing tool in the context of open communities.

Hermioni commented 3 years ago

Hi, thank you for your article, which I've read with interest. The topic is important, and the system outlined appears to make a valuable contribution to improving students' coding proficiency in introductory programming courses. The article is generally well presented and easy to follow. The standard of English is very good, with only minor typos (please see the annotated file attached -- I hope you can read my scribbles!). The main weakness of the paper, as acknowledged by the authors, is the lack of a full evaluation of the proposed system. However, as the system has been tested on a large number of student assessment and exam scripts, I think a discussion should be added with some indication of how the system has performed in scoring those assignments. For instance, was there any correlation between the system's scoring and the actual marks awarded?
DrPython_2021-v2.pdf.pdf
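
A minimal sketch of the analysis suggested above, assuming the tool's scores and the awarded marks are available as two parallel lists; all values below are invented for illustration. Spearman's rank correlation is used here since a monotonic, not necessarily linear, relation is what matters.

```python
# Hypothetical check of how tool scores track awarded marks.
from scipy.stats import spearmanr

tool_scores = [0.62, 0.81, 0.45, 0.90, 0.33, 0.74]   # invented tool output
awarded_marks = [65, 78, 52, 88, 40, 70]             # invented exam marks

rho, p_value = spearmanr(tool_scores, awarded_marks)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```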

asterbini commented 3 years ago

Thanks for the comments! We are very interested to discover whether some correlation exists between the observed features and the grades. It will be even more interesting to see whether the correlation increases once the system is in use. For the moment we can apply the machinery only to older exercises, and we lack the data required for the comparison. We hope to address the issue in a future paper, after we have collected data with the system in use. I hope this answers your request; I have added a comment along these lines to the paper. I have also corrected the typos (thanks); here is the new version.

DrPython_2021-v3.pdf

opencert commented 3 years ago

We would like to clarify that the current interactive phase of the review process is for the reviewers to provide comments that help the authors improve their work. The question of whether to accept or reject the submission should not be raised during this phase; it will be considered, discussed and finalised in a closed meeting during the assessment phase, which will be carried out on EasyChair.

Taniadimascio commented 3 years ago

Dear authors, thanks for your article. I enjoyed reading about your valuable contribution, the DrPython-WEB tool (I read the latest version, v3, uploaded here by the authors). I think it could be very helpful in improving students' ability to produce good-quality programs. Overall, the paper is well written and easy to follow, and the tool is well presented. However, I agree with you and the other reviewers regarding the weakness of this article, i.e., the lack of an empirical evaluation of the proposed system. In addition, I would suggest highlighting the novelty of your tool with respect to other existing solutions for learning Python programming (e.g., Learn Python, SoloLearn), to frame your proposal within the state of the art.

Donatella-Persico commented 3 years ago

Hi All, I agree with all the reviewers (and authors) who commented that the lack of an empirical evaluation of the tool is a weakness of this paper. However, I believe this weakness can be "accepted", as this is a conference paper, not a journal paper. Especially at a time when journals require evidence-based research, I think conferences can and should be used to present work in progress; otherwise authors would be forced to work for years without being able to receive feedback from the academic community. For this reason, I do not see the lack of evidence about the tool's effectiveness as a major problem in this case. All the best Donatella

asterbini commented 3 years ago

If you think it would be useful, in a couple of days I could add another example of the different outputs for different feature templates to the paper.

Regarding the correlations with final grades, we are still studying the data. From our initial analysis it seems that no simple linear (or monotonic) relation between features and grades is evident, so we are widening the number of processed programs in order to also apply clustering and/or ML approaches (which need more data and time). I have added a comment about this to the paper. DrPython_2021-v4.pdf
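
A minimal sketch of the clustering step mentioned above, under the assumption that each program is represented by a small feature vector; the feature values and cluster count are invented for illustration, not taken from the paper.

```python
# Group programs by their extracted feature vectors to look for patterns
# that a simple linear correlation would miss.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: programs; columns: e.g. comment density, avg identifier length,
# number of functions (all values invented).
features = np.array([
    [0.10, 4.2, 3.0],
    [0.02, 2.1, 1.0],
    [0.15, 5.0, 4.0],
    [0.01, 1.8, 1.0],
    [0.12, 4.8, 3.0],
    [0.03, 2.5, 2.0],
])

scaled = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)  # cluster assignment per program, to compare against grades
```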

leonorbarroca commented 3 years ago

I've asked my colleague Dhouha Kbaier (dhouha.kbaier@open.ac.uk) to review this paper; here is her review:

  - Good paper overall, well written.
  - The main idea/approach is nice and relevant.
  - Literature search poor in general. Need to clearly identify the gap, compare with previous works and add more relevant literature.
  - Need to insist on the novelty when they talk about the extracted features; here as well, need to add more literature and discuss the pros/cons of what is currently offered.
  - The presented example should be clarified on several aspects that are currently a bit confusing.
  - My detailed comments are attached.

I would give them "Major Revision". DK-Comments-OpenCERT_2021_paper_5.pdf

asterbini commented 3 years ago

Thanks for the suggestions and references; I will revise the paper, improving the state-of-the-art analysis and comparison.

asterbini commented 3 years ago

I have addressed @leonorbarroca's comments and I hope I have expanded the literature enough. Here is version 5 of the paper, which I tried to upload to EasyChair without success (I am sorry if I am late).

DrPython-v5.pdf

marte-git commented 3 years ago

OK, the latest version of the paper is now on EasyChair too.

jnoll commented 3 years ago

Colleagues, I read this paper with some interest as I am currently the lead instructor for a class with over 1,700 students, so any kind of automated grading approach would be helpful. The paper is well-written and easy to understand, and the approach described could be useful if it works. I'm especially intrigued by the attempt to analyse stylistic characteristics such as good identifiers and comments. That said, I have three concerns:

  1. How does this approach differ from, or build on, other approaches? DrScratch is mentioned, along with "rubric based" systems, but it's not clear to me what the unique contribution of DrPython is.
  2. Does it work? I realize this system has yet to be deployed extensively, but I would still like to see some evidence that it can identify good and bad instances of the various categories.
  3. What is the connection to the OpenCERT workshop theme? I can guess, but I would like the authors to make this more clear.
asterbini commented 3 years ago

Thanks @jnoll for your questions. 1) The main contribution is the focus on linguistic features; we are doing work similar to rubric-based assessment, applied to features extracted from Python programs. 2) We are still studying the data to select the features that best represent style and programming skills. 3) We will add the DrPython rubric-based assessment to our Q2A-I system, where students discuss their homework and participate in formative peer assessment. We think the addition of a style-based peer-assessment phase will be beneficial to the students. We hope this is aligned enough with the OpenCERT interests.
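
As a rough illustration of what a teacher-defined, rubric-based assessment template could look like, the sketch below scores a feature dictionary against weighted acceptable ranges. The rubric entries and the grading formula are hypothetical, not the tool's actual ones.

```python
# Hypothetical rubric: each feature gets an acceptable range and a weight;
# a program's grade is the weighted share of features that fall in range.
RUBRIC = {
    # feature name: (min acceptable, max acceptable, weight)
    "comment_density":    (0.05, 0.40, 2.0),
    "avg_identifier_len": (3.0,  25.0, 1.0),
    "num_functions":      (1,    50,   1.0),
}


def grade(features: dict) -> float:
    """Score a feature dict against the rubric, normalised to [0, 1]."""
    total = sum(w for _, _, w in RUBRIC.values())
    earned = sum(
        w
        for name, (lo, hi, w) in RUBRIC.items()
        if lo <= features.get(name, float("-inf")) <= hi
    )
    return earned / total


print(grade({"comment_density": 0.2, "avg_identifier_len": 4.5, "num_functions": 3}))
# -> 1.0: all three features fall inside their acceptable ranges
```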

marte-git commented 3 years ago

Thank you, everybody, for participating in a very interesting review protocol and experience.

Please place your review on EasyChair, so that Antonio and I can proceed towards notification. Thanks again, and cheers :) Antonio & Marco

Donatella-Persico commented 3 years ago

Hi All, thanks for this interesting discussion. I'll place my review on EasyChair. Donatella

asterbini commented 3 years ago

Thank you all for your very constructive comments. Here is the pre-camera-ready version of the paper.

DrPython_pre-camera-ready.pdf