oppia / oppia

A free, online learning platform to make quality education accessible for all.
https://www.oppia.org
Apache License 2.0

Acceptance Testing - covering all user journeys in form of user stories #17712

Open · Shivkant-Chauhan opened 1 year ago

Shivkant-Chauhan commented 1 year ago

Describe the bug

In order to verify that key user journeys do not break, the Oppia development team is implementing Acceptance Testing for the develop branch. This means that, on every commit, all key requirements are verified through a set of acceptance tests that check that each type of end user can do the things they expect to do on the site.

The aim of this issue is to build up a set of such tests. Please see the instructions below for how to claim a task in this issue. (Feel free to contact @oppia/acceptance-test-reviewers if you have any doubts!)

# Instructions

  1. Carefully read the “How to write Acceptance Tests” wiki page.
  2. Select 1 or more “packages” (of 1-2 user stories each) from the list below, and get yourself assigned to them by @oppia/acceptance-test-reviewers.
  3. After the user stories are assigned to you, go to the spec file for that user type at core/tests/puppeteer-acceptance-tests/spec/<user-type>tests.spec.js
  4. Start writing the assigned user stories based on the Testing Spreadsheet. Each user story in the spreadsheet should be covered within 1 it block, and all the test steps and expectations should match the expected behavior. You might also need to write utility functions, similar to the existing examples below.
  5. After you have written the tests, make sure they run fine on your local machine before submitting your PR. If this is your first time writing acceptance tests, we suggest writing the tests for one user story and verifying that it runs fine before adding the rest of the tests to your PR.
  6. In case of any confusion or for PR reviews, feel free to ask @oppia/acceptance-test-reviewers
  7. After getting assigned to a package (containing 2-3 user stories), inactivity or lack of communication for more than 7 days may lead to you being unassigned from the task.

For reference: You can see an example set of acceptance tests for “blog admin”/”blog editor” here. Please follow a similar format and file naming convention for the tests that you write.
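
To make the expected format concrete, here is a minimal, hypothetical sketch of one user story covered by a single it block. The factory and utility method names (and the require path) below are placeholders rather than the real Oppia helpers; treat the blog admin/editor tests referenced above as the authoritative example.

// Hypothetical sketch only: helper names and the require path are
// illustrative, not the actual Oppia utilities.
const userFactory = require(
  '../puppeteer-testing-utilities/user-factory.js');

describe('Blog Editor', function() {
  let blogEditor = null;

  beforeAll(async function() {
    // Assumed factory helper that signs up a blog editor user.
    blogEditor = await userFactory.createNewBlogPostEditor('blogEditor');
  });

  // One user story from the spreadsheet == one it block.
  it('should create and publish a blog post', async function() {
    // Test steps, each backed by a utility method in the user file.
    await blogEditor.createNewBlogPostWithTitle('Test Post');
    await blogEditor.publishTheBlogPost();
    // Expectation matching the spreadsheet's expected behaviour.
    await blogEditor.expectPublishedBlogPostToHaveTitle('Test Post');
  });

  afterAll(async function() {
    // Assumed cleanup helper that closes the browser instances.
    await userFactory.closeAllBrowsers();
  });
});

The exact method names come from the user utility files; the key point is one it block per user story, with steps and expectations mirroring the spreadsheet.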

List of user stories (arranged by user type)

(serial numbers match the testing spreadsheet row numbers for each user type)

Exploration Creator

Contributor Dashboard

Logged-in Learner @AFZL210

Anonymous Learner

Logged-in user (browsing static pages)

Topic Manager

Shivkant-Chauhan commented 1 year ago

@maxliu2001 @SACHARYAS @noramark @agallop @suraj-mandal @michbyiri @SahilB1 @jjxiong23 @aashanp01 @katehrkim @caramelmelmel @marcella-stefan

Please give updates on the packages you were assigned. Thanks!

agallop commented 1 year ago

@Shivkant-Chauhan I'm making progress on the translation admin tests. I was running into some trouble using one of the select elements on the Contributor Dashboard admin page, but I see it is because there are two of these elements with the same id.

<select id="label-target-form-review-category-select"
    class="form-control e2e-test-form-contribution-rights-category-select"
    ng-options="roleString as roleName for (roleName, roleString) in $ctrl.CONTRIBUTION_RIGHT_CATEGORIES"
    ng-model="$ctrl.formData.viewContributionReviewers.category"
    ng-change="$ctrl.clearResults()">
</select>
<select id="label-target-form-review-category-select"
    class="form-control e2e-test-form-contribution-rights-category-select"
    ng-options="value as key for (key, value) in $ctrl.CONTRIBUTION_RIGHT_CATEGORIES"
    ng-model="$ctrl.formData.addContributionReviewer.category">
</select>

These ids should probably be different. Is it okay to make a PR to change the second one to

<select id="label-target-form-add-category-select"
   ...
</select>

Thanks!

suraj-mandal commented 1 year ago

Hi @Shivkant-Chauhan, I was away on a trip. I'll be back by tomorrow and will make progress then.

caramelmelmel commented 1 year ago

Hi, I was away; I will be making progress later today or early tomorrow.

Shivkant-Chauhan commented 1 year ago

> @Shivkant-Chauhan I'm making progress on the translation admin tests. I was running into some trouble using one of the select elements on the Contributor Dashboard admin page, but I see it is because there are two of these elements with the same id. [...] These ids should probably be different. Is it okay to make a PR to change the second one to label-target-form-add-category-select?

You can fix this by changing the element's id in your PR itself. This will unblock you and will correct the behavior for the tests.

Shivkant-Chauhan commented 1 year ago

@maxliu2001 @SACHARYAS @noramark @agallop @suraj-mandal @michbyiri @SahilB1 @jjxiong23 @aashanp01 @katehrkim @caramelmelmel @marcella-stefan Can you all please give updates on your current progress on the assigned tasks? Please note, I am thinking of unassigning inactive folks from their packages so that new contributors have more choices when selecting packages to work on.

suraj-mandal commented 1 year ago

Hi @Shivkant-Chauhan, I have done the setup for the first part of the anonymous learner, like creation of the exploration etc. Now I need to do the assertions based on the test cases provided.

jjxiong23 commented 1 year ago

@Shivkant-Chauhan Apologies for the late response! I will be out of town for a while and won't have the ability to work on my assigned package anymore, so please go ahead and unassign me so that other contributors can claim the package if they'd like. Sorry for the inconvenience!

caramelmelmel commented 1 year ago

I'm doing the assertions on the test case objects and facing some difficulties now.

caramelmelmel commented 12 months ago

@Shivkant-Chauhan I'm hitting this error as a prerequisite from the user perspective. I filed an issue about being unable to create an exploration as a lesson creator: https://github.com/oppia/oppia/issues/18275

caramelmelmel commented 12 months ago

Hi, so I'm fixing a few errors that the user is experiencing but that are not clear to the user:

  1. Bad requests

    • The user is trying to publish a lesson with directed cases of learners being stuck and therefore unable to proceed. This gives a 400 simply because the user is unable to retract the accidental mistake of labelling the option correct while also placing it as the stuck option.
    • State transitions are correct, but the errors and displays are not very obvious to the user.

  2. When creating a lesson, there seems to be an error where the test account is not removed, so the tutorial does not show up again.

  3. One of the text boxes does not have an obvious validation for the minimum number of characters required before publishing the lesson.

caramelmelmel commented 12 months ago

replied on Google sheet too!

agallop commented 11 months ago

Quick update from me.

I finished writing the tests; I just need to clean up the style and comments, in addition to checking any references to the ids I changed.

I'm currently traveling so it may take a few days to get the PR out.

Thanks

Shivkant-Chauhan commented 9 months ago

Since no one seems to be working on any of the user stories and I don't see any active pull requests, I am unassigning everyone from their assigned user stories.

suraj-mandal commented 9 months ago

Give me this week; if it's not completed by then, unassign me.

Shivkant-Chauhan commented 9 months ago

@suraj-mandal No worries. Can you please tell me which user story you are working on?

caramelmelmel commented 9 months ago

I have added my commits but forgot to open my PR, oops.

Shivkant-Chauhan commented 9 months ago

@caramelmelmel Please let me know which user story you are working on. Thanks!

caramelmelmel commented 9 months ago

Hi @shivkant5, I'm working on 7 and 8; let me reopen this.

caramelmelmel commented 9 months ago

@shivkant5 done with 7, finishing 8.

Shivkant-Chauhan commented 9 months ago

> @shivkant5 done with 7, finishing 8.

Hi @caramelmelmel, please open a PR for that and assign me on it. Thanks!!

caramelmelmel commented 9 months ago

Hi @shivkant5, I will open my PR after I take this week to clean up everything.

agallop commented 9 months ago

@shivkant5 It looks like my previous pull request for 3.1, 3.2 was closed and I've been super busy. I can open another one. I did address your comments in the original PR, but I did need some help with this ask:

> main thing, do a bulk run (at least 20 runs for each test) of the tests on CI by modifying the GitHub Action workflows on your fork.

I think I requested a re-review before the PR closed itself, but maybe I needed to reassign it as well?

Shivkant-Chauhan commented 9 months ago

> Hi @shivkant5, I will open my PR after I take this week to clean up everything.

Sure @caramelmelmel. Let me know once you have a PR opened for the user stories. Thanks!

Shivkant-Chauhan commented 9 months ago

> @shivkant5 It looks like my previous pull request for 3.1, 3.2 was closed and I've been super busy. I can open another one. I did address your comments in the original PR, but I did need some help with this ask (the bulk run of at least 20 runs per test on CI). I think I requested a re-review before the PR closed itself, but maybe I needed to reassign it as well?

It would be fine if you can re-open that previous PR itself; that way it would be easy to track the requested changes. For the bulk run, once all the reviews are done I will do a bulk run on my other repo with your changes and will let you know about any flakes that occur. You will need to fix those as well, so that once your tests are merged into develop they are not flaky.

agallop commented 9 months ago

I don't think I am able to reopen the pull request. I think the button would be here if I could?

[image]

seanlip commented 9 months ago

@agallop I reopened it! Thanks for flagging.

StephenYu2018 commented 9 months ago

@Shivkant-Chauhan @ashish-patwal Can I please do 3.1 & 4.1 under Contributor Dashboard?

Shivkant-Chauhan commented 9 months ago

> 3.1 & 4.1

Hey @StephenYu2018, we are assigning whole packages, each containing 2 user stories. Can you please take another look and tell me which package you want to work on?

StephenYu2018 commented 9 months ago

@Shivkant-Chauhan I'll take package 4 (Practice Question Submitter) under contributor dashboard.

Shivkant-Chauhan commented 9 months ago

done @StephenYu2018

StephenYu2018 commented 8 months ago

@Shivkant-Chauhan For the practice question submitter acceptance tests, the instructions say to check for a listed opportunity labelled 'Test Topic, First Exploration'. According to this snippet, the topic name is the heading of a skill opportunity (which is fine), but the subheading is the skill description. The exploration/chapter name is not displayed on the listed opportunity itself.

Where is 'First Exploration' supposed to be located on the listed skill opportunity?

seanlip commented 8 months ago

Hi @StephenYu2018 -- I think you are right, this is inaccurate. I've given you edit access to the spreadsheet, could you please update the test expectations appropriately? Thanks for catching this!

StephenYu2018 commented 8 months ago

@seanlip @Shivkant-Chauhan Should I also add setup steps to each test that initialize the newly created skill's rubrics, misconceptions, worked examples, etc. (and the corresponding expectations to check that they show up during the question suggestion process)?

seanlip commented 8 months ago

I think that would be great if possible, thanks! (As long as it doesn't significantly blow up the project's complexity.)

agallop commented 7 months ago

Since #18345 is approved, and hopefully will be merged soon, can I take a look at 6.1, 6.2?

seanlip commented 7 months ago

@agallop Go for it :) Can you please leave a similar comment on https://github.com/oppia/oppia/issues/18897 so that we can properly assign you? GitHub doesn't show your name in the assignees list otherwise.

Thanks for helping out with this!

caramelmelmel commented 7 months ago

Hello, as part of Hacktoberfest, I'll be making that PR after the code is refactored!

StephenYu2018 commented 7 months ago

@Shivkant-Chauhan @ashish-patwal Is an "expect" method defined in a user-utilities file allowed to take no parameters? Also, does each step or expectation require its own method defined in a user-utilities file?

seanlip commented 7 months ago

@StephenYu2018 In principle, it's totally fine for an "expect" to not take any parameters. It depends on whether the expectation needs to be parameterized. If it doesn't then that's perfectly OK.

Each "general user action" does require its own method defined in a user-utilities file. That enables those actions to be reused by other tests.
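
As a rough illustration only (the function names and CSS selectors below are hypothetical, not Oppia's actual utilities), a parameterized general user action and a parameter-less expectation in a user-utilities file could look like this:

// Illustrative sketch only, not the actual Oppia utility API. It shows
// one reusable, parameterized "general user action" and one
// parameter-less "expect" check, written against a Puppeteer page.

async function navigateToContributorDashboardTab(page, tabName) {
  // General user action: parameterized so any spec can reuse it for
  // different tabs. The selector pattern is an assumption.
  await page.click(`.e2e-test-${tabName}-tab`);
}

async function expectQuestionSubmissionModalToBeVisible(page) {
  // Expectation with no extra parameters: the check itself is fixed,
  // so nothing needs to be passed in. The selector is an assumption.
  await page.waitForSelector(
    '.e2e-test-question-suggestion-modal', {visible: true});
}

module.exports = {
  navigateToContributorDashboardTab,
  expectQuestionSubmissionModalToBeVisible
};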

agallop commented 7 months ago

3.1 and 3.2 are finished 🙂

StephenYu2018 commented 7 months ago

@Shivkant-Chauhan @ashish-patwal On the testing spreadsheet for practice question submitters, on expectation #10, it says "the progressbar in the dashboard should show <%>". What does this mean?

Also, it's worth noting that the progressbar of a question opportunity doesn't change when a user successfully submits a question suggestion for review. I manually tested it and found no progress difference between before and after the suggestion was submitted.

seanlip commented 7 months ago

@StephenYu2018 I think the first one means that the progress bar should show the correct percentages (though I agree with you that it's not worded that clearly).

Re the second paragraph: shouldn't the progress bar have three parts to it -- questions fully accepted, questions submitted but not yet reviewed, and questions left to submit? At least that's the case for translations, I think.

StephenYu2018 commented 7 months ago

@seanlip When I tried manually testing the flow, in order to publish a topic, I had to attach a skill with at least 3 questions. Later when I went to the submit questions tab on the contributor dashboard, the progress showed "30%" with a green part of the progressbar reflecting that. After I successfully submitted the suggestion, the progress still showed "30%".

seanlip commented 6 months ago

Hi @StephenYu2018 thanks for checking this (and sorry for the late reply). I think this might actually be a bug. Translations don't work the same way, right -- they have three different colours? (See screenshot below.)

[Screenshot from 2023-10-28 23-39-04]

The questions tab should do this too. If it doesn't then I think let's do the following:

(a) file an issue for the above, and (b) write the acceptance test based on the current behaviour (two colours), but add a TODO pointing to the issue filed in (a) and saying to update the test when that issue is fixed.

Would that work? Thanks!
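
For (b), a tiny hypothetical sketch of how that TODO might look inside the spec (the issue number, helper names and selectors are placeholders):

it('should show submitted questions in the progress bar', async function() {
  await questionSubmitter.submitQuestionSuggestion();  // hypothetical helper
  // TODO(#XXXXX): The questions progress bar currently shows only two
  // colours; update this expectation once the three-part bar (accepted /
  // awaiting review / remaining) from the issue filed in (a) is implemented.
  await questionSubmitter.expectProgressBarToShowTwoColours();  // hypothetical
});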

imchristie commented 6 months ago

Hi @Shivkant-Chauhan. I would like to work on the Practice Question Reviewer (5.1, 5.2). Would you mind assigning me?

seanlip commented 6 months ago

@imchristie Please leave a comment directly on the associated issue linked from the description at the top, so that we can assign you to it. (Otherwise GitHub won't let us do so.) Thanks.

imchristie commented 5 months ago

Hi @Shivkant-Chauhan @seanlip, I would like to work on Logged-in user (browsing static pages). Would you mind assigning me? Also, I would like to ask for a clearer explanation of the second part (11).

seanlip commented 5 months ago

@imchristie I've assigned you. For (11) this just means visiting the different static pages on the site through the navbar and the footer links. See the testing spreadsheet for more info.

Akhilesh-max commented 3 months ago

@Shivkant-Chauhan @seanlip I’ve carefully noted all the points discussed. I’m particularly interested in working on the 5th and 6th CUJ of the exploration editor. Could you please assign these to me? Thanks.