anitab-org / mentorship-backend

Mentorship System is an application that matches women in tech to mentor each other on career development through 1:1 relations during a certain period of time. This repository contains the backend of that system.
https://mentorship-backend-temp.herokuapp.com/
GNU General Public License v3.0

display last 3 achievements #1134

Closed · diananova closed 3 years ago

diananova commented 3 years ago

Description

With this change, the achievements list retrieved from the /home endpoint returns the latest three achievements instead of the first three.

Fixes #393
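
For context, a minimal sketch of the kind of slicing change this implies, assuming achievements are built from a list of completed tasks (the helper name and the completed_at field are assumptions; the real logic lives in app/api/dao/user.py and may differ):

```python
# Hedged sketch only: the helper name, the "completed_at" field and the
# list-of-dicts shape are illustrative assumptions, not the project's code.
def latest_achievements(completed_tasks, limit=3):
    """Return the most recently completed tasks instead of the earliest ones."""
    ordered = sorted(completed_tasks, key=lambda task: task["completed_at"])
    # Before the fix: ordered[:limit] returned the first (oldest) achievements.
    # After the fix: take the last entries so the newest achievements are shown.
    return ordered[-limit:]
```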

Type of Change:

Code/Quality Assurance Only

How Has This Been Tested?

Checklist:

Code/Quality Assurance Only

diananova commented 3 years ago

I'm not sure why the build/coverage report is failing. It doesn't seem related to my changes. I'm also getting a different error locally.

Local: FAILED tests/mentorship_relation/test_dao_creation.py::TestMentorshipRelationCreationDAO::test_dao_create_mentorship_relation_with_good_args_but_invalid_timestamp
GitHub: FAILED tests/task_comments/test_api_get_task_comments.py::TestGetTaskCommentsApi::test_task_comment_listing_api_with_task_not_existing

epicadk commented 3 years ago

I think you need to make a change to the existing test.
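
As an illustration of the kind of test adjustment this could mean (entirely hypothetical shape; the project's actual test differs), an assertion on the first three achievements would have to expect the latest three instead:

```python
# Hypothetical test shape for illustration only; field names and the inline
# slicing are assumptions, not copied from the repository's tests.
def test_home_achievements_returns_latest_three():
    completed = [{"description": f"task {i}", "completed_at": i} for i in range(5)]
    latest = sorted(completed, key=lambda t: t["completed_at"])[-3:]
    # An existing expectation of the first three would now need updating
    # to the most recently completed three.
    assert [t["description"] for t in latest] == ["task 2", "task 3", "task 4"]
```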

codecov[bot] commented 3 years ago

Codecov Report

Merging #1134 (8d4fe3b) into develop (4193589) will not change coverage. The diff coverage is 100.00%.

Impacted file tree graph

@@           Coverage Diff            @@
##           develop    #1134   +/-   ##
========================================
  Coverage    92.91%   92.91%           
========================================
  Files           38       38           
  Lines         2062     2062           
========================================
  Hits          1916     1916           
  Misses         146      146           
Impacted Files         Coverage Δ
app/api/dao/user.py    85.82% <100.00%> (ø)

diananova commented 3 years ago

@epicadk @isabelcosta Ready for review. I didn't make any changes, the tests are flaky.

epicadk commented 3 years ago

> @epicadk @isabelcosta Ready for review. I didn't make any changes, the tests are flaky.

Which one exactly? This seems to be the first time it's happened.

diananova commented 3 years ago

> @epicadk @isabelcosta Ready for review. I didn't make any changes, the tests are flaky.
>
> Which one exactly? This seems to be the first time it's happened.

tests/task_comments/test_api_get_task_comments.py::TestGetTaskCommentsApi::test_task_comment_listing_api_with_task_not_existing. It's an assertion error (we get code 403 instead of 404). Might have been a one-time problem. (screenshot attached)
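
For readers following along, the failing check has roughly this shape (the route and the pytest fixtures are illustrative, not copied from tests/task_comments/test_api_get_task_comments.py); a 403 where 404 is expected generally means the request was rejected as forbidden before the task-existence check ran:

```python
# Illustrative shape only; the route and the pytest fixtures (client,
# auth_header) are assumptions, not the repository's actual test code.
def test_task_comment_listing_api_with_task_not_existing(client, auth_header):
    # Request the comments of a task id that does not exist.
    response = client.get(
        "/mentorship_relation/1/task/9999/comments", headers=auth_header
    )
    # Expected: 404 (task not found). The flaky run returned 403 instead,
    # i.e. the request was treated as forbidden rather than not found.
    assert response.status_code == 404
```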

isabelcosta commented 3 years ago

Interesting 🤔 I saw your changes yesterday and thought you had fixed the tests when you added str(i) while concatenating a string. The coverage report steps have been failing at times, so I would not worry about that; it should be handled in a separate issue, aside from your PR @diananova :)

isabelcosta commented 3 years ago

The changes made in this PR were tested locally. Following are the results:

  1. Code review - Done

  2. All possible responses (positive and negative tests) were tested as below:

    • GET /home endpoint
      Screenshot/gif: left (this branch deployed) / right (develop branch deployed)

      Expected Result: tasks returned are the oldest ones
      Actual Result: tasks returned are the most recent ones
      (a request sketch reproducing this check follows the list)

  3. Additional Comments: Tested on https://ms-backend-review-pr-1134.herokuapp.com/

  4. Status of PR Changed to: Ready to Merge.
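
A small sketch of how the GET /home check above could be reproduced against the review app (the token handling and the "achievements" response field are assumptions, not verified against the API docs):

```python
# Hedged reproduction sketch; assumes a valid JWT from the login endpoint.
import requests

BASE_URL = "https://ms-backend-review-pr-1134.herokuapp.com"
token = "<access token obtained from the login endpoint>"

response = requests.get(
    f"{BASE_URL}/home",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()

# Per the PR description, this branch should list the latest three
# achievements rather than the first three.
print(response.json().get("achievements"))
```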

devkapilbansal commented 3 years ago

> I'm not sure why the build/coverage report is failing. It doesn't seem related to my changes. I'm also getting a different error locally.
> Local: FAILED tests/mentorship_relation/test_dao_creation.py::TestMentorshipRelationCreationDAO::test_dao_create_mentorship_relation_with_good_args_but_invalid_timestamp
> GitHub: FAILED tests/task_comments/test_api_get_task_comments.py::TestGetTaskCommentsApi::test_task_comment_listing_api_with_task_not_existing

@diananova consider opening an issue for both errors, including the build ID in which they occur. If tests are flaky, we need to fix them too. 🤔