```
2) Emails Topic comments Do not send email about topic comment unless set in preferences
Failure/Error: expect(page).to have_content 'Have you thought about...?'
  expected to find text "Have you thought about...?" in "Decide Madrid Language: Transparency Open data Local Forums Blog Notifications My activity My account Sign out Debates Proposals Voting Processes Participatory budgeting Help Back to Proposal community Proposal 13 title Topic title 1 Manuela184 • 2018-02-09 • No comments Description as comment 1 Comments (0) Sort by Leave your comment Open government This portal uses the CONSUL application which is open-source software. From Madrid out into the world. For technical assistance enters technical assistance Participation Decide how to shape the Madrid you want to live in. Transparency Find out anything about the Madrid City Council. Open data Every detail about the City Council is yours to access. Ayuntamiento de Madrid, 2018 | Privacy Policy | Terms and conditions of use | Accessibility Twitter Facebook Blog YouTube Instagram". (However, it was found 1 time including non-visible text.)
# ./spec/support/common_actions.rb:133:in `comment_on'
# ./spec/features/emails_spec.rb:147:in `block (3 levels) in <top (required)>'
```
How
- [ ] Explain why the test is flaky, i.e., under which conditions/scenarios it fails randomly
- [ ] Explain why your PR fixes it
- [ ] Create a backport PR to consul/consul once the fixing PR is approved
Tips for flaky hunting
Random values issues
If the problem comes from randomly generated values, running a single spec multiple times can help you reproduce the failure. At your command line:
```shell
for run in {1..10}
do
  bin/rspec ./spec/features/budgets/investments_spec.rb:256
done
```
You can also try running a single spec in Travis: add the `:focus` option to the spec and push your branch to GitHub, for example:

```ruby
scenario 'Show', :focus do
```

But remember to remove those `:focus` changes before submitting your PR!
Test order issues
Running specs in the order in which they failed may reveal that a previous test sets state in the test environment that makes our flaky test fail or pass. Tests should be independent of each other.
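As a minimal illustration (plain Ruby, not RSpec; the test names and shared variable are made up), a check can pass or fail depending on whether an earlier test already mutated shared state:

```ruby
# Hypothetical illustration of order dependence: "test_b" only passes
# when "test_a" has already run and mutated the shared state.
$registered_users = []

def test_a
  $registered_users << "manuela"
end

def test_b_passes?
  # Depends on test_a's side effect instead of setting up its own data.
  $registered_users.any?
end

puts test_b_passes?  # false when it runs first
test_a
puts test_b_passes?  # true only because test_a ran before it
```

In a randomized suite, the equivalent of `test_b` passes or fails depending on where the shuffle places it, which is exactly the flaky behavior described above.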
After executing rspec you can see the seed used; add it as an option to rspec, for example:

```shell
bin/rspec --seed 55638
```

(check the Randomized seed value at the beginning of this issue)
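The seed works because the random ordering is deterministic: the same seed reproduces the same order. A rough sketch of the idea in plain Ruby (this is not RSpec's internal code; the spec names are invented):

```ruby
# Shuffling with the same seed yields the same order every time,
# which is how a recorded seed lets you replay a failing test order.
specs = ["investments_spec", "emails_spec", "comments_spec", "budgets_spec"]

order1 = specs.shuffle(random: Random.new(55638))
order2 = specs.shuffle(random: Random.new(55638))

puts order1 == order2  # true: same seed, same order
```

A different seed will usually produce a different order, which is why a failure seen under one seed may not reproduce under another.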
Other things to watch for
Time-related issues (current time; comparing two times or dates with millisecond precision when it's not needed)
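For example, comparing `Time` objects directly includes sub-second precision, so two timestamps that look equal at second granularity can still differ (a sketch; the timestamps are made up):

```ruby
require "time"

t1 = Time.parse("2018-02-09 10:00:00.123")
t2 = Time.parse("2018-02-09 10:00:00.456")

puts t1 == t2            # false: sub-second parts differ
puts t1.to_i == t2.to_i  # true: equal at whole-second precision
```

A spec that records a time, then compares it to a value the database has truncated to seconds, will pass or fail depending on the milliseconds at the moment it ran.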
What
Tests that fail randomly are called "flakies"; this one seems to be one:
Randomized seed: 58037
Travis failed build: https://travis-ci.org/AyuntamientoMadrid/consul/jobs/339441876
Failure: see the trace at the top of this issue.