boris-petrov opened this issue 2 years ago
I wonder what this means exactly:
> Capybara waits upon failed predicates/assertions. Capybara's RSpec matchers, however, are smart enough to handle either form.
As you say, "it would sometimes pass" means that it would fail most of the time. But what is the purpose of a spec that fails most of the time?
Adding a positive expectation before a negative one doesn't mean that the negative would always be correct. E.g., the positive expectation could be against a statically loaded element, while the following negative one is against a dynamically loaded element. So this suggestion is not a panacea, and may instil false hope in the spec's correctness.
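For example, a sketch with hypothetical selectors:

```ruby
visit some_path
# Passes as soon as the statically rendered header is in the DOM:
expect(page).to have_css('#static-header')
# Still racy: '#async-widget' is added later via AJAX, so this can pass
# before the dynamic content has had a chance to appear:
expect(page).to have_no_css('#async-widget')
```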
Not to mention, the same behaviour applies to `click_link`, which may cause a partial page update, not just `visit`.
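For instance (a hypothetical page where the click triggers an AJAX update):

```ruby
# Clicking 'Save' submits via AJAX and may render an '.error' element:
click_link 'Save'
# Can pass before the partial page update has rendered anything:
expect(page).to have_no_css('.error')
```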
Wondering if using the cuprite/ferrum backend would solve the race condition issues.
The issue this cop aims to solve is that when a negative check is performed on dynamically loaded elements, the check can succeed even when it shouldn't: if the loading is slow, the element is not on the page yet, so the negative check returns success immediately. The motivation is to prevent such failures from going unnoticed in tests. If there is anything that I might have missed, please let me know.
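A minimal sketch of that failure mode, with a hypothetical selector:

```ruby
# Suppose '#admin-banner' is injected by JavaScript shortly after page load.
visit some_path
# The element is not there *yet*, so this passes immediately,
# even though the banner appears a moment later:
expect(page).to have_no_css('#admin-banner')
```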
the following two statements are not equivalent, and you should always use the latter!
```ruby
# Given use of a driver where the page is loaded when visit returns
# and that Capybara.predicates_wait is `true`,
# consider a page where the `a` tag is removed through AJAX after 1s:
visit(some_path)
!page.has_xpath?('a') # is false
page.has_no_xpath?('a') # is true
```
Capybara waits upon failed predicates/assertions, therefore `has_no_xpath?` does not return false immediately.
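If I read the docs right, both RSpec forms build on the waiting assertions, so for the page above:

```ruby
expect(page).to have_no_xpath('a')  # waits for the tag to be removed, then passes
expect(page).not_to have_xpath('a') # equivalent: the matcher's negation also waits
```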
Why is `has_no_xxx` unsafe then?
@pirj check my first example in my original post. The second line could execute before the first one has loaded anything. An empty page won't have a link, so that check would pass. That obviously is not correct, because google.com has links on it.
So we are talking about the following risk: a negative expectation that runs before the page has finished loading passes vacuously, and the spec ends up verifying nothing.
I agree with the suggestion. But a system built this way feels wacky.
As a devil's advocate, we can imagine some weird JS running on a timer that would add and remove elements from the page (making some async server requests along the way), and in that case there's no "complete" state of the page at which we can make assertions against it. So it seems we're in a situation where we have to consider it complete at some point anyway. This makes Cuprite etc. orthogonal to the problem in question.
The reason I wanted to implement this cop was to detect offenses in cases like the following.
In an application, the buttons to be displayed change depending on the user's permissions. In such cases, the following tests may be written. What do you think of these cases?
```ruby
context 'when current user has admin role' do
  let(:user) { create(:user, role: :admin) }

  it 'shows management button' do
    visit foo_page_url
    expect(page).to have_button 'Management'
  end
end

context 'when current user does not have admin role' do
  let(:user) { create(:user, role: :common) }

  it 'does not show management button' do
    visit foo_page_url
    expect(page).not_to have_button 'Management' # If the display is slow here, it could always succeed 💥
  end
end
```
Yep, exactly my point - the second case has a bad race condition that pretty much leads to your test not testing what you meant - it practically tests nothing (also, I think `to have_no_button` is better than `not_to have_button`).
Note the following code:
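Something along these lines (the link text is illustrative):

```ruby
visit 'https://google.com'
expect(page).to have_no_link('Gmail')
```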
Will pass... sometimes. It's a race condition, as the page may or may not have loaded when execution reaches the second line. It would be nice to have a cop that checks for these cases.
Not sure if it should also handle:
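A sketch with an illustrative container selector:

```ruby
visit 'https://google.com'

within '#footer' do
  expect(page).to have_no_link('Gmail')
end
```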
In some sense the page has already (at least partially) loaded because of the `within` check, but on the other hand perhaps always having a positive check before the negative, for sanity, is better?