Closed: faceyspacey closed this issue 7 years ago
Wallaby.js from day one was designed to run a single test if only that test was affected by your code changes. This also promotes the good practice of keeping your tests orthogonal/isolated from each other. There are hundreds of articles about why tests should be independent from each other and about the ugly consequences when they are not. There are even testing libs that deliberately shuffle unit tests randomly to enforce the practice.
Performance considerations aside, the issue with your second and third (and fourth) tests is that they are just parts of one test - these 2-4 tests are actually one test, just artificially divided into 3 `it` functions.
If I were you, I would ask myself this question: why are these multiple tests (`it`-s)?
Depending on the answer, I would consider two solutions.
The first solution is to merge them into a single test, like:
```js
describe('MyComponent', () => {
  it('...', () => {
    // a
    let T = snap(MyComponent, store)
    // b
    T.dispatch(action1)
    T.snap()
    // c
    T.dispatch(action1)
    T.dispatch(action2)
    T.snap()
    // d
    T.dispatch(action1)
    T.dispatch(action2)
    T.tree().props.onChange()
    T.snap()
  })
})
```
The comments do the same job of describing what's going on, if not a nicer one, and there's also less unnecessary code.
One may argue that the test is now not testing one single thing - well, it wasn't doing that when it was split into 4 `it`-s either (where part of each test was hidden in the previous tests). Or that breaking in the middle would stop the lengthy test from running - well, the same was true of the 4 original tests.
The second solution was mentioned by you: to set up a context for each test in `beforeEach`, with nested `describe` blocks where required. This way these will truly be self-sufficient tests, able to run/fail independently. So there are benefits, but the solution has a higher maintenance price (more code to write, even though you could extract common bits into separate functions).
```js
describe('MyComponent', () => {
  let T

  beforeEach(() => {
    T = snap(MyComponent, store)
  })

  it('a', () => {
    // T is truthy
  })

  it('b', () => {
    T.dispatch(action1)
    T.snap()
  })

  describe('when actions dispatched', () => {
    beforeEach(() => {
      // maybe also a function extracted from test 'b'
      T.dispatch(action1)
      T.dispatch(action2)
    })

    it('c', () => {
      T.snap()
    })

    it('d', () => {
      T.tree().props.onChange()
      T.snap()
    })
  })
})
```
> the emerging idiomatic jest+react testing workflow
I'm quite surprised to hear that something I have always thought of as an anti-pattern is becoming a widespread practice. I don't have huge react/jest testing experience outside of wallaby.js related stuff, so could you perhaps share some articles/docs/etc. where the practice is described/discussed/encouraged?
Having said all that, as tool developers (even though willing to enforce best practices), we can definitely consider adding the setting you're requesting. But I'd really like to hear back from you regarding the whole thing, as I sincerely think this way of writing code may hurt in the longer term: costly maintenance of long interdependent tests, migrating between testing frameworks (for example `ava` runs tests in parallel by default, so it would immediately break), etc.
@ArtemGovorov i like your point:
> Or that breaking in the middle would stop the lengthy test from running - well, it was the same with the 4 original tests.
I think you're actually right, this is a non-issue since both Jest and Wallaby will show you multiple expected failures within a single test.
The only thing bad about it is that it doesn't let you keep neat labels for all these expectations for browsing in Wallaby or seeing as a checklist in other test runners. I think that's enough to argue in favor of re-running the `describe` block.
Otherwise, here's one quick post from Jest's github issues that highlights using code in the describe block:
https://github.com/facebook/jest/issues/2235
> it feels very intuitive to add "group-specific logic" inside of the describe function
That obviously isn't arguing in favor of having code in one test that affects another, but it's similar. Basically I see no way around this if you want helpful labels for your expectations. Well, there is one: the test runner (Wallaby in this case) labels it with the line of code, e.g. `expect(foo).toEqual(bar)`. That would be more helpful with nicely named variables, but it's hardly as informative as: `it('COMPONENT_DID_MOUNT:SET_TIMEOUT:DISPATCHES: hide spinner after 500ms', () =>`
..or whatever descriptive info you want to provide. Basically it seems we both understand the problem--you mentioned comments as the solution. I think Wallaby itself stands to lose a lot in that case. I think you want as much information displayed in your IDE/test panel as possible. It gives it more value. And that value is obviously passed on to the developer. Fine-grained results across 300 tests are a lot more informative than the same coverage collapsed into the 50 tests it would take if everything were refactored into single tests as above. That's not to mention that comments suffer from worse maintainability--there's something about green titles next to checkmarks; for real, there is. They are a lot more likely to stay up to date just because of the weight bestowed upon them by the green color, the checkmark, and amazing test runners like Wallaby that put them on a pedestal. It's psychological but true (at least for me). In addition, coverage percentages are more precise. The wallaby experience is so great only because the amount of information can be large. If it was just 5 tests, I'm not sure Wallaby would be as useful. So that means: the more tests you have, the more valuable Wallaby gets.
So, as always, why not leave it up to the developer to decide how dependent their tests are on each other. At the end of the day, you always know the first broken test in the sequence is the one to fix, and oftentimes fixing it will fix all the following ones. I'm not too concerned about the reduced isolation of the tests if in return you get to think of your react component as truly alive and able to be tweaked and poked at. From my standpoint it's definitely worth the tradeoff. It's really what you want to do here with React.
It may not be the traditional way, but the past wasn't as reactive. Perhaps that's the clincher in favor of this argument. You're now dealing with a reactive component. The developer and his code should "live" alongside it. Not to get too poetic, but basically you should be able to take the component somewhere. It's the difference between looking at components as photos vs videos. The reality is they are moving. Just because we don't want to move along with them doesn't stop them from moving. The train has already left the station.
The fact that tests will break in groups is of no consequence. It's the same thing as if one test broke anyway. So what? Think about it as if "the group broke." Except now, you have more details on where to look, what to expect, and what the purpose of the tests were in the first place. In short, the same thing can be said about putting all the expectations in one test, except you don't get any of the positive aspects of the tradeoff.
Anyway, the components are alive reactive creatures. It's a very natural experience to poke and prod at the same component without having to re-create it. Just as I was saying yesterday how Wallaby is a natural fit as snapshot browser + snapshot approval IDE for Jest, keeping your reactive component instances alive across tests is also a natural fit and progression of the testing game when it comes to reactive components.
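To make the "live instance across tests" idea concrete, here is a minimal, dependency-free sketch of the concept. Everything below (the hand-rolled `createStore`, the `live` helper, the reducer, and the view) is invented for illustration; none of it is jest-redux-renderer's or wallaby's actual API.

```js
// A sketch of the "live component instance" idea: an instance that
// re-renders itself whenever the store state changes, so a sequence of
// tests can keep poking the same instance without re-creating it.

// Minimal redux-like store with subscribe/dispatch.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    subscribe: fn => listeners.push(fn),
    dispatch(action) {
      state = reducer(state, action);
      listeners.forEach(fn => fn());
    }
  };
}

// Invented reducer and "component" (a pure function of state).
const reducer = (state = { count: 0 }, action) =>
  action.type === 'INC' ? { count: state.count + 1 } : state;

const view = state => `count: ${state.count}`;

// Build a live instance whose output reactively updates on every dispatch.
function live(store, render) {
  const instance = { output: render(store.getState()) };
  store.subscribe(() => { instance.output = render(store.getState()); });
  return instance;
}

const store = createStore(reducer);
const T = live(store, view);
store.dispatch({ type: 'INC' });
// T.output is now 'count: 1' without re-creating the instance
```

The point of the sketch: `T` survives across dispatches, so separate sequential tests could keep asserting against the same evolving instance, which is the workflow being argued for here.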
From time to time the rules of any game evolve and change. Wallaby can be at the forefront of what React testing is evolving into. I mean, on a side note, I'm shocked that Wallaby's name isn't mentioned as much as React, Jest, Redux and Flow among this crowd. It's literally the missing piece. Not just the perfect match, THE MISSING PIECE. WHY? Because React (and especially Redux) does make you do a lot more work and boilerplate in a few places, but what you get for it is predictability. There is no better place to showcase and put that predictability on a pedestal THAN FRIGGIN WALLABY! No, but, seriously, there is no better place to prove the hypothesis that React is more predictable than through testing. If you're not testing your React project, you're missing the point of React. That's where it shines. That's where the hypothesis of explicitness it's built on is fully proven. Pass "GO," collect $200. The stars align. Game over. Case closed. The only issue is that without wallaby you're looking at a very scrollable terminal window full of snapshot comparisons. From a business standpoint, I would love to see you guys become the de facto leader of a market in which--from my research--you're the only game in town. It's a market more important than ever for the react community. This product is so needed. I think you came out a year and a half ago?? Well, wow, today your product should have headlines everywhere Jest is mentioned. Even everywhere React is mentioned. Your product makes more sense today than it did when it first came out. And by far. Facebook should straight-up buy Wallaby--at least try ;). Like they did with Redux, Inferno, etc. You know what I'm talking about. This is more valuable than Jest, Redux and Flow. The only part of the stack that beats it is React itself. Everybody should be developing like this. It puts you in full control of your codebase at all times, at least in combination with Jest's reactive rendering capabilities + snapshots.
..OK I'm out, just thought you might like to hear how good your cooking is and a few ideas for future recipes :)
One other thing: I think the most effective use of Jest + React + Redux is to think of your tests as integration tests between your components and redux (reducers + actions). Unit tests with Jest are business as usual. But as that article you linked was saying--and I only just started reading it:
> In my experience, unit tests are not an effective way to find bugs or detect regressions. Unit tests, by definition, examine each unit of your code separately. But when your application is run for real, all those units have to work together, and the whole is more complex and subtle than the sum of its independently-tested parts.
So if you're writing idiomatic React primarily relying on stateless pure function-based components, and using Redux for state changes, you have a very predictable environment for the complete client side of your application. Create fixtures for actions that make async requests, and through Jest's reactive component instances, you have the whole app at your fingertips. No need for webdrivers like Selenium. That's the predictable part of React--you really don't need to concern yourself with the underlying DOM anymore.
So what happens is you can easily do integration tests between your components and what Redux is doing. Of course write unit tests as well--and it's very nice because you're dealing with 90-100% in pure functions--but you don't end up in the situation described in the article where you don't have a tool to uncover regressions. You absolutely do. Any changes to actions or reducers will surface in changes to your components. And any changes to components--when you're doing integration testing like in my examples above--will surface when any actions are not called or are incorrectly called. The feedback loop is very nicely completed. It's not an integration test as far as the server is concerned, but it's the easiest pure client-side integration tests that we've ever been able to write.
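As a rough sketch of what such a pure client-side integration test between components and redux can look like, replaying actions through a reducer and asserting on the rendered view. The reducer, the action, and the `List` view below are invented examples, not from any real app or library.

```js
// Invented reducer: a list that starts loading and fills in when items arrive.
const itemsReducer = (state = { loading: true, items: [] }, action) => {
  switch (action.type) {
    case 'ITEMS_LOADED':
      return { loading: false, items: action.items };
    default:
      return state;
  }
};

// A stateless "component": a pure function from state to markup-ish output,
// which is exactly what makes snapshotting it cheap.
const List = state =>
  state.loading ? '<Spinner/>' : `<ul>${state.items.join(',')}</ul>`;

// The "integration test": replay dispatches, then check the view.
// No DOM, no webdriver involved.
const initial = itemsReducer(undefined, { type: '@@INIT' });
const finalState = [{ type: 'ITEMS_LOADED', items: ['a', 'b'] }]
  .reduce(itemsReducer, initial);

const output = List(finalState);
// output === '<ul>a,b</ul>'
```

In a real Jest test, the final line would simply be `expect(output).toMatchSnapshot()` instead of a hand-written comparison.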
Next: snapshots are the cherry on top. After all, your components are just pure functions. You can subtract a lot of development time by not needing to write `expect` calls. There's still a place to "snipe" results from time to time--and those times can be absolutely crucial--but you don't have to do it as much and therefore save a lot of time. So that's where the importance of a proper Jest snapshot browser and editor (i.e. to update snapshots) comes into play. You mentioned you don't have "huge react/jest testing experience"--so that's why I'm just mentioning these things. That's all I've done now for a year. So I just wanted to take a second to make clear exactly how the React testing game has evolved and how Wallaby has a perfect place in it.
..So you're not writing unit tests primarily if you're doing this all correctly. You're writing integration tests. You want to ensure the interplay between your React components and Redux does not break. AND, to do this correctly for complex apps, you can't think of your app and its tests as a snapshot of one state. YOU HAVE TO THINK OF IT AS A VIDEO RECORDING OF MULTIPLE STATE CHANGES (DISPATCHES AND EVENT HANDLERS THAT DISPATCH THEM). Very much like you would with end-to-end testing through something like selenium.
The result is you really have a best-of-all-worlds scenario where you don't have to leave a typical unit test runner environment and concern yourself with unpredictable and annoying tools like PhantomJS and Selenium that have to open up browser windows and the like. Yet you basically get end-to-end testing for your app, minus the server. So in this world where reactive component instances have a life of their own, it makes a whole lot of sense to keep them alive, just as you would keep the whole client-side application alive when using webdrivers. Wallaby stands to gain a lot by doing what it takes to become the de facto testing platform for the React world, and by optimizing for features just entering the development world, such as reactive component instances and (though this may not be known outside of the javascript world) snapshots. Align Wallaby with that and you guys are not just king of javascript/react testing, you're more than that. Have you heard of the "last mile" delivery strategy? It's the concept that the company/startup that owns this part can take the biggest percentage of the profits. They are the ones that connect with the customers at the end of the day, whereas the guys earlier in the supply chain are subject to more competition and as a result slim margins. Well, Wallaby is the last-mile provider, my friend. It's the last-mile provider of the React stack. It's not the IDE. Besides, IDEs have way more competition. Wallaby also does something that IDEs don't do: validate the core React assumption that explicit, functional, predictable, testable components (and reducers) are what apps should be built on. Wallaby puts the final checkmark on that assumption, proving it true. Anything wallaby can do, and any news you can generate for deep react support, will go a very long way to getting you more customers. After all, all the web app news sites are at least 75% about the React ecosystem. Maybe 85%, and like only 15% for Angular.
I'm surprised I haven't seen anything about Wallabyjs on echo.js, hacker news, /r/javascript, javascript weekly, etc, about Wallaby this past year. With the right deep react features, Wallaby can be just as important as Jest, and garner all the same people writing reviews of their experience with Wallaby just like they have with Redux all throughout 2016. I know I have an article in the works.
Thanks for joining the discussion and clarifying your position!
I understand your point about more granular reporting, though, as you've mentioned, in wallaby's case it's less relevant as it reports issues inline. But I can see why one would want to add more `it` functions for the sake of more granular reporting. There are some ways to do it in various testing frameworks, but I don't think `jest` supports it.
> In addition, coverage percentages are more precise.
How so?
> It may not be the traditional way, but the past wasn't as reactive.
I honestly don't think it's a matter of reactivity, or of any new/emerging patterns. In the past, present and future, there were, are and will be cases where an SUT object may seem desirable/beneficial to re-use across multiple tests.
> The fact that tests will break in groups is of no consequence. It's the same thing as if one test broke anyway. So what? Think about it as if "the group broke."
This would not be a bad thing if I were editing related app/component code, but having my other tests break when I'm editing a single test would be quite a surprise and is not helpful. As a newcomer to some codebase I'd need to learn the hidden dependencies between your tests to edit an existing test or add new ones, instead of just assuming isolation and using the existing, easy-to-understand `beforeEach` blocks.
> One other thing: I think the most effective use of Jest + React + Redux is to think of your tests as integration tests between your components and redux (reducers + actions)
Can't agree more. Who cares what the size of the unit is or how many app layers the tests exercise (as long as they don't cross tiers and become slow/hard to setup).
> why not leave it up to the developer to decide how dependent their tests are on each other
This totally makes sense. So I have added the opt-in setting to run all tests in the test file affected by code changes. With the setting your example should work. Here's the example:
```js
module.exports = function () {
  return {
    ...
    runAllTestsInAffectedTestFile: true
  };
};
```
> OK I'm out, just thought you might like to hear how good your cooking is and a few ideas for future recipes
Thank you, I really appreciate your kind words and encouraging feedback! Lots of valuable information to digest and think over, not just in the context of wallaby.js but in the context of the React ecosystem/testing in general. You should blog it somewhere, like writing a medium post :)
i will
...appreciate the addition. Wasn't expecting it anytime soon. That's awesome! ...does wallaby just update itself in my editor? Should I clone a repo and set it up from scratch? Wait for it in the next release?
...not to nitpick, but don't you think
```js
module.exports = function () {
  return {
    ...
    runAllTestsInAffectedTestGroup: true // i.e. describe block
  };
};
```
should be added too? i.e. for a more performant halfway option.
@faceyspacey The new version is published and wallaby updates automatically, but you may force the update.
Re: `runAllTestsInAffectedTestGroup` - it's much harder to implement, but I'll add it to our backlog as well.
@faceyspacey Single action/shortcut Jest snapshot updates are now available in the latest core. You may update specific snapshot(s) right from your editor with the Run line/file tests command/action. You may find more details in this blog post.
@ArtemGovorov utterly amazing!!! Put it in the web panel though, with a big beautiful button. That's where it shines the most. And rename it in all places to UPDATE SNAPSHOT (i.e. add a new command). How else are you supposed to know that snapshots will be updated without having read your blog post?
@faceyspacey Thanks for your feedback. As always very insightful. I'll address the comment you've left in the article here as well.
> And rename it in all places to UPDATE SNAPSHOT (i.e. add a new command). How else are you supposed to know that snapshots will be updated without having read your blog post?
We'll document it better; however, the name of the editor command/action has to stay what it was. The action was there way before the Jest integration was added and does what it says in other testing frameworks (as well as in Jest): it manually runs a specific test. The original action is well documented in all of the tutorials for all editors that we support. It's just that for Jest we have added one more behavior to it - updating the snapshot. We could have added a specific action just for Jest, but it'd show up in all editors for other testing frameworks too, and that is not desirable.
> The one thing that I think is absolutely crucial though is that the “Wallaby App” has an UPDATE SNAPSHOT button next to the failing snapshot log.
There's a plan to add the feature to the Wallaby App as well.
> I noticed in Webstorm you were able to right-click the red failing message and make a context menu appear — VS Code has you access the command palette. That makes me — a developer — unsure of exactly how the command is associated with the line.
In both cases it's the currently active line (the line with the cursor). As you can see in the GIFs (and in the docs), the line is always focused before triggering the action. Many other wallaby.js actions work the same way (as do many actions in other extensions/plugins). I have always thought it's a very keyboard-friendly way to select the line to be the context for some action; I'm not sure how we could further improve the UX (in the editor - the Wallaby App is a separate story).
> Another feature you might want to consider to take the web panel to the next level of “can’t live without” is: Flow and Typescript errors.
At the moment wallaby doesn't even run Flow, or any kind of static analysis that is not necessary to run tests. While it's technically possible to run various linters (ESLint/TSLint), Flow, or TypeScript type checking, it's outside of what wallaby does today.
Considering the above, it may be a good idea for a different project/app, with an option to run outside of the unit testing context.
> The one thing that I think is absolutely crucial though is that the “Wallaby App” has an UPDATE SNAPSHOT button next to the failing snapshot log.
As promised, done. You'll need the latest core to use the feature.
@ArtemGovorov sick! ..the snapshot button in the app was the clincher. For less keyboard-focused developers, that sorta stuff goes a long way. The Jest developers really should check this out and possibly highlight wallaby integration in their documentation. @cpojer check this out:
https://twitter.com/wallabyjs/status/827059959728795648
Notice the camera icon in the top right. Jest + Wallaby is really crushing it right now.
@ArtemGovorov
I finally got around to testing the new feature, and here's what I found when using the camera icon in the wallaby app:
with `runAllTestsInAffectedTestFile: true`, it does not work.

Re: the `runAllTestsInAffectedTestFile` setting - it will make it update all snapshots in the file, because with the setting it runs all tests in the file. This is how jest works: it updates snapshots for the tests that it runs with the `-u` flag.
Re: updating all snapshots, please share a sample repo where I could reproduce it, it's working for me as expected in all my tests.
@ArtemGovorov
Here's a test repo of a smaller open source project where I encountered the same issues:
https://github.com/celebvidy/pure-redux-router
you can just git clone that, run `yarn install`, open it in VS Code and start wallaby, then go to the `__tests__` folder and see that there are 2 test files, each with the same simple `toMatchSnapshot` tests. Change the object being snapshotted in both files and then press the camera icon in the app, and you will see that all snapshots are updated (rather than only one being updated). For now I'm not even referring to using `runAllTestsInAffectedTestFile`.
I think you are using the `u` flag globally as if watching ALL tests. You need to use the following jest arguments:

```
--testNamePattern, -t  Run only tests with a name that matches the
                       regex pattern.                        [string]
--testPathPattern      A regexp pattern string that is matched
                       against all tests paths before executing the
                       test.
```

to update the snapshot just for the single precisely matched test.
As for `runAllTestsInAffectedTestFile`, I get the following message in the wallaby console:

```
[Error] Test run failure: Cannot assign to read only property 'length' of string '__tests__/pure-utils.js'
```

I got the same message in both apps I've tested.
@faceyspacey Thanks for the repo! It was a bug in Wallaby App (that we couldn't reproduce because we were missing an important test case). It's fixed now in the latest Wallaby App (you may just refresh the page).
Note that with the `runAllTestsInAffectedTestFile` setting, all snapshots in the test file will be updated when using the camera icon to update a snapshot in that file. It happens because in this case jest runs all tests of the file (with `-u`) as instructed; because the tests depend on each other, it's not possible to run one test without running the others.
@ArtemGovorov confirmed: only individual snapshots are updated, and `runAllTestsInAffectedTestFile` works as expected, updating all snapshots in the file without the prior issue.
...I've been trying to think of more jest/snapshot-oriented features to take this even farther, and for now all I have is what I previously mentioned about showing snapshots even if the test isn't failing, i.e. instead of an empty box saying NO ERRORS OR LOGS or just console logs. My perspective is that the empty box is a missed opportunity. Ideally it would show the test file (i.e. what you see in a modal when you click the eyeball) AND any snapshots (of course all displaying in green since nothing has changed). There would be 3 tabs or buttons in the top right that let you choose between showing the test code, logs or snapshots if available (perhaps no buttons if no snapshots and no logs are available, since there is nothing to toggle between). It would be sticky, so if you have snapshots selected, it would stick to that as you switch between tests.

Next, if the test is failing, it could be as it is now (at least in an initial iteration to keep the scope to a minimum), but perhaps now you have 4 tabs: ERRORS | LOGS | TEST FILE | SNAPSHOTS. Or perhaps it's just ERRORS | TEST FILE | SNAPSHOTS, with logs mixed with errors. It seems that having 4 is best, and just hiding the tabs that are irrelevant for the test, depending on whether it's failing or not, whether it has snapshots or not, and whether it has logs or not.
Final thing: for each log entry, it might be nice to see what was passed to `console.log` in the app so you know what it's referring to, i.e. display the actual code, `console.log(foo, bar)`, above or below the logged output.
ps. I LOVE HOW THE APP CHANGES THE FOCUSED TEST AS YOU USE YOUR EDITOR. AND I DIDN'T EVEN MEAN TO MAKE THIS ANOTHER FEATURE REQUEST, BUT IF IT CHANGED THE FOCUSED TEST JUST BY PUTTING THE CURSOR WITHIN A GIVEN TEST IN YOUR EDITOR, IT WOULD BE EVEN BETTER.
@faceyspacey Thanks a lot for the feedback! I really like the ideas and will add them to our tasks.
thanks! ..one other thing: I've added `console.log` everywhere in my tests with the intention of keeping them permanently, because it's just great to be able to look at logs in wallaby. However, it clutters the output of jest (and I presume any test runner) when running from the command line--so what if you had a custom function you could use (`w.log`) which only outputs to the wallaby app? No need to respond. ttyl
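For illustration, a wallaby-only logger along those lines might look like the sketch below. Detecting wallaby via a `global.__WALLABY__` flag is purely an assumption made up for the example; wallaby does not document such a global.

```js
// Hypothetical `w.log` helper: forward to console.log only when a wallaby
// flag is set, so plain command-line jest output stays clean.
const w = {
  log(...args) {
    if (global.__WALLABY__) {
      console.log(...args);
    }
  }
};

// In a test you could sprinkle w.log(...) freely: it stays silent on the
// CLI and (under the assumed flag) behaves exactly like console.log.
w.log('this prints only when global.__WALLABY__ is truthy');
```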
> Final thing: for each logging, it might be nice to see what was passed to console.log in the app so you know what it's referring to, i.e. display the actual code: console.log(foo, bar) above or below the logged output.
Just landed in the latest core: https://twitter.com/wallabyjs/status/829923244308205568, and of course in the Wallaby App.
@ArtemGovorov very nice!
@ArtemGovorov another small similar text improvement to the app:
in these columns: https://www.dropbox.com/s/m72lv0mhiyf6x7s/Screenshot%202017-02-10%2013.14.40.png?dl=0
display the number of tests passing in green and the number of tests failing (if any) in red.
And then, in a second phase, make the milliseconds and the 2 new columns sortable.
@kamesh2111 Your issue doesn't look related to Wallaby.
Issue description or question
Using Jest, variables in an outer scope, whose values are set in another test become undefined after the test you're editing re-runs.
For example:
In this example `T` is set in a previous, typically synchronously/sequentially executed test. In the regular Jest test runner, you can depend on variables set in previous tests. In fact, making use of the closure created by the `describe` block is recommended and idiomatic for such things in the Jest community. But with Wallaby you can't if you intend on editing a test, which ultimately leads to 2 different unpredictable behaviors with wallaby: one when it first runs all your tests, and another when you actually edit a test, both delivering different results. Re-running the entire `describe` block would simply (or perhaps not so simply) solve this.

So in Wallaby, to make the above call to `console.log` work, I have to edit the outer `describe` scope as a workaround. Also note: editing test 1 also won't give `T` a value again, and definitely editing test 2 won't.

I'm assuming this probably is a very hard bug/issue, and likely not possible given the async nature of Wallaby. To give importance/weight to this issue--and to counter the only solution being to change my code--basically it's highly useful when snapshotting "alive" react components to be able to snapshot a react component and build a "live" instance, which in this case is contained within `T`, and then to be able to assert against state changes in SEPARATE SEQUENTIAL TESTS.

`beforeEach` as a solution is also sub-par, because the idea is that you have components that change states in a sequence, so you would have to re-do the previous steps/actions/state changes in subsequent tests anyway. That sub-par workflow looks like this basically:

When really the last test, for example, should only be this:
So you are left repeating steps you did in previous tests to achieve a state that, in this example, would even have `onChange` attached to a component instance (i.e. `onChange` only appears conditionally for the given state). And perhaps the previous dispatches only create that correct state if dispatched in order.

So it's highly useful to be able to think of one group of tests as a sequential set of events. I've built a library called `jest-redux-renderer` that nicely keeps your components alive and reactive with redux (Jest out of the box only keeps components reactive if you call `setState`, which isn't even idiomatic anymore). I think the concept of "live" components that continue to reactively update is the secret sauce that makes Jest so great for React.

Now, if you're able to reactively update in response to redux state, you're going to be doing a lot of tests against a store or component created in a previous test. You benefit HUGELY from being able to manipulate that component idiomatically by dispatching actions and calling attached event handlers. But there is no "manipulating", i.e. "updating", in Wallaby while editing tests--everything can only be created, or you can use `beforeEach`, but that only gets you 10% of the way. So that's the case for this perhaps not previously typical way of testing. In short, it makes a lot of sense for a component instance to stay alive and reactive across multiple sequential tests.

Basically the only solution I can see as an option (and perhaps not enabled by default) is to re-run all tests in a `describe` group, including any code in the outer `describe` block, if any single test changes, not just the individually changed tests. That would 100% solve it. I know performance is one of Wallaby's key selling points, so perhaps optionally being able to toggle this on for scenarios where it's important to you and you're editing tests makes this palatable from a performance perspective. But I see it as a must for the emerging idiomatic jest+react testing workflow, where there is much to gain if your component instances live across tests.

Wallaby.js configuration file
Code editor or IDE name and version
Visual Studio Code v1.8
OS name and version
OSX