roxiness / routify-starter

https://example.routify.dev/

Tests examples #10

Closed rigu closed 3 years ago

rigu commented 4 years ago

I would like to have some tests for the examples

jakobrosenberg commented 4 years ago

@rixo, what do you think?

rixo commented 4 years ago

@rigu You really mean "examples of tests", right? (i.e. not testing the example itself)

Yeah, that would be great, but I think it's largely outside our current focus, and will remain so in the near future. I'm very reluctant to offer half-baked examples on such a sensitive matter.

Also, I don't think we should ever add this to the default starter template. The problem is that there are several competing tools and no clearly best one; the choice is really a matter of taste, or of your specific situation/project. I don't want people to have to painfully remove the config files, test directories and dependencies of whichever tool we had chosen for the starter before they can use the one that best fits their needs. To me, that would go against Routify's goal of being modular and playing nicely with others.

Besides, testing frameworks (Cypress, Jest...) tend to be pretty heavyweight dependencies, often pulling in Puppeteer or Electron with them, and/or quite opinionated (Mocha, Ava...). They also tend to be invasive in terms of files added to the project (config files, special directories, etc.). And furthermore, for a complete testing solution (unit, integration, e2e), we'd probably need to bundle 2 or 3 such tools.

In my opinion, what we should do, eventually, is have different branches of the starter template, each embedding a different tool, that people can use as starting points. Or completely independent examples of how to integrate this or that tool into a Routify project. But, like I said, I don't think we're there yet.

There are currently no established "idiomatic" patterns for testing Svelte components or applications. Some work is being done by other people in other projects with different techniques and tools (testing-library, Jest, Cypress...) but most of it is still relatively exploratory and in progress, as far as I can tell.

For our part, I think we (Routify) don't have enough expertise on the subject to confidently offer best-practice examples. Short of that, I think it's better to have nothing at all for the moment, rather than misguiding people into thinking that we're recommending a specific solution over the others.

We're going to gain some knowledge on the subject as we increase test coverage of Routify itself and experiment with testing in our own Routify projects and, hopefully, at some point we'll be able to recommend solutions tried and tested by ourselves. Meanwhile, I think it's better to leave the hole open and let people refer to other projects for which testing is the primary focus to see how this hole can be filled. Our own focus is routing, and we're not done with it (and our resources are limited, yada yada)...

rigu commented 4 years ago

@rixo, I fully agree with you, many thanks for your answer. To explain my question: I was asking from the point of view of Routify usage. As for "tests for the examples", I think it is useful to have some 'self-protecting' tests for the Routify implementation. One scenario (and the only one for now :)): before going to production, check that the tree of all generated routes corresponds to what is expected. This is simply based on the fact that every development team makes some human errors, so there can be unused routes, or some routes may be missing. If not, could you provide a small example, or an idea :), of how to test this case?

rixo commented 4 years ago

And as "tests for the examples", I think it is useful to have some 'self-protecting' tests for routify implementation.

As I understand it, we're speaking about tests for Routify itself here, not for the example(s). If so, I agree. That's what I'm currently working on.

My approach is some snapshot testing over the whole build chain (i.e. given such and such file layout, I expect such and such generated file). The goal is to protect against unexpected changes in the structure of the generated files.
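To make that a bit more concrete, here is a minimal sketch of what such a build-chain snapshot test could look like, assuming Jest. The fixture path, the CLI invocation and the location of the generated file are all assumptions; substitute whatever command and output path apply in your setup.

```typescript
// build-chain.test.ts -- a minimal sketch, assuming Jest.
// The fixture path, the CLI command and the generated-file path below are
// assumptions: point them at a small fixture project and at whatever command
// produces the routes file in your setup.
import { execSync } from 'child_process';
import { readFileSync } from 'fs';
import { join } from 'path';

const fixture = join(__dirname, 'fixtures', 'basic'); // hypothetical fixture project

test('generated routes file is stable for the "basic" fixture', () => {
  // Run a one-off build of the fixture (command is an assumption).
  execSync('npx routify -b', { cwd: fixture, stdio: 'inherit' });

  // Snapshot the generated file: the first run records it, later runs fail
  // on any change until the snapshot is reviewed and updated.
  const generated = readFileSync(join(fixture, 'src', '.routify', 'routes.js'), 'utf8');
  expect(generated).toMatchSnapshot();
});
```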

Given your next question, I guess this might be of interest to you. I'll ping you when something concrete and working lands in the repo. Might take quite a few days, though... Or even more :-x

One scenario (and the only one for now :)): before going to production, check that the tree of all generated routes corresponds to what is expected.

Yeah, so the tricky part with this is: how do you feed what is "expected" to your test?

You can manually maintain an expected "tree" or something but, to me, that seems prohibitively costly, since your routes will constantly evolve. You'd essentially have to replicate the work of Routify by hand in your test. This makes me pause. In practice, your tests would probably get abandoned in no time.

Snapshot testing is a middle ground that can fit this kind of situation well. You do your thing, Routify does its thing, and then you take the "actual" result as the "expected" one, which you save for later. Then, each time you change your routes (or maybe upgrade Routify), you regenerate the snapshot. Thanks to git's diff, you'll be able to eyeball the difference pretty easily for validation. If the changes seem reasonable, you commit the snapshot. Rinse and repeat.
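As an illustration of that workflow (not an official recipe), a Jest snapshot test over your project's generated route tree could look roughly like this. The import path, the export name and the node field names (`path`, `children`) are assumptions; adapt them to whatever the generated routes file actually exports.

```typescript
// route-tree.test.ts -- a minimal sketch of the snapshot approach, assuming Jest.
// The import path and the field names below are assumptions; adjust them to
// match what Routify actually generates in your project.
import { routes } from '../src/.routify/routes'; // hypothetical path / export name

// Reduce each node to the fields we care about, so the snapshot only pins
// down the tree structure, not every implementation detail.
function toTree(node: any): any {
  return {
    path: node.path,
    children: (node.children ?? []).map(toTree),
  };
}

test('route tree matches the committed snapshot', () => {
  // The first run writes the snapshot; later runs fail on any change to the tree.
  // After reviewing the diff, regenerate with `jest -u` and commit the result.
  expect(toTree(routes)).toMatchSnapshot();
});
```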

You can't abuse this technique, because each snapshot test adds a (really) non-trivial amount of ongoing maintenance burden: you need to regenerate and revalidate the snapshots again and again (and again). But they do catch all unexpected changes to the test subject. So a few well-targeted tests like this can offer a decent amount of protection against surprises over large surfaces of your app, at a reasonable cost... In the end, it's all a question of balance / trade-offs. (And of picking the right targets.)

ghostdevv commented 3 years ago

am I able to close this? @rixo @rigu