canonical-web-and-design / practices

Guides and principles from the web team at Canonical and Ubuntu
https://canonical-web-and-design.github.io/practices/

Tests optional? #122

Open squidsoup opened 5 years ago

squidsoup commented 5 years ago

In the python standards doc that just landed, we state:

Although tests are lacking on many of our existing projects, writing tests is strongly encouraged. Please write tests for any new code if at all possible, and any efforts to improve test coverage on any project would be greatly appreciated.

This is a bit at odds with Canonical engineering culture. We should never land code without tests where feasible. If there are technical reasons making it difficult to write tests, we should discuss those and find a solution. Maybe this would be a good discussion for the next dev catchup?

nottrobin commented 5 years ago

Thanks for raising this.

The reason for my response on the Python Practices PR is that I saw the role of that PR as defining current practice rather than forming new practice. To build up this set of practices quickly, I'd like to separate the two where possible - first quickly write documents that define the way we work, and then discuss changes to those documents to evolve the way we work.

Yes I think we should definitely discuss it in dev catch-up. I'll also provide a bit of background here:

Up until now, we haven't enforced testing in any sense. Now may be the time to change that. I see two broad reasons why we haven't been more focused on testing:

1. A very small team

We have a very small number of Python devs managing a very large number of projects. We're currently at an all-time high of 4 devs (if you count me), but we've often had as few as 2, managing over 20 production websites, many more supporting codebases, and the CI, deployment systems and development tooling for all those projects.

2. Not an "engineering team"

Until quite recently, it would have been hard to characterise us as an "engineering" team. We were one small part of the Design team, creating a large number of simple websites with usually fairly static content, to support Design and Marketing goals.

Although our websites have always been public and viewed by millions, unlike other "engineering" teams we were never delivering services or software. We weren't really handling customer data, providing service contracts that clients rely on, or distributing tools to be run by others (apart, of course, from making our stuff open source).

We also never had the benefit of a fixed, predictable release cycle, or of being able to release new features only when they were ready. We'd very often have tight deadlines imposed from the very top (this is still true - there's a key example from just yesterday which I can elaborate on in Dev Catch-up).

We needed to move fast, and I would say that strategy wasn't entirely inappropriate - it worked pretty well. We have successfully got sites out the door with a very quick turnaround, those sites look good and generally perform well, and we certainly haven't experienced an outsize number of outages or security breaches.

Of course, we've always had some tests (e.g. documentation-builder, yaml-responses, assets.ubuntu.com), and I've tried to write them where I've had time, but up to now I've never strictly enforced it.

The changing team

I say "until quite recently" because the type of software our team produces is definitely changing. The most clear example of that is your work @squidsoup on MAAS and related products. This is work that will be distributed as software, and security and reliability are therefore paramount. However, this particular work doesn't involve Python, and so isn't impacted by the Python test policy. I don't know what testing policy you're following for your work, but I'm guessing you're treating tests as mandatory, which is great. Maybe this could also be formalised in practices, in tandem with evolving the Python policy.

The other significant example would be https://snapcraft.io. This is actually neither a public service nor distributed software, but it is significantly more complex than our typical project, it forms a more significant part of the supporting infrastructure for the Linux ecosystem, and it handles very sensitive user tokens and data. This project really does need a higher level of focus on reliability and security. It has fairly high test coverage (thanks to @tbille) for this reason, but currently no official policy to enforce it.

I'm not aware of other existing projects with the significance of these two, but I do believe the number of such sensitive projects that fall to our team to implement will only increase. So we do need to shore up our processes to handle them.

Options going forward

I welcome the call to revisit this policy.

I'm very open to revising, or creating, a policy around testing. I do have specific opinions and concerns about how we test - I'm aware that testing policies on other teams have occasionally turned out to be unnecessarily obstructive - so I believe it's important to have good practices defining what and how to test, e.g. the importance of not testing internals (as some team members will remember, although that particular point could probably have been better articulated). But that doesn't need to get in the way of evolving a policy around requiring tests to be written; it only means we should evolve testing guidelines at the same time, which I intend to do.
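To make the "not testing internals" point concrete, here's a minimal, hypothetical sketch (the slugify helper below is invented for the example, not taken from any of our projects). The test asserts only on inputs and outputs, so it keeps passing when the implementation is refactored; a test that spied on a private helper would break on every refactor even when behaviour stayed correct.

```js
// slugify.test.js - hypothetical Jest example of testing behaviour, not internals.
// The implementation is inlined only to keep the sketch self-contained.
function slugify(title) {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse punctuation/whitespace into "-"
    .replace(/^-+|-+$/g, ""); // strip leading/trailing dashes
}

test("slugify produces URL-safe, lowercase slugs", () => {
  expect(slugify("Hello World!")).toBe("hello-world");
  expect(slugify("  Ubuntu 18.04 LTS ")).toBe("ubuntu-18-04-lts");
});
```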

As I see it, we could tighten up our policy around Python testing in one of two ways:

  1. Strongly recommend testing on most projects, as a default, and enforce testing of any new work on specific projects (e.g. snapcraft.io). This would not be a huge departure from current practice, but it would probably lead to a slight increase in testing on snapcraft.io, and hopefully encourage significantly more voluntary testing on other projects, without significantly impacting delivery when deadlines are tight. Or:
  2. Use this as an opportunity to switch straight to enforcing testing on all new work. This will clash with our pressure to "move fast" fairly often, but we might want to do it anyway and let those pain points happen and work themselves out. However, this switch in policy would need buy-in from @pmahnke, as it would impact stakeholders.

So let's discuss these options further in Dev Catch-up.

squidsoup commented 5 years ago

Thanks very much for the detailed response @nottrobin, good to have some context. I think you're absolutely right that there's a delineation between the traditional, mostly static content delivered by the team and writing applications, which as you point out is a fairly new endeavour - I think we really only need to consider the latter here.

We haven't in fact had a formal discussion about our testing policy for JS, but the unspoken assumption has been that tests are mandatory for crbs-ui. I'll try to write something up this week.

Looking forward to chatting about this in more depth during the catch-up!

bartaz commented 5 years ago

I believe the story for JS tests is fairly similar.

For quite a long time the webteam didn't build any JS-heavy applications. The use of JS was fairly minimal, supporting the more interactive parts of the websites (dropdowns, accordions, etc.).

I don't know what the test policies for JS are in projects like MAAS or Juju, where the webteam is currently involved, but I suppose they may follow the engineering practices of other teams.

It's for sure a good time to start discussing these policies. Even though snapcraft.io started as a fairly static, API-driven website with only 'visual' bits of JS, the publisher side of it is starting to look a lot more like a JS 'single-page' application (rendered by React, managing state, etc). And while Jest has been added to the project, with some small unit tests already running for parts of the functionality, we indeed don't have any policy to enforce testing of new JS functionality.

I personally find writing good tests for JS quite challenging. JS very often 'lives' close to the DOM and browser UI, and some functionality is simply hard to 'unit' test (although React and Redux make some of this easier). I think we not only lack the 'tradition' of writing tests; as a team we also need to learn how to write JS that is testable, not just a bunch of event handlers added on document.querySelector('something').
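As a rough sketch of what "testable JS" could look like (hypothetical file and function names, not from snapcraft.io): keep the logic in a pure function and keep the document.querySelector wiring thin and separate, so the logic can be unit tested with Jest without a browser.

```js
// counter.test.js - a hypothetical sketch; in a real project the pure logic
// and the DOM wiring would live in their own modules.

// Pure logic: no DOM access, trivial to unit test.
function nextCount(current, step = 1) {
  return current + step;
}

// Thin DOM layer: the only part that needs a browser (or jsdom) to exercise.
function initCounter(button, output) {
  button.addEventListener("click", () => {
    output.textContent = String(nextCount(Number(output.textContent)));
  });
}

test("nextCount increments by the given step", () => {
  expect(nextCount(0)).toBe(1);
  expect(nextCount(5, 3)).toBe(8);
});
```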

squidsoup commented 5 years ago

Thanks Bartek, that's good to know. I can certainly spend some time writing up the sort of JS tests we're writing in crbs-ui (rbac), which are primarily unit tests for reducers and actions, and behavioural tests for components. We also have tests for our async code in sagas. We don't have any snapshot tests, so there are certainly some things to explore in the future.
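For reference, here's a minimal sketch of the shape of those reducer/action unit tests (the machines reducer and action names are invented for illustration, not copied from crbs-ui):

```js
// machines.test.js - hypothetical reducer and actions, tested in the style described above.
const ADD_MACHINE = "ADD_MACHINE";

const addMachine = (name) => ({ type: ADD_MACHINE, payload: { name } });

const initialState = { items: [] };

function machines(state = initialState, action) {
  switch (action.type) {
    case ADD_MACHINE:
      return { ...state, items: [...state.items, action.payload.name] };
    default:
      return state;
  }
}

test("returns the initial state for unknown actions", () => {
  expect(machines(undefined, { type: "NOOP" })).toEqual({ items: [] });
});

test("ADD_MACHINE appends a machine without mutating the previous state", () => {
  const before = { items: ["node-1"] };
  const after = machines(before, addMachine("node-2"));
  expect(after.items).toEqual(["node-1", "node-2"]);
  expect(before.items).toEqual(["node-1"]); // the old state is untouched
});
```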

bartaz commented 5 years ago

@squidsoup We did a lot of action/reducer testing, with some behavioural testing of components, in BSI (https://github.com/canonical-websites/build.snapcraft.io/tree/master/test/unit/src/common). But that's also a 'child' of a much stricter test policy back in the Online Services days (and of the fact that Redux and React are quite test-friendly when used properly).