---
source: devto
devToUrl: "https://dev.to/swyx/unit-and-integration-testing-for-plugin-authors-352d"
devToReactions: 15
devToReadingTime: 6
devToPublishedAt: "2020-03-11T23:54:58.601Z"
devToViewsCount: 194
title: Unit and Integration Testing for Plugin Authors
subtitle: With Netlify Build Example
published: true
description: Some thoughts on how to set up testing with plugins
category: tutorial
tags: Tech, JavaScript, Testing, Node.js
slug: testing-plugin-authors
displayed_publish_date: "2020-03-10"
---
I've just completed work on Netlify-Plugin-No-More-404 - a Netlify Build plugin to guarantee you preserve your own internal URL structure between builds. But I'm not here to plug my plugin or Netlify - I just think I had a small realization on plugin testing strategy which I would like to share with you.
Most projects want to be platforms, and most platforms want to have plugins to extend functionality and eventually create mutually beneficial business relationships. Gatsby has plugins, Next.js has plugins, Shopify has plugins, WordPress has plugins, everybody gets a plugin! If you're successful enough, even your plugins have plugins! Figma has written some great stuff about the engineering challenges behind plugins - not least of which are API design, permissions, and security - and I'd highly recommend their writing on this. I have a future blogpost that I hope to do on "how to do plugin systems right", because all plugin systems suck in some way.
The scope of this blogpost is much smaller than that - it's just about setting up testing as a plugin author. I think plugin authors should set up:

- unit tests for their business logic
- integration tests for their plugin interface code
## First, a talk on Boundaries

Gary Bernhardt's Boundaries talk is really influential to my thinking. As it says on the tin:

> This talk is about using simple values (as opposed to complex objects) not just for holding data, but also as the boundaries between components and subsystems.
A plugin is a component connecting to a subsystem. Once we think about it this way, it greatly clarifies both the code as well as how to test it. You don't need to watch the talk to understand the rest of this post, but I highly recommend it anyway.
## A mental model for plugin authoring
You can view the relationship of a plugin and its core as some overlapping boxes:
Seems simple enough. You can then break it down into business logic and plugin interface:
Note that by Business logic, I mean everything that the core has no knowledge of - something domain specific to what your plugin is trying to do.
By plugin interface, I mean everything imposed on you by the core system: all the settings, utilities, and lifecycles specified by them - and therefore you're writing glue code between your business logic and how the plugin API wants you to expose your work.
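To make the split concrete, here's a minimal sketch of the glue layer for a hypothetical version of my plugin. The `onPostBuild` event, `utils.build.failBuild`, and `constants.PUBLISH_DIR` are real parts of Netlify Build's plugin API, but the rest - `diffUrls` (sketched in the next section), the `urls.json` manifest, `listHtmlUrls` - is invented for illustration:

```js
// index.js - the plugin interface: thin glue between the core's API and our logic
const fs = require('fs')
const path = require('path')
const { diffUrls } = require('./businessLogic') // hypothetical pure core, sketched below

// hypothetical helper: every .html file under the publish dir, as a URL path
// (fs.readdirSync's `recursive` option needs Node 18.17+)
function listHtmlUrls(dir) {
  return fs
    .readdirSync(dir, { recursive: true })
    .filter((file) => file.endsWith('.html'))
    .map((file) => '/' + file.replace(/index\.html$/, ''))
}

module.exports = {
  // onPostBuild is one of the lifecycle events the core lets plugins hook into
  async onPostBuild({ inputs, utils, constants }) {
    // translate the core's vocabulary into simple values...
    const manifestPath = path.join(inputs.cacheDir, 'urls.json')
    const previousUrls = fs.existsSync(manifestPath)
      ? JSON.parse(fs.readFileSync(manifestPath, 'utf8'))
      : []
    const currentUrls = listHtmlUrls(constants.PUBLISH_DIR)
    // ...hand them to the business logic...
    const missing = diffUrls({ previousUrls, currentUrls })
    // ...and translate the plain result back into the core's vocabulary
    if (missing.length > 0) {
      utils.build.failBuild(`Missing URLs: ${missing.join(', ')}`)
    }
  },
}
```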
The core proposal of this blogpost is that you should first write your business logic via unit tests (fast tests with simple values, ideally with no I/O), and then test your plugin interface code by writing integration tests (slower tests, mocking APIs where needed, with I/O).
Most people will think of Martin Fowler's Test Pyramid or Kent C. Dodds' Testing Trophy. But those are generalized testing philosophies. I think for plugin systems, you can let the core system be responsible for end-to-end success, and you get the most bang for your buck with unit and integration tests.
If that sounds obvious, I can say that as a plugin author I didn't really think about it while diving in headfirst, and I paid the price in rewrites today.
## Testing the business logic

I think the key here is to design your business logic code as a single function or module with as small an API surface area as possible to get the job done. If your function takes 5 parameters but could take 3 and derive the other 2, then take 3. I'm a fan of argument objects, by the way.

Ideally, your business logic doesn't care what the core system's plugin API looks like - although if there are special requirements for idempotence or side effects, those concerns will leak down into how you write your business logic. But ultimately you want to stay as agnostic of the plugin API as possible. This serves two benefits:
1. it is easier to test, since you will be passing in simple values, and
2. it is easier to copy your logic over to other plugin systems - which you will be doing!
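For example, the pure core of a plugin like mine might boil down to a single function over arrays of strings. This is a hypothetical sketch, not the real plugin's code - `diffUrls` and its argument object are invented for illustration:

```js
// businessLogic.js - pure logic with no knowledge of any plugin API:
// simple values in (arrays of URL paths), a simple value out
function diffUrls({ previousUrls, currentUrls }) {
  const current = new Set(currentUrls)
  // every URL that existed last build but is missing from this one
  return previousUrls.filter((url) => !current.has(url))
}

module.exports = { diffUrls }
```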
Because unit tests are meant to be light and deterministic, you should create as many variations as needed to form a minimum spanning tree of what your users could realistically give your code.
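Because the boundary is simple values, those variations are cheap to enumerate. A sketch using Jest's `test.each` against the hypothetical `diffUrls` above:

```js
// businessLogic.test.js - fast, deterministic unit tests, no I/O
const { diffUrls } = require('./businessLogic')

// each case is one branch of the "minimum spanning tree" of realistic inputs
test.each([
  ['no change', ['/a/'], ['/a/'], []],
  ['new page added', ['/a/'], ['/a/', '/b/'], []],
  ['page removed', ['/a/', '/b/'], ['/a/'], ['/b/']],
  ['first build, no previous URLs', [], ['/a/'], []],
])('%s', (_name, previousUrls, currentUrls, expected) => {
  expect(diffUrls({ previousUrls, currentUrls })).toEqual(expected)
})
```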
## Testing the plugin interface
Now that you are happy with your business logic, you can write your integration with the plugin API with high confidence that any errors are due to some mistake with the API itself, not anything to do with the business logic.
I don't have a lot of wisdom here. You will be mocking the core APIs your system provides (if you're lucky, it offers well-documented local testing utilities; if not, it's also not a heavy lift to write your own as you learn what the APIs do), and you will have to set up and tear down any files on the filesystem for these effectful integration tests.
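Here's a sketch of what that can look like with Jest, reusing the hypothetical glue code from earlier. The mock only has to cover the slice of the core-provided `utils` that the plugin actually touches:

```js
// plugin.test.js - effectful integration test of the plugin interface
const fs = require('fs')
const os = require('os')
const path = require('path')
const plugin = require('./index')

let publishDir, cacheDir
beforeEach(() => {
  // set up throwaway directories standing in for a real build...
  publishDir = fs.mkdtempSync(path.join(os.tmpdir(), 'publish-'))
  cacheDir = fs.mkdtempSync(path.join(os.tmpdir(), 'cache-'))
  fs.writeFileSync(path.join(publishDir, 'index.html'), '<html></html>')
})
afterEach(() => {
  // ...and tear them down so tests stay independent
  fs.rmSync(publishDir, { recursive: true, force: true })
  fs.rmSync(cacheDir, { recursive: true, force: true })
})

test('fails the build when a previously published URL disappears', async () => {
  // seed a manifest claiming /gone/ existed on the last build
  fs.writeFileSync(path.join(cacheDir, 'urls.json'), JSON.stringify(['/gone/']))
  // hand-rolled mock of the one core utility this plugin calls
  const failBuild = jest.fn()
  await plugin.onPostBuild({
    inputs: { cacheDir },
    constants: { PUBLISH_DIR: publishDir },
    utils: { build: { failBuild } },
  })
  // assert only on the glue: did we relay the right result back to the core?
  expect(failBuild).toHaveBeenCalledWith(expect.stringContaining('/gone/'))
})
```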
I find myself writing fewer of these integration tests, since I already did the test-all-variations work at the unit test level. At the plugin interface level, I merely need to test that I'm relaying the right information to and from the business logic.
I also set these up as "fixtures" rather than solid tests - which to me means a test I can quickly futz around with manually to reproduce or investigate user-reported bugs.
## Secret Developer Flags
I also find myself adding two secret developer-experience-focused boolean flags to my business logic, both defaulting to `false`:

- `testMode`: Inside business logic, plugins should surface helpful warnings, logs, and errors to the user; however, this can be a little annoying when running tests, so your unit tests can pass `testMode: true` to silence those logs. Of course, this isn't perfect - you should also be testing for regressions against expected warnings and errors not showing up - but my project wasn't ready for that level of sophistication yet.
- `debugMode`: When the plugin is shipped and run live inside the production system, it will still have bugs due to APIs not behaving as you expected. A `debugMode` flag helps you log out diagnostic information that tells you, the plugin developer, how the real-life system differs from your locally tested code. Additionally, if a plugin user reports issues, you can easily tell them to turn on `debugMode` and send over the resulting logs to help you figure out what's going wrong.
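A sketch of how these flags can thread through the hypothetical `diffUrls` from earlier - the flag names match the ones above, but the logging details are invented:

```js
// businessLogic.js, revisited - same pure core, now with the two flags
function diffUrls({ previousUrls, currentUrls, testMode = false, debugMode = false }) {
  if (debugMode) {
    // diagnostics for the plugin developer (or a user filing a bug report)
    console.log('[debug] previous:', previousUrls, 'current:', currentUrls)
  }
  const current = new Set(currentUrls)
  const missing = previousUrls.filter((url) => !current.has(url))
  if (missing.length > 0 && !testMode) {
    // user-facing warning, silenced in unit tests via testMode: true
    console.warn(`warning: ${missing.length} URL(s) from the last build are gone`)
  }
  return missing
}

module.exports = { diffUrls }
```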
## Other Tips

I like using colocated READMEs in each folder to document what tests should do. The markdown format syntax-highlights nicely, and it shows up on GitHub. Just a personal preference.
Any other tips? Reply and I'll write them here with acknowledgement!
## Go Slow to Go Far
A final word on the value of testing for plugin developers.
When I first started doing plugins, I (of course) didn't write any tests - I think the cool kids say they "test in production" now. This is fine - until you start to rack up regressions, where fixing one thing breaks something else.
Additionally, most of the time this won't be your main job, so you will only infrequently visit this codebase and the context switch will be annoying to the point of discouraging further development.
What helps future you also helps other plugin developers, if you are working in a team or open source.
And when you eventually need to refactor - to swap out underlying engines, add new features, or redesign internals for scale - the extra effort required by the lack of tests may discourage the refactor and cap the useful life of your plugin.
I kind of visualize it like this in my head:
Tests hold the line, and that's a powerful thing for sustained progress over your code's (hopefully long) life.