
How To Optimize for Change #282


source: devto
devToUrl: "https://dev.to/swyx/how-to-optimize-for-change-a2n"
devToReactions: 42
devToReadingTime: 9
devToPublishedAt: "2021-05-20T00:16:31.290Z"
devToViewsCount: 2230
title: How To Optimize for Change
published: true
description: Lessons from React, GraphQL, and Rich Hickey on how to design software that doesn't implode the first time requirements change.
tags: Learnings, APIDesign, Tech
slug: optimize-for-change
canonical_url: https://www.freecodecamp.org/news/how-to-optimize-for-change-software-development/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n1078j8opalnrzuu8p4u.png

This post was originally published on FreeCodeCamp.

Imagine you worked at Magic Money Corp, which runs on just three lines of JavaScript:

```js
let input = { step1: 'collect underpants' }
doStuff(input)
profit(input) // $$$!!!
```

Now imagine something's wrong with the left phalange and we need to take doStuff down for maintenance. What happens if you temporarily comment out the second line?

Oh no! profit() is erroring all over the place. You've broken the magic money machine! To fix this, you would now have to read through the entire source code of doStuff to understand what it does, then reproduce whatever critical side effects profit depends on.
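The post never shows what doStuff actually does, but a hidden mutation is one plausible way profit ends up coupled to it. A minimal sketch (all internals here are invented for illustration):

```javascript
// Hypothetical internals for Magic Money Corp -- the post never shows
// doStuff, so this is an illustrative guess at how the coupling happens.
let input = { step1: 'collect underpants' }

function doStuff(input) {
  input.step2 = 'do stuff' // hidden side effect: mutates the shared object
}

function profit(input) {
  // silently depends on the mutation doStuff performed
  if (!input.step2) throw new Error('step2 missing -- no profit!')
  return '$$$'
}

doStuff(input)
profit(input) // works fine...

// ...but comment out doStuff and profit() blows up:
let freshInput = { step1: 'collect underpants' }
// doStuff(freshInput)
try {
  profit(freshInput)
} catch (e) {
  console.log(e.message) // 'step2 missing -- no profit!'
}
```

Nothing in the call site hints that profit needs doStuff to run first — the coupling lives entirely inside the mutation.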

Maybe it's better to just leave doStuff there... we don't need a functioning phalange right?

When we are afraid of making changes to our code, it starts to ossify and bloat.

Now let's imagine if we had built Magic Money Corp on immutable data structures instead (or used a functional language):

```js
let input = ImmutableMap({ step1: 'collect underpants' })
doStuff(input)
profit(input) // $$$!!!
```

It looks the same, but now I can remove doStuff, and have no fear of breaking Magic Money Corp!
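ImmutableMap above stands in for a library like Immutable.js; plain Object.freeze gives the same guarantee for a sketch. With immutable inputs, doStuff can only communicate through explicit return values, so deleting it can't silently starve profit:

```javascript
'use strict' // in strict mode, writes to frozen objects throw loudly

const input = Object.freeze({ step1: 'collect underpants' })

function doStuff(input) {
  // cannot mutate a frozen object; any change must be an explicit new value
  return { ...input, step2: 'do stuff' }
}

function profit(input) {
  // reads only what it was explicitly handed
  return input.step1 ? '$$$' : 0
}

// doStuff can be removed without breaking profit:
profit(input) // => '$$$'
```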

I've been obsessed with Dan Abramov's concept of Optimizing for Change since he wrote about it two years ago. It clearly articulates a core design principle of React (the rest are here and here). For me, it is one of the 7 lessons that will outlive React that I now try to apply everywhere else.

The main question it doesn't answer: how exactly do you optimize for change?

TL;DR

Why Optimize for Change

First, an obligatory explanation of this idea:

The inspiration for this came from "Easy-to-replace systems tend to get replaced with hard-to-replace systems" (Malte Ubl) and "Write code that is easy to delete, not easy to extend" (tef). Economics fans will recognize this as an application of Gresham's Law. The idea is the same: an entropy-like process in which what increases is inflexibility rather than disorder.

It's not that we don't know when our systems are hard-to-replace. It is that the most expedient response is usually to slap on a workaround and keep going. After one too many bandaids, our codebase mummifies. This is the consequence of not allowing room for change in our original designs, a related (but distinct) idea to "technical debt" (which has its own problems).

The reason we must allow for changes is that requirements volatility is a core problem of software engineering. We devs often fantasize that our lives would be a lot easier if product specs were, well, fully specified upfront. But that's the spherical frictionless cow of programming. In reality, the only constant is change. We should carefully design our abstractions and APIs acknowledging this fact.


Plan for Common Changes

Once you're bought into the need to optimize for change, it is easy to go overboard and be overcome by analysis paralysis. How do you design for anything when EVERYTHING could change?!

We could overdo it by, for example, putting abstract facades on every interface or turning every function asynchronous. It’s clear that doubling the size of your codebase in exchange for no difference in feature set is not desirable either.

A reasonable way to draw the line is to design for small, common tweaks, and not worry about big, infrequent migrations. Hillel Wayne calls these requirement perturbations — small, typical feature requests should not throw your whole design out of whack.

For the probabilistically inclined, the best we can do is make sure our design adapts well to 1-3 "standard deviation" changes. Bigger changes than that are rare (by definition), and justify a more invasive rewrite when they happen.

This way, we also avoid optimizing for change that may never come, which can be a significant source of software bloat and complexity.

Common changes can be accumulated with experience - the humorous example of this is Zawinski's Law, but there are many far less extreme changes that are entirely routine and can be anticipated, whether by Preemptive Pluralization or Business Strategy.
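Preemptive Pluralization is easy to show concretely: if the spec says a document has one owner, betting that "one" becomes "many" is exactly the routine change worth anticipating. A sketch (the doc/owner shape is hypothetical):

```javascript
// Preemptive Pluralization: store a list from day one, even while
// there is only one owner, because "one becomes many" is a routine change.
const doc = {
  title: 'Q3 plan',
  owners: ['alice'], // an array, not a single `owner` field
}

function addOwner(doc, user) {
  // Adding a second owner is now an ordinary update, not a schema migration.
  return { ...doc, owners: [...doc.owners, user] }
}

const shared = addOwner(doc, 'bob')
console.log(shared.owners) // ['alice', 'bob']
```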

Use Simple Values

Once we have constrained the scope of our ambitions, I like to dive straight into thinking about API design. The end goal is clear. In order to make code easy to change:

Rich Hickey is well known for preaching the Value of Values and Simplicity. It is worth deeply understanding the implications of this approach for API design. Where you might pass class instances or objects with dynamic references, you could instead pass simple, immutable values. This eliminates a whole class of potential bugs (and unlocks logging, serialization and other goodies).
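As a sketch of what simple values buy you: a plain frozen object, unlike a class instance holding references and behavior, serializes and round-trips for free:

```javascript
// A simple, immutable value: just data, no methods, no hidden references.
const order = Object.freeze({ id: 42, items: ['underpants'], total: 9.99 })

// Serialization and logging work with no extra machinery:
const wire = JSON.stringify(order)
const copy = JSON.parse(wire)

console.log(copy.total) // 9.99 -- a faithful round trip
```

Try the same with a class instance carrying methods or a live database handle and the round trip silently loses information — which is exactly the kind of coupling simple values rule out.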


Out of these requirements for simple uncomplected values, you can derive from first principles a surprising number of "best" practices — immutable programming, constraining state with a functional core, imperative shell, parse don't validate, and managing function color. The pursuit of simplicity isn't a cost-free proposition, but a variety of techniques from structural sharing to static analysis can help.
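"Parse, don't validate" in miniature: do the checking once at the boundary and hand downstream code a value that is valid by construction (parseEmail is an invented example):

```javascript
// Parse, don't validate: raw input crosses the boundary exactly once.
function parseEmail(raw) {
  const s = String(raw).trim().toLowerCase()
  if (!s.includes('@')) throw new Error(`not an email: ${raw}`)
  // Downstream code receives a frozen, normalized value and never re-checks it.
  return Object.freeze({ email: s })
}

const user = parseEmail('  Swyx@Example.COM ')
console.log(user.email) // 'swyx@example.com'
```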

Instead of memorizing a table of good/bad examples, the better approach is to understand that these are all instances of the same general rule: Complexity arises from coupling.

Minimize Edit Distance

Whenever I think about complexity now, I mentally picture the braids from Simple Made Easy.


When you have multiple strings next to each other, you can braid them and knot them up. This is complexity — complexity is difficult to unwind. It is only when you have just one string that it becomes impossible to braid.

More to the point, we should try to reduce our reliance on order as much as possible:
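One everyday instance of order-dependence: positional arguments couple every caller to parameter order, while a single options object does not (createUser is an invented example):

```javascript
// Positional: every caller breaks if a parameter is inserted or reordered.
function createUserPositional(name, email, isAdmin, sendWelcome) {
  return { name, email, isAdmin, sendWelcome }
}
createUserPositional('swyx', 'x@y.z', false, true) // which boolean is which?

// Named options: order no longer matters, and new options slot in
// with defaults without touching existing callers.
function createUser({ name, email, isAdmin = false, sendWelcome = false }) {
  return { name, email, isAdmin, sendWelcome }
}

const u = createUser({ email: 'x@y.z', name: 'swyx', sendWelcome: true })
console.log(u.isAdmin) // false
```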

You can even quantify this complexity with a notion of "edit distance". Imagine a complexity measure similar to the CSS specificity formula: a complexity of C(1,0,0,0) would be harder to change than C(0,2,3,4). Optimizing for change would then mean reducing the "edit distance" complexity profile of common operations.
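A toy version of that measure, assuming tuples of hypothetical "severity buckets" compared left-to-right the way CSS specificity is:

```javascript
// Compare two hypothetical complexity tuples C(a,b,c,d) lexicographically:
// one edit in a higher bucket outweighs any number in lower buckets,
// so C(1,0,0,0) is "harder to change" than C(0,2,3,4).
function harderToChange(a, b) {
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return a[i] > b[i]
  }
  return false // equal profiles
}

console.log(harderToChange([1, 0, 0, 0], [0, 2, 3, 4])) // true
```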

I haven't worked out the exact formula yet, but we can feel it when a codebase is hard to change. Development progresses more slowly as a result. But that's just the visible effect: because it isn't fun to experiment in the codebase, novel ideas are never found. The invisible cost of missed innovation is directly tied to how easy it is to try things out or change your mind.

To make code easy to change, make it impossible to "braid" your code.

Catch Errors Early

As much as we can try to contain the accidental complexity of our code by API design and code style, we can never completely eliminate it except for the most trivial programs. For the remaining essential complexity, we have to keep our feedback loops as short as possible.


IBM coined the term "Shift Left" after finding that the earlier you catch errors, the cheaper they are to fix. If you arrange the software development lifecycle from left (design) to right (production), shifting your errors "left" saves real money by catching them sooner. (For more on this, see my discussion and sources in Language Servers are the New Frameworks.)

In concrete terms this might translate to:
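For example, one shift-left move is validating configuration once at startup rather than letting a bad value surface deep in production (loadConfig is an invented example):

```javascript
// Shift left: fail at startup, on the developer's machine, instead of
// hours later under real traffic.
function loadConfig(env) {
  const port = Number(env.PORT)
  if (!Number.isInteger(port) || port <= 0) {
    throw new Error(`invalid PORT: ${env.PORT}`)
  }
  return Object.freeze({ port })
}

const config = loadConfig({ PORT: '8080' })
console.log(config.port) // 8080
```

Static types, linters, and tests are all variations of the same move: each one drags a class of errors further left in the lifecycle.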

The causality may be bidirectional. If you make it easier to change things, you will make changes more frequently; conversely, because you expect to make frequent changes, you are more incentivized to make things easy to change. An extreme example involves not just code but community: the longer a library (or language) stays on version 1, the harder switching to version 2 becomes, whereas ecosystems that regularly publish breaking versions (in exchange for clear improvements) seem to avoid stasis by sheer exposure.

A caution against too much change

Any good idea turns bad when taken to the extreme. If you change things too much you might be favoring velocity over stability — and stability is very much a feature that your users and code consumers rely on. Hyrum's law guarantees that with sufficient users and time, even your bugs will be relied on, and people will get upset if you fix them.

That said, overall, I find Optimizing for Change a net win in my programming, product, and system design decisions and am happy that I've boiled it down to four principles: Plan for Common Changes, Use Simple Values, Minimize Edit Distance, and Catch Errors Early!

Related reads