Regulative Systems in Software Design
Today I started to read Josef Müller-Brockmann’s *Grid systems*. I came across a term, regulative system, which was a bit confusing—it seemed like a Formal Concept, but a quick search didn’t give me the definition I was hoping for.
So my assumption is that it is what it sounds like—some (design) rules that specify what you can and can’t do.
A few pages later:
> The use of the grid as an ordering system […] shows that the designer conceives […] work in terms that are constructive and oriented to the future.
This was really confusing for me. Why does M-B (Müller-Brockmann) think that a grid has anything to do with being future-oriented? That doesn’t seem to be a property of grids.
But, contextualizing a bit, I think I’ve gotten some understanding of this. A designer is hired to create a set of rules to follow. And here, part of that regulative system specifies that you must lay out elements in relation to a grid. Finally, because future generations have this strict spec to follow, the design persists even though the people implementing the design may have changed.
In other words, it’s not the grid itself. “Oriented to the future” means that the design remains consistent. And the grid just happens to be a concept that you can make clear, straightforward rules around.
I could rephrase it a few more times, but I don’t think I can get across how significant this idea feels (even though, yes, it feels like a “duh” idea). If you create and follow a set of standards, quality will stay consistent over time.
Okay, let’s move on to software.
Here are a few concepts: interface, type checking, implements, API, tests, build pipelines.[^1] Do you see where I’m going?
[^1]: Is it cheating to use interface twice?
At some point (after I read a lot of code), it felt like there were an infinite number of ways to accomplish something in software. This is of course an exaggeration, but there are a lot of ways. Some approaches differ in style, some differ in quality, some differ in design. It’s similar to writing a sentence. It’s open-ended.
The thing I care about here is that things can be bad or wrong. How you lay out text on a page is open-ended. But getting your text on the page doesn’t make it good or correct. You often need more than that—for example, it normally has to be readable.
Statically typed languages (and/or type checking) have largely won out in tech companies because of this future-oriented, regulative system concept. They are, by definition, a regulative system. Because it’s enforced by a compiler, you must write type-safe code. It feels like the industry got to this point iteratively (“we found that using a type checker reduced bugs by X percent!!!”), but taken through this lens of regulative systems it feels rather obvious.
But more importantly, the problem is solved. A software engineer can’t ship a type error, because it will be caught before production. And it is no longer a concern that other engineers must be on the lookout for.
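To make that concrete, here’s a minimal sketch (my own example, not from any particular codebase) of the compiler acting as the regulator:

```typescript
// A function with a typed contract: both arguments must be numbers.
const applyDiscount = (price: number, percent: number): number =>
  price * (1 - percent / 100);

// applyDiscount("19.99", 10);
// ^ Uncommented, the compiler rejects this call before it can ever reach
//   production: Argument of type 'string' is not assignable to parameter
//   of type 'number'.

const sale = applyDiscount(100, 25); // 75
```

Nobody has to remember to check the argument types in review; the rule is enforced by the machine, every time.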
In software, regulative systems work the best when a machine can do the regulation. That is, in fact, what automatic PR tests and build pipelines are. They add an “enforcement” piece to your regulative system.
There are also rules that machines aren’t enforcing for you. Often these are stylistic, and they are often not enforced because they aren’t accepted across your team or because it feels too hard to add automatic enforcement.
Here’s a simple example: return early.
```typescript
// Returning early: handle the error case first, then the happy path.
const divide = (a: number, b: number): number => {
  if (b === 0) {
    throw new Error("Cannot divide by 0.");
  }
  return a / b;
};
```

```typescript
// Without the early return: the happy path is nested inside a condition.
const divide = (a: number, b: number): number => {
  if (b !== 0) {
    return a / b;
  }
  throw new Error("Cannot divide by 0.");
};
```
This is perhaps a poor example (there’s no nesting), but hopefully you’re already familiar with the concept. Returning early keeps your code linear and reduces how much you need to keep in your head while reading it. Returning early is a widely accepted concept, but AFAIK it is not easy to enforce automatically.
If you keep looking, it feels like these regulative systems are everywhere in software. I’ll leave that to you because I want to wrap up. There’s one last piece of this regulative system idea that I want to touch on here: evolution of a system over time.
In my experience, most engineers write code to ship a feature. They’ll follow established organizational rules, but often those rules operate at a very low level (e.g. style at a line/method level) or at a very high level (e.g. acceptance testing). Everything in between has very little regulation—as long as it works, it’s good.
Engineers often get a good sense of how reliable a system is. Typically they’ll say “that system sucks”—they know it’s junk and a pain to work in. On the other hand, it often won’t even register when a system is well made. It just works. I’ve even seen teams build a greenfield system, and by the end of the project they already know it’s garbage.
I don’t have an explanation for why this happens, but this concept is often what I’m most interested in when it comes to software. It feels like the answer lies somewhere at the intersection of an excellent understanding of software principles, domain modelling, a lot of care around interface type signatures, and building a system that is designed to be changeable over time.
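As a hedged sketch of what that middle-layer care might look like (the names here are mine, purely illustrative): an interface that regulates what callers may depend on, so the implementation underneath is free to change over time:

```typescript
// The interface is the regulative piece: it pins down what a store must do.
interface ArticleStore {
  save(id: string, body: string): void;
  load(id: string): string | undefined;
}

// Today: an in-memory map. Tomorrow: a database. Callers never change,
// because they depend only on the ArticleStore signature.
class InMemoryArticleStore implements ArticleStore {
  private articles = new Map<string, string>();

  save(id: string, body: string): void {
    this.articles.set(id, body);
  }

  load(id: string): string | undefined {
    return this.articles.get(id);
  }
}
```

The `implements` keyword makes this enforceable by the compiler: swap in a new implementation that drifts from the signature, and the build breaks before anything ships.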
A lot of knowledge already exists on these subjects, but it doesn’t feel like there’s very much industry consciousness around this problem. Which, honestly, unsettles me a lot whenever I take the time to think about it.