An interesting intuition from D'Arcy Wentworth Thompson [On Growth and Form; I read the 1942 edition] is that the current shape of an organism or artifact is a byproduct, and in a sense a mirror, of the forces that were applied during its growth (or construction). Something is flat because it has been pressed, or cut, into that shape. This may seem trivial, but it allows for some interesting speculation about the original forces just by looking at the current shape.
While thinking about software, I came to a larger conclusion, which may seem equally trivial, but which I feel may hold the key to a more precise formalization of many notions. The shape and material of an artifact are also a recipe for its future reactions to forces. A long beam will bend easily. A wooden table will scratch easily.
When you move into the realm of programming artifacts, things get interesting, because a recipe is in itself a set of pre-encoded instructions. So, when you consider a software artifact (a class, a function, etc.) you see that the artifact encodes:
- a set of explicit instructions for the "essential interpreter", the machine that will execute those instructions. This is what we normally call "the function" or "the behavior" of that code. Those instructions are explicit in the sense that they are directly expressed in the programming language; using poor old C, which is close to the machine, I can say:
int f( int x ) { return x + 1 ; }
and that means "dear computer, please add 1 to this integer x and return the result".
- a set of implicit instructions for the "contingent interpreter" of change, encoded in the shape of code. This may sound like intellectual mumbo-jumbo, so let's ignore the essential / contingent terminology (which I've borrowed from Richard Gabriel [Form & Function in Software]) for a moment and focus on the notion instead.
Just like a physical artifact pre-encodes a set of responses to forces, so does our software artifact. Those responses are pre-encoded in its form. That's a rather powerful notion.
If you look at f, its pre-encoded response to change is rather simple:
- if you want to use another type instead of int, you either change f (and lose the version working on int) or clone f / change the clone (both reactions are sketched right after this list).
- if you want to add another value instead of 1, you either change f, or clone f / change the clone, or add a parameter to f.
- if you want a different algorithm, you may just as well implement another function and avoid bothering with transforming f (e.g. to take another function as a parameter), as f is too simple to be worth the effort.
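A minimal sketch of the first two reactions (f_double and f2 are made-up names, just for illustration):

double f_double( double x ) { return x + 1.0 ; } /* clone of f, changed to work on double */
int f2( int x, int delta ) { return x + delta ; } /* f with the added value promoted to a parameter */

Nothing surprising here; the point is that the shape of f itself dictated the shape of these moves.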
If I switch to C++, I can say:
template< typename T > T g( T x ) { return x + 1 ; }
The pre-encoded reaction to change for g is slightly different, as it basically says:
- it's like f, but you can use it for any type for which "+ 1" makes sense at compile time. So if your change is from int to one of those types, the reaction is that no change to g is necessary (damping; see the sketch below).
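To see the damping at work, here is a quick sketch (Money is a made-up type, introduced only to provide a suitable "+ 1"):

struct Money { int cents ; Money operator+( int n ) const { Money m = { cents + n } ; return m ; } } ;

void client() {
    double d = g( 3.14 ) ;  // T deduced as double: no change to g
    Money  m = { 100 } ;
    Money  m2 = g( m ) ;    // works for Money as well: again, no change to g
}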
Of course I can also write:
template< typename T > T h( T x, T y ) { return x + y ; }
and this pre-encodes another slightly different reaction to changes. Or move to:
template< typename T1, typename T2, typename T3 >
T3 r( T1 x, T2 y ) { return x + y ; }
and that encodes yet another reaction to changes, even though all these functions would return the same value when called with an integer and (if necessary) a 1. Note that the shape of r changes the call site as well: T3 cannot be deduced from the arguments, so the caller has to spell it out.
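A minimal sketch of what that looks like in practice (client2 is just a placeholder name, and the values are arbitrary):

void client2() {
    int    a = r< int, int, int >( 2, 1 ) ;         // T3 (and therefore T1, T2) stated explicitly
    double b = r< int, double, double >( 2, 0.5 ) ; // mixed argument types, result returned as double
}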
Now, if you review a number of programming / design notions from this perspective (OO polymorphism, genericity, etc), you'll see how shaping software in a specific way becomes less and less about "respecting some principle" and more and more about pre-encoding a reaction to change (in the decision space). That's basically another program, which might or might not be executed.
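Take OO polymorphism: a virtual function pre-encodes a very specific reaction, namely that new behavior can be added by writing a new derived class, without touching the existing code. A minimal sketch (Shape and Circle are, of course, made-up names):

struct Shape {
    virtual double area() const = 0 ;
    virtual ~Shape() {}
} ;

struct Circle : Shape {
    double r ;
    explicit Circle( double radius ) : r( radius ) {}
    double area() const { return 3.14159 * r * r ; }
} ;

// code shaped around Shape& keeps working, unchanged, when a new shape appears:
double report( const Shape& s ) { return s.area() ; }

Again, whether that pre-encoded program is ever executed depends on whether that kind of change ever materializes.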
Most likely, that program won't be executed by a computer: it will be executed by a human, the moment those changes become necessary. But in a sense, that's completely irrelevant. When you code, you're giving a set of instructions to the essential interpreter (the computer) and a set of programs (*) to the contingent interpreter (the future bearer of change). The former is directly encoded in the instructions; the latter is indirectly encoded in the form (shape).
(*) it's a set of programs, not a set of instructions, because it's up to the contingent interpreter to choose the program among those pre-encoded in the shape of the artifact.