I read the famous “On the cruelty of really teaching computing science” by E. W. Dijkstra today. The “abstract program”, built free from the notions of the model it’s executed on and entirely within the symbols of the language it is defined in, is fascinating and feels like a “language” of sorts. A language purely for symbolic manipulation, the program closer to a formula, as the essay describes it. Correctness is defined in the abstract terms of adherence to the specification rather than “works for this input” and “produces that output”.

An “error-free” program is then one that is “maintenance-free”: unless the specification changes, the program remains correct. The better programmer is the better manipulator of these symbols, able to use symbols that are themselves composites of other symbols, building on a “vocabulary” of logic itself.

I agree with the spirit of the idea, and that the algorithm or the formula is always paramount, especially for a learner. But for someone more advanced, the execution model and its details also begin to matter. Once you cannot improve the formula itself any further, understanding the execution model gives you tools to still improve the program. The decoupling is impossible if you begin to think of symbols as assembly instructions, hence the abstract program cannot go beyond the “abstraction” of the language it’s defined in.

The simplification/allusion/anthropomorphization is a plague, and a simple example can be seen in the typedef article. The simplification is unable to cover the “specification” of typedef, only a subset of the “operations” possible with it. The specification precedes the operations: the full set of operations (I view an operation as a tuple of inputs and outputs) can simply be understood as a set defined by the specification itself.

For example, explaining typedef int myint; defines one operation on typedef, and all I know is that I can use myint in place of int. As far as the usage of typedef as a symbol in my abstract program goes, I have no understanding beyond the one operation specified. An incorrect way to rectify this is to attempt expanding it to other types and to extrapolate (basically guess) when seeing an even newer case, still never having an understanding of the full set of operations possible.

Note that an analogy or relation to a different idea in a different field is different from a “coerced allusion”. Algorithms are universal, and in fact such relations, wherever apt, only allow you to gain a greater understanding of the gestalt. It’s important, though, to be aware of when you are mangling, and I don’t have a simple litmus test for it other than “feeling”.

Instead, if typedef is defined by its specification, that it takes a declaration and allows the declared entity to be referred to by a new name, then this gives you the full set of operations, and its meaning in the abstract program. It is also vastly simpler than going through the multiple operations one by one, and is effectively a greater understanding of the symbol itself in the context of the language.

The focus on specification, and on comparing the formula against the specification, also bakes in the practice of knowing the requirements, i.e. the transformation set, first, and only then writing the formula for the transformation. I’ve personally done formula creation in the manner where I program without much thinking and then go through a phase of multiple runs, debuggers, and print statements, absent-mindedly using the “operational mode” to glue together and “fit” the formula. This is something that is simply not possible in the abstract program.