As a physicist, I think that programming, like design in general, is all about using as few brain resources as possible when solving problems and when transmitting the solution to others. This is why concepts such as modularity, isolation, and clean interfacing (of which referential transparency is a part) are pervasive in all kinds of engineering. Decomposing a problem into parts, each with a simple solution and with as little uncontrolled interaction with the other parts as possible, is a rule of good design simply because we cannot think about more than roughly seven concepts at the same time. (By the way, an evolutionary psychologist would say that the beauty of simplicity is related to this release of brain resources.) As a matter of fact, these rules of "good" design do not apply to the design of living beings, simply because the process of Natural Selection has no such limitations. That is indeed the reason why natural designs are so difficult to reverse engineer.
Because of these kinds of human brain limitations and our lack of knowledge, design involves a lot of trial and error. The danger is getting lost in an explosion of unfruitful alternatives caused by low-level issues outside our high-level problem, due to the limitations of the tools we are using. In this sense, things like strong type inference are superb for cutting down the explosion of erroneous paths that the process of software development can generate. If designing solutions is sculpting order out of chaos and against chaos, intelligent tools are exactly what we need to keep us concentrated on fruitful courses of action. A physicist would say that the purpose of the engineering activity is to lower the entropic balance of the future by progressively reducing the number of possible states until the only ones permitted correspond to the desired outcomes, called "solutions" (plus a few bugs, of course).
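To make the point about pruning erroneous paths a little more concrete, here is a minimal sketch of my own (the `Meters`/`Seconds` wrappers are hypothetical, not from this post): the type checker rejects the mixed-up call before we ever run or debug anything.

```haskell
-- Two wrapper types that encode intent in the types themselves.
newtype Meters  = Meters  Double
newtype Seconds = Seconds Double

speed :: Meters -> Seconds -> Double
speed (Meters d) (Seconds t) = d / t

main :: IO ()
main = print (speed (Meters 10) (Seconds 3))   -- prints 3.333...
-- speed (Seconds 3) (Meters 10)  -- an "erroneous path", rejected at compile time
```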
For me, syntactic sugar is one more of the features that make Haskell so great. Once we discover that a sufficiently general solution corresponds to something already known, such as the relation of monads to imperative languages, why not make that similarity explicit, with the do notation, in order to communicate it better to other people by building on that common knowledge? (See the small sketch below.)
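As a hedged illustration of that point (a toy example of my own, not code from the post), here is the same tiny IO program written with do notation and with the `(>>=)` and `(>>)` it desugars to; the sugar is what lets the familiar imperative reading show through.

```haskell
-- With do notation: reads like an imperative script.
greet :: IO ()
greet = do
  putStrLn "Your name?"
  name <- getLine
  putStrLn ("Hello, " ++ name)

-- The same program with the sugar removed: plain monadic plumbing.
greet' :: IO ()
greet' =
  putStrLn "Your name?" >>
  getLine >>= \name ->
  putStrLn ("Hello, " ++ name)

main :: IO ()
main = greet
```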
I also have to say that, without Haskell, I would never have dreamed of having the confidence to play simultaneously with concurrency, transactions, network communication, and parsing while, at the same time, keeping the code clean enough to understand after a month of inactivity. This, for me, is the big picture that matters for real programming.
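As a small taste of what combining concurrency and transactions looks like in Haskell (a generic STM sketch of my own, not the author's actual code), here is an atomic transfer between two shared balances:

```haskell
import Control.Concurrent.STM

-- Two shared balances and an atomic transfer between them.
type Account = TVar Int

transfer :: Account -> Account -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)          -- retry until enough funds are available
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)       -- the whole transfer commits (or retries) as one transaction
  balances <- (,) <$> readTVarIO a <*> readTVarIO b
  print balances                     -- (70,30)
```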
3 comments:
I like your blog ....
Goodness! This is the best motivation for abstraction I have ever heard! (and for EDSLs as a consequence!). It is in fact short-term memory: the number of things you can consider at the same time when you concentrate on solving a problem. BTW, it looks like it is even smaller, 1 to 4 items [1], rather than 7 [2]. As a silly counterexample, I imagine a function with 15 parameters, each with a silly 20-character-long name … :-) … how easy... long live COBOL!!
[1](http://en.wikipedia.org/wiki/Subitizing)
[2](http://ask.metafilter.com/181868/What-cant-the-human-brain-do)