Sunday, January 11, 2009
As a physicist, I think that programming, like design in general, is all about using as few brain resources as possible, both while solving a problem and while transmitting the solution to others. This is why concepts such as modularity, isolation, and clean interfacing (of which referential transparency is a part) are pervasive in every kind of engineering. Decomposing a problem into parts, each with a simple solution and as little uncontrolled interaction with the other parts as possible, is a rule of good design simply because we cannot hold more than about seven concepts in mind at the same time. (By the way, an evolutionary psychologist would say that the beauty of simplicity is related to this release of brain resources.) As a matter of fact, these rules of "good" design do not apply to the design of living beings, simply because the process of natural selection does not have these limitations. That is indeed the reason why natural designs are so difficult to reverse-engineer.
Because of these human brain limitations and our lack of knowledge, design involves a lot of trial and error. The danger is getting lost in an explosion of unfruitful alternatives caused by low-level issues outside our high-level problem, due to the limitations of the tools we are using. In this sense, something like strong type inference is superb for cutting off the erroneous paths that the process of software development can generate. If designing solutions is sculpting order from chaos, and against chaos, then intelligent tools are what we need to keep us concentrated on fruitful courses of action. A physicist would say that the meaning of the engineering activity is to lower the entropic balance of the future by progressively reducing the number of possible states until the only ones permitted correspond to the desired outcomes, called "solutions", plus a few bugs, of course.
For me, syntactic sugar is one more of the features that make Haskell so great. Once we discover that a sufficiently general solution corresponds to something already known, as monads correspond to imperative languages, then why not make this similarity explicit, with the do notation, in order to communicate it better to other people by making use of this common knowledge?
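As a minimal sketch of that similarity: the same Maybe computation written in imperative-looking do notation and in the desugared form with `>>=`. The `safeDiv` helper is a hypothetical example, not something from this post.

```haskell
-- safeDiv is an illustrative helper: division that fails on zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Imperative-looking do notation:
calcDo :: Maybe Int
calcDo = do
  a <- safeDiv 100 5   -- binds a = 20
  b <- safeDiv a 2     -- binds b = 10
  return (a + b)

-- What the compiler desugars the do block into:
calcBind :: Maybe Int
calcBind =
  safeDiv 100 5 >>= \a ->
  safeDiv a 2   >>= \b ->
  return (a + b)

main :: IO ()
main = print (calcDo, calcBind)  -- (Just 30,Just 30)
```

Both forms are the same program; the do notation simply lets a reader coming from an imperative language follow the sequencing at a glance.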
I also have to say that, without Haskell, I would never have dreamed of having the confidence to play simultaneously with concurrency, transactions, Internet communications, and parsing while, at the same time, keeping the code clean enough to understand it after a month of inactivity. This, for me, is the big picture that matters for real programming.
Friday, January 09, 2009
A new version has been uploaded to Hackage: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/TCache The main addition of this version is the capability to safely handle, transact, and serialize to permanent storage many datatypes simultaneously, in the same piece of code, and incrementally. Just register each new datatype (with registerType). So it is not necessary to glue all types in advance into a single algebraic datatype. I suppose that "enhanced composability" applies to this feature.
In this release:
Added a Data.TCache.Dynamic (see dynamicsample.hs)
- Can handle, transact, and serialize to disk many datatypes simultaneously and incrementally
- Dynamic uses the same interface as TCache and adds *DResource(s) calls for handling many datatypes
- Safe dynamic data handling through a lighter, indexable, and serializable version of Data.Dynamic
- Added a KEY object for retrieving any object of any type.
Data.TCache is a transactional cache with configurable persistence. It tries to play a role similar to Hibernate
for Java or Rails for Ruby. The main difference is that transactions are done in memory through STM.
There are transactional cache implementations for some J2EE servers, such as JBoss.
TCache uses STM. It can atomically apply a function to a list of cached objects. The resulting
objects go back to the cache (withResources). It can also retrieve these objects (getResources).
Persistence can be synchronous (syncCache) or asynchronous, with configurable time between cache
writes and a configurable cache clearance strategy. The size of the cache can be configured too.
All of this can be done through clearSyncCacheProc. Even the TVar variables can be accessed
directly (getTVar) to get the full semantics of atomic blocks while maintaining the persistence of the data.
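This is not TCache code, but a minimal STM sketch of the idea behind withResources: atomically applying a function to shared state. The names `cache` and the `(+ 50)` update are illustrative assumptions.

```haskell
import Control.Concurrent.STM

main :: IO ()
main = do
  -- A TVar standing in for one cached object.
  cache <- newTVarIO (100 :: Int)
  -- Atomically apply a function to it; in TCache, withResources
  -- plays this role for a whole list of objects at once.
  atomically $ modifyTVar' cache (+ 50)
  balance <- readTVarIO cache
  print balance  -- 150
```

The point of doing this inside STM is that concurrent updates compose: two such transactions running at the same time can never observe or produce a half-applied update.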
Persistence can be defined for each object: each object must have a defined key and a default file
path (if applicable). Persistence is predefined in files, but the readResource, writeResource, and
delResource methods can be redefined to persist in databases or whatever else.
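As a hypothetical sketch of what such a pair of methods might do, here is a file-backed read/write in the spirit of readResource and writeResource; the function names `writeRes`/`readRes` and the Show/Read serialization are assumptions, not TCache's actual implementation.

```haskell
import System.Directory (doesFileExist)

-- Write an object to a file named after its key, serialized with Show.
writeRes :: Show a => FilePath -> a -> IO ()
writeRes key obj = writeFile key (show obj)

-- Read it back, returning Nothing if it was never persisted.
readRes :: Read a => FilePath -> IO (Maybe a)
readRes key = do
  exists <- doesFileExist key
  if exists
    then Just . read <$> readFile key
    else return Nothing

main :: IO ()
main = do
  writeRes "demo.key" (42 :: Int)
  mx <- readRes "demo.key" :: IO (Maybe Int)
  print mx  -- Just 42
```

Swapping these bodies for database calls, while keeping the key-to-object contract, is all the redefinition amounts to.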
There are samples in the package that explain the main features.