About monadic technologies

    Kirpichov rightly writes about the harm of relying on an intuitive understanding carried over from imperative languages: http://antilamer.livejournal.com/300607.html .

    However, it seems important to me to say out loud that everything now hiding under the name “monad” is itself rather muddled from the standpoint of pedagogy and evangelism. The classic SPJ/Wadler joke goes: “we should have called THIS ‘warm fuzzy things’, so as not to scare people off with category theory”. The joke is strikingly short-sighted. The problem lies in the same plane as naming your tasks with the word “stuff” (the very thing Allen fights against in his GTD).
    What currently goes by “monads” is a tangled complex of historically determined causes, problems, solutions, technical capabilities, and theoretical foundations (both algebraic and from the theory of computation).
    All these layers can (and should) be separated, to a first approximation, as follows (the order is roughly arbitrary):
    • the desire to make effects explicit (a pure encoding of imperative-looking steps in a computation; see Wadler's papers); this covers I/O, STM, parallel computing, etc.;
    • a convenient mechanism for reifying the basic micro-strategies of evaluation: function calls (call-by-name / call-by-value), nondeterminism, mutable state (assignment!), exception handling, stopping on failure, continuations, backtracking;
    • typeclasses as the mechanism by which monads enter the language, and consequently a convenient mechanism for meta-interception of computations (remarkably convenient for domain-specific embedded languages);
    • strict type checking, which follows from the use of typeclasses and allows the correct use of objects to be verified mechanically;
    • the monad laws that every monad must satisfy, which make it possible to write abstract combinators; this sometimes reveals unexpected isomorphisms between different problem domains, and also helps in optimizing and verifying programs;
    • a well-developed theoretical basis (category theory) underlying monads, which makes life easier for the authors of the base libraries on which all real programming then rests;
    • monads as just one class in a long chain of interesting algebraically defined classes, some weaker than monads and some stronger: Functor, Applicative, Monoid, Traversable, Foldable, Monad and friends, Arrow and friends;
    • the desire to reify certain kinds of computation as an algebraic structure (monoidal computations); this opens wide scope for optimization, program verification, abstract combinators, and the elimination of unbounded recursion; in the power of its results this resembles the way I/O was once reliably isolated in the IO monad.
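    The last two bullets can be made concrete with a tiny example (the names here are mine, purely for illustration, in modern Haskell notation). A Monoid is a far weaker structure than a Monad, yet its two laws (associativity and identity) are already enough to let a fold be split into chunks in any order, which is exactly what a parallel reduction needs:

```haskell
-- A monoid of "maximum of Ints": <> is associative, mempty is its identity.
newtype MaxInt = MaxInt Int deriving (Eq, Show)

instance Semigroup MaxInt where
  MaxInt a <> MaxInt b = MaxInt (max a b)

instance Monoid MaxInt where
  mempty = MaxInt minBound

-- Associativity guarantees that any chunking of the input gives the
-- same result, so the fold may be distributed across cores:
--   mconcat [MaxInt 3, MaxInt 7, MaxInt 5]  ==  MaxInt 7
```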

    Let me spend a couple of paragraphs on each point.
    Explication of effects. Clean, if I remember correctly, was the first to tackle fitting input-output into the ecosystem of a pure lazy language. I/O is simply the first effect to be handled this way. Today everything that does not fit into purity and laziness is framed as an effect: transactions (and their interaction with I/O), random number generators, synchronization in parallel computations. Besides interaction with the outside world, there are also computations that do not fit the language's native semantics, for example strict evaluation of function arguments (as in all imperative languages).
    Effects let you worry about only a clear, limited subset of the problem, because everything outside the effects consists of very “simple” things that cannot “hurt” you: there are no deadlocks, and you need not think about barriers or evaluation order. The monster is thus driven into a cage.
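    As a minimal sketch of what “driving the monster into a cage” looks like in the types (my own toy example, not taken from the papers below): the IO in a signature marks exactly which functions may touch mutable state, and the compiler refuses to let unmarked code call them.

```haskell
import Data.IORef

-- The effect (mutation of a reference cell) is explicit in the type:
-- only code itself marked IO may call bump.
bump :: IORef Int -> IO Int
bump ref = do
  n <- readIORef ref
  writeIORef ref (n + 1)
  return n

-- No IO in the type: guaranteed effect-free, nothing here can deadlock.
pureDouble :: Int -> Int
pureDouble = (* 2)

-- A cell starting at 0, bumped twice; each bump returns the old value.
demo :: IO (Int, Int)
demo = do
  r <- newIORef 0
  a <- bump r
  b <- bump r
  return (a, b)   -- (0, 1)
```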
    See: 1. SPJ, “Beautiful concurrency”, http://www.ece.iastate.edu/~pjscott/beautiful.pdf 2. Wadler, “The essence of functional programming”, http://mynd.rinet.ru/~alexm/monads/essence.pdf 3. SPJ, “Tackling the awkward squad”, http://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/ 4. Kiselyov et al., “Purely Functional Lazy Non-deterministic Programming”, http://www-ps.informatik.uni-kiel.de/~sebf/pub/icfp09.html
    Basic micro-strategies of evaluation. Do not forget that the people who stood at the origins of Haskell were interested first of all in programming compilers themselves. Using monads, one can explicate all the basic processes that occur during evaluation: function calls with different argument-passing conventions, lexical and dynamic scoping of variables (Env), exception handling, expressing assignment through state threading (State), ambiguous computations (amb/List), computations that may fail (Maybe/Error), continuations (Cont), backtracking.
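    One of these strategies, stopping on failure, fits in a few lines (a standard Maybe example; the names are my own). The >>= of the Maybe monad threads the short-circuiting, so no step has to check its predecessor explicitly:

```haskell
-- Division that may fail instead of crashing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- (100 / x) / y: if any step fails, the rest are skipped automatically.
chain :: Int -> Int -> Maybe Int
chain x y = do
  a <- safeDiv 100 x
  safeDiv a y

-- chain 5 2 == Just 10
-- chain 0 2 == Nothing   (the first division fails; the second never runs)
```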
    Modular interpreters implemented via monads and the related monad transformers were a separate area of research. This design made it possible to assemble an interpreter for a desired language (say, arithmetic + function calls + nondeterminism) by simply combining individual interpreter components. Each component is isolated from the rest and can be developed separately.
    Clearly, this approach lets you write the corresponding block once, verify it, and reuse it in any task that requires building an interpreter or even a compiler.
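    A sketch of such composition using the mtl library (the toy language and all names are mine): a state layer supplies the variable environment, an error layer supplies failure, and the evaluator is written once against the combined stack.

```haskell
import Control.Monad.State
import Control.Monad.Except

data Expr = Lit Int | Var String | Add Expr Expr | Div Expr Expr

type Env    = [(String, Int)]
type Interp = StateT Env (Except String)   -- state + failure, composed

eval :: Expr -> Interp Int
eval (Lit n)   = return n
eval (Var x)   = do
  env <- get
  maybe (throwError ("unbound: " ++ x)) return (lookup x env)
eval (Add a b) = (+) <$> eval a <*> eval b
eval (Div a b) = do
  n <- eval a
  d <- eval b
  if d == 0 then throwError "division by zero" else return (n `div` d)

run :: Env -> Expr -> Either String Int
run env e = runExcept (evalStateT (eval e) env)

-- run [("x", 6)] (Div (Var "x") (Lit 2))  ==  Right 3
-- run [] (Var "y")                        ==  Left "unbound: y"
```

Swapping in a different layer (say, a list monad for nondeterminism) changes only the type alias, not the individual clauses.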
    See: 1. Guy L. Steele Jr., “Building Interpreters by Composing Monads”, http://mynd.rinet.ru/~alexm/monads/steele.pdf 2. S. Liang, P. Hudak, “Modular Denotational Semantics for Compiler Construction”, http://mynd.rinet.ru/~alexm/monads/liang2.pdf 3. S. Liang, P. Hudak, “Modular Monadic Semantics”, 1998, http://flint.cs.yale.edu/trifonov/cs629/modular-monadic-semantics.pdf
    Mechanism for meta-interception of computations. Monads in Haskell are implemented through typeclasses, which are somewhat similar to interfaces in Java or concepts in C++. Typeclasses are entirely orthogonal to monads! Typeclasses make it possible to “intercept” a computation into a different result type.
    For example, you can write a Haskell function that computes the roots of a quadratic equation. It can be evaluated by passing it three numbers, in which case it returns a pair of roots. Using typeclasses, you can “evaluate” the same function in a different context, so that the result of the evaluation is JavaScript code: a JavaScript function that computes the roots of a quadratic equation.
    Thus the development of the basic algorithm can be separated from the way it is evaluated.
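    The quadratic-roots idea above can be sketched as follows (my own minimal version; real systems such as Augustsson's are far richer). The formula for one root is written once against a typeclass of arithmetic operations; instantiating it at Double evaluates it numerically, while instantiating it at a string-building type emits JavaScript source instead:

```haskell
-- Arithmetic, abstracted into a typeclass.
class Expr r where
  lit   :: Double -> r
  add   :: r -> r -> r
  mul   :: r -> r -> r
  neg   :: r -> r
  sqrt' :: r -> r
  divE  :: r -> r -> r

-- One root of a*x^2 + b*x + c = 0, written once.
root :: Expr r => r -> r -> r -> r
root a b c =
  divE (add (neg b) (sqrt' (add (mul b b) (neg (mul (lit 4) (mul a c))))))
       (mul (lit 2) a)

-- Interpretation 1: just compute with Doubles.
instance Expr Double where
  lit = id; add = (+); mul = (*); neg = negate
  sqrt' = sqrt; divE = (/)

-- Interpretation 2: generate JavaScript source for the same formula.
newtype JS = JS { js :: String }
instance Expr JS where
  lit n              = JS (show n)
  add (JS a) (JS b)  = JS ("(" ++ a ++ " + " ++ b ++ ")")
  mul (JS a) (JS b)  = JS ("(" ++ a ++ " * " ++ b ++ ")")
  neg (JS a)         = JS ("(-" ++ a ++ ")")
  sqrt' (JS a)       = JS ("Math.sqrt(" ++ a ++ ")")
  divE (JS a) (JS b) = JS ("(" ++ a ++ " / " ++ b ++ ")")

-- root 1 (-3) 2 :: Double          ==  2.0
-- js (root (lit 1) (lit (-3)) (lit 2))  is JavaScript text for the formula
```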
    A rapidly developing field right now is writing DSLs for parallel computing. You can write a vectorized algorithm in Haskell and then write a series of backends for it: for SSE, for CUDA, for plain C, for any other vectorizing technology yet to appear. Naturally, the same algorithm can simply be evaluated directly in Haskell to debug its basic correctness.
    See: 1. Lennart Augustsson's presentation “Strongly Typed DSEL”, http://www.infoq.com/presentations/Strongly-Typed-DSEL-Lennart-Augustsson (at the end there is a story about a language for generating Excel files) 2. http://hackage.haskell.org/package/accelerate 3. http://cdsmith.wordpress.com/2009/09/20/side-computations-via-type-classes/ (no monads at all)
    Monad laws, theoretical foundations, the younger and older brothers of monads. These provide an opportunity to formally prove some basic things, so as to be at least somewhat sure that the foundation will not give way. This is interesting not only to the authors of the most basic libraries, but also to those who have reached tasks of such complexity that every opportunity for automatic self-checking is welcomed with enthusiasm.
    Together with strict type checking, this makes it possible to catch situations where the task is misunderstood, formulated contradictorily, or simply harder than it looks. It also lets you evolve a program while dramatically reducing the likelihood of erroneous changes. Thus Galois say in their presentation that they constantly try to encode requirements on the code as type signatures.
    The formal structure (including the monoidal style) also opens wide scope for compiler-level optimizations (including ones controlled by the programmer through the rewrite-rules mechanism). Some rigidly structured functions can be optimized nearly down to machine code, without sacrificing the abstraction of the source code.
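    To make the “automatic self-checking” point concrete, here are the three monad laws checked by hand for Maybe at sample points (my own illustration; a real project would state them for all inputs with QuickCheck):

```haskell
f, g :: Int -> Maybe Int
f n = Just (n + 1)
g n = if n > 0 then Just (n * 2) else Nothing

-- Left identity:  return x >>= f  ==  f x
leftId :: Int -> Bool
leftId x = (return x >>= f) == f x

-- Right identity:  m >>= return  ==  m
rightId :: Maybe Int -> Bool
rightId m = (m >>= return) == m

-- Associativity:  (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
assoc :: Maybe Int -> Bool
assoc m = ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))

-- All three hold for every input, which is what licenses mechanical
-- refactoring of monadic code (re-associating >>=, inlining return).
```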
    See: 1. http://www.galois.com/blog/2009/04/27/engineering-large-projects-in-haskell-a-decade-of-fp-at-galois/
    2. http://comonad.com/reader/2009/iteratees-parsec-and-monoid/ (monoidal style)
    3. http://www.cse.unsw.edu.au/~dons/papers/CLS07.html (dons et al., “Stream Fusion: From Lists to Streams to Nothing at All”)
