Pragmatic functional programming

Original author: Robert C. Martin (Uncle Bob)

The move to functional programming began in earnest about a decade ago. We saw languages like Scala, Clojure, and F# begin to attract attention. This movement was more than the usual "Oh, cool, a new language!" admiration. There was something really driving it, or so we thought.


Moore's law told us that computer speed would double every 18 months. That law held from the 1960s into the 2000s. And then it stopped. Cold. Clock rates reached 3 GHz and then plateaued. We had hit the speed-of-light limit: signals could not propagate across the surface of the chip fast enough to allow higher speeds.


So the hardware designers changed their strategy. To get more throughput, they added more processors (cores). To make room for those cores, they removed much of the caching and pipelining hardware from the chip. Each processor became a bit slower than before, but there were more of them. Throughput increased.


I got my first dual-core machine 8 years ago. Two years later, I got a quad-core machine. And so the proliferation of cores had begun. And we all understood that this would affect software development in ways we could not even imagine.


One of our reactions was to learn functional programming (FP). FP strongly discourages changing the state of a variable once it has been initialized. This has a profound effect on concurrency. If you cannot change the state of a variable, you cannot have a race condition. If you cannot update the value of a variable, you cannot have a concurrent-update problem.
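As a small illustration (not from the original article), here is a Clojure sketch of what immutability buys you: "adding" to a vector returns a new vector, so two concurrent tasks can never corrupt the data the other one sees.

```clojure
(def shared [1 2 3])          ; an immutable vector

;; two concurrent tasks each derive a NEW value from the shared data;
;; neither can modify what the other reads, so no race is possible
(def a (future (conj shared 4)))
(def b (future (conj shared 5)))

@a      ;=> [1 2 3 4]
@b      ;=> [1 2 3 5]
shared  ;=> [1 2 3]  ; the original is untouched
```

The two futures never need a lock, because there is no mutable state for them to contend over.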


This, of course, was touted as the solution to the multicore problem. As the number of cores grew, concurrency, indeed true simultaneity, would become a significant issue. FP promised a programming style that would reduce the problems of working with 1024 cores in a single processor.


So everyone began to study Clojure, or Scala, or F#, or Haskell, because they knew the freight train was coming at them and wanted to be prepared when it arrived.


But the freight train never arrived. Six years later, I purchased a quad-core laptop. I have had two more since then. It seems the next laptop I buy will also be a quad-core. Are we seeing another plateau?


By the way, last night I watched a film from 2007. The heroine used a laptop, browsed pages in a trendy browser, searched on Google, and received text messages on a flip phone. Oh, it was dated: I could see that the laptop was an older model, the browser an older version, and the phone far from a modern smartphone. And yet the changes were not nearly as impressive as the changes between 2000 and 2011, and nowhere near as impressive as the changes between 1990 and 2000. Are we seeing a plateau in the pace of computer and software technology?

Well, perhaps FP is not as critical a skill as we once thought. Maybe we will not be drowning in cores after all. Maybe we do not need to worry about chips with 32768 cores. Maybe we can all relax and go back to updating our variables.


I think that would be a mistake. A big one. I think it would be as big a mistake as rampant use of goto. I think it would be as dangerous as abandoning dynamic dispatch.


Why? We can start with the reason that interested us in the first place: FP makes concurrency much safer. If you are building a system with many threads or processes, using FP will greatly reduce the race-condition and concurrent-update problems you might otherwise have.


Why else? Well, FP is easier to write, easier to read, easier to test, and easier to understand. I can imagine some of you waving your arms and shouting at the screen right now. You tried FP and found it anything but simple. All those maps and reduces, and all that recursion, especially tail recursion, are anything but simple. Sure. I get it. But that is just a familiarity problem. Once these concepts become familiar, and building that familiarity does not take long, programming gets much easier.


Why does it get easier? Because you do not have to keep track of the state of the system. The state of a variable cannot change, so the state of the system cannot change. And it is not just the system you no longer have to track. You do not have to track the state of a list, or a set, or a stack, or a queue, because those data structures cannot be changed. When you push an item onto a stack in an FP language, you create a new stack; you do not change the old one. This means the programmer has to juggle fewer balls in the air at once. Less to remember. Less to track. And that makes the code easier to write, read, understand, and test.
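The stack point can be sketched in a few lines of Clojure (my example, not from the original): conj "pushes" onto a list by producing a new list, and the old one is still there, unchanged.

```clojure
;; "pushing" onto a list yields a new list; the old one is unchanged
(def stack '(2 3))
(def pushed (conj stack 1))   ; conj adds to the front of a list

stack   ;=> (2 3)
pushed  ;=> (1 2 3)
```

Anyone still holding a reference to stack sees exactly what they saw before the push, which is why there is nothing to track.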


So which FP language should you learn? My favorite is Clojure. The reason is that Clojure is absurdly simple: it is a dialect of Lisp, which is a beautifully simple language. Let me show you.


Here is a function call in Java: f(x);
Now, to turn it into a function call in Lisp, just move the first parenthesis one position to the left: (f x).


Now you know 95% of Lisp, and you know 99% of Clojure. That silly little parenthesis syntax really is just about all the syntax these languages have. They are absurdly simple.


Now, I know, maybe you have seen Lisp programs before and did not like all those parentheses. And maybe you did not like CAR, CDR, CADR, etc. Do not worry. Clojure has a bit more punctuation than Lisp, so there are fewer parentheses. Also, in Clojure CAR, CDR, and CADR are replaced by first, rest, and second. What is more, Clojure runs on the JVM and gives you full access to the entire Java library, and to any other Java framework or library you want. Interoperability is quick and easy. And, even better, Clojure gives you full access to the object-oriented features of the JVM.
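For example (my sketch, not from the original), the classic CAR/CDR/CADR operations read much more plainly under their Clojure names:

```clojure
(def xs '(10 20 30))

(first xs)   ;=> 10       ; what Lisp calls CAR
(rest xs)    ;=> (20 30)  ; what Lisp calls CDR
(second xs)  ;=> 20       ; what Lisp calls CADR
```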


I hear you say, "But wait! FP and OOP are mutually incompatible!" Who told you that? That's nonsense! Oh, it is true that in FP you cannot change the state of an object, but so what? Just as pushing a number onto a stack gives you a new stack, calling a setter on an object gives you a new object instead of changing the old one. This is very easy to deal with once you get used to it.
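Here is a small sketch of that idea in Clojure (the account map is my own illustration): assoc plays the role of a setter, returning a new "object" while leaving the old one alone.

```clojure
(def account {:owner "Ada" :balance 100})

;; a "setter" returns a new object; the old one is unchanged
(def updated (assoc account :balance 150))

(:balance account)  ;=> 100
(:balance updated)  ;=> 150
```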


But back to OOP. One of the features of OOP that I find most useful at the software architecture level is dynamic polymorphism. And Clojure gives you full access to the dynamic polymorphism of Java. Perhaps an example will explain this best.


(defprotocol Gateway
  (get-internal-episodes [this])
  (get-public-episodes [this]))

The code above defines a polymorphic interface for the JVM. In Java, this interface would look like:


public interface Gateway {
    List getInternalEpisodes();
    List getPublicEpisodes();
}

At the JVM level, the bytecode produced is identical. Indeed, a Java program could implement the interface exactly as if it had been defined in Java. Likewise, a Clojure program can implement an interface defined in Java. In Clojure, that looks like this:


(deftype Gateway-imp [db]
  Gateway
  (get-internal-episodes [this]
    (internal-episodes db))
  (get-public-episodes [this]
    (public-episodes db)))

Note the constructor argument db, and how all the methods can access it. Here the implementation of the interface simply delegates to some local functions, passing db along.
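To round the example out, here is a hypothetical, self-contained sketch of how this type might be constructed and used. The internal-episodes and public-episodes functions, and the keyword-map "database", are my stand-ins for whatever the real system would supply; the protocol and type are as in the article.

```clojure
;; stand-in query functions (assumptions, not from the original)
(defn internal-episodes [db] (:internal db))
(defn public-episodes   [db] (:public db))

(defprotocol Gateway
  (get-internal-episodes [this])
  (get-public-episodes [this]))

(deftype Gateway-imp [db]
  Gateway
  (get-internal-episodes [this]
    (internal-episodes db))
  (get-public-episodes [this]
    (public-episodes db)))

;; dynamic polymorphism in action: callers see only the protocol,
;; never the concrete type or the db behind it
(def gw (Gateway-imp. {:internal [:e1] :public [:e2]}))

(get-internal-episodes gw)  ;=> [:e1]
(get-public-episodes gw)    ;=> [:e2]
```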


Perhaps best of all is the fact that Lisp, and therefore Clojure, is (wait for it) homoiconic, which means that code is data the program can manipulate. This is easy to see. The code (1 2 3) is a list of three integers. If the first element happens to be a function, as in (f 2 3), it becomes a function call. Thus all function calls in Clojure are lists, and lists can be manipulated by code. So a program can construct and execute other programs.
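A quick demonstration of homoiconicity (my example, not from the original): a quoted form is just a list you can take apart, and eval runs that same list as code.

```clojure
;; code is data: a quoted form is just a list, not yet executed
(def code '(+ 1 2))

(first code)  ;=> +   ; merely a symbol inside a list
(eval code)   ;=> 3   ; the same list, run as code

;; and programs can build other programs
(def built (list '* 6 7))
(eval built)  ;=> 42
```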


The point is this: functional programming is important. You should learn it. And if you are wondering which language to learn, I suggest Clojure.

