If the free lunch is over, where can we eat for free?

In the first part of this blog post, I wrote about why FP is mandatory for programmers who work on modern software projects. Summing it up: modern hardware scales up by adding cores, therefore, if software wants to scale well, it has to do so by going concurrent, and FP is a good choice for dealing with concurrent programming issues.

In this post, I’ll write about some differences between traditional and FP programming, and I’ll present one more reason why FP is a good match for parallel programming.

You can write bad code in any language. That’s a fact. So there’s no magic in functional programming to turn a lame programmer into a brilliant one. On the other hand, you can write pretty good code in the functional paradigm by sticking with the tools provided by functional languages.

So, let’s have a look at this FP thing… First, it is evident that this is not your parents’ FP – map, filter, fold… just a few of the most recurring spells you’d see in a Scala source, and none of them were in the notes I jotted down during my university courses. Even looking at my LISP textbook I couldn’t find any reference to the map operation. (Just to help those with no FP background – map (transform in C++) is an operation that applies a transformation to each item in a collection. In other words, if you have a function that turns apples into pears, then – using map – you can turn your array of apples into an array of pears.)
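The apples-to-pears idea, sketched in Scala (the `Apple` and `Pear` types and the conversion function are made up for illustration):

```scala
// Hypothetical fruit types, just to mirror the apples-to-pears metaphor.
case class Apple(weight: Int)
case class Pear(weight: Int)

// A function that turns one apple into one pear...
def toPear(a: Apple): Pear = Pear(a.weight)

// ...and map lifts it to work on a whole collection at once.
val apples = List(Apple(100), Apple(120), Apple(90))
val pears  = apples.map(toPear)
```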

The advantage of these aggregate functions is that they describe the transformations that, applied to your inputs, turn them into the needed outputs. It is not really about avoiding loops or for statements; it is about describing the operations using instructions at a higher level of abstraction.
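To make the difference in abstraction level concrete, here is a sketch comparing an imperative sum loop with its `foldLeft` equivalent (the numbers are arbitrary):

```scala
val xs = List(3, 1, 4, 1, 5)

// Imperative: how to compute the result, step by step, with mutable state.
var total = 0
for (x <- xs) total += x

// Declarative: what the result is -- a fold of + over the list.
val sum = xs.foldLeft(0)(_ + _)
```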

The other notable fact is that there is a lot of algebraic theory involved. You can do functional programming without types (as Lisp teaches us), but when you mix strong typing, generics, and functional programming, you get a mathematically sound skeleton for your code.

This is disorienting at first, for at least two reasons. First, there is a precise terminology you are not used to (functor, monad, co-functor, and so on). Second, you need to know some theory. I remember high school and university classes on algebra. I found all those groups, semigroups, rings, abelian monoids, whatever, just a boring formalism about nothing. While I could see practical uses for the symbol-manipulation part of algebra, I couldn’t see any practical application for algebraic structures. In fact, I worked as a professional programmer for 20+ years without ever hearing about them. Now it turns out they play quite a primary role in functional programming.
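To show that these structures are more than formalism, here is a minimal, hand-rolled sketch of a monoid as a Scala trait (libraries such as Cats provide a richer version of the same idea):

```scala
// A monoid: an associative combine operation plus an identity element.
trait Monoid[A] {
  def empty: A
  def combine(x: A, y: A): A
}

// Integers under addition form a monoid...
val intAddition: Monoid[Int] = new Monoid[Int] {
  def empty = 0
  def combine(x: Int, y: Int) = x + y
}

// ...and so do strings under concatenation.
val stringConcat: Monoid[String] = new Monoid[String] {
  def empty = ""
  def combine(x: String, y: String) = x + y
}

// Any monoid gives you list reduction for free, with the same code.
def combineAll[A](xs: List[A])(m: Monoid[A]): A =
  xs.foldLeft(m.empty)(m.combine)
```

The payoff is that one generic `combineAll` works for every type with a lawful monoid instance; the algebra is what guarantees the reduction makes sense.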

Ok, I see that many of my two readers are still unconvinced. I can hear you saying “yes, immutable state, but why should I bother to learn Algebra theory now?”.

It is much like design patterns or C++ concepts – they define boundaries and connections so that you can compose your software. In FP you have the added advantage of mathematical proofs about the results of your compositions.

Composition, in fact, is a central concept of FP. By composition, you describe the overall flow of the information and the transformations to apply to your data, decoupling them from the presence of the data.
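A sketch of that decoupling in Scala: the whole pipeline is assembled with `andThen` before any data exists, and the data shows up only when the composed function is finally applied.

```scala
// Each step is a plain function...
val trim: String => String    = _.trim
val shout: String => String   = _.toUpperCase
val exclaim: String => String = _ + "!"

// ...and composition describes the whole flow, with no data in sight yet.
val pipeline: String => String = trim.andThen(shout).andThen(exclaim)

// The data appears only when the composed transformation is applied.
val result = pipeline("  hello  ")
```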

I’ll take another example from my memory bag 🙂 Some years ago I encountered a piece of software that had to initialize a modem. It was a GPRS modem, so several operations had to be performed either after a given event or after a specific timeout. The code was written in C and made heavy use of callbacks. It was a set of sparse, uncoupled functions; each was a callback that first had to figure out the context in which it was called and then take action to set up the next callback.

In practice, it was a mess. So difficult to read and even harder to debug. Maintaining it was like juggling with sharp knives on a boat in a tempest.

FP offers an elegant solution to this problem via the concept of monads. A monad (setting aside the professorial definitions you can find on the Internet) is, for a programmer’s purposes, an object that encapsulates an operation – a much more approachable definition. You can chain (combine) monads together so that their actions are performed one after the other, each only if the previous one succeeds. The composition you code defines the structure, but it is not interwoven with the execution, so your control structure may be run, and waited on for completion, when desired.
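A sketch with `Option`, one of the simplest monads in the Scala standard library: each step runs only if the previous one succeeded, and a failure anywhere short-circuits the rest. (The lookup functions are invented for illustration.)

```scala
// Hypothetical lookup steps that may each fail.
def findUser(id: Int): Option[String] =
  if (id == 42) Some("alice") else None

def findEmail(user: String): Option[String] =
  if (user == "alice") Some("alice@example.com") else None

// flatMap chains the steps; a None anywhere skips everything after it.
val email: Option[String]   = findUser(42).flatMap(findEmail)
val missing: Option[String] = findUser(7).flatMap(findEmail)
```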

If you take your time to digest the concept, you’ll see that this is a wonderful tool for dealing with asynchronous events. You describe, in a single readable recipe, how the happy path is managed, and then let the execution do the rest.
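With `Future`, the same chaining style reads as a single happy-path recipe for asynchronous steps – the structure is declared up front and executed separately. This is a sketch, and the modem-flavoured step names are invented for illustration:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Hypothetical asynchronous setup steps.
def reset(): Future[Unit]               = Future(())
def attach(): Future[String]            = Future("attached")
def connect(state: String): Future[String] = Future(s"connected after $state")

// The happy path, written as one readable recipe; any failure
// short-circuits the remaining steps.
val session: Future[String] =
  for {
    _     <- reset()
    state <- attach()
    s     <- connect(state)
  } yield s
```

Blocking with `Await` is fine for a demo; production code would keep composing futures or register completion callbacks instead.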

Of course, FP is not free. If you are used to imperative/OOP programming, the learning curve is quite steep, not because of the language itself, but because of the paradigm shift. The new paradigm demands that problems be solved in a different way.

Shifting from C to C++, then to Java, to PHP and eventually to C# was quite easy. All these languages, especially those supporting OOP, share many of the same concepts. The syntax differs, expressivity may differ, but the tools are the same. A proficient C++ programmer could turn into a proficient Java or C# programmer in a week. The other way around may take longer because of C++ subtleties, but it is still doable quickly (if the programmer has a good understanding of how the hardware works).

Shifting from C++ to idiomatic Scala is quite a jump. It may take months.

Is this that bad for programmers? I would say not really. I am a bit annoyed by the turn the programming profession has taken in the last 10-15 years. Basically, everyone who can type on a keyboard can proclaim themselves a programmer, fooling both themselves and unaware customers, and spawning projects that easily go out of control, over schedule and over budget, leaving Real Programmers to sort out the mess and do serious work to stabilize and industrialize the amateurish results.

So, it is not that bad that this extra step of studying algebra is required of programmers. It also helps to make a distinction between passionate and unpassionate programmers. So I would say it is fair for functional programmers to be paid more than non-functional programmers, all other things being equal.

As every coin has two faces, there is nothing without drawbacks. First, it is easier in FP than in traditional programming to meet bleeding-edge addicts. I mean, FP is new and shiny, so it attracts those attracted by the new and shiny, regardless of its industrial aptness. I think it is good to keep an eye on the boiling surface of software development innovation, but sound engineering practice should always be the leading light, and it is seldom wise to use libraries, languages or tools in their infancy.

The other disadvantage is more technical – you are going to pay for the increased abstraction in programming. This is nothing new; we gave up some percentage of execution speed each time we stepped up – from assembly to compiled code, from compiled to structured programming, from structured to OOP, and so on. FP by its nature will generate lots of temporary objects: since you cannot mutate objects, you need to copy them with new values. This usually means indirection, poor memory locality, and garbage collection. In the past, I compared Scala and C++ performance with terrible results for the functional approach. I’m pretty sure that reality is not as bleak as my tests depict it, but I’m also sure the penalty paid for FP is not that light.
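A sketch of why immutability means allocation: “updating” an immutable value produces a fresh object rather than mutating in place, as with `copy` on a Scala case class.

```scala
case class Point(x: Int, y: Int)

val p1 = Point(1, 2)

// "Updating" p1 allocates a brand-new Point; p1 itself is untouched.
val p2 = p1.copy(x = 10)
```

Each such update is a new heap object, which is where the extra garbage-collector and memory-locality pressure comes from.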

For sure the JVM won’t help in this respect, and the Scala Native project doesn’t seem to be moving very fast. Despite this, I reckon that FP is a tool that every software engineer must have in her/his toolbox and that FP can be the paradigm of choice for many projects.
