Is Functional Programming functional to programming?

Some 25 years ago I took my advanced programming class, which included exotic paradigms such as logic and functional programming.

Like most of the content of that course, FP went into a seldom-if-ever opened drawer during my professional career.

I simply never saw the need to use Lisp, nor could I afford the luxury of never-changing values.

A few years ago (well, not that few – I just checked, it was 2005) I read in DDJ that the free lunch was over. That had nothing to do with charitable food banks: it marked a fundamental turn in the evolution of computer performance, something that would have a strong impact on how software was going to be designed, written, and tested.

Until 2005 it didn't matter how badly written and underperforming your code was. You were assured that, for free, just by waiting for the next hardware iteration, your code would run faster (bugs included)! This is like deflation in economics – why should I invest my money if keeping it under the mattress for a year increases its value?

Not good for the economy, and not good for programmers either. Programmers' laziness has always been praised, but it takes an expert programmer to tell laziness apart from sloppiness (this sounds like meat for another blog post :-)).

So, what happened in 2005? Simply put, we hit a physical limit; a limit such as – you can't get through solid objects. The speed (the clock rate) of the electronics reached a point where heat couldn't be moved away fast enough. Increase the speed beyond that limit and the heat accumulates, melting the chip down.

From a more technical point of view, the clock is the worst offender: dynamic power grows linearly with frequency, but higher frequencies also demand higher supply voltages, and power grows with the square of the voltage. So chip makers were forced (by Nature) to stop raising frequencies. In order to keep progress marching forward and the money flowing in, they kept adding components. But once you have added every possible module, cache, buffer, pipeline, peripheral, register bank… the bag of tricks is empty. All you can do is add another CPU, then two more, three, and so on. That's how we got the multi-core processors that today we all have in our smartphones (mine, which is 3 years old, has 8 ARM cores).

In this way, hardware manufacturers can keep up the illusion of linear performance growth.

As always happens with hardware design decisions, it is up to us poor software scribblers to save the bacon. We can no longer write sloppy code if we want it to run adequately when it launches (or the year after).

So we programmers have to exploit all those cores the hardware folks are throwing at us, unless we want to explain to our boss why the software performs exactly as it did on the previous generation of hardware. The way to do that, of course, is concurrency.

Concurrency is nothing new. We all used it in the past, if only to keep a progress bar moving on the screen while the real operation was running. The interesting question is why we used threads rather than processes. I think it is because interprocess communication is hard to design, since you can't rely on shared state, while threads all share the same memory, so you can hammer together in a short time, even without much design, something that seems to work.

In fact, to have two threads work together, just take a global variable, protect it with a mutex, and you are done.
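A minimal sketch of that recipe in C++ (the names and the numbers are mine, the pattern is the classic one):

```cpp
#include <iostream>
#include <mutex>
#include <thread>

// The shared state: a global counter guarded by a global mutex.
int counter = 0;
std::mutex counterMutex;

void worker()
{
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(counterMutex); // released at scope exit
        ++counter;                                      // safe: we hold the lock
    }
}

int main()
{
    std::thread t1(worker);
    std::thread t2(worker);
    t1.join();
    t2.join();
    // Always prints 200000; drop the lock_guard and the result becomes unpredictable.
    std::cout << counter << '\n';
}
```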

Then what happens to every piece of software happens here too – you hastily add features, fix bugs, and quickly find yourself in the concurrency jungle, where someone accesses the shared memory without going through the mutex, someone else forgets to release it, some threads starve because of priority differences, and finally everything deadlocks over resource contention.
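To make the jungle concrete, here is the classic deadlock in miniature (an example of my own, not from any real project): two threads take the same two locks in opposite order. Run it and it almost certainly hangs forever.

```cpp
#include <chrono>
#include <mutex>
#include <thread>

std::mutex a, b;

void one()
{
    std::lock_guard<std::mutex> la(a);
    std::this_thread::sleep_for(std::chrono::milliseconds(10)); // widen the race window
    std::lock_guard<std::mutex> lb(b); // waits for b, which 'two' already holds
}

void two()
{
    std::lock_guard<std::mutex> lb(b);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    std::lock_guard<std::mutex> la(a); // waits for a, which 'one' already holds
}

int main()
{
    std::thread t1(one), t2(two);
    t1.join(); // never returns: each thread holds what the other needs
    t2.join();
}
```

The cure is discipline (always take the locks in the same order, or use std::scoped_lock(a, b), which acquires both without deadlocking), but discipline is exactly what evaporates first in a hastily grown codebase.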

In this scenario, you manage to get along only because the project is simple enough, or because the number of threads is low, or because you are the only developer on the project and you have zen-monk-like powers to keep everything together.

The reason is clear – this way of dealing with concurrency relies on low-level tools. Tools that were invented back in the sixties of the last century, when our granddads were discovering the basics of concurrency, and when a large codebase counted a million LoC… in assembly (divide by ten to get a very rough C equivalent).

The industry took those tools and turned its back on academia, ignoring the newer and sounder solutions that were being formalized.

Yes, but what does FP have to do with all this? Most concurrency problems come from having to deal with shared state – many entities reading and writing the same state over and over. In fact, concurrency done with processes tends to have fewer problems precisely because no shared state is involved. The problem, as mentioned earlier, is that concurrency without shared state needs some design. So, what if we had a way to avoid changes to the shared state? What if the shared state were immutable? Hey, that sounds like FP!
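Here is a hedged sketch of that idea in C++ (the data and the function names are mine): the shared state is const, so any number of threads can read it concurrently with no mutex at all, and an "update" builds a new value instead of touching the old one.

```cpp
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Immutable shared state: nobody can write it, so everybody may read it freely.
const std::vector<int> data = {1, 2, 3, 4, 5};

// A pure function: same input, same output, no mutation of anything shared.
std::vector<int> doubled(const std::vector<int>& v)
{
    std::vector<int> out;
    out.reserve(v.size());
    for (int x : v)
        out.push_back(x * 2);
    return out; // a brand new value; 'data' is untouched
}

int main()
{
    long s1 = 0, s2 = 0;
    std::thread t1([&] { s1 = std::accumulate(data.begin(), data.end(), 0L); });
    std::thread t2([&] {
        auto d = doubled(data); // no lock needed: reading 'data' is always safe
        s2 = std::accumulate(d.begin(), d.end(), 0L);
    });
    t1.join();
    t2.join();
    std::cout << s1 << ' ' << s2 << '\n'; // always "15 30", no races possible
}
```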

That's why, IMO, FP is so relevant for programmers today: it is a natural companion to modern, highly parallel hardware. Of course, at some point something has to change for your program to do anything useful, but FP lays out solid and sound ways to do so.

It is like when unstructured programming ruled the newfound land of software development. Then, abruptly (well, maybe not that abruptly), structured programming was invented: by hiding the jump instruction behind a handful of fixed constructs, the resulting code became easier to read, to maintain, and to compile.
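A toy illustration of what "hiding the jump" means (an example of mine; the two functions compute the same thing):

```cpp
#include <iostream>

// Unstructured style: the jumps are explicit and go wherever they like.
int sum_with_goto(int n)
{
    int total = 0;
    int i = 1;
loop:
    if (i > n) goto done;
    total += i;
    ++i;
    goto loop;
done:
    return total;
}

// Structured style: the very same jumps, hidden behind a fixed construct.
int sum_with_for(int n)
{
    int total = 0;
    for (int i = 1; i <= n; ++i)
        total += i;
    return total;
}

int main()
{
    std::cout << sum_with_goto(10) << ' ' << sum_with_for(10) << '\n'; // 55 55
}
```

Immutability stands to mutable shared state much as the for loop stands to goto: a constraint that takes away a little freedom and gives back a lot of readability.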

I’m stopping here for this part.
