Tag: functional programming

Is Functional Programming functional to programming?

Some 25 years ago I took an advanced programming class that included exotic paradigms such as logic and functional programming.

Like most of the content of that course, FP went into a seldom-if-ever opened drawer during my professional career.

I simply never saw the need to use Lisp, nor had the luxury of affording never-changing values.

A few years ago (well, not that few – I just checked, it was 2005) I read on DDJ that the free lunch was over. That had nothing to do with charitable food banks, but it marked a fundamental twist in the evolution of computer performance – something that would have a strong impact on how software was going to be designed, written and tested.

Until 2005 it didn't matter how badly written and underperforming your code was: you were assured that, for free, just by waiting for the next hardware iteration, your code would run faster (bugs included)! This is like deflation in economics – why should I invest my money if keeping it in the mattress for a year increases its value?

Continue reading “Is Functional Programming functional to programming?”

Optimization Time – Scala (and C++)

The Case

It is quite a feeling when you learn that your commit, a couple of months ago, broke the build in such a subtle way that it took this long to be detected. Possibly more thorough testing and validation of the software would have caught it earlier; nonetheless it's there, and you and your coworkers are working hard to isolate the offending code and better understand what caused the mess.

It turned out that some perfectly working code was taking too much time in one of the hot spots of our codebase. More precisely, that code converted an incoming data packet into a format suitable for data processing… about 2500 times a second on modest hardware.

The code responsible for such poor performance – bad enough to disrupt the device functionality – seemed like a good idea, written in a very functional idiom –

@inline final val SAMPLES_PER_BLOCK = 6
@inline final val BLOCK_LENGTH = 8
@inline final val BLOCK_HEADER = 2

private def decodeFunctional(packet: Array[Byte]): Array[Int] = {
    // bytes 2..7 hold the low 8 bits of each of the six samples
    val lowSamples: Array[Int] = packet.drop(BLOCK_HEADER).map(x => x.toUint)

    // bytes 0..1 are a 12-bit header packing the two most significant bits of each sample
    val msbs: Int = ((packet(0).toUint << 8) | packet(1).toUint) & 0xFFF
    Array.tabulate(SAMPLES_PER_BLOCK)(
        x => lowSamples(x) | (((msbs << 2 * x) & 0xC00) >> 2)
    )
}
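(A note for the reader: toUint is not a standard method on Byte. Judging from the decompiled output further below, it comes from an implicit class named UnsignedByteOps; here is a minimal sketch of what it presumably looks like – my reconstruction, not necessarily the original source:)

// hypothetical reconstruction: extension converting the signed JVM Byte
// to its unsigned Int value; must be nested in an object or trait in Scala 2
implicit class UnsignedByteOps(private val b: Byte) extends AnyVal {
    def toUint: Int = b & 0xFF
}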

Basically the incoming packet holds six 10-bit samples in 8 bytes: the low 8 bits of each sample sit in bytes 2 to 7, while the two most significant bits of each sample are packed into a 12-bit header spread over bytes 0 and 1. Some bit-shifting'n'pasting is needed to rebuild the six-sample array for later processing.

Once I found that the problem was here, I decided to rewrite the code to get rid of the functional idiom and take an imperative/iterative approach:

private def decodeImperative(packet: Array[Byte]): Array[Int] = {
    // the code below is ugly, but it needs to be fast, don't touch,
    // unless you know what you are doing
    val samples: Array[Int] = new Array[Int](SAMPLES_PER_BLOCK)
    val msbs: Int = ((packet(0).toUint << 8) | packet(1).toUint) & 0xFFF
    var index: Int = 0
    while (index < SAMPLES_PER_BLOCK) {
        samples(index) = packet(BLOCK_HEADER + index).toUint |
                         (((msbs << 2 * index) & 0xC00) >> 2)
        index += 1
    }
    samples
}

(the comment is a colorful note I decided to leave for posterity :-)).

The idea is to get rid of the read-only constraint imposed by functional programming and to avoid all the temporaries.

Optimizing is a subtle art, and even if intuition may guide you, only measurement can tell whether an optimization succeeded. So I set up some speed tests (see "Methodology" below) to objectively assess my results.

The speed improvement was astounding: more than 7 times faster (7.6x to be precise).

Energized by this victory, I decided to switch to the heavy optimization weapons. Not being sure whether the Scala/Java compiler really does its optimization homework, I opted for manually unrolling the loop in the code:

private def decodeUnrolled0( packet: Array[Byte] ) : Array[Int] = {
    val PACKET0 : Int = packet(0).toUint
    val PACKET1 : Int = packet(1).toUint
    Array[Int](
        packet(BLOCK_HEADER+0).toUint | ((PACKET0 << 6) & 0x300),
        packet(BLOCK_HEADER+1).toUint | ((PACKET0 << 8) & 0x300),
        packet(BLOCK_HEADER+2).toUint | ((PACKET1 << 2) & 0x300),
        packet(BLOCK_HEADER+3).toUint | ((PACKET1 << 4) & 0x300),
        packet(BLOCK_HEADER+4).toUint | ((PACKET1 << 6) & 0x300),
        packet(BLOCK_HEADER+5).toUint | ((PACKET1 << 8) & 0x300)
    )
}

I was somewhat disappointed to learn that this version is only marginally faster (1.6x) than the original functional version. This didn't really make sense to me, since the loop is completely unrolled and only trivial computations need to be performed. So I decompiled the .class file:

public int[] CheckPerf$$decodeUnrolled0(byte[] packet)
{
  int PACKET0 = UnsignedByteOps(packet[0]).toUint();
  int PACKET1 = UnsignedByteOps(packet[1]).toUint();
  return (int[])Array..MODULE$.apply(Predef..MODULE$.wrapIntArray(new int[] {
    UnsignedByteOps(packet[2]).toUint() | PACKET0 << 6 & 0x300, 
    UnsignedByteOps(packet[3]).toUint() | PACKET0 << 8 & 0x300, 
    UnsignedByteOps(packet[4]).toUint() | PACKET1 << 2 & 0x300, 
    UnsignedByteOps(packet[5]).toUint() | PACKET1 << 4 & 0x300, 
    UnsignedByteOps(packet[6]).toUint() | PACKET1 << 6 & 0x300, 
    UnsignedByteOps(packet[7]).toUint() | PACKET1 << 8 & 0x300 }), ClassTag..MODULE$.Int());
}

Instead of the Java array I expected, some Scala internals are used to create the array and then convert it into the Java thing. Possibly this was the cause of the slowness. By looking around the decompiled code I found another function that used a plain Java int array, so I rewrote the unrolled version accordingly –

private def decodeUnrolled1( packet: Array[Byte] ) : Array[Int] = {
    val samples = new Array[Int](SAMPLES_PER_BLOCK)
    val PACKET0 = packet(0).toUint
    val PACKET1 = packet(1).toUint
    samples(0) = packet(BLOCK_HEADER + 0).toUint | ((PACKET0 << 6) & 0x300)
    samples(1) = packet(BLOCK_HEADER + 1).toUint | ((PACKET0 << 8) & 0x300)
    samples(2) = packet(BLOCK_HEADER + 2).toUint | ((PACKET1 << 2) & 0x300)
    samples(3) = packet(BLOCK_HEADER + 3).toUint | ((PACKET1 << 4) & 0x300)
    samples(4) = packet(BLOCK_HEADER + 4).toUint | ((PACKET1 << 6) & 0x300)
    samples(5) = packet(BLOCK_HEADER + 5).toUint | ((PACKET1 << 8) & 0x300)
    samples
}

The main difference is that here the array is first declared and allocated, then filled. In the above code the array was created, initialized and returned all in the same statement.

The speed improvement was good, but not much better than my imperative version: 7.77x faster.

Then a colleague pointed out that I was using the "old" Scala 2.10 compiler and that I should try the latest 2.12, which benefits from better interoperability with the underlying JVM 1.8.

In the following table you get the performance comparison:

[Table 1: performance of each version on Scala 2.10 and Scala 2.12, relative to the Scala 2.10 functional version – the original table data was lost.]

Functional and unrolled0 are quite unsurprising – just what you expect from the next version of the compiler. The imperative approach yields quite a boost – 22 times faster than the functional version on the previous compiler. The shocking surprise is in the last column – the unrolled1 version of the code runs in excess of 1200 times faster than the original code!

Before jumping to conclusions, I performed the same tests on the code translated into C++. Here is the equivalent code:

#include <array>      // std::array
#include <algorithm>  // std::copy, std::generate
#include <cstdint>    // uint8_t, uint32_t

constexpr int SAMPLES_PER_BLOCK = 6;
constexpr int BLOCK_LENGTH = 8;
constexpr int BLOCK_HEADER = 2;

typedef std::array<int,SAMPLES_PER_BLOCK> OutputArray;
typedef std::array<uint8_t,BLOCK_LENGTH> InputArray;

OutputArray decodeImperative( InputArray const& packet )
{
    OutputArray samples;
    uint32_t msbs = ((packet[0] << 8) | packet[1]) & 0xFFF;
    for( int index=0; index< SAMPLES_PER_BLOCK; ++index )
    {
        samples[index] = packet[BLOCK_HEADER + index] |
                         (((msbs << 2 * index) & 0xC00) >> 2);
    }
    return samples;
}

OutputArray decodeFunctional( InputArray const& packet )
{
    OutputArray lowSamples;
    std::copy( packet.begin()+2, packet.end(), lowSamples.begin() );
    uint32_t msbs = ((packet[0] << 8) | packet[1] ) & 0xFFF;
    OutputArray samples;

    int index=0;
    OutputArray::const_iterator scan = lowSamples.begin();
    std::generate(
        samples.begin(),
        samples.end(),
        [&index,&scan,msbs]()
        {
            return *scan++ | (((msbs << 2*index++) & 0xC00) >> 2);
        }
    );
    return samples;
}

OutputArray decodeUnrolled0( InputArray const& packet )
{
    uint8_t PACKET0 = packet[0];
    uint8_t PACKET1 = packet[1];

    return OutputArray{
        packet[BLOCK_HEADER+0] | ((PACKET0 << 6) & 0x300),
        packet[BLOCK_HEADER+1] | ((PACKET0 << 8) & 0x300),
        packet[BLOCK_HEADER+2] | ((PACKET1 << 2) & 0x300),
        packet[BLOCK_HEADER+3] | ((PACKET1 << 4) & 0x300),
        packet[BLOCK_HEADER+4] | ((PACKET1 << 6) & 0x300),
        packet[BLOCK_HEADER+5] | ((PACKET1 << 8) & 0x300)
    };
}

OutputArray decodeUnrolled1( InputArray const& packet )
{
    OutputArray samples;
    uint8_t PACKET0 = packet[0];
    uint8_t PACKET1 = packet[1];

    samples[0] = packet[BLOCK_HEADER+0] | ((PACKET0 << 6) & 0x300);
    samples[1] = packet[BLOCK_HEADER+1] | ((PACKET0 << 8) & 0x300);
    samples[2] = packet[BLOCK_HEADER+2] | ((PACKET1 << 2) & 0x300);
    samples[3] = packet[BLOCK_HEADER+3] | ((PACKET1 << 4) & 0x300);
    samples[4] = packet[BLOCK_HEADER+4] | ((PACKET1 << 6) & 0x300);
    samples[5] = packet[BLOCK_HEADER+5] | ((PACKET1 << 8) & 0x300);
    return samples;
}

I'm positive that functional purists will scream in horror at the sight of the C++ functional version of the code, but it is the closest thing I could do using the standard library. Should you have any idea of how to improve the functional idiom using the standard library, let me know and I'll update my test set.

Here are the C++ results –

[Table 2: C++ results for the same four versions, compared with the Scala timings – the original table data was lost.]

This is mostly unsurprising except for the lower-right cell, which seems to indicate that the Scala version is faster than C++. I guess the reason is that I am using timing values rebased by subtracting a fixed price you pay for executing the code under test – a price I don't want to include in my measurements (see below for the methodology I used). My interpretation is that in Scala the overhead hides the cost of a trivial expression, so the base time is much closer to the execution time than it is in C++.

Conclusions

Drawing general conclusions from a specific case is anything but easy and trivial. Anyway, according to both the data and intuition, if you want to code using the functional idiom you pay a price that can sometimes be hefty. Paying an execution price, be it memory or CPU cycles, to raise the abstraction level is something that has been with computer programming since the dawn of this craft. I remember echoes of the rants against high-level languages, in favor of assembly, to avoid paying a 30% penalty. Those times are gone, but facing a 4x-7x overhead is not always something you can afford.

(Whether employing the functional idiom actually raises the level of abstraction is itself disputed, but I will spare this for another post.)

Writing efficient code is not always intuitive. Constructs that seem innocent enough turn out to be compiled into time-consuming lower-level code. Peeking at the compiled code is always useful. While languages such as C++ are accompanied by detailed information about the performance of the various constructs and library components, Scala (like Java) lacks this information. Also, the huge amount of syntactic sugar employed by Scala does not simplify the life of the programmer who has to optimize code.

Finally, I cannot help remarking that the performance of interpreted languages (and even more so of functional interpreted languages), despite the progress in JVM technologies, is still way behind that of native languages. Even employing a functional approach, C++ is some 80 times faster than Scala. You can argue that the C++ code above is not really functional, but there is nothing preventing me from using a mostly functional library for C++, with roughly the same performance as the functional version of the C++ code. It could be more tedious to write because of the lesser grade of syntactic glucose in C++, yet it would still be functional.

I’m curious to see how scala-native compares, but this again is a topic for another post.

The last thought I want to report here is that you can make Scala code much faster by avoiding the functional paradigm and peeking at the compiled code. It is something you are not usually expected to do, but an improvement of 1200 times is a worthy gain for the most executed code.

A very last thought for those who think that, after all, nowadays we have such powerful computers that analyzing performance and optimizing is a waste of time. I would like to remind you that data processing requires energy, and today energy production mostly relies on processes that produce CO2, something that the world doesn't need. A data processing task that runs at twice the speed of another will produce half the carbon dioxide emissions. The environmental benefits of switching to C and C++ should be clear to everyone.

Methodology

A couple of words on methodology. The compilers were Scala 2.10.6, Scala 2.12.2, and GNU C++ 7.3.1. All the tests were performed on Linux (Fedora Core 27) using /bin/time --format="%U %S" to print the user and kernel times of the command, summing the two values together. These are the times the CPU spent executing the process, not wall-clock time. The decoding function is executed 0x7FFFFFFF times (some 2 billion) per run, and each program is tested for 12 runs. The fastest and the slowest runs are discarded and the others are used to compute an average, which is taken as the measure of the test.

The base time is computed by employing a dummy decoder that does nothing but filling the result array with constants.

This setup should take into account the JIT effect and measure just the code I listed above.
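For concreteness, the test driver looks something like the sketch below (the object name and the packet contents are my assumptions, not the original harness, and one of the decode functions above is assumed to be in scope); each build is then launched twelve times under /bin/time --format="%U %S":

object CheckPerf {
    def main(args: Array[String]): Unit = {
        val packet = Array[Byte](0x0F, 0x3A, 1, 2, 3, 4, 5, 6)
        var acc = 0
        var i = 0
        while (i != 0x7FFFFFFF) {              // some 2 billion iterations
            acc ^= decodeImperative(packet)(0) // consume the result so the call can't be elided
            i += 1
        }
        println(acc)                           // keep the JIT from removing the loop entirely
    }
}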

Lambda World 2017 – Lambda Core, hardcore

Despite the title, which is a bit obscure, this talk was very clear and entertaining. The speaker Jarek Ratajski (@jarek000000), who defines himself as a functional programmer, wizard, and anarchitect, definitely knows his art and how to make an engaging speech.

Btw, an anarchitect is a person who breaks the rules so that something can eventually get done.

As usual, mistakes, errors, and inaccuracies are all my fault. One last note: you need Scala 2.12 to compile the code presented here; older versions are likely to fail (the code relies on Scala 2.12's ability to treat a single-abstract-method trait such as Lambda below as the target type of a function literal).

I really recommend watching the presentation video on YouTube.


Let’s start with a question for the attendees – you are on a desert island and you can take just one thing from the civilized world. What would you like to have to survive?

[someone from the audience shouts “wifi”]

Well, you need Math. Math is the base for everything, but it is huge! So maybe we can find a small subset, a core, that allows us to recreate the whole of Math.

Lambda is a building block of Math. Using lambda calculus we can build – one step at a time – the whole of Math. Historically, Math has been built using sets; unfortunately, sets ran into some paradoxes, so mathematicians resorted to this simpler mechanism.

So lambda (λ) is a function that takes some lambda as a parameter and produces another lambda. A lambda is written as:

λ x. some expression

A λ x is applied to λ y with the following notation –

x(y)

To a Scala developer:

trait Lambda {
    def apply(x: Lambda) : Lambda
}

Now suppose we are on a deserted island with just our Scala compiler. Just the compiler – no jars, no collections, no scala utils, no Boolean, Int, Float, Double, String… anything at all.

The only exception to this nothingness is the badlam package, used to simplify the presentation of concepts.

Let’s define the identity, which is a function that returns the function accepted as an argument:

λ x. x

In Scala it turns out to be:

val identity : Lambda = (x) => x

val id2 = identity(identity)

Now we can define a function that accepts multiple arguments (via currying) and always returns the first one:

val funny: Lambda = (x) => (y) => x

Note that names are not important, but the structure is. The “funny” function structure can be used to implement true and false concepts:

val aTrue = funny
val aFalse : Lambda = (x) => (y) => y

Note that aFalse is complementary to aTrue. Now that we have true and false symbols, we can define the boolean operators:

val and : Lambda = (p) => (q) => p(q)(p)

You can read this as: and is a function that, via currying, accepts two arguments, p and q. Consider p: it can be either true or false. If it is true, its lambda evaluates to its first argument, otherwise to the second. So, when p is true, the and expression evaluates to q, meaning and is true when both arguments are true. If p is false, then p itself is returned – no need to consider q.

Let’s check with Scala:

val result1 : Lambda = and (aFalse)(aTrue)

This evaluates to aFalse, while

val result2:Lambda = and (aTrue)(aTrue)

This evaluates to aTrue.

The or function is based on the same scheme used by and:

val or:Lambda = (p) => (q) => p(p)(q)

To define not is a matter of reversing the true/false definition:

val not: Lambda = (p) => (a) => (b) => p(b)(a)

So far, so good: we have defined all the tools we need for boolean calculus. Let's see if we can express numbers. One approach could be:

val zero: Lambda = (f) => (x) => x
val one: Lambda = (f) => (x) => f(x)
val two: Lambda = (f) => (x) => f(f(x))
val three: Lambda = (f) => (x) => f(f(f(x)))

So numbers are functions (of course) that accept a function and an argument and return zero or more applications of the function to the argument.

Zero applies f no times, returning its argument unchanged. One is a single application of f to x. Two is the application of f to the result of applying f to x, and so on…

This may work, but it would be boring to encode all numbers this way. A function that computes the successor of a given number is a good tool to lessen the burden:

val succ:Lambda = (n) => (f) => (x) => f(n(f)(x))

[NdM – well, this is mind-boggling and I had a hard time decoding it. Conceptually it is simple – just remember that an integer n is represented by a lambda that accepts a function f and an argument x and returns the composition of n nested applications of f over x. So replace x with f(x) and you are done.]
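[NdM – a worked reduction, mine and not from the talk, may help:]

succ(one)
  = (f) => (x) => f(one(f)(x))   // substitute n := one
  = (f) => (x) => f(f(x))        // one(f)(x) reduces to f(x)
  = two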

Now we can define addition and multiplication

val plus: Lambda = (m) => (n) => (f) => (x) => m(f)(n(f)(x))
val mult:Lambda = (m) => (n) => (f) => m(n(f))

[NdM – It takes a while to digest these two as well. Sum is intuitively easy; take 3+2=5: in lambda terms, (x => f(f(f(x)))) + (x => f(f(x))) = (x => f(f(f(f(f(x)))))). You may read plus as (m, n, f) => (x) => m(f)(n(f)(x)), that is, a function that takes two operands and one function. Remember that m represents an integer by applying a function m times to an operand. So pass f in as the operand and you have m applications of f; now apply this function to an argument that is n applications of f to x.]
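[NdM – again, a quick reduction of my own as a check:]

plus(two)(three)
  = (f) => (x) => two(f)(three(f)(x))   // two more applications of f on top of three's
  = (f) => (x) => f(f(f(f(f(x)))))      // five applications of f: five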

[NdM – Multiplication is similar; x is omitted to keep the code terse, but you can read it as:

val mult:Lambda = (m) => (n) => (f) => (x) => m(n(f))(x)

Keeping the sum approach in mind, this is similar – it applies m times a function that is composed of n applications of f to x. It took me quite a neuron effort to figure this out.]

Predecessor can be defined as:

val pred: Lambda = (n) => (f) => (x) => n((g) => (h) => h(g(f)))((u) => x)((u) => u)

[NdM. I tried hard to understand this, and at first I couldn't wrap my mind around it. I think that the last part, (u) => u, being an identity, is used to skip one application in the application list, therefore reducing the count by 1…]

[NdM. I found this thorough explanation of the predecessor on Stack Overflow.]
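[NdM – my own sanity check: reducing pred(one), with Φ standing for (g) => (h) => h(g(f)):]

pred(one)
  = (f) => (x) => one(Φ)((u) => x)((u) => u)
  = (f) => (x) => Φ((u) => x)((u) => u)      // one applies Φ exactly once
  = (f) => (x) => ((u) => u)(((u) => x)(f))  // unfold Φ with g = (u) => x, h = (u) => u
  = (f) => (x) => x                          // ((u) => x)(f) discards f and yields x
  = zero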

Now we can do something even more interesting – conditionals:

val ifLambda: Lambda = (c) => (t) => (f) => c(t)(f)((x) => x)
val isZero: Lambda = (n) => n((x) => aFalse)(aTrue)
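[NdM – a quick check of my own: since zero applies f no times, isZero(zero) reduces to aTrue and ifLambda picks the first branch:]

val sel: Lambda = ifLambda(isZero(zero))((x) => one)((x) => two)
// sel evaluates to one: the branches are wrapped in dummy-argument thunks,
// and ifLambda forces the selected one by applying it to the identity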

Recursion would be a nice addition to the computing arsenal of lambda calculus, since it allows the expression of iterative algorithms. But how are we supposed to have a function call itself when lambda functions are anonymous?

First, let’s define a function that makes its argument call itself:

val autocall:Lambda = (x) => x(x)

Then we need a helper function that makes something call itself (the famous Y combinator – strictly speaking the Z combinator, since the (v) => y(y)(v) eta-expansion is what keeps eagerly evaluated Scala from recursing forever):

val Y: Lambda = (f) => autocall((y) => f((v) => y(y)(v)))

And finally, we define a function containing the body of the recursive function:

val G:Lambda = (r) => (b) =>
  (ifLambda(isZero(n))
    ((x) => one )
    ((x) => mult(n)(r(pred(n))))
  )

Now we have everything to recursively compute a factorial:

val fact = Y(G) // this makes G recursive
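[NdM – and it works; a check of mine using the dec converter defined at the end of this post:]

val six = fact(three)
// dec(six) == 6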

[NdM: Once more I had a hard time trying to understand this part. It makes sense intuitively, but I'm lost in details such as the function Y. The overall idea is pretty well laid out.]

Turing Equivalent?
Lambda calculus has been proved to be Turing-equivalent, meaning that every algorithm that can be implemented on a Turing Machine can also be implemented using lambda calculus. Therefore, you have no excuse: everything can be done in a purely functional way!

Equivalence

In mathematics there are a lot of problems; an interesting one concerns theorems.

For a mathematician, a theorem is something like

if ( condition(args)) then statement(args)

That is, if something holds, then something else is true. It would be nice to build a machine that automatically checks this. This is what mathematicians were looking for about a century ago – a machine that would render them useless by proving theorems by itself.

In the same way we used lambda calculus to represent numbers, booleans, and statements, we could rewrite a theorem as a lambda calculus expression and then execute it to see whether it yields true or false, determining whether the theorem holds.

This could be such a machine:

def main( args: Array[String]) : Unit = {
    val autocall: Lambda = x => x(x)
    println( SmartDisplay.web.display(autocall))
    val OMEGA = autocall(autocall)
}

[NdM: even after watching the YouTube video of the conference, I can't tell where this function comes from. I believe Jarek, but I have no explanation of why this function should prove theorems.]

That would be awesome; regrettably, it doesn't work. In fact, the autocall function is invoked on autocall itself, causing a stack overflow (OMEGA above is the famously divergent Ω term). This is generally what happens when you try to analyze a lambda function for equivalence.

This fact was proved in 1936 by Alonzo Church: no computable function can decide the equivalence of two different lambda functions.

Despite this, there are two guys on Stack Overflow who are trying to do exactly this in C#.

Lambda calculus is convenient, but it is subject to Kurt Gödel's incompleteness theorem: for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true but not provable in the theory.

In other words, it doesn't matter how cool your formalism is: you still can't fully automate an algorithm that proves the properties of another algorithm.

So this is what a group of mathematicians, led by Alonzo Church, devised nearly a century ago in search of more elegant mathematics.

What I presented today is called Untyped Lambda Calculus. Take plus and try to add true and factorial together: it doesn't make sense. So, using this notation you can write well-formed expressions that nonetheless make no sense.

Since not every expression you can write makes sense, it is the programmer's duty to write and use only those that do.

Typed lambda calculus checks correctness for you. Untyped lambda calculus is very much like JavaScript, in which you can write anything and nothing is checked for type correctness.

I haven't talked about lazy/eager evaluation in lambda calculus. This is a very complicated issue, even if a fun one. If you do lambda calculus on paper you will notice that sometimes you need to evaluate lazily and other times eagerly.

Usually, when you read about lambda calculus in Scala, you don't find what I have shown you; you find something else – lambda calculus done on Scala's type system:

sealed trait True extends Bool {
    type If[ T <: Up, F <: Up, Up ] = T
}

sealed trait False extends Bool {
    type If[T <: Up, F <: Up, Up] = F
}

This is performed not at run time but at compile time. This is awesome because the compiler does all the calculations and produces such fine errors that you can't even figure out where they end.
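[NdM: the snippet leaves the Bool base trait out; here is a minimal definition – my guess at it – and a usage sketch, to be pasted inside an object:]

sealed trait Bool {
    type If[T <: Up, F <: Up, Up] <: Up
}

// with True and False as defined above, the compiler selects a type:
type Result = True#If[Int, String, Any]   // Result is Int
implicitly[Result =:= Int]                // compiles; would fail with False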

Sources

  • Wikipedia
  • Blog (C# version): Dixin’s blog – the post from which I took inspiration for this talk.
  • Book: Roger Penrose, “The Emperor’s New Mind” – not only about lambda calculus but about computation in general; reading it will help you better grasp the subject.
  • Lambda visualization: Badlam – nothing special, just a package to help the presentation; it is known to work only in the presentation and not elsewhere.
  • This same presentation in Java – the original work: this presentation was first done in Java (for fun and drinking beer, of course).

NdM: that's the end of the presentation. Here I add the fragment of code I used to check the examples. If you prefer you can use Badlam; the code below is in Scala.

A function to convert lambda numbers to decimal –

def dec( y : Lambda ) : Int = {
    case class Counter( counter: Int) extends Lambda {
        def apply( x: Lambda ) : Lambda = {
            this
        }
    }
    case class Fun() extends Lambda {
        def apply( x: Lambda ) : Lambda = {
            Counter( x.asInstanceOf[Counter].counter+1 )
        }
    }

    y( Fun() )(Counter(0)).asInstanceOf[Counter].counter
}

The trick is to give the function f the semantics of incrementing its argument by 1. If the argument has type Int and counting starts from zero, the conversion is done.
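For example, assuming the definitions from the presentation are in scope:

dec(three)              // 3
dec(succ(three))        // 4
dec(plus(two)(three))   // 5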

The following code is just a pretty printer for lambdas that are not integers:

def del( x : Lambda ) : Unit = {
    x match {
        case `aTrue` => println( "true" )
        case `aFalse` => println( "false" )
        case `and` => println("<and>")
        case `or` => println("<or>")
        case `not` => println("<not>")
        case _ => println("?")
    }
}

(where aTrue, aFalse, and all the other symbols are defined as in Jarek’s presentation.)

Lambda World 2017 – Unconference – Category Theory Crash Course

Here I am in Cadiz to attend the 2017 edition of Lambda World. Cadiz is really a nice frame for this conference, and the weather is so good that it feels like being back on vacation rather than in mid-autumn.

Anyway, this morning I attended the "unconference", a sort of informal prelude to the real conference. After the first, planned talk, people could register to present a talk, and the other attendees could vote for their preferred ones. This yielded two interesting talks – one on declarative testing and the other on category theory. Here is what I learned from the latter, by Rúnar Bjarnason.

First, this is algebra stuff, really. Not the kind of ‘a+b’ you do in middle school, but the higher-order stuff you do later when you study groups, monoids, and the like.

How these concepts apply to practical programming still eludes me, but I sense that something is there, waiting for my brain to grasp it.

Basically, an abstract algebra of functions wants to answer the following questions – what do functions do? And what can we say about them? Here we go; all mistakes, errors, and inconsistencies are mine alone – blame me, not Rúnar.

A Category is defined by objects and arrows and how we can compose those arrows.

If you consider a language's types as objects, then functions can be thought of as arrows from one object to another:

Some types: A, B, C.
Some functions f, g.

f: A => B
g: B => C

A composite function is formed by invoking a function on the result of another one:

g compose f = (x => g(f(x)))

As long as the types line up, we can keep chaining compositions.
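[NdM: a concrete Scala instance of mine, with Int and String as the objects:]

val f: Int => String = x => x.toString   // f: A => B
val g: String => Int = s => s.length     // g: B => C
val h: Int => Int = g compose f          // h(x) == g(f(x)): an arrow A => C
// h(42) == 2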

If a function links an object to itself (in the example, a type to itself), it is the identity function:

identity function: identity[A] : A=>A
identity compose f = f
f compose identity = f

Sometimes composition is expressed with a dot. An interesting property is that composition is associative:

h.g.f = (h.g).f = h.(g.f)

(so no need for parentheses).

In Scala you have a category of functions and types, but also one of types and inheritance (subtyping). This category is defined as follows:

Objects: Scala Types: A, B, C…
Arrows: Subtype relations: A <: B, B <: C…
Composition: [ A <: B, B <: C ] => A <: C
Associativity: A <: B <: C (no need for parenthesis)
Identity: A <: A

Another example of a category is the preorder (<=). Objects are instances of a type on which the <= binary relationship is defined:

Objects a, b, c…
Arrows: a <= b, b <= c
Transitivity: <=
Identity: a <= a

E.g., the natural numbers with <= form such a category.

Monoids are categories with one object. This object, by convention, is written as ‘*’. All arrows link * to itself. In a turn of notation I’ll use the symbol |+| for composition and ‘mzero’ for identity.

Objects: *
Arrows: f, g, h: * -> *
associativity
identity: * -> *
mzero |+| a = a
a |+| mzero = a

The object is just an anchor to hold arrows.
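[NdM: a familiar instance of mine – strings form a monoid under concatenation, with the empty string as mzero:]

val mzero: String = ""
def op(a: String, b: String): String = a + b        // |+|
// op(op("a", "b"), "c") == op("a", op("b", "c"))   // associativity
// op(mzero, "x") == "x" and op("x", mzero) == "x"  // identity laws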

Category of preorders:
A preorder is a set with <=
Objects: all the preorders (Scala integers)
arrows: Homomorphisms on preorders

[NdM: at this point things start to get complicated and I have likely lost the speaker. What follows are glimpses of his talk… the parts I got.]

Monotonic functions.
e.g. string length : _.length is a monotonic function.
Preserves relationship:
if x <= y then h(x) <= h(y)

Category of monoids:
Objects: all the monoids
Arrows: monoid homomorphism
h( x |+| y) == h(x) |+| h(y)

Category of categories
Objects: all the categories
Arrows: category homomorphism F
F( f . g ) = F(f) . F(g), e.g. _.map( f compose g ) = _.map(f) compose _.map(g)
These are functors.
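[NdM: e.g., checking the law on List – my example:]

val f: Int => Int = _ + 1
val g: Int => Int = _ * 2
assert(List(1, 2, 3).map(f compose g) == List(1, 2, 3).map(g).map(f)) // both List(3, 5, 7)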

Given a category C, there is a category of endofunctors on C
endofunctor F[_]
Objects: functors
arrows: functor homomorphism
h: F[A] => G[A]
h compose _.map(f) = _.map(f) compose h

trait ~>[F[_], G[_]] { def apply[A](a: F[A]): G[A] }

A monad on C is an endofunctor M[_] together with two natural transformations:

pure[A] : A=>M[A]
join[A] : M[M[A]] => M[A]
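[NdM: for Option – my example – these two are:]

def pure[A](a: A): Option[A] = Some(a)
def join[A](mma: Option[Option[A]]): Option[A] = mma.flatten
// join(pure(pure(42))) == Some(42)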

A monad is a kind of category:
M[M[M[A]]] can be associated any way we want, preserving the result.

A monad is a monoid in an endofunctor category.

[And with this truth the brief talk is over]