## Map fusion: Making Haskell 225% faster

Posted by Eric Kidd Sat, 10 Feb 2007 09:55:00 GMT

Or, how to optimize MapReduce, and when folds are faster than loops

Purely functional programming might actually be worth the pain, if you care about large-scale optimization.

Lately, I’ve been studying how to speed up parallel algorithms. Many parallel algorithms, such as Google’s MapReduce, have two parts:

1. First, you transform the data by mapping one or more functions over each value.
2. Next, you repeatedly merge the transformed data, “reducing” it down to a final result.
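On an ordinary list, those two phases look like this (a minimal sketch; `mapReduceList` and `sumOfSquares` are illustrative names, not part of any real MapReduce API):

```haskell
-- Phase 1: map a transformation over every value.
-- Phase 2: fold ("reduce") the results down to a single answer.
mapReduceList :: (a -> b) -> (b -> b -> b) -> b -> [a] -> b
mapReduceList f combine zero xs = foldr combine zero (map f xs)

-- Example: sum of squares.
sumOfSquares :: [Int] -> Int
sumOfSquares = mapReduceList (^2) (+) 0
```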

Unfortunately, there are a couple of nasty performance problems lurking here. We really want to combine all those steps into a single pass, so that we can eliminate temporary working data. But we don’t always want to do this optimization by hand—it would be better if the compiler could do it for us.

As it turns out, Haskell is an amazing testbed for this kind of optimization. Let’s build a simple model, show where it breaks, and then crank the performance way up.

### Trees, and the performance problems they cause

We’ll use single-threaded trees for our testbed. They’re simple enough to demonstrate the basic idea, and they can be generalized to parallel systems. (If you want to know how, check out the papers at the end of this article.)

A tree is either empty, or it is a node with a left child, a value and a right child:

```haskell
data Tree a = Empty
            | Node (Tree a) a (Tree a)
  deriving (Show)
```

Here’s a sample tree containing three values:

```haskell
tree = (Node left 2 right)
  where left  = (Node Empty 1 Empty)
        right = (Node Empty 3 Empty)
```

We can use `treeMap` to apply a function to every value in a tree, creating a new tree:

```haskell
treeMap :: (a -> b) -> Tree a -> Tree b

treeMap f Empty = Empty
treeMap f (Node l x r) =
  Node (treeMap f l) (f x) (treeMap f r)
```

Using `treeMap`, we can build various functions that manipulate trees:

```haskell
-- Double each value in a tree.
treeDouble tree = treeMap (*2) tree

-- Add one to each value in a tree.
treeIncr tree   = treeMap (+1) tree
```

What if we want to add up all the values in a tree? Well, we could write a simple recursive sum function:

```haskell
treeSum Empty = 0
treeSum (Node l x r) =
  treeSum l + x + treeSum r
```

But for reasons that will soon become clear, it’s much better to refactor the recursive part of `treeSum` into a reusable `treeFold` function (“fold” is Haskell’s name for “reduce”):

```haskell
treeFold f b Empty = b
treeFold f b (Node l x r) =
  f (treeFold f b l) x (treeFold f b r)

treeSum t = treeFold (\l x r -> l+x+r) 0 t
```
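The payoff of this refactoring is that `treeFold` captures the entire recursion pattern: any reduction over a tree is just one combining function away. For instance (`treeSize` and `treeToList` are illustrative helpers, not from the article; the `Tree` and `treeFold` definitions are repeated so the block stands alone):

```haskell
data Tree a = Empty
            | Node (Tree a) a (Tree a)

treeFold :: (b -> a -> b -> b) -> b -> Tree a -> b
treeFold f b Empty        = b
treeFold f b (Node l x r) = f (treeFold f b l) x (treeFold f b r)

-- Count the nodes in a tree.
treeSize :: Tree a -> Int
treeSize = treeFold (\l _ r -> l + 1 + r) 0

-- Flatten a tree into an in-order list.
treeToList :: Tree a -> [a]
treeToList = treeFold (\l x r -> l ++ [x] ++ r) []
```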

Now we can double all the values in a tree, add 1 to each, and sum up the result:

```haskell
treeSum (treeIncr (treeDouble tree))
```

But there’s a very serious problem with this code. Imagine that we’re working with a million-node tree. The two calls to `treeMap` (buried inside `treeIncr` and `treeDouble`) will each create a new million-node tree. Obviously, this will kill our performance, and it will make our garbage collector cry.
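To see what we’re aiming for, here is the fusion done by hand: a single fold that doubles, increments, and sums in one pass, with no intermediate trees. (`treeSumDoubleIncr` is a made-up name for this sketch; the definitions are repeated so the block is self-contained.)

```haskell
data Tree a = Empty
            | Node (Tree a) a (Tree a)

treeFold :: (b -> a -> b -> b) -> b -> Tree a -> b
treeFold f b Empty        = b
treeFold f b (Node l x r) = f (treeFold f b l) x (treeFold f b r)

-- One pass over the tree: apply (*2), then (+1), then add.
-- No million-node temporaries for the garbage collector to mourn.
treeSumDoubleIncr :: Num a => Tree a -> a
treeSumDoubleIncr = treeFold (\l x r -> l + (x * 2 + 1) + r) 0
```

Writing this by hand works, but it scales badly: every new pipeline of maps and folds needs its own hand-fused version.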

Fortunately, we can do a lot better than this, thanks to some funky GHC extensions.
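(A preview, if you want to peek ahead: the key extension is GHC’s rewrite rules. The sketch below is my guess at the simplest possible rule, not necessarily the article’s exact code; the `"treeMap/treeMap"` rule tells the compiler that two consecutive maps can always be collapsed into one, and it only fires when compiling with optimization enabled.)

```haskell
data Tree a = Empty
            | Node (Tree a) a (Tree a)
  deriving (Show, Eq)

treeMap :: (a -> b) -> Tree a -> Tree b
treeMap f Empty        = Empty
treeMap f (Node l x r) = Node (treeMap f l) (f x) (treeMap f r)
{-# NOINLINE treeMap #-}

-- Tell GHC: two consecutive treeMaps may be rewritten into one,
-- turning two tree traversals into a single pass.
{-# RULES
"treeMap/treeMap" forall f g t.
    treeMap f (treeMap g t) = treeMap (f . g) t
  #-}
```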

Posted by Eric Kidd Mon, 22 Jan 2007 08:32:00 GMT

Yesterday, I was working on a Haskell program that read in megabytes of data, parsed it, and wrote a subset of the data back to standard output. At first it was pretty fast: 7 seconds for everything.

But then I made the mistake of parsing some floating point numbers, and printing them back out. My performance died: 120 seconds.

You can see similar problems at the Great Language Shootout. Haskell runs at half the speed of C for many benchmarks, then suddenly drops to 1/20th for others.

Here’s what’s going on, and how to fix it.

(Many thanks to Don Stewart and the other folks on #haskell for helping me figure this out!)