# Map fusion: Making Haskell 225% faster

**Or, how to optimize MapReduce, and when folds are faster than loops**

Purely functional programming might actually be worth the pain, if you care about large-scale optimization.

Lately, I've been studying how to speed up parallel algorithms. Many parallel algorithms, such as Google's MapReduce, have two parts:

- First, you transform the data by mapping one or more functions over each value.
- Next, you repeatedly merge the transformed data, "reducing" it down to a final result.

Unfortunately, there are a couple of nasty performance problems lurking here. We *really* want to combine all those steps into a single pass, so that we can eliminate temporary working data. But we don't always want to do this optimization by hand---it would be better if the compiler could do it for us.

As it turns out, Haskell is an amazing testbed for this kind of
optimization. Let's build a simple model, show where it breaks, and then
crank the performance *way* up.

### Trees, and the performance problems they cause

We'll use single-threaded trees for our testbed. They're simple enough to demonstrate the basic idea, and they can be generalized to parallel systems. (If you want to know how, check out the papers at the end of this article.)

A tree is either empty, or it is a node with a left child, a value and a right child:

```
data Tree a = Empty
            | Node (Tree a) a (Tree a)
  deriving (Show)
```

Here's a sample tree containing three values:

```
tree = Node left 2 right
  where
    left  = Node Empty 1 Empty
    right = Node Empty 3 Empty
```
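Since we derived `Show`, we can sanity-check the structure directly. Here's a quick sketch (repeating the definitions above so it stands alone):

```haskell
-- Repeating the definitions above so this snippet stands alone.
data Tree a = Empty
            | Node (Tree a) a (Tree a)
  deriving (Show)

tree :: Tree Int
tree = Node left 2 right
  where
    left  = Node Empty 1 Empty
    right = Node Empty 3 Empty

-- ghci> tree
-- Node (Node Empty 1 Empty) 2 (Node Empty 3 Empty)
```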

We can use `treeMap` to apply a function to every value in a tree, creating a new tree:

```
treeMap :: (a -> b) -> Tree a -> Tree b
treeMap f Empty        = Empty
treeMap f (Node l x r) =
  Node (treeMap f l) (f x) (treeMap f r)
```

Using `treeMap`, we can build various functions that manipulate trees:

```
-- Double each value in a tree.
treeDouble tree = treeMap (*2) tree

-- Add one to each value in a tree.
treeIncr tree = treeMap (+1) tree
```

What if we want to add up all the values in a tree? Well, we could write a simple recursive sum function:

```
treeSum Empty        = 0
treeSum (Node l x r) =
  treeSum l + x + treeSum r
```

But for reasons that will soon become clear, it's much better to refactor the recursive part of `treeSum` into a reusable `treeFold` function ("fold" is Haskell's name for "reduce"):

```
treeFold f b Empty        = b
treeFold f b (Node l x r) =
  f (treeFold f b l) x (treeFold f b r)

treeSum t = treeFold (\l x r -> l+x+r) 0 t
```
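To see why the fold is worth factoring out, notice that *any* whole-tree summary is now a one-liner. A self-contained sketch (definitions repeated from above; `treeSize` is my own example name, not from the article):

```haskell
-- Definitions repeated from above so this sketch stands alone.
data Tree a = Empty
            | Node (Tree a) a (Tree a)

treeFold :: (b -> a -> b -> b) -> b -> Tree a -> b
treeFold f b Empty        = b
treeFold f b (Node l x r) = f (treeFold f b l) x (treeFold f b r)

-- Any "walk the whole tree, combining values" function is one fold away:
treeSum :: Num a => Tree a -> a
treeSum = treeFold (\l x r -> l + x + r) 0

-- Counting nodes is the same fold with a different combining function.
treeSize :: Tree a -> Int
treeSize = treeFold (\l _ r -> l + 1 + r) 0
```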

Now we can double all the values in a tree, add 1 to each, and sum up the result:

```
treeSum (treeIncr (treeDouble tree))
```

But there's a very serious problem with this code. Imagine that we're working with a million-node tree. The two calls to `treeMap` (buried inside `treeIncr` and `treeDouble`) will each create a *new* million-node tree. Obviously, this will kill our performance, and it will make our garbage collector cry.

Fortunately, we can do a lot better than this, thanks to some funky GHC extensions.

### Getting rid of the intermediate trees

So how do we get rid of those intermediate trees? Well, we could merge:

```
treeSum (treeIncr (treeDouble tree))
```

...into a single recursive call:

```
treeSumIncrDouble Empty        = 0
treeSumIncrDouble (Node l x r) =
  treeSumIncrDouble l + (x*2+1) + treeSumIncrDouble r
```
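Before handing this job to the compiler, it's worth convincing ourselves the hand-fused version really computes the same answer as the three-pass pipeline. A self-contained sketch (definitions repeated from above):

```haskell
-- Definitions repeated from above so this sketch stands alone.
data Tree a = Empty
            | Node (Tree a) a (Tree a)

treeMap :: (a -> b) -> Tree a -> Tree b
treeMap f Empty        = Empty
treeMap f (Node l x r) = Node (treeMap f l) (f x) (treeMap f r)

treeSum :: Num a => Tree a -> a
treeSum Empty        = 0
treeSum (Node l x r) = treeSum l + x + treeSum r

treeDouble, treeIncr :: Num a => Tree a -> Tree a
treeDouble = treeMap (*2)
treeIncr   = treeMap (+1)

-- The hand-fused single pass: no intermediate trees are built.
treeSumIncrDouble :: Num a => Tree a -> a
treeSumIncrDouble Empty        = 0
treeSumIncrDouble (Node l x r) =
  treeSumIncrDouble l + (x*2+1) + treeSumIncrDouble r
```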

But that's ugly, because it breaks the encapsulation of `treeSum`, etc. Worse, it requires us to manually intervene and write code every time we hit a bottleneck.

Now, here's where the GHC magic comes in. First, we add the following line to the top of our source file:

```
{-# OPTIONS_GHC -O -fglasgow-exts -ddump-simpl-stats #-}
```

This turns on optimization, enables certain GHC-specific extensions, and tells GHC to summarize the work of the optimizer. (Also, we need to make sure that profiling is turned off, because it blocks certain optimizations.)

Next, let's walk through the first optimization we want the compiler to perform---merging two calls to `treeMap` into one:

```
treeIncr (treeDouble tree)
-- Inline treeIncr, treeDouble
treeMap (+1) (treeMap (*2) tree)
-- Combine into a single pass
treeMap ((+1) . (*2)) tree
```

Here's the magic part. We can use the `RULES` pragma to explain this optimization to the compiler:

```
{-# RULES
"treeMap/treeMap" forall f g t.
    treeMap f (treeMap g t) = treeMap (f . g) t
  #-}
```

Note that this is only valid in a *pure* functional language like Haskell! If we were working in ML or Lisp, then `f` and `g` might have side effects, and we couldn't safely combine the two passes without doing a lot more work.^{*}

We can similarly merge an adjacent `treeFold`/`treeMap` pair into a single pass:

```
{-# RULES
"treeFold/treeMap" forall f b g t.
    treeFold f b (treeMap g t) =
        treeFold (\l x r -> f l (g x) r) b t
  #-}
```
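Tracing the two rules by hand shows what the compiler should leave behind: a single fold and no intermediate trees. A sketch (`fused` is my own name for the result, not something GHC produces):

```haskell
-- Tracing the rewrite rules by hand:
--
--   treeSum (treeIncr (treeDouble t))
-- = treeFold (\l x r -> l+x+r) 0 (treeMap (+1) (treeMap (*2) t))
--     -- fire "treeMap/treeMap"
-- = treeFold (\l x r -> l+x+r) 0 (treeMap ((+1) . (*2)) t)
--     -- fire "treeFold/treeMap"
-- = treeFold (\l x r -> l + (x*2 + 1) + r) 0 t

-- Definitions repeated from above so this sketch stands alone.
data Tree a = Empty
            | Node (Tree a) a (Tree a)

treeFold :: (b -> a -> b -> b) -> b -> Tree a -> b
treeFold f b Empty        = b
treeFold f b (Node l x r) = f (treeFold f b l) x (treeFold f b r)

-- The single pass the rules should produce:
fused :: Num a => Tree a -> a
fused = treeFold (\l x r -> l + (x*2 + 1) + r) 0
```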

Using just these two rules, I saw a 225% increase in the number of nodes processed per second. Under the right circumstances, GHC can even outperform C code by applying these kinds of inter-procedural optimizations.

### Where to learn more

Rewrite rules are documented in the GHC manual and on the Haskell Wiki. Don Stewart also suggests using QuickCheck to verify the correctness of rewrite rules.

There are also a lot of good papers on this subject. Here are a few:

- Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire uses `fold` and related combinators to optimize recursive functions.
- Theorems for Free shows how to automatically derive valid rewrite rules for `map` and *any* polymorphic function. This is closely related to the idea of a natural transformation in category theory.
- Cheap Deforestation for Non-Strict Functional Languages discusses techniques for eliminating intermediate "trees" from a computation. See also Deforestation: Transforming Programs to Eliminate Trees. *Thanks, pejo!*
- Comprehending Queries uses map/fold fusion to optimize database queries.
- Rewriting Haskell Strings uses rewrite rules to massively improve Haskell's string performance.
- Google's MapReduce Programming Model---Revisited analyzes MapReduce in more detail, porting it to Haskell. *Thanks, augustss!*
- Data Parallel Haskell: A status report shows how to use rewrite rules to optimize nested data parallel code. *This* is how to optimize a parallel map/reduce.

Enjoy!

(Special thanks to Don Stewart, who helped me get all this working. See also his insanely optimized tree code for further performance ideas.)


This kind of thing is why I think Haskell and lazy pure-functional programming is the future (of FP, and of programming, hopefully). Referential transparency allows this kind of logical separation of concerns and at the same time it also allows the specification of formal, provable, rules to transform programs.

The biggest benefit is that often the most naturally written code is also the best performing; while in strict languages there is usually an inversely proportional relationship between beautiful code and good performance.

The fact that GHC does so much of this work behind the scenes, and so effectively, is amazing already. Of course, there is plenty of room for improvement.

I disagree with the premise that you need the compiler to validate the side-effect-freeness of your functions. I would rather have the programmer make explicit assertions that there are no side-effects *that matter*. I would buy the argument that it might be valuable to have a compiler that can recognize these optimization opportunities automatically more often, but the actual assertion was that you need purity to get the optimization at all. I don’t think that’s been proven.

Further discussion here.

It’s true, you could achieve quite a bit with manual annotations of purity. But to get any kind of consistent optimization, you’d need to use a lot of them, and do so correctly. You’d also need to write your code in a largely functional style, or have some scary loop-fusion support.

An alternate approach would be to use a compiler with robust inter-procedural analysis. LLVM, for example, can do a lot of optimizations along these lines.

But in both cases, we’re comparing my toy example, above, to a serious production compiler. The rabbit-hole goes *much* deeper than this. Once you start treating programs like an algebra, you can do all kinds of crazy things: automatically infer rewrite rules using Wadler’s free theorems, optimize tricky concurrent code, exploit rank-2 types---the list goes on and on.

Now, for many programmers, functional purity might seem a ridiculously high price to pay. But there’s a surprisingly large number of functional languages out there already: query languages like SQL, OQL and LINQ; vertex and fragment shaders on your GPU; and even some hardware modeling languages (if you squint at them right).

All of these can benefit, today, from algebraic optimizers. But in each case, the price of admission is a proof of purity, whether performed by the programmer, inferred by the compiler, or enforced by the language.

I strongly suspect (but cannot prove) that supporting purity at the language level will take you much further than either of the other two approaches. And you don’t have to give up state to get there; at worst, you need better support for monads.

An excellent point by Neel Krishnaswami:

Read the rest for more good insights.

Why doesn’t GHC automatically have these optimizations thanks to lazy evaluation? It seems like the ‘x’ that treeFold sees shouldn’t be calculated until it is read, at which time it would simply apply the functions from treeMap. Does GHC internally produce a million node tree with “thunks” first or am I misunderstanding something?

The ‘x’ would not be calculated until it is read… however, all sorts of intermediate heap objects are generated when you don’t actually fuse things, which means that the GC has to be run more. Fusing maps together also gives the compiler a chance to merge the mapped functions, etc.

I wonder what the codata for a tree like this looks like…

I encountered the following problem, which will hurt us in the case of generalized monads as sketched in http://www.randomhacks.net/articles/2007/03/15/data-set-monad-haskell-macros and of data structures like StorableVector (http://code.haskell.org/~sjanssen/storablevector) that require a constraint on the element types. There it makes a difference whether you map and fold separately or do both in one go. In the first case you need a Storable constraint for the intermediate result type, too. That is, these constraints make implementation issues observable via the type signature.

Jim,

GHC can’t automatically know that such a rule is valid. For example, `filter f . filter g` is not equivalent to `filter (f . g)`.

Each author’s implementation of functions called treeMap could be quite different, and it would be difficult to improve GHC such that it could correctly know which implementations could be optimized.
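For the record, the valid fusion law for `filter` combines the *predicates* with `(&&)` rather than composing them with `(.)`. A sketch using only standard Prelude functions (`fuseFilter` is my own name):

```haskell
-- filter f . filter g keeps x only when BOTH predicates hold, so the
-- fused single pass must test (g x && f x). Note that (f . g) wouldn't
-- even typecheck here: g x is a Bool, but f expects an element.
fuseFilter :: (a -> Bool) -> (a -> Bool) -> [a] -> [a]
fuseFilter f g = filter (\x -> g x && f x)
```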