Posted by Eric Kidd
Tue, 02 Oct 2007 07:50:00 GMT

From October 5-7, I’ll be at the Haskell Hackathon in Freiburg.

I’ll be working on probability monads, attempting to turn my various blog articles into a real Haskell library.

Some resources:

If you were a peer reviewer, or gave me feedback on the paper, my humble thanks, and my apologies: I haven’t had a chance to revise the paper yet, so your feedback is not yet included.

See you at the Hackathon!

Tags Haskell, Math, Monads, Probability, ProbabilityMonads

Posted by Eric Kidd
Tue, 18 Sep 2007 12:36:00 GMT

Good morning! I’ll be in Freiburg for several events this October, including CUFP and the Haskell Hackathon.

#### Commercial Users of Functional Programming (CUFP)

On October 4th, I’ll be speaking at CUFP ’07, describing the use of Scheme in a real-world multimedia engine. Some likely topics:

- How we switched to Scheme (and why refactoring is your friend).
- How our artists learned to program in Scheme (it’s all about the tools).
- The tension between functional programming and objects: Can we have both?

#### Dylan Beer Night

Once upon a time, I dreamt of generic functions and built RPMs for Gwydion Dylan.

Some current and former Dylan hackers are hoping to meet in Freiburg, most likely on October 4th. If you’re at ICFP or one of the workshops, we’d love to hear from you.

#### Haskell Hackathon

I’ll be at the Haskell Hackathon from Friday to Sunday.

Perhaps it’s time to whip Control.Monad.Probability into shape?

Tags Dylan, Haskell, Monads, Probability, Scheme

Posted by Eric Kidd
Tue, 18 Sep 2007 11:15:00 GMT

Just married! Our thanks and love to everyone, for everything.

Posted by Eric Kidd
Sun, 01 Jul 2007 19:00:00 GMT

Programming in Ruby makes me happy. It’s a lovable language, with a
pleasantly quirky syntax and lots of expressive power.

Programming in JavaScript, on the other hand, frustrates me to no end.
JavaScript *could* be a reasonable language, but it has all sorts of
ugly corner cases, and it forces me to roll everything from scratch.

I’ve been trying to make JavaScript a bit more like Ruby. In particular, I
want to support Ruby-style metaprogramming in JavaScript. This would make it possible to port over many advanced Ruby libraries.

You can
check out the interactive specification, or look at some examples
below. If the specification gives you any errors, please post them in the comment
thread, and let me know what browser you’re running!

Read more...
Tags JavaScript, Macros, Planetary, Ruby

Posted by Eric Kidd
Sat, 28 Apr 2007 11:53:00 GMT

Bowling is a tricky game to score. It’s just complicated enough
to act as a good programming exercise. And Ron Jeffries has performed this
exercise many times, in C#, Smalltalk, and other
languages. He’s been searching for a tidy and elegant solution, one
which makes the rules of bowling as clear as possible.

In the past, though, Jeffries has been a bit skeptical of Haskell
implementations of bowling:

The recursive [Haskell] solution, however, is questionable on
more fundamental grounds. A game of bowling consists of ten frames, not
less or more, and the “ten-ness” of the game is not represented in the
recursive solutions at all. Even if we let that slide, the recursive
solutions make it a bit hard to understand what’s going on.

Let’s see if we can do better. No knowledge of bowling is required; if we
do this right, our program should be at least as clear as an
English-language version of the rules.

Along the way, we’ll encounter lazy lists, an interesting recursion combinator, and Hoogle, the Haskell search engine.

### The rules of bowling

In bowling, we roll balls down a lane, trying to knock down pins. If we
know how many pins we knock down with each ball, we can compute the final
score. So our program looks something like this:

```
type Balls = [Int]
type Score = Int
scoreGame :: Balls -> Score
scoreGame balls = ???
```

But how do we implement `scoreGame`?

### Scoring a frame

A bowling game is divided into 10 **frames**. Ordinary frames consist of 1 or 2 balls. The 10th frame may have an additional 1 or 2 bonus balls, which we discuss below.

To score an individual frame, we need to do two things: (1) calculate the score for our frame, and (2) figure out where the next frame
starts. Our scoring function will return both pieces of information:

```
scoreFrame :: Balls -> (Score, Balls)
```

If we knock down all 10 pins with the first ball in a frame (`x1`), we call
it a **strike**, and move on to the next frame immediately. But we also get
a bonus: we’re allowed to count balls `y1` and `y2` from the *next* frame
towards *this* frame’s score:

```
scoreFrame (x1: y1:y2:ys) | x1 == 10 =
    (x1+y1+y2, y1:y2:ys)
```

If we knock down all the pins using *two* balls (`x1` and `x2`), we call it
a **spare**. And we get to count one ball from the next frame as our bonus:

```
scoreFrame (x1:x2: y1:ys) | x1+x2 == 10 =
    (x1+x2+y1, y1:ys)
```

If we don’t manage to knock down all 10 pins with two balls, we call it an
**open frame**. And we don’t get any bonus:

```
scoreFrame (x1:x2: ys) =
    (x1+x2, ys)
```

What happens if we have a strike or a spare in the 10th frame?
We get to roll our bonus balls anyway. Conventionally, these extra balls
are recorded as part of the 10th frame (making it 3 balls long), but
they’re really just phantom balls hanging off the end of the game.

Next, we need to turn `scoreFrame` into a recursive function.
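One natural way to finish the job is a sketch using `Data.List.unfoldr` (a guess at the “interesting recursion combinator” teased above; the article’s final code may differ). The nice part: the “ten-ness” of the game becomes an explicit `take 10`. The `scoreFrame` clauses are repeated so the sketch is self-contained:

```haskell
import Data.List (unfoldr)

type Balls = [Int]
type Score = Int

scoreFrame :: Balls -> (Score, Balls)
scoreFrame (x1: y1:y2:ys) | x1 == 10    = (x1+y1+y2, y1:y2:ys)
scoreFrame (x1:x2: y1:ys) | x1+x2 == 10 = (x1+x2+y1, y1:ys)
scoreFrame (x1:x2: ys)                  = (x1+x2, ys)

-- Unfold scoreFrame over the ball list, producing a lazy stream of frame
-- scores, and sum exactly the first ten of them.
scoreGame :: Balls -> Score
scoreGame = sum . take 10 . unfoldr (Just . scoreFrame)
```

Because the stream of frames is lazy, `take 10` stops the unfolding after the 10th frame, so the bonus balls at the end of the game are counted but never scored as extra frames.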

Read more...
Tags Haskell

Posted by Eric Kidd
Thu, 19 Apr 2007 20:43:00 GMT

**Refactoring Probability Distributions:**
part 1, part 2, part 3, part 4, **part 5**

Welcome to the 5th (and final) installment of *Refactoring Probability
Distributions!* Today, let’s begin with an example from
Bayesian Filters for Location Estimation (PDF), an excellent paper by Fox and colleagues.

In their example, we have a robot in a hallway with 3 doors. Unfortunately, we don’t know *where* in the hallway the robot is located:

The vertical black lines are “particles.” Each particle represents a
possible location of our robot, chosen at random along the hallway. At
first, our particles are spread along the entire hallway (the top
row of black lines). Each particle begins life with a weight of 100%, represented by the height of the black line.

Now imagine that our robot has a “door sensor,” which currently tells us that
we’re in front of a door. This allows us to rule out any particle which is located *between* doors.

So we multiply the weight of each particle by 100% (if it’s in front of a door) or 0% (if it’s between doors), which gives us the lower row of particles. If our sensor was less accurate, we might use 90% and 10%, respectively.

What would this example look like in Haskell? We *could* build a
giant list of particles (with weights), but that would require us to do a
lot of bookkeeping by hand. Instead, we use a monad to hide all the
details, allowing us to work with a single particle at a time.

```
localizeRobot :: WPS Int
localizeRobot = do
  pos1 <- uniform [0..299]
  if doorAtPosition pos1
    then weight 1
    else weight 0
```
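The `WPS` monad itself is developed behind the cut. As a rough mental model only (the representation and every name below are assumptions, not the article’s actual code), weighted particles can be simulated as a list of value/weight pairs, with the monad multiplying weights along each branch:

```haskell
-- Sketch only: this is NOT the article's Control.Monad.Probability code.
newtype WPS a = WPS { runWPS :: [(a, Double)] }

instance Functor WPS where
  fmap f (WPS xs) = WPS [ (f x, w) | (x, w) <- xs ]

instance Applicative WPS where
  pure x = WPS [(x, 1.0)]
  WPS fs <*> WPS xs = WPS [ (f x, wf * wx) | (f, wf) <- fs, (x, wx) <- xs ]

instance Monad WPS where
  WPS xs >>= f = WPS [ (y, wx * wy) | (x, wx) <- xs, (y, wy) <- runWPS (f x) ]

-- One particle per value, all starting with the same weight.
uniform :: [a] -> WPS a
uniform xs = WPS [ (x, 1.0) | x <- xs ]

-- Scale the current particle's weight.
weight :: Double -> WPS ()
weight w = WPS [((), w)]

-- A toy 10-position hallway with doors at 2, 5, and 8 (assumed helper):
doorAt :: Int -> Bool
doorAt p = p `elem` [2, 5, 8]

localizeToy :: WPS Int
localizeToy = do
  pos <- uniform [0 .. 9]
  weight (if doorAt pos then 1 else 0)
  return pos
```

Running `localizeToy` yields ten particles, exactly as in the figure: every particle survives, but only the ones standing in front of a door keep a nonzero weight.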

What happens if our robot drives forward?

Read more...
Tags Haskell, Math, Monads, Probability, ProbabilityMonads, Robots

Posted by Eric Kidd
Thu, 15 Mar 2007 22:29:00 GMT

**…and how to fake Lisp macros with Template Haskell**

(I wrote this article in response to a comment by
sigfpe. You may find it pretty dry reading, unless you want to
build domain-specific languages in Haskell. Proceed at your own
risk.)

Haskell’s built-in `Monad` type has some serious limitations. We can fix those limitations using a number of advanced Haskell techniques, including Template Haskell, Haskell’s closest equivalent to Lisp macros.

We can illustrate the limitations of `Monad` with an example from math. In
set theory, we can define a set by specifying how to compute each
element:

{ *xy* : *x* ∈ {1,2,4}, *y* ∈ {1,2,4} }

We can read this as, “the set of all *xy*, where *x*
is one of {1,2,4}, and *y* is one of {1,2,4}.” To calculate the
answer, we first multiply together all the possible combinations:

1×1=1, 1×2=2, 1×4=4, 2×1=2, 2×2=4, 2×4=8, 4×1=4, 4×2=8, 4×4=16

We then collect up the answers, and, because we’re working with sets, we
throw away the duplicates:

{1,2,4,8,16}

Can we do the same thing in Haskell? Well, using Haskell’s list
monad, we can write:

```
listExample = do
  x <- [1,2,4]
  y <- [1,2,4]
  return (x*y)
```

But when we run this, Haskell gives us lots of duplicate values:

```
> listExample
[1,2,4,2,4,8,4,8,16]
```
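One obvious workaround, and emphatically not where this article is headed, is to stay in the list monad and strip the duplicates afterwards with `Data.List.nub`:

```haskell
import Data.List (nub)

listExample :: [Int]
listExample = do
  x <- [1,2,4]
  y <- [1,2,4]
  return (x*y)

-- nub removes duplicates, keeping the first occurrence of each value.
dedupedExample :: [Int]
dedupedExample = nub listExample   -- [1,2,4,8,16]
```

This fixes the output but not the type: we’re still pretending that lists are sets, which is exactly why switching to a real set type is tempting.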

Our problem: We’re using lists (which can contain duplicate values) to
represent sets (which can’t). Can we fix this by switching to Haskell’s
`Data.Set`?

```
import qualified Data.Set as S

setExample = do
  x <- S.fromList [1,2,4]
  y <- S.fromList [1,2,4]
  return (x*y)
```

Unfortunately, this code fails spectacularly. A Haskell monad is required
to work for *any* types `a` and `b`:

```
class Monad m where
  return :: a -> m a
  fail   :: String -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
```

But `Data.Set` only works for some types. Specifically, it requires that
values of type `a` can be ordered:

```
data (Ord a) => Set a = ...
```

As it turns out, we can make `Data.Set` into a monad. But be warned: The
solution involves some pretty ugly Haskell abuse.

Read more...
Tags Haskell, Macros, Monads

Posted by Eric Kidd
Mon, 12 Mar 2007 19:39:00 GMT

This morning, a programmer visited #haskell and asked how to implement
backtracking. Not surprisingly, most of the answers involved
monads. After all, monads are ubiquitous in Haskell: They’re used for IO,
for probability, for error reporting, and even for quantum
mechanics. If you program in Haskell, you’ll
probably want to understand monads. So where’s the best place to start?

A friend of mine claims he didn’t truly understand monads until he
understood `join`. But once he figured that out, everything was suddenly
obvious. That’s the way it worked for me, too. But relatively few monad
tutorials are based on `join`, so there’s an open niche in a crowded market.

This monad tutorial uses `join`. Even better, it attempts to cram
everything you need to know about monads into 15 minutes. (Hey, everybody
needs a gimmick, right?)
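For readers who want the punchline up front: `join` collapses one layer of monadic structure, and `>>=` can be recovered from `join` and `fmap` via a standard identity (this is ordinary `Control.Monad` material, not specific to this tutorial):

```haskell
import Control.Monad (join)

-- For lists, join is just concat: it flattens one layer of structure.
-- join [[1,2],[3]] == [1,2,3]
flattened :: [Int]
flattened = join [[1,2],[3]]

-- The standard identity: bind is "map, then flatten".
bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = join (fmap f m)
```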

### Backtracking: The lazy way to code

We begin with a backtracking constraint solver. The idea: Given possible
values for `x` and `y`, we want to pick those values which have a product
of 8:

```
solveConstraint = do
  x <- choose [1,2,3]
  y <- choose [4,5,6]
  guard (x*y == 8)
  return (x,y)
```

Every time `choose` is called, we save the current program state. And
every time `guard` fails, we backtrack to a saved state and try again.
Eventually, we’ll hit the right answer:

```
> take 1 solveConstraint
[(2,4)]
```

Let’s build this program step-by-step in Haskell. When we’re done, we’ll
have a monad.
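As a preview of the destination (a sketch, not necessarily the article’s final code): the list monad already behaves this way, with `choose` as the identity and `guard` pruning failing branches:

```haskell
-- In the list monad, "saving state and backtracking" is just trying
-- every element of the list in order.
choose :: [a] -> [a]
choose = id

-- guard keeps the current branch alive on True, kills it on False.
guard :: Bool -> [()]
guard True  = [()]
guard False = []

solveConstraint :: [(Int, Int)]
solveConstraint = do
  x <- choose [1,2,3]
  y <- choose [4,5,6]
  guard (x*y == 8)
  return (x, y)
```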

Read more...
Tags Haskell, Math, Monads

Posted by Eric Kidd
Sat, 10 Mar 2007 19:05:00 GMT

Haskell is a marvellous language, but there are some things I don’t like about it. My least favorite: Haskell has no fewer than 8
different APIs for reporting errors.

To make a bad situation worse, the choice of API varies between
popular libraries. To give a particularly unfortunate example,
Network.URI.parseURI and Network.HTTP.simpleHTTP report
errors in entirely different ways, turning a “download this URL” program into a page of code,
nearly half of which is devoted to dealing with various kinds of errors. (The rest is boilerplate that could be refactored into a nice wrapper.)

Let’s begin with a toy function, the simplest possible program that could
actually fail:
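The excerpt doesn’t reproduce the post’s snippet, so here is a minimal stand-in (the name `myDiv` and its exact signature are assumptions, not the post’s code): division, which blows up on zero.

```haskell
-- A stand-in toy function (name assumed). This version reports the
-- error with `error`; the post surveys the alternatives.
myDiv :: Float -> Float -> Float
myDiv _ 0 = error "Division by zero"
myDiv x y = x / y
```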

As every algebra student knows, we can’t divide by zero. Using this
function as our example, let’s take a look at all the different ways we
can implement error-reporting in Haskell.

Read more...
Tags Haskell

Posted by Eric Kidd
Wed, 07 Mar 2007 08:01:00 GMT

Download the free book or visit the official site

My college linear algebra course was held early in the morning, and
it was devoted almost entirely to blackboard proofs. The professor would
stand in front of the room, half asleep, and write:

“Theorem. Lemma. Lemma. Proof. Theorem…”

Despite this experience, I somehow managed to learn about eigenvectors
and kernels. Or at least, I learned how to write proofs about them. But I
had no intuition for linear algebra: I couldn’t visualize it, and I couldn’t
explain why anybody, anywhere, ever *cared* about eigenvectors.

Years later, in a computer vision class, I finally learned to care about
linear algebra. It could solve all sorts of cool problems!
(Eigenfaces, in particular, blew me away.) And since then, I’ve
encountered linear algebra everywhere. But my intuition is still
piecemeal, built from half-a-dozen applications over the years.

My motto for math is, “If it keeps showing up, build a rock-solid intuition
for how it works.” And towards that end, I’ve been looking for a good
linear algebra textbook.

My ideal linear algebra textbook would:

- Include plenty of motivating examples.
- Show how to solve real-world problems.
- Devote plenty of time to proofs.

The proofs, after all, are necessary in the real world. If you ever
attempt to do something slightly odd, you’ll want to
prove that it actually works.

### Jim Hefferon’s *Linear Algebra*

Professor Jim Hefferon’s *Linear Algebra* is available as a free PDF
download. But don’t be fooled by the price: Hefferon’s book is
better than most of the expensive tomes sold in college bookstores.

Everything in Hefferon’s book is superbly motivated. The first chapter
begins with two real-world examples: Unknown weights placed on balances,
and the ratios of complex molecules in chemical reactions. These examples
are used to introduce Gauss’s method for solving systems of linear
equations. Further into the book, the examples begin to tie back to
earlier chapters. Determinants, for example, are motivated by the
usefulness of recognizing isomorphisms and invertible matrices.

But Hefferon’s emphasis on real-world examples is admirably balanced by an
abundance of proofs. The first proof appears on page 4, and nearly
everything is proven either in the main text or in the exercises. This
will be helpful for readers who (like me) are trying to bring more rigor to
their mathematical thinking.

### The “Topics”: Fascinating real-world problems

The most delightful part of the book, however, is the “Topics” section at
the end of each chapter. These cover a wide range of fields, including
biology, economics, probability, and abstract algebra. The topic “Stable
Populations” begins:

Imagine a reserve park with animals from a species that we are
trying to protect. The park doesn’t have a fence and so animals cross the
boundary, both from the inside out and in the other direction. Every year,
10% of the animals from inside of the park leave, and 1% of the animals
from the outside find their way in. We can ask if we can find a stable
level of population for this park: is there a population that, once
established, will stay constant over time, with the number of animals
leaving equal to the number of animals entering?

Hefferon relates the solution to Markov chains and eigenvalues, cementing
several important intuitions firmly in place.
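The stable point is easy to check numerically (a sketch, not from the book): the population is stable when the outflow, 10% of the inside population, equals the inflow, 1% of the outside population, so the outside population must be ten times the inside one. Iterating the yearly update shows the same thing:

```haskell
-- One year of migration: 10% of the inside leaves, 1% of the outside enters.
step :: (Double, Double) -> (Double, Double)
step (inside, outside) =
  ( 0.90 * inside + 0.01 * outside
  , 0.10 * inside + 0.99 * outside )

-- Iterating from any starting split converges to the stable distribution,
-- where outside == 10 * inside (here, roughly 91 inside and 909 outside).
stable :: (Double, Double)
stable = iterate step (500, 500) !! 1000
```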

Other topics include basic electronics, space shuttle O-rings, and the
number of games required to win the World Series. There are plenty of
CS-related discussions, too: a survey of things that can go wrong in naive
numeric code, the time required to calculate determinants, and
how the memory hierarchy affects array layout.

Hefferon’s love for linear algebra is infectious, and his “Topics” will
appeal to anybody who does recreational math.

### “Free” as in “freedom”

*Linear Algebra* is published under the GNU Free Documentation
License and the Creative Commons Share Alike license.

What
this means: You may make copies of the book, or even print them out at a
copyshop and charge students a fee. You may also create a custom version of
the textbook, and share it with anybody who’s interested. The only
restriction: You must “share alike,” honoring the original author’s terms
as you pass along the textbook.

### Miscellaneous notes

Hefferon has put out a call for extra material. In particular, he’d love
to have a section on quantum mechanics:

Several people have asked me about a Topic on eigenvectors and
eigenvalues in Quantum Mechanics. Sadly, I don’t know any QM. If you can
help, that’d be great.

On the downside, the internal PDF links in *Linear Algebra* are broken in
Mac OS X Preview. This is odd, because the LaTeX `hyperref` package
usually works fine with Preview.

The reddit discussion of
*Linear Algebra* has pointers to several other linear algebra textbooks, with varying emphasis. And many other free math
textbooks are available online.

If you have any favorite math books (paper or PDF, for any area of
mathematics), please feel free to recommend them in the comment thread!

Download the free book or visit the official site

Tags Math