McCarthy's Ambiguous Operator

Posted by Eric Tue, 11 Oct 2005 00:00:00 GMT

Back in 1961, John McCarthy (the inventor of LISP) described an interesting mathematical operator called amb. Essentially, amb hates to be called with no arguments, and can look into the future to keep that from happening. Here's how it might look in Ruby.

# amb will (appear to) choose values
# for x and y that prevent future
# trouble.
x = amb 1, 2, 3
y = amb 4, 5, 6

# Ooops! If x*y isn't 8, amb would
# get angry.  You wouldn't like
# amb when it's angry.
amb if x*y != 8

# Sure enough, x is 2 and y is 4.
puts x, y 

Of course, amb can't actually see the future. However, it can rewind into the past whenever it sees trouble, and try a different choice.

So, how could we implement this function? As it turns out, we need continuations. Here's a basic implementation in Ruby.

# A list of places we can "rewind" to
# if we encounter amb with no
# arguments.
$backtrack_points = []

# Rewind to our most recent backtrack
# point.
def backtrack
  if $backtrack_points.empty?
    raise "Can't backtrack"
  else
    $backtrack_points.pop.call
  end
end

# Recursive implementation of the
# amb operator.
def amb *choices
  # Fail if we have no arguments.
  backtrack if choices.empty?
  callcc {|cc|
    # cc contains the "current
    # continuation".  When called,
    # it will make the program
    # rewind to the end of this block.
    $backtrack_points.push cc

    # Return our first argument.
    return choices[0]
  }

  # We only get here if we backtrack
  # using the stored value of cc,
  # above.  We call amb recursively
  # with the arguments we didn't use.
  amb *choices[1...choices.length]
end

# Backtracking beyond a call to cut
# is strictly forbidden.
def cut
  $backtrack_points = []
end
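
If you want to watch the rewinding happen, here's the opening example again with a bit of tracing added (a quick sketch; on Ruby 1.9 and later you may also need require 'continuation' to get callcc).

# Each "trying" line below is printed every time execution passes
# that point, including after a rewind.
# (On Ruby 1.9+: require 'continuation'.  Ruby 1.8 has callcc built in.)

x = amb 1, 2, 3
puts "trying x = #{x}"

y = amb 4, 5, 6
puts "trying y = #{y}"

# Rewind until the constraint holds.
amb if x*y != 8

# Commit to this solution; backtracking past this point is now impossible.
cut

puts "solution: x = #{x}, y = #{y}"

# Output:
#   trying x = 1
#   trying y = 4
#   trying y = 5
#   trying y = 6
#   trying x = 2
#   trying y = 4
#   solution: x = 2, y = 4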

If you'd like a fun, non-technical overview of continuations, see the explanation at RubyGarden.


Comments

  1. Dan Piponi said 134 days later:

    I have an implementation of “amb” that can be used in C here. Thanks for giving a name to this operator for me – I never knew what to call it.

  2. Ben said 463 days later:

    That's really interesting. It means that you could write a constraint DSL in Ruby or any other language that implements continuations. I thought constraint engines were hard…

    Cheers Ben

  3. Ben said 463 days later:

    Wow, I just implemented the send+more=money solver with amb:

    
    # define variables
    m = 1
    n = amb(0, 2, 3, 4, 5, 6, 7, 8, 9)
    o = amb(0, 2, 3, 4, 5, 6, 7, 8, 9)
    r = amb(0, 2, 3, 4, 5, 6, 7, 8, 9)
    s = amb(2, 3, 4, 5, 6, 7, 8, 9)
    y = amb(0, 2, 3, 4, 5, 6, 7, 8, 9)
    e = amb(0, 2, 3, 4, 5, 6, 7, 8, 9)
    d = amb(0, 2, 3, 4, 5, 6, 7, 8, 9)
    
    # reduce the problem space
    amb unless(d+e == y || d+e == y+10)
    amb unless(s+m+1 >= 10)
    
    	
    # The actual problem
    send  = s*1000 + e*100 + n*10 + d
    more  = m*1000 + o*100 + r*10 + e
    money = m*10000 + o*1000 + n*100 + e*10 + y
    amb unless(send+more == money)
    
    p "m=#{m}, n=#{n}, o=#{o}, r=#{r}, s=#{s}, y=#{y}, e=#{e}, d=#{d},"
    
    # Make sure all variables are different
    a = [m, n, o, r, s, y, e, d]
    t = Array.new(10, 0)
    a.each { |v| amb unless (t[v]+=1)<2 }
    
    # stop
    cut
    
    p "Final solution: m=#{m}, n=#{n}, o=#{o}, r=#{r}, s=#{s}, y=#{y}, e=#{e}, d=#{d},"

    Amb is pretty cool! Ben

  4. Eric Kidd said 467 days later:

    Wow! That’s very cool.

    If you’re into constraint languages, you might also enjoy Oz, which can do heavily-optimized searches of constraint problems.

    The big advantage of Oz: Before choosing which path to investigate, it tries to infer as much information as possible (aka “propagation”). And then it makes very intelligent guesses about exactly which “guess” to make next (aka “distribution”).

    Since a backtracking search takes O(2^N) time (yikes!), it’s worth doing a lot of work before actually branching.
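
    To make the propagation idea concrete, here is a toy Ruby sketch of "propagate, then branch" (an illustration of the idea only, not Oz or its API): once x is chosen, the constraint is used to compute y outright instead of guessing it and backtracking.

    # Constraint: x + y == 4, with x and y each drawn from 1..9.
    # A blind amb-style search would try up to 81 (x, y) pairs and
    # backtrack out of most of them; propagation derives y from x,
    # so only the 9 choices of x are ever examined.
    x_domain = (1..9).to_a
    y_domain = (1..9).to_a

    solutions = x_domain.map { |x|
      y = 4 - x                        # propagation: the constraint pins y down
      [x, y] if y_domain.include?(y)   # drop x values whose y falls outside 1..9
    }.compact

    p solutions  # => [[1, 3], [2, 2], [3, 1]]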

    And the most popular implementation of Oz can actually spread the search over an entire cluster of computers!

  5. Christian Theil Have said 605 days later:

    “Since a backtracking search takes O(2^N) time (yikes!), it’s worth doing a lot of work before actually branching.”

    This really depends on the nature of the problem. If you are only going to backtrack a little, it might not be worth spending cycles on forward-checking or ensuring arc consistency. Propagation can also be quite costly.

    But sure, in most cases, amb is not a very efficient way of solving constraint problems.

  6. Doug Auclair said 778 days later:

    @Christian, you are quite right! For this particular problem (SEND+MORE=MONEY), eliminating already-selected digits narrows the search space considerably, speeding up the calculation by a factor of ~50:

    http://www.cotilliongroup.com/arts/DCG.html

    (done with Prolog’s definite clause grammars, which have a similar feel to Haskell’s monads)

    And, I suppose, one can use, e.g., Gwydion Dylan's matrix library (http://www.opendylan.org/gdref/gdlibs/libs-matrix-operations.html), along the lines of a Gauss-Jordan elimination, to solve the system of equations linearly? Perhaps not apropos to this problem, but it handles several kinds of constraint problems.

  7. Doug Auclair said 779 days later:

    The Haskell version of SEND+MORE=MONEY in the amb-style:

    > sendmory = [(s,e,n,d,m,o,r,y) | s <- pos, e <- all, n <- all, d <- all,
    >                                 m <- [1], o <- all, r <- all, y <- all,
    >                                 y == d + e || y == d + e - 10,
    >                                 s + m + 1 >= 10,
    >                                 num [s,e,n,d] + num [m,o,r,e]
    >                                       == num [m,o,n,e,y],
    >                                 allDiff [s,e,n,d,m,o,r,y]]
    >   where
    >     pos = [2..9]
    >     all = 0:pos
    
    

    > num :: [Int] -> Int
    > num = foldl ((+).(*10)) 0

    > allDiff :: [Int] -> Bool
    > allDiff [] = True
    > allDiff (x:xs) = notElem x xs && allDiff xs

    ... note that a definition for amb is unnecessary, as the list comprehension (as a MonadPlus) already makes the choice.
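
    For readers more comfortable in Ruby, a rough analogue of that point (a sketch, using nothing beyond the opening example): nested choices over arrays play the role of the list monad, and a failed constraint simply contributes nothing, so no explicit amb is needed.

    # The opening x*y == 8 example, written with plain array enumeration
    # instead of amb.  Each flat_map/map layer is a choice point, and the
    # final select discards the combinations that violate the constraint;
    # the list itself does the backtracking bookkeeping.
    solutions = [1, 2, 3].flat_map { |x|
      [4, 5, 6].map { |y| [x, y] }
    }.select { |x, y| x * y == 8 }

    p solutions  # => [[2, 4]]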

    The list-comprehension version above, as already pointed out, is not very efficient, so here is a version that uses a state monad to narrow the selection in place:

    > import Control.Monad.State   -- StateT comes from the mtl/transformers library
    >
    > sendmory' :: StateT [Int] [] [Int]
    > sendmory' = -- do [m] <- item
    >             --    verify (m == 1)
    >             -- the first two lines can be written 'do let m = 1'
    >             -- iff the passed in list does not contain the number 1
    >             do let m = 1
    >                [s] <- item
    >                verify (s + m + 1 >= 10)
    >                [e] <- item
    >                [d] <- item
    >                [y] <- item
    >                verify (y == d + e || y == d + e - 10)
    >                [n] <- item
    >                [o] <- item
    >                [r] <- item
    >                verify (num [s,e,n,d] + num [m,o,r,e]
    >                                    == num [m,o,n,e,y])
    >                -- item obviates allDiff
    >                return [s,e,n,d,m,o,r,y]
    >
    
    

    > -- split list into all combinations of one element and rest
    > splits :: [a] -> [([a],[a])]
    > splits l = splitAccum [] l
    >   where splitAccum ys []     = []
    >         splitAccum ys (x:xs) = ([x], xs++ys) : splitAccum (x:ys) xs

    > choose :: StateT [a] [] [a]
    > choose = StateT $ \s -> splits s

    > item :: StateT [Int] [] [Int]
    > item = choose

    > verify :: Bool -> StateT [Int] [] ()
    > verify p = StateT $ \s -> if p then [((), s)] else []

    splits, choose, and verify are thanks to Dirk Thierbach on comp.lang.haskell. This version is 200x faster than the amb-style one: a 10x speed-up from redundant-digit elimination and, on top of that, a 20x speed-up from placing the guards as far up in the computation as possible. Although it requires some helper functions to implement nondeterminism with elimination, sendmory' retains the declarative feel of its amb-styled counterpart.

  8. backtrack said 1064 days later:

    There’s also goal directed computation as implemented in the Icon language, though you probably know about it.

  9. person-b said 1361 days later:

    This reminds me of Damian Conway’s Positronic::Variables CPAN Perl module. He wrote it for his (hilarious – highly recommended) talk on “Temporally Quaquaversal Virtual Nanomachine Programming In Multiple Topologically Connected Quantum-Relativistic Parallel Timespaces…Made Easy!” (http://blip.tv/file/1145545/). It actually did look into the future.
