Monkey Multiplication Table Problem

This is a response to R. Andrew’s post here. I have dubbed it “The Monkey Multiplication Problem.” It was a fairly neat puzzle, and I was going to post a response in the comments of his LJ, but I didn’t want to bother signing up for an account over there, so instead I’ll post it here.

Enjoy. And forgive the lack of comments.
I managed to do it in 9 lines of Haskell (not including module, import, or type declarations). I don’t have any datasets larger than your 12×12 table to test against, and the printout is kind of funny looking: it’s designed so that if you turn it roughly 45 degrees clockwise it looks right (a consequence of the fact that I generate the table by columns).

code follows (17 lines, with spaces and extra decls)


module Table where
import Data.List

genTable :: Int -> [[((Int, Int), Int)]]
genTable max = map (genCol max) [1..max]

genCol :: Int -> Int -> [((Int, Int), Int)]
genCol max n = [((n,n), n*n)]
	     ++ zip z (map (\(x,y) -> x*y) z)
	where z = zip [max, max - 1 .. n + 1]  (repeat n)

printTable :: Show a => [[a]] -> IO ()
printTable = putStrLn
           . concat
           . intersperse "\n"
           . map (concatMap (\x -> show x ++ " "))
monkeyTable = printTable $ map (map (snd)) $ transpose $ genTable 12

You can load that up in ghci and type “monkeyTable” to get the printout. printTable, by the way, is general enough to apply to anything, so if you’d like to see the internal structure of the table, you can switch that “map (map (snd))” to “map (map (fst))”. Note that the ugliness of the monkeyTable function comes from the fact that I used tuples instead of a customized datatype, or just a more specific genCol function.
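For instance, a hypothetical variant (monkeyTableIndices is a name I’m making up for illustration) that prints the (row, column) indices instead of the products:


monkeyTableIndices :: IO ()
monkeyTableIndices = printTable $ map (map fst) $ transpose $ genTable 12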

Anywho, fun problem, I think I might use it in my local coding dojo, have fun!

~~jfredett

Published on December 30, 2007 at 4:25 am

DFAs, Categories, and Typecheckers.

I’ve recently started reading (in parallel) “Types and Programming Languages” and “Basic Category Theory for Computer Scientists.” The latter is really only interesting if you’re a math junkie, like me. It’s somewhat dry, and very matter-of-fact, but the subject is terribly interesting.

Whilst reading these books, I’ve also been working on a library for Haskell called “HFA” (pronounced “Huffa”, or if you’re feeling silly, “Hoffa”), for “Haskell Finite Automata.” The library’s purpose is to provide a simple-to-use, generic, relatively fast implementation of various automata (notably (D|N)FAs, PDAs, Turing Machines, etc.), so that anyone intending to use these abstractions can do so without knowing much about the internal theory, e.g. how to minimize a DFA, or how to convert an NFA to a DFA. It’s also intended to be an easy-to-understand tool for learning and teaching about automata; it will eventually be able to display automata as Graphviz graphs, and is currently capable of generating state diagrams (with some extensions to mark final, initial, and initial-final states).

Recently, I had just finished some refactoring work on HFA, and decided to take a break and read “Basic Category Theory” for a while. It dawned on me, upon looking at the diagram of a category, that what I was looking at was essentially a DFA with all states final, with the arrows between the states making up the pieces of the delta function. That is, if a category is defined as a set of objects and a set of arrows (where an arrow is f : A -> B, with A and B objects in the category), then the equivalence is as follows:


Category      DFA
--------      ---
Objects       States
Arrows        Transitions

with delta(S, x) = S' iff x is an arrow between S and S’.

Notably, we can also define a DFA as a category by simply reversing the definition. I’m pretty sure this fact has been discovered before; it’s too obvious to believe otherwise (though it would be cool if I could name this “The Fredette Isomorphism”, ^_^). The interesting thing about this isomorphism is: if we can generalize a DFA, what’s to say we couldn’t generalize the category in the same way? Take imperative languages, for instance. I don’t know if it works out (and I certainly don’t have the skill to prove it if it does, at least not yet), but it is a hypothesis of mine that an imperative program can be represented as a category with multiple arrows going from one object to another simultaneously; that is, an imperative program is a kind of “nondeterministic” category. Ring any bells? We know (by the simple fact of Turing completeness) that a TC imperative language program can be written in a TC pure functional language (assuming there’s a pure functional way to deal with state, e.g. monads). Similarly (and this is almost too much of a guess to even _think_ of it as a hypothesis), if a TC imperative language is a “nondeterministic” (ND) category, and an ND category is isomorphic to an NFA, then since NFAs are isomorphic to DFAs, and pure functional languages are isomorphic to operations within a “deterministic” category, e.g. a “DFA”, that would “prove” (I use that term _very_ loosely) that any old TC imperative program has an equivalent TC pure functional program.

Pretty Neat, huh? It probably doesn’t work somewhere, but still- it’s cool.

We can further use the DFA/Category relationship as a kind of simple “composition” checker.

If the states of our DFA are the types in a language, and the transitions are functions mapping one type to another, then with the delta function defined as above, whenever there is no defined transition between some state S and some other state S’, but such a transition is called for, we simply send the output to a non-accepting “fail” state.

Here, the simple language consists of the following.


The Category contains:

Objects = {Int, Float, Char, Bool, Unit}
Arrows  = {isZero :: Int -> Bool
          ,ord    :: Char -> Int
          ,asc    :: Int -> Char
          ,sin    :: Float -> Float
          ,not    :: Bool -> Bool}
Values  = {zero :: Int, true :: Bool
          ,false :: Bool, unit :: Unit}

The corresponding DFA works something like this


f1, f2, ..., fn are symbols with types a1 -> b1, ..., an -> bn,
where the ai and bi are concrete types, not type variables, e.g. f1 :: Int -> Char gives a1 = Int, b1 = Char.
v is the type of the value being fed in.


(f1 . f2 . f3 . ... . fn) v
=> ([f1, f2, ..., fn], v)
=> [a1,b1,a2,b2, ..., an,bn,v]
=> [(init,a1),(b1,a2),(b2,a3),...,(bn,v)]

Given this list of pairs, we define the DFA trace function as follows (this presumes a list like the one above):


trace :: State -> [(Sym,Sym)] -> Bool
trace st [] = (st /= failState)
trace st ((s1,s2):syms)
    | s1 /= s2  = False            -- output type doesn't match the next input type
    | otherwise = trace (delta st s2) syms

where failState is a pseudonym for whatever the magic non-accepting failure state is

and where delta simply points to the next state (be it the fail state, or otherwise). I’ll cover that in more detail in my next post (I’m actually going to build this little guy for a simple language like the one above.)
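In the meantime, here’s a minimal, self-contained sketch of the idea, under some assumptions of my own: I use the arrow names as the DFA’s alphabet (rather than the type pairs above), a TFail sentinel for the non-accepting state, and made-up names (Ty, delta', checks, checkOk, checkBad):


data Ty = TInt | TFloat | TChar | TBool | TUnit | TFail
    deriving (Eq, Show)

-- one transition per arrow in the toy category; anything else fails
delta' :: Ty -> String -> Ty
delta' TInt   "isZero" = TBool
delta' TChar  "ord"    = TInt
delta' TInt   "asc"    = TChar
delta' TFloat "sin"    = TFloat
delta' TBool  "not"    = TBool
delta' _      _        = TFail

-- a composition typechecks iff the walk never reaches the fail state
checks :: Ty -> [String] -> Bool
checks start arrows = foldl delta' start arrows /= TFail

checkOk, checkBad :: Bool
checkOk  = checks TInt ["isZero", "not"] -- not . isZero :: Int -> Bool, accepted
checkBad = checks TInt ["sin"]           -- sin wants a Float, rejected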

I’ve digressed a bit from my topic; my goal was to show that categories are really terribly neat, and apparently related to automata, which most people understand pretty easily if they are explained well. I don’t pretend to be an authority here, but hell, implementing a (very) simple type checker is a pretty cool feat, considering it was only a few months ago that I started learning Haskell. I know this isn’t a robust, powerful mechanism, but as far as I know, given compose (the (.) function in Haskell), apply (($) in Haskell), and a few other functions, you have a TC language, à la Backus’ FP or FL systems.

Anyway, next time I intend to implement this little type checker, and (hopefully) eventually implement a (full) type checker for something akin to FP or FL, using this DFA-style approach. Heck, I’d also be able to play with parsing (another automata-rich subject).

Also, for those interested in looking at HFA, it’s available at

http://code.haskell.org/HFA/testing

You can just do a darcs get to pull everything down.

###DISCLAIMER###
I don’t intend to present any of this as proven, formally or otherwise; the ideas and conjectures in this post are just that, conjectures. Please don’t take me for an expert: I’m still learning about all these things, and I don’t want to lead anyone down the wrong path under the assumption that I actually _know_ what I’m doing.

That said, I do think the conjectures made here have some validity. If you know otherwise, please inform me. Thanks!

~~Joe

Published on August 5, 2007 at 4:43 pm

Peano’s Axioms Part III: Making use of Haskell’s Powerful Polymorphism

This time, we’ll be looking at making use of the polymorphic capabilities of Haskell’s type system.

In Haskell, Polymorphism is provided via type variables. That is, you may have a function



foo :: (a -> b) -> [a] -> [b]

which takes a function from one type to another, and a list of elements of the first type, and returns a list of elements of the second type. Most people call this function map, though the signature fits others too.

So in addition to getting the functions defined in the typeclass, we get all the Prelude functions with variable type signatures.

But wait, there’s more. When we define certain types of functions, we often want to limit the scope of the function to operate only on certain types. Like defining an instance of an interface in Java, we can specify a “scope” (though that’s an abuse of the term) for a type variable, as in the following:



foo2 :: Ord a => a -> a -> a

Here, we state that a must be an instance of the typeclass Ord (and, by inheritance, an instance of Eq too). So now we know that foo2 takes two comparable elements and returns a comparable element.

Polymorphism is nothing new to any language. However, I think Haskell’s advantage is not in the area of semantics, which are pretty much uniform across languages (at least in the case of polymorphism), but in the syntax of polymorphism. Type signatures are wonderfully simple ways to express polymorphism, both this basic kind (called rank-1 polymorphism) and higher-order polymorphism (rank-2, rank-3, rank-n).
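As a quick illustration of the higher-rank case, here’s a sketch (applyBoth is a made-up name, and this needs GHC’s RankNTypes extension):


{-# LANGUAGE RankNTypes #-}

-- rank-2: the argument f must itself be polymorphic,
-- since it is applied at both Int and Bool
applyBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
applyBoth f (n, b) = (f n, f b)

-- ghci> applyBoth id (1, True)
-- (1,True)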

The point of this post is to show the rich library of polymorphic functions that becomes available with just a few type classes (I think we’re up to 7: one derived, 6 implemented). This, as always, is a literate file; just cut and paste it into a .lhs and run


$> ghci .lhs



> module Peano2 (Nat(..)) where
> import Peano
> import Data.Ratio

==============================================================
Continuing on with defining math in terms of Peano’s axioms
==============================================================

Last time I noted that we’d be working on exp, div, mod, and some other
higher-level functions. I also mentioned that we “sort of” got exp for free,
in that

S(S(S(Z)))^k,

where k is an integer, works fine. But what if we let k be a natural? We’ll
readily notice that this fails, with the error that there’s no instance of
Integral Nat.

Why is that a problem? Because (^) isn’t in Num; it’s defined elsewhere, and
its signature is



(^) :: (Num a, Integral b) => a -> b -> a

Okay, so now what we should do is define Nat to be an Integral type. So, let’s
go for it.

=======================================

So, for Integral, we need quot, rem, and toInteger. We have the last of these from last time. It’s quot and rem that we need. So, how do we define these?

Well, we know that quot and rem are (effectively) just div and mod; in fact, not having negative numbers means they are exactly the same. Realize, then, that mod is just repeated subtraction until the remnant is smaller than the modulus, and that div is the same thing, except we return how many times we subtracted before hitting that condition.



> quotNat :: Nat -> Nat -> Nat
> quotNat k m
>     | k == m    = 1
>     | k < m     = 0
>     | otherwise = 1 + (quotNat (k-m) m)

> remNat :: Nat -> Nat -> Nat
> remNat k m
>     | k == m    = 0
>     | k < m     = k
>     | otherwise = remNat (k-m) m

> quotRemNat :: Nat -> Nat -> (Nat, Nat)
> quotRemNat k m = ((quotNat k m), (remNat k m))
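
A quick sanity check in ghci (a sample session; the prompt and exact rendering
may differ on your setup):

*Peano2> quotRemNat (iton 7) (iton 2)
(S (S (S Z)),S Z)

that is, 7 = 2*3 + 1.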


Now, we just instantiate Integral:




> instance Integral Nat where
>     toInteger = ntoi
>     quotRem   = quotRemNat
>     -- this fixes a problem that arises from Nats not having
>     -- negative numbers defined.
>     divMod    = quotRemNat


=======================================

But now we need to instantiate Enum and Real, oh my. Let’s go for Enum first.

Enum requires just toEnum and fromEnum. That’s pretty easy: to and from enum are just to and from Integer, which we have.



> instance Enum Nat where
>     toEnum   = integerToNat
>     fromEnum = natToInteger

Real is actually relatively easy: we’re just projecting into a superset of the
Naturals, notably the Rationals. We do this simply by pushing the value into a
ratio of itself to 1, that is

toRational S(S(S(S(Z)))) ==> S(S(S(S(Z)))) % 1



> instance Real Nat where
>     toRational k = (ntoi k) % 1

=======================================

Next time, we’ll go for primes.

Oh, and by the way: pushing Nat into Integral gave us lots of neat things, notably even/odd, gcd/lcm, the ability to do ranges like [(Z)..], and all the appropriate functions that go with that.
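
For example (a sample ghci session; outputs are what I’d expect from the
derived Show):

*Peano2> even (iton 4)
True
*Peano2> gcd (iton 12) (iton 18)
S (S (S (S (S (S Z)))))
*Peano2> map ntoi (take 3 [Z ..]) :: [Integer]
[0,1,2]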

So far, I’ve spent about an hour making all this work; you can imagine how that speed could be useful once you have defined your problem as having certain properties. Type classes are an extremely powerful tool, which can help make your code both clean and powerful. In one hour, I’ve managed to build up a simple bit of code, based on some fairly primitive axioms, and get a huge amount of powerful math around it.

Imagine if you could define these same relations around data. What if you were able to define strings as having the properties of numbers? Here’s an idea:

Imagine you have some strings. You can define the gcd of two strings as the longest common substring of the two. If you can sensibly define the product of two strings, then you get a concept of lcm as well. Granted, that may not be the best example, but you can imagine the power you can push into your data by defining an arithmetic (not even an algebra!) on it. Imagine creating an arithmetic of music (been done, sort of; check out Haskore) or of pictures. I say arithmetic because what I’m implementing here is only a little piece of the power you can instill. _This_ is why Haskell is powerful. Not because it’s purely functional, not even because it’s lazy. It’s the _math_ that makes Haskell capable of this. The type theory upon which Haskell rests makes Haskell powerful.

Remember, next time, Primes and Totients and Number Theory, and a datatype
representing the Integers,
Oh my!

Published on July 21, 2007 at 4:16 am

Peano’s Axioms Part II: Orderability and Code

So, let’s get started with this Peano arithmetic stuff. For those who do not know what Peano’s axioms are, here’s a brief explanation.

Peano’s axioms define a formal system for deriving arithmetic operations and procedures over the set of “natural” numbers. Peano’s axioms are most often stated as follows, though variants do exist.

1) 0 is a natural number. (Base Case)
2) If x is a natural number, S x is a natural number. (Successor)
3) There is no number x such that S x = 0. (0 is the successor of no number)
4a) 0 is only equal to 0.
4b) Two natural numbers S x and S y are equal iff x and y are equal.
4c) If x, y are natural numbers, then either (x == y /\ y == x) or (x /= y /\ y /= x).
5) If K is a set containing 0, and for every natural number x in K, S x is also in K, then K contains the naturals. (Induction, from Wikipedia)

(see Peano’s Axioms Wikipedia Entry)

The goal of this post is to define equality, orderability, and basic arithmetic over the naturals. We’ll also see how the standard Peano numbers translate into Haskell. Let’s talk about orderability for a second.

Equality is provided for by the axioms, but what is orderability? When we think about something being ordered, we can think about a “total” ordering or a “partial” ordering. A total ordering is one that satisfies the following conditions.

For some Binary relation R(x,y), (notated xRy),
1) R is antisymmetric:
(xRy /\ yRx) iff (x==y), where == is equality.
2) R is transitive:
if (aRb /\ bRc) then (aRc)
3) and R is total:
(xRy \/ yRx)

A partial order only lacks the third axiom. What does all this mean, though? Well, axiom 1 gives us an important link to equality. We can actually use this fact to either define equality from orderability, or vice versa. Axiom 1 also gives us the ability to make the MRD for the Ord class very succinct, requiring only (<=) at the very least. In Haskell, the Ord class is a subclass of Eq, so we need to define equality first. This is not a problem, as we can always use axiom 1 to define an equality function retroactively: define (<=) as a function external to the type class, over the naturals, then define (==) as ((x<=y)&&(y<=x)). We can then instance both classes together; a quick sketch of that trick follows below.

Axiom 2 is pretty self-explanatory; it allows us to infer an ordering of two elements from two separate orderings. One neat thing this does, that not many people point out, is that this is the axiom that allows for the concept of sorting. When we sort, we effectively chain orderings together so we can have a list whose elements have the property

k_1 <= k_2 && k_2 <= k_3 && ... && k_(n-1) <= k_n

that is, element 1 is ordered with element 2, and so on, such that the first element of the list is ordered with the last. It’s relatively trivial to realize, so much so that most people don’t even bother to mention it, but it certainly is interesting to see.

Axiom 3 is the defining feature of total orderings; it’s similar to the law of the excluded middle. We can see how certain relations are non-total: take for instance the strict relation (<). When x == y, neither x < y nor y < x holds, so (<) is not total.
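Here is that retroactive trick as a small sketch (kept out of the bird tracks so it stays out of the literate module; leqNat and eqFromLeq are names I’m making up):

leqNat :: Nat -> Nat -> Bool
leqNat Z     _     = True
leqNat (S _) Z     = False
leqNat (S x) (S y) = leqNat x y

eqFromLeq :: Nat -> Nat -> Bool
eqFromLeq x y = leqNat x y && leqNat y x
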
> {-# OPTIONS -fglasgow-exts #-}
> module Peano (Nat(..), one, p,
>               iton, ntoi, natToInteger,
>               integerToNat) where

=============================================
Defining Arithmetic based on Peano’s Axioms
=============================================

===============================================================================
First, we’ll define Nat, the set of natural numbers w/ 0,

This covers Axioms 1, 2, and 3.

> data Nat = Z | S Nat
>     deriving Show

This encodes the concept of natural numbers. We aren’t going to use Haskell’s
deriving capabilities for Eq, but deriving Show is fine; writing Show by hand
would just be tedious.

Handy bits.

> one :: Nat
> one = (S Z)

===============================================================================
Now let’s build up Eq, the equality of two numbers. This covers Axioms
2, 3, 4, 5, 7, and 8.


> instance Eq Nat where

Every number is equal to itself; we only need to define it for zero explicitly,
and the rest comes with recursion for free.

>     Z == Z = True

No number’s successor equals zero, and conversely, zero is the successor of no
number:

>     S x == Z = False -- no successor to zero
>     Z == S x = False -- zero is no number's successor

Two numbers are equal iff their successors are equal; here, we state it
backwards:

>     S x == S y = (x == y)

And that, my friends, is Eq for Nat.

===============================================================================

Now, let’s define orderability; these two instances will give us some extra
power when pushing Nat into Num.

> instance Ord Nat where
>     compare Z Z         = EQ
>     compare (S x) Z     = GT
>     compare Z (S x)     = LT
>     compare (S x) (S y) = compare x y

Easy as pie, follows from Axioms 1,2 and 8.
===============================================================================

Now, we can push this bad boy into Num, which will give us all your basic
arithmetic functions

First, let’s write (p), the magic predecessor function

> p :: Nat -> Nat
> p Z = Z -- A kludge, we're at the limit of the system here.
>         -- We'll come back to this when we start playing with ZZ
>         -- (the integers)
> p (S x) = x

=======================================

Here’s (+) in terms of repeated incrementing.

> addNat :: Nat -> Nat -> Nat

First, we know that Z + Z = Z, but that will follow from the following
definitions

> addNat x Z = x
> addNat Z y = y
> addNat (S x) (S y) = addNat y (S (S x))
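
To see it go, a reduction by hand:

addNat (S (S Z)) (S Z)        -- 2 + 1
  = addNat Z (S (S (S Z)))    -- = 0 + 3
  = S (S (S Z))               -- = 3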

=======================================

Here’s (*)

> mulNat :: Nat -> Nat -> Nat

Simple; here are our rules, where y’ is the original value of y:

Z * y  = y * Z = Z
SZ * y = y * SZ = y
Sx * y = x * (y + y’)


> mulNat _ Z = Z
> mulNat Z _ = Z
> mulNat a b
>     | a < b     = mulNat' a b b
>     | otherwise = mulNat' b a a
>     where
>       mulNat' x@(S n) y orig
>           | x == one  = y
>           | otherwise = mulNat' n (addNat orig y) orig
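
And a reduction by hand:

mulNat (S (S Z)) (S (S (S Z)))                        -- 2 * 3, and 2 < 3
  = mulNat' (S (S Z)) (S (S (S Z))) (S (S (S Z)))
  = mulNat' (S Z) (addNat (S (S (S Z))) (S (S (S Z)))) (S (S (S Z)))
  = S (S (S (S (S (S Z)))))                           -- 6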

=======================================

We’re gonna stop and do integerToNat and natToInteger real quick.

> natToInteger :: Integral a => Nat -> a
> natToInteger Z = 0
> natToInteger (S x) = 1 + (natToInteger x)

Easy enough. Here’s integerToNat:

> integerToNat :: Integral a => a -> Nat
> integerToNat 0 = Z
> integerToNat k = S (integerToNat (k-1))

Pretty nice, huh? Let’s add a couple of aliases.

> iton = integerToNat
> ntoi = natToInteger

=======================================

Now we just need (-); we’ll talk about abs and signum in a second.

> subNat :: Nat -> Nat -> Nat
> subNat x Z = x
> subNat Z x = Z
> subNat x y
>     | x < y     = Z
>     | otherwise = subNat (p x) (p y)
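
Note the truncation at Z; subtraction can’t go below zero here:

subNat (S Z) (S (S (S Z)))    -- 1 - 3
  = Z                         -- truncated, not -2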

=======================================

Phew. After all that, we just need to define signum; abs is pointless in Nat,
because all numbers are either positive or Z.

signum is equally easy: since nothing is less than Z, we know the following

> sigNat :: Nat -> Nat
> sigNat Z = Z
> sigNat (S x) = one

and abs is then just id on Nats

> absNat :: Nat -> Nat
> absNat = id

===============================================================================

After all that, we can now create an instance of Num

> instance Num Nat where
>     (+) = addNat
>     (*) = mulNat
>     (-) = subNat
>     signum = sigNat
>     abs = absNat
>     negate x = Z -- we don't _technically_ need this, but it's pretty obvious
>     fromInteger = integerToNat

Phew, that was fun. Next time- we’ll play with Exp, Div and Mod, and maybe some
more fun stuff.

Quick note: pushing Peano into Num gives us (^) for free (sort of), but we’ll
define it next time anyway.

Published on July 12, 2007 at 3:05 am

Peano’s Axioms Part I: Haskell and Type Theory, and Object Oriented Programming

Well, the title’s a little misleading; this has very little to do with type signatures. I decided to start a little series on what I think is the most powerful aspect of Haskell. Of course, I think that aspect is the math, specifically in the realm of type classes. I’ll give you a little background here.

I’m primarily familiar with Java’s class system. Notably, upon reading about and toying with other class inheritance systems, Java stood out as my favorite; it feels the most algebraic. One thing that should be noted is that, typically, the only “real” difference between these systems (I’m generalizing wildly) is in the world of multiple inheritance. Java, in particular, uses interfaces, which I intend to show are a kind of type class; in fact, they are about one step away from being a real live type class.

Now, let’s start talking about inheritance.

Assume we have an object, Joe. Joe is a “reified” (to borrow some language from type theory) object. That is, he’s a finite, usable object, with real methods that do real things. Effectively, he’s material; you can “poke” him, as it were.

Let’s now say Joe has some children, Jack and Emily. These are also reified objects. However, they have a special property: a “parent” object, namely Joe. This special property allows them to inherit properties from Joe (say… devilishly good looks, and a powerful intellect?). By now, the female readers are probably moaning, “Those have to come from the mother.” And that’s a good point, how do we get the mom in on this? We need multiple inheritance. Before we get there, though, let’s talk about a different kind of object: an abstract object.

An abstract object describes some underlying property of an object, but it does so through this parent/child relationship. An abstract object in our setting might be something like “Ancestor” or even “Parent”. The notable thing about abstract objects is that they don’t contain any “reified” methods, that is, methods containing code. They contain stubs which describe the signature of a method, but not the method itself. So we might have an abstract class “Parent” which contains a method “getChildren” with signature void -> List; its purpose is to return a list of the extending class’s (like Joe’s) children. The important thing is, it doesn’t force that to be the result; it only enforces the type of the content, not the content itself.

But what about multiple inheritance? Assume we have another object called “Sarah” which also extends the Parent class, and also assume that Jack and Emily are Sarah’s children too. Let’s assume further that Sarah implements getChildren differently than Joe does. How then do Jack and Emily access their respective parents’ getChildren methods? If they are fundamentally different implementations, how can they choose which to use? Further, what if we want Joe to have parents, but we also want Joe to extend the Ancestor class? Now we have a mess…

How do we fix it? Well, the answer is to create a super version of abstract classes; let’s call them interfaces.

What does an interface do? Well, in Java, no class is allowed to inherit from more than one parent class, abstract or reified. Interfaces exist outside of the usual inheritance tree, and can therefore give information about the tree as a whole. For instance, we can now create our inheritance tree by saying that each ancestor class implements the “hasChildren” interface, which forces the ancestor to have the void -> List method. We can then say that the children found in that list are instances of the Child class, rather than the class itself; this solves the problem by forcing us to access Jack through the Sarah object.

Could we have done this with normal abstract/reified classes? Sure. Would it work as well? Probably not.

So why are interfaces cool? Well, you can think of an interface as a way to describe a set of classes: classes which meet certain properties. Indeed, Java uses interfaces like this all the time. For instance, Java has an interface called “Comparable” which, as the name suggests, means that any class implementing Comparable can compare instances of itself in a meaningful way. Comparable doesn’t say how to do the actual comparison; it just says that there must be a method compare with type (using Haskell-esque syntax):


compare :: Comparable a => a -> a -> {-1,0,+1}

The Javadocs recommend that 0 should mean x == y, but it’s not required. The interface only defines the kind of interaction you can have.

Why is this cool? Well, consider the obvious example: what if I wanted to write quicksort over a generic list of elements? How do I ensure that I have a sortable list? Let’s say we have an object Foo, which has no ordering, and another object Bar, which does. I have a magic method qsort which will actually do the sorting, but how do I write the method such that I know the list is sortable? That is, if I write tests:


assertFail(qsort(List = {foo1, foo2, ...}));
assertPass(qsort(List = {bar1, bar2, ...}));

How can I ensure those both behave as expected (with the assumption, of course, that those testing constructs exist)? Well, the easy answer is to define qsort as having the method signature:


<T extends Comparable<T>> List<T> qsort(List<T> list);

(I think that’s the right syntax; it’s been a while since I wrote Java.)

This will prevent (at compile time, no less) the use of qsort on List<Foo>, because Foo doesn’t implement Comparable. Running it on List<Bar> is fine, because Bar is comparable, like we said before.

So how is this different from type classes? Well, principally, there is one difference: type classes can not only specify methods the type must have defined on it, they can also define other methods derived from those methods. A seemingly trivial extension to interfaces; however, that one simple change gives you enormous power in creating data abstractions. For instance, take the Haskell equivalent of Comparable, which is the pair of classes Eq and Ord (for equality and orderability, respectively), requiring only 2 definitions total (== or /=, and compare or <=; /= is “not equal”). We get for free about 20-30 functions, including: all your comparison and equality operators, the ability to put items implementing the class in a set, the ability to use them as keys in a finite map, sorting of implementing items, some extra list functions, and more. All that, from 2 definitions, in two type classes, along with the Prelude and Haskell’s own polymorphism capabilities.

How does that work? Well, let’s dissect some code. Here’s the Ord class:

class (Eq a) => Ord a where
    compare              :: a -> a -> Ordering
    (<), (<=), (>=), (>) :: a -> a -> Bool
    max, min             :: a -> a -> a

    -- minimal complete definition:
    --     (<=) or compare
    -- using compare can be more efficient for complex types.
    compare x y
        | x == y    = EQ
        | x <= y    = LT
        | otherwise = GT

    x <= y = compare x y /= GT
    x <  y = compare x y == LT
    x >= y = compare x y /= LT
    x >  y = compare x y == GT

    -- note that (min x y, max x y) = (x,y) or (y,x)
    max x y
        | x <= y    = y
        | otherwise = x
    min x y
        | x <= y    = x
        | otherwise = y

So, let’s dissect.

First, we declare what the class implements, that is, what methods are required to exist if the class is implemented. However, this is not the same as saying that all those methods must be defined over the structure before we can say it implements the class. We instead need only the “MRD/MCD”, or “Minimal Requisite/Complete Definition”: the minimal set of functions needed to define all the other functions the class provides. How do we know what the MRD is? Well, that’s up to you to decide. We decide by defining each function of the class in terms of other functions in the class, even cyclically if necessary. I like to call this set of cyclic definitions the “internal definitions” of the class. These internal definitions provide the real power of the type class. Also note how, in the class definition, we have an inheritance bit, in Eq a => Ord a. This allows us to assume that we have the basic equality functions provided by Eq.
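To make the MRD idea concrete, here’s a small sketch of my own (Size is a made-up type): we supply only compare, the MCD, and every other method falls out of the internal definitions above.


data Size = Small | Medium | Large deriving (Show, Eq)

instance Ord Size where
    -- only the MCD; (<), (<=), max, min, etc. come from the defaults
    compare Small  Small  = EQ
    compare Small  _      = LT
    compare _      Small  = GT
    compare Medium Medium = EQ
    compare Medium Large  = LT
    compare Large  Medium = GT
    compare Large  Large  = EQ

With just that, maximum [Medium, Small, Large] evaluates to Large via the default max, and sorting a [Size] needs nothing more.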

In the next few posts in this series, I’ll be implementing Peano arithmetic in Haskell, principally by pushing one simple data type:


data Nat = Z | S Nat

into as many type classes as I can. As of this writing, I have two more posts planned, and have pushed Nat into Eq, Ord, Num, Integral, Enum, and Real. I hope to get it into Rational and Fractional, which would turn it into Nat-QQ (the Natural Rationals, that is, all the positive Rationals), and maybe then refactor Nat into ZZ (the integers), and get the regular Rationals out of that.

Published on July 9, 2007 at 2:52 am