Peano’s Axioms IV: Advanced Functions and Integers

So here’s the last installment. We’ll make use of all the infrastructure we’ve built up to define primality and some associated functions: a divisor function, the totient, sigma, and whatever else I can come up with.

As always, this is a literate source. Save it to Peano3.lhs, make sure you have the previous posts alongside it, and execute with:

 ghci Peano3.lhs

and have fun.

No big preambles this time. Let’s go.

> module Peano3 where
> import Peano2
> import Data.List

First, all this stuff will boil down to prime numbers, so let’s come up with a way to test whether a number is prime or not.

An easy method is to create a sieve of all prime numbers; then our isPrime function is just a search on that list. Is it the fastest method in the world? Not really. Does it work? You bet it does.


> natPrimes :: [Nat]
> natPrimes = sieve [S(S(Z))..]

> sieve :: [Nat] -> [Nat]
> sieve []     = []
> sieve (x:xs) = x : sieve [y | y <- xs, y `mod` x /= 0]

Awesome, we all know about Haskell’s powerful list comprehensions. Now let’s implement that simple linear search on the list.

> isPrime :: Nat -> Bool
> isPrime x = isPrime' x natPrimes
>         where
>              isPrime' x (p:ps)
>                     | (p < x)   = isPrime' x ps
>                     | otherwise = (p == x)


EDIT: augustss mentioned that my use of seq was pointless. Theoretically it would be more efficient to use strict evaluation here, but it won’t be until I learn how to do that. It’s unfortunate ghc/hugs won’t recognize tail recursion and make it strict automatically :/

Cool. All we’re doing here is building up a chain of “does element p_k equal the given number?” checks, stopping as soon as the primes grow past it.
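A quick sanity check (my own illustration, not part of the original module; the numeric literals rely on fromInteger from the Num instance defined in the first post):

isPrime 7   -- ==> True,  the search reaches 7 in natPrimes
isPrime 9   -- ==> False, the search passes 9 and stops at 11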

Now what? Well, since we’ve defined prime numbers over the naturals, we can do some handy things like create a factorization function. We’ll just use trial division to determine the factors.


> factorize :: Nat -> [Nat]
> factorize x 
>    -- if the number is prime, we're done.
>    | isPrime x = [x]
>    -- if not, then we know we just need to find the first
>    -- factor, and then recurse on the number `div` the factor
>    | otherwise = firstFactor 
>                : (factorize (x `div` firstFactor))
>    where
>       divides x y = (y `mod` x) == 0
>       firstFactor = head 
>                   $ dropWhile (\p -> not (p `divides` x)) natPrimes

Fast? Hell no. It takes quite some time to factorize 210, and I didn’t even bother to wait for it to finish 2310, but it does work.
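For the record, here is what it eventually produces (my own examples, with the resulting Nats written as ordinary numerals):

factorize 12    -- ==> [2,2,3]
factorize 210   -- ==> [2,3,5,7], after a noticeable wait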

We know we can create a generic “closure” function, which takes a list and an operation on the elements of that list and recursively applies that operation until the list is “closed”, that is, until applying the closure again returns the same list. Let’s write that quickly.


> closure :: Ord a => (a -> a -> a) -> [a] -> [a]
> closure f ls = closure' f ls []

> -- closure' is just a helper function which keeps track of the 
> -- last list for comparison against the closed list
> closure' :: Ord a => (a -> a -> a) -> [a] -> [a] -> [a]
> closure' f ls old 
>      -- if the old list is the same as the new list, return the
>      -- list
>      | ls == old = sort ls
>      | otherwise = closure' f (performClosure f ls) ls

> performClosure :: Eq a => (a -> a -> a) -> [a] -> [a]
> performClosure _ [] = []
> performClosure f (x:xs) = [x] 
>                         `union` (map (f x) xs) 
>                         `union` (performClosure f xs)

Well, okay, it wasn’t quite that simple. However, using this, we can write the “divisors” function, which returns the set of all numbers which divide a given number. We’re going to use a nifty trick here too; I’ll take a moment to explain it.

Normally, we see closed sets defined with a “mod” function, as in the group of integers mod 10, etc. We can define the closure operation (in Haskell’s lambda notation) as being:

\ x y -> (x + y) `mod` k

for some constant k. However, this really doesn’t work well when trying to come up with a closure operator for divisors. What we need is a function which will dynamically limit the power to which each factor is raised. Fortunately, a nice mathematician named Euclid came up with a handy algorithm for creating just such a function: the greatest common divisor, the GCD.

Upon examination, you’ll note that the function:

\ x y -> (x * y) `gcd` k

will force the product to only contain appropriately sized factors, because if the multiplication creates a number with a factor raised to an exponent greater than that of the same factor in k, the gcd simply returns that factor at the lower of the two powers.
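A couple of concrete cases (my own arithmetic, with k = 12 = 2^2 * 3) show the capping in action:

(2 * 2) `gcd` 12   -- ==>  4, since 2^2 still divides 12
(4 * 6) `gcd` 12   -- ==> 12, 2^3 * 3 is capped back down to 2^2 * 3
(4 * 4) `gcd` 12   -- ==>  4, 2^4 is capped back down to 2^2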

So now, let’s create a divisors function based on that concept and the previously defined functions. We need to prepend 1 to the list because our factorization function won’t return it.


> divisors :: Nat -> [Nat]
> divisors k = S(Z) : closure closer (factorize k)
>      where closer x y = (x*y) `gcd` k
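For example (my own check, with the results written as numerals rather than S/Z chains):

divisors 6    -- ==> [1,2,3,6]
divisors 12   -- ==> [1,2,3,4,6,12]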

Pretty simple, just one more reason to love Haskell. Let’s define sigma, which is the sum of the divisors, each raised to a given power. That is:


> sigma :: Nat -> Nat -> Nat
> sigma k p = sum (map (^p) (divisors k))
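For instance (my own example), sigma 6 1 just adds up the divisors of 6:

sigma 6 1   -- ==> 1 + 2 + 3 + 6 = 12, so 6 is perfect
sigma 6 0   -- ==> 4, which simply counts the divisors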

Hmm, let’s do one more big one, how about the totient? That is, the total number of naturals x less than k that satisfy gcd(x,k) == 1.


> totient :: Nat -> Nat
> -- genericLength (from Data.List) counts straight into a Nat,
> -- since plain length would hand us an Int instead
> totient k = genericLength [x | x <- [S(Z)..(k-1)], gcd x k == 1]

List comprehensions are awesome, aren’t they?
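A quick check (my own examples, again written as numerals):

totient 12   -- ==> 4, counting 1, 5, 7, and 11
totient 7    -- ==> 6, everything below a prime is coprime to it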

Okay, last thing on our list: Integers. So far we’ve been dealing with Naturals, and as such have not had negative numbers to deal with. What I intend to create is a “smart” datatype which can cleverly increment and decrement without much difficulty. The problem with Integers is that in the naive encoding, using the standard data types, when we try to decrement a positive number (or increment a negative one) we have to make sure we remove one of the existing increment/decrement symbols rather than just add a new one.

Here’s the datatype; you’ll note its similarity to Nat.


data ZZ = Pre ZZ -- decrement an integer
	| Z      -- Zero is an integer
	| Suc ZZ -- increment an integer

Note that all we really did was relax the “0 is the successor of no number” axiom. Much of mathematics is discovered by this method of removing the restrictions imposed by some axiom, or assuming an axiom’s converse or inverse, and so on. The most popular example is that of geometry: for many years, Euclidean Geometry was all there was. However, in the 19th century, Janos Bolyai and Nikolai Ivanovich Lobachevsky (the same mathematician of Tom Lehrer’s “Lobachevsky”) independently published papers about Hyperbolic Geometry, which changed the infamous “parallel” postulate of Euclidean Geometry to say that, instead of exactly one line, there are at least two lines which pass through a point P not on a line L that do not intersect L. Riemannian, or Elliptic, Geometry states that instead there are no such lines at all. In fact, you can imagine a whole family of geometries based on the number of lines that can be parallel to a given line. For more about Euclidean and Non-Euclidean Geometries, Wikipedia has some very nice articles; links are at the bottom of the post.

So the real test is to create some smarter constructors than the ones the datatype gives us for free. The first thing, really, is to note that we can, in fact, pattern match on expressions like (x + 1), the so-called n+k patterns, e.g.


> precInt :: Int -> Int
> precInt (x + 1) = x

works just fine. So let’s use that to create two functions, s and p, which are successor and predecessor over the Integers. We’ll start a new module for this; it should be placed in a separate .lhs file called “ZZ.lhs”.


> module ZZ(ZZ(Z), s, p) where

> data ZZ = P ZZ
>         | Z
>         | S ZZ
>         deriving (Show, Eq, Ord) -- that'll allow us to skip ahead a bit

Notice how we don’t have to deal with s(Z) or p(Z) specially; that happens automagically for us via the catch-all cases.


> s :: ZZ -> ZZ
> -- if we're incrementing a negative number, we can just eliminate a P
> s (P x) = x
> -- these take care of the normal cases, just like in the Nats
> s x = (S x)

> p :: ZZ -> ZZ
> -- Now we just do p, in a similar way.
> p (S x) = x
> p x = (P x)
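A few quick reductions (my own examples) show the smart constructors canceling opposite symbols instead of stacking them:

s (p Z)       -- ==> Z, the P is eliminated rather than wrapped in an S
p (s (s Z))   -- ==> S Z, one increment is peeled off
s Z           -- ==> S Z, the ordinary case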

So now we can define addition, which is all we’ll define over the Integers. Most of the rest would be the same as or similar to the Naturals, and if you’d like to see it, I encourage you to try it; I’d love to see it working.

Here it is, Addition:


> addZZ :: ZZ -> ZZ -> ZZ
> addZZ Z y = y
> addZZ x y 
>     | x == Z = y
>     | y == Z = x
>     | x < Z  = addZZ (s(x)) (p(y))
>     | x > Z  = addZZ (p(x)) (s(y))
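Tracing a small case (my own example) shows the two arguments walking toward zero together:

addZZ (S (S Z)) (P Z)                -- 2 + (-1)
  = addZZ (p (S (S Z))) (s (P Z))    -- x > Z, shift one unit across
  = addZZ (S Z) Z                    -- y == Z now
  = S Z                              -- ==> 1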

Notably, this also gives us subtraction, given the capability of negation, anyway. Hopefully you’ve enjoyed seeing the buildup from the simple concepts of Z and S(x::Nat) to a powerful arithmetic. I think the next stop on my list is a DFA/NFA and PDA evaluator, and eventually a regex-recognizing DFA, a la Thompson’s NFA; then maybe I’ll build a Turing Machine interpreter. All plans are subject to change randomly for no reason at all. Stay tuned!

~~Joe

Wikipedia articles about various geometry stuff.
Geometry
Euclidean Geometry
Non-Euclidean Geometry
Elliptic Geometry
Hyperbolic Geometry

EDIT: fixed some indentation errors

Published on July 28, 2007 at 10:39 pm

Peano’s Axioms Part III: Making use of Haskell’s Powerful Polymorphism

This time, we’ll be looking at making use of the polymorphic capabilities of Haskell’s type system.

In Haskell, Polymorphism is provided via type variables. That is, you may have a function



foo :: (a -> b) -> [a] -> [b]

which takes a function from any type to any other type and a list of elements of the first type, and returns a list of elements of the second type. Most people call this function map, though it can represent others.

So in addition to getting the functions defined in the typeclass, we get all the Prelude functions with polymorphic type signatures.

But wait, there’s more. When we define certain kinds of functions, we often want to limit them to operate only on certain kinds of types. Like defining an instance of an interface in Java, we can specify a “scope” (though that’s an abuse of the term) for a type variable, as in the following:



foo2 :: Ord a => a -> a -> a

Here, we state that a must be an instance of the typeclass Ord (and by inheritance, an instance of Eq too.) So now we know that foo2 takes two comparable elements and returns a comparable element.
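For instance, one function that fits this signature (my own example, not from the original post) is a hand-rolled max:

foo2 :: Ord a => a -> a -> a
foo2 x y = if x > y then x else y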

Polymorphism is nothing new to any language. However, I think Haskell’s real advantage is not in the area of semantics, which are pretty much uniform across languages, at least in the case of polymorphism. I think Haskell’s advantage is in the syntax of polymorphism. Type signatures are a wonderfully simple way to express polymorphism, both this basic kind (called rank-1 polymorphism) and higher-order polymorphism (rank-2, rank-3, rank-n).

The point of this post is to show the rich library of polymorphic functions which becomes available with just a few type classes (I think we’re up to 7: one derived, 6 implemented). This, as always, is a literate file; just cut and paste it into Peano2.lhs (it declares module Peano2) and run:


$> ghci Peano2.lhs



> module Peano2 (Nat(..)) where
> import Peano
> import Data.Ratio

==============================================================
Continuing on which defining math in terms of Peano’s axioms
==============================================================

Last time I noted that we’d be working on exp, div, mod, and some other
higher-level functions. I also mentioned that we “sortof” got exp for free, in
that

S(S(S(Z)))^k,

where k is an Integer, works fine. But what if we let k be a natural? We’ll notice
readily that this fails, with the error that there’s no instance of
Integral Nat.

Why is that a problem? Because (^) isn’t in Num; it’s defined elsewhere, and its
signature is



(^) :: (Num a, Integral b) => a -> b -> a

Okay, so now what we should do is define Nat to be an Integral type. So, let’s go
for it.

=======================================

So, for Integral, we need quot, rem, and toInteger. We have the last of these from last time; it’s quot and rem that we need. So, how do we define these?

Well, we know that quot and rem are (effectively) just div and mod; in fact, not having negative numbers means they are exactly the same. Let’s then realize that mod is just repeated subtraction until the remnant drops below the modulus. Further, we realize that div is just the same, except it is the count of how many times we subtracted before meeting that condition.



> quotNat :: Nat -> Nat -> Nat
> quotNat k m
>     | k == m    = 1
>     | k < m     = 0
>     | otherwise = 1 + (quotNat (k-m) m)

> remNat :: Nat -> Nat -> Nat
> remNat k m
>     | k == m    = 0
>     | k < m     = k
>     | otherwise = remNat (k-m) m

> quotRemNat :: Nat -> Nat -> (Nat, Nat)
> quotRemNat k m = ((quotNat k m), (remNat k m))
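Working a small case by hand (my own example, with the Nats written as numerals) shows the repeated subtraction in action:

quotRemNat 7 2
-- quotNat 7 2 = 1 + quotNat 5 2 = 1 + 1 + quotNat 3 2 = 1 + 1 + 1 + quotNat 1 2 = 3
-- remNat  7 2 = remNat 5 2 = remNat 3 2 = remNat 1 2 = 1
-- ==> (3, 1)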


Now, we just instantiate Integral:




> instance Integral Nat where
>     toInteger = ntoi
>     quotRem   = quotRemNat
>     -- this fixes a problem that arises from Nats not having
>     -- negative numbers defined.
>     divMod    = quotRemNat


=======================================

But now we need to instantiate Enum and Real, oh my. Let’s go for Enum first.

Enum requires just toEnum and fromEnum. That’s pretty easy; to and from enum are just to and from Integer, which we have (modulo a conversion between Int and Integer, since the Enum methods work with Int).



> instance Enum Nat where
>     -- toEnum takes an Int and fromEnum must return an Int,
>     -- so we convert through Integer on either side
>     toEnum   = integerToNat . toInteger
>     fromEnum = fromInteger . natToInteger

Real is actually relatively easy: we’re just projecting into a superset of the
Naturals, namely the Rationals. We do this simply by pushing the value
into a ratio of itself to 1, that is

toRational S(S(S(S(Z)))) ==> S(S(S(S(Z)))) % 1



> instance Real Nat where
>     toRational k = (ntoi k) % 1

=======================================

Next time, we’ll go for primes.

Oh, and by the way, pushing Nat into Integral gave us lots of neat things, notably even/odd, gcd/lcm, the ability to do ranges like [(Z)..], and all the appropriate functions that go with those.
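For instance (my own quick checks, writing Nats as numerals; the literals and gcd lean on the Num instance from the first post):

even 6           -- ==> True, via remNat
gcd 12 8         -- ==> 4, the Prelude's gcd now works over Nats
take 3 [Z ..]    -- ==> the first three Nats: Z, S(Z), S(S(Z))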

So far, I’ve spent about one hour making all this work; you can imagine how this speed could be useful if you have defined your problem as having certain properties. Type classes are an extremely powerful tool which can help make your code both clean and powerful. In one hour, I’ve managed to build up a simple bit of code, based on some fairly primitive axioms, and create a huge amount of powerful math around it.

Imagine if you could define these same relations around data. What if you were able to define strings as having the properties of numbers? Here’s an idea:

Imagine you have some strings. You can define the gcd of two strings as the greatest common substring of the two. If you can sensibly define the product of two strings, then you get a concept of lcm as well. Granted, that may not be the best example, but you can imagine the power you can push into your data by defining an arithmetic (not even an algebra!) on it. Imagine creating an arithmetic of music (been done, sortof, check out Haskore) or pictures? I use arithmetic, because what I’m implementing here is only a little piece of the power you can instill. _This_ is why Haskell is powerful. Not because it’s purely functional, not even because it’s lazy. It’s the _math_ that makes Haskell capable of doing this. The type theory upon which Haskell rests makes Haskell powerful.

Remember, next time, Primes and Totients and Number Theory, and a datatype
representing the Integers,
Oh my!

Published on July 21, 2007 at 4:16 am