Monkey Multiplication Table Problem

This is a response to R. Andrew's post here. I have dubbed it “The Monkey Multiplication Problem.” It was a fairly neat puzzle, and I was going to post a response in the comments of his LJ, but I didn't want to bother signing up for an account over there, so instead I'll post it here.

Enjoy. And forgive the lack of comments.
I managed to do it in 9 lines of Haskell (not including module, import, or type declarations). However, I don't have any datasets larger than your 12×12 table to test against. The printout is kind of funny looking: it's designed so that if you turn it 45 or so degrees clockwise it looks right (this comes from the fact that I generate the table by columns).

code follows (17 lines, with spaces and extra decls)

module Table where
import Data.List

genTable :: Int -> [[((Int, Int), Int)]]
genTable max = map (genCol max) [1..max]

genCol :: Int -> Int -> [((Int, Int), Int)]
genCol max n = [((n,n), n*n)]
	     ++ zip z (map (\(x,y) -> x*y) z)
	where z = zip [max, max - 1 .. n + 1]  (repeat n)

printTable :: Show a => [[a]] -> IO ()
printTable = putStrLn
	   . concat
	   . intersperse "\n"
	   . map (concatMap (\x -> show x ++ " "))

monkeyTable = printTable $ map (map (snd)) $ transpose $ genTable 12

You can load that up in GHCi and type “monkeyTable” to get the printout. printTable, by the way, is general enough to apply to anything, so if you'd like to see the internal structure of the table, you can switch that “map (map snd)” to “map (map fst)”. Note that the ugliness of the monkeyTable function comes from the fact that I used tuples instead of a custom datatype, or just a more specific genCol function.
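For anyone who wants to try it without loading the module, here's a condensed, self-contained version of the same code (the 3×3 size and the list-comprehension form of genCol are just my shorthand for a short printout; the logic is the same):

```haskell
import Data.List (intersperse, transpose)

-- One column of the table: the diagonal entry, then the entries below it.
genCol :: Int -> Int -> [((Int, Int), Int)]
genCol m n = ((n, n), n * n) : [((x, n), x * n) | x <- [m, m - 1 .. n + 1]]

genTable :: Int -> [[((Int, Int), Int)]]
genTable m = map (genCol m) [1 .. m]

printTable :: Show a => [[a]] -> IO ()
printTable = putStrLn . concat . intersperse "\n"
           . map (concatMap (\x -> show x ++ " "))

main :: IO ()
main = printTable (map (map snd) (transpose (genTable 3)))
```

Rotated 45 or so degrees clockwise, the rows read as the diagonals of the usual multiplication table.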

Anywho, fun problem. I think I might use it in my local coding dojo. Have fun!


Published on December 30, 2007 at 4:25 am

Intermezzo: Mental Vomit.

Phew, it's been a long few days. I have the last few posts in the Peano series coming up, and I've been planning out a new series of posts about a project I've been toying with: a Haskore-esque music system. By the looks of it, Haskore is no longer being maintained, so maybe someday this project will actually turn into something useful, but really I'm just looking to do something nifty with Haskell.

Today, though, I’m just going to brain-vomit and talk about some random thoughts I had.

Programming the Universe, by Seth Lloyd.

Excellent book; I'm about halfway through it. It's really brilliantly written and very engaging, though it's occasionally a little too conversational for my tastes. It's also relatively thin, which is good, because it makes me feel proud to say I'm halfway through in less than 3 days, even though halfway through is 100 pages. It's also kind of unfortunate, as I'll be done with it soon, though I suppose if you have to have some problem, not wanting a book to end because you're enjoying it too much is probably a good one to have.

The book, principally, is about how the universe can be described as a giant quantum computer; it goes into information theory and its relation to thermodynamics and entropy. It talks about quantum logical operations, though (at least up to where I am) not in any real detail, although I saw some bra-ket notation in later chapters. I'm not very knowledgeable about quantum computing in general, so I hope to pick up some basic understanding from this, or at least enough of the language to know where to look for more info. I figure that in 10 years this technology will be relatively grown up, and I'll need to know it to stay current. Might as well start now.

I am a Strange Loop, by Douglas Hofstadter

I’m a Hofstadter fanboy, I’ll admit it. GEB is the best book ever. Metamagical Themas is the second best book, and hopefully, this will be the third best. I didn’t even read the back cover when I bought this, I saw “Hofstadter” and I actually squealed a little. Yes, I squealed, but honestly, DH is quite possibly my favorite non-fiction writer ever. I’m allowed to be a fanboy. It was a manly squeal. Shut up. 😛

I haven't started reading it yet, but my first impression (from a random flip to one of the pages) is that it'll be an entertaining read, to say the least. I opened to a section headed “Is W one letter or two?” I think it'll be good, and the cover art is really quite nice.

Reinventing the Wheel, and Why It's a Good Thing.

Now a little more on topic. I've been hearing more and more (or maybe I'm just noticing that people say it more and more) that we, as programmers, mathematicians, etc., should not try to reinvent the wheel. For the most part, I agree. Some problems are solved. But should we discourage people from reinventing the wheel entirely? I think there is something to be said for reinventing the wheel every now and again, especially for new programmers.

Take, for instance, the recent posts about Peano's Axioms. This has probably been done to death by other Haskellers out there, so why do I do it now? Partially because it shows the wonders of type classes, but also because the exercise of solving this already-solved problem is an excellent way to learn how the solution works, and furthermore how to apply those concepts to other problems. I mean, maybe I'm just ranting, but don't we reinvent the wheel over and over? Insertion sort is a perfectly good sorting algorithm; it does the job, and a machine running it even does the job far quicker than we could by hand. But if someone hadn't sat down and said, “Maybe this wheel isn't as great as it could be,” and reinvented it, coming up with quicksort, or radix sort, or counting sort, then where would our applications be?

Even looking at the actual wheel, how many times has it been reinvented? It started out as some big stone, then it was probably wood, then wood with metal plating, then mostly metal, and now it's a complex part with alloys and rubber and all sorts of different materials. I guess what I'm trying to say is that instead of “never reinvent the wheel,” we should say, “never reinvent the wheel, except in cases where reinventing the wheel would give us a better solution.” I suppose that's the logical resolution you get when you try to combine the original adage with this one:

“Never say never, ever again.”

Anyway, it's time to get back to procrastinating. Or not; I guess I'll do it later.

PhD. Procrastinate.

Published on July 18, 2007 at 4:07 am

Peano’s Axioms Part II: Orderability and Code

So, let's get started with this Peano arithmetic stuff. For those who do not know what Peano's Axioms are, here's a brief explanation.

Peano's Axioms define a formal system for deriving arithmetic operations and procedures over the set of “natural” numbers. Peano's Axioms are most often stated as follows, though variants do exist.

1) 0 is a natural number. (Base Case)
2) If x is a natural number, S x is a natural number (Successor)
3) There is no number S x = 0 (0 is the successor of no number)
4a) 0 is only equal to 0.
4b) Two natural numbers Sx and Sy are equal iff x and y are equal.
4c) If x,y are natural numbers, then either (x == y /\ y == x) or (x /= y /\ y /= x)
5) If K is a set such that 0 is in K, and for every natural number x, x in K implies S x in K, then K contains all the natural numbers. (Induction, adapted from Wikipedia)

(see Peano’s Axioms Wikipedia Entry)
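As a quick aside of my own (not part of the module below): the induction axiom is exactly what justifies defining functions on the naturals by structural recursion. In Haskell it can be written directly as a fold over the Nat type defined later in this post:

```haskell
-- A sketch of Axiom 5 (induction) as a recursion scheme.
-- Nat mirrors the datatype defined later in the post.
data Nat = Z | S Nat

-- foldNat z s n "proves P for n" by starting from the base case z
-- and applying the inductive step s once per successor.
foldNat :: r -> (r -> r) -> Nat -> r
foldNat z _ Z     = z
foldNat z s (S n) = s (foldNat z s n)

-- For example, converting a Nat to an Integer is just an induction:
toInteger' :: Nat -> Integer
toInteger' = foldNat 0 (+ 1)

main :: IO ()
main = print (toInteger' (S (S (S Z))))  -- 3
```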

The goal of this post is to define equality, orderability, and basic arithmetic over the naturals. We'll also see how the standard Peano numbers translate into Haskell. Let's talk about orderability for a second.

Equality is provided for by the axioms, but what is orderability? When we think about something being ordered, we can think about a “total” ordering, or a “partial” ordering. A Total Ordering is one that satisfies the following conditions.

For some Binary relation R(x,y), (notated xRy),
1) R is antisymmetric:
(xRy /\ yRx) iff (x==y), where == is equality.
2) R is transitive:
if (aRb /\ bRc) then (aRc)
3) and R is total:
(xRy \/ yRx)

A partial order lacks only the third axiom. What does all this mean, though? Well, axiom 1 gives us an important link to equality. We can actually use this fact to define equality from orderability, or vice versa. Axiom 1 also lets us make the minimal required definition for the Ord class very succinct, requiring only (<=) at the very least. In Haskell, the Ord class is a subclass of Eq, so we need to define equality first. This is not a problem, as we can always use axiom 1 to define an equality function retroactively: define (<=) as a function external to the type class, over the naturals, then define (==) as ((x <= y) && (y <= x)). We can then instance both classes together.

Axiom 2 is pretty self-explanatory: it allows us to infer an ordering of two elements from two separate orderings. One neat thing it does, which not many people point out, is make the concept of sorting possible. When we sort, we effectively chain orderings together so that the list's elements satisfy k_1 <= k_2 && k_2 <= k_3 && … && k_(n-1) <= k_n; that is, element 1 is ordered with element 2, and so on, such that the first element of the list is ordered with the last. It's relatively trivial to realize, so much so that most people don't even bother to mention it, but it certainly is interesting to see.

Axiom 3 is the defining feature of total orderings; it's similar to the law of the excluded middle. We can see how certain relations are non-total: take, for instance, the strict relation (<). When x == y, neither x < y nor y < x holds, so totality fails, and (<) gives only a partial order.
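The “define (==) from (<=)” trick can be sketched directly. This standalone snippet (my illustration, using a free-standing lte function rather than the type classes) shows antisymmetry giving us equality for free:

```haskell
data Nat = Z | S Nat

-- A free-standing (<=) on the naturals.
lte :: Nat -> Nat -> Bool
lte Z     _     = True
lte (S _) Z     = False
lte (S x) (S y) = lte x y

-- Antisymmetry (axiom 1) gives us equality for free:
eqNat :: Nat -> Nat -> Bool
eqNat x y = lte x y && lte y x

main :: IO ()
main = print (eqNat (S (S Z)) (S (S Z)), eqNat Z (S Z))  -- (True,False)
```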
> {-# OPTIONS -fglasgow-exts #-}
> module Peano (Nat(..), one, p,
>               iton, ntoi, natToInteger,
>               integerToNat) where

Defining Arithmetic based on Peano’s Axioms

First, we'll define Nat, the set of natural numbers with zero.

This covers Axioms 1, 2, and 3.

> data Nat = Z | S Nat
>   deriving Show

This encodes the concept of natural numbers. We aren't going to use Haskell's
deriving capabilities for Eq, but deriving Show is fine.

Handy bits.

> one :: Nat
> one = (S Z)

Now let's build up Eq, the equality of two numbers; this covers Axioms
2, 3, 4, 5, 7, and 8.

> instance Eq Nat where

Every number is equal to itself; we only need to define this for zero, the rest
will come with recursion for free.

>   Z == Z = True

No number's successor equals zero, and the inverse is also true: zero is the
successor of no number.

>   S x == Z = False -- no successor to zero
>   Z == S x = False -- zero is no number's successor

Two numbers are equal iff their successors are equal. Here, we state it:

>   S x == S y = (x == y)

And that, my friends, is Eq for Nat.


Now, let's define orderability; these two instances will give us some extra power
when pushing Nat into Num.

> instance Ord Nat where
>   compare Z Z = EQ
>   compare (S x) Z = GT
>   compare Z (S x) = LT
>   compare (S x) (S y) = compare x y

Easy as pie; this follows from Axioms 1, 2, and 8.

Now, we can push this bad boy into Num, which will give us all your basic
arithmetic functions

First, let's write (p), the magic predecessor function.

> p :: Nat -> Nat
> p Z = Z -- A kludge; we're at the limit of the system here.
>         -- We'll come back to this when we start playing with ZZ
>         -- (the integers)
> p (S x) = x


Here's (+), in terms of repeated incrementing.

> addNat :: Nat -> Nat -> Nat

First, we know that Z + Z = Z, but that will follow from the following

> addNat x Z = x
> addNat Z y = y
> addNat (S x) y = S (addNat x y) -- (1+x) + y = 1 + (x+y)


Heres (*)

> mulNat :: Nat -> Nat -> Nat

Simple; here are our rules:
Z * x = x * Z = Z
SZ * y = y * SZ = y
Sx * y = y + (x * y)

In the code we use an accumulator version: count down on the smaller operand,
adding the larger one at each step.

> mulNat _ Z = Z
> mulNat Z _ = Z
> mulNat a b
>   | a < b     = mulNat' a b b
>   | otherwise = mulNat' b a a
>   where
>     mulNat' x@(S n) y orig
>       | x == one  = y
>       | otherwise = mulNat' n (addNat orig y) orig


We're gonna stop and do integerToNat and natToInteger real quick.

> natToInteger :: Integral a => Nat -> a
> natToInteger Z = 0
> natToInteger (S x) = 1 + (natToInteger x)

Easy enough; here's integerToNat:

> integerToNat :: Integral a => a -> Nat
> integerToNat 0 = Z
> integerToNat k = S (integerToNat (k-1))

Pretty nice, huh? Let's add a couple of aliases.

> iton = integerToNat
> ntoi = natToInteger


Now we just need (-); we'll talk about abs and signum in a second.

> subNat :: Nat -> Nat -> Nat
> subNat x Z = x
> subNat Z x = Z
> subNat x y = subNat (p x) (p y) -- truncated subtraction: underflow stops at Z


Phew. After all that, we just need to define signum. abs is pointless on Nat,
because every number is either positive or Z, so signum is equally easy:
since nothing is less than Z, it never needs to return a negative value.

> sigNat :: Nat -> Nat
> sigNat Z = Z
> sigNat (S x) = one

and abs is then just id on Nats

> absNat :: Nat -> Nat
> absNat = id


After all that, we can now create an instance of Num

> instance Num Nat where
>   (+) = addNat
>   (*) = mulNat
>   (-) = subNat
>   signum = sigNat
>   abs = absNat
>   negate x = Z -- we don't _technically_ need this, but it's pretty obvious
>   fromInteger = integerToNat

Phew, that was fun. Next time we'll play with Exp, Div, and Mod, and maybe some
more fun stuff.

Quick note: pushing Peano into Num gives us (^) for free (sort of), but we'll
define it next time anyway.

Published on July 12, 2007 at 3:05 am

An Analogy for Functional versus Imperative programming.

I was thinking the other day, about many things, as we drove back from my Aunt and Uncle's in NY. I had been discussing C vs. Haskell with my father and, more generally, functional versus imperative programming, trying to explain why functional programming is useful, particularly in areas relating to parallelism and concurrent code. To put this in perspective, my father has been writing low-level system verification code for a very long time (he started way back in the 70s), so he's pretty situated in the imperative world, with a (vast) knowledge of C and languages like Verilog. Me, I've only been writing code since I was 14 or 15. I have far more knowledge of languages like Scheme, Haskell, and, to an extent, the ML family, and I also regularly use Java and friends. So it's safe to say that trying to bridge our respective expertises is most often quite difficult. Anyway, I came up with a pretty clever analogy, I think, for how functional and imperative programs relate.


I have no idea if this analogy has been used before, but if it has, kudos to whoever came up with it. Basically, it goes like this:

An imperative language is monolithic; it can effectively be modelled as one giant state machine that gets permuted. Like a Rube Goldberg machine, you don't design an imperative program to manipulate inputs, you design it for its side effects.

Here in Massachusetts, we have the Boston science museum, in which there is a Rube Goldberg machine (RGM). It is (next to the math room) by far one of my favorite exhibits. But what does an RGM do? Put simply, it's just a while(1) loop. It ferries some bowling balls or whatnot to the top, and then drops them. The interesting part is the side effects: the bangs and whizzes and clanking and ratcheting of the chain as various balls drop and gears spin and cogs do whatever they do (cogulate, I guess). The point of the RGM is to manipulate the world indirectly. Someone, at some point, “said” to the machine, “start.” From thence, it has worked to make noise and spin and do all that nifty stuff.

So RGMs, you say, are mildly useless, right? Well, we'll come back to that in a minute, but suffice to say that, like everything else in the world, they have their place.

So if an imperative language is like an RGM, what's a functional language?

Well, let's realize that effectively, all a program does is turn some set of inputs into some set of outputs. Kind of like how a factory may take in some raw materials (steel, plastics, etc.) and create a new, better “material” from them (e.g., a car). A language does the same thing: a user “inputs” a query to a database server (technically, he, or someone else, has given the program the database too — kind of like currying, here, hmm), and after the query, you get back a database entry, or a list thereof, matching your search parameters.

Now, an RGM-type machine, or more accurately a monolithic program, which is typically written in an imperative language (though you can write monolithic functional programs), takes an input and, completely internally, turns that input into an output. Kind of like a whole factory in a box: useful, yes; reusable, not necessarily. A functional approach, on the other hand, is like the components that make up a factory. When I write a program in Haskell, I have a series of functions which map input data to intermediate data, functions which map intermediate data to intermediate data, and finally functions which take intermediate data and map it to output data. For instance, if I want a function which takes a list and returns it along with its reversal, in Haskell I'd write:

myReverse :: [a] -> [a]
myReverse [] = []
myReverse (x:xs) = myReverse xs ++ [x]

retPair :: [a] -> ([a],[a])
retPair ls = (ls , myReverse ls)

So you can see how retPair starts the chain by taking input: it copies ls, sends one copy straight to the output, and sends the other copy to another “machine” in the factory, which turns a list of anything '[a]' into a (reversed) list of anything. The result is then sent to output with the original, as a pair '([a],[a])'.

You can see this in the diagram:

................/---------[a]->[a]-------\...they get............
>---------------|split ls................|---([a],[a])-->output..
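Here's the little factory as a complete, runnable program, so you can watch both outputs come off the line:

```haskell
-- Two small "machines" and the factory that wires them together.
myReverse :: [a] -> [a]
myReverse []     = []
myReverse (x:xs) = myReverse xs ++ [x]

retPair :: [a] -> ([a], [a])
retPair ls = (ls, myReverse ls)

main :: IO ()
main = print (retPair [1, 2, 3 :: Int])  -- ([1,2,3],[3,2,1])
```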

So what does this “individual machine method” give us? For one, it's free to reuse: it's very easy to pull apart this “factory” of “machines” and reuse any given “machine” in some other “factory.” That would not be as easy if we had written it procedurally, as in C/C++. I can hear the screams of imperative programmers now: “We would have written exactly the same thing, more or less!” I know, and don't get me wrong, you _can_ write this “factory style” code in C/C++, but what about less trivial examples? In Haskell, I can only write pure functional code (barring monads, which are borderline non-functional), whereas in C/C++, writing this kind of reusable code is often hard. In a functional language, this style is almost implicit in how you think about code. The point I'm trying to make is simply this: FP-style languages force you to write (more or less) reusable code, while imperative languages in many cases force you to write once-off code you'll never see again. I'm not saying this makes FP better; in fact, in a huge number of cases I, as an FP programmer, have to write one-off, imperative-esque, state-mangling RGMs to get things done. The point is that Haskell helps me avoid those things, which makes for more reusable code.

Another thing: FP is famous for being “good” at concurrency, and this analogy works wonders at explaining why. Think about the factory example. When I split ls into two copies, I split the program into two “threads.” I effectively set up an assembly line: when I send the copy of ls down to the myReverse function, you can imagine a little factory worker turning the list around pi radians so that it's backwards, and sending it down the line. You can even imagine the type restrictions as another little worker who hits a siren when you send down the wrong material.

Imagine, however, trying to parallelize an RGM. RGMs often depend on not being made concurrent, even if that wasn't the designer's intention. Imperative programs fight the programmer with things like deadlocks (two balls in the RGM get stuck in the same spot) and race conditions (two balls in the RGM racing towards the conveyor belt, with no way of determining who will win — how do you handle that?), whereas FP implicitly allows multiple “workers” to manipulate their own personal materials and make their own personal products at their own stations. In a purely functional program, each function is implicitly a process; you could even go so far as to give each its own thread. Each machine's thread would just yield until it got something to work on, do its work, and go back to waiting. It doesn't matter which piece of information gets to the next machine first, because the machine simply waits until it has everything it needs to execute.

Bottlenecking (a common problem in all code) is also easier to see in this “factory” style view, since a bottleneck will (*gasp*) look like a bottleneck: many functions all outputting to a single function. That's a sign that it's time to break it up, or to run two copies of it in parallel. FP makes this stuff simple, and that makes FP powerful.
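As a toy sketch of my own (not from the analogy above, and simplified to one line with two machines), here's the “each machine waits for its materials” idea using forkIO and MVars from base:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Each "machine" is a thread that blocks until its input material
-- arrives, does its work, and passes the product down the line.
main :: IO ()
main = do
  belt1 <- newEmptyMVar  -- conveyor between machine 1 and machine 2
  belt2 <- newEmptyMVar  -- conveyor to the loading dock

  -- machine 2: reverses whatever list arrives on belt1
  _ <- forkIO $ do
    xs <- takeMVar belt1           -- yields until material arrives
    putMVar belt2 (reverse xs)

  -- machine 1: produces the raw material
  _ <- forkIO $ putMVar belt1 [1 .. 5 :: Int]

  -- the loading dock: waits for the finished product
  result <- takeMVar belt2
  print result  -- [5,4,3,2,1]
```

Notice that it doesn't matter in what order the two threads are scheduled; machine 2 simply blocks on takeMVar until its input exists.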
For a language to have true power, it must let the programmer think only about what he wants to do, not how to do it.

On the other hand, the imperative programming world has a number of excellent things going for it. Imperative code is remarkably good at clever manipulations of the machine itself; it is, in some ways, “closer” to the machine than a functional language could ever be. So even though it has its share of problems (parallelism and code reuse are the two I think are the biggest), it has benefits too. Code in C is well known to be very fast; object-oriented languages are, I think, best described imperatively, and OO is a wonderfully intuitive way to think about programming. Imperative programming also makes it infinitely easier to deal with state, which can be a good thing, if used properly. Don't get me wrong, I love monads, but even the cleverness of monads can't compare to the ease of I/O in Perl, C, Java, Bash, or any other imperative language.

Hopefully this was enlightening. Again, I want to say: imperative programming isn't bad, just different. I like FP because it solves some big problems of imperative programming. Other people are of course allowed to disagree. It's not like I'm Jake or anything.

Published on July 5, 2007 at 5:21 pm

Programming Languages Part II: Beginner Friendly

Title says it all, really, I’m not asking for much.

Recently, my girlfriend asked what languages were good for a beginner to learn. Her sister is interested in computers, is going to community college soon, and wants (or at least, is wanted) to prepare for possible classes in programming by learning a simple language which will teach her the fundamentals.

Myself, having been raised on a number of languages, Common Lisp and VB6 among others, I immediately thought, “Scheme.” I soon realized, though, that if this girl is going to community college, chances are they don't really want to teach her to be a brilliant, deep-thinking, professorial type of person, but rather a run-of-the-mill, decent, get-the-job-frakking-done coder. Now, I want to say: run-of-the-mill coders are not run-of-the-mill intelligent people; they are often orders of magnitude smarter than most. Maybe I'm biased, but to give some perspective, I consider my father, who has a Bachelor's degree from Northeastern University in Boston and about 25 years of experience in the field, to be a run-of-the-mill coder. There is no one on the planet who I think is smarter than my father, not even me. Now that you have that nice perspective thing, realize that though Scheme is a wonderful language for learning about CS as theory, and even math to some extent, it presents an unfortunately distorted world view. In the real world, we write code in an imperative style (though that's slowly changing, and I'm quite happy about that). In the real world, we write code in C, Java, or similar languages. In the real world, we write mostly object-oriented code. In the real world, we generally solve problems iteratively, using arrays as a principal data structure, not recursively with lists as a principal data structure. So I said to my girlfriend, “Well, you have a few options,” and continued to think about what makes a good language. Here's my list:

  • Easy to Setup

Chances are, if this is your first programming language ever, you might not understand all the intricacies of setting up a compiler/editor chain, or an IDE, or whatever, so this is obviously critical: you can't use a language you can't set up. This is why languages like VB are so popular; in my day, Visual Studio was trivial to install and use, and that's why I used it. I could have just as easily learned C or C++, my brain was plenty big enough for them, but I couldn't grasp all the arcane mysticism of the GCC compiler at that time, so how was I supposed to do anything? In this area, I think Scheme beats Java. Notably, there is mzscheme's DrScheme, which I still use when I write Scheme code. It is an excellent, intuitive IDE for Scheme; it just works. It's as easy to install as VS was (if not easier), and I really wish I had found it before Dad's copy of VS. Java, though I think a better “real world” option, is a little tougher to set up. Obviously an editor/compiler chain, though a wonderful no-frills way to write code, is not necessarily as intuitive to a complete beginner as a nice IDE. And since Eclipse (admittedly, the only Java IDE I've ever used) is not designed, like DrScheme is, to be a learning tool, there is a lot of superfluous stuff that I, as a seasoned Java programmer, might use, but that to a beginner is just clutter. Clutter, as far as I'm concerned, means confusion.

  • Good Errors

I've known many languages in my time, and many, many of them had the worst, most inconceivably bad error messages in the world. It's getting better, but even now, with languages suited to teaching, like Haskell, the error messages are archaic and often unreadable. Mind you, they are only such to the uninitiated Haskeller, and as such I don't have a problem with deciphering “ambiguous type variable” messages and type errors and other such things. But to a beginner, all these kinds of errors perpetuate the myth that writing code is hard and should be left to the realm of the ubergeek. So my opinion here is pretty standard: good errors => good beginner language.

  • Results Oriented

By “results oriented” I mean that it should be easy to see the results of your work. The idea is simple: when someone is learning a language, they want to see their hello-world program just work. They don't want to go through the work of writing line after line of code and then 12 different commands to compile it, only to finally type that brilliant ./a.out command and see that they misspelt hello as hewwo. The point is, a beginner programmer, more than anything, needs encouragement. If a language can't give a beginner a positive boost every time they do something right, then it's not a good language for a beginner. HTML/JavaScript are both wonderful examples of 100% results-oriented languages. If I write an HTML file and look at it in a browser, I know immediately whether I did it correctly or not; I know exactly whether it looks and reads the way I think it's supposed to. Similarly with JavaScript, it's trivial for me to see whether my script works or not. This kind of language lets the newbie programmer just get results, and that's the best thing a newbie programmer can have.

So, by now, you probably want to know what I thought was the best language to learn. Well, I didn't pick just one, but for what it's worth, here is my list of the top few good beginner languages:

  1. HTML/Javascript
  2. Python
  3. Ruby
  4. Scheme or Java
  5. Other Web Languages (PHP, ASP, etc)
  6. C
  7. C++
  8. Perl
  9. Haskell/Erlang/ML et al
  10. Assembler?

Those rate from 1, being the absolute best language I think a beginner could learn, to 10, being the absolute worst language for a beginner. I split up C and C++ because the object-oriented stuff in C++ makes it even more complicated than C alone with its pointers. Between GCC's mild, jovial inanity and pointers, C is just a little too tough for me to think it's a good option for a beginner. I want to mention that I am not judging these languages absolutely; I know most of them (though with considerably less experience than I would like in several) and think they are all quite wonderful. I'd especially like to learn Python soon; it's been a while since I picked up an imperative language.

Oh, by the way, I only tossed Assembler on the list to make it an even ten. Assembler is a terribly confusing subject to the uninitiated, and makes a good +infinity on the list. I suppose the list should also have a 0, which would be the metamagical, mythical language “JustRight,” where no matter what you type, there are never any bugs, it compiles and runs in optimal space and time, all NP problems become P, and magical unicorns prance in fields of cotton candy and happiness outside your cube.

Then again, you could argue JustRight would present a distorted real-world view too.


Published on June 2, 2007 at 2:24 am

Programming Languages Part I: Syntactic Similarity

I like languages. When I was younger, I remember reading The Hobbit and spending more time on the first two preface pages about the Moon Runes, a gussied-up version of the Futhark, than actually reading the book. For a good bit of time after that, I wanted to be a linguist, not a mathematician.

But alas, over time my interests went from natural language to formal language, from formal language to abstract language, and from there to the wide world of algebra and logic. A good transition, I think. Nevertheless, I still love languages, and that love has now turned specifically to programming languages. I like these languages because they are first and foremost utilitarian: they are designed from the start to do one thing, get a point across to an idiot. Let's face it, programmers and computers alike are pretty damn dumb; the language is the smart thing. A programmer has no capability to tell a computer what to do without the help of a good language, and a computer can't do anything it isn't told to do. So the most fundamental piece of computer technology is the language, the layer that binds programmer with programmee.

I love languages, but I often hate them too. Take, for instance, Java. Java is an exceptionally pretty language, but it's also ugly as your Aunt Anita Maykovar (May-ko-var). Java effectively boils down to one syntactic structure, the class. Every Java file revolves around it, and in some ways this is really good. The fundamental commonality this structure brings makes Java code easier to learn to read: your brain is good at processing things that are similar, the pathways it has to draw are all about the same, so it's easier to optimize up there. The issue I have with Java is that sometimes it's too good at looking the same, to the point where I forget where I am. I get lost in my own neural pathways while I try to figure out whether I'm looking at an abstract class or an interface, or whether I'm looking at a dirty hack of a data-bearing class or at something more legitimate. C/C++ is great at making this distinction, but it's also, IMO, ugly as shit, uglier even. I often like C for its ability to compartmentalize things, but I think it takes it too far: nothing looks alike, even when it should. One of my peeves with C vs. Java is that they take extreme views on something which should be easily decided. I'd like to sum it up as a fundamental rule I want to see in all programming languages (though that will probably never happen). Here it is:

Syntacticly similar things should be Semantically similar things, and vice versa, according to the proportion of similarity.

That is, suppose I want to create an object called “List” which implements a linked list, and then I want to create an interface (which is really just a type class, I've come to realize, but that's a story for another day) called “Listable” which, when implemented, forces an object to have list-like properties. These things should have some similar structure. However, this is not to say we should copy Java; Java takes this too far, I think, in that it follows the rule “if things are semantically similar, they are structurally almost identical.” This is bad. Interfaces should look different than classes, but only in a minor way. I'd like Java interfaces, heck, Java in general, a lot more if I could specify type signatures a la Haskell. I think Haskell has got it right when it comes to how types should work syntactically. The brilliance of this comes out when you try to write a Java method with a type sig a la Haskell's syntax. Here's Fibonacci:

fib :: public, static :: int -> int

fib(nth) {
	return (nth <= 2) ? 1 : fib(nth-1) + fib(nth-2);
}

(I think that’ll work, but it’s been a while, so I might have it wrong.)
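For the record, here’s what the real Haskell version looks like, the thing I’m borrowing the syntax from: the signature floats free of the definition, exactly the separation I want out of Java. (A sketch; note the base case covers everything up through 2, so the recursion actually bottoms out.)

```haskell
-- The signature lives on its own line, completely apart from the body.
fib :: Int -> Int
fib n
  | n <= 2    = 1
  | otherwise = fib (n - 1) + fib (n - 2)
```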

Granted, there are issues with this. Notably, Java has things like side effects, but these could be built into the type signatures. I think the ultimate benefit of this kind of type signature is a syntactic separation of concerns. Overall, this would make the language as a whole a lot cleaner, as interfaces would no longer have stubs like:

public static int fib(int nth);

which, though nice, doesn’t carry the same amount of information that could be held in something like:

fib :: public, static :: int -> int

Syntactically, the latter structure is more extensible; it could allow for the incorporation of things like side effect tracking, or thread signatures, which might look like:

fib :: public, static, threaded :: int -> (int, int, …) -> int

which says that fib is effectively a method which takes an int, to an unspecified number of threads, to an int.

I’m really just spitballing at the end here, with some neat ideas I think that a small change in Java’s syntactic structure could bring.

Just my thoughts.

PS: I don’t know if the Syntactic/Semantic Similarity rule is well known, but either way, it’s a damn good idea.

Published in: on May 18, 2007 at 4:09 am  Leave a Comment  

Haskell: The Good, Bad, and Ugliness of Types

I’ve started to learn Haskell. For those who don’t know, Haskell is a wonderful little language built on Lazy Evaluation, Pure Functional Programming, and Type Calculus.

Effectively, this means that, like Erlang and other sister languages, if I write a function foo in Haskell and evaluate it at line 10 in my program, then evaluate it again at line 10000, or 10000000, or any other point in my code, it will, given the same input, always return the same value. Furthermore, if I write a function to generate an arbitrary list of ones, like this:

listOfOnes = 1 : listOfOnes

Haskell just accepts it as valid. No questions asked. Schemers and ML’ers of the world are probably cowering in fear; recursive definitions like this are scary in an eager language. But Haskell is lazy. Where the equivalent definition in Scheme:

(define list-of-ones
(cons 1 list-of-ones))

would explode and probably crash your computer (that is, if the interpreter didn’t catch it first), in Haskell it’s not evaluated till it’s needed, so until I ask Haskell to start working on the listOfOnes structure, it won’t. I like languages like that. IMO, if a language is at least as lazy as I am, it’s good.
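To see the laziness in action, a self-contained sketch: nothing about listOfOnes is ever computed until take demands a prefix.

```haskell
-- An infinite list of ones. Legal, because nothing is built until demanded.
listOfOnes :: [Int]
listOfOnes = 1 : listOfOnes

-- take forces only the first five cons cells, so this terminates.
firstFive :: [Int]
firstFive = take 5 listOfOnes  -- [1,1,1,1,1]
```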

The third really neat thing about Haskell, and what really drew me to it in the first place, is the Type Checker. I’ve used Scheme for a while now, and I love it to death. Sometimes, though- Scheme annoys me. For instance, I was working on a function like this once:

;count-when-true : [bool] x [num] -> num
;supposed to be a helper for filter, I want to do a conditional sum. So I pass in (filter foo some-list-of-numbers) and some-list-of-numbers,
;and I should get out a sum of the elements
(define (count-when-true list-of-bools list-of-numbers)
  (cond [(or (null? list-of-bools) (null? list-of-numbers)) 0]
        [(car list-of-bools)
         (+ (car list-of-numbers)
            (count-when-true (cdr list-of-bools) (cdr list-of-numbers)))]
        [else (count-when-true (cdr list-of-bools) (cdr list-of-numbers))]))

This probably has bugs in it, doesn’t work right, etc., but the idea is to return a conditional sum. Now, I want to use this on lists, that’s how it’s defined, but sometimes the calling function would try to call it on atoms instead of lists. Big problem? Not really. Pain in the ass to find? You bet. The issue was, when I was trying to figure out what was wrong, Scheme didn’t realize that the types of the inputs were wrong. This would have made the error obvious, but Scheme doesn’t care about types. That’s its principal strength, until it starts making bugs hard to find. I HATE it when it’s hard to find bugs.

Let’s face it, as programmers, we suck; we write lots of buggy functions, and things are generally done wrong the first (two or three… thousand) times. Programming is a recursive process: we write some code, run it, check for bugs, fix bugs, run it, check, fix, etc., until we get tired of finding bugs/the program doesn’t come up with any. IMO, languages should not be designed to force programmers to write bug-free code, which seems to be the consensus today. At least, that’s what I gather from the interweb and such. The goal should be to make all bugs so blatantly obvious that when the programmer sits down and tries to debug his program, he can’t help but smack himself in the face and proclaim, “!@#$, I missed that!” This is where Haskell shines.

When I write Scheme, I typically don’t want to be burdened by knowing which types go where. Scheme is great at this; however, it takes things too far, I think, in that it forces you to never have types. Sure, typed Schemes exist, but most of them suck, because Scheme isn’t designed for types. Don’t get me wrong, typed Schemes are wicked cool, and I’ve used types in CL too; they’re great, especially when you want to compile. So to solve the problem of not having types, we invented contracts, which are cool. For the unenlightened: a contract is a specification of what the given data structure or function does in terms of its arguments, e.g.:

+ : num * num -> num
toASCII : string -> num
toCHAR : num -> string


These can be read as follows:


+ is num cross num to num

in english

+ is a function which takes two numbers and returns another number.

In Scheme, these contracts are basically comments, so type checking is left to the programmer. This is all well and good, but I find it often leads to the practice of what I like to call single-typing, in which the programmer attempts to force all of his data to have the same type, or lists of the same type, or lists of lists, or etc. Typically, this results in convoluted data structures which give FP in general a bad name. I’ve seen some horrible code written by single-typers; it’s bad, horrific even. It makes me want to gouge out my eyes with a pencil and tear my brain out… Okay, maybe it’s not that bad. Still, single-typing is most often bad. So how does Haskell fix it?

By not changing a thing.

Contracts are a wonderful idea. They work, they just don’t work in Scheme, because Scheme wasn’t designed for them. Haskell has type inference; you don’t ever need to touch the Type Calculus capabilities of Haskell. You can, more or less, literally translate Scheme to Haskell with minimal difficulty. (Though it may be easier just to write Scheme in Haskell.) But the brilliance of Haskell is this:

Here’s the standard Factorial function in Scheme:

;fac : int -> int

(define (fac x)
  (cond [(= 0 x) 1]
        [else (* x (fac (- x 1)))]))

Here it is in Haskell:

fac :: Int -> Int
fac x
  | x == 0    = 1
  | otherwise = x * fac (x - 1)

(I used a ML style to make things look the same.)

The only real difference (besides some syntax changes) is the lack of the semicolon in front of the contract.

But what does all this do? Well, the difference comes during evaluation, watch this:

In Scheme:

(fac 1.414)

we have an infinite recursion, because:

(fac 1.414) -> 1.414 * (fac 0.414) -> 1.414 * 0.414 * (fac -0.586) …

In Haskell:

fac 1.414

is a type error, and the whole thing kersplodes. Over, Evaluation Done, Haskell has Denied your function the right to evaluate.

In short, you have been rejected.

Enough about the wonderfulness of the Type system. My title says the Good -> Bad -> Ugliness; obviously we’ve seen the good. How about the Bad?

Type Errors in Haskell:

Type errors in Haskell suck, easy as that. They’re hard to understand and, in general, not very helpful. Further, a lot of the differences between types are very subtle. For instance, consider the factorial function again (just the type contracts, for succinctness):

fac0 :: Int -> Int
fac1 :: Num a => a -> a

They look equivalent, right? Wrong. Num is not Int; it includes the Reals too.* So no lovely type errors here. These things are unfortunate, yes, but nothing’s really perfect. I could deal with this, but what I can’t deal with is exactly the problem I hoped to solve with Haskell: my bugs are hard to find. Well, not hard to locate; I know exactly where they are, I just can’t decipher the cryptic text from the Haskell error stream to know exactly what the bug is. So I have to resort to piecing through the code bit by bit, trying to figure it out.
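(In real Haskell the Num version is spelled with a class constraint. A sketch of how that polymorphic signature quietly re-admits the Scheme behaviour:)

```haskell
-- Polymorphic over any Num with equality: now Doubles sneak in.
facNum :: (Num a, Eq a) => a -> a
facNum x = if x == 0 then 1 else x * facNum (x - 1)

-- facNum (5 :: Int) is 120, but facNum (1.414 :: Double) recurses forever,
-- stepping right past zero just like the Scheme version did.
```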


Type Signatures are Ugly:

I Like Contracts, but Haskell doesn’t technically use them. Haskell has type signatures. Which are different.

So far, I’ve written contracts like this:

F : S * T * U * … -> D

I could also have:

F : S * T * … -> (D1, D2, …)

or if I wanted HOF’s

F : (G : X -> Y) * … -> (D, …)

these are all pretty easy to understand (if you know how to read the shorthand). We know exactly what the arguments should be: elements of the typed-set S, or T, etc. We also know exactly what the return types are: elements of the typed-set D, or ordered k-tuples of elements of typed-sets D1 through Dn, etc. Equivalent signatures in Haskell are:

(assuming f = F, and any capital letter is a valid type, and that …’s would be replaced with types in the end result.)**

f :: S -> T -> U -> … -> D
f :: S -> T -> … -> (D1, D2, …)
f :: (X -> Y) -> … -> (D, …)
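To be fair, the pile of arrows does buy something concrete: partial application. A quick sketch (names are mine, for illustration):

```haskell
-- Every arrow is a one-argument function returning another function.
addThree :: Int -> Int -> Int -> Int
addThree x y z = x + y + z

-- Feeding in the first two arguments leaves an Int -> Int.
addFive :: Int -> Int
addFive = addThree 2 3
```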

Now, I understand that, since Haskell is Lazily evaluated, we want the type signatures to be heavily curried, hence the load of arrows. Honestly, though, how hard is it to convert all that to a form Haskell can use? I’m not saying get rid of the arrow version; maybe just add an option to provide a “normal form” version. I shouldn’t have to add these in my code as comments solely so I can understand what’s going on. I understand that the implication method more accurately reflects what the compiler is doing, but as a programmer, I don’t really give a rat’s ass what the compiler is doing. As a mathematician,

foo :: Int -> String -> Num -> Bool

looks ugly. Do I know what it means? Yes. Do I like the way it looks? No. I grasp that, for a Haskell compiler, reading these kinds of signatures makes things easier, and further, that these definitions make things easier to prove correct,*** but damnit Haskell, I’m a mathematician, not a miracle worker. I want to be able to read those definitions intuitively, and not have to muddle around trying to figure out exactly what the signature represents. It’s ugly, fix it.

On that note, I am beginning to work on some Haskell Code which will convert a Type Signature of the form:

f :: S^n1 * T^n2 * … -> (D1,D2, … Dn)

to the form:

f :: S -> S -> .. n1 times .. -> T -> T -> ..n2 times.. -> (D1, D2, … Dn)

and hopefully, given some user input, the latter to the former as well. (This is not much harder, sort of, but I can’t know what the normal form of the type signature should be without some user input about the in-arity (arity) and out-arity (ority) of the function.)
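A first pass at the easy direction might look like this (a sketch; expand and its argument format are made up for illustration, not the final design):

```haskell
import Data.List (intercalate)

-- Expand a list of (type, arity) pairs plus a result type into the
-- curried arrow chain Haskell wants. (Hypothetical helper.)
expand :: [(String, Int)] -> String -> String
expand args result =
  intercalate " -> " (concatMap (\(t, n) -> replicate n t) args ++ [result])

-- expand [("S", 2), ("T", 1)] "(D1, D2)"  ==  "S -> S -> T -> (D1, D2)"
```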

Anywho, Haskell is awesome, go play with it.


*= Aside: I’m quite glad Haskell calls them Reals and not something silly like Float (though that is allowed) or Double. Us Mathematicians have had these names for years; IEEE can call the format double precision floating point or w/e the hell they want, they’re reals, not doubles. Silly computer scientists…

Edit: Note that in fact I understand that floats != reals, but it’s about state of mind. I know I’m working on a computer, and so I’m not going to treat things as reals, but I want to be thinking as if I’m not limited, so that when I work with my code, I’m not tuning the algorithm to work with the computer, I’m tuning the computer to work with my algorithm. In this way, the problem becomes a problem of making the compiler better, rather than hacking my algorithm to work.

**= Haskell doesn’t really like capitalized function names.

***= Proofs of correctness are done through the Curry-Howard Isomorphism, which effectively states that a function’s type signature corresponds to a statement of logic, and the function itself to a proof of that statement: if the signature is a valid statement of logic, a function of that type can exist. Note that this requires the signature to be correctly written, i.e.:

concatString :: String -> String -> String as a signature for a function which zipped two strings together would be “correct,” but only in the sense that the contract would be satisfied. A proof of correctness here only means that a function of that type can exist; there are other methods related to this Isomorphism which allow for a better proof of semantic correctness, as opposed to the more syntactic flavor of Curry-Howard.
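A tiny concrete instance of the isomorphism, for the curious: the type below reads as the proposition “A and B implies A,” and the function is its proof.

```haskell
-- Read the type as a proposition; the definition is the proof term.
proj1 :: (a, b) -> a
proj1 (x, _) = x
```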

Published in: on May 1, 2007 at 9:14 pm  Comments (5)  

Compilers and Orchestra

When I started this silly blog thing, I had hoped to be able to post in it on a regular basis about current events and happenings in my many (yeah, right) mathematical (sometimes) travels (sittings).

Needless to say, it didn’t work out that way.

Here’s what I’m working on, and what I’m not working on anymore:

Orchestra, A CAMaCS:

Orchestra is designed to be a Composer Assistant, something that helps you get out of a rut by learning from previous compositions and applying rules you supply to suggest the next few notes of whatever you’re working on. I hope to have the paper I’m writing about it, as well as the actual source, up on my site soon. It’ll be pretty bare. I don’t think I’m going to get to the GUI any time soon, but it’ll be workable, more or less, if you don’t mind hand-editing code to make it work… 🙂

Joe, a mini java compiler:

I’m taking CS41 something or other, a Senior level CS course,

I’m a Sophomore-level Math major.

I suck.

I don’t think I’m going to be able to finish it. Fortunately, I only took this course as an elective; I don’t need it to graduate, so I should be fine when I fail it miserably.

The material is interesting, I just don’t have the code-writing skill to handle the requirements.

Anywho, I intend to take PLT next year (what I should have taken this year), which I do want for my “Major” (which is Math + Algebra + AI + Functional Programming Languages + Logic = Math w/ a concentration in Logic and Algebra, and a healthy smattering of AI and Formal Languages). With all that, I hope to end up working with Automated Theorem Proving/Proof Assistant systems (hence my inspiration for Orchestra, an Automated Music Writing system, more or less), but who knows.



Published in: on March 31, 2007 at 5:38 am  Leave a Comment  

Of Compilers and Virtual Things

C++ is ugly and it sucks ass.

Java is pretty.

I’m writing a compiler/virtual machine, and I’m considering just saying “fuck it” and doing it in Java. Because I know Java, and Java is pretty.

But I don’t know, it might end up slow, and that would make me sad; there’s nothing worse than a slow VM/Compiler.

I’m just so annoyed with the state of computer languages with respect to math these days. The only really worthwhile languages are all about computers. I mean, come on… who wants a computer language to primarily do computer stuff? But seriously, I need a language that will have all the beauty and awesome power of languages like Lisp and Perl, but still be efficient and portable enough to be useful. So I finally said, “screw it, I’ll do it myself.” I think my current method will be to write the VM in C++, for speed purposes, but maybe I’ll write the final compiler in Java, because it will be a significantly more complex bit of code. In C++, debugging anything bigger than a few lines hither and thither is a bitch, but for the speed it offers… I don’t know, maybe I can find a decent cross compiler and then just go back and comb out the resulting code. I suppose I should get back to learning about templates in C++, ugly little bastards. I hate pointers. Hate them hate them hate them…


Published in: on January 8, 2007 at 4:28 am  Leave a Comment