Why Testing code should be Laissez-faire

I’ve been working on the HFA Library some more, and I’ve gotten to the point now where I really want to start testing it deeply. Since HFA is in Haskell, the weapon of choice for testing pure code is QuickCheck. The interesting bit about QuickCheck is that test data is fundamentally random. This is great for two reasons.

1) It forces you to actually test for _general_ properties of objects/functions, rather than specific properties of certain instances of those structures, as a unit test does.
2) By generating random data, you don’t have to worry about being too nice or too mean to your program, as you do with unit testing.

I’ll state it now, for the record: unit testing is great. I wouldn’t test my Java (or soon enough, Scala, ^_^) programs with much else. However, when you have referential transparency on your side, QuickCheck is a superior system.

The real brilliance of QuickCheck, though, is that it is extensible enough that you can define new “checkable” items. That is, instead of being limited to the standard checkable types, you can define how QuickCheck generates random data for a particular type by making it an instance of the Arbitrary class, which QuickCheck provides. This means that, as long as you can define a couple of methods for your datatype, it is damn near trivial to have QuickCheck generate examples of your datatype and test them quickly.
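To give a flavor of how little work that is, here is a minimal sketch of an Arbitrary instance. The `Coin` type, `flipCoin`, and the property are my own toy inventions for illustration, not part of HFA; only `Arbitrary`, `elements`, and `quickCheck` come from the QuickCheck library itself.

```haskell
import Test.QuickCheck

-- A toy datatype, invented purely for illustration.
data Coin = Heads | Tails
  deriving (Eq, Show)

-- Telling QuickCheck how to generate random Coins:
-- 'elements' picks uniformly from the given list.
instance Arbitrary Coin where
  arbitrary = elements [Heads, Tails]

flipCoin :: Coin -> Coin
flipCoin Heads = Tails
flipCoin Tails = Heads

-- A general property over ALL Coins, not a hand-picked case:
-- flipping twice gets you back where you started.
prop_doubleFlip :: Coin -> Bool
prop_doubleFlip c = flipCoin (flipCoin c) == c

main :: IO ()
main = quickCheck prop_doubleFlip
```

Running this asks QuickCheck to generate (by default) 100 random Coins and check the property against every one of them.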

Why is this good? Well, consider that you’re writing unit tests for your code. You’ve been intimately involved with this mangled piece of imperatively worded text for days and weeks. You know every inch of it, and you have in your mind the next new feature you want to add. It is not uncommon (at least for me) to begin writing and toying with the code in your mind, figuring out where potential bugs might be. As a human being, you are typically not looking to make things harder for yourself than need be. So maybe, when you’re writing those unit tests that will guide your programming to ensure that the code is written correctly, you (maybe subconsciously, maybe not) decide to make those few unit tests a little bit easier on the part of the code that you think is more fragile. I’m guilty of this, certainly, and I’m sure that if you’re honest with yourself, and you’ve developed a good bit of code with the test-first methodology (which I like only mildly less than the test-shortly-after-you-realize-you’re-supposed-to-write-tests-first methodology), you’ll find that you’ve done it too.

QuickCheck fixes that wagon pretty quickly: you don’t get to reason about how hard some tests will be on the code. QuickCheck enforces a “hands off” or “laissez-faire” (I’m French, and I like history, sue me) form of testing. You don’t get to decide what is tested, just what the result should be, which is how it _should_ be done. I _shouldn’t_ be thinking about what data I want to test, and I shouldn’t have to write all the test data. Ideally, I should only have to say, “minimizing a DFA twice is equivalent to minimizing a DFA once,” or “if the regular expression foo is generated by DFA f, and the expression foo’ is generated by the minimized version of f, then foo == foo’ for all f :: DFA.” I guess the point is, the computer is unbiased: it won’t be nice to the code, and it won’t be mean to it; it’ll be fair. Is that to say that coders will be biased towards or against their code?
Yes, it is. We spend a lot of time with these projects, and we develop a vested interest in seeing them work; finding out that you did something wrong can be discouraging. Granted, small things may not devastate you, like using the wrong function name or misplacing a variable. But if your unit test catches a major flaw in the structure of a program you designed, that represents a lot of work that just got blown to pieces. At the very least, if not for your pride, testing systems like QuickCheck are good for your productivity: they allow you to test a (potentially) huge number of arbitrary cases every time you run your program. Heck, you could even have QuickCheck running live against your code in the background, telling you in real time what tests you’re failing, what cases gave you failures, etc. All of that is automatic and vital data; it’s 24/7/365 testing of your code, for free, slowly building assurance that your code is, in fact, doing the right thing on all inputs.
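Properties like “minimizing twice equals minimizing once” are one-liners to state. HFA’s minimizer isn’t reproduced in this post, so as a stand-in here is the same idempotence shape stated for list sorting, using only the standard library and QuickCheck (the property name is my own):

```haskell
import Data.List (sort)
import Test.QuickCheck

-- Sorting, like DFA minimization, should be idempotent:
-- doing it twice must give the same result as doing it once.
prop_sortIdempotent :: [Int] -> Bool
prop_sortIdempotent xs = sort (sort xs) == sort xs

main :: IO ()
main = quickCheck prop_sortIdempotent
```

Notice that nowhere do I choose which lists get tested; QuickCheck generates them, which is exactly the laissez-faire point.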

By the way: yes, I know many, many people have written about QuickCheck before, and about how wonderful it is, but I think it’s always worth saying again. Good ideas deserve to be talked about, and QuickCheck is a _very_ good idea.

Published in: on August 10, 2007 at 6:35 am  Comments (5)  

DFAs, Categories, and Typecheckers.

I’ve recently started reading (in parallel) “Types and Programming Languages” and “Basic Category Theory for Computer Scientists.” The latter is really only interesting if you’re a math junkie, like me. It’s somewhat dry and very matter-of-fact, but the subject is terribly interesting.

Whilst reading these books, I’ve also been working on a library for Haskell called “HFA” (pronounced “Huffa”, or if you’re feeling silly, “Hoffa”), for “Haskell Finite Automata.” The library’s purpose is to provide a simple-to-use, generic, relatively fast implementation of various automata (notably (D|N)FAs, PDAs, Turing machines, etc.), so that anyone intending to use these abstractions can do so without knowing much about the internal theory, e.g. how to minimize a DFA, or how to convert an NFA to a DFA. It’s also intended to be an easy-to-understand tool for learning and teaching about automata; it will eventually be able to display automata as Graphviz graphs, and is currently capable of generating state diagrams (with some extensions to mark final, initial, and initial-final states).

Recently, I had just finished refactoring some code for HFA, and decided to take a break and read “Basic Category Theory” for a while. It dawned on me, looking at the diagram of a category, that what I was looking at was essentially a DFA with all states final, with the arrows between the objects playing the role of the delta function’s transitions. That is, if a category is defined as a set of objects and a set of arrows (where an arrow is f : A -> B, with A and B objects in the category), then the equivalency is as follows:


Category    DFA
--------    -----------
Objects     States
Arrows      Transitions

with delta(S, x) = S’ iff x is an arrow between S and S’.

Notably, we can also define a DFA as a category by simply reversing the definition. I’m pretty sure this fact has been discovered before; it’s too obvious to believe otherwise (though it would be cool if I could name this “The Fredette Isomorphism”, ^_^). The interesting thing about this isomorphism is that, if we can generalize a DFA, what’s to say we couldn’t generalize the category in the same way? Take imperative languages, for instance. I don’t know if it works out (and I certainly don’t have the skill to prove it if it does, at least not yet), but it is a hypothesis of mine that an imperative program can be represented as a category with multiple arrows going from one object to another simultaneously; that is, an imperative program is a kind of “nondeterministic” category. Ring any bells?

We know (by the simple fact of Turing completeness) that a Turing-complete imperative program can be written in a Turing-complete pure functional language (assuming there’s a pure functional way to deal with state, e.g. monads). Similarly (and this is almost too much of a guess to even _think_ of it as a hypothesis): if a Turing-complete imperative language is a “nondeterministic” (ND) category, and an ND category is isomorphic to an NFA, then since NFAs are isomorphic to DFAs, and pure functional languages are isomorphic to operations within a “deterministic” category, e.g. a “DFA”, that would “prove” (I use that term _very_ loosely) that any old Turing-complete imperative program has an equivalent pure functional program.

Pretty Neat, huh? It probably doesn’t work somewhere, but still- it’s cool.

We can further use the DFA/Category relationship as a kind of simple “composition” checker.

If the states of our DFA are the types of a language, and the transitions are functions mapping one type to another, then with the delta function defined as above, whenever a transition between some state S and some other state S’ is called for but not defined, we simply send the output to a non-accepting “fail” state.

Here, the simple language consists of the following.


The Category contains:

Objects = {Int, Float, Char, Bool, Unit}
Arrows  = {isZero :: Int   -> Bool
          ,ord    :: Char  -> Int
          ,asc    :: Int   -> Char
          ,sin    :: Float -> Float
          ,not    :: Bool  -> Bool}
Values  = {zero :: Int,   true :: Bool
          ,false :: Bool, unit :: Unit}

The corresponding DFA works something like this:


f1, f2, ..., fn are symbols which have type an -> bn,
where n indexes the symbols, and the a’s and b’s are concrete types, not type variables, e.g. f1 :: Int -> Char gives a1 = Int, b1 = Char;
v is a type


(f1 . f2 . f3 . ... . fn) v
=> ([f1, f2, ..., fn], v)
=> [a1,b1,a2,b2, ..., an,bn,v]
=> [(init,a1),(b1,a2),(b2,a3),...,(bn,v)]

Given this list of pairs, we define the DFA trace function as follows (this presumes a list like the one above):


trace :: State -> [(Sym, Sym)] -> Bool
trace st []                = st /= failState
trace st ((s1, s2) : syms)
  | s1 /= s2  = False
  | otherwise = trace (delta st s2) syms

where failState is a pseudonym for whatever the magic non-accepting failure state is

and where delta simply points to the next state (be it the fail state or otherwise). I’ll cover that in more detail in my next post (I’m actually going to build this little guy for a simple language like the one above).
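In the meantime, here is a self-contained sketch of the whole idea. To be clear, this is my own filling-in of the details, not HFA code: the representations of `Sym`, `State`, `delta`, `failState`, and the `pairs` helper are all assumptions made for illustration. States are type names, and a pipeline of functions “type-checks” iff the trace never falls into the fail state.

```haskell
type Sym   = String   -- a type name, e.g. "Int"
type State = String   -- states of the DFA are also type names

-- The magic non-accepting failure state (an assumed representation).
failState :: State
failState = "FAIL"

-- delta: once a pair has matched, the machine moves to that type's
-- state; the fail state absorbs everything.
delta :: State -> Sym -> State
delta st x
  | st == failState = failState
  | otherwise       = x

-- The trace function from above: each pair (s1, s2) must match,
-- otherwise the composition is ill-typed.
trace :: State -> [(Sym, Sym)] -> Bool
trace st []                = st /= failState
trace st ((s1, s2) : syms)
  | s1 /= s2  = False
  | otherwise = trace (delta st s2) syms

-- Build the check-pair list for a pipeline of signatures (ai, bi),
-- given in application order: v flows into the first signature.
pairs :: Sym -> [(Sym, Sym)] -> [(Sym, Sym)]
pairs v sigs = zip (v : map snd sigs) (map fst sigs)

main :: IO ()
main = do
  -- asc :: Int -> Char, then ord :: Char -> Int   => well-typed
  print (trace "init" (pairs "Int" [("Int","Char"),("Char","Int")]))
  -- asc :: Int -> Char, then isZero :: Int -> Bool => ill-typed
  print (trace "init" (pairs "Int" [("Int","Char"),("Int","Bool")]))
```

The first trace succeeds because every output type feeds a matching input type; the second fails on the Char/Int mismatch.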

I’ve digressed a bit from my topic. My goal was to show that categories are really terribly neat, and apparently related to automata, which most people understand pretty easily if they are explained well. I don’t pretend to be an authority here, but hell, implementing a (very) simple type checker is a pretty cool feat, considering it was only a few months ago that I started learning Haskell. I know this isn’t a robust, powerful mechanism, but as far as I know, given compose (the (.) function in Haskell), apply (($) in Haskell), and a few other functions, you have a Turing-complete language, a la Backus’ FP or FL systems.

Anyway, next time I intend to implement this little type checker, and (hopefully) eventually implement a (full) type checker for something akin to FP or FL, using this DFA style approach. Heck, I’d also be able to play with parsing (another automata rich subject).

Also, for those interested in looking at HFA, it’s available at

http://code.haskell.org/HFA/testing

You can just do a darcs get to pull everything down.

###DISCLAIMER###
I don’t intend to present any of this as proven, formally or by any other means; the ideas and conjectures in this post are just that: conjectures. Please don’t believe I’m an expert. I’m still learning about all these things, and I don’t want to lead anyone down the wrong path under the assumption that I actually _know_ what I’m doing.

That said, I do think the conjectures made have some validity; if you know otherwise, please inform me. Thanks!

~~Joe

Published in: on August 5, 2007 at 4:43 pm  Comments (3)