UW CSE P 505 – Monads

Monads were created in response to the problems that arise when a pure functional programming language encounters the dirty impurities of the real world. One particular problem that vexed researchers and developers early on was how to represent I/O in a language such as Haskell. Take a relatively straightforward example such as a function that prints a string to the console. The entire purpose of the function is to execute a side effect, and it's not clear that it has any meaningful return value. Indeed, in many procedural languages, methods such as Console.WriteLine in C# have a return type of void. Functions that exist solely for their side effects and return nothing clearly aren't functions in the mathematical sense.

To address this issue, languages such as Haskell differentiate between pure functions and actions, the latter being values that represent the results of computations that may cause side effects. A monad encapsulates this notion of computations and their state, and provides a way to transition from one computation to the next based on the resulting action from the previous computation. This is done with the bind method, written ">>=". Essentially, bind takes the value "wrapped" in the previous action, performs some computation on the unwrapped value, and then wraps the result back up in the monad. The other important method, return, explicitly wraps a plain value of type a in the monad.
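
In Haskell, these two operations make up the core of the Monad type class (simplified here; in modern GHC the class also has Applicative as a superclass and a few more methods):

class Monad m where
  return :: a -> m a            -- wrap a plain value in the monad
  (>>=)  :: m a -> (a -> m b) -> m b  -- bind: feed the wrapped value to the next computation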
This is still very abstract, so let's look at the Haskell IO monad as an example. For IO, bind has the type:

IO a -> (a -> IO b) -> IO b

In other words, given some action IO a, bind takes a function (call it f) that accepts the unwrapped type a and returns a wrapped type IO b; bind applies f to the value inside the action and returns the resulting b wrapped in an IO action. Let's look at how we would use bind to read a line from the user and echo it back to the screen.

The getLine function in Haskell reads a line from standard input and has type IO String. putStrLn has a return type of IO (). We want to call getLine, take the resulting string, and call putStrLn with it. In the IO bind, a is String, b is the unit type (), and the mapping function f is putStrLn, so (a -> IO b) matches up with (String -> IO ()).

The use of bind would look something like this:

getLine >>= \input ->
    putStrLn ("You typed " ++ input)

What this accomplishes is a separation of concerns between the side effects of an action and the results of an action. By encapsulating any state changes brought about by a computation inside a monad, we retain the ability to interact with the results of these computations in a functional way. Thanks to the syntactic sugar that Haskell provides for monads, we also get rid of a lot of boilerplate code that would normally be required to pass state from one computation to the next, as well as the tedious checking that involves. One can easily imagine a scenario in an imperative language consisting of a series of nested if statements that do null pointer checks on the results of a series of computations. The pseudocode of such a scenario might look like:

httpRequest = parseHttpRequest()
if (httpRequest != null)
{
    queryString = parseQueryString()
    if (queryString != null)
    {
        print(queryString)
    }
}

Haskell provides syntactic sugar in the form of "do" notation:

do
    req <- parseHttpRequest
    queryString <- parseQueryString
    putStrLn queryString

Here we could imagine some sort of monad that wraps the imaginary request and query string types returned from parseHttpRequest and parseQueryString, respectively. This looks similar to the imperative style of coding where we define a series of computational steps, but what's nice about the monad is that if at any point there is an error in the chain of computations, the rest of the computations in the do block are short-circuited, bypassing the need for cumbersome if statements at every step.
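
To make the short-circuiting concrete, here is a minimal sketch using Haskell's Maybe monad, where failure is represented by Nothing instead of null. The parseHttpRequest and parseQueryString functions here are hypothetical stand-ins, not real library code:

import Data.List (isPrefixOf)

-- Hypothetical parsers that fail by returning Nothing.
parseHttpRequest :: String -> Maybe String
parseHttpRequest raw
    | "GET " `isPrefixOf` raw = Just (drop 4 raw)
    | otherwise               = Nothing

parseQueryString :: String -> Maybe String
parseQueryString path = case dropWhile (/= '?') path of
    ('?' : qs) -> Just qs
    _          -> Nothing

-- If either parser returns Nothing, the whole do block is Nothing.
handleRequest :: String -> Maybe String
handleRequest raw = do
    path <- parseHttpRequest raw
    parseQueryString path

Running handleRequest "GET /index?x=1" yields Just "x=1", while any malformed input yields Nothing, with no explicit check at each step.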

This, of course, only scratches the surface of what monads are and what they can be used for, as evidenced by the plethora of guides out there written about them.

UW CSE P 505 Notes – Continuation Passing Style

Continuation passing style (CPS) is a style of programming wherein function calls are chained together. Each function in the chain performs some computation on a value and then passes the result of that computation on to the next function in the chain. In this style, functions do not directly return a value. Instead, they finish by calling another function; exactly which function is called next is determined by passing it as an explicit argument to the current function.

Functions written in CPS will have the form ... -> (a -> r) -> r in Haskell, where "... ->" denotes the arguments that the function computes over. This portion of the signature matches up exactly with the equivalent function written without CPS. For example, a sum function would have signature Int -> Int -> Int in direct style, and Int -> Int -> (Int -> r) -> r in CPS. The difference lies in the (a -> r) -> r: the function takes as an argument another function from a to r, and itself returns an r. (a -> r) is the continuation and represents the next function in the chain that will be called. Note that the argument type a does not have to match the return type r. The only constraint is that the return type of the continuation must match the return type of the current function.
This is abstract, so some simple examples help illustrate what is going on. Consider the quadratic equation y = 3x² + 2x + 5. A function that solves for y given a value of x might look like the following without continuations:

add :: Int -> Int -> Int
add x y = x + y
 
square :: Int -> Int
square x = x * x

mult :: Int -> Int -> Int
mult x y = x * y

solveQuadraticExample :: Int -> Int
solveQuadraticExample x = 
    add (add (mult 3 (square x)) (mult 2 x)) 5

With continuations, the code becomes:

addCont :: Int -> Int -> ((Int -> r) -> r)
addCont x y = \k -> k (x + y)
 
squareCont :: Int -> ((Int -> r) -> r)
squareCont x = \k -> k (x * x)

multCont :: Int -> Int -> ((Int -> r) -> r)
multCont x y = \k -> k (x * y) 
 
solveQuadraticExampleCont :: Int -> ((Int -> r) -> r)
solveQuadraticExampleCont x = (\k ->
multCont 2 x (\twoX ->
    squareCont x (\x2 ->
        multCont 3 x2 (\threeXSquared ->
            addCont threeXSquared twoX (\sum -> 
                addCont sum 5 k)))))

This can be invoked by passing a top-level continuation of type (Int -> r) to solveQuadraticExampleCont. Using the identity function:

solveQuadraticExampleCont 3 id

prints 38 in the ghci REPL. The continuations dictate what happens next with the result of the current function, making the order of operations and the intermediate values explicit. Continuations are somewhat analogous to callback functions; event-driven frameworks that use callbacks to define what happens next in an asynchronous call are a form of CPS. Indeed, in Node.js you will typically see nested anonymous callbacks chained together, as in this contrived example:

http.createServer(function (req, res) {
  parseHeaders(req, function (headers) {
    parseContentType(headers, function (contentType) {
      // ... do some stuff here
    });
  });
});

In the imaginary parseHeaders and parseContentType methods, the second argument is a function that is essentially a continuation to be called next with the result of the current function. Often, callbacks in Node.js will also take an error argument, so the caller decides how failures are handled. The power of continuations lies in this flexibility: we can control the flow and add new functionality to existing functions (without needing to modify code that might live in third-party libraries) by defining their continuation. Indeed, continuations can be used to implement virtually any form of non-local control flow, from exception handling to early exits and backtracking.
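
To make the error-handling idea concrete in Haskell, here is a minimal sketch (the function name divCont is hypothetical) where a function takes two continuations, one for failure and one for success, so the caller decides what "throwing" means:

-- divCont passes its result to one of two continuations instead of returning it.
divCont :: Int -> Int -> (String -> r) -> (Int -> r) -> r
divCont _ 0 err _ = err "divide by zero"
divCont x y _   k = k (x `div` y)

main :: IO ()
main = do
    divCont 10 2 putStrLn print  -- prints 5
    divCont 10 0 putStrLn print  -- prints divide by zero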

Notes from UW CSE P 505 – Writing an interpreter

One of the canonical exercises in graduate-level programming languages courses is writing an interpreter for a Scheme-like language, due to Scheme's relatively simple syntax and minimalist design. My UW CSE P 505 class was no exception. Using Haskell as our implementation language, we started off implementing a simple version of Scheme and gradually added features to illustrate concepts such as store passing style, continuation passing style, and type systems. How we approached the problem of writing an interpreter was particularly insightful, and the following is a high-level overview.
The key takeaway from the course is to think of the interpreter as a series of mappers between successive layers. Much like a web application ultimately maps data from the data access layer all the way up to the presentation layer shown to the end user, an interpreter maps an input string to a core expression tree. By adding multiple intermediate steps along the way, each layer is responsible for only a few specific tasks, making any given step easier to reason about.
The first task of the interpreter is to tokenize the input. Let's call this method parseToken. It takes in a string and returns a list of tokens, stripping out all whitespace and comments until we are left with a sequence of tokens representing numbers, parentheses, and identifiers. In the Haskell code for the interpreter, the Token type looks something like this:

data Brace = Round -- ()
           | Square -- []
           | Curly -- {}
           deriving (Eq, Show)

data Token = Open Brace -- An open brace (of the given shape).
           | Close Brace -- A closing brace (of the given shape).
           | NumTok Integer -- A numeric literal.
           | IdTok String -- An identifier.
           deriving (Eq, Show)

Note that at this layer, an IdTok has no semantic meaning. It exists as a catch-all bucket to differentiate identifiers from brace and number tokens. An IdTok could be a keyword, an operator, a variable, or something else entirely, but the tokenizing layer does not care; all it cares about is finding them. Passing "{(5)}" to parseToken returns [Open Curly, Open Round, NumTok 5, Close Round, Close Curly].
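
A minimal sketch of what parseToken might look like (assuming, for simplicity, that comments are ignored and an identifier is any run of characters up to whitespace or a brace):

import Data.Char (isDigit, isSpace)

-- A simplified tokenizer; the real one would also strip comments.
parseToken :: String -> [Token]
parseToken [] = []
parseToken s@(c:cs)
    | isSpace c = parseToken cs                  -- skip whitespace
    | c == '('  = Open Round   : parseToken cs
    | c == ')'  = Close Round  : parseToken cs
    | c == '['  = Open Square  : parseToken cs
    | c == ']'  = Close Square : parseToken cs
    | c == '{'  = Open Curly   : parseToken cs
    | c == '}'  = Close Curly  : parseToken cs
    | isDigit c = let (digits, rest) = span isDigit s
                  in NumTok (read digits) : parseToken rest
    | otherwise = let (ident, rest) = break endsIdent s
                  in IdTok ident : parseToken rest
  where
    endsIdent x = isSpace x || x `elem` "()[]{}"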
The subsequent task is to convert the list of tokens into an S-expression tree. Let's call this method parseSExp. It takes a list of tokens as input and returns a single SExp along with any leftover tokens not consumed in the parsing (more on this later). The result can be visualized as a tree with the single SExp as the root; it may also help to think of the top-level S-expression as containing a nested list of S-expressions.

data SExp = NumS Integer -- numeric expression
          | IdS String -- identifier
          | ListS [SExp] -- list of S-expressions
          deriving Eq

Because this layer deals with S-expressions, we create new data types to match. Although at first glance IdS and NumS seem redundant with IdTok and NumTok, it's important for each layer to have its own types so that in the mapping code it is always clear which layer we are mapping to and from. The main responsibility of this layer is converting open and close brace tokens into the newly introduced ListS type. An open brace token denotes the start of a new list, while a close brace token denotes the end of one, so the SExp parser needs to make sure the braces are balanced correctly. Only one S-expression can appear at the top level, which means that any tokens after it are not consumed and are left over (and can be returned in the output from parseSExp).
If we run parseSExp on the list of tokens corresponding to "(foo (4 bar))"

[Open Round, IdTok "foo", Open Round, NumTok 4, IdTok "bar", Close Round, Close Round]

We get:

ListS [IdS "foo", ListS [NumS 4, IdS "bar"]]
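
Here is a minimal sketch of how parseSExp might be written, assuming it signals mismatched braces by returning Nothing and hands leftover tokens back to the caller:

-- Parse one S-expression, returning it along with the unconsumed tokens.
parseSExp :: [Token] -> Maybe (SExp, [Token])
parseSExp (NumTok n : rest) = Just (NumS n, rest)
parseSExp (IdTok s  : rest) = Just (IdS s, rest)
parseSExp (Open b   : rest) = parseList rest []
  where
    parseList (Close b' : ts) acc
        | b == b'   = Just (ListS (reverse acc), ts)
        | otherwise = Nothing                    -- mismatched brace shapes
    parseList ts acc = do
        (s, ts') <- parseSExp ts                 -- parse the next child
        parseList ts' (s : acc)
parseSExp _ = Nothing                            -- empty input or stray close brace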

Again, this layer is not concerned with syntax, just with mapping brace tokens into lists of S-expressions and checking that the braces match. It is in the subsequent step that we map the nested list of S-expressions into an expression tree. Let's call this method parseExpr. Assuming no syntax errors (which would cause the method to fail accordingly), the output from parseExpr will represent a valid expression in our language that can be interpreted. This does not necessarily mean, however, that it will not result in any runtime errors. For example, the expression (+ (* 2 3) (+ 1 2)) is represented as a tree with root + and two subtrees with * and + as their roots, respectively. Representing the expression as a tree means that we can write our subsequent parse and interpret methods recursively, simplifying the logic.
For simplicity's sake, let's say that our language supports if expressions, functions, application expressions, and with* expressions (a binding form that binds variables to functions or values):

type Var = String -- assumed: variable names are plain strings

data Expr = NumE Integer
          | IfE Expr Expr Expr
          | VarE Var
          | FunE [Var] Expr
          | AppE [Expr]
          | WithStarE [(Var, Expr)] Expr
          deriving (Eq, Show)
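
A partial sketch of parseExpr might look like the following (error handling and arity validation are omitted, and the exact set of recognized keywords is an assumption):

-- Map an S-expression into an Expr by pattern matching on its shape.
parseExpr :: SExp -> Expr
parseExpr (NumS n) = NumE n
parseExpr (IdS v)  = VarE v
parseExpr (ListS [IdS "if", c, t, e]) =
    IfE (parseExpr c) (parseExpr t) (parseExpr e)
parseExpr (ListS [IdS "fun", ListS params, body]) =
    FunE [v | IdS v <- params] (parseExpr body)
parseExpr (ListS [IdS "with*", ListS bindings, body]) =
    WithStarE [(v, parseExpr e) | ListS [IdS v, e] <- bindings] (parseExpr body)
parseExpr (ListS es) = AppE (map parseExpr es)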

FunE, AppE, and WithStarE all take lists of arguments, which makes writing an interpreter for the expression more complex than necessary. To simplify things further, we desugar our language into a core syntax. We can get rid of WithStarE, which is just a specialized case of function application. Likewise, by currying, we can restrict functions and applications to a single argument each; anything with more than one argument desugars into nested single-argument functions and partial applications. The core expression type looks something like:

data CExpr = NumC Integer
           | IfC CExpr CExpr CExpr
           | VarC Var
           | FunC Var CExpr
           | AppC CExpr CExpr
           deriving (Eq, Show)

Here are some examples of desugaring. The following with* expression

(with* ([x (+ 1 2)]
        [y (* x x)]
        [x (+ y 3)])
  (+ x y))

would desugar into something like this:

((fun (x)
   ((fun (y)
      ((fun (x)
         (+ x y))
       (+ y 3)))
    (* x x)))
 (+ 1 2))

In terms of the actual CExpr, it desugars into:

AppC (FunC "x"
       (AppC (FunC "y"
               (AppC (FunC "x"
                       (AppC (AppC (VarC "+") (VarC "x")) (VarC "y")))
                     (AppC (AppC (VarC "+") (VarC "y")) (NumC 3))))
             (AppC (AppC (VarC "*") (VarC "x")) (VarC "x"))))
     (AppC (AppC (VarC "+") (NumC 1)) (NumC 2))

Here’s a simpler example:

((fun (x) (* x x)) (+ 1 2))

desugars into:

AppC (FunC "x" (AppC (AppC (VarC "*") (VarC "x")) (VarC "x")))
     (AppC (AppC (VarC "+") (NumC 1)) (NumC 2))

This concept of desugaring is a useful pattern to have in the toolbox: whenever the logic for parsing or interpreting an expression becomes too unwieldy, mapping it to a simpler intermediate language with more rigid constraints helps simplify things.
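
A minimal sketch of such a desugaring function (the name desugar is assumed, and edge cases like empty parameter or argument lists are glossed over):

desugar :: Expr -> CExpr
desugar (NumE n)       = NumC n
desugar (VarE v)       = VarC v
desugar (IfE c t e)    = IfC (desugar c) (desugar t) (desugar e)
-- Curry a multi-argument function into nested single-argument functions.
desugar (FunE vs body) = foldr FunC (desugar body) vs
-- Turn application on several arguments into nested applications.
desugar (AppE es)      = foldl1 AppC (map desugar es)
-- Each with* binding becomes a single-argument function applied immediately.
desugar (WithStarE bs body) =
    foldr (\(v, e) acc -> AppC (FunC v acc) (desugar e)) (desugar body) bs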

The final step in the interpreter, of course, is to actually interpret the expression. This maps the desugared CExpr into a Value type. Again, because our CExpr is a tree, we can call our interp method starting from the top of the tree. If there are no runtime errors, this reduces to a final value:

type Env = [(Var, Val)]            -- assumed: the environment maps variables to values
data Result a = Ok a | Err String  -- assumed: either a value or an error message

data Val = NumV Integer
         | BoolV Bool
         | FunV Var CExpr Env
         | PrimV String (Val -> Result Val)

The PrimV type here is of interest, as it represents primitive operators in our language. Its first argument is a string naming the operator (e.g. "+" for addition), and the second argument is a function that takes a Val parameter and returns a Val. In order for our interp method to succeed, we pass in an initial environment that contains all the predefined primitives. Note that our desugared CExpr has no notion of a Boolean expression, because true and false will be bound in this initial environment instead. This environment will grow during interpretation as we encounter additional variables and their bindings (via function application expressions). When interp encounters a function application, it looks up its argument parameter (the desugared CExpr guarantees only one variable per function) in the environment. If it does not exist, the result is an error.
To give you an idea of what the interp code looks like, here is what the snippet for interp on an IfC expression might look like:

-- inside interp :: CExpr -> Env -> Result Val
case expr of
    IfC cond b1 b2 -> case interp cond env of
        Ok (BoolV True)  -> interp b1 env
        Ok (BoolV False) -> interp b2 env
        Ok _             -> Err "Conditional does not evaluate to a boolean"
        Err msg          -> Err msg  -- propagate errors from the condition
    -- ... other cases here
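
Two more cases give a feel for how the environment drives evaluation; this is a sketch under the assumption that Env is an association list, so the Prelude's lookup works on it:

    -- A variable evaluates to whatever it is bound to in the environment.
    VarC v -> case lookup v env of
        Just val -> Ok val
        Nothing  -> Err ("Unbound variable: " ++ v)
    -- A function expression captures the current environment in a closure.
    FunC v body -> Ok (FunV v body env)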

How to represent addition, subtraction, comparison operators, and the like as PrimV values in Haskell is out of the scope of this article, as is how to write the full parsing and interpretation methods. As the popular saying goes, these are left as an exercise for the reader (or perhaps explained in a subsequent tutorial).

Haskell is cool (infinite self-referencing lists)

I'm currently taking CSE P 505: Programming Languages at the University of Washington for my master's degree in computer science. We are using Scala at work, a language that incorporates a lot of advanced programming language constructs, so I felt that work and school would synergize well. As luck would have it, we are learning Haskell for class, and it's pretty obvious that a lot of Scala's design and syntax is inspired by Haskell. I've been pretty amazed at how expressive a purely functional programming language is compared to a hybrid language like Scala that incorporates both OOP and functional paradigms into its design.

Take lists, for example. In Haskell, you can define data types that are self-referencing recursive types. This is not limited to type definitions, though; you can do the same for list values as well! In the ghci REPL, you can type:

Prelude> let ones = 1:ones

This does exactly what it looks like: "ones" is a list that recursively references itself. It is the number one prepended to the list over and over again. In fact, if you then type "ones" in ghci, it will print out an infinite stream of ones. The beauty here is that Haskell does not evaluate the "ones" list until you actually do something with it. This lazy evaluation lets you literally work with lists of infinite length! You can then type something like:

Prelude> take 10 ones

which will evaluate to a list with ten 1’s:

[1,1,1,1,1,1,1,1,1,1]

You can also use functions that combine lists. zip takes two lists and produces a single list of pairs, where the nth pair consists of the nth elements of the first and second lists. If one list has more elements than the other, the resulting list has the length of the shorter list; the extra elements are not paired up with anything and are discarded. zipWith is similar to zip, except it takes an additional function parameter that is applied to each pair. This function takes two arguments and returns a single value.
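
For example (with arbitrary inputs, just to illustrate), zipping a three-element list with a two-element string discards the unmatched element:

Prelude> zip [1,2,3] "ab"
[(1,'a'),(2,'b')]

zipWith works the same way, except the supplied function replaces the pairing: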

Prelude> let twos = zipWith (+) ones ones
Prelude> take 10 twos

This prints out
[2,2,2,2,2,2,2,2,2,2]

This leads up to the canonical "hello world" example in Haskell: the one-line definition of the Fibonacci sequence.

Prelude> let fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

So what exactly is going on here? The tail function returns a list with every element of the original except the first. To see how the statement works, first recall that zipWith stops when the shorter list runs out. So you can think of the above statement as being built out as follows:

Prelude> zipWith (+) [0, 1] [1]
[1]

0:1 are then prepended to [1] which produces [0,1,1]

Prelude> zipWith (+) [0,1,1] [1,1]
[1,2]

0:1 are prepended to [1,2] which produces [0,1,1,2]

Prelude> zipWith (+) [0,1,1,2] [1,1,2]
[1,2,3]

0:1 are prepended to [1,2,3] which produces [0,1,1,2,3]

Prelude> zipWith (+) [0,1,1,2,3] [1,1,2,3]
[1,2,3,5]

0:1 are prepended to [1,2,3,5], which produces [0,1,1,2,3,5], and so on, ad infinitum.

Note that each step adds just one new element to the list: the sum of the last and second-to-last elements so far. This is exactly the definition of the Fibonacci sequence.

I can interact with this infinite list just like any other list. If I want to get the 100th element in the sequence, I can type the following and find out that the answer is 354224848179261915075:

Prelude> fibs !! 100

Awesome.