This page contains some personal notes accumulated over several months of learning to program in Haskell, developing a moderately-sized application.
I found there was plenty of good tutorial material for introducing a new programmer to Haskell, and the language report was a very precise and concise reference for language specialists, but there seemed to be little in between for a "jobbing programmer". The report could sometimes be unhelpful when I was trying to learn the details of some subtle new feature of the language not covered by the tutorials.
A book in the style of, say, "The Annotated C++ Reference Manual" (Ellis and Stroustrup) would be useful: when I was learning and using C++, that book was invaluable for me, as it contained just the right mix of formal definition and in-depth pedagogical material.
I find the Haskell report tends to concentrate largely on definition, and sometimes skimps on explanation. This would be fine, but the other, more tutorial-oriented sources I consult don't seem to have complete coverage of the language.
Error messages can sometimes be very obscure, even misleading, especially in the realm of types and classes.
The type system is very subtle. Maybe this is unavoidable. Better diagnostics would help; GHC seems to be better in this respect than Hugs.
I think that because so many concepts are separated, it's easy to focus on the wrong aspects of one's type declarations.
Although not part of the "official" Haskell 98 language, multi-parameter classes seem to be pretty much essential for many sizeable tasks or re-targetable software components. I think clearer signposting of this widely-implemented feature would help a new programmer. (But see also point 12.)
Also, more accessible documentation in this area would help.
I note others, too, have observed the importance of multi-parameter classes.
Also, I think it could be made clearer that multi-parameter classes are general relational constraints. I got locked into an incorrect reading that 'C a s' meant that 'a s' was an instance of 'C', rather than that 'C' is a relation on types, of which (a, s) is a member.
It's not clear why I made this mistake -- I think the seeds of my confusion may have been sown by the form of subclass declarations:
class (C a) => D a
which means 'D' is a subclass of 'C', or 'a' stands for an instance of 'C' as well as 'D'. In OO programming, classes are types and instances are values -- Haskell separates these ideas more carefully.
I don't know if this was just a personal confusion, or more widely experienced. I recognize that it's all explained, but it's difficult to take on all the new material at once without being led by previous experience.
It can be difficult to distinguish between simple typos and deep abuse of the type system.
In this example:
class (Eq k, Show k) => LookupEntryClass a k v where
    entryKey :: a -> k
    entryVal :: a -> v
    newEntry :: (k,v) -> a

data (LookupEntryClass a k v) => LookupMap a k v = LookupMap [a k v]
the error message reported (by Hugs) was of an invalid type constructor in the 'data' declaration, when the real problem was missing type parameters in the class method declarations; i.e. it should have been:
class (Eq k, Show k) => LookupEntryClass a k v where
    entryKey :: a k v -> k
    entryVal :: a k v -> v
    newEntry :: (k,v) -> a k v

data (LookupEntryClass a k v) => LookupMap a k v = LookupMap [a k v]
Monads can be a source of confusion. The basic ideas are easy enough to get hold of, and using existing monad-based libraries is quite simple, but the deeper issues of when and when not to use monads are less well covered. Also, I found little help given on how to design new monads.
Maybe this is covered in the common language specification and/or tutorial materials, but if so it's not clear where to find the appropriate descriptions. I found the paper State in Haskell by Simon Peyton-Jones and John Launchbury to be one of the more helpful sources in this respect.
(The approach of introducing IO first, then generalizing to state -- as in, say, The Craft of Functional Programming -- I found to be less helpful in this respect, as it didn't help me to understand how monadic values are created and activated. This is covered in later material, but by then some incorrect/incomplete views about how monads work have become entrenched. I have some ideas for writing a short "programmer's view of monads".)
I suspect that even though I am now able to write Haskell programs of moderate complexity, I am not really using the functional idioms as effectively as I might. E.g. I have read claims that functional programming allows for better modularization of code, but I've found my own attempts to modularize seem to run into difficulties. I'm probably trying to apply conventional techniques inappropriately.
I could use a good source of information about effective programming style and idioms in Haskell.
Later, I read John Hughes' paper Why Functional Programming Matters, and Scrap Your Boilerplate by Ralf Laemmel and Simon Peyton-Jones, which have helped me towards some insights about how one might strive to use functional programming most effectively.
I kept on bumping into this idea of a "functor" without any real motivation or sense of what it was used for. Then suddenly it seemed to click...
A "functor" is a structured collection (or container) type with a method (fmap) that accepts a method and applies that method to the members of the collection yielding an isomorphic collection of values of a (possibly) new type. Is this right?
If so, it doesn't seem to be expressed clearly anywhere that I've come across yet.
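To make that concrete for myself, here is a small container example of the kind I had in mind (the Tree type is my own, purely for illustration):

data Tree a = Leaf a | Node (Tree a) (Tree a)

instance Functor Tree where
    fmap f (Leaf x)   = Leaf (f x)
    fmap f (Node l r) = Node (fmap f l) (fmap f r)

-- fmap (*2) (Node (Leaf 1) (Leaf 2))  ==>  Node (Leaf 2) (Leaf 4)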
Tomasz Zielonka responds: Your explanation of Functor excludes many useful Functors which are rather not collections. For example, every monad (like IO) can be a Functor if you take fmap = Monad.liftM. For [] and Maybe this would give the same operation as in their normal instances.
So, let's look at Tomasz's comment.
liftM :: Monad m => (a->b) -> m a -> m b
fmap :: Functor f => (a->b) -> f a -> f b
So the function signatures are clearly comparable. Consider Maybe:
liftM f a = do { a' <- a ; return (f a') }
          = a >>= \a' -> return (f a')
          = case a of
                Nothing -> Nothing
                Just a' -> Just (f a')
fmap f Nothing   = Nothing
fmap f (Just a') = Just (f a')
so it's easy to see these are equivalent.
Looking at the pattern here, it would seem that a Monad is a generalization of a Functor, which also deals with the "no value" case (Nothing, error, empty list, or whatever is returned by the monad's fail function).
Frank Atanassow notes: No, it is the other way around. Every monad is a functor, but not the other way around; a monad is a functor PLUS functions >>= and return satisfying some laws. Similarly, every functor is a type constructor but not vice versa; a functor is a type constructor PLUS a function fmap satisfying some laws. The fact that Haskell does not make Functor a superclass of Monad is an infelicity of the Standard. In Haskell 1.4 (I think), it was in fact a superclass.

Strictly speaking, monads need not support fail functions. The Id monad has no "no value" case, for example. You can define fmap via the monad operations:

fmap f m = do { x <- m; return (f x) }

or, more succinctly:

fmap f = (>>= return . f)

But you cannot define >>= and return via fmap alone. So, in that sense, a monad is a special kind of functor. (I observe that this definition of fmap is the same as liftM.)
So it would seem better to say: a Monad is an extension of a Functor.
I have been told that I'm using the term "monad" rather loosely (and I do agree). "monad" is a term from category theory (whose precise definition I do not yet fully grasp), from which the Haskell Monad type class is derived. In Haskell, a Monad is a type constructor: it accepts a type and returns a new type. That is, a Monad has kind * -> *. If "m" is a Monad and "a" is a type, then "m a" is a monadic type. An instance of a monadic type is a monadic value.
Reading the available material, I found this really confusing. Not because it's difficult, but because the patterns used are so different from what I'm used to in conventional programming. The particular area I had problems getting my head around was understanding how a monadic value (i.e. an instance of a monadic type) is created, and how final values are extracted and used. Many of the examples I found appeared to gloss over these issues, especially if some result was to be used in a non-monadic expression.
I think a cookbook approach to using composed monads, with emphasis on how to actually execute the monads and extract a final result, might be helpful.
Eventually, I found Mark Carroll's example in http://www.haskell.org/hawiki/MonadState to be really useful. I spent a little while adding comments to this example, and everything became much clearer.
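By way of illustration (this is a minimal sketch of my own, not Mark Carroll's example, and it assumes the Control.Monad.State library): a monadic value is just an ordinary value describing a computation, and nothing happens until it is run.

import Control.Monad.State

-- A monadic value: a computation in the State monad, not yet executed.
tick :: State Int Int
tick = do
    n <- get        -- read the current state
    put (n + 1)     -- replace it
    return n        -- the computation's result

-- The monadic value is "activated" by a run function, which also
-- extracts the final, non-monadic result:
--   runState  tick 5  ==>  (5, 6)   -- (result, final state)
--   evalState tick 5  ==>  5        -- result only
--   execState tick 5  ==>  6        -- final state only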
Try to avoid putting class constraints on data declarations.
I found myself doing this when declaring a datatype intended to be an instance of a particular class, and then finding the type system would not allow me to make it a member of some other class, such as Functor, because the constraint was not justifiable by the method signature for that class. I think this is an artefact caused by OO-style thinking, along the lines of "an instance of this class must be subject to the class constraints".
The data type used as an instance of a class need not (and, it seems, usually should not) be subject to the class constraints. The data type may be used additionally for non-class purposes, and this is assumed when it is also used as an instance of some other class, like Functor. It's only when the specific class methods are used that the constraints come into play, and need to be declared.
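A sketch of the trap as I fell into it (the tree types here are hypothetical, and the constrained form uses the Haskell 98 datatype-context syntax):

-- With a class constraint on the data declaration ...
data (Ord a) => OrdTree a = OrdLeaf | OrdNode (OrdTree a) a (OrdTree a)

-- ... a Functor instance is rejected, because fmap's signature,
--     fmap :: (a -> b) -> f a -> f b,
-- says nothing about Ord b, yet the constructors demand it:
--
--   instance Functor OrdTree where
--       fmap f OrdLeaf         = OrdLeaf
--       fmap f (OrdNode l x r) = OrdNode (fmap f l) (f x) (fmap f r)

-- Better: leave the data declaration unconstrained, and put the
-- constraint only on the functions that actually need it:
data BTree a = BLeaf | BNode (BTree a) a (BTree a)

insert :: (Ord a) => a -> BTree a -> BTree a
insert x BLeaf = BNode BLeaf x BLeaf
insert x t@(BNode l y r)
    | x < y     = BNode (insert x l) y r
    | x > y     = BNode l y (insert x r)
    | otherwise = t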
Another common error I make, which leads to really obscure error messages:
return f a
in a do-sequence, instead of:
return $ f a
or
return (f a)
Compilers might helpfully notice this and offer a hint.
I can't think of any situation when the former case is actually sensible.
I've also noticed that, because a String is a list, and a list is a Monad, it is sometimes possible to completely omit the 'return' statement when returning a string value without this being an immediately invalid do sequence. Instead, the return type of the do sequence is an unexpected value (a list monad containing a Char, or something like that), which in turn leads to very confusing type error messages.
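A minimal sketch of how this bites (the names are mine):

-- Intended to produce one greeting per name in the list monad ...
greetings = do
    name <- ["Alice", "Bob"]
    "Hello " ++ name        -- oops: should be  return ("Hello " ++ name)

-- ... but because String is itself a list, this still typechecks: the
-- inferred type is String (i.e. [Char]) rather than [String], and the
-- value is "Hello AliceHello Bob".  The type error only appears later,
-- wherever greetings is used as a [String], and is then rather cryptic.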
Or, maybe more correctly, there are many better ways to achieve the same things that an OO programmer would do using classes (or, in Java, interfaces).
I have encountered situations in which I declare a class, and instances of that class, and other functions and structures that use the class, and later find it really doesn't do what I want, because I cannot mix different instances of the class in the way I'd like. E.g. a list of class instances must have values of the same instance in every position. Generally, it looks as if storing instances of a class in a structure is often the wrong thing to do, unless it is truly required that the various stored values are fully interchangeable and mixable beyond just being instances of the same class.
What sometimes seems to be a more effective way is to use polymorphic function types: if the putative class has just one interface method, consider declaring a function type instead, and using that. If there are several methods, declare an algebraic data type whose members are polymorphic functions.
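A small sketch of what I mean (the names here are mine, purely for illustration):

-- A single-method "interface" can simply be a function type ...
type Pretty a = a -> String

-- ... and several methods can be bundled into an algebraic data type
-- whose fields are functions:
data Serializer a = Serializer
    { serialize :: a -> String
    , label     :: String
    }

intSerializer :: Serializer Int
intSerializer = Serializer { serialize = show, label = "Int" }

-- The choice of "instance" is now an ordinary value, passed explicitly:
describe :: Serializer a -> a -> String
describe s x = label s ++ ": " ++ serialize s x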
The time to use a class seems to be when there is a genuine desire to instantiate a particular interface over different and specific underlying types of data. A class doesn't make sense as a structuring or organizing mechanism. [[[Grumble, that doesn't make too much sense on re-reading. There's a subtle distinction between multiple types that are interchangeable to the extent that they implement some common interface, and having a collection of different values that can be processed in similar ways, that my OO-infested mind cannot quite get clear.]]]
Some tutorial material on using algebraic data types for "OO-style" programming tasks might be helpful. (There's a useful message from John Meacham here: http://www.haskell.org//pipermail/haskell/2004-June/014164.html.)
Frank Atanassow, again: I do my best to avoid type classes, but I'm in the minority. In the long run, I think it's better to use existentials, as they let you define multiple instances for the same type. This usually makes more sense. For example, Show: why support only one, global way of printing things? I often need multiple ones.
In general, I think beginners overuse type classes, and use them incorrectly. I think a good thing for novices to do is to write a prototype without any fancy type hacking, just using the usual ADT approach, and later come back to it and see how to improve it with typeful techniques. People tend to forget that the major difference between ADT's and OO-style classes is really only that with a class you can have many instances in the same program simultaneously, whereas with an ADT you can have only one; but the ADT implementation is still interchangeable. Usually, that is all you need.
I think that 'forall x' introduces x as an existential when it appears immediately preceding a datatype constructor declaration. Elsewhere, it appears to signal a universal quantifier.
Existential types are a kind of information hiding mechanism: some datatype is used in an implementation, but its details are hidden from view, and the particular type used may vary from instance-to-instance in the same surrounding type context (e.g. between members of a list). Having the existential type being an instance of a class may provide means for some functions to do useful things with the type.
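A minimal sketch (this needs the existential-types extension available in GHC and Hugs, beyond Haskell 98; the names are mine):

{-# LANGUAGE ExistentialQuantification #-}   -- language extension beyond Haskell 98

data Showable = forall a. Show a => MkShowable a

instance Show Showable where
    show (MkShowable x) = show x

-- Values of different underlying types can share a list; all that is
-- visible of each is the behaviour promised by its class context:
mixed :: [Showable]
mixed = [MkShowable (3 :: Int), MkShowable "three", MkShowable True]

-- map show mixed  ==>  ["3","\"three\"","True"]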
See also: discussion of issues on Haskell-cafe mailing list, about October 2003, starting with http://www.haskell.org//pipermail/haskell-cafe/2003-October/005231.html.
Frank Atanassow: More or less. The reason is that, in:

data AppT a = forall x. App (x -> a, x)

App has type:

(forall x. (x -> a, x)) -> AppT a

(note the forall is under the second arrow) which is [implied by]:

exist x. ((x -> a, x) -> AppT a)

(note the exist is at top-level). With rank-N polymorphism you can construct such things without data declarations. (A forall induces an existential when it occurs in an `odd' argument position.)
I think I finally figured out how to get my LookupMap abstraction to work the way I wanted. The idea was that I would be able to use LookupMap to provide a lookup over any datatype by creating an appropriate LookupEntryClass instance declaration. Originally, I had:
class (Eq k, Show k) => LookupEntryClass a k v where
    newEntry :: (k,v) -> a k v
    keyVal   :: a k v -> (k,v)
and
instance (Eq k, Show k) => LookupEntryClass (,) k v where
    newEntry = id
    keyVal   = id
and
data LookupMap a k v = LookupMap [a k v]
(A list is a poor implementation structure for this purpose. The idea of this class is that I can vary the implementation while maintaining a consistent interface for my applications.)
The trouble with this is that I couldn't use an existing type as a lookup entry unless it had a type constructor that was of the prescribed form; i.e. (typename keytype valuetype) in this case.
I finally realized, after 6 months or so of using Haskell, that I could declare the lookup entry class thus:
class (Eq k, Show k) => LookupEntryClass a k v | a -> k, a -> v where
    newEntry :: (k,v) -> a
    keyVal   :: a -> (k,v)
and
instance (Eq k, Show k) => LookupEntryClass (k,v) k v where
    newEntry = id
    keyVal   = id
and
data LookupMap a = LookupMap [a]
That is, the LookupEntryClass class now takes three parameters of kind *, rather than the first being of kind * -> * -> *. In this way, the relationship between the component types and the lookup entry type is hidden. The downside of this is that the system can no longer infer that if, say, 'a k v' is a LookupEntryClass then so is 'a v k'. There is also some other type information that was inferred automatically but which now must be stated explicitly.
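For example, an existing record type (a hypothetical one of mine, here) whose type constructor is not of the form (typename keytype valuetype) can now be made an entry directly:

data Person = Person { personId :: String, personName :: String }

instance LookupEntryClass Person String String where
    newEntry (k,v) = Person k v
    keyVal p       = (personId p, personName p)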
When needed, the key and value types are made available through a context declaration, a la:
keyOrder :: (LookupEntryClass a k v, Ord k) => a -> a -> Ordering
keyOrder e1 e2 = compare k1 k2
    where
        (k1,_) = keyVal e1
        (k2,_) = keyVal e2
I don't think this overall approach would be possible without using the functional dependencies (type class dependencies) extension to Haskell.
ShowS is a type that is used to improve the efficiency of creating long strings (character sequences) that are for output, or are otherwise used in left-to-right order.
The problem addressed by ShowS is that string concatenation is O(n) in the length of its first string argument: this means that when building a long string by successive concatenation, the time required to append each character is proportional to the length of the string, making the overall time required to build the string proportional to the square of its length.
Type ShowS is defined as:
type ShowS = String -> String
At first glance, ShowS does not appear to rectify this, since a ShowS for a string value is defined simply as (++):
showString :: String -> ShowS
showString = (++)
ShowS uses lazy evaluation to defer evaluation of the ++ operation; if the string is accessed strictly sequentially, it never needs to be evaluated.
Consider the expression:
a ++ b
In order to access the first character of the resulting string, it is necessary to evaluate 'a'. If 'a' itself is a concatenation of strings:
(a1 ++ a2) ++ b
then that concatenation too must be evaluated.
By representing an unterminated string as a function, the evaluation of concatenation need not happen until the string is closed, by applying the ShowS function to an empty string (or any string).
The above example then would be:
showString a1 (showString a2 (showString b ""))
Thus, the introduction of a function allows the left-associated construction of concatenations to be evaluated in a right-associated fashion. To get at the start of a sequence, it is not necessary to evaluate anything that comes after it.
So how should one use ShowS values? There are three key operations: creating a ShowS value from a String, composing ShowS values, and applying the composed result to a string (typically "") to obtain the final String. To create a ShowS value from a String:
a = showString "Hello"
or just a section:
a = ("Hello"++)
The Haskell prelude and library provide functions that turn other values into ShowS: the default mechanism is to turn them into a string, then apply showString as above. The default showsPrec is defined as:
showsPrec _ x s = show x ++ s
which might equally be written as:
showsPrec _ x = ((show x)++)
or
showsPrec _ x = showString (show x)
When implementing complex data structures as instances of Show, consider implementing showsPrec instead of show.
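For example, a sketch with a hypothetical expression type, assembling the output by composing ShowS values instead of by nested (++):

data Expr = Num Int | Add Expr Expr

instance Show Expr where
    showsPrec _ (Num n)   = shows n
    showsPrec _ (Add a b) = showChar '(' . shows a . showString " + "
                          . shows b . showChar ')'

-- show (Add (Num 1) (Add (Num 2) (Num 3)))  ==>  "(1 + (2 + 3))"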
a = \x->"Hello"++(" world"++x) "" = "Hello"++(" world"++"") = "Hello"++(" world") = "Hello world"
Composition is associative, so it makes no difference how the composed ShowS values are grouped:

(a . (b . c)) x = a ((b . c) x) = a (b (c x))
((a . b) . c) x = (a . b) (c x) = a (b (c x))
So if:
a = ("Hello,"++) b = (" cruel"++) c = (" world"++)
(a . b . c) "" = a (b (c "")) = a (b " world") = a " cruel world" = "Hello, cruel world"
There is extensive support for doing useful things with the standard types declared in the prelude.
For example, it may be tempting to define a new type to represent a particular value or absence of a value. I did this for my RDF language tags, defining:
data Language = Lang String | NoLanguage
It would have been far better to use:
type Language = Maybe String
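A small illustration (the helper names are mine) of the functionality that then comes for free from the prelude and Data.Maybe:

import Data.Maybe (fromMaybe)

type Language = Maybe String

-- Operations I would otherwise have hand-written for Lang/NoLanguage:
langOrDefault :: Language -> String
langOrDefault = fromMaybe "en"

describeLang :: Language -> String
describeLang = maybe "(no language)" ("language: " ++)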
Many of the built-in types are defined as Monads or Functors, either in the prelude or in supporting libraries. While not obvious to someone new to Haskell, these additional definitions provide a rich seam of added functionality, ready to be accessed as the techniques associated with higher order functions become appreciated. For example, consider 'sequence'...
The prelude function 'sequence' is surprisingly versatile.
The basic type signature is:
sequence :: Monad m => [m a] -> m [a]
Its function varies with the particular Monad (m) with which it is used:
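A few illustrative cases (a sketch; the results shown in comments are what I would expect from an interpreter session):

-- Maybe monad: Nothing if any element is Nothing, otherwise Just all results
-- sequence [Just 1, Just 2]   ==>  Just [1,2]
-- sequence [Just 1, Nothing]  ==>  Nothing

-- List monad: all combinations (a Cartesian product)
-- sequence [[1,2],[3,4]]      ==>  [[1,3],[1,4],[2,3],[2,4]]

-- IO monad: perform the actions in order, collecting their results
twoLines :: IO [String]
twoLines = sequence [getLine, getLine]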
Sometimes, sequence can be applied more than once to a value, with each application having quite different effects.
Hmm... is there a generalization of sequence that re-arranges the monadic structure? e.g.
foo :: (Monad m1, Monad m2) => m1 (m2 a) -> m2 (m1 a)
Andrew Pimlott notes:
There's lots of work on composing monads. I think that a function like this is usually called "swap". These two papers have something, though. I know there was one I found even better that I can't remember now.
The type ((->) e) can be defined as the simplest(?) instance of a Reader Monad, which can be used to allow several functions to be evaluated using the same "environment" value.
(Note that (->) is the function type constructor, having kind (* -> * -> *) -- see Haskell Report section 4.1.2.)
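A small sketch of this in use (the Config type and field names are mine; it assumes the Monad instance for ((->) e) is in scope, as provided by the standard monad libraries):

data Config = Config { verbose :: Bool, userName :: String }

-- Each of these is a value in the ((->) Config) monad: a function
-- awaiting the shared environment.
greeting :: Config -> String
greeting = do
    v <- verbose
    n <- userName
    return (if v then "Hello, " ++ n ++ "!" else n)

-- greeting (Config True "world")  ==>  "Hello, world!"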
A common requirement seems to be to scan through a list of values, and return a computation based on the first member of the list that yields a value for that computation. In imperative programming, this might be achieved by looping to probe each member of the list, and breaking out of the loop when a satisfactory element is found. If the entire list is scanned without finding a satisfactory element, some different or default result is returned.
In Haskell, a pattern for achieving this relies upon lazy evaluation: suppose the required computation is processItem, returning Just the desired result, or Nothing if the item does not yield a desired result. Then, the following code might be used:
listToMaybe $ catMaybes (map processItem list)
This returns Just the required value, or Nothing if no member of list yields such a value. Lazy evaluation means that only elements of list up to that which returns the desired value are actually processed. Alternatively, to return an empty list if there is no matching value, otherwise a singleton, use:
take 1 $ catMaybes (map processItem list)
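For example (processItem here is a hypothetical parser of mine):

import Data.Maybe (listToMaybe, catMaybes)

-- Yield Just an Int for a string that parses as one, else Nothing.
processItem :: String -> Maybe Int
processItem s = case reads s of
                  [(n, "")] -> Just n
                  _         -> Nothing

firstInt :: [String] -> Maybe Int
firstInt list = listToMaybe $ catMaybes (map processItem list)

-- firstInt ["x", "12", "7"]  ==>  Just 12
-- firstInt ["x", "y"]        ==>  Nothing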
Haskell's polymorphic type system allows very generic structure handling routines to be written. The list functions in the Haskell prelude and library are examples of this. The handling of structure-related issues can be kept separate from data-specific issues by using function parameters to handle data-specific functions. The Haskell list libraries have '...By' functions (such as sortBy and deleteBy) that abstract element matching tests in this way.
I have found that failing to separate traversal logic in this way tends to lead to over-complicated code. Separating traversal logic means that it can be tested on simpler data structures, and of course means that it can be re-used with different data. But even when the traversal function is so arbitrary as to defy re-use, the code simplification is a big win.
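A small sketch of the separation I have in mind (findBy is my own name; the libraries' find and the various ...By functions work the same way):

-- The traversal is generic; the data-specific test is a parameter.
findBy :: (a -> Bool) -> [a] -> Maybe a
findBy _ []     = Nothing
findBy p (x:xs) = if p x then Just x else findBy p xs

-- The same traversal, reused with different data-specific tests:
--   findBy ((== "key") . fst) pairs
--   findBy (> 10) numbers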
(This advice is carried a good deal further by the functional strategic programming project (Strafunski); see also the "Scrap Your Boilerplate" paper -- see http://www.ninebynine.org/Links.html#Programming-Haskell for links.)
Many of the lessons recorded here are due to careful and patient explanations by many members of the Haskell developer community, denizens of the mailing lists that are listed at http://www.haskell.org/mailinglist.html. My thanks are given to all, too many to enumerate, who have helped me along the way. Any errors or misunderstandings propagated above are, of course, solely my own fault.