I’m a rather novice Haskell developer, and my understanding of Monads might not be precise or accurate. I would expect that if I tried to write a similar article in a few years, it might come out slightly differently. So, if you feel like adding or correcting something, please leave your comments.
What is a Monad from an imperative programmer’s perspective?
Let me try to explain what it feels like coding in Haskell after tons and tons of programming experience in imperative languages. First of all, everything in Haskell is really a function. Not procedures, just functions. Moreover, functions in Haskell accept only one argument… Don’t worry: that one argument can be another function, so you can still imagine a function that looks like a normal imperative function with multiple arguments. This is called currying, and it is the root of the lambda expressions that have recently appeared in lots of imperative languages like C#, C++, and Java (is it not there yet?).
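To make currying concrete, here is a small sketch: a “two-argument” function like `add` is really a one-argument function that returns another function, which is why you can partially apply it.

```haskell
-- A "two-argument" function is really a one-argument function
-- that returns another one-argument function:
-- add :: Int -> (Int -> Int)
add :: Int -> Int -> Int
add x y = x + y

-- Partially applying `add` yields a new one-argument function.
addFive :: Int -> Int
addFive = add 5

main :: IO ()
main = print (addFive 37)  -- prints 42
```

The parentheses in `Int -> (Int -> Int)` are exactly what the arrows in Haskell type signatures mean by default.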
Another thing in Haskell that is rather distinct from ordinary imperative languages is its syntax. Lots of forward and backward arrows, colons, really weird indentation. It might feel completely incomprehensible at first, but it gets better over time as you start reading types more fluently. And once you get used to it, it seems very natural.
And the most ridiculous concept of them all (at first, anyway) is the concept of a Monad. What helped me at the beginning was to think about a Monad as a mechanism that allows you to separate code with side effects from code without side effects. That is one of the things that makes Haskell a programming language that really stands out. There is no such separation mechanism in impure imperative languages, only in pure languages like Haskell.
But it turns out that Monads do not necessarily encapsulate code with side effects. There are lots of examples of Monads that have pure implementations, e.g. the State Monad, which carries state from one function call to another, etc.
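To show there is no magic involved, here is a minimal, hand-rolled State monad (the real, more polished version lives in `Control.Monad.State` from the mtl package). It is just a pure function from an initial state to a `(result, new state)` pair:

```haskell
-- A State computation is a pure function from an initial state
-- to a (result, new state) pair. No side effects anywhere.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  -- Thread the state from the first computation into the second.
  State g >>= k = State $ \s ->
    let (a, s') = g s
    in runState (k a) s'

-- Read and replace the current state.
get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

-- A pure "counter": returns the old value, increments the state.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

main :: IO ()
main = print (runState (tick >> tick >> tick) 0)  -- prints (2,3)
```

The state passing is entirely hidden inside `>>=`, which is exactly the kind of plumbing a Monad lets you factor out.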
So, just to wrap this section up, I must say that for a normal imperative programmer, Haskell is really a mind-bending thing that, if you dare to learn it, will enrich your programmer’s vocabulary significantly, regardless of whether you are going to use it for real projects or not!
Why bother creating a Monad?
You don’t need to. It’s just that sometimes it makes sense to have some specific functions available only in a special context (the context of a monad). In the IO monad, for example, we have access to a wide variety of functions, e.g. putStrLn, etc.
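For instance, `putStrLn` has the type `String -> IO ()`: the `IO` in the result type is what both grants and marks access to the side effect. A tiny sketch (`shout` is a hypothetical helper, not a library function):

```haskell
import Data.Char (toUpper)

-- putStrLn :: String -> IO ()
-- The IO type constructor marks the computation as effectful;
-- there is no way to escape IO and pretend the effect is pure.
shout :: String -> IO ()
shout s = putStrLn (map toUpper s ++ "!")

main :: IO ()
main = shout "hello"  -- prints "HELLO!"
```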
Let me give you an example: let’s say we are in the context of bank accounts. In such a context it would make sense to have functions to transfer money from one account to another, to withdraw money, etc., and at the same time to maintain the state of these accounts in order to ensure their consistency while performing financial operations.
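As a sketch of that idea (the names `withdraw` and `transfer` are hypothetical, and I am using the built-in Maybe monad for the consistency check rather than a custom bank monad): `withdraw` fails with `Nothing` instead of overdrawing, and `transfer` composes it so a failed withdrawal aborts the whole operation.

```haskell
import qualified Data.Map as Map

-- Account balances kept in a Map from account name to balance.
type Accounts = Map.Map String Int

-- Refuse to overdraw: Nothing signals an inconsistent operation.
withdraw :: String -> Int -> Accounts -> Maybe Accounts
withdraw acc amt m = do
  bal <- Map.lookup acc m
  if bal < amt then Nothing else Just (Map.insert acc (bal - amt) m)

-- A transfer is a withdrawal followed by a deposit; any failure
-- in the Maybe monad aborts the whole thing.
transfer :: String -> String -> Int -> Accounts -> Maybe Accounts
transfer from to amt m = do
  m'  <- withdraw from amt m
  bal <- Map.lookup to m'
  Just (Map.insert to (bal + amt) m')

main :: IO ()
main = do
  let accts = Map.fromList [("alice", 100), ("bob", 50)]
  print (transfer "alice" "bob" 30 accts)
  print (transfer "bob" "alice" 200 accts)  -- Nothing: would overdraw
```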
One reason to create your own monadic type is to be able to combine it (compose is the right term) with the plenty of other monads out there. There might be many other reasons for using or creating monads, but the one that lets you compose things is the most important. I won’t go much further, but I will mention one other benefit: being able to code in the so-called imperative style using do notation, which makes some Haskell programs look like imperative programs, even though that is just syntactic sugar.
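To see how thin that sugar is, here is the same small Maybe computation written once with do notation and once with explicit binds; GHC desugars the former into the latter.

```haskell
-- The do-notation version: reads like imperative code.
sumTwoDo :: Maybe Int
sumTwoDo = do
  x <- Just 1
  y <- Just 2
  return (x + y)

-- What it desugars to: a chain of >>= with lambdas.
sumTwoBind :: Maybe Int
sumTwoBind = Just 1 >>= \x -> Just 2 >>= \y -> return (x + y)

main :: IO ()
main = print (sumTwoDo == sumTwoBind)  -- prints True
```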
Most of those benefits come from the nature of a monadic type: a small set of very specific rules that originate in the branch of mathematics called Category Theory.
Monad Definition and its Laws
Here is how the Monad definition looks in Haskell (for those obsessed with imperative analogues, you may think of the class keyword in Haskell as of an abstract class in C# that may have abstract and virtual methods):

class Monad m where
  -- | Wraps a value `a` into a monadic value `m a`
  return :: a -> m a
  -- | Feeds the value inside `m a` into a function producing `m b`
  (>>=) :: m a -> (a -> m b) -> m b
  -- | Sequences two monadic values, discarding the first result
  (>>) :: m a -> m b -> m b
  -- | Typically throws an exception
  fail :: String -> m a
  -- | Default implementation of the `>>` operation
  -- in terms of the `>>=` operation
  m >> k = m >>= \_ -> k
The first function is defined as return :: a -> m a. You should not think of it as of C#’s or C++’s return: there is nothing to return from. So just try to get used to this naming collision. What it does, in fact, is inject, or wrap, a value a into a monad m, producing a monadic value m a, which suggests thinking of a Monad as a container of a specific shape for a value.
The bind operations (>>=) and (>>) are there to combine two monadic values. The first one is a bit more generic, and the second one is defined in terms of the first: m >> k = m >>= \_ -> k.
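A small sketch of the difference at the Maybe type: `>>=` passes the result of the left computation to the right, while `>>` discards it but still sequences the “effect” (here, possible failure).

```haskell
-- Succeeds only on even numbers; fails (Nothing) otherwise.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (Just 8 >>= half)           -- Just 4
  print (Just 8 >>= half >>= half)  -- Just 2
  -- >> discards the left result, but a failure still propagates:
  print ((Nothing :: Maybe Int) >> Just 1)  -- Nothing
```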
fail :: String -> m a is there for historical reasons. Of course, you will never find it among the requirements for Monads coming from Category Theory.
And now the laws:
return a >>= k          =  k a
m >>= return            =  m
xs >>= return . f       =  fmap f xs
m >>= (\x -> k x >>= h) =  (m >>= k) >>= h
The first law, return a >>= k = k a, says that wrapping a value with return and then binding it to a function k is the same as simply applying k to the value directly.
The second law, m >>= return = m, says that binding any monadic value to the return function results in the same monadic value. In other words, return acts as an identity for bind.
The third law, xs >>= return . f = fmap f xs, says that binding a monadic value to the composition return . f is exactly the same as fmap f xs. As a reminder, the functor function has the type fmap :: Functor f => (a -> b) -> f a -> f b.
And the fourth law, m >>= (\x -> k x >>= h) = (m >>= k) >>= h, the so-called associativity law, says that it doesn’t matter how you group the binds; the result is the same.
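The laws can be spot-checked with concrete values at the Maybe type (a check for one set of inputs, not a proof; the names `k`, `h`, and `m` below just mirror the laws above):

```haskell
k :: Int -> Maybe Int
k x = Just (x + 1)

h :: Int -> Maybe Int
h x = if x > 0 then Just (x * 2) else Nothing

m :: Maybe Int
m = Just 20

main :: IO ()
main = do
  print ((return 20 >>= k) == k 20)                       -- first law
  print ((m >>= return) == m)                             -- second law
  print ((m >>= (\x -> k x >>= h)) == ((m >>= k) >>= h))  -- fourth law
```

All three prints should show True; a lawful Monad instance must satisfy them for every possible input, not just these.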
It normally feels like you are drowning in theoretical details when trying to learn anything about Monads, and I suspect readers will have the same feeling reading this particular post. That is why I would like to continue with this subject in the next post, where I will try to create a new Monad for a small project.