Monad vs Monoid In Scala


functional programming, monad, monoid, scala

Daniel Shin - October 23, 2015

This post is a sequel to Monad In Scala, Applicative vs Monad in Scala and Semigroup And Monoid In Scala.

So far we’ve made our journey through two realms of typeclasses: from Functor and Applicative to Monad, and from Semigroup to Monoid. This post won’t introduce any new typeclass but rather offer some general insights into how these two realms connect, and maybe talk a little about category theory, the overarching mathematical field where many of these typeclasses originate.

Realm of Monad

For the sake of illustration, we will generalize over Functor, Applicative and Monad as simply a Monad.

Monad lets us operate on values within some context. Here, context refers to a computational context that wraps the inner value.

The Future[A] monad wraps a value of type A within the context of being eventually computed in the future and exposes a certain set of methods that define operations on it.

The Option[A] monad wraps a value of type A within the context of being optional and exposes a certain set of methods that define operations on it.

The List[A] monad wraps values of type A within the context of being a varying number of values (non-deterministic) and exposes a certain set of methods that define operations on them.

Note that the actual monad instance is defined for the type constructor itself, without the type parameter: Future is the monad instance, not Future[A], which is a concrete type. But that’s an irrelevant detail for us now.

Now, the way we operate on these values within contexts is by passing functions to their methods. Internally, each Monad instance lifts these functions into the given context (lifting here means adapting a plain function so it applies to the values inside the context). This is an important point. All that Monad does is provide a way to operate on values within a context through methods like map, flatMap, flatten etc. All of these methods are higher-order functions, which means they rely on the functions passed to them to do the actually useful work.

Again, keep in mind that Monad simply exposes higher-order functions with defined semantics (whether they simply map over values or map and then flatten, for example), enforced by the type signature of the function each one accepts. It relies entirely on the function passed in to do the real computation.
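
To make this concrete, here is a minimal sketch using plain Scala’s Option (no scalaz needed): the methods only decide how to thread the context, while the functions we pass in carry the real computation.

// A minimal sketch with Option: map and flatMap are higher-order functions;
// the anonymous functions we pass in do the actual work.
val some: Option[Int] = Some(2)

some.map(x => x * 10)                                 // Some(20): map unwraps, applies, rewraps
some.flatMap(x => if (x > 0) Some(x * 10) else None)  // Some(20): flatMap also flattens
(None: Option[Int]).map(x => x * 10)                  // None: the context decides nothing is applied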

Realm of Monoid

Again, for the sake of illustration, we generalize over Semigroup and Monoid as Monoid.

Monoid lets us combine values within some context. Again, context means the same thing.

The Future[A] monoid combines values of type A within the context of being eventually computed in the future.

The Option[A] monoid combines values of type A within the context of being optional.

The List[A] monoid combines values of type A within the context of being a varying number of values (non-deterministic).
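
As a quick illustration (a sketch assuming scalaz 7 is on the classpath with its standard instances and syntax imported):

// Sketch, assuming: import scalaz._, Scalaz._
Option(1) |+| Option(2)           // Some(3): the inner Ints are combined with addition
Option(1) |+| (None: Option[Int]) // Some(1): None acts as the identity element
List(1, 2) |+| List(3)            // List(1, 2, 3): lists are combined by concatenation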

Note that with Monoid, unlike Monad, the actual instance is defined for the concrete type, not the type constructor. So, for example, Future[A] is in fact the type that has the Monoid instance, not Future[_].

The redundancy here with regard to the Monad section is intentional. The only difference between the Monad and Monoid instances we see here is that while

a Monad instance simply wraps the value of type A within the given context and exposes a certain set of methods to operate on it,

a Monoid instance already knows how to combine these values of type A within the given context.
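
To see what “already knows how to combine” means, here is a hand-rolled sketch (simplified typeclass definitions, not scalaz’s actual ones) of a Monoid instance for Option[A]: the combining logic lives inside the instance itself.

// Simplified sketch, not scalaz's definitions: the combining logic is baked
// into the instance, so callers never pass a function in.
trait Monoid[A] {
  def empty: A
  def combine(x: A, y: A): A
}

def optionMonoid[A](implicit M: Monoid[A]): Monoid[Option[A]] =
  new Monoid[Option[A]] {
    def empty: Option[A] = None
    def combine(x: Option[A], y: Option[A]): Option[A] = (x, y) match {
      case (Some(a), Some(b)) => Some(M.combine(a, b)) // the instance decides how to combine
      case (None, _)          => y                     // None is the identity
      case (_, None)          => x
    }
  }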

Insights

A good way to see their connection is by looking at an example:

// assuming scalaz 7 with its Future instances and syntax in scope, plus an implicit ExecutionContext
Future(1) |+| Future(2) // Future(3)
(Future(1) |@| Future(2)) { _ + _ } // Future(3)

They both produce the same result, yet the first operation relies on Monoid[Future[A]] while the second relies on Monad[Future] (both are defined in scalaz.std.Future).

This example makes the difference explicit. Monoid is similar to Monad in that both operate on values within a context, but Monoid explicitly defines the way the values are combined, while Monad only gives you a way to combine them; how it’s done has to be passed in explicitly in the form of a function.
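
Put differently (still assuming the same scalaz imports as above), the Applicative/Monad route lets us choose a different combining function at every call site, while the Monoid route always uses the one baked into the instance:

(Future(1) |@| Future(2)) { _ * _ } // Future(2): same instances, different combining function
(Future(1) |@| Future(2)) { _ - _ } // Future(-1)
Future(1) |+| Future(2)             // always Future(3): addition is fixed by Monoid[Int]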

Redemption

I’ve made lots of generalizations and logical leaps to get the essential ideas across intuitively. So let me redeem myself a little here.

First of all, comparing Monoid to Monad isn’t entirely fair, since Monad does a lot more than combine values, like enabling dependent operations. A fairer comparison would be between Monoid and Applicative, since they resemble each other in many ways.

Both work on operations that are independent and thus can be run in parallel.

Both require a similar set of primitive methods: empty and combine for Monoid, and unit and ap for Applicative.

Both define a similar operator to combine values: Monoid has |+| and Applicative has |@|, as you can see in the example above (though of course Monad also has access to |@| since it extends Applicative).

And in fact, Monoid is readily convertible to Applicative but not to Monad. Scalaz’s Monoid defines a method to do just that.

final def applicative: Applicative[({type λ[α]=F})#λ] = new Applicative[({type λ[α]=F})#λ] with SemigroupApply {
  def point[A](a: => A) = zero
}

SemigroupApply simply defines a mapping from Monoid’s zero (empty) and append (combine) to Applicative’s point (unit) and ap.

Monoid’s primitive methods map directly to Applicative’s primitive methods. To quote the documentation on Monoid.applicative from Scalaz:

A monoidal applicative functor, that implements point and ap with the operations zero and append respectively.
Note that the type parameter α in Applicative[({type λ[α]=F})#λ] is discarded; it is a phantom type.
As such, the functor cannot support scalaz.Bind.
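
To unpack the phantom-type remark, here is a hand-rolled sketch of the same idea, reusing the simplified Monoid trait from the earlier sketch and a hypothetical Const wrapper to stand in for the discarded type parameter (an illustration, not scalaz’s actual code):

// Illustration only. Const[M, A] carries nothing but an M; the A is exactly the
// phantom type the scalaz docs mention, which is why this applicative cannot
// support Bind/flatMap: there is no A value to feed to a continuation.
final case class Const[M, A](value: M)

trait Applicative[F[_]] {
  def point[A](a: => A): F[A]
  def ap[A, B](fa: F[A])(ff: F[A => B]): F[B]
}

def monoidApplicative[M](implicit M: Monoid[M]): Applicative[({ type λ[α] = Const[M, α] })#λ] =
  new Applicative[({ type λ[α] = Const[M, α] })#λ] {
    def point[A](a: => A): Const[M, A] = Const(M.empty)    // point/unit is backed by empty/zero
    def ap[A, B](fa: Const[M, A])(ff: Const[M, A => B]): Const[M, B] =
      Const(M.combine(ff.value, fa.value))                 // ap is backed by combine/append
  }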

Takeaway

Use Monoid when all you need is to combine independent operations (values within a context) and the way they combine is fixed and thus can be pre-defined.

Use Applicative when you need to combine independent operations (values within a context) but the way they combine cannot be pre-defined, so you need to pass in a function to do it.

Use Monad when you need to combine possibly dependent operations (values within a context), where the way they combine cannot be pre-defined and may even depend on the result of a previous operation, so you need to pass in a function to do it.

We can see here that as we gain more flexibility, we also gain more complexity and verbosity:

Future(1) |+| Future(2) // Monoid
(Future(1) |@| Future(2)) { _ + _ } // Applicative
Future(1).flatMap(one => Future(one + 2)) // Monad