# Applicative vs Monad In Scala

In the previous article, we looked at how `Monad` is a more *capable* subclass of `Applicative`: a monadic operation can, for example, depend on the result of the previous one. So does that mean we should always opt for `Monad`?

No. In fact, `Applicative` is preferable to `Monad` when all you need is to execute similar operations together.

`Monad` is a more liberal form of `Applicative` in that it adds one more capability: chaining, i.e. depending on the result of the previous operation. However, being less restricted means the typeclass can make fewer assumptions about its instances.

For example, because `Applicative` is more restricted (the operations must be independent of each other), parallel execution of the operations becomes possible. In `Monad`, since the restriction on instances is more liberal (the operations **may** depend on previous operations), the typeclass cannot guarantee that it is safe to execute them in parallel. It therefore opts for sequential execution of all monadic operations.
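For intuition, here is a sketch using only the standard library (not scalaz): with `Future`, whether work can run concurrently comes down to whether each `Future` can be created up front (independent operations) or only after an earlier result arrives (dependent operations).

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Applicative-style: both Futures are created up front, so they can
// run concurrently; flatMap/map here only combines finished results.
val fa = Future { 1 }
val fb = Future { 2 }
val independent = fa.flatMap(a => fb.map(b => a + b))

// Monad-style: the second Future is created only after the first
// completes, because its body needs `a`. Execution must be sequential.
val dependent = Future { 1 }.flatMap(a => Future { a + 2 })

Await.result(independent, 1.second) // 3
Await.result(dependent, 1.second)   // 3
```

Both produce the same value, but only the first version leaves the runtime free to execute the two computations at the same time.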

This becomes clear if you look at the signature of `ap` (the primitive method of `Applicative`) as overridden in scalaz's `Monad`:

```scala
override def ap[A, B](fa: => F[A])(f: => F[A => B]): F[B] = {
  bind(f)(map(fa))
}
```

In scalaz, `bind` is synonymous with `flatMap`.
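As an illustrative sketch with the standard library (the name `apOption` is mine, not scalaz's), here is that same `bind`-plus-`map` combination specialised to `Option`:

```scala
// ap expressed via flatMap and map, specialised to Option.
// First extract the function, then map it over the value.
def apOption[A, B](fa: => Option[A])(ff: Option[A => B]): Option[B] =
  ff.flatMap(f => fa.map(f))

apOption(Some(2): Option[Int])(Some((x: Int) => x * 10)) // Some(20)
apOption(Option.empty[Int])(Some((x: Int) => x * 10))    // None
```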

As you can see, the overridden `ap` method is nothing more than a combination of `bind` and `map`. Since `bind` **must** support dependent operations and therefore must execute sequentially, by extension `ap` also executes sequentially.

To put it in a different perspective, `Applicative` requires the structure of the entire computation to be defined before execution happens, whereas in `Monad` the structure of the computation depends on the individual computations themselves (it's ad hoc).

Look at the example:

```scala
(Future(1) |@| Future(2) |@| Future(3)) { _ + _ + _ } // Future(6)
// vs
Future(1).flatMap(one => Future(one + 2).flatMap(three => Future(three + 3))) // Future(6)
```

Both operations (the first applicative, the second monadic) produce the same result. But as you can see, in the second operation we can choose which operation to run next, possibly depending on the result of the previous one. This isn't the case with `Applicative`, where the entire structure of the operation is defined before execution happens. What it does is simple: if all `Future` operations return a success value, the final block runs; if not, a failed `Future` is returned instead.

The main point is that in `Applicative` all these `Future` operations will run in parallel, regardless of the success or failure of the other operations, and will only be *accumulated* or *folded* once all of them have finished. If all finish successfully, the block runs; otherwise the default behaviour defined by the `ap` method runs, usually returning the failure.

In `Applicative`, the operations are only combined at the end. This has another connotation: because all operations run first, independently of one another, their failures can be accumulated. Strictly speaking, this can also be implemented with `Monad` (since it is a superset of `Applicative`), but it is more natural with `Applicative`, whose semantics allow all operations to run in parallel.
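To sketch that accumulation with the standard library (a hand-rolled combinator over `Either`, rather than scalaz's `Validation`): an applicative-style `map2` can inspect *both* sides and collect every failure, whereas a `flatMap` chain would short-circuit at the first one.

```scala
// A minimal accumulating map2 over Either[List[String], A] (illustrative sketch).
type V[A] = Either[List[String], A]

def map2[A, B, C](va: V[A], vb: V[B])(f: (A, B) => C): V[C] =
  (va, vb) match {
    case (Right(a), Right(b)) => Right(f(a, b))
    case (Left(ea), Left(eb)) => Left(ea ++ eb) // both failed: accumulate
    case (Left(ea), _)        => Left(ea)
    case (_, Left(eb))        => Left(eb)
  }

map2(Left(List("bad name")): V[String], Left(List("bad age")): V[Int])((n, a) => s"$n:$a")
// Left(List("bad name", "bad age"))
```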

## Takeaway

Use `Applicative` if all you need is to run a batch of **independent** operations together and fold them at the end, with the added benefit of potential parallel execution.

Use `Monad` if you need to run operations that may depend on previous operations, with the (obvious) downside of sequential execution.

## Caveat

Parallel execution in the context of `Applicative` is a mere theoretical **possibility**. Whether a given instance actually runs in parallel is entirely up to the judgement of its author.