As far as monad tutorials go, this one seems quite good. I like the categorization of monads between "containers" and "recipes".
However, I personally think that monad tutorials tend to give people the wrong impression and leave them more confused than they were before, because they focus on the wrong thing.
A monad is not a complex concept, at all. IMO a more useful way to present the topic would be with one separate lesson for every common monad instance. Start with Maybe, then IO, then maybe State and List, and so on... because ultimately, every instance of a Monad works very differently. That's why the pattern is so useful in the first place, because it applies to so many places. (Note: this is a criticism of monad tutorials in general, not this one in particular, which seems to do a decent job on this front).
In my experience, people new to Haskell focus way too much on getting the "a-ha" moment for monads in general, when really you want a bunch of separate "a-ha" moments as you realize how each instance of a monad takes advantage of the pattern differently.
I also tend to think that monads are best demonstrated in Haskell rather than in other languages, if only because the notation is so much less clunky. That may just be me though. (EDIT: well, also because almost no other languages have typeclasses, so you have to approximate it with interfaces/traits/etc)
Also FYI: in part 2, the code examples have extra newlines in between every line, which makes it hard to read (I'm on firefox, if that matters).
I think you are right. I don't think I've fully mastered the concept yet, but what you are saying resonates with me.
I've been trying to grok monads for almost a decade. More and more I'm beginning to realize how "mundane" the concept is, and the usefulness really is just that specific pattern of mundanity.
Similar to pipelines on Linux, they are pretty basic, but their ubiquity and their use in composing unrelated things together is really, really useful, and you only get that if you use them in a variety of different ways.
Monads are extra cool because of the mathematical rigor behind them, and that's what I'm still trying to appreciate.
What helped me grok the mathematical rigor is: If you have a series of monad operations that exist purely in monad world -- in Haskell, if your expression is parametric over the type of the monad -- you shouldn't have to worry about how you do it.
This is what monads being "a monoid in the category of endofunctors" buys you. You want to turn monad X into monad Y? Sure, just join, flatten, return, bind in whatever way makes the type checker happy. Anything that only uses what's in the Monad typeclass must necessarily be a monad morphism, so if you're generic over your monads, you get that for free. And of course `fmap` and `bind` are required to be parameterized monad morphisms, so there's a lot you can get for free.
I came to the conclusion that a List<X> is a good generic data structure. For instance, in cases where the cardinality is supposed to be 0..1, it is often less trouble than a nullable scalar or an Optional<X>, and there are cases where you're going to get a list anyway, such as when you are querying a relational database. (Often people write awkward code to turn that result into a nullable/Optional, and then more awkward code to turn it back into a list later.) Lists work almost exactly the same in most languages, whereas there is often something weird about null; there might not be an Optional, or it might have something weird about it, etc.
For multi-language distributed processing, particularly if JSON is involved, it's worth a try.
To be fair, I write a lot of Java, where Optional is a train wreck in so many ways, not least that it could be null anyway and that you are allocating objects needlessly. I just see people get hypnotized by the awkward code, and they write bugs or scan right past them.
yes, exactly, not sure why your comment was downvoted. Also, generally it's not a cardinality of 0..1, it is `[]` vs `x:xs`, that is: either empty, or a multitude of values, where 1 is a specific instance of the multitude (a singleton).
If all monad instances work differently, what is the value of the Monad interface? What kind of useful generic code can one write against the Monad interface?
The more constrained your theory is, the fewer models you have of it and also the more structure you can exploit.
Monads, I think, offer enough structure in that we can exploit things like monad composition (as fraught as it is), monadic do/for syntax, and abstracting out "traversals" (over data structures most concretely, but also other sorts of traversals) with monadic accumulators.
There's at least one other practical advantage as well, that of "chunking".
A chess master is more capable of quickly memorizing realistic board states than an amateur (and equally good at memorizing randomized board states). When we have a grasp of relevant, powerful structures underlying our world, we can "chunk" along them to reason more quickly. People familiar with monads often can hand-wave a set of unknowns in a problem by recognizing it to be a monad-shaped problem that can be independently solved later.
> There's at least one other practical advantage as well, that of "chunking".
> When we have a grasp of relevant, powerful structures underlying our world, we can "chunk" along them to reason more quickly.
This is one thing I've observed about Haskell vs. other languages: it more readily gives names and abstractions to even the minutest and most trivial patterns in software, so that seemingly novel problems can be quickly pattern matched and catalogued against a structure that has almost certainly been seen before.
One example: I want to run two (monadic) computations, and then somehow combine together their results (with some binary operation). Such a trivial and fundamental mode of composition, that seems to lack a name in almost every other programming language. Haskell has a name for this mode of composition, and it's called liftM2.
Never again will you have to re-write this pattern for yourself, leaving yourself open to error, now that you have this new concept in your vocabulary. Other languages will happily let you reinvent the wheel for the umpteenth time, or invent idiosyncratic patterns and structures without realizing that they are just particular applications of an already well-studied and well-worn concept.
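To make that concrete, here's a quick sketch in plain Haskell (nothing here is specific to Maybe or lists, they're just two instances):

    import Control.Monad (liftM2)

    -- liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c
    -- Run two monadic computations, then combine their results:
    --   liftM2 (+) (Just 2) (Just 3)  ==  Just 5
    --   liftM2 (+) (Just 2) Nothing   ==  Nothing
    --   liftM2 (,) [1,2] "ab"         ==  [(1,'a'),(1,'b'),(2,'a'),(2,'b')]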
Every monad is also an applicative and liftA2 does/is the same thing as liftM2. The only reason they both exist was due to Monad being popularized in Haskell earlier than Applicative and thus not having it as a superclass until the Functor-Applicative-Monad Proposal in Haskell 2014. It was obviously correct, but a major breaking change that also got pork barreled a bit and so took a while to land.
Yes you're absolutely right. I had a bit of a brain fart moment here. If they were Applicative operations then you would not be able to use `liftM2`, not the other way around.
This is my fear when I think about doing actual work in a team using a functional language: that there's an imbalance in understanding between participants, turning every discussion about problem x into the pattern-matching problem y. Like "is this liftM2 or liftA2?"
I've only had a couple of months of experience working with Scala before the team switched to Java. The reasons were many, but one of them was that the external consultant who was most knowledgeable in "thinking with functions" was kind of a dick, making onboarding a horror show of "go look up yt video x before I can talk about this functionality", delivered in a condescending tone. So within a month he was let go, and then no one in the remaining team really had the courage to keep developing it further. Some thought that they could maybe maintain the current functionality, but the solution was only about half complete. (In the consultant's mind it was complete, because it was so generic you only needed to add a couple of lines in the right place to implement each coming feature.)
That said, I would love to work in a hardcore Haskell project with a real team, one with a couple of other "regular" coders that just help each other out when solving the actual problems at hand.
See for instance the MonadIO typeclass from Haskell [0]. Constraining against this typeclass allows one to write monadic code / do-notation that works with any monad, so long as that monad supports the execution of IO statements.
Now for instance, arbitrary effects (error handling, resource management, etc) can be composed on top of an IO monad (e.g. via monad transformers), and MonadIO code, that is written to only depend on the IO effects, can still be executed in these contexts with more effects layered on top.
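A minimal sketch of the idea (`logLine` is a made-up helper; `MonadIO` and `liftIO` are the real class and method):

    import Control.Monad.IO.Class (MonadIO, liftIO)

    -- Runs in IO itself, but also in ReaderT env IO, ExceptT e IO,
    -- or any other stack that has IO at the bottom.
    logLine :: MonadIO m => String -> m ()
    logLine msg = liftIO (putStrLn msg)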
`traverse` (or `foldM`) is probably a good start: likely the most useful monad-generic (or applicative-generic) function, simple but incredibly powerful and useful.
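For example (a tiny sketch; `halve` is made up):

    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    -- traverse halve [2,4,6]  ==  Just [1,2,3]
    -- traverse halve [2,3,6]  ==  Nothing  (one failure fails the whole traversal)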
More generally, monads essentially support function composition between monadic functions, so you can use them to write code that is agnostic to the monad it runs in. This lets you write, e.g., production code that runs in IO or Async or Maybe, but run it in Identity for unit testing.
Also, it allows syntactic sugar such as do-notation, which makes the code clear to work with even when you know exactly which monad you're working in.
Your basic problem is that your programming language can’t express the concept cleanly. You need what’s called “Higher-Kinded Types”.
To give you a concrete example, in C#
Func<A, B>, List<A> -> List<B>
Func<A, B>, Task<A> -> Task<B>
Func<A, B>, Func<C, A> -> Func<C, B>
Can’t be expressed using a generalisation. But in Haskell, you can write
fmap :: Functor f => (a -> b) -> f a -> f b
One of the biggest things that makes monads hard to understand is that the type systems of most languages can't represent them. Annoyingly, that includes the "typeless" ones.
I'm sorry, I'm not sure I understand entirely what you are trying to express by
Func<A, B>, List<A> -> List<B>
That said, in C#, you can write:
List<A> listA;
Task<A> taskA;
Func<A, B> func;
List<B> listB = from i in listA select func(i);
Task<B> taskB = from t in taskA select func(t);
And if it can resolve a method on List<T> called 'Select' that takes a Func<T, S> that returns a List<S>, and a method on Task<T> called 'Select' that takes a Func<T, S> that returns a Task<S> this will compile.
In other words, I kind of think that Select methods (which can be Select extension methods, of course) amount to functors in C#?
Now try to write `xs.Select(fn)` as a standalone method and define a signature for it where `xs` and `fn` are the only input arguments, so that it accepts both `listB` and `taskB` without a compilation error.
Yes, but you can’t write something that’s generic over “things that support Select” because that’s not expressible via the type system.
So you can't write a function once and then get a version that works on lists, a version that works on tasks, a version that works on nullables, and a version that works on parsers. One of the big secrets of monads is that Haskell users don't spend very much time thinking about them, while people without monads have to think about them all the time.
> Right, ‘some type that has a Select(Func<T, S>) method available’
Not just a Select(Func<T, S>), but a Select(Func<T, S>) that preserves its original context: List's Select returns a List, Task's Select returns a Task, and neither leaks into a different instance of Select.
>One of the biggest things that makes monads hard to understand is that the type systems of most languages can't represent them. Annoyingly, that includes the "typeless" ones.
Well, yeah, since a monad is a type, then a "typeless" PL will not be able to represent it.
Here is an analogy. List is a container whose elements can be any type. There are general operations applying to a list, e.g. map, reduce, filter, find, etc. Any data type (int, float, or bool) of list elements can use these same operations regardless.
It’s similar for monads. If you can provide a unit constructor to turn an object value into a monad value and a “map” operation that unwraps a monad value, applies a function to it, and wraps the result, you have monadized the object type. Your objects can participate in any algorithm that operates on monads.
The monad algorithms are the same. The only things different are the unit constructor and the mapping function.
> a “map” operation that unwraps a monad value, applies a function to it, and wraps the result
It can be misleading to think of "unwrapping" a monadic value, since the monad interface does not support it. For example, there's no way to implement a function `List<T> -> T` using monad operations; it requires something entirely separate (e.g. indexing into a List, in this case).
What monads do provide is `join`, which turns nested monadic values into flat ones, like `List<List<T>> -> List<T>`. Even this seemingly trivial example is interesting though, since there are many ways to "flatten" a List<List<T>> into a List<T>: we could concatenate (e.g. depth-first), interleave (e.g. breadth-first), diagonalise (to support infinite lists), operate on chunks at a time (e.g. iterative deepening), etc.
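To make that concrete, a sketch (only the concatenating version is the standard instance; `interleave` is made up for illustration and assumes a finite outer list):

    import Control.Monad (join)

    -- The standard List instance flattens by concatenation:
    --   join [[1,2],[3,4]]  ==  [1,2,3,4]

    -- But that's a choice; an interleaving flatten has the same type:
    interleave :: [[a]] -> [a]
    interleave xss = case filter (not . null) xss of
      []       -> []
      nonEmpty -> map head nonEmpty ++ interleave (map tail nonEmpty)

    --   interleave [[1,2],[3,4]]  ==  [1,3,2,4]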
> For example, there's no way to implement a function `List<T> -> T` using monad operations; it requires something entirely separate (e.g. indexing into a List, in this case).
This is called a catamorphism, that is, a fold. The opposite transformation is called an anamorphism, that is, generation from a seed value.
> If all monad instances work differently, what is the value of the Monad interface? What kind of useful generic code can one write against the Monad interface?
Code that composes a bunch of operations, for whatever kind of composition those operations need (some people call Monad "programmable semicolons", because it's a lot like sequencing). E.g. traversals of data structures, or some kind of "do this in a context" operation. Essentially any function you pass a "callback" to should probably be written in terms of monads, so that it can accept callbacks that need different kinds of composition beyond just being called at different points in the control flow.
Lots of useful generic code. `mapM` is a version of `map` that works with any Monad, `sequence` works with any Monad, and so on. These are used very frequently.
But the bigger benefit is when syntax sugar like `do` notation comes in. Because it works for any Monad, people can write their own Monads and take advantage of the syntax sugar. That leads to an explosion of creativity unavailable to languages who "lock down" their syntax sugar to just what the language designers intended. In other words, what requires a change to other languages can often be a library in Haskell.
It's not about what a monad can do, it's about a property of the language: referential transparency. Haskell has referential transparency, Python doesn't. That's a technical condition but here's a simple consequence: effect typing. In Haskell you can know what possible effects an operation has from its type. Here's an example from my effect system, Bluefin:
    foo ::
      _ =>
      Exception String e1 ->
      State Int e2 ->
      Eff es Bool
    foo = ...
We know that `foo` produces a `Bool` and the only effects it can do are to throw a `String` exception and mutate an `Int` state. That's it. It can't yield anything to a stream, it can't make network connections, it can't read from disk. In order to compose these operations together, `Eff` has to be an instance of `Monad`. That's the only way `Monad` turns up in this thing at all.
So, that's what you get in Haskell that Python doesn't give you.
Those annotations create a compile-time enforced, type-level relationship between the input and output values of the function. Python doesn't have mandatory type-checking, so it can't do that. But the parent wasn't referring to type-checking. They claimed monads have expressive power unavailable in other languages.
Monad is a weird type that a lot of languages can't properly represent in their type system. However, if you do what dynamically-typed scripting languages do, you can do any fancy thing that Haskell does, because it is weakly typed in this sense. (The sense in which Python is "strongly typed" is a different one.)
What you can't do is not do the things that Haskell blocks you from doing because it's type-unsafe, like, making sure that calling "bind" on a list returns a list and not a QT Window or an integer or something.
This may be a dissenting opinion, but... Haskell tried to avoid mutable state. "Local state manipulation" was not really a thing you could do in Haskell, deliberately. Then someone figured out that you could (ab)use a monad to do that. And because that was the only way, whenever they needed to manipulate state, Haskell programmers reached for a monad.
So it's not "what can a Haskell monad do that a Python class cannot". It's "what can a Python class do in a straightforward way that Haskell has to use a monad for, because Haskell put the programmer in a straightjacket where they couldn't do it without a monad". It's basically a pattern to get around the limitations of a language (at least when it's used for state).
This is not historically how Haskell was developed. Haskell didn't try to "avoid mutable state". Haskell tried to be (and indeed succeeded in being) referentially transparent. Now, it turns out that you can't uphold referential transparency whilst having access to mutable state in the "traditional" way, but you can access mutable state if you introduce monads as a means of structuring your computation.
So, they're certainly not a means of getting around a limitation of the language. If it was just a limitation that limitation would have been lifted a long time ago! It's a means of preserving a desirable property of the language (referential transparency) whilst also preserving access to mutable state, exceptions, I/O, and all sorts of other things one expects in a normal language.
But historically, wasn't there a fair period of time between Haskell insisting on referential transparency (and therefore not allowing traditional mutable state) and monads being introduced as a way to deal with it? That was my understanding of the history.
And if so, then it seems fair to say at least that monads were a way to get around the limitations imposed by a desirable feature of the language...
> But historically, wasn't there a fair period of time between Haskell insisting on referential transparency (and therefore not allowing traditional mutable state) and monads being introduced as a way to deal with it? That was my understanding of the history.
Yes, although there were solutions in the meantime. I/O was performed in the original version of Haskell through input-output streams and continuation passing style. It turns out that both approaches could have been given monad interfaces if "monad" as an abstraction had been understood at the time, but it wasn't, so they had ad hoc interfaces instead.
> And if so, then it seems fair to say at least that monads were a way to get around the limitations imposed by a desirable feature of the language...
I mean, sort of, but that seems more of a judgement than a fact. Would you say that function calls in C were a way to "get around the limitations imposed by not allowing global jumps"?
In both cases I'd just say they're a useful abstraction that lets you achieve a well-specified goal whilst preserving some desirable language property.
As I so often do, I find it helpful to analogize Monad to Iterator for questions like these, because it's a typeclass/interface/etc. that people are more used to and does not have that aura of "if I feel like I understand it I must not understand it" attached to it that blocks so much learning.
You extremely often use iterators in a context where there's no way you could usefully slot in just "any" iterator and have some useful code. Suppose you have an iterator that iterates over the links that appear in an HTML document, and write some code to fetch the HTTP resources so referenced. Well, obviously, "writing against the iterator interface" doesn't do you any good in that case. It's not like you can slot in an iterator that iterates over prime numbers to such code and get anything out of it.
What you can do with the Iterator interface is provide extremely generic tools that can be used against any Iterator, like, take the first x, skip every other one, reverse the iterator list (if finite and for a price), filter the results against a type-specific acceptance function, all kinds of things: https://docs.python.org/3/library/itertools.html These tools do not depend on the details of what the iterator is or how it works, only that it is one. In this case you might even use something as powerful as "give me an iterator and a function to run against the value that comes out of the iterator and I will run it in a parallel map and limit the number of workers and handle errors in this specific way", but all that code has no specific knowledge about URLs or fetching things from the web or anything like that. It just knows it has an iterator and a matching function for the value coming out.
Similarly, "writing to the Monad interface" gives you access to a wide variety of tools that work across all things that implement the monad interface: https://hackage.haskell.org/package/base-4.21.0.0/docs/Contr... What exactly they do depends on the underlying monad implementation. It happens that they turn out to be very useful in practice a lot of the time.
You can also create new compositions of the tools that only pay attention to the interfaces, like, "drop the first x values and then filter the rest" for an iterator, though often the libraries ship with the vast majority of what you need.
Written against the interface specifically you can only use exactly what is in the interface. But you also have the concrete types to work with, with whatever it is they do. Just as you can't really do much "real work" against just "something that provides a next value" when you have no idea what that next "value" is, but iterators are very useful with specific types, monads are the same way.
(You can then later work up to code that allows swapping out which monad you may be using depending on how it is called, but I prefer to start here and work up to that.)
This is a cool example but I think it is missing the perspective of what the interface can abstract. For example if I program a data structure to provide an Iterator I get to use these itertool functions for free no matter how complex the data structure is underneath.
The trouble I have with Monads is that what you get for free doesn't seem very exciting. It feels like I'm stuck in the world of a particular monad, like State or Promises, and then to do anything remotely useful you have to bring in all of this monad transformer machinery to switch worlds again.
Actually, it sounds to me like you largely have it.
"The trouble I have with Monads is that what get for free doesn't seem very exciting."
I think there's a lot of truth to that, actually.
One of the persistent myths about "monad" is that they somehow "add" to a datatype: that the datatype was able to do X and Y, but now that it's a monad it can do X and Y and Z and M and N. But that's not true. Monad is an interface that can be implemented on things. Once you implement it, you get a lot of little tools, but individually, none of them are necessarily mindblowing, and pretty much by definition it can't be anything the data type couldn't already do.
(Likewise, I'd suggest that what you get with iterator isn't really all that "exciting" either. Useful, oh yes beyond a shadow of a doubt. But it's not exciting. Iterator qua iterator doesn't let you do anything you couldn't do without it.)
The convenience comes in that they're now the same across all monads. mapM does what it does and you no longer need to consult the specific type you are currently using for what it does, and so on for each thing.
If one "removed" monad from Haskell, that is actually what would happen. It's not the Haskell wouldn't be able to do any fewer things. It's just that you'd have to consult each data type for these functions, and they'd be named different things. (And you wouldn't be able to abstract over these operations in different datatypes without basically rebuilding monad in the process.)
I think the standard for "knowing" monad isn't that you can type a bit of a do block and get it to do something, or that you can understand what a particular block of code using list as a monad does; it's when you completely naturally are programming along in Haskell, and you realize "Hey, I've typed
    do
      x <- something1 arg1
      y <- something2 x
      z <- something3 y
      t <- something4 z
out, I bet there must be something to do that in Control.Monad" and you go and look it up and find out that yes there indeed is, and add >=> to your bag of tricks.
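For reference, a sketch of that refactor, using the made-up names from above:

    -- the do-block, collapsed:
    something1 arg1 >>= something2 >>= something3 >>= something4
    -- or point-free, with Kleisli composition:
    (something1 >=> something2 >=> something3 >=> something4) arg1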
Thanks for the feedback! I didn't expect my post to garner a lot of attention. I am totally OK with rewriting part 1, potentially making it more concise, since the reaction I'd like to avoid is "wow, this post is super long, monads must be complex!"
I can reproduce the double line issue in part 2, this was my mistake and I missed it as part of my editing process. I'll delete part 2 while I make the corrections.
Yes, and I don't even see the value in generalizing to one Monad concept. It only makes things worse, as then one is tempted to share terminology between different kinds of monads. E.g. there's no reason Maybe's flatMap is called the same as List's flatMap, it might be more readable to call them differently, as some libraries do.
Actions compose, types (generally) don’t. So Monad X and Monad Y may not make a valid Monad Z, but Kleisli composition very much exists for actions within a monad.
But the whole promise of monads is precisely that they are a type that can compose.
It basically allows you to pipe successive function calls returning different types by lifting these types into a monad.
Don't get me wrong, that promise is very powerful and in the rare few cases where it works, it unlocks beautiful composition, but the simple truth is that monads are really not that useful outside of Haskell (and I'd say, it's even questionable within).
I think "monad" is overloaded, or at least there are varying depths of understanding that are confused.
From a programming perspective, the definition of monads is clear.
    bind   :: m a -> (a -> m b) -> m b
    return :: a -> m a
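Instantiated at Maybe, for example, the standard instance is just (a sketch):

    -- return x        =  Just x
    -- Nothing  >>= f  =  Nothing
    -- (Just x) >>= f  =  f x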
You can start using monads immediately, and in a language like Haskell, things click fairly quickly, because monads are used everywhere and taken seriously in that language.
But the implications and consequences of this definition for monads aren't always obvious, like how they can be used to structure operations or whatever.
And then there's the category-theoretic business of monads, which you don't need to understand for most programming purposes. That might be a source of confusion. As you more or less say, people have vague, built-up expectations about monads. They expect something heavy and mysterious and doubt they've understood them according to the first definition. But the basic definition is straightforward.
Numbers are like this, too. You understand what a number is (a quantity). You can perform all sorts of operations and calculations using them without knowing number theory or the philosophy of mathematics.
- a lot of functions that can ignore that hidden information
- some functions that can act (switch|add|mutate|signal|sequence) on that hidden information
People seem to think talking about flatMap somehow makes this intuitive, despite the tautological issue of flatMap only making sense if you already know what's going on.
A list is not a monad. A list is a data structure; a monad is more like a "trait" or "interface." So you can define a List type that "implements" the monad interface, but this is not an inherent property of lists themselves. That's the sense in which a list "is a" monad: the OOP sense.
Haskell's List monad provides a model for nondeterminism. But that certainly isn't the only way List could satisfy the monad interface! It was a deliberate choice -- a good choice, possibly the best choice, but a choice nonetheless.
Hi, I completely agree. "A" list isn't inherently a monad, and that is where my metaphor starts to fall apart a bit (my post title furthers this issue.)
I can clarify this earlier in part 1 or 2 instead of in to-be-written part 3.
Nope. It's that there's only one lawful Functor instance. But there can be multiple lawful Applicative and Monad instances; lists are the classic example (zip vs cross-product).
The cross-product is not to be confused with the Cartesian product, which is related to the (in this case unfortunately named) "cross join" in SQL. Cross products operate in ℝ³, while Cartesian products are just defined over sets. The standard List monad instance uses the latter.
When you bind with (the Haskell definition for) the List monad - `someList >>= \someElement -> ...` it's like you're saying "this is a forking point - run the rest of the computation for each of the possible values of someElement as taken from someList". And because Haskell is lazy, it's (pardon the informality here) not necessarily just going to pick the first option and then glom onto it if it, say, were to cause an infinite loop if the computation were eagerly evaluated; it'll give you all the possibilities, and as long as you're careful not to force ones that shouldn't be forced, you won't run into any problems. Nondeterminism!
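A tiny example of that forking (standard Haskell):

    pairs :: [(Int, Char)]
    pairs = do
      x <- [1, 2]    -- fork: run the rest for x = 1 and for x = 2
      c <- "ab"      -- fork again for each x
      return (x, c)

    -- pairs == [(1,'a'),(1,'b'),(2,'a'),(2,'b')]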
A nice demonstration of this is writing a very simple regex matcher with the List monad. A naive implementation in Haskell with the List monad Just Works, because it's effectively a direct translation of Nondeterministic Finite Automata into code.
From automata theory, you might know that nondeterministic automata are represented by a set of states. Deterministic automata are always in a specific state, while nondeterministic ones are in multiple at once. Lists are used for non-determinism in Haskell the same way as a set, mainly because they are easier to implement. But the total order that a list induces over a set is not that relevant.
That's right, and you see this directly reflected in the "types" of the transition functions for DFAs and NFAs, respectively [0] [1]:
δ : Q × E ⟶ Q
δ : Q × E ⟶ P(Q)
... where Q denotes the set of the automaton's states, E its alphabet of input symbols, and P the power set operation. Deterministic automata arrive at a definite, single state drawn from Q, while non-deterministic automata may arrive at a set (~list) of possible states, when given a current state from Q and next input symbol from E.
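In Haskell terms, that power-set-valued transition function is exactly what the list monad's bind handles (a sketch; `State` and `delta` are made up):

    type State = Int

    -- One NFA step over a whole set of current states, with lists as sets:
    step :: (State -> Char -> [State]) -> [State] -> Char -> [State]
    step delta qs c = qs >>= \q -> delta q c   -- list bind: follow every fork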
> A list is not a monad. A list is a data structure; a monad is more like a "trait" or "interface."
You're operating with notions bound to specific programming languages. Instead, consider that a list isn't a data structure; it's an abstraction that defines a multitude of position-ordered values. That multitude of position-ordered values called "list" can serve as a carrier for "monad", because it can be used as a valid input for a monadic computation defined consistently (in terms of the monad laws).
So I come at this from a math background but I’ve always found these explanations to be overly complex. In the parlance of C++, I think of a monad as a template class T with the following properties:
1. For any class X, there is a canonical method
F: X -> T<X>
2. For any class X, there is a canonical method
G: T<T<X>> -> T<X>.
3. For classes X and Y, and any method
f: X -> Y,
there is a corresponding method
“T<f>”: T<X> -> T<Y>.
—————-
Here “any type” means any type that is compatible with the template.
And then there’s some additional rules which make all these methods compatible, based on the different ways of stacking nested T’s and the “canonical” maps you get. Admittedly there is some confusing accounting here, but I also think most natural ways of constructing the above three requirements are going to satisfy them anyway. For List and Maybe it’s fairly obvious what the above methods are.
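For cross-reference, these three have standard Haskell names (a sketch, matching the numbering above):

    -- (1) F : X -> T<X>         return/pure :: a -> t a
    -- (2) G : T<T<X>> -> T<X>   join        :: t (t a) -> t a
    -- (3) T<f>                  fmap        :: (a -> b) -> t a -> t b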
I dunno, maybe I have it wrong and someone can correct my understanding.
This is like explaining chess by simply stating the rules. Like sure explaining the rules of chess is important but only knowing the rules provides for nothing more than a superficial understanding of a topic.
I mean if someone is learning chess for the first time, then yes you should start with the rules rather than jumping right into waxing philosophic about positional strategy to show off how smart you are.
I think the point is that a monad is a useful concept _purely_ because of what it _allows_ you to do, and _not_ because of anything syntactical. Those rules that you're obviating there, the commutative squares, are precisely what then lets us have powerful intuitions about these objects. The type signatures matter a lot less. If, for example, you don't have functoriality (which is false for `std::vector`, for instance, since `std::vector<bool>` is special-cased) you lose the ability to reason powerfully about abstract algorithms.
Thus, explaining the syntax and where the type variables go is explaining the least relevant thing about monads to their power and importance. It's certainly easy to showcase both the syntax and the list and maybe monads, that's part of the "monad tutorial fallacy". Gaining intuition for how to think about monads _in general_ is a lot harder and requires practice. Like, yes, list and maybe are "containers", but is `(->) t` a container? Is `IO`? How do these compose, if at all? What is this about "effect" semantics, "I thought monads were just burritos/containers"? etc. These are the hard, both conceptually and pedagogically, questions. Yes you need to know the syntax to use it in any given programming language, but knowing what scabbard your knife fits in doesn't give you the skills of how to use knife :)
Yeah I am aware that most teachers, like another commenter said, do teach math just by spitting out the rules and then people memorize them and sure who am I to argue against that. It's not how I learned math and most people I know who are well versed in math including those with PhDs (and working as a quant I am fortunate enough to work with many) also don't really learn things that way either...
When I taught my daughter chess at the age of 5, I did not teach her by laying out the rules and in general when I teach people anything, I don't start by emphasizing rules. Rules are often the least interesting part of any subject and I find rules are much easier to learn once a sufficient amount of motivation has been given first, and then we get to a point where we use rules to crystalize our intuition and express rigorously what we've come to learn. The rules are the end result of learning, not the starting point.
With chess, I taught my daughter by simply playing a simple game where I had a queen, and she had one king and two pawns, and her goal was to get one pawn across the board and my goal was to stop her.
The king and pawns are arranged so that with perfect play she is guaranteed a win, and eventually she learned the pattern that allows her to always win. I then switched it up so she gets the queen and I get the king/pawns but I arrange the king/pawns so that I will always lose if she plays perfectly.
After this, I added more pawns for white and a bishop for black. The motivation for adding these pieces is that to play perfectly and always win as white, you need to learn how pawns define the physical structure of the board, making walls and allowing for control over certain parts of the board, but this also comes at the cost of introducing clutter and making it hard for the king to maneuver.
After these principles are learned, then I introduce the knight, because the knight can jump over walls and isn't as burdened by clutter. I don't just put a knight in the game out of nowhere and make her memorize how it moves... by the time the knight is introduced, it feels natural to want a piece that has better maneuverability over the extra pawns.
And so on so forth... each piece is introduced with some kind of motivation for why it exists. It's not just there for the sake of existing because of some rule that you just need to memorize. It exists because it adds some kind of flavor and richness to the game and you end up absorbing the rules rather than just memorizing them.
Now I'm dwelling a bit on chess here... but the principle applies to programming as well. I don't think anyone would ever come to appreciate why these rules matter, or why they should spend any time learning them, if you just listed the rules as a means of teaching them about monads. In fact, I am fairly confident that this kind of approach to teaching people functional programming is why it's taken so long for mainstream languages to appreciate these features and adopt them.
If you don't already know category theory, learning it is hard. The terms on Wikipedia seem to form a dense graph of links; it's hard to get a foothold of comprehension. For people who already know C++, or are at least familiar with this syntax, this is more useful than describing it in Haskell syntax or category theory. There seems to be a chicken-and-egg problem regarding Haskell and monads. Learning C++ may be harder or easier than learning category theory; I'm not sure, as I don't understand either one of them. But this syntax makes more sense to me than something expressed in terms of category-theory vocabulary.
Reminds me of an analogy for a monad I once heard. Not sure if it is correct, because I lack the mathematical understanding to verify it.
Anyway, a nested To-Do list is (allegedly) a common form of a monad. Say I am trying to clean my whole house. Well, I could have an item for a task like cleaning the kitchen that has each task I need to do in the kitchen in order for the kitchen to be cleaned. I can do the same for the living room, garage, etc..
However, that is mainly for organization purposes. While I may write the tasks in a nested manner, I could very well write each item as just a long flat list of To-Do tasks, and in reality, all the tasks are effectively completed as if they were one large flat list.
Is that kind of what you mean by 'flatMappable'? Nested To-Do lists being flattened and mapped to one large list? As in, a To-Do list of To-Do lists is just a single, larger To-Do list?
Not exactly. The flatMap() operation itself isn’t going to flatten an arbitrarily nested todo list. It just concatenates the lists that are returned after applying the callback to each item.
The monad part is about what happens if you call flatMap() repeatedly. That is, each call to flatMap() is one action, and these actions can be nested without affecting the result.
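Concretely, that's the associativity law (Haskell-style sketch, writing `>>=` for flatMap):

    -- how you nest the calls doesn't affect the result:
    (xs >>= f) >>= g   ==   xs >>= (\x -> f x >>= g)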
It's like... what would you call all types that have a .write() method? Writable right? What would you call all types that have a .dispose() method? Disposable. What would you call all types that have a .flatMap() method? Monad obviously.
That’s because flatMap() is a good name for a particular list operation, but it’s not generic enough to be a good name for the corresponding monad operation.
I’m not sure there is a good name for the monad operation. Sometimes it’s called ‘bind’ but what does it bind?
I suppose you could call it ‘then’ like when working with Promises.
All pure values are automatically lifted into the Kyo monad, so `map` is effectively `flatMap`.
From the linked docs:
> This unique property removes the need to juggle between map and flatMap. All values are automatically promoted to a Kyo computation with zero pending effects, enabling you to focus on your application logic rather than the intricacies of effect handling.
In the end it makes a lot of sense, I think. What you do is manipulate values inside some wrapper. Whether this wrapper is a monad or not should not matter. Just do something with the value(s) inside, and that's mapping.
The way I think of it, monads are a solution to Callback Hell, where you've fallen in love with lambdas, but now you have a nightmarish mess of lambdas in lambdas and lambdas calling lambdas. The monadic functions allow you to create "for comprehensions" aka "do comprehensions" but really, they look like a classic for-each loop. They secretly call the monadic map/flatMap/filter functions.
    for x in list
      doThings(x)
These comprehensions have a strange bonus feature, that you can do nested "loops" all at once, and even add "guards" (little if statements)
    newlist =
      for x in list1
          y in list2 if y > 3
          z in list3
        doThings(x, y, z)
But again, the comprehension, when "de-sugared", is secretly calling the map/flatMap/filter functions of list1, list2, list3 to get our result. You as the author of a given monad can implement those functions however you want, and they're all 3 lambda based. But notice how the comprehension is flattening those lambdas out! Our callbacks in callbacks are much more readable like this.
Without comprehensions, you can still implement monadic functions in any old language (probably in C...?), and they're handy in their own right, but you don't get the flattening-of-callback-hell magic.
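In Haskell the desugaring looks roughly like this (a sketch; `list1`, `doThings`, etc. are the made-up names from above):

    import Control.Monad (guard)

    newlist = do
      x <- list1
      y <- list2
      guard (y > 3)            -- the "if y > 3" guard
      z <- list3
      return (doThings x y z)

    -- ...which the compiler rewrites into nested lambdas:
    -- list1 >>= \x -> list2 >>= \y -> guard (y > 3) >>
    --   list3 >>= \z -> return (doThings x y z)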
In Scala you can add the needed methods to any type and then they will "magically" work in for-comprehensions. In Haskell you need to implement a Monad instance, which then does the same trick.
The concrete implementations of these methods need to obey to some algebraic laws for the data structure which defines them to be called a monad. But that's pretty much it.
In my opinion all that Haskell in most "monad tutorials" just blurs an in principle very simple concept.
The in-practice relevant part is that a monad can be seen as an interface for a wrapper type: it has a constructor that wraps some value (whether a flat value, some collection, or even functions makes no difference), it does not expose an accessor to this wrapped value, and it has a `flatMap` method defined. It also inherits a `map` method, coming from an interface called "Functor". The thing is also an instance of "Applicative", an interface that adds a way to combine two wrapped values into one wrapped result (e.g. applying a wrapped function to a wrapped value); the classic string-concatenation example really belongs to the simpler "combine" interface (Semigroup) rather than to Applicative specifically.
First off, I'm not sure it's even worth it to understand this stuff... Second, someone should be along to slam it soon enough and insist I've missed some gibberishy business that you'll never understand.
With those caveats in mind, here's a more intensive scala-based monad tutorial I made:
But really, don't burn up too much of your short life trying to come to terms with this stuff. There's a reason most languages don't get around to supporting Monads...
It's really worth understanding. I studied Haskell and Scala to write better Python, TypeScript, and Java, and it did help.
The whole thing about JS's Promises becomes way clearer when you see that they are a monad, except for one discrepancy (they auto-flatten themselves). It leads to much shorter and clearer code when doing pedestrian frontend stuff.
To get a minimal idea, you can think about a monad as a parametrized class: M<T>. Its functioning follows "monad laws" that allow you to do certain things with it, and with the value(s) of T wrapped by it. In particular, you can always "map" the values: given a function from T to U, you get an M<U> out of an M<T>.
Splitting hairs even further: the .then() returns a resolved value of the inner Promise, not the inner Promise itself, when the outer Promise resolves, so not "immediately" indeed. That's where the flattening occurs, AFAICT.
* "List is a monad" - True. But I think "`List` is a monad" might be clearer.
* "A list is an algebra for the `List` monad." - False!
What's correct is the following:
* "An algebra for the `List` Monad is precisely a monoid."
Sketch of the construction:
(an algebra for the list monad is a monoid): Recall that an algebra is a set/type `A` together with a function `mult: List A -> A` together with some axioms. Think of such a function `mult: List A -> A` as the function that assigns to each list with elements in `A` the product over all those elements. The aforementioned axioms boil down to: (1) `mult([])` is a neutral element and (2) `mult` is an associative binary operation when restricted to two-element lists.
(a monoid is an algebra for the list monad): Same idea. Given a monoid, we define a function `mult: List A -> A` by assigning to a list of elements of `A` the product of those elements, and assigning the neutral element to the empty list. Then we can use associativity and the properties of the neutral element to show that this function constitutes an algebra for the list monad.
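In Haskell terms the whole construction is short (a sketch):

    -- A monoid gives an algebra for the list monad:
    mult :: Monoid a => [a] -> a
    mult = mconcat

    -- mult []    == mempty   (neutral element)
    -- mult [x,y] == x <> y   (associative binary operation)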
You're quite right! Thanks for the correction. I should have said that a list is an element of a free algebra over the List monad, which is less pithy.
Thanks for the feedback! I can definitely rename the post soon as a first step, although this may require rewriting a chunk of the article to more accurately reflect the fact that List is a monad, and not "a" list.
I could make this distinction in part 3 (not written yet) although I want to balance not misleading readers, but not overcomplicating it too early on.
Another tutorial which makes monads about 100x more impossible to understand for me by relating them to something else and describing all the weird ways that they are NOT that thing.
IMO if you already have it, this will be a lovely comparison full of insight, but if you haven't then it's full of confusing statements.
IMO what they are is utterly unimportant, except to mathematicians, and what you can do with them is more to the point.
The fact that explanations are so often in Haskell just makes them more unintelligible because you really need to know what problem they solve.
Thanks for the feedback! I'll likely be editing part 1 to include the feedback so far from the commenters as well. If there's a specific statement or analogy that felt especially confusing, please point it out and I'll clarify it in the post.
IDK if it adds anything to the article, but `map` is a property of Functors, and every Monad is a Functor. Well, every Monad is an Applicative Functor, and every Applicative Functor is a Functor.
All a way of saying that, yep, you always have `map` when you have a Monad, but you don't need a Monad to have `map`.
If you want an example, we can compare a regular list and a ZipList. A regular list's Applicative instance does a cross product, while a ZipList's Applicative instance does a dot product.
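A GHCi-style sketch of the difference:

    import Control.Applicative (ZipList(..))

    -- regular lists: all combinations ("cross product")
    --   (+) <$> [1,2] <*> [10,20]                              ==  [11,21,12,22]

    -- ZipList: pointwise ("dot product")
    --   getZipList ((+) <$> ZipList [1,2] <*> ZipList [10,20]) ==  [11,22]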
For quirky reasons in Haskell, `fmap` is the function implemented for every Functor instance. This is because the name `map` was already taken by lists. Weird, I know.
Realizing that lists are monads is what made monads "click" for me.
When I was first learning Haskell a million years ago, I was completely confused by the concept of a monad; I could, after enough fighting with the compiler, usually get something working, but it was a stochastic guess-and-check process trying to figure out what `IO` actually means. Even the `Maybe` was confusing to me, because I couldn't really figure out how the hell you relate something like "checking for null" with "writing to the disk".
I can't remember where I saw it, probably on the Haskell wiki somewhere, but when they pointed out the List is a monad, and after seeing an example of how it worked, I suddenly got it: in a hand-wavey way, a monad is basically just a value with a wrapper context [1], and from a practical perspective that's all it is. In the case of a List its wrapper context is that there might be 0 or many of those things in there, in the case of a Maybe its wrapper context is that it might exist or it might not, in the case of IO its wrapper context is that it's interfacing with the outside world, and once you abstract away the entire idea of context, you can suddenly open up an entire world of reusability.
This is a good tutorial, I will probably be linking it to people if they ever make the mistake of asking about monads.
[1] I don't need a lecture on the minutia of this, I know that there's a lot more to it in the theory world, I went to graduate school specifically to study functional language verification. I'm keeping it simple.
A monad is not a container! It’s a way of composing functions if they have an effect. You tell how to inject a value in that effect (unit) and how to compose two functions that have that effect and that’s it: programmable semicolons.
Thanks for the feedback, I totally agree that monads are not containers. From an OOP perspective, they have some properties that make them, in some sense, sorta like containers, e.g., they contain a value like the Maybe monad. I still agree that they are not simply containers. I can clarify this in a revision to part 1 soon.
> From an OOP perspective, they have some properties that make them, in some sense, sorta like containers, e.g., they contain a value like the Maybe monad.
Not always! I find this is a big source of confusion; not all monads contain values, sometimes beginners think they can or should "get the value out" of a monad and that tends to lead to writing the wrong kind of code.
In most programming languages the compiler authors go to great lengths to gives intuitive semantics to having one statement follow another, followed by another. This is an organizing principle for thinking about code and for having a program exist with well-defined semantics.
But it's not a very robust one: it's never true of fast programs on realistic hardware, for example (and hasn't been for a long time now). And all the rule bending (-fstrict-aliasing, a bunch of stuff) exists in the tension between the grade-school natural-language paradigm and the reality of computers. I say grade school not to be pejorative, but rather because it is roughly the boundary where written natural languages begin to have interesting tensions around past and future and simultaneous, changing and not changing.
Functors and applicatives and monads and other type classes like these are the source of endless analogies because there isn't an accepted, broadly understood terminology for this "well, it's roughly what would happen if you had a piece of paper and wrote things on it at every statement boundary and scratched off the old ones" (though Turing and von Neumann did formalize this in useful ways, the formalizations just don't generalize well to realistic computers anymore).
Monads are the mathematical object that is forced on you if you want a rigorous way to describe the semantics of program execution in the vicinity of this "common sense" notion. That's really what everyone is dancing around: your program is only well defined with either:
- a big rulebook full of exceptions and edge cases
- a compositional rule strict enough to give some useful predictability but lax enough to admit most useful programs.
It is this rigor/laxity tension, as concerns text on a page and gates on a semiconductor, that gives monads a privileged place in the towers of categories. When I worked on Sigma we were among the earlier adopters of ApplicativeDo, for example, because we wanted a slightly different rigor/laxity tradeoff for performance reasons.
Monads are what happens when you do shift the giant pile of "back of the book" compiler details that describe program execution semantics into a much simpler set of rules, but at the cost of increasing the barrier to entry because you need to know the rules before you can print "hello world".
I used to struggle with understanding the "recipe" metaphor for monads when it comes to lists. But a list (or, really, any collection) as a monad can be thought of as the "discrete nondeterminism monad".
Meaning that every collection is a set of possible inputs to the computation that is provided as the argument to a `flatMap` operation. Each `flatMap`, by definition, returns a new collection of possible outputs for each of the inputs, and each of those collections gets concatenated. Every item in the final output collection represents the result of following some path through the computations, selecting a single item at each step. Importantly, the type of the output of each `flatMap` operation can differ from the input.
You can imagine extending this by assigning probabilities, or making the domain continuous (I think...). These extensions would still be monads, just without being simple collections.
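A sketch of the probability version, as a toy finite-distribution monad (made up here, but the same idea appears in libraries like `probability`):

    newtype Prob a = Prob [(a, Double)]

    instance Functor Prob where
      fmap f (Prob xs) = Prob [(f x, p) | (x, p) <- xs]

    instance Applicative Prob where
      pure x = Prob [(x, 1)]
      Prob fs <*> Prob xs = Prob [(f x, p * q) | (f, p) <- fs, (x, q) <- xs]

    instance Monad Prob where
      Prob xs >>= f = Prob [(y, p * q) | (x, p) <- xs, let Prob ys = f x, (y, q) <- ys]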
It's kind of like how multiplication over whole numbers is repeated addition, but that metaphor becomes less useful for other domains of numbers.
While I can understand the desire to draw a metaphor, there are better approaches than saying, "A List Is a Monad".
The statement as-is breaks pretty much immediately because, while there is a canonical list monad, there isn't a list monad, there are in fact several[1].
There are several more correct ways of phrasing the idea among:
"List can be given a monad instance"
"List forms a monad with pure and bind as defined"
"List is the underlying functor of a monad"
The point is that picking any old list implementation is likely not a monad without the supporting structure.
Will any of these help you learn what a monad is? Likely not. Monadology is a Mary's Room[2] problem; there is a quale, a subjective sensation, to understanding monads that only comes from having experienced them first-hand. Consequently, monad tutorials are the best case against physicalism[3] yet devised.
Although other monads (and things other than monads, such as ZipList) can be made with lists, I think that such a monad would not necessarily be a "list monad". (Your link [1] has several examples of this.) You are right that it does not mean that "a list is a monad" and that your other phrasing is better, but it does not mean that "there isn't a list monad".
To me "a list monad" often subtly implies "one" list monad. I wasn't very clear, but the point I was trying to make was more along the line of singular vs plural. Thanks for pointing out the discrepancy.
I like to think of a monad as a design pattern for constructing new objects where you pass in a sequence of callback functions, one at a time. A monad’s ‘bind’ operation adds another callback function to the end of a sequence.
The monad interface only requires ways to construct objects using callbacks. The ‘bind’ operation takes a callback as an argument, but says nothing about when it’s actually called; it could be immediately, deferred, multiple times, or even never. It’s up to the implementation of the monad, as well as the language, if it’s a lazy language.
This is basically a framework. Like with other frameworks, the principle is “don’t call us; we’ll call you.” Arbitrary computation can happen between callbacks. The framework can do whatever control flow it wants, and this is what often makes frameworks opaque. Hiding control flow is what frameworks do, for better or worse.
So far, none of this is specific to a Monad. The Monad part comes from the type signature of the callback function passed in to flatmap(), which allows ‘bind’ operations to be nested.
Once you know what kind of thing you’re dealing with (frameworks) then you can go into why some frameworks qualify as a monad.
I will die with the crushing embarrassment of not being able describe what a monad is, despite being a so-called (and apparently fake) programmer since the age of 10.
Misunderstanding of Monads is such an interesting phenomenon. Kind of similar to grasping 4D geometry or understanding the difference between a class and an object in OOP.
List can be an instance of a monad, i.e. a monadic type.
I think the trick to understanding monads is to see what benefits the monad interface gives to the types that implement it.
The obsession with trying to explain a monad ultimately stems from conflicting explanations and the inability to differentiate between a mathematical monad and monads implemented in software.
Monads in software are just a standard API for any given type. That's it. There's no magic here. Just implement the standard and you have a monad.
It grinds my gears seeing monad tutorial after tutorial using the wrong metaphors or misleading explanations.
We define a small OOP framework for monads, plus macros which then can succinctly define different kinds of monads, including generating a comprehension macro for each one.
I think that was the article that made me actually try to understand "A monad is just a monoid in the category of endofunctors". Researching "endofunctor" helped significantly more than any of the analogies floating around at the time.
The article sort of danced around what I think is the most natural way List is a "recipe": it's the bounded nondeterminism monad (a `List<T>` is a nondeterministic result; one could implement `List<T> -> T` by selecting an answer uniformly at random from the finite multiset).
Seriously, I've read things about lists and nondeterminism a few times in this thread, and I can't help but wonder if "you guys" (functional programming nerds, maybe?) use the word "nondeterministic" differently than the rest of the world?
If not, I'd love a good explanation about what makes lists non-deterministic, and why we would want that, and why they seem to be perfectly deterministic in imperative programming languages.
In Haskell, List is a Monad as well as a MonadPlus. Since List's Monad instance is used probably 100x more than its MonadPlus instance, I think it makes sense to focus on that.
Let's start with function composition. We know that for any two types A and B we can consider functions from A to B, written A -> B. We can also compose them, the heart of sequentiality. If f: A -> B and g: B -> C then we might write (f;g) or (g . f) as two different, equivalent syntaxes for doing one thing and then the other, f and then g.
I'll posit this is an extremely fundamental idea of "sequence". Sure something like [a, b, c] is also a sequence, but (f;g) really shows us the idea of piping, of one operation following the first. This is because of how composition is only defined for things with compatible input and output types. It's a little implicit promise that we're feeding the output of f into g, not just putting them side-by-side on the shelf to admire.
Anyway, we characterize composition in two ways. First, we want to be clear that composition only cares about the order that the pipes are plugged together, not how you assemble them. Specifically, for three functions, f: A->B, g: B->C, h: C->D, (f;g);h = f;(g;h). The parentheses don't matter.
Second, we know that for any type A there's the "do nothing" identity function id_A: A->A. This doesn't have to exist, but it does and it's useful. It helps us characterize composition again by saying that f;id = id;f = f. If you're playing along by metaphor to lists, id is the empty list.
Together, composition and identity and the rules of associativity (parentheses don't matter) and how we can omit identity really serve to show what the idea of "sequences of pipes" mean. This is a super popular structure (technically, a category) and whenever you see it you can get a large intuition that some kind of sequencing might be happening.
Now, let's consider a slightly different sort of function. Given any two types, what about the functions A -> F B for some fixed type constructor F? F here exists to somehow "modulate" B, annotate it with additional meaning. Having a value of F B is kind of like having a value of type B, but maybe seen through some kind of lens.
Presumably, we care about that particular sort of lens and you can go look up dozens of useful choices of F later, but for now we can just focus on how functions A -> F B sort of still look like little machines that we might want to pipe together. Maybe we'd like there to be composition and identity here as well.
It should be obvious that we can't use identity or composition from normal function spaces. They don't type-check (id_A: A -> A, not A -> F A) and they don't semantically make sense (we don't offhand have a way to get Bs out of an F B, which would be the obvious way to "pipe" the result onward in composition).
But let's say that for some type constructors F, they did make sense. We'd have for any type A a function pure_A: A -> F A as well as a kind of composition such that f: A -> F B and g: B -> F C become f >=> g : A -> F C. These operations might only exist for some kinds of F, but whenever they do exist we'd again capture this very primal form of sequencing that we had with functions above.
We'd again capture the idea of little A -> F B machines which can be plugged into one another as long as their input and output types align and built into larger and larger sequences of piped machines. It's a very pleasant kind of structure, easy to work with.
And those F which support these operations (and follow the associativity and identity rules) are exactly the things we call monads. They're type constructors which allow for sequential piping very similar to how we can compose normal functions.
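For concreteness, a small Haskell sketch of the above, using Maybe as the F and the standard Kleisli composition operator `>=>` (the names `parseInt` and `recip'` are just illustrative):

```haskell
import Control.Monad ((>=>))
import Text.Read (readMaybe)

-- Two "A -> F B" machines in the Maybe monad:
parseInt :: String -> Maybe Int
parseInt = readMaybe

recip' :: Int -> Maybe Double
recip' 0 = Nothing
recip' n = Just (1 / fromIntegral n)

-- Piped together with Kleisli composition, the f >=> g from above:
parseRecip :: String -> Maybe Double
parseRecip = parseInt >=> recip'
-- parseRecip "4"   == Just 0.25
-- parseRecip "0"   == Nothing
-- parseRecip "foo" == Nothing
```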
Thank you so very much. This is the first time monads have made sense to me, and now it's clear why. People who try to explain them usually end up adding all kinds of Haskell minutiae (the top post is another example), rather than actually explaining the concept and why we need it. Your comment is the first time I actually understand what it is, and why it might be useful.
join has the type `m (m a) -> m a`. That's the thing that really shows off the monoidal structure. People normally implement monads in terms of bind, but you can easily define join in terms of bind for any Monad: `join ma = ma >>= id`. So really, as long as you have a lawful instance of Monad written with bind the existence of join is your proof.
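A quick sketch of that relationship (`join` lives in Control.Monad):

```haskell
import Control.Monad (join)

nested :: [[Int]]
nested = [[1, 2], [3]]

flattened :: [Int]
flattened = join nested      -- [1,2,3]; for lists, join is concat

viaBind :: [Int]
viaBind = nested >>= id      -- the same thing, i.e. join ma = ma >>= id
```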
I do not even know what a monoid or an endofunctor is. While I enjoy math, despite not being the best at it, I am confident I never made it this far in my studies. I looked at the Wikipedia definitions, and I am even more confused now.
The number of people who tie themselves into knots to understand this pointless concept is very funny to me. I am 16 years into a successful software engineering career without learning what a monad is, and it never held me back. Turns out I can use lists and optional types and all that jazz without it.
I mean really. Look at posts like this[0]. What does this give you? Nothing, in practical reality. Nothing.
> I am 16 years into a successful software engineering career without learning what a monad is, and it never held me back.
How would you know? That's the classic Blub Paradox.
Being able to write a custom monad and then leverage the vast array of libraries that already exist has helped me deliver functionality to end users quicker, more maintainably, and with lower defect rates. They don't let you do anything that you couldn't do by writing it out longhand. But just like using generic container libraries instead of writing a specific container for every type you want to handle collections of, they're extremely helpful.
> This is what monads being categorically commutative ("a monoid in the category of endofunctors") buys you.

I think you mean associative. Neither monads nor monoids are commutative.
Yep, you need category theory to express something as trivial as the definition of a monad.
That’s what category theory does well: broad collections of trivialities in a unified definition.
> a List<X> is a good generic data structure

For multi-language distributed processing, particularly if JSON is involved, it's worth a try.
To be fair, I write a lot of Java, where Optional is a train wreck in so many ways, not least that it could be null anyway and that you are allocating objects needlessly. I just see people get hypnotized by the awkward code; they also write bugs or scan right past them.
yes, exactly, not sure why your comment was downvoted. Also, generally it's not a cardinality of 0..1, it is `[]` vs `x:xs`, that is, either empty or a multitude of values, where 1 is a specific instance of the multitude (a singleton).
If all monad instances work differently, what is the value of the Monad interface? What kind of useful generic code can one write against the Monad interface?
Related: https://buttondown.com/j2kun/archive/weak-and-strong-algebra...
The more constrained your theory is, the fewer models you have of it and also the more structure you can exploit.
Monads, I think, offer enough structure in that we can exploit things like monad composition (as fraught as it is), monadic do/for syntax, and abstracting out "traversals" (over data structures most concretely, but also other sorts of traversals) with monadic accumulators.
There's at least one other practical advantage as well, that of "chunking".
A chess master is more capable of quickly memorizing realistic board states than an amateur (and equally good at memorizing randomized board states). When we have a grasp of relevant, powerful structures underlying our world, we can "chunk" along them to reason more quickly. People familiar with monads often can hand-wave a set of unknowns in a problem by recognizing it to be a monad-shaped problem that can be independently solved later.
> There's at least one other practical advantage as well, that of "chunking".
> When we have a grasp of relevant, powerful structures underlying our world, we can "chunk" along them to reason more quickly.
This is one thing I've observed about Haskell vs. other languages: it more readily gives names and abstractions to even the minutest and most trivial patterns in software, so that seemingly novel problems can be quickly pattern matched and catalogued against a structure that has almost certainly been seen before.
One example: I want to run two (monadic) computations, and then somehow combine together their results (with some binary operation). Such a trivial and fundamental mode of composition, that seems to lack a name in almost every other programming language. Haskell has a name for this mode of composition, and it's called liftM2.
Never again will you have to re-write this pattern for yourself, leaving yourself open to error, now that you have this new concept in your vocabulary. Other languages will happily let you reinvent the wheel for the umpteenth time, or invent idiosyncratic patterns and structures without realizing that they are just particular applications of an already well-studied and well-worn concept.
Note that this is general enough that you don't need a Monad for this. Applicative is enough (liftA2).
Not if you need to perform two /Monadic/ operations as the commenter stated explicitly.
Every monad is also an applicative and liftA2 does/is the same thing as liftM2. The only reason they both exist was due to Monad being popularized in Haskell earlier than Applicative and thus not having it as a superclass until the Functor-Applicative-Monad Proposal in Haskell 2014. It was obviously correct, but a major breaking change that also got pork barreled a bit and so took a while to land.
Yes you're absolutely right. I had a bit of a brain fart moment here. If they were Applicative operations then you would not be able to use `liftM2`, not the other way around.
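To make the pattern concrete, a small sketch; `liftA2`/`liftM2` combine the results of two computations with a binary operation, and behave identically for any lawful Monad:

```haskell
import Control.Applicative (liftA2)
import Control.Monad (liftM2)

sumMaybes :: Maybe Int
sumMaybes = liftA2 (+) (Just 1) (Just 2)   -- Just 3; Nothing if either fails

twoLines :: IO String
twoLines = liftM2 (++) getLine getLine     -- run two IO actions, combine results
```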
This is my fear when I think about doing actual work in a team using a functional language: that there's an imbalance in understanding between participants in a team, turning every discussion about problem x into the pattern-matching problem y. Like "is this liftM2 or liftA2?"
I've only had a couple of months of experience working with Scala before the team switched to Java. The reasons were many, but one of them was that the external consultant who was most knowledgeable in "thinking with functions" was kind of a dick, making onboarding into a horror show of "go look up yt video x before I can talk about this functionality", with a condescending tone. So within a month he was let go, and then no one in the remaining team really had the courage to keep developing it further. Some thought that they could maybe maintain the current functionality, but the solution was only like half complete. (In the consultant's mind it was complete, because it was so generic you only needed to add a couple of lines in the right place to implement each coming feature.)
That said, I would love to work in a hardcore Haskell project with a real team, one with a couple of other "regular" coders that just help each other out when solving the actual problems at hand.
See for instance the MonadIO typeclass from Haskell [0]. Constraining against this typeclass allows one to write monadic code / do-notation that works with any monad, so long as that monad supports the execution of IO statements.
Now for instance, arbitrary effects (error handling, resource management, etc) can be composed on top of an IO monad (e.g. via monad transformers), and MonadIO code, that is written to only depend on the IO effects, can still be executed in these contexts with more effects layered on top.
[0] https://hackage.haskell.org/package/base-4.21.0.0/docs/Contr...
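A minimal sketch of that idea, assuming the standard transformers package; `logMsg` only demands `MonadIO`, so the same code runs in plain IO or under a state layer:

```haskell
import Control.Monad.IO.Class (MonadIO, liftIO)
import Control.Monad.Trans.State (StateT, evalStateT, modify)

-- Depends only on the ability to run IO, not on the concrete stack:
logMsg :: MonadIO m => String -> m ()
logMsg = liftIO . putStrLn

counting :: StateT Int IO ()
counting = do
  logMsg "incrementing"   -- same function, now with a state effect layered on
  modify (+ 1)

main :: IO ()
main = do
  logMsg "plain IO"
  evalStateT counting 0
```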
Traverse (or foldM) is probably a good start; it's likely the most useful monad-generic (or applicative-generic) function, simple but incredibly powerful and useful.
More generally, Monads essentially support function composition between monadic functions, so you can use them to write code that is agnostic to the monad it runs in. This lets you write, e.g., production code that runs in IO or Async or Maybe, but run it in Identity for unit testing.
Also, monads allow syntax sugar such as do notation, which makes the code clear to work with even when you know exactly which monad you're working in.
Your basic problem is that your programming language can’t express the concept cleanly. You need what’s called “Higher-Kinded Types”.
To give you a concrete example, in C# the three signatures

Func<A, B>, List<A> -> List<B>
Func<A, B>, Task<A> -> Task<B>
Func<A, B>, Func<C, A> -> Func<C, B>

can’t be expressed using a single generalisation. But in Haskell, you can write:

(Functor F) => Func<A,B>, F<A> -> F<B>
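In actual Haskell syntax, that generalisation is `fmap`; a quick sketch (`apply` is just `fmap` renamed for illustration, and IO plays the role of Task):

```haskell
-- One signature covers lists, nullables, functions (and IO for Task):
apply :: Functor f => (a -> b) -> f a -> f b
apply = fmap

onList :: [Int]
onList = apply (+ 1) [1, 2, 3]     -- [2,3,4]

onMaybe :: Maybe Int
onMaybe = apply (+ 1) (Just 41)    -- Just 42

onFunc :: Int -> Int
onFunc = apply (+ 1) (* 2)         -- \c -> c * 2 + 1
```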
One of the biggest things that makes monads hard to understand is that the type systems of most languages can’t represent them. Annoyingly, that includes the “typeless” ones.
C# is a fun example because there is ongoing work to support Higher-Kinded Types in it: https://paullouth.com/higher-kinds-in-c-with-language-ext/
I'm sorry, I'm not sure I understand entirely what you are trying to express by those signatures.
That said, in C#, if the compiler can resolve a method on List<T> called 'Select' that takes a Func<T, S> and returns a List<S>, and a method on Task<T> called 'Select' that takes a Func<T, S> and returns a Task<S>, then `xs.Select(fn)` will compile for both. In other words, I kind of think that Select methods (which can be Select extension methods, of course) amount to functors in C#?
Now write a single function that performs that `Select`, and define a signature for it where `xs` and `fn` are the only input arguments, so that it accepts both `listB` and `taskB` without a compilation error.

In C++ you can get fairly close with templates (although idiomatically you wouldn't write the code this way). Whether that counts as HK types, I'll leave it to others to discuss.
Right, ‘some type that has a Select(Func<T, S>) method available’ is not a thing you can express in C# generic constraints.
But I don’t need a function that does that, the language has syntax for it - I can just do that wherever I need it.
Yes, but you can’t write something that’s generic over “things that support Select” because that’s not expressible via the type system.
So you can’t write a function, then get a version that works on lists, then a version that works on tasks, then a version that works on nullables, then get a version that works on parsers. One of the big secrets of monads is that Haskell users don’t spend very much time thinking about them, while people without monads have to think about them all the time.
> Right, ‘some type that has a Select(Func<T, S>) method available’
not just Select(Func<T, S>), but a Select(Func<T, S>) that preserves its original contextual instance of Select and doesn't leak into a different instance with Select.
> But I don’t need a function that does that
you don't need it yet ;)
> One of the biggest things that makes monads hard to understand is that the type systems of most languages can’t represent them. Annoyingly, that includes the “typeless” ones.
Well, yeah, since a monad is a type, then a "typeless" PL will not be able to represent it.
Here is an analogy. List is a container whose elements can be any type. There are general operations applying to a list, e.g. map, reduce, filter, find, etc. Any data type (int, float, or bool) of list elements can use these same operations regardless.
It’s similar for monads. If you can provide a unit constructor to turn an object value into a monad value and a “map” operation that unwraps a monad value, applies a function to it, and wraps the result, you have monadized the object type. Your objects can participate in any algorithm that operates on monads.
The monad algorithms are the same. The only things different are the unit constructor and the mapping function.
> a “map” operation that unwraps a monad value, applies a function to it, and wraps the result
It can be misleading to think of "unwrapping" a monadic value, since the monad interface does not support it. For example, there's no way to implement a function `List<T> -> T` using monad operations; it requires something entirely separate (e.g. indexing into a List, in this case).
What monads do provide is `join`, which turns nested monadic values into flat ones, like `List<List<T>> -> List<T>`. Even this seemingly trivial example is interesting though, since there are many ways to "flatten" a List<List<T>> into a List<T>: we could concatenate (e.g. depth-first), interleave (e.g. breadth-first), diagonalise (to support infinite lists), operate on chunks at a time (e.g. iterative deepening), etc.
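A sketch of two of those flattening strategies (the `interleave` helper is illustrative, and note that not every flattening yields a lawful monad):

```haskell
import Control.Monad (join)

-- The standard List join concatenates depth-first:
flatCat :: [Int]
flatCat = join [[1, 2], [3, 4]]          -- [1,2,3,4]

-- A fair interleaving is a different flattening of the same structure:
interleave :: [[a]] -> [a]
interleave []         = []
interleave (xs : xss) = weave xs (interleave xss)
  where
    weave []      ys = ys
    weave (x : t) ys = x : weave ys t

flatFair :: [Int]
flatFair = interleave [[1, 2], [3, 4]]   -- [1,3,2,4]
```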
> For example, there's no way to implement a function `List<T> -> T` using monad operations; it requires something entirely separate (e.g. indexing into a List, in this case).
This is called a catamorphism, that is, folding. The opposite transformation is called an anamorphism, that is, generation from a seed value.
You're describing a functor. For monads, you still need bind or join.
> If all monad instances work differently what is the value of the Monad interface? What kind of usefull generic code can one write against the Monad interface.
Code that composes a bunch of operations, for whatever kind of composition those operations need (some people call Monad "programmable semicolons", because it's a lot like sequencing). E.g. traversals of datastructures, or some kind of "do this in a context" operation. Essentially any function you pass a "callback" to should probably be written in terms of monads so that it can accept callbacks that need different kinds of composition beyond just being called at different points in the control flow.
Lots of useful generic code. `mapM` is a version of `map` that works with any Monad, `sequence` works with any Monad, and so on. These are used very frequently.
But the bigger benefit is when syntax sugar like `do` notation comes in. Because it works for any Monad, people can write their own Monads and take advantage of the syntax sugar. That leads to an explosion of creativity unavailable to languages who "lock down" their syntax sugar to just what the language designers intended. In other words, what requires a change to other languages can often be a library in Haskell.
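For instance, a small sketch using `readMaybe`:

```haskell
import Text.Read (readMaybe)

-- mapM specialises to any monad; in Maybe it short-circuits on failure:
allParsed :: Maybe [Int]
allParsed = mapM readMaybe ["1", "2", "3"]   -- Just [1,2,3]

oneBad :: Maybe [Int]
oneBad = mapM readMaybe ["1", "oops"]        -- Nothing

-- sequence turns [m a] into m [a] for any monad:
twoReads :: IO [String]
twoReads = sequence [getLine, getLine]
```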
What can a Haskell monad do that a Python class cannot? 99% of all monads I've seen only facilitate local state manipulation.
It's not about what a monad can do, it's about a property of the language: referential transparency. Haskell has referential transparency, Python doesn't. That's a technical condition but here's a simple consequence: effect typing. In Haskell you can know what possible effects an operation has from its type. Here's an example from my effect system, Bluefin:
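A sketch of what such a signature looks like in Bluefin's style (the body here is invented purely for illustration; see Bluefin's docs for real examples):

```haskell
import Bluefin.Eff (Eff, (:>))
import Bluefin.Exception (Exception, throw)
import Bluefin.State (State, get, put)

-- The type says it all: foo may throw a String and mutate an Int state,
-- and nothing else.
foo :: (e1 :> es, e2 :> es)
    => Exception String e1
    -> State Int e2
    -> Eff es Bool
foo ex st = do
  n <- get st
  if n < 0
    then throw ex "negative state"
    else do
      put st (n + 1)
      pure (n == 0)
```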
We know that `foo` produces a `Bool` and the only effects it can do are to throw a `String` exception and mutate an `Int` state. That's it. It can't yield anything to a stream, it can't make network connections, it can't read from disk. In order to compose these operations together, `Eff` has to be an instance of `Monad`. That's the only way `Monad` turns up in this thing at all.

So, that's what you get in Haskell that Python doesn't give you.
Those annotations create a compile-time-enforced relationship between the types of the function's input and output values. Python doesn't have mandatory type-checking, so it can't do that. But parent wasn't referring to type-checking. They claimed monads have expressive power unavailable in other languages.
It can do it type-safely.
Monad is a weird type that a lot of languages can't properly represent in their type system. However, if you do what dynamically-typed scripting languages do, you can do any fancy thing that Haskell does, because it is weakly typed in this sense. (The sense in which Python is "strongly typed" is a different one.)
What you can't do is not do the things that Haskell blocks you from doing because it's type-unsafe, like, making sure that calling "bind" on a list returns a list and not a QT Window or an integer or something.
This may be a dissenting opinion, but... Haskell tried to avoid mutable state. "Local state manipulation" was not really a thing you could do in Haskell, deliberately. Then someone figured out that you could (ab)use a monad to do that. And because that was the only way, whenever they needed to manipulate state, Haskell programmers reached for a monad.
So it's not "what can a Haskell monad do that a Python class cannot". It's "what can a Python class do in a straightforward way that Haskell has to use a monad for, because Haskell put the programmer in a straitjacket where they couldn't do it without a monad". It's basically a pattern to get around the limitations of a language (at least when it's used for state).
This is not historically how Haskell was developed. Haskell didn't try to "avoid mutable state". Haskell tried to be (and indeed succeeded in being) referentially transparent. Now, it turns out that you can't uphold referential transparency whilst having access to mutable state in the "traditional" way, but you can access mutable state if you introduce monads as a means of structuring your computation.
So, they're certainly not a means of getting around a limitation of the language. If it was just a limitation that limitation would have been lifted a long time ago! It's a means of preserving a desirable property of the language (referential transparency) whilst also preserving access to mutable state, exceptions, I/O, and all sorts of other things one expects in a normal language.
See my comment here for a little bit more about the benefits of referential transparency: https://news.ycombinator.com/item?id=44448127
But historically, wasn't there a fair period of time between Haskell insisting on referential transparency (and therefore not allowing traditional mutable state) and monads being introduced as a way to deal with it? That was my understanding of the history.
And if so, then it seems fair to say at least that monads were a way to get around the limitations imposed by a desirable feature of the language...
> But historically, wasn't there a fair period of time between Haskell insisting on referential transparency (and therefore not allowing traditional mutable state) and monads being introduced as a way to deal with it? That was my understanding of the history.
Yes, although there were solutions in the meantime. I/O was performed in the original version of Haskell through input-output streams and continuation passing style. It turns out that both approaches could have been given monad interfaces if "monad" as an abstraction had been understood at the time, but it wasn't, so they had ad hoc interfaces instead.
> And if so, then it seems fair to say at least that monads were a way to get around the limitations imposed by a desirable feature of the language...
I mean, sort of, but that seems more of a judgement than a fact. Would you say that function calls in C were a way to "get around the limitations imposed by not allowing global jumps"?
In both cases I'd just say they're a useful abstraction that lets you achieve a well-specified goal whilst preserving some desirable language property.
> Would you say that function calls in C were a way to "get around the limitations imposed by not allowing global jumps"?
If C had started with a rule against global jumps, and only figured out function calls later, then yes I would say that.
Everything in here, for a start:
https://hackage.haskell.org/package/base-4.21.0.0/docs/Contr...
https://hackage.haskell.org/package/base-4.21.0.0/docs/Data-...
As I so often do, I find it helpful to analogize Monad to Iterator for questions like these, because it's a typeclass/interface/etc. that people are more used to and does not have that aura of "if I feel like I understand it I must not understand it" attached to it that blocks so much learning.
You extremely often use iterators in a context where there's no way you could usefully slot in just "any" iterator and have some useful code. Suppose you have an iterator that iterates over the links that appear in an HTTP document, and write some code to fetch the HTTP resources so referenced. Well, obviously, "writing against the iterator interface" doesn't do you any good in that case. It's not like you can slot in an iterator that iterates over prime numbers to such code and get anything out of it.
What you can do with the Iterator interface is provide extremely generic tools that can be used against any Iterator, like, take the first x, skip every other one, reverse the iterator list (if finite and for a price), filter the results against a type-specific acceptance function, all kinds of things: https://docs.python.org/3/library/itertools.html These tools do not depend on the details of what the iterator is or how it works, only that it is one. In this case you might even use something as powerful as "give me an iterator and a function to run against the value that comes out of the iterator and I will run it in a parallel map and limit the number of workers and handle errors in this specific way", but all that code has no specific knowledge about URLs or fetching things from the web or anything like that. It just knows it has an iterator and a matching function for the value coming out.
Similarly, "writing to the Monad interface" gives you access to a wide variety of tools that work across all things that implement the monad interface: https://hackage.haskell.org/package/base-4.21.0.0/docs/Contr... What exactly they do depends on the underlying monad implementation. It happens that they turn out to be very useful in practice a lot of the time.
You can also create new compositions of the tools that only pay attention to the interfaces, like, "drop the first x values and then filter the rest" for an iterator, though often the libraries ship with the vast majority of what you need.
Written against the interface specifically, you can only use exactly what is in the interface. But you also have the concrete types to work with, with whatever it is they do. Just as iterators aren't much use when all you know is that you have "something that provides a next value", but are very useful with specific types, monads are the same way.
(You can then later work up to code that allows swapping out which monad you may be using depending on how it is called, but I prefer to start here and work up to that.)
This is a cool example but I think it is missing the perspective of what the interface can abstract. For example if I program a data structure to provide an Iterator I get to use these itertool functions for free no matter how complex the data structure is underneath.
The trouble I have with Monads is that what you get for free doesn't seem very exciting. It feels like I'm stuck in the world of a particular monad like State or Promises, and then to do anything remotely useful you have to bring all of this monad transformer machinery to switch worlds again.
Actually, it sounds to me like you largely have it.
"The trouble I have with Monads is that what get for free doesn't seem very exciting."
I think there's a lot of truth to that, actually.
One of the persistent myths about "monad" is that they somehow "add" to a datatype: that the datatype was able to do X and Y, but now that it's a monad it can do X and Y and Z and M and N. But that's not true. Monad is an interface that can be implemented on things. Once you implement it, you get a lot of little tools, but individually, none of them are necessarily mindblowing, and pretty much by definition it can't be anything the data type couldn't already do.
(Likewise, I'd suggest that what you get with iterator isn't really all that "exciting" either. Useful, oh yes beyond a shadow of a doubt. But it's not exciting. Iterator qua iterator doesn't let you do anything you couldn't do without it.)
The convenience comes in that they're now the same across all monads. mapM does what it does and you no longer need to consult the specific type you are currently using for what it does, and so on for each thing.
If one "removed" monad from Haskell, that is actually what would happen. It's not the Haskell wouldn't be able to do any fewer things. It's just that you'd have to consult each data type for these functions, and they'd be named different things. (And you wouldn't be able to abstract over these operations in different datatypes without basically rebuilding monad in the process.)
I think the standard for "knowing" monad isn't that you can type a bit of a do block and get it to do something, or that you can understand what a particular block of code using list as a monad does; it's when you are completely naturally programming along in Haskell, and you realize "Hey, I've typed `\x -> f x >>= g` out, I bet there must be something to do that in Control.Monad", and you go and look it up and find out that yes, there indeed is, and add >=> to your bag of tricks.
Thanks for the feedback! I didn't expect my post to garner a lot of attention. I am totally OK with rewriting part 1, potentially making it more concise; "wow, this post is super long, monads must be complex!" is the reaction I'd like to avoid.
I can reproduce the double line issue in part 2, this was my mistake and I missed it as part of my editing process. I'll delete part 2 while I make the corrections.
> one separate lesson for every common monad instance.
Right on. This is the "What Are Monads" fallacy: https://entropicthoughts.com/the-what-are-monads-fallacy
Wow, this is a great post, thank you for sharing. It echoes my thoughts exactly.
Yes, and I don't even see the value in generalizing to one Monad concept. It only makes things worse, as then one is tempted to share terminology between different kinds of monads. E.g. there's no reason Maybe's flatMap is called the same as List's flatMap, it might be more readable to call them differently, as some libraries do.
A container monad is just a recipe monad where the recipe for getting the value is "here's the value"
Also, formal mathematical objects aren’t always like real-world objects. What’s a ring? It’s a thing with these properties.
> That's why the pattern is so useful in the first place
How useful, really? Monads don't even universally compose, which is what most people sell the concept for.
Actions compose, types (generally) don’t. So Monad X and Monad Y may not make a valid Monad Z, but Kleisli composition very much exists for actions within a monad.
But the whole promise of monads is precisely that they are a type that can compose.
It basically allows you to pipe successive function calls returning different types by lifting these types into a monad.
Don't get me wrong, that promise is very powerful and in the rare few cases where it works, it unlocks beautiful composition, but the simple truth is that monads are really not that useful outside of Haskell (and I'd say, it's even questionable within).
Monads don't compose between different instances, but monad transformers do.
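A small sketch with the standard transformers package: Maybe-style failure stacked on State stacked on IO.

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT (..))
import Control.Monad.Trans.State (StateT, evalStateT, get, put)

-- Maybe and State don't compose as plain monads, but their
-- transformer versions stack into one monad:
pop :: MaybeT (StateT [Int] IO) Int
pop = do
  stack <- lift get
  case stack of
    []       -> MaybeT (pure Nothing)   -- failure threads through the stack
    (x : xs) -> lift (put xs) >> pure x

main :: IO ()
main = do
  r <- evalStateT (runMaybeT pop) [1, 2, 3]
  print r                               -- Just 1
```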
with this comment you joined a list of seven million devs that have written a monad tutorial :)
I found it helpful. It really clarified a few things for me.
I think "monad" is overloaded, or at least there are varying depths of understanding that are confused.
From a programming perspective, the definition of monads is clear.
You can start using monads immediately, and in a language like Haskell, things click fairly quickly, because monads are used everywhere and taken seriously in that language. But the implications and consequences of this definition for monads aren't always obvious, like how they can be used to structure operations or whatever.
And then there's the category-theoretic business of monads, which you don't need to understand for most programming purposes. That might be a source of confusion. As you more or less say, people have vague, built-up expectations about monads. They expect something heavy and mysterious and doubt they've understood them according to the first definition. But the basic definition is straightforward.
Numbers are like this, too. You understand what a number is (a quantity). You can perform all sorts of operations and calculations using them without knowing number theory or the philosophy of mathematics.
I mean, at its most basic, "monads" are:

- a data type with some hidden information
- a lot of functions that can ignore that hidden information
- some functions that can act (switch|add|mutate|signal|sequence) on that hidden information
people seem to think talking about flatMap somehow makes this intuitive despite the tautological issue of flatMap only making sense if you already know what's going on.
I think this adds more confusion than it removes.
A list is not a monad. A list is a data structure; a monad is more like a "trait" or "interface." So you can define a List type that "implements" the monad interface, but this is not an inherent property of lists themselves. That's the sense in which a list "is a" monad: the OOP sense.
Haskell's List monad provides a model for nondeterminism. But that certainly isn't the only way List could satisfy the monad interface! It was a deliberate choice -- a good choice, possibly the best choice, but a choice nonetheless.
Hi, I completely agree. "A" list isn't inherently a monad, and that is where my metaphor starts to fall apart a bit (my post title furthers this issue.)
I can clarify this earlier in part 1 or 2 instead of in to-be-written part 3.
It's a harmful metaphor and a clickbait title.
Isn’t it the case that for a given functor (on Set) there can only be at most one Monad structure?
Nope. It's that there's only one lawful Functor instance. But Applicatives and Monads can be multiple - lists are the classic example (zip vs cross-product)
Ah right. Didn’t remember it correctly.
The cross-product is not to be confused with the Cartesian product, which is related to the (in this case unfortunately named) "cross join" in SQL. Cross products operate in ℝ³, while Cartesian products are just defined over sets. The standard List monad instance uses the latter.
ah yes my bad! good callout
Can you explain the nondeterminism part of your comment more?
When you bind with (the Haskell definition for) the List monad - `someList >>= \someElement -> ...` it's like you're saying "this is a forking point - run the rest of the computation for each of the possible values of someElement as taken from someList". And because Haskell is lazy, it's (pardon the informality here) not necessarily just going to pick the first option and then glom onto it if it, say, were to cause an infinite loop if the computation were eagerly evaluated; it'll give you all the possibilities, and as long as you're careful not to force ones that shouldn't be forced, you won't run into any problems. Nondeterminism!
A nice demonstration of this is writing a very simple regex matcher with the List monad. A naive implementation in Haskell with the List monad Just Works, because it's effectively a direct translation of Nondeterministic Finite Automata into code.
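A sketch of why that translation is so direct (the transition relation here is a made-up toy):

```haskell
type NState = Int

-- A nondeterministic transition relation: a state and an input symbol
-- map to *all* possible successor states.
step :: NState -> Char -> [NState]
step 0 'a' = [0, 1]
step 1 'b' = [2]
step _ _   = []

-- Bind explores every branch; the list monad does the bookkeeping.
runNFA :: [NState] -> String -> [NState]
runNFA = foldl (\states c -> states >>= \s -> step s c)

-- runNFA [0] "ab" == [2], i.e. the automaton can reach state 2
```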
From automata theory, you might know that nondeterministic automata are represented by a set of states. Deterministic automata are always in a specific state, while nondeterministic ones are in multiple at once. Lists are used for non-determinism in Haskell the same way as a set, mainly because they are easier to implement. But the total order that a list induces over a set is not that relevant.
That's right, and you see this directly reflected in the types of the transition functions for DFAs and NFAs, respectively [0] [1]:

delta_DFA: Q × E -> Q
delta_NFA: Q × E -> P(Q)

... where Q denotes the set of the automaton's states, E its alphabet of input symbols, and P the power set operation. Deterministic automata arrive at a definite, single state drawn from Q, while non-deterministic automata may arrive at a set (~list) of possible states, when given a current state from Q and next input symbol from E.

[0] https://en.wikipedia.org/wiki/Deterministic_finite_automaton
[1] https://en.wikipedia.org/wiki/Nondeterministic_finite_automa...
Determinism, in that given some set of inputs you only ever receive one output.
Non-determinism, in that given some set of inputs it's possible to receive a collection (a list) of possible outputs.
With lists you can express things like all possible pairings of all possible outcomes, or the Cartesian product: `[(x, y) | x <- xs, y <- ys]` ...

... or in more explicit monadic do-notation (see the sketch below), and so on.
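In do-notation, that looks something like this sketch:

```haskell
pairs :: [(Int, Char)]
pairs = do
  x <- [1, 2, 3]
  y <- "ab"
  pure (x, y)
-- [(1,'a'),(1,'b'),(2,'a'),(2,'b'),(3,'a'),(3,'b')]
```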
> A list is not a monad. A list is a data structure; a monad is more like a "trait" or "interface."

You operate with things that are bound to PL notions of specific languages. Instead, consider that a list isn't a data structure, it's an abstraction that defines a multitude of position-ordered values. The multitude of position-ordered values called "list" is a presented entity of "monad", because it can be used as a valid input for a monadic computation defined consistently (in terms of the monad laws).
So I come at this from a math background but I’ve always found these explanations to be overly complex. In the parlance of C++, I think of a monad as a template class T with the following properties:
1. For any class X, there is a canonical method taking an X to a T<X> (a constructor like `T<X> unit(X x)`).

2. For any class X, there is a canonical method collapsing a T<T<X>> to a T<X> (something like `T<X> flatten(T<T<X>> nested)`).

3. For classes X and Y, and any method `Y f(X x)`, there is a corresponding method `T<Y> g(T<X> xs)` that applies f "inside" T.

Here “any type” means any type that is compatible with the template.

And then there are some additional rules which make all these methods compatible, based on the different ways of stacking nested T’s and the “canonical” maps you get. Admittedly there is some confusing accounting here, but I also think most natural ways of constructing the above three requirements are going to satisfy them anyway. For List and Maybe it’s fairly obvious what the above methods are.
I dunno, maybe I have it wrong and someone can correct my understanding.
This is like explaining chess by simply stating the rules. Like sure explaining the rules of chess is important but only knowing the rules provides for nothing more than a superficial understanding of a topic.
Yes, this is how you learn math. It helps to pair definitions with examples and exercises.
I mean if someone is learning chess for the first time, then yes you should start with the rules rather than jumping right into waxing philosophic about positional strategy to show off how smart you are.
I think the point is that a monad is a useful concept _purely_ because of what it _allows_ you to do, and _not_ because of anything syntactical. Those rules that you're glossing over there, the commutative squares, are precisely what then lets us have powerful intuitions about these objects. The type signatures matter a lot less. If, for example, you don't have functoriality (which is false for `std::vector`, for instance, since `std::vector<bool>` is special-cased) you lose the ability to reason powerfully about abstract algorithms.
Thus, explaining the syntax and where the type variables go is explaining the least relevant thing about monads to their power and importance. It's certainly easy to showcase both the syntax and the list and maybe monads, that's part of the "monad tutorial fallacy". Gaining intuition for how to think about monads _in general_ is a lot harder and requires practice. Like, yes, list and maybe are "containers", but is `(->) t` a container? Is `IO`? How do these compose, if at all? What is this about "effect" semantics, "I thought monads were just burritos/containers"? etc. These are the hard, both conceptually and pedagogically, questions. Yes you need to know the syntax to use it in any given programming language, but knowing what scabbard your knife fits in doesn't give you the skills of how to use knife :)
Yeah, I am aware that most teachers, like another commenter said, do teach math just by spitting out the rules, and then people memorize them, and sure, who am I to argue against that. It's not how I learned math, and most people I know who are well versed in math, including those with PhDs (and, working as a quant, I am fortunate enough to work with many), don't really learn things that way either...
When I taught my daughter chess at the age of 5, I did not teach her by laying out the rules and in general when I teach people anything, I don't start by emphasizing rules. Rules are often the least interesting part of any subject and I find rules are much easier to learn once a sufficient amount of motivation has been given first, and then we get to a point where we use rules to crystalize our intuition and express rigorously what we've come to learn. The rules are the end result of learning, not the starting point.
With chess, I taught my daughter by simply playing a simple game where I had a queen, and she had one king and two pawns, and her goal was to get one pawn across the board and my goal was to stop her.
The king and pawns are arranged so that with perfect play she is guaranteed a win, and eventually she learned the pattern that allows her to always win. I then switched it up so she gets the queen and I get the king/pawns but I arrange the king/pawns so that I will always lose if she plays perfectly.
After this, I added more pawns for white and a bishop for black. The motivation for adding these pieces is that to play perfectly and always win as white, you need to learn how pawns define the physical structure of the board, making walls and allowing for control over certain parts of the board, but this also comes at the cost of introducing clutter and making it hard for the king to maneuver.
After these principles are learned, then I introduce the knight, because the knight can jump over walls and isn't as burdened by clutter. I don't just put a knight in the game out of nowhere and make her memorize how it moves... by the time the knight is introduced, it feels natural to want a piece that has better maneuverability over the extra pawns.
And so on so forth... each piece is introduced with some kind of motivation for why it exists. It's not just there for the sake of existing because of some rule that you just need to memorize. It exists because it adds some kind of flavor and richness to the game and you end up absorbing the rules rather than just memorizing them.
Now I'm dwelling a bit on chess here... but the principle applies to programming as well. I don't think if you ever just listed the rules you did as a means of teaching someone about monads anyone would ever come to appreciate why these rules matter or why they should waste any time learning them and in fact I am fairly confident that this kind approach to teaching people functional programming is why it's taken so long for mainstream languages to appreciate these features and adopt them.
So you think that a monad which is an object with a simple definition in category theory is better explained in terms of C++?
I would agree that most of these articles about monads are bad. Just study the definition, then study what you can do with monads, it's not that hard.
Yes.
If you don't already know category theory, learning it is hard. The terms on Wikipedia seem to form a dense graph of links; it's hard to get a foothold of comprehension. For people who already know C++, or are at least familiar with this syntax, this is more useful than describing it in Haskell syntax or category theory. There seems to be a chicken-and-egg problem regarding Haskell and monads. Learning C++ may be harder or easier than category theory; I'm not sure, as I don't understand either one of them. But this syntax makes more sense to me than something expressed in terms of category theory vocabulary.
For people who are more familiar with C++ than category theory, yes.
I think the most intuitive description for a monad I've ever seen is 'flatMappable'.
Context: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Usually articles that describe them in a very Math-y way go above my head. But the definition above was immediately clear (I saw it on HN).
I think this article is a bit more approachable than others I've read, but it still gets very confusing near the end.
Reminds me of an analogy for a monad I once heard. Not sure if it is correct, because I lack the mathematical understanding to verify it.
Anyway, a nested To-Do list is (allegedly) a common form of a monad. Say I am trying to clean my whole house. Well, I could have an item for a task like cleaning the kitchen that has each task I need to do in the kitchen in order for the kitchen to be cleaned. I can do the same for the living room, garage, etc..
However, that is mainly for organization purposes. While I may write the tasks in a nested manner, I could very well write each item as just a long flat list of To-Do tasks, and in reality, all the tasks are effectively completed as if they were one large flat list.
Is that kind of what you mean by 'flatMappable'? Nested To-Do lists being flattened and mapped to one large list? As in, a To-Do list of To-Do lists is just a single, larger To-Do list?
Not exactly. The flatMap() operation itself isn’t going to flatten an arbitrarily nested todo list. It just concatenates the lists that are returned after applying the callback to each item.
The monad part is about what happens if you call flatMap() repeatedly. That is, each call to flatMap() is one action, and these actions can be nested without affecting the result.
Yes, your understanding is correct. It's literally calling map() on an array, followed by a flat().
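In Haskell terms, a sketch of that same decomposition:

```haskell
-- flatMap on lists is map followed by flatten, which is exactly bind:
flatMapList :: (a -> [b]) -> [a] -> [b]
flatMapList f = concat . map f       -- same as concatMap f, same as (=<<) f

doubled :: [Int]
doubled = flatMapList (\x -> [x, x]) [1, 2, 3]   -- [1,1,2,2,3,3]
```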
Sorry, I should have added more context to my post. I edited my post and added a link to the MDN definition of the flatMap function.
Could you elaborate on that? What does 'flatMappable' mean in this context?
This is a good explanation:
https://users.scala-lang.org/t/what-is-a-monad-in-scala/4169
It's like... what would you call all types that have a .write() method? Writable right? What would you call all types that have a .dispose() method? Disposable. What would you call all types that have a .flatMap() method? Monad obviously.
That’s because flatMap() is a good name for a particular list operation, but it’s not generic enough to be a good name for the corresponding monad operation.
I’m not sure there is a good name for the monad operation. Sometimes it’s called ‘bind’ but what does it bind?
I suppose you could call it ‘then’ like when working with Promises.
Scala's new effect library Kyo just uses `map`. See:
https://getkyo.io/#/?id=the-quotpendingquot-type-lt
All pure values are automatically lifted into the Kyo monad, so `map` is effectively `flatMap`.
From the linked docs:
> This unique property removes the need to juggle between map and flatMap. All values are automatically promoted to a Kyo computation with zero pending effects, enabling you to focus on your application logic rather than the intricacies of effect handling.
In the end it makes a lot of sense, I think. What you do is manipulate values inside some wrapper. Whether this wrapper is a monad or not should not matter. Just do something with the value(s) inside, and that's mapping.
something that you can call flatMap() on.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
The way I think of it, monads are a solution to Callback Hell, where you've fallen in love with lambdas, but now you have a nightmarish mess of lambdas in lambdas and lambdas calling lambdas. The monadic functions allow you to create "for comprehensions" aka "do comprehensions" but really, they look like a classic for-each loop. They secretly call the monadic map/flatMap/filter functions.
These comprehensions have a strange bonus feature: you can do nested "loops" all at once, and even add "guards" (little if statements). But again, the comprehension, when "de-sugared", is secretly calling the map/flatMap/filter functions of list1, list2, list3 to get our result. You as the author of a given monad can implement those functions however you want, and they're all 3 lambda based. But notice how the comprehension is flattening those lambdas out! Our callbacks-in-callbacks are much more readable like this.

Without comprehensions, you can still implement monadic functions in any old language (probably in C...?), and they're handy in their own right, but you don't get the flattening-of-callback-hell magic.
After reading your comment, I've made it my mission to understand it. Although I have no idea what you're talking about, you make it sound intriguing.
What parent describes is pretty simple: it's just how the compiler transforms some code.
Do-notation in Haskell, or for-comprehensions in Scala are just syntax sugar for nested calls to `flatMap`, `filter`, and `map`.
I think this here shows it nicely:
https://www.baeldung.com/scala/for-comprehension#for-compreh...
In Scala you can add the needed methods to any type, and then they will "magically" work in for-comprehensions. In Haskell you need to implement a Monad instance, which then does the same trick.
The concrete implementations of these methods need to obey some algebraic laws for the data structure which defines them to be called a monad. But that's pretty much it.
In my opinion all that Haskell in most "monad tutorials" just blurs an in principle very simple concept.
The in-practice relevant part is that a monad can be seen as an interface for a wrapper type: it has a constructor that wraps some value (whether a flat value, some collection, or even a function makes no difference), it does not expose an accessor to this wrapped value, and it has a `flatMap` method defined. It also inherits a `map` method, coming from an interface called "Functor", and it is also an "Applicative", which adds a way of combining two wrapped values into one. (The `combine` method which takes another object of the same type and returns a combination of, again, the same type, with string concatenation as the classical example, belongs to a related interface usually called "Semigroup".)
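As a sketch of the desugaring described above, in Haskell:

```haskell
-- A do-block ("comprehension")...
pairsDo :: [(Int, Int)]
pairsDo = do
  x <- [1, 2, 3]
  y <- [10, 20]
  pure (x, y)

-- ...is just sugar for nested binds with lambdas (the "callback hell"
-- that the comprehension flattens out):
pairsBind :: [(Int, Int)]
pairsBind = [1, 2, 3] >>= \x -> [10, 20] >>= \y -> pure (x, y)
```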
First off, I'm not sure it's even worth it to understand this stuff... Second, someone should be along to slam it soon enough and insist I've missed some gibberishy business that you'll never understand.
With those caveats in mind, here's a more intensive scala-based monad tutorial I made:
https://github.com/zaboople/techknow/blob/master/scala/monad...
But really, don't burn up too much of your short life trying to come to terms with this stuff. There's a reason most languages don't get around to supporting Monads...
It's really worth understanding. I studied Haskell and Scala to write better Python, TypeScript, and Java, and it did help.
The whole thing about JS's Promises becomes way clearer when you see that they are a monad, except for one discrepancy (they auto-flatten themselves). It leads to much shorter and clearer code when doing pedestrian frontend stuff.
To get a minimal idea, you can think of a monad as a parametrized class `M<T>`. Its functioning follows "monad laws" that allow you to do certain things with it, and with the value(s) of `T` wrapped by it. In particular, you can always "map" the values: `map: M<T>, (T -> S) -> M<S>`.

You can always flatten the nested structure: `flatten: M<M<T>> -> M<T>`. This is usually expressed in a different, more fundamental form: `flatMap: M<T>, (T -> M<S>) -> M<S>`. You can notice how that map() thing does the looping over a sequence for you. But Optional<T> is also a monad: there, map() applies the function only when a value is actually present.

As you see, the same map() (and flatMap()) does the condition checking for you, and can be chained safely. You can also notice how chaining of map-like operations does operation sequencing.

Your language, like JS/TS, can add some syntax sugar over it, and allow you to write it as a sequence of statements (async/await over Promises is an example). Promises are not exactly monads though: a Promise<Promise<T>> immediately transforms into a Promise<T>. But other monadic properties are still there.
> immediately transforms into

Minor quibble, "can only be resolved as". The runtime absolutely holds Promise<Promise<T>>'s.
Splitting hairs even further: the .then() returns a resolved value of the inner Promise, not the inner Promise itself, when the outer Promise resolves, so not "immediately" indeed. That's where the flattening occurs, AFAICT.
You might like https://philipnilsson.github.io/Badness10k/escaping-hell-wit... which is a longer version of the same kind of argument.
I wrote a post about a highly related topic here. It may be helpful to you in understanding the parent comment: https://chadnauseam.com/coding/random/how-side-effects-work-...
I expect the author has done this knowingly, but the title is rather painful for a mathematician to read.
A list is not a monad. List is a monad. A list is an algebra for the List monad.
What you said is not correct!
In detail:
* "A list is not a monad" - True!
* "List is a monad" - True. But I think "`List` is a monad" might be clearer.
* "A list is an algebra for the `List` monad." - False!
What's correct is the following:
* "An algebra for the `List` Monad is precisely a monoid."
Sketch of the construction:
(an algebra for the list monad is a monoid): Recall that an algebra is a set/type `A` together with a function `mult: List A -> A` together with some axioms. Think of such a function `mult: List A -> A` as the function that assigns to each list with elements in `A` the product over all those elements. The aforementioned axioms boil down to: (1) `mult([])` is a neutral element and (2) `mult` is an associative binary operation when restricted to two-element lists.
(a monoid is an algebra for the list monad): Same idea. Given a monoid `A`, we define a function `mult: List A -> A` by assigning to a list of elements of `A` the product of those elements, and to the empty list the neutral element. Then we can use associativity and the properties of the neutral element to show that this function constitutes an algebra for the list monad.
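For the Haskell-inclined, a sketch of this direction (assuming the standard Monoid class; the structure map is just mconcat):

    import Data.Monoid (Sum(..))

    -- The structure map of the List-algebra induced by a monoid:
    -- mult [] = mempty, and on two-element lists mult is (<>).
    mult :: Monoid a => [a] -> a
    mult = mconcat

    example :: Sum Int
    example = mult [Sum 1, Sum 2, Sum 3]  -- Sum {getSum = 6}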
You're quite right! Thanks for the correction. I should have said that a list is an element of a free algebra over the List monad, which is less pithy.
Thanks for the feedback! I can definitely rename the post soon as a first step, although this may require rewriting a chunk of the article to more accurately reflect the fact that List is a monad, and not "a" list.
I could make this distinction in part 3 (not written yet) although I want to balance not misleading readers, but not overcomplicating it too early on.
I appreciate you mentioning this because I think it's actually an important point.
Another tutorial which makes monads about 100x more impossible to understand for me by relating them to something else and describing all the weird ways that they are NOT that thing.
IMO if you already have it, this will be a lovely comparison full of insight, but if you haven't then it's full of confusing statements.
IMO what they are is utterly unimportant, except to mathematicians, and what you can do with them is more to the point.
The fact that explanations are so often in Haskell just makes them more unintelligible because you really need to know what problem they solve.
Thanks for the feedback! I'll likely be editing part 1 to include the feedback so far from the commenters as well. If there's a specific statement or analogy that felt especially confusing, please point it out and I'll clarify it in the post.
Sorry for moaning - it's just the usual despair that I feel every time I read a new explanation and fail to understand it. This isn't your fault.
IDK if it adds anything to the article, but `map` is a property of Functors, and every Monad is a Functor. Well, every Monad is an Applicative Functor, and every Applicative Functor is a Functor.
All a way of saying that, yep, you always have `map` when you have a Monad, but you don't need a Monad to have `map`.
If you want an example, we can compare a regular list and a ZipList. A regular list's Applicative instance does a cross product, while a ZipList's Applicative instance does a dot product.
There's no great way to write a Monad instance for ZipList, but it's an Applicative Functor and thus also a Functor, so you can map over it: https://www.mail-archive.com/haskell-cafe@haskell.org/msg572...

For quirky reasons, in Haskell `fmap` is the function implemented for every Functor instance. This is because `map` was already taken by lists. Weird, I know.
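Concretely (ZipList lives in Control.Applicative in GHC's base):

    import Control.Applicative (ZipList(..))

    -- regular list Applicative: every pairing (the "cross product")
    cross :: [Int]
    cross = (+) <$> [1, 2] <*> [10, 20]  -- [11,21,12,22]

    -- ZipList Applicative: pairwise combination (the "dot product")
    pairwise :: [Int]
    pairwise = getZipList ((+) <$> ZipList [1, 2] <*> ZipList [10, 20])
    -- [11,22]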
Realizing that lists are monads is what made monads "click" for me.
When I was first learning Haskell a million years ago, I was completely confused by the concept of a monad; I could, after enough fighting with the compiler, usually get something working, but it was a stochastic guess-and-check process trying to figure out what `IO` actually means. Even the `Maybe` was confusing to me, because I couldn't really figure out how the hell you relate something like "checking for null" with "writing to the disk".
I can't remember where I saw it, probably on the Haskell wiki somewhere, but when they pointed out that List is a monad, and I saw an example of how it worked, I suddenly got it: in a hand-wavey way, a monad is basically just a value with a wrapper context [1], and from a practical perspective that's all it is. In the case of a List, the wrapper context is that there might be 0 or many of those things in there; in the case of a Maybe, that it might exist or it might not; in the case of IO, that it's interfacing with the outside world. Once you abstract away the entire idea of context, you suddenly open up an entire world of reusability.
This is a good tutorial, I will probably be linking it to people if they ever make the mistake of asking about monads.
[1] I don't need a lecture on the minutia of this, I know that there's a lot more to it in the theory world, I went to graduate school specifically to study functional language verification. I'm keeping it simple.
A monad is not a container! It's a way of composing functions that have an effect. You tell how to inject a value into that effect (unit) and how to compose two functions that have that effect, and that's it: programmable semicolons.
Thanks for the feedback, I totally agree that monads are not containers. From an OOP perspective, they have some properties that make them, in some sense, sorta like containers, e.g., they contain a value like the Maybe monad. I still agree that they are not simply containers. I can clarify this in a revision to part 1 soon.
> From an OOP perspective, they have some properties that make them, in some sense, sorta like containers, e.g., they contain a value like the Maybe monad.
Not always! I find this is a big source of confusion; not all monads contain values, sometimes beginners think they can or should "get the value out" of a monad and that tends to lead to writing the wrong kind of code.
And in the article's case, 'have an effect' means 'be a list'.
In most programming languages the compiler authors go to great lengths to give intuitive semantics to having one statement follow another, followed by another. This is an organizing principle for thinking about code and for having a program exist with well-defined semantics.
But it's not a very robust one: it's never true of fast programs on realistic hardware, for example (and hasn't been for a long time now). And all the rule bending (-fstrict-aliasing, a bunch of other stuff) exists in this tension between the grade-school natural-language paradigm and the reality of computers. I say grade school not to be pejorative, but because it is roughly the boundary where written natural languages begin to have interesting tensions around past and future and simultaneous, changing and not changing.
Functors and applicatives and monads and other type classes like these are the source of endless analogies because there isn't an accepted, broadly understood terminology for this "well, it's roughly what would happen if you had a piece of paper and wrote things on it at every statement boundary and scratched off the old ones" (though Turing and von Neumann did formalize this in useful ways, they just don't generalize well to realistic computers anymore).
Monads are the mathematical object that is forced on you if you want a rigorous way to describe the semantics of program execution in the vicinity of this "common sense" notion. That's really what everyone is dancing around: your program is only well defined with either:
- a big rulebook full of exceptions and edge cases
- a compositional rule strict enough to give some useful predictability but lax enough to admit most useful programs.
It is this rigor/laxity tension, as concerns text on a page and gates on a semiconductor, that gives monads a privileged place in the towers of categories. When I worked on Sigma we were among the early adopters of ApplicativeDo, for example, because we wanted a slightly different rigor/laxity tradeoff for performance reasons.
Monads are what happens when you do shift the giant pile of "back of the book" compiler details that describe program execution semantics into a much simpler set of rules, but at the cost of increasing the barrier to entry because you need to know the rules before you can print "hello world".
I used to struggle with the "recipe" metaphor for monads when it comes to lists. But a list (or really, any collection) as a monad can be thought of as the "discrete nondeterminism monad".
Meaning that every collection is a set of possible inputs to the computation that is provided as the argument to a `flatMap` operation. Each `flatMap`, by definition, returns a new collection of possible outputs for each of the inputs, and each of those collections gets concatenated. Every item in the final output collection represents the result of following some path through the computations, selecting a single item at each step. Importantly, the type of the output of each `flatMap` operation can differ from the input.
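A small Haskell illustration of that reading (the coin names are made up):

    -- Each element is one possible world; (>>=) follows every path
    -- and concatenates the outcomes.
    coins :: [Bool]
    coins = [True, False]

    -- all outcomes of flipping two coins and counting heads
    twoFlips :: [Int]
    twoFlips = coins >>= \c1 ->
               coins >>= \c2 ->
               pure (length (filter id [c1, c2]))
    -- [2,1,1,0]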
You can imagine extending this by assigning probabilities, or making the domain continuous (I think...). These extensions would still be monads, just without being simple collections.
It's kind of like how multiplication over whole numbers is repeated addition, but that metaphor becomes less useful for other domains of numbers.
While I can understand the desire to draw a metaphor, there are better approaches than saying, "A List Is a Monad".
The statement as-is breaks pretty much immediately because, while there is a canonical list monad, there isn't *the* list monad; there are in fact several[1].
There are several more correct ways of phrasing the idea among:
"List can be given a monad instance"
"List forms a monad with pure and bind as defined"
"List is the underlying functor of a monad"
The point is that picking any old list implementation is likely not a monad without the supporting structure.
Will any of these help you learn what a monad is? Likely not. Monadology is a Mary's Room[2] problem; there is a quale, a subjective sensation, that comes with understanding monads by experiencing them first-hand. Consequently, monad tutorials are the best case against physicalism[3] yet devised.
1. https://hackage.haskell.org/package/exotic-list-monads-1.1.0...
2. https://en.wikipedia.org/wiki/Knowledge_argument
3. https://en.wikipedia.org/wiki/Physicalism
Although there can be other monads (and stuff other than monads, such as ZipList) made with lists, I think such a monad would not necessarily be a "list monad". (Your link [1] has several examples of this.) You are right that this does not mean "a list is a monad", and that your other phrasings are better, but it also does not mean that "there isn't a list monad".
To me "a list monad" often subtly implies "one" list monad. I wasn't very clear, but the point I was trying to make was more along the line of singular vs plural. Thanks for pointing out the discrepancy.
I like to think of a monad as a design pattern for constructing new objects where you pass in a sequence of callback functions, one at a time. A monad’s ‘bind’ operation adds another callback function to the end of a sequence.
The monad interface only requires ways to construct objects using callbacks. The 'bind' operation takes a callback as an argument, but says nothing about when it's actually called; it could be immediately, deferred, multiple times, or even never. It's up to the implementation of the monad, as well as the language, if it's a lazy language.
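Two stock Haskell cases that show the spread (nothing exotic here; Maybe and lists are in the Prelude):

    -- Maybe: the callback runs once (Just) or never (Nothing)
    never :: Maybe Int
    never = Nothing >>= \x -> Just (x + 1)    -- callback never invoked

    -- List: the callback runs once per element
    thrice :: [Int]
    thrice = [1, 2, 3] >>= \x -> [x, x * 10]  -- [1,10,2,20,3,30]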
This is basically a framework. Like with other frameworks, the principle is “don’t call us; we’ll call you.” Arbitrary computation can happen between callbacks. The framework can do whatever control flow it wants, and this is what often makes frameworks opaque. Hiding control flow is what frameworks do, for better or worse.
So far, none of this is specific to a Monad. The Monad part comes from the type signature of the callback function passed in to flatMap(), which allows 'bind' operations to be nested.
Once you know what kind of thing you’re dealing with (frameworks) then you can go into why some frameworks qualify as a monad.
I will die with the crushing embarrassment of not being able to describe what a monad is, despite being a so-called (and apparently fake) programmer since the age of 10.
There's nothing embarrassing about not knowing the details of a sub-field you're not deeply involved in.
Like, why would you be embarrassed about not understanding the details of MPLS networking if you're not a network engineer who works with MPLS?
Misunderstanding of Monads is such an interesting phenomenon. Kind of similar to grasping 4D geometry or understanding the difference between a class and an object in OOP.
List can be an instance of a monad, i.e. a monadic type.
I think the trick to understanding monads is to see what benefits the monad interface gives to the types that implement it.
The obsession with trying to explain a monad ultimately stems from conflicting explanations and the inability to differentiate between a mathematical monad and monads implemented in software.
Monads in software are just a standard API for any given type. That's it. There's no magic here. Just implement the standard and you have a monad.
It grinds my gears seeing monad tutorial after tutorial using the wrong metaphors or misleading explanations.
Associativity is important, and not something that can be expressed in the API in most languages.
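For reference, the laws in question, written as Haskell equations (the types can be expressed in the API, but these equalities cannot, which is the point):

    -- left identity:   pure a >>= f     ==  f a
    -- right identity:  m >>= pure       ==  m
    -- associativity:   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)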
> Monads in software are just a standard API for any given type. That's it. There's no magic here.
I don't think that helps people understand _why_ monads, though, and that's generally what people are looking for.
https://www.kylheku.com/cgit/lisp-snippets/tree/monads.lisp
[Content Warning: Lisp]
We define a small OOP framework for monads, plus macros which then can succinctly define different kinds of monads, including generating a comprehension macro for each one.
Just for fun, the monads-as-modalities view is missing:
https://hackage.haskell.org/package/Agda-2.6.4.2/docs/Agda-S...
A list is like a burrito
It's just a list in the category of endosequences.
I think that was the article that made me actually try to understand "A monad is just a monoid in the category of endofunctors". Researching "endofunctor" helped significantly more than any of the analogies floating around at the time.
The article sort of danced around what I think is the most natural way List is a "recipe": it's the bounded nondeterminism monad (a `List<T>` is a nondeterministic result; one could implement `List<T> -> T` by selecting an answer uniformly at random from the finite multiset).
I really like my lists to be deterministic.
Seriously, I've read things about lists and nondeterminism a few times in this thread, and I can't help but wonder if "you guys" (functional programming nerds, maybe?) use the word "nondeterministic" differently than the rest of the world?
If not, I'd love a good explanation about what makes lists non-deterministic, and why we would want that, and why they seem to be perfectly deterministic in imperative programming languages.
Somebody is keeping a giant list of Monad tutorials, right?
Yes, and we are approaching peak monad tutorial https://wiki.haskell.org/Monad_tutorials_timeline
So in the golden age of the internet we had the highest rate of monad tutorial growth. Churn them out, bring back the good times
Let's keep up the mathiness. Look how much economics has benefitted from that.
Nit: in Haskell it's a MonadPlus, which IIRC is a monad that supports filtering.
In Haskell, List is a Monad as well as a MonadPlus. Since List's Monad instance is used probably 100x more than its MonadPlus instance, I think it makes sense to focus on that.
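For the curious, the MonadPlus side is what `guard`-based filtering uses (guard comes from Control.Monad):

    import Control.Monad (guard)

    evens :: [Int]
    evens = do
      x <- [1 .. 10]
      guard (even x)  -- MonadPlus: the empty list discards this branch
      pure x
    -- [2,4,6,8,10]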
Monad tutorials are on the rise again.
Let's start with function composition. We know that for any two types A and B we can consider functions from A to B, written A -> B. We can also compose them, the heart of sequentiality. If f: A -> B and g: B -> C then we might write (f;g) or (g . f) as two different, equivalent syntaxes for doing one thing and then the other, f and then g.
I'll posit this is an extremely fundamental idea of "sequence". Sure something like [a, b, c] is also a sequence, but (f;g) really shows us the idea of piping, of one operation following the first. This is because of how composition is only defined for things with compatible input and output types. It's a little implicit promise that we're feeding the output of f into g, not just putting them side-by-side on the shelf to admire.
Anyway, we characterize composition in two ways. First, we want to be clear that composition only cares about the order that the pipes are plugged together, not how you assemble them. Specifically, for three functions, f: A->B, g: B->C, h: C->D, (f;g);h = f;(g;h). The parentheses don't matter.
Second, we know that for any type A there's the "do nothing" identity function id_A: A->A. This doesn't have to exist, but it does and it's useful. It helps us characterize composition again by saying that f;id = id;f = f. If you're playing along by metaphor to lists, id is the empty list.
Together, composition and identity and the rules of associativity (parentheses don't matter) and how we can omit identity really serve to show what the idea of "sequences of pipes" mean. This is a super popular structure (technically, a category) and whenever you see it you can get a large intuition that some kind of sequencing might be happening.
Now, let's consider a slightly different sort of function. Given any two types, what about functions A -> F B, for some fixed type constructor F? F here exists to somehow "modulate" B, annotate it with additional meaning. Having a value of F B is kind of like having a value of type B, but maybe seen through some kind of lens.
Presumably, we care about that particular sort of lens and you can go look up dozens of useful choices of F later, but for now we can just focus on how functions A -> F B sort of still look like little machines that we might want to pipe together. Maybe we'd like there to be composition and identity here as well.
It should be obvious that we can't use identity or composition from normal function spaces. They don't type-check (id_A: A -> A, not A -> F A) and they don't semantically make sense (we don't offhand have a way to get Bs out of an F B, which would be the obvious way to "pipe" the result onward in composition).
But let's say that for some type constructors F, they did make sense. We'd have for any type A a function pure_A: A -> F A as well as a kind of composition such that f: A -> F B and g: B -> F C become f >=> g : A -> F C. These operations might only exist for some kinds of F, but whenever they do exist we'd again capture this very primal form of sequencing that we had with functions above.
We'd again capture the idea of little A -> F B machines which can be plugged into one another as long as their input and output types align and built into larger and larger sequences of piped machines. It's a very pleasant kind of structure, easy to work with.
And those F which support these operations (and follow the associativity and identity rules) are exactly the things we call monads. They're type constructors which allow for sequential piping very similar to how we can compose normal functions.
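In Haskell that composition is the Kleisli arrow (>=>) from Control.Monad. A small sketch with Maybe as the F (safeRecip and safeSqrt are made-up helpers):

    import Control.Monad ((>=>))

    safeRecip :: Double -> Maybe Double
    safeRecip 0 = Nothing
    safeRecip x = Just (1 / x)

    safeSqrt :: Double -> Maybe Double
    safeSqrt x
      | x < 0     = Nothing
      | otherwise = Just (sqrt x)

    -- piped together like ordinary functions; pure plays the role of id
    recipThenSqrt :: Double -> Maybe Double
    recipThenSqrt = safeRecip >=> safeSqrt  -- recipThenSqrt 4 == Just 0.5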
Thank you so very much. This is the first time monads have made sense to me, and now it's clear why. People who try to explain them usually end up adding all kinds of Haskell minutiae (the top post is another example), rather than actually explaining the concept and why we need it. Your comment is the first time I actually understand what it is, and why it might be useful.
You must prove it is a monoid in the category of endofunctors.
join has the type `m (m a) -> m a`. That's the thing that really shows off the monoidal structure. People normally implement monads in terms of bind, but you can easily define join in terms of bind for any Monad: `join ma = ma >>= id`. So really, as long as you have a lawful instance of Monad written with bind, the existence of join is your proof.
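Spelled out (primed names to avoid clashing with Control.Monad's join):

    -- join from bind ...
    join' :: Monad m => m (m a) -> m a
    join' mma = mma >>= id

    -- ... and bind back from join plus fmap
    bind' :: Monad m => m a -> (a -> m b) -> m b
    bind' ma f = join' (fmap f ma)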
> monoid in the category of endofunctors.
I do not even know what a monoid or an endofunctor is. While I enjoy math, despite not being the best at it, I am confident I never made it this far in my studies. I looked at the Wikipedia definitions, and I am even more confused now.
https://bartoszmilewski.com/2016/12/27/monads-categorically/
This is a book chapter, and you need the preceding chapters to grasp it I think. I'm still in the middle of it.
The amount of people who tie themselves into knots to understand this pointless concept is very funny to me. I am 16 years into a successful software engineering career without learning what a monad is, and it never held me back. Turns out I can use lists and optional types and all that jazz without it.
I mean really. Look at posts like this[0]. What does this give you? Nothing, in practical reality. Nothing.
[0] https://news.ycombinator.com/item?id=44446472
> I am 16 years into a successful software engineering career without learning what a monad is, and it never held me back.
How would you know? That's the classic Blub Paradox.
Being able to write a custom monad and then leverage the vast array of libraries that already exist has helped me deliver functionality to end users quicker, more maintainably, and with lower defect rates. They don't let you do anything that you couldn't do by writing it out longhand. But just like using generic container libraries instead of writing a specific container for every type you want to handle collections of, they're extremely helpful.
funny that you call it pointless then admit you never learned what it is