Structure as Control Flow
From combining values to combining effects #
We’ve spent this series climbing a ladder of power. We started with Functors (changing values), moved to Applicatives (combining independent effects), and tackled Monads (chaining dependent effects). Then we took a sidestep into Semigroups and Monoids to understand how values combine.
Now, we bring it all together. We rarely care about the individual elements of a data structure in isolation; we care about what they produce when combined.
Foldable answers: “How do I collapse a structure into a summary?” Traversable answers: “How do I run effects across a structure while keeping its shape intact?”
Foldable #
A Foldable is any data structure that you can walk through to reduce it to a single value. While we often think of “folding” in terms of lists, it is actually about the shape of the container. If you can traverse the structure and combine its elements, it is Foldable.
foldMap #
As we saw in the Monoid post, the most powerful way to think about folding is through foldMap.
\[
\text{foldMap} :: \text{Monoid } m \implies (a \to m) \to f \space a \to m
\]
This function is a three-step process:
- Map: turn each element a into a Monoid m
- Combine: squash all those m values together using (<>)
- Default: if the structure is empty, return mempty
foldMap requires a Monoid because without mempty, the function would be undefined for empty structures.
A Semigroup lets you combine values; a Monoid lets you combine values even when you have none
The minimal definition of Foldable relies on either foldMap or foldr:
class Foldable t where
foldMap :: Monoid m => (a -> m) -> t a -> m
foldr :: (a -> b -> b) -> b -> t a -> b
If you define foldMap, Haskell can automatically figure out how to do a foldr. Conversely, if you define foldr, Haskell can derive foldMap. Once you have one of them, you get operations like sum, product, any, all and length for free. We typically choose to implement foldMap, as it is the most direct expression of “map each element into a Monoid, then combine”; the two are interderivable, so either suffices.
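To make the interderivability concrete, here is a sketch of how foldr can be recovered from foldMap using the standard Endo monoid trick (the name foldrViaFoldMap is ours, not part of the class):

```haskell
import Data.Monoid (Endo (..))

-- Each element a is mapped to a function (f a :: b -> b); the Endo
-- monoid composes these functions left to right, and applying the
-- composed function to the initial accumulator z performs a right fold.
foldrViaFoldMap :: Foldable t => (a -> b -> b) -> b -> t a -> b
foldrViaFoldMap f z t = appEndo (foldMap (Endo . f) t) z

main :: IO ()
main = print (foldrViaFoldMap (+) 0 [1, 2, 3 :: Int]) -- prints 6
```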
Example #
Imagine we own a warehouse and we store our inventory in some Tree structure, where Nodes represent categories and leaves are items with a price.
data Tree a = Leaf a | Node (Tree a) (Tree a)
instance Foldable Tree where
foldMap f (Leaf x) = f x
foldMap f (Node l r) = foldMap f l <> foldMap f r
Because we implemented foldMap we can now summarize our warehouse in dozens of ways without writing any new logic. We just change the Monoid we use. Let’s assume warehouse :: Tree Int.
- Total value: foldMap Sum warehouse
- Most expensive item: foldMap Max warehouse
- Do we have an item priced over $1000?: foldMap (Any . (> 1000)) warehouse
- Generate a flat list of all items: foldMap (\x -> [x]) warehouse, or just toList warehouse
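As a runnable sketch (the warehouse values below are made up for illustration), here is the Tree from above summarized with different Monoid wrappers; note that Max from Data.Semigroup only has a Monoid instance for Bounded types, which is why it works for Int here:

```haskell
import Data.Monoid (Sum (..), Any (..))
import Data.Semigroup (Max (..))

data Tree a = Leaf a | Node (Tree a) (Tree a)

instance Foldable Tree where
  foldMap f (Leaf x)   = f x
  foldMap f (Node l r) = foldMap f l <> foldMap f r

warehouse :: Tree Int
warehouse = Node (Leaf 400) (Node (Leaf 1200) (Leaf 250))

main :: IO ()
main = do
  print (getSum (foldMap Sum warehouse))              -- 1850
  print (getMax (foldMap Max warehouse))              -- 1200
  print (getAny (foldMap (Any . (> 1000)) warehouse)) -- True
  print (foldMap (\x -> [x]) warehouse)               -- [400,1200,250]
```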
In essence, Foldable is a for loop that aggregates. In imperative programming, if you want to sum a list, you initialize a variable and loop through the collection:
total = 0 # Identity (mempty)
for x in warehouse: # Foldable walk
total += x # Combination (<>)
total = foldMap Sum warehouse
The “magic” here is that Foldable abstracts away the loop boilerplate. You don’t have to manage the iterator or the mutable variable. You just provide “how to combine” (the Monoid) and the “what to walk” (the Foldable).
Foldable laws #
Foldable instances are expected to obey a small set of laws that ensure folding is consistent and predictable, regardless of how the structure is implemented internally.
Intuitively, the key idea is this: folding a structure should depend only on the order of its elements, not on how the fold is written.
- Folding via foldMap should produce the same result as folding via foldr using the corresponding Monoid.
- Using foldMap with Const recovers a pure fold that ignores values and only combines structure.
These laws ensure that if two Foldable structures contain the same elements in the same order, they will always fold to the same result, even if their internal shapes differ (lists, trees, maps, etc.).
Without these laws, functions like sum, length, or toList could behave inconsistently across different containers, defeating the purpose of Foldable.
Why Foldable is not enough #
Foldable is great for summaries, but it has a major limitation: it destroys the structure. If you fold a Tree into an Int, the tree is gone. You can summarize the values, but you can’t preserve the structure they form. This is where Traversable comes in: it preserves structure while running effects.
Traversable #
A Traversable is a structure you can walk through, applying an effect to each element and collecting the results in the same shape.
It turns a structure of values into a structure of effects.
The definition #
The definition of a Traversable is as follows:
class (Functor t, Foldable t) => Traversable t where
traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
sequenceA :: Applicative f => t (f a) -> f (t a)
Because it uses Applicative (specifically the (<*>) operator), it can walk through a structure, run an effect at each stop and “stitch” those effects together. Because it doesn’t use Monad, the effects can’t change the shape of the container mid-way through. The list stays the same length; the tree keeps the same branches.
In order for t to be Traversable it also has to be a Functor and Foldable.
- Why Functor? To traverse, you apply a function (a -> f b) to the elements. You need the Functor machinery to reach inside the structure and turn the t a into a t (f b).
- Why Foldable? After you’ve turned your structure into a “structure of effects” (t (f b)), you need a way to walk through it and “collapse” those multiple effects into one single, combined effect. That walking and combining is exactly what a fold does.
Example #
Before we look at sequenceA and traverse, let’s look back at our example. Imagine we need to update the prices in our warehouse by fetching the latest exchange rates from an API. This is an effectful operation (IO).
Our structure is already Foldable, but it still needs to be a Functor so we can map over the elements. Let’s do that and implement Traversable.
instance Functor Tree where
fmap f (Leaf x) = Leaf (f x)
fmap f (Node l r) = Node (fmap f l) (fmap f r)
instance Traversable Tree where
traverse f (Leaf x) = Leaf <$> f x
traverse f (Node l r) = Node <$> traverse f l <*> traverse f r
Just like with Foldable, implementing either sequenceA or traverse is enough: sequenceA can simply be implemented as sequenceA = traverse id, and traverse as traverse f = sequenceA . fmap f. Typically we implement traverse, as it is more general and expressive (unless you have a specific optimization).
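These default definitions can be sketched directly (primed names to avoid clashing with the Prelude):

```haskell
-- sequenceA is traverse with the identity effect-producing function...
sequenceA' :: (Traversable t, Applicative f) => t (f a) -> f (t a)
sequenceA' = traverse id

-- ...and traverse is "map first, then flip the structure inside-out".
traverse' :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
traverse' f = sequenceA . fmap f

main :: IO ()
main = do
  print (sequenceA' [Just 1, Just 2 :: Maybe Int]) -- Just [1,2]
  print (traverse' (\x -> [x, x]) (Just 3 :: Maybe Int))
```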
So now, imagine we have a function that fetches a price from a database or an API: it takes an ID and returns a new price, IO Float.
fetchPrice :: String -> IO Float
fetchPrice itemId = do
putStrLn $ "Fetching price for: " ++ itemId
-- do some API call
return 6.70
Given a warehouse of IDs, we can now turn it into a warehouse of prices.
warehouseIds :: Tree String
warehouseIds = Node (Leaf "item-1") (Leaf "item-2")
warehousePrices :: IO (Tree Float)
warehousePrices = traverse fetchPrice warehouseIds
main :: IO ()
main = do
tree <- warehousePrices
print tree -- Output: Node (Leaf 6.70) (Leaf 6.70)
When we run our program, Haskell enters the Node and looks at the first Leaf "item-1", running fetchPrice "item-1", which produces an IO action yielding 6.70. It then moves to the second Leaf and does the same. Because this is a traversal, the Applicative instance of IO is used to “stitch” these two actions together, so we don’t get a Tree (IO Float) but an IO (Tree Float). The results are collected in one combined IO action.
In an imperative language, you would have to:
- initialize a new empty tree (or mutate the old one)
- manually recurse down the branches
- handle the asynchronous nature of the API
- carefully put the result back in the right slots so the tree shape isn’t ruined
With Traversable the structure defines the loop. You just provide the effects and the typeclass handles the reconstruction.
traverse #
If fmap changes a value inside a box, traverse runs a function that creates its own box for every element, and then merges all these boxes into one.
\[
\text{traverse} :: \text{Applicative } f \implies (a \to f\space b) \to t\space a \to f\space (t \space b)
\]
- Map: we apply fetchPrice to every Leaf, so we get a Tree (IO Price)
- Sequence: we “pull” the IO outwards; instead of a tree containing many small IO actions, we get one big IO action that contains the whole Tree Price
If we used a standard loop or converted the tree to a list to fetch the prices, we would lose the “structure” of our warehouse. By using traverse, we get the best of both worlds. Every price is updated via an IO call and the final result is a Tree, with every item in its original position.
Applicative is the key constraint #
traverse requires Applicative, not Monad, and this is crucial:
- all effects are independent
- the structure is known upfront
- the order of elements doesn’t change the final “shape” of the container
So in essence, Traversable is Applicative lifted to entire data structures.
The Applicative constraint does not imply sequential execution. The actual behavior depends on the chosen Applicative instance. For example, an Applicative that runs effects concurrently (such as Concurrently from the async package) allows traverse to execute effects in parallel while still preserving the structure of the data.
sequenceA #
What if we already performed the mapping from earlier? Suppose we used fmap with fetchPrice by mistake and ended up with a tree of pending requests:
pendingRequests :: Tree (IO Float)
pendingRequests = fmap fetchPrice warehouseIds
We can’t read the prices yet, because they are still locked inside their individual IO actions. We want to flip the context, and this is exactly what sequenceA is for:
readyToRead :: IO (Tree Float)
readyToRead = sequenceA pendingRequests
While traverse maps a function and flips the structure, sequenceA is what you use when your structure is already full of effects.
\[
\text{sequenceA} :: \text{Applicative } f \implies t\space (f\space a) \to f\space(t\space a)
\]
sequenceA traverses the structure and applies the id function to each value.
Think of sequenceA as the “wait for all” operator (similar to Promise.all() in JavaScript). Say you have a list of pending tasks, [IO "Page 1", IO "Page 2"], but instead you want one task that gives a list of results, IO ["Page 1", "Page 2"]:
pages :: IO [String]
pages = sequenceA [download url1, download url2]
It also works as a “fail-fast” switch. If you have [Just 1, Nothing, Just 3], sequenceA will see that Nothing and immediately collapse the whole result into Nothing.
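A minimal demonstration of this fail-fast behavior with Maybe:

```haskell
main :: IO ()
main = do
  -- One Nothing collapses the whole result.
  print (sequenceA [Just 1, Nothing, Just 3 :: Maybe Int]) -- Nothing
  -- All Just values flip into a single Just containing the list.
  print (sequenceA [Just 1, Just 2, Just 3 :: Maybe Int])  -- Just [1,2,3]
```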
List of functions #
One of the coolest (and often overlooked) features of Traversable is what happens when you use it with the Function context.
Because traverse is generic, we can use it to turn a structure of values into a structure of potential values. Specifically, if we have a structure and a function that takes an element and returns a function, we can “flip” the entire structure into one giant function.
Suppose we have a list of configuration keys ["host", "port"] and a function that looks up a key in a config file: lookup :: String -> (Config -> String)
keys = ["host", "port"]
lookup :: String -> (Config -> String)
lookup = ...
valuesOfKeys :: Config -> [String]
valuesOfKeys = traverse lookup keys
By using traverse, we’ve transformed a list of keys into a single function that takes one config and returns a list of all its values. We didn’t manually pass the Config to every single lookup; Traversable handled the plumbing of “threading” that config through the entire list for us.
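Here is a self-contained sketch of the idea. Config is assumed to be a simple association list, and lookupKey (a hypothetical name, chosen to avoid clashing with the Prelude’s lookup) defaults to an empty string so that its result type is Config -> String:

```haskell
import Data.Maybe (fromMaybe)

type Config = [(String, String)]

-- Look up a key, defaulting to "" when it is missing.
lookupKey :: String -> (Config -> String)
lookupKey k cfg = fromMaybe "" (lookup k cfg)

-- traverse in the ((->) Config) Applicative threads the same config
-- through every lookup and collects the results as a list.
valuesOfKeys :: Config -> [String]
valuesOfKeys = traverse lookupKey ["host", "port"]

main :: IO ()
main = print (valuesOfKeys [("host", "localhost"), ("port", "8080")])
-- prints ["localhost","8080"]
```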
If you find yourself converting a structure to a list just to run effects and then rebuilding it, you probably want Traversable.
Traversable laws #
Traversable comes with stronger laws, because it combines structure and effects. These laws guarantee that traversing a structure is not just convenient, but also lawful.
At a high level, the laws say:
- Identity: traversing with the Identity Applicative is the same as a pure fmap. Traversal does not invent effects.
- Composition: traversing with two Applicatives composed together is the same as traversing once with the composed effect.
- Fold consistency: traversing with Const recovers foldMap.
Together, these laws ensure that traversal preserves shape, meaning, and effect structure.
Order / left-to-right semantics #
Most Traversable instances process elements from left to right. This ordering matters when effects are observable, such as IO, exceptions, or nondeterministic Applicatives. While the laws don’t mandate a specific order, consistent ordering is part of what makes a Traversable instance usable and intuitive.
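With IO, this left-to-right order is directly observable (a small sketch using traverse_ from Data.Foldable, which discards the results):

```haskell
import Data.Foldable (traverse_)

main :: IO ()
main = traverse_ print [1, 2, 3 :: Int] -- prints 1, then 2, then 3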
Real-world intuition #
Traversable is one of the most used typeclasses in Haskell. Think of these scenarios:
- Validation: you have a list of user inputs and a function validate :: Input -> Validation [Error] User; using traverse you can get Validation [Error] [User]
- API calls: you have a list of IDs and want to fetch them independently; traverse fetchUser userIds gives you IO [User]
- Parsing: walking a configuration tree and attempting to parse each leaf.
Conclusion #
Foldable answers the question “what does this structure mean?” It collapses a shape into a summary using Monoid. Traversable answers “what happens when I run effects across the shape?” It lets you apply Applicative effects to every element while preserving the original structure.
Use Foldable when you only need a summary (sum, max, count). Use Traversable when you need to run effects (validation, IO, API calls) but keep the container shape intact. And remember: Traversable deliberately relies on Applicative: effects are independent and the shape is fixed. If your effects must change the shape, you need a different abstraction (usually Monad-level reasoning).
In short: Foldable collapses; Traversable preserves shape while running effects. Use the weakest abstraction that fits your domain: it will make your code simpler, faster to reason about, and often easier to optimize.
Tying the series together #
In this series of blog posts we’ve seen three orthogonal axes of abstraction:
- Effects: Functor -> Applicative -> Monad (how computations compose)
- Combination: Semigroup -> Monoid (how values combine, and whether there’s an “empty”)
- Structure: Foldable -> Traversable (how we collapse or traverse shapes)
These axes intersect: foldMap uses a Monoid inside a Foldable; traverse uses an Applicative on a Traversable.
The guiding rule of the series: use the weakest abstraction that expresses your intent; it gives you more guarantees and simpler reasoning.