Let me propose an interesting theoretical variant of the shortest path problem. You have a directed graph with source and target vertices, and every edge has a cost to traverse it. However, not every cost is in good old American dollars—some are in alternative currencies, and you don’t know what the exchange rates will be until the day of your trip. You can’t tell for sure how much it’s going to cost, but there’s still a significant amount of precomputation that you can do, and we’re going to employ some fun algebra to do it.

So here’s the setup. You have a directed graph and each edge $e$ has a cost $c(e) \in M$. You can add costs together, and you can compare costs, so we have an ordered monoid $M$. To be precise, this is a monoid equipped with a partial order $\le$ such that if $a \le b$ then $ca \le cb$ and $ac \le bc$, for all $c$. I know I said you add costs together but I’m going to be calling the monoid operation multiplication from now on, so, like, I’m sorry for deceiving you for five seconds. The monoid identity is gonna be called $1$, too, just so we’re clear.

Anyway, the task is to produce the set of all cheapest costs, which is to say, the set of all costs that are the cheapest in some linearization of the monoid $M$: some total order extending the partial order $\le$. Such a total order is a characterization of prices as they are on the day of travel. The hope is that in a simple enough monoid, antichains would be reasonably small, so flattening an antichain to a final cost is not as complicated as the preprocessing that is done on the graph now.

Notice that we are not requesting very much of the monoid of costs: in the currency example it would be very reasonable to stipulate that $M$ be commutative ($ab = ba$), positive ($1 \le a$ for all $a$), etc. (Actually, positivity is an interesting condition to keep in mind as we proceed, especially if the digraph contains any directed cycles.)

The most important thing to not request is that the order be total: we are not guaranteed to be able to compare any two monoid elements. In fact, equality is a perfectly valid partial order—every monoid is an ordered monoid under “$a \le b$ iff $a = b$”—and this represents the worst case of knowing absolutely nothing about the relative costs of anything, even compared to the trivial cost $1$.

So what are we to do? The only thing we intuitively can do: carry around all the possible costs that we see, only throwing out those for which we find a provably better cost for the same task. Then, I don’t know, I guess you can run some graph algo and just keep all the incomparable results you see and throw away the big ones and it probably works out, right?

Well, I mean, yeah, it does… but for some (arguably) subtle and (debatably) interesting reasons.


So let’s start by making this precise. When solving the graph problem, instead of carrying one cost $c \in M$, we want to carry a subset of costs $C \subseteq M$. And furthermore, whenever two costs are comparable—comparable by the order in the monoid and also algorithmically permitted to be compared by having the same source and target—we keep the lesser. A more expensive subpath can’t possibly lead to a less expensive final path, so this checks out. So, we won’t be encountering just any old subsets of the costs, we will be dealing with antichains.

An antichain is a set $A$ of pairwise incomparable elements, which is to say, for all $a, b \in A$, if $a \le b$ then $a = b$. The set of finite antichains on a poset like $M$ will be denoted by $\mathfrak{A}(M)$, because I haven’t used a fraktur letter in a while and I’m horny. If we have two antichains, we can merge them together and take the best of both, by considering the minimal elements of their union,

$$A \wedge B = \min(A \cup B).$$
I am using the symbol $\wedge$ for this operation because it makes $\mathfrak{A}(M)$ into a semilattice: it is associative, commutative, and idempotent ($A \wedge A = A$). Formally, it makes no difference whether the semilattice operation is a join or a meet, but in our case we will intuitively be considering it a meet, because we are interested in minimal costs. As such, the partial order on $\mathfrak{A}(M)$ induced by this meet is $A \le B$ iff $A \wedge B = A$, iff for all $b \in B$ there exists $a \in A$ with $a \le b$. This has the somewhat unfortunate side effect that subsets of antichains (which are still antichains) are larger than their supersets: if $A \subseteq B$ then $B \le A$. Oh well. Not everything is sunshine and rainbows.
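If you’d like to see this in code, here is a minimal sketch of the meet, assuming (purely for illustration) that costs are pairs of nonnegative integers under componentwise addition and componentwise order, a toy two-currency monoid; the names `leq` and `meet` are mine, not anything standard.

```python
# A minimal sketch of the antichain meet, assuming costs are pairs of
# nonnegative integers (a toy two-currency monoid) under componentwise order.

def leq(a, b):
    """The componentwise partial order on cost pairs."""
    return all(x <= y for x, y in zip(a, b))

def meet(A, B):
    """A ∧ B: the minimal elements of A ∪ B."""
    union = set(A) | set(B)
    return {a for a in union
            if not any(b != a and leq(b, a) for b in union)}

# Dominated costs are thrown out; incomparable costs all survive.
print(meet({(1, 3)}, {(2, 2), (1, 4)}))  # {(1, 3), (2, 2)}
```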

Now, if you’re me, you might be tempted to do an aside and talk about if this semilattice is a lattice. And though it is indeed very tempting, it’s a little long to get into the details, so I will just spoil the ending: it is a lattice,

$$A \vee B = \min(\uparrow\! A \cap \uparrow\! B),$$

where $\uparrow\! A = \{m \in M : a \le m \text{ for some } a \in A\}$ is the upward closure,
and the proof that it works is just ugly casework.

In any case, whether or not $\mathfrak{A}(M)$ has a lattice join doesn’t actually matter, because we only care about minimizing costs. What does matter is the operation on antichains induced by the monoid operation. We know what to do with the singleton antichains—$\{a\} \cdot \{b\} = \{ab\}$—and since every antichain is the meet of the singletons of its elements, we can extend this to all antichains by distributivity:

$$A \cdot B = \bigwedge_{a \in A} \bigwedge_{b \in B} \{ab\} = \min\{ab : a \in A,\, b \in B\}.$$
This is where we rely on the fact that we defined $\mathfrak{A}(M)$ to be the finite antichains. Up until this point, we could do things for all antichains, but if the semilattice is not complete then this infinite meet may not be defined. You can’t even dodge this by externally declaring it’s just the minimal elements of the setwise product, because there’s no guarantee it has any, let alone is adequately described by them.¹
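Continuing the toy two-currency sketch from above, the induced product looks like this; `minimal` and `mul` are again invented names.

```python
# The induced product on antichains, in the same two-currency sketch:
# A · B is the minimal elements of the setwise product, where the monoid
# operation is componentwise addition of cost pairs.

def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def minimal(S):
    return {v for v in S if not any(w != v and leq(w, v) for w in S)}

def mul(A, B):
    """A · B = min{ab : a in A, b in B}."""
    return minimal({tuple(x + y for x, y in zip(a, b))
                    for a in A for b in B})

# Two incomparable totals both survive the product.
print(mul({(1, 0), (0, 2)}, {(0, 1)}))  # {(1, 1), (0, 3)}
```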

This package of data is an example of an idempotent semiring. Recall that a semiring is a set equipped with two monoid operations, a commutative addition with identity $0$ and a not-necessarily-commutative multiplication with identity $1$, a further stipulation that multiplication distributes over addition, and the bookkeeping condition that $0$ is absorbing: $0 \cdot a = a \cdot 0 = 0$. Of course, every ring is a semiring, and the most famous example not arising from a ring is the natural numbers $\mathbb{N}$.

A semiring is (additively) idempotent if $a + a = a$ for all $a$. A particularly famous example is the tropical semiring $(\mathbb{R} \cup \{\infty\}, \min, +)$, where the multiplication is the usual real addition extended to have $\infty$ as an absorbing element. (Its fame comes from tropical geometry, a hot topic in algebraic geometry as of late.) Idempotence means the addition is a semilattice operation, and as such defines a partial order on the semiring: $a \le b$ iff $a + b = a$.² Furthermore, because of distributivity, this order is a monoid order on the multiplicative monoid.

Exercise. Verify that for any idempotent semiring, $a \le b$ iff $a + b = a$ is a semilattice ordering of the multiplicative monoid (a brute-force spot check in code follows the list). That is, show that $\le$ is:

  • reflexive: $a \le a$;
  • antisymmetric: $a \le b$ and $b \le a$ implies $a = b$;
  • transitive: $a \le b$ and $b \le c$ implies $a \le c$;
  • a meet-semilattice order: $c \le a$ and $c \le b$ iff $c \le a + b$;
  • a monoid order: $a \le b$ implies $ca \le cb$ and $ac \le bc$.
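This is not a proof, of course, but here’s a brute-force spot check of all five claims in the tropical semiring (addition is $\min$, multiplication is $+$), over a random sample of integers:

```python
# A spot check of the exercise in the tropical semiring: addition = min,
# multiplication = ordinary +. Not a proof, just a sanity pass over a sample.
import itertools
import random

add = min  # tropical addition

def leq(a, b):
    return add(a, b) == a  # a ≤ b iff a + b = a

xs = [random.randint(-50, 50) for _ in range(30)]
for a, b, c in itertools.product(xs, repeat=3):
    assert leq(a, a)                                              # reflexive
    if leq(a, b) and leq(b, a): assert a == b                     # antisymmetric
    if leq(a, b) and leq(b, c): assert leq(a, c)                  # transitive
    assert (leq(c, a) and leq(c, b)) == leq(c, add(a, b))         # semilattice
    if leq(a, b): assert leq(c + a, c + b) and leq(a + c, b + c)  # monoid order
```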

Let’s quickly take stock of our idempotent semiring $(\mathfrak{A}(M), \wedge, \cdot)$. (The whole package is sketched in code after the list.)

  • $\mathfrak{A}(M)$ is the set of finite antichains of our ordered monoid $M$.
  • $\wedge$ takes the minimal elements of the union of its two operands, so it’s associative, commutative, and its identity element is the empty antichain $\emptyset$.
  • $\wedge$ can be interpreted as the meet of a semilattice, so it determines a partial order $\le$: the order it induces on the singleton antichains agrees with the monoid order on $M$, and the order it induces on subsets of any fixed antichain agrees with the superset order (if $A \subseteq B$ then $B \le A$).
  • $\cdot$ takes the minimal elements of the setwise product of its operands, so it’s associative, and its identity element is $\{1\}$, the singleton containing the identity of $M$. $\cdot$ is commutative iff the monoid is.
  • $\emptyset$ is an absorbing element for $\cdot$: $\emptyset \cdot A = A \cdot \emptyset = \emptyset$.
  • $\emptyset$ is the greatest element of the poset of antichains—representing a literally priceless cost—and if $M$ is positive then $\{1\}$ is the least element.
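Here’s the whole package in one sketch, still assuming the toy two-currency monoid; `Antichain`, `ZERO`, and `ONE` are names I made up, and the asserts at the bottom check a few of the bullet points above.

```python
# The antichain semiring in one place, for the illustrative two-currency
# monoid (pairs under componentwise + and componentwise order). A sketch,
# not a library.

from dataclasses import dataclass

def _leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def _minimal(S):
    return frozenset(v for v in S if not any(w != v and _leq(w, v) for w in S))

@dataclass(frozen=True)
class Antichain:
    elems: frozenset

    def __add__(self, other):  # semiring addition: the meet ∧
        return Antichain(_minimal(self.elems | other.elems))

    def __mul__(self, other):  # semiring multiplication: minimal setwise products
        return Antichain(_minimal({tuple(x + y for x, y in zip(a, b))
                                   for a in self.elems for b in other.elems}))

ZERO = Antichain(frozenset())           # the empty antichain: priceless
ONE = Antichain(frozenset({(0, 0)}))    # {1}: singleton of the monoid identity

A = Antichain(frozenset({(1, 3), (2, 2)}))
assert A + A == A                       # additive idempotence
assert A + ZERO == A and A * ONE == A   # identities
assert A * ZERO == ZERO                 # ∅ absorbs under multiplication
```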

Now that we have our costs in a cute little arithmetical package, we can unleash it on the problem. Recall from the setup: $G = (V, E)$ is a directed graph, and $c : E \to M$ is an assignment of costs to the edges. The cost of a path $p = e_1 e_2 \cdots e_n$ is the product of the costs along that path, $c(p) = c(e_1)\, c(e_2) \cdots c(e_n)$.

Recall also that the goal is to find all possibly cheapest paths from some source $s$ to some target $t$, subject to the indeterminacy of “some pairs of costs in $M$ may not be comparable”. In $\mathfrak{A}(M)$, we still are not able to compare costs, but if they come from paths that have the same start and end points, we can combine them without much thought, by simply taking their meet in $\mathfrak{A}(M)$. By construction, we know how to interpret $c$ in $\mathfrak{A}(M)$, as singleton sets are antichains.

Immediately we can observe that the cheapest path from $s$ to $t$ will only definitively exist if every directed cycle costs at least as much as the identity, that is, every directed cycle $\gamma$ satisfies $c(\gamma) \ge 1$. If not, then there is some linearization of the monoid order—some cost conversion eventuality on the day of your trip—where the more you travel around that cycle, the cheaper the trip will be. So in the following analysis I will silently ignore this possibility, because its treatment is exactly the same as in the shortest path problem.

A second observation is that if we have a cheapest path from to , then every subpath is also a cheapest path between its own start and endpoints. That is to say, this problem is amenable to dynamic programming in precisely the same way as the shortest path problem is. Some of you reading may already see where this is going, but for everyone else, I will take it one step at a time.

First of all, let’s look at cheapest paths of length at most one. I claim it’s pretty easy to see that it’s solved by the following function $D^{(1)} : V \times V \to \mathfrak{A}(M)$, defined as

$$D^{(1)}(u, v) = \begin{cases} \{1\} & \text{if } u = v, \\ \bigwedge_{e : u \to v} \{c(e)\} & \text{otherwise;} \end{cases}$$

of course $\{1\}$ is the least possible cost, representing free transit, which I am implicitly assuming is the cost of simply staying at a given vertex. If the digraph has at most one edge between any two vertices, then the big meet is not necessary, so long as it is acknowledged that nonedges are equivalent to edges whose cost is $\emptyset$.

Now, the cheapest paths of larger lengths are a breeze:

$$D^{(m+n)}(u, v) = \bigwedge_{w \in V} D^{(m)}(u, w) \cdot D^{(n)}(w, v).$$
And since $D^{(1)}(v, v) = \{1\}$, we have $D^{(n+1)} \le D^{(n)}$ (paths of length at most $n$ are already accounted for in $D^{(n)}$), which means you just need to repeat this calculation until you are satisfied that there are no longer paths that need to be considered.

Now, this may look kind of complicated, but you have probably seen an algorithm of this form before, though possibly in an unexpected way. You see, in the semiring $\mathfrak{A}(M)$, a computation of the form $\bigwedge_{w} A(u, w) \cdot B(w, v)$ is a dot product. We can actually view $D^{(1)}$ as a $V \times V$ matrix with coefficients in the semiring $\mathfrak{A}(M)$, and then $D^{(n)}$ is just the matrix power $(D^{(1)})^n$! The addition is unorthodox compared to your run-of-the-mill linear algebra, sure, but in the arithmetic of $\mathfrak{A}(M)$ it is perfectly natural, and indeed you can view $D^{(1)}$ as an obvious adjacency matrix for $G$ with coefficients from $\mathfrak{A}(M)$.³
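To make this concrete, here is a sketch of the adjacency matrix and one semiring matrix multiplication, still over the toy two-currency monoid; the graph, vertex names, and edge costs are invented for illustration.

```python
# End-to-end sketch: the adjacency matrix over the antichain semiring and one
# matrix multiplication. Costs are pairs (two currencies), ordered componentwise.

def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def minimal(S):
    return frozenset(v for v in S if not any(w != v and leq(w, v) for w in S))

def add(A, B):   # semiring addition: the meet ∧
    return minimal(A | B)

def mul(A, B):   # semiring multiplication: minimal setwise products
    return minimal({tuple(x + y for x, y in zip(a, b)) for a in A for b in B})

ONE, ZERO = frozenset({(0, 0)}), frozenset()

def mat_mul(X, Y):
    """Matrix product over the semiring: entry (u, v) is the meet over w of X[u][w] · Y[w][v]."""
    out = {}
    for u in X:
        out[u] = {}
        for v in X:
            acc = ZERO
            for w in X:  # a dot product in the antichain semiring
                acc = add(acc, mul(X[u][w], Y[w][v]))
            out[u][v] = acc
    return out

# D^(1): {1} on the diagonal, edge costs off it, ∅ for nonedges.
V = ["s", "a", "b", "t"]
cost = {("s", "a"): (1, 0), ("s", "b"): (0, 2),
        ("a", "t"): (1, 0), ("b", "t"): (0, 1)}
D = {u: {v: ONE if u == v
         else frozenset({cost[u, v]}) if (u, v) in cost
         else ZERO
         for v in V} for u in V}

D2 = mat_mul(D, D)   # cheapest costs over paths of length at most 2
print(D2["s"]["t"])  # the incomparable optima (2, 0) and (0, 3)
```

After one multiplication, the source-to-target entry already holds two incomparable candidate costs, which is exactly the antichain of possibly cheapest totals.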

One final observation about this computation. Because of the “idempotence” property $A \wedge A = A$, overshooting isn’t really a bad thing, so you can repeatedly square the matrix instead of naively multiplying on copies of $D^{(1)}$, taking full advantage of the “exponentiation by squaring” principle. I don’t think this gets you any serious computational gain if you actually track the timespace complexity of building and computing on antichains, but it’s pretty cool, in my opinion.
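As a sketch of that idea, reusing `mat_mul` from the previous snippet: keep squaring until the matrix stops changing, which (granted the stabilization discussed above) reaches the answer in logarithmically many multiplications.

```python
# Exponentiation by squaring as a fixpoint loop. Because overshooting is
# harmless, once squaring changes nothing we have the stable matrix of
# cheapest antichains. `mat_mul` is the semiring matrix product from above.

def fixpoint_by_squaring(D, mat_mul):
    while True:
        D2 = mat_mul(D, D)
        if D2 == D:  # no new cheapest costs at any path length: done
            return D
        D = D2
```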


To me, this solution is satisfying. To some of you, it might not be.

Perhaps you imagined a stricter variant of the problem, where the task is to produce a list of paths that enacts each of the cheapest costs. Depending on who you are, this is either obviously equivalent or obviously inequivalent. I am of the former position, but regardless of whether or not you agree with me, the procedure to accommodate you is standard. In fact, this whole matrix-over-idempotent-semirings approach is essentially an algebraic recasting of the Floyd–Warshall algorithm, so that discussion may be your next destination. I myself am not particularly interested in that line of study, as it lacks a certain elegance—yes, I think $\mathfrak{A}(M)$ is elegant—and is more of a necessary evil sort of math.

This topic is also a good springboard to talk about the use of idempotent semirings in combinatorial optimization. Since the late 19th century, semirings have been trying to find a way to break into mainstream algebra. While they have largely failed to uproot the stranglehold that rings have on algebra, they persist, and by the 1970s or so they finally started appearing in the work of applied mathematicians and computer scientists, who noted how much could be cast in their language. Idempotent semirings are especially valued, precisely for their ability to be viewed as an operation with a compatible and pleasant partial order. A min-plus–style semiring like $(\mathbb{R} \cup \{\infty\}, \min, +)$, for example, allows one to perform optimization, like in this ‘blog post, while a max-plus semiring like $(\mathbb{R} \cup \{-\infty\}, \max, +)$ is more handy for recasting scheduling-type problems.

The study is only about as old as computer science is, and generally lies neglected out of what I can only assume is a distaste for unnecessary algebra, but it is not without its textbooks. I rather like the texture and mouthfeel⁴ of Golan, Semirings and their Applications, but I suppose it would be dishonest not to mention the newer (and not extremely worse typeset) book by Gondran and Minoux. Full disclosure, I haven’t read them all the way through (I actually never learned how to read, I just like the pretty pictures) so I can’t guarantee they’re not a waste of time, but I mean, it’s semiring theory. You’ve already admitted you don’t value your time by reading this ‘blog post all the way through, so why stop after dipping your toes in?

  1. The meet-semilattice of antichains almost coincides with the complete lattice of the upper sets. They coincide when $M$ satisfies the descending chain condition (DCC)—that no sequence $a_1 > a_2 > a_3 > \cdots$ can continue indefinitely—which at first blush sounds like a tough guarantee. However, it follows from the simple assumption that the monoid is positive, that is, $1 \le a$ for all $a$. On any graph, the set of costs that appear on edges is a finite set, and hence gives a finitely generated submonoid of $M$ which inherits the order, and that alongside positivity gives you the preconditions for Higman’s Lemma from universal algebra. The conclusion is that the order is a well-quasi-order, which is equivalent to DCC plus the additional fact that all antichains are finite! So in a sense, upper sets are what we should really be looking at, and antichains are simply the computational approximation to them, and the only time they don’t work as an approximation is when the antichains are infinite anyway. 

  2. Most semiring literature defines the partial order the other way, $a \le b$ iff $a + b = b$, because addition feels more like a join-y operation. This also has the benefit of making the additive identity the bottom of the semilattice, which matches the notational conventions of order theory. However, this would require an unintuitive flip somewhere in the setup of our cost minimization problem, so for exposition’s sake I will turn it around here. Still, I didn’t lie to you about the tropical semiring, the min-plus formulation is probably more common, and I don’t have an explanation for that, algebraic geometry is just weird. 

  3. Depending on your persuasion, you could call it either luck or predestination that semirings are precisely the objects which can be coefficients of matrices, and an idempotent semiring is a natural way of recasting a semilattice-ordered monoid. 

  4. 20. Sorry, I couldn’t think of a better place to put a weed joke.