Anyone who has ever experimented with Conway’s Game of Life or any other cellular automaton (CA) knows they can be very fun to play with. You can easily lose several hours in e.g. George Maydwell’s awesome Modern CA site; his CA evolution lab is particularly cool. If you’d like to write your own CA code, there are some good efficiency hints on Tim Tyler’s CA page. (As a side note, Tim also has a good — but NSFW, due to boobie pic — rant on why the entertainment industry should be destroyed.)
While CA are fun, I’ve never been thoroughly convinced about their practical value; it always seemed like you could do the same thing with less effort. However, with all this GPGPU hysteria that’s brewing, perhaps it’s worth looking at them again soon. CA certainly fit fairly nicely onto a GPU.
Since I talked about pathfinding in my previous post, I thought that showing how CA could apply to pathfinding might be an interesting topic here, thus this post.
Solving a traditional maze
As my very first example, let’s look at how a CA can solve a traditional maze. By “traditional” I mean a maze with one entry and one exit, with a single path from entry to exit, and a path that’s one cell wide. Like this:
By considering the 4-neighborhood of a cell (i.e. the NSEW neighbors) we can use a CA rule that fills in a cell if it has 3 or 4 wall neighbors. What this does is effectively fill in the dead ends of the maze from the “bottom”, one cell at a time, until every dead-end path has been filled. The figures below show the first iteration of this approach as well as the end result, after 24 iterations:
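For concreteness, here’s a minimal sketch of this dead-end-filling rule (my own throwaway code, not anything from the figures; the grid encoding with ‘#’ for wall and ‘.’ for free is just a convention I picked, and I treat out-of-bounds cells as open so entrances on the maze border don’t get sealed):

```python
def step(grid):
    """One synchronous CA update. grid is a list of equal-length strings,
    '#' = wall, '.' = free. A free cell becomes a wall if 3 or 4 of its
    NSEW neighbors are walls; out-of-bounds counts as free/open."""
    h, w = len(grid), len(grid[0])
    out = [list(row) for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] != '.':
                continue
            walls = sum(1 for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w
                        and grid[y + dy][x + dx] == '#')
            if walls >= 3:
                out[y][x] = '#'
    return [''.join(row) for row in out]

def solve(grid):
    """Iterate until nothing changes; the surviving free cells are the path."""
    while True:
        nxt = step(grid)
        if nxt == grid:
            return grid
        grid = nxt
```

On a tiny maze like `["#.###", "#...#", "#.#.#", "###.#"]` (entry at the top, exit at the bottom), the single dead-end cell is filled on the first iteration and the entry-to-exit path survives.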
As should be clear, this CA fully solves the maze; the cells that remain unfilled constitute the path from start to goal. This method of solving a maze is well-known and is sufficiently obvious that it has been reinvented lots of times. (I rediscovered it myself sometime in the mid-to-late 80s I think, and I know others did too.) But we can do more interesting things with a CA than solving a traditional maze. As a small step forward, let’s look at a “fat” maze next.
Solving a “fat” maze
Here I’ve started with the same maze as before, but removed a lot of the walls and made some other minor changes too. This leads to a maze where the paths are wider than one cell across, or — simply put — where the paths are fat:
To fill in dead-end paths in the fat maze we need a slightly different rule. The particular rule I’ll use here gives birth to a wall cell if, out of the 8-cell neighborhood we have:
- three wall edge neighbors (i.e. NSE, NSW, NEW, or SEW), or
- an edge neighbor (e.g. W) that 8-connects to a corner cell on the opposite edge (e.g. NE or SE) via a neighboring edge cell (here, N or S, respectively) — unless an opposite-edge corner cell exists but there is no neighboring edge cell to connect to it, in which case no new wall cell is birthed.
(Sorry, I know that’s a mouthful, but I can’t be bothered to draw a picture to illustrate. If that wasn’t clear, plot it out on graph paper, or hire me at my usual consulting rate of $350/hr for an elaboration!)
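For what it’s worth, here’s one way to code up my reading of that rule. The “unless” clause I’m interpreting as saying an opposite-edge corner only counts when the full edge–edge–corner chain is all walls, so treat this as a sketch of one plausible reading, not the definitive rule; the cell encoding and out-of-bounds handling are the same conventions as before:

```python
# 8-neighborhood offsets, keyed by compass direction (my encoding).
DIRS = {'N': (-1, 0), 'S': (1, 0), 'E': (0, 1), 'W': (0, -1),
        'NE': (-1, 1), 'NW': (-1, -1), 'SE': (1, 1), 'SW': (1, -1)}

# For each edge neighbor, the (connecting edge, opposite-edge corner) pairs:
# e.g. W 8-connects to NE via N, and to SE via S.
CHAINS = {'W': [('N', 'NE'), ('S', 'SE')], 'E': [('N', 'NW'), ('S', 'SW')],
          'N': [('W', 'SW'), ('E', 'SE')], 'S': [('W', 'NW'), ('E', 'NE')]}

def fat_step(grid):
    """One synchronous update of (my reading of) the fat-maze rule."""
    h, w = len(grid), len(grid[0])

    def is_wall(y, x, d):
        dy, dx = DIRS[d]
        ny, nx = y + dy, x + dx
        # out-of-bounds treated as open, as before
        return 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == '#'

    out = [list(row) for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] != '.':
                continue
            edges = [d for d in 'NSEW' if is_wall(y, x, d)]
            # clause 1: three (or four) wall edge neighbors
            birth = len(edges) >= 3
            # clause 2: an edge wall chaining to an opposite-edge corner
            # wall through the connecting edge wall
            for e in edges:
                if any(is_wall(y, x, conn) and is_wall(y, x, corner)
                       for conn, corner in CHAINS[e]):
                    birth = True
            if birth:
                out[y][x] = '#'
    return [''.join(row) for row in out]
```

Under this reading, a free cell with walls at W, N, and NE is filled (W chains to NE via N), while one with walls at only W and NE is not, because the connecting N cell is free.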
Running this rule on the fat maze gives the following results for iterations 1, 5, and 12 (where iteration 12 is the solved state):
What’s cool is that it looks like we got the shortest path solution here. I haven’t played around with this rule enough (nor thought things through sufficiently) to know under what conditions we’ll get a shortest path, or indeed if the rule will always return a solution (as opposed to a set of cells containing the solution). If anyone has any insights, please add a comment!
We can also apply the same rule as on the fat maze to an arbitrary pathfinding problem, where we have a non-maze environment and arbitrary start and goal positions (here shown in red and green):
Running the rule on this problem gives the following results on iterations 1, 5, and 12 (with 12 again being the last state; the stable state is detected on iteration 13):
Here, as expected, we don’t get an actual path on reaching the stable state. What we have achieved though is a pruning of the search space for a subsequent A* search (or similar), pruning away cells that we know do not need to be searched to find a path from start to goal.
Of course, this CA preconditioning step is today more expensive than running a full search over the whole space, but with things like GPGPU it might one day make sense to precondition the search space in this way for a subsequent pathfinding step. We can also amortize the CA cost by inserting multiple starts and goals (for multiple agents) before evaluating the CA to a steady state, so in some circumstances it might be worthwhile doing something like this already today.
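As a sketch of that amortization idea (my own construction, not code from the post; the endpoint-protection detail is my assumption about how you’d keep the CA from filling in the agents’ cells), seed every agent’s start and goal as protected cells and run the rule to a fixed point once. For brevity this uses the simple 3-or-4-wall-neighbors rule rather than the fat-maze rule:

```python
def prune(grid, endpoints):
    """Run the 3-or-4-wall-neighbors rule to a fixed point, but never fill
    the protected start/goal cells of any agent. grid is a list of strings,
    '#' = wall, '.' = free; endpoints is a set of (y, x) cells."""
    h, w = len(grid), len(grid[0])
    while True:
        out = [list(row) for row in grid]
        for y in range(h):
            for x in range(w):
                if grid[y][x] != '.' or (y, x) in endpoints:
                    continue
                walls = sum(1 for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= y + dy < h and 0 <= x + dx < w
                            and grid[y + dy][x + dx] == '#')
                if walls >= 3:
                    out[y][x] = '#'
        nxt = [''.join(row) for row in out]
        if nxt == grid:
            return grid
        grid = nxt
```

One pass over the grid then serves every agent: each one runs its A* (or whatever) over the surviving free cells.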
BTW, there are rules other than the one I gave above that prune even more of the search space, but they take away free cells that would be needed for a shortest path. But, then again, we don’t always care about a shortest path, do we now.