Memory-efficient pathfinding

Third time’s a charm they say, and as I’ve talked about pathfinding twice before (in Don’t follow the shortest path! and Aiding pathfinding with cellular automata) I thought I’d charm everyone with a third pathfinding article. This time I’ll talk about how we can reduce the memory consumption of a commonly used pathfinding algorithm by about an order of magnitude!

Those of you wacky enough to have taken EE instead of CS might know breadth-first search applied to pathfinding on a grid as Lee’s algorithm [Lee61]. We also sometimes see this referred to as the wavefront or flood-fill pathfinding algorithm. Odds are you’re quite familiar with it.

An application of Lee’s algorithm is shown below, with the depth of the breadth-first search written into the cells (shown on the left, with start and goal highlighted) along with an optimal path as returned from the algorithm (on the right, with the path in golden yellow):

Lee’s algorithm

The optimal path is obtained by starting at the goal cell and stepping to the neighboring cell with the smallest value; from that cell we again step to its neighboring cell with the smallest value, and so on until we reach the start cell.
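
To make the two phases concrete, here is a minimal C++ sketch of Lee’s algorithm on a 4-connected grid. The grid representation is my own assumption for illustration (a flat W×H array where the hypothetical constants EMPTY and FILLED mark free and blocked cells, and every visited cell stores its BFS depth); it is not the code used to generate the figures above.

    // Sketch of Lee's algorithm: flood-fill storing BFS depths, then backtrack.
    #include <queue>
    #include <vector>

    struct Cell { int x, y; };

    const int W = 16, H = 16;              // grid dimensions (assumed)
    const int EMPTY = -1, FILLED = -2;     // cell states other than a stored depth

    bool inBounds(int x, int y) { return x >= 0 && x < W && y >= 0 && y < H; }

    // Fill phase: breadth-first search from the start, writing the depth of each
    // reached cell into the grid. Returns false if the goal is unreachable.
    bool leeFill(std::vector<int>& g, Cell start, Cell goal) {
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        std::queue<Cell> q;
        g[start.y * W + start.x] = 0;
        q.push(start);
        while (!q.empty()) {
            Cell c = q.front(); q.pop();
            if (c.x == goal.x && c.y == goal.y) return true;
            int d = g[c.y * W + c.x];
            for (int i = 0; i < 4; ++i) {
                int nx = c.x + dx[i], ny = c.y + dy[i];
                if (inBounds(nx, ny) && g[ny * W + nx] == EMPTY) {
                    g[ny * W + nx] = d + 1;      // label the cell with its BFS depth
                    q.push({nx, ny});
                }
            }
        }
        return false;
    }

    // Backtrack phase: from the goal, repeatedly step to a neighbor whose depth
    // is exactly one smaller, until the start cell (depth 0) is reached.
    std::vector<Cell> leeBacktrack(const std::vector<int>& g, Cell goal) {
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        std::vector<Cell> path = {goal};
        Cell c = goal;
        while (g[c.y * W + c.x] > 0) {
            int d = g[c.y * W + c.x];
            for (int i = 0; i < 4; ++i) {
                int nx = c.x + dx[i], ny = c.y + dy[i];
                if (inBounds(nx, ny) && g[ny * W + nx] == d - 1) {
                    c = {nx, ny};
                    path.push_back(c);
                    break;
                }
            }
        }
        return path;                             // path is in goal-to-start order
    }

Before calling leeFill, the grid would be initialized to EMPTY everywhere except the obstacle cells, which are set to FILLED.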

For an MxN grid one would think we would need to store an integer per cell, holding log2(M*N) bits, in order to represent the cell state. But one would think wrong! In fact, only two(!) bits per cell are needed, as noted by Akers [Akers67].

Akers first noted that we don’t really need to have labels unique to every cell; we only need labels that are unique between a cell and its preceding and succeeding neighboring cells along any possible path. As such, it would be possible to simply label the cells 0, 1, 2, 0, 1, 2, … along the BFS path. As we also need to represent empty and filled cells, this encoding would need five values, and therefore 3 bits, to encode all cell states.

From there, Akers observed that we don’t actually need three path labels; we can instead label the cells 0, 0, 1, 1, 0, 0, 1, 1, …, as even with only two values a cell’s preceding and succeeding neighbors along a path always carry different labels, which is all the backtracking step needs. Along with the state labels for empty and filled cells, this encoding uses only 2 bits to encode all states!
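
As a quick sanity check of that claim, the snippet below (my own illustration, using the arithmetic form of the labeling given further down) verifies that for any cell on a path, the cells immediately before and after it always receive different labels, which is what lets us walk the path back without ambiguity.

    #include <cassert>

    // Label in the repeating sequence 0, 0, 1, 1, ... for a cell at BFS depth d.
    int label(int d) { return (d >> 1) & 1; }

    int main() {
        // The predecessor (depth d-1) and successor (depth d+1) of any cell
        // always carry different labels, so the backtrack can never mistake
        // the cell ahead on the path for the cell behind it.
        for (int d = 1; d < 1000; ++d)
            assert(label(d - 1) != label(d + 1));
    }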

The grid below shows Akers’ algorithm applied to the same configuration as before:

Akers’ algorithm

Here I’ve considered the start node as being labeled 0. In general, we can turn an implementation of Lee’s algorithm into Akers’ algorithm by storing ((D >> 1) & 1) into the grid cell instead of the current search depth D.

Extracting the optimal path once the goal node is reached is done analogously to Lee’s algorithm, but now we step to the neighboring cell that has the appropriate value in the reverse of the sequence 0, 0, 1, 1, 0, 0, 1, 1, ….
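
Putting the last two paragraphs together, here is a hedged C++ sketch of Akers’ version, under the same hypothetical flat-array grid assumptions as the Lee sketch above (the small helpers are repeated so it stands alone). Each cell now stores one of only four states: the path labels 0 and 1, plus EMPTY and FILLED. The BFS queue carries the depth during the fill, but the grid never does; the backtrack only needs the depth at which the goal was reached to know where it is in the 0, 0, 1, 1, … sequence.

    #include <queue>
    #include <utility>
    #include <vector>

    struct Cell { int x, y; };

    const int W = 16, H = 16;                        // grid dimensions (assumed)
    enum : unsigned char { EMPTY = 2, FILLED = 3 };  // labels 0 and 1 mark visited cells

    int label(int d) { return (d >> 1) & 1; }        // 0, 0, 1, 1, 0, 0, 1, 1, ...

    bool inBounds(int x, int y) { return x >= 0 && x < W && y >= 0 && y < H; }

    // Fill phase: the same BFS as in Lee's algorithm, but only the 1-bit label
    // derived from the depth is written to the grid. Returns the depth at which
    // the goal was reached, or -1 if it is unreachable.
    int akersFill(std::vector<unsigned char>& g, Cell start, Cell goal) {
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        std::queue<std::pair<Cell, int>> q;          // (cell, depth)
        g[start.y * W + start.x] = label(0);
        q.push({start, 0});
        while (!q.empty()) {
            auto [c, d] = q.front(); q.pop();
            if (c.x == goal.x && c.y == goal.y) return d;
            for (int i = 0; i < 4; ++i) {
                int nx = c.x + dx[i], ny = c.y + dy[i];
                if (inBounds(nx, ny) && g[ny * W + nx] == EMPTY) {
                    g[ny * W + nx] = label(d + 1);
                    q.push({{nx, ny}, d + 1});
                }
            }
        }
        return -1;
    }

    // Backtrack phase: from the goal at depth d, step to any neighbor carrying the
    // label expected at depth d-1, and repeat until depth 0 (the start) is reached.
    // Neighbors at depth d+1 can never match, since label(d-1) != label(d+1).
    std::vector<Cell> akersBacktrack(const std::vector<unsigned char>& g, Cell goal, int d) {
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        std::vector<Cell> path = {goal};
        Cell c = goal;
        while (d > 0) {
            for (int i = 0; i < 4; ++i) {
                int nx = c.x + dx[i], ny = c.y + dy[i];
                if (inBounds(nx, ny) && g[ny * W + nx] == label(d - 1)) {
                    c = {nx, ny};
                    path.push_back(c);
                    break;
                }
            }
            --d;
        }
        return path;                                 // goal-to-start order
    }

Note that the backtrack is handed the depth returned by the fill phase; nothing else from the search needs to survive, and the grid itself never holds more than 2 bits per cell.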

Traditionally Lee’s algorithm (and Akers’ modification) was used to route paths on printed circuit boards. Today there are a lot of published algorithms for this routing problem, and anyone pretending to be an AI expert for games should probably be familiar with not just Akers’ algorithm but also the ideas behind a few of these other EE path-routing algorithms. A good overview of some of them is given in Prof. Hai Zhou’s lecture notes. Several relevant papers can also be found in the ACM and IEEE digital libraries.

References

  • [Lee61] Lee, C. Y. “An algorithm for path connections and its applications,” IRE Transactions on Electronic Computers, vol. EC-10, pp. 346-365, September 1961.
  • [Akers67] Akers Jr., S. B. “A modification of Lee’s path connection algorithm,” IEEE Transactions on Electronic Computers, vol. EC-16, pp. 97-98, February 1967.

11 thoughts on “Memory-efficient pathfinding”

  1. It seems like there could be a bit of a problem in figuring out where in the 0, 0, 1, 1 sequence the goal is. In your example, take the second 0 from the upper right corner. How do you know if it’s the first or second 0 in the path?

    Also, it appears diagonals are not allowed, though they would be if cells had unique ids.

  2. In both Lee’s algorithm and in Akers’ algorithm we have multiple paths that are all of optimal length (above there are 6 possible paths). When there are multiple choices we just pick arbitrarily (or perhaps based on a secondary criterion, such as avoiding path turns when possible).

    Both algorithms, as described above, assume a 4-connected neighborhood. If you want to include paths that use diagonal moves then you need to also flood-fill diagonally during the search phase of the algorithm, otherwise you won’t necessarily reach the goal from the start. Consider e.g. a chess board where all black squares are obstacles and start and goal are at opposite sides of the board on white squares (at A8 and H1); here there exists a path (along the diagonal), but the 4-connected flood-fill cannot number any squares.

    Lee’s algorithm trivially adjusts to diagonal moves; Akers’ might too (I haven’t thought about it).

  3. I’m most likely missing some subtle detail, but my objection wasn’t directed towards resolving multiple paths of equal length. Without knowing where you are in the sequence, it seems like it’s possible to head off in the wrong direction for a couple of cells before realizing it.

  4. My understanding of the Lee algorithm is that you first mark cells until the goal is reached, and then walk backwards from the goal to the starting cell. This approach doesn’t quite work with the Akers modification, though I suppose you could just pass in the preceding cell’s bit pattern during the first pass, or something along those lines.

    I don’t believe just passing in the depth is sufficient, however.

  5. “This approach doesn’t quite work with the Akers modification”

    Sure it does, you do exactly the same thing in both algorithms! For Akers’ algorithm you start at the goal (which had a value of 1), and then go, in sequence, to successive neighboring squares with the following values: 1, 0, 0, 1, 1, 0, 0, ….

    The beauty of Akers’ encoding is that following the sequence backwards (based on wherever in the sequence you were when you reached the goal) will make you arrive at the start every time!

  6. I was about to write a response further outlining my objection, which made me think about it some more, and I realized you’re right; the encoding is even more clever than I realized.

    The bit (no pun intended) I was missing was that on arrival at the goal cell during the flood-fill stage, you don’t need any previous path information; just knowing the previous depth is actually good enough, because knowing any two consecutive bits in the sequence uniquely determines your spot in the sequence.

    Anyways, cool algorithm!

  7. I’m a little confused why you feel that people writing games should be familiar with these algorithms. Traditional pathfinding algorithms like A* will search significantly fewer cells in significantly less RAM (your approach is low-memory only if you pretend that the mutable part of the map you’re using to keep track of what’s been touched isn’t to be counted).

    This approach is slow and memory hungry in comparison to normal methods. Please profile it.

  8. There are lots of reasons if you think about the problem more. For example, Lee’s algorithm trivially maps to GPGPU architectures, whereas A* does not (the priority queue, or similar, for handling the open/closed lists is not GPGPU friendly). Furthermore, running Lee’s algorithm on GPGPU trivially parallelizes and we can easily execute 4 pathfinding problems in parallel in a shader using a single rendertarget (one per component).

    Enter Akers’ algorithm. With a single RGBA8 buffer we can now do not just 4 problems in parallel, but 16 problems in parallel (as each problem only requires 2 of the 8 bits available per channel). Then add MRTs on top of that for even more parallelization.

    Always explore options. Always know multiple algorithms. Other reasons to care left as an exercise for the reader.

  9. Check this out:
    EGM’s interview with Platinum Games, formerly Clover Studio (Viewtiful Joe, God Hand, Okami).

    1UP: Okay. God of War?

    Hideki Kamiya: My impression is that it’s very carefully made, the details are all carefully made. Devil May Cry was a bit rough, but I think that there’s no roughness in God of War. I think that it’s very well-made. So there’s like a wall for your imagination or ideas, and when you’re actually creating something, you don’t even see the wall. But with GOW, I felt like I saw beyond the wall. So the GOW dev team’s imagination and their ideas were like… beyond. That’s something else that struck me. And also, I feel a bit bad about talking like somebody who is evaluating a game. But I felt like they created the game in a very good way which managed to bring in the users, they gave you a huge impactful feeling right in the beginning, and just drew you into the game. So I’m very impressed that they made the game like that.

    Yusuke Hashimoto: The level of the game was very high in general. So the things that you see in the game are things that you can’t even imagine. The imagination is one step ahead of you.

    Hideki Kamiya: In this game, they were using a camera system that was on rails. The camera follows the character like they’re on rail tracks. And so it makes the game really easy to play, and of course I was impressed by that. And there were people around me who were like, “Oh, you should use that camera.” But it’s not something that you can easily do. To create that kind of a system, you have to pile up a lot of technology, and study a lot, and research a lot, to be able to make a camera system like that. I was like “Okay, you guys appreciate the technology, but it’s not that easy. Everything in the game is not easy to make.” These guys were really serious about the creation, and they put in a lot of effort in it, and that’s why that game came out so well. So yeah, I played GOW2 and the PSP version, and I feel that they were very well-made.
