Secrets of a scalar triple product identity

In my post about the evilness that is coplanarity I asked to be reminded to blog about the scalar triple product. David Neubelt at Ready at Dawn actually sent me an email to take me up on it, but I’m afraid I just didn’t get around to it. Until now that is! While I’m not sure this post is worth the 8-month(!) wait David had to endure, hopefully there are some interesting takeaways here as I look into what we can learn by looking into a particular scalar triple product identity. Without further ado…

The scalar triple product

I’ve always liked the scalar triple product: the dot product of a vector a with the cross product of vectors b and c, that is, a • (b × c). The reason for my fancy is that this product is a surprisingly useful tool. Geometrically, the scalar result of the product corresponds to the (signed) volume of the parallelepiped formed by the vectors a, b, and c. The sign is positive if the triangle abc appears clockwise when viewed from the origin; otherwise the sign is negative. Equivalently, the sign is positive if the vectors a, b, and c form a (not necessarily orthogonal) right-handed coordinate system. The three vectors are coplanar (i.e. lie in the same plane) if and only if their scalar triple product is zero.

By studying the sign of the product we can, for example, determine if a directed line passes to the left or right of another directed line, which translates into a test for ray-triangle intersection in a straightforward way. (See my earlier rant against Plücker coordinates for the brief details, or Section 5.3.4 of my book for a fully worked example.)
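
To make this concrete, here is a minimal C++ sketch of the product and the coplanarity test; the Vec3 type and helper functions are made-up stand-ins for whatever vector library you use, not code taken from the book or this post.

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 Cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Scalar triple product [a b c] = a . (b x c): the signed volume of the
// parallelepiped spanned by a, b, and c.
float ScalarTriple(Vec3 a, Vec3 b, Vec3 c) { return Dot(a, Cross(b, c)); }

int main() {
    Vec3 a = { 1.0f, 0.0f, 0.0f };
    Vec3 b = { 0.0f, 1.0f, 0.0f };
    Vec3 c = { 0.0f, 0.0f, 1.0f };

    float v = ScalarTriple(a, b, c);
    printf("[a b c] = %f\n", v);  // 1.0: a, b, c form a right-handed system

    // Coplanarity test: the three vectors lie in a common plane if and only
    // if [a b c] is (near) zero. The epsilon is an arbitrary float tolerance.
    const float kEps = 1e-6f;
    printf("coplanar: %s\n", std::fabs(v) < kEps ? "yes" : "no");
}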

The scalar triple product notation

Scalar triple products occur frequently enough that they have been given their own special notation:

[a b c] = a • (b × c)

This notation makes it easy to remember that any even permutation (which, for three elements, is equivalent to a cyclic permutation) of the three vectors does not change the expression result, i.e.

[a b c] = [b c a] = [c a b]

Similarly, an odd permutation causes a sign change:

[a b c] = -[a c b]
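
For a quick sanity check of both properties, take for example a = (1, 0, 0), b = (0, 1, 0), and c = (0, 0, 1). Then:

[a b c] = a • (b × c) = (1, 0, 0) • (1, 0, 0) = 1
[b c a] = b • (c × a) = (0, 1, 0) • (0, 1, 0) = 1
[a c b] = a • (c × b) = (1, 0, 0) • (-1, 0, 0) = -1

The cyclic permutation leaves the value unchanged, while swapping the last two vectors flips the sign, just as the identities above say.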

This bracket notation also makes it easy to express (and remember) a number of vector identities. In my book, in Section 3.3.8, I list several useful identities involving scalar triple products. In particular, I list an identity that I want to give a closer examination in this post, namely:

u[v w x] - v[w x u] + w[x u v] - x[u v w] = 0

This may look rather gobbledygooky but it is actually a very useful and interesting identity, well worth remembering (though in a different form, as explained below). Let’s explore why in the next section.

Insight into the identity gobbledygook

To match the vector names I started out with and to ease the exposition, let us first rewrite the previous identity to an equivalent expression (which, arguably, is the one I should have given in my book, but hindsight is 20/20):

[a b c]d = [d b c]a + [a d c]b + [a b d]c

Now let’s assume [a b c] is not zero (that is, that the vectors are not coplanar). This allows us to divide both sides by [a b c], giving:

d = ([d b c]a + [a d c]b + [a b d]c) / [a b c]

This last expression is of the form:

d = ra + sb + tc

where r, s, and t are scalar constants. Now stop and look at that expression again, and think about what it means. Back from thinking? Good. So, right, this identity allows us to express d as a weighted combination of a, b, and c. Let me stress that, because if you didn’t know how to project a vector into a nonorthogonal coordinate system, well, now you do! (Alternatively, you can view this as computing the barycentric coordinates of d with respect to the simplex (here a tetrahedron) defined by a, b, c, and 0 (the origin).)

A valid question at this point is where the scalar values r, s, and t come from. Solving for them geometrically is, I think, rather messy (someone correct me if there is a simple way). However, we can solve d = ra + sb + tc as a 3×3 system of linear equations and obtain the scalars using Cramer’s rule.
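
Here is an illustrative C++ sketch of that solve, computing r, s, and t directly from scalar triple products (the Vec3 type, the helper functions, and the example numbers are my own stand-ins, and the degenerate coplanar case is only guarded with an assert):

#include <cassert>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 Cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float ScalarTriple(Vec3 a, Vec3 b, Vec3 c) { return Dot(a, Cross(b, c)); }

// Finds r, s, t such that d = r*a + s*b + t*c. Assumes a, b, c are not
// coplanar, i.e. [a b c] != 0.
void DecomposeOntoBasis(Vec3 a, Vec3 b, Vec3 c, Vec3 d,
                        float &r, float &s, float &t) {
    float abc = ScalarTriple(a, b, c);
    assert(std::fabs(abc) > 1e-6f);
    r = ScalarTriple(d, b, c) / abc;  // [d b c] / [a b c]
    s = ScalarTriple(a, d, c) / abc;  // [a d c] / [a b c]
    t = ScalarTriple(a, b, d) / abc;  // [a b d] / [a b c]
}

int main() {
    Vec3 a = { 1.0f, 0.0f, 0.0f };
    Vec3 b = { 1.0f, 1.0f, 0.0f };  // deliberately skewed basis
    Vec3 c = { 0.0f, 0.0f, 2.0f };
    Vec3 d = { 3.0f, 2.0f, 4.0f };

    float r, s, t;
    DecomposeOntoBasis(a, b, c, d, r, s, t);
    printf("d = %.1f a + %.1f b + %.1f c\n", r, s, t);  // d = 1.0 a + 2.0 b + 2.0 c
}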

Generalizing to 2D, 4D, or even nD

As you hopefully recall from elementary linear algebra, Cramer’s rule expresses the solved variables in terms of ratios of determinants. In 3D, it just so happens that the determinant, |a b c|, of the 3×3 matrix (a b c) is equivalent to the scalar triple product [a b c], so by Cramer’s rule we have that:

r = [d b c] / [a b c] = |d b c| / |a b c|,
s = [a d c] / [a b c] = |a d c| / |a b c|,
t = [a b d] / [a b c] = |a b d| / |a b c|.

Writing the expressions like this makes it clear how similar the determinant and scalar triple product notations are and how, in this case, they result in identical expressions. However, there’s a very important distinction. Scalar triple products only exist in 3D, because they involve cross products, which only exist in 3D (please keep the “they exist in 7D too” comments for some other time). Determinants do not have that problem. So Christer, you ask, why would we use the scalar triple product expression here and limit ourselves to 3D? That’s a good question, so let’s ask how we would do the same operation in 2D and 4D. Well, with determinants, it turns out that we can express this basis projection operation analogously to the 3D projection in 2D, in 4D, and even in nD!

The 2D decomposition of c onto the two vectors a and b is given by:

c = (|c b|a + |a c|b) / |a b|

Similarly, the 4D decomposition of e onto vectors a, b, c, and d is:

e = (|e b c d|a + |a e c d|b + |a b e d|c + |a b c e|d) / |a b c d|

All three (2D, 3D, and 4D) identities are now so similar in pattern that if you remember one, you can directly infer the other ones.
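
For instance, a minimal C++ sketch of the 2D case (with an illustrative Vec2 type and example numbers of my own choosing) needs nothing but 2×2 determinants:

#include <cstdio>

struct Vec2 { float x, y; };

// Determinant of the 2x2 matrix whose columns are u and v.
float Det2(Vec2 u, Vec2 v) { return u.x * v.y - u.y * v.x; }

int main() {
    Vec2 a = { 2.0f, 0.0f };
    Vec2 b = { 1.0f, 1.0f };  // a skewed (nonorthogonal) 2D basis
    Vec2 c = { 4.0f, 2.0f };  // the vector to decompose

    float ab = Det2(a, b);      // |a b|, nonzero when a and b are independent
    float r = Det2(c, b) / ab;  // |c b| / |a b|
    float s = Det2(a, c) / ab;  // |a c| / |a b|
    printf("c = %.1f a + %.1f b\n", r, s);  // c = 1.0 a + 2.0 b
}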

Does this mean I should be less enamored with the scalar triple product notation as it is inherently locked to three-dimensional space, whereas the determinant notation is not? Well, yes, perhaps, but it was the scalar triple product identity that took us here so let’s not diss the scalar triple product just yet. Instead, let’s see what more we can learn from that identity. Like, right now, below.

What’s your basis for that basis?

Earlier we concluded that this identity

d = ([d b c]a + [a d c]b + [a b d]c) / [a b c]

allows us to express d in terms of the nonorthogonal basis { a, b, c } by computing the scalars r, s, and t as per above, giving d = ra + sb + tc. This is all good, but let’s say we have tons of vectors we want expressed in this basis. Can we come up with something cheaper to compute? I mean, if we had an orthonormal basis { a, b, c } instead, we could simply express d as

d = (d • a)a + (d • b)b + (d • c)c

which is much cheaper (using only 3 dot products). Yes, it turns out we can, because “hidden” inside our identity is a dual (or reciprocal) basis that will make our repeated projections much cheaper!

Let’s do some successive rewrites of the identity to reveal this dual basis:

d = ([d b c]a + [a d c]b + [a b d]c) / [a b c]
 = k ([d b c]a + [a d c]b + [a b d]c)
 = k ((d • (b × c))a + (d • (c × a))b + (d • (a × b))c)
 = (d • a’)a + (d • b’)b + (d • c’)c

where a’ = k (b × c), b’ = k (c × a), c’ = k (a × b), and k = 1 / [a b c].

Note that { a’, b’, c’ } is not, in general, an orthogonal basis itself. Rather, each dual basis vector is orthogonal to two of the original basis vectors and has unit dot product with the remaining one: a • a’ = 1 while b • a’ = c • a’ = 0, and similarly for b’ and c’. That is exactly what we want, because even though the original basis vectors aren’t guaranteed to be unit length (or mutually perpendicular), the dot products with the dual basis vectors still pick out the coordinates r, s, and t exactly!

So, when we have multiple projections to make, rather than computing r, s, and t as before, we compute the dual basis { a’, b’, c’ } just once and then obtain r, s, and t as

r = d • a’
s = d • b’
t = d • c’

for every d that needs to be transformed.
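
Putting the trick together, here is a small illustrative C++ sketch (once more with made-up Vec3 helpers): build the dual basis once, then project each d using just three dot products.

#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 Cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 Scale(Vec3 v, float k) { return { k * v.x, k * v.y, k * v.z }; }

int main() {
    Vec3 a = { 1.0f, 0.0f, 0.0f };
    Vec3 b = { 1.0f, 1.0f, 0.0f };  // a skewed basis
    Vec3 c = { 0.0f, 0.0f, 2.0f };

    // One-time setup: a' = (b x c)/[a b c], b' = (c x a)/[a b c],
    // c' = (a x b)/[a b c]. Assumes [a b c] != 0 (vectors not coplanar).
    float k = 1.0f / Dot(a, Cross(b, c));
    Vec3 aDual = Scale(Cross(b, c), k);
    Vec3 bDual = Scale(Cross(c, a), k);
    Vec3 cDual = Scale(Cross(a, b), k);

    // Per-vector projection: three dot products, no division.
    Vec3 d = { 3.0f, 2.0f, 4.0f };
    float r = Dot(d, aDual), s = Dot(d, bDual), t = Dot(d, cDual);
    printf("d = %.1f a + %.1f b + %.1f c\n", r, s, t);  // d = 1.0 a + 2.0 b + 2.0 c
}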

It turns out that such a dual basis exists for any basis (in 3D), nonorthogonal or not, so we can use this dual basis “trick” whenever we work with nonorthogonal spaces, which can be quite handy.

Conclusion

OK, so that’s enough for tonight (I say). What did we learn? Well, scalar triple products are cool because they greatly simplify vector math. Determinants (of matrices formed by column vectors) are probably even cooler, because they simplify things just as well while also letting us generalize more easily to e.g. 2D or 4D. Lastly, a dual basis is just the thing you should be looking for when you have a skewed basis in life!

3 thoughts on “Secrets of a scalar triple product identity”

  1. I’m not sure if your “don’t post comments about generalizing” remark was aimed specifically at 7D or not, so I’m going to anyways ;).

    The cross product and scalar triple product generalize to any number of dimensions (probably with different names). CrossN takes N-1 arguments and permuting the arguments has the same effect as the triple scalar product (even is equal, odd is negative). The ScalarProductN would take N parameters and relates to permutations and determinants in the same way.

    Example “useful” feature, a 4d plane (N,-d) equation is the “cross” of the homogeneous versions of 3 input points. Obviously there’s a cheaper way to compute it than a generic cross in this specific circumstance. Inverses can also be computed in a similar way because of the relationship between cross and the null space of given set of vectors. Specifically, your dual basis a’b’c’ put in matrix form is the matrix inverse of abc.

    An interesting possible extension was http://www.geometrictools.com/Documentation/LaplaceExpansionTheorem.pdf . It kind of looks similar to cross, but I don’t know if it’s just coincidence yet. That specific formulation leads to a decent, intuitively vectorized 4×4 matrix inverse at least.

  2. adruab, there are certainly cross product-like operations that can be defined in arbitrary dimensions, as you describe. What I referred to with my parenthetical remark, however, is that when we use a stricter definition of cross product we find that the only two dimensions that have that “strict” cross product are dimensions 3 and 7. The late Pertti Lounesto describes this in a lot of detail in this usenet post of his (archived at Dave Rusin’s excellent The Mathematical Atlas site).
