You should be able to exploit the fact that the matrix is affine to speed things up over a full inverse. Namely, if your matrix looks like this
A = [ M  b ]
    [ 0  1 ]
where A is 4x4, M is 3x3, b is 3x1, and the bottom row is (0,0,0,1), then
inv(A) = [ inv(M)   -inv(M) * b ]
         [   0            1     ]
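To make that concrete, here is a minimal sketch in C++ of the block-wise inverse. The storage convention (row-major float[16], translation in elements 3, 7, 11) and the function names invert3x3 / invertAffine are my own assumptions, not anything from the post; adapt the indexing to whatever your math library uses.

// Inverts the 3x3 block M (row-major) into Minv via the adjugate / determinant.
// Returns false if M is singular.
bool invert3x3(const float M[9], float Minv[9])
{
    float c00 = M[4] * M[8] - M[5] * M[7];
    float c01 = M[5] * M[6] - M[3] * M[8];
    float c02 = M[3] * M[7] - M[4] * M[6];
    float det = M[0] * c00 + M[1] * c01 + M[2] * c02;
    if (det == 0.0f) return false;
    float s = 1.0f / det;
    Minv[0] = c00 * s;
    Minv[1] = (M[2] * M[7] - M[1] * M[8]) * s;
    Minv[2] = (M[1] * M[5] - M[2] * M[4]) * s;
    Minv[3] = c01 * s;
    Minv[4] = (M[0] * M[8] - M[2] * M[6]) * s;
    Minv[5] = (M[2] * M[3] - M[0] * M[5]) * s;
    Minv[6] = c02 * s;
    Minv[7] = (M[1] * M[6] - M[0] * M[7]) * s;
    Minv[8] = (M[0] * M[4] - M[1] * M[3]) * s;
    return true;
}

// Inverts a 4x4 affine matrix A (row-major, bottom row 0 0 0 1) into Ainv
// using the block formula above. Returns false if the 3x3 part is singular.
bool invertAffine(const float A[16], float Ainv[16])
{
    // Extract M (upper-left 3x3) and invert it.
    float M[9] = { A[0], A[1], A[2],
                   A[4], A[5], A[6],
                   A[8], A[9], A[10] };
    float Minv[9];
    if (!invert3x3(M, Minv)) return false;

    // Upper-left block of the result is inv(M).
    Ainv[0] = Minv[0]; Ainv[1] = Minv[1]; Ainv[2]  = Minv[2];
    Ainv[4] = Minv[3]; Ainv[5] = Minv[4]; Ainv[6]  = Minv[5];
    Ainv[8] = Minv[6]; Ainv[9] = Minv[7]; Ainv[10] = Minv[8];

    // Upper-right column is -inv(M) * b, where b is the translation of A.
    float bx = A[3], by = A[7], bz = A[11];
    Ainv[3]  = -(Minv[0] * bx + Minv[1] * by + Minv[2] * bz);
    Ainv[7]  = -(Minv[3] * bx + Minv[4] * by + Minv[5] * bz);
    Ainv[11] = -(Minv[6] * bx + Minv[7] * by + Minv[8] * bz);

    // Bottom row stays (0, 0, 0, 1).
    Ainv[12] = 0.0f; Ainv[13] = 0.0f; Ainv[14] = 0.0f; Ainv[15] = 1.0f;
    return true;
}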
Depending on your situation, it may be faster to compute the result of inv(A) * x instead of actually forming inv(A). In that case, things simplify to
inv(A) * [x] = [ inv(M) * (x - b) ]
         [1]   [        1         ]
where x is a 3x1 vector (usually a 3D point).
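A sketch of that variant, under the same assumed row-major float[16] storage and reusing the hypothetical invert3x3 from above; it never builds the 4x4 inverse, it just shifts by -b and then multiplies by inv(M).

// Computes out = inv(A) * x for a 3D point x, without forming inv(A).
// Assumes the 3x3 block of A is non-singular.
void transformByInverse(const float A[16], const float x[3], float out[3])
{
    // y = x - b (b is the translation column of A).
    float y0 = x[0] - A[3];
    float y1 = x[1] - A[7];
    float y2 = x[2] - A[11];

    // Invert the 3x3 block and apply it: out = inv(M) * y.
    float M[9] = { A[0], A[1], A[2],
                   A[4], A[5], A[6],
                   A[8], A[9], A[10] };
    float Minv[9];
    invert3x3(M, Minv);

    out[0] = Minv[0] * y0 + Minv[1] * y1 + Minv[2] * y2;
    out[1] = Minv[3] * y0 + Minv[4] * y1 + Minv[5] * y2;
    out[2] = Minv[6] * y0 + Minv[7] * y1 + Minv[8] * y2;
}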
Lastly, if M represents a rotation (i.e. its columns are orthonormal), you can use the fact that inv(M) = transpose(M). Computing the inverse of A is then just a matter of subtracting the translation component and multiplying by the transpose of the 3x3 part.
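For that rotation-only case, a sketch under the same assumed storage; no 3x3 inverse is needed at all, since multiplying by transpose(M) just means reading the columns of M as rows.

// Computes out = inv(A) * x when the 3x3 block of A is a pure rotation:
// out = transpose(M) * (x - b).
void transformByInverseRigid(const float A[16], const float x[3], float out[3])
{
    // y = x - b
    float y0 = x[0] - A[3];
    float y1 = x[1] - A[7];
    float y2 = x[2] - A[11];

    // Multiply by transpose(M): the columns of M act as the rows of inv(M).
    out[0] = A[0] * y0 + A[4] * y1 + A[8]  * y2;
    out[1] = A[1] * y0 + A[5] * y1 + A[9]  * y2;
    out[2] = A[2] * y0 + A[6] * y1 + A[10] * y2;
}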
Note that whether or not the matrix is orthonormal is something you should know from your analysis of the problem. Checking it at runtime would be fairly expensive, although you might want to do it in debug builds to verify that your assumptions hold.
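A possible debug-build check, again just a sketch with assumed names and a tolerance you would tune for your own data: it verifies that the columns of the 3x3 block are unit length and mutually perpendicular.

#include <cassert>
#include <cmath>

// Returns true if the 3x3 block of A (row-major float[16]) is orthonormal
// to within eps, i.e. transpose(M) * M is approximately the identity.
bool isOrthonormal3x3(const float A[16], float eps = 1e-4f)
{
    // Columns of M.
    const float c[3][3] = { { A[0], A[4], A[8]  },
                            { A[1], A[5], A[9]  },
                            { A[2], A[6], A[10] } };
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
        {
            float dot = c[i][0] * c[j][0] + c[i][1] * c[j][1] + c[i][2] * c[j][2];
            float expected = (i == j) ? 1.0f : 0.0f;
            if (std::fabs(dot - expected) > eps) return false;
        }
    return true;
}

// Usage, e.g. right before calling transformByInverseRigid:
// assert(isOrthonormal3x3(A));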
Hope all of that is clear...