Deriving the glm LookAt function
=== DRAFT ===
Foreword
While the purpose of this post is to derive a function used in graphics programming, some prior experience with Linear Algebra is expected, at the very least a first course. I've tried to present the necessary prerequisites in the way that I wish someone had presented them to me the first time I studied the subject.
With that said, I'm no teacher, so it's very likely that I will fail at this task.
Intro
TODO:
- Motivation
Fields
For the purpose of this post the formal definition of a field is optional; you may simply think of a field $F$ as a set of numbers (or scalars, for those with previous linear algebra experience), for example $\mathbb{R}$ (the set of real numbers).
A field $F$ is a set containing at least two elements called 0 and 1, together with two binary operations, addition and multiplication for which the following properties are satisfied:
- If $a, b \in F$ then $a + b \in F$ and $ab \in F$
- For all $a,b,c \in F$, $(a + b)c = ac + bc$ and $c(a + b) = ca + cb$
- For all $a,b \in F$, $a + b = b + a$ and $ab = ba$
- There exists a distinct element $0 \in F$, such that for all $a \in F$, $a + 0 = 0 + a = a$
- There exists a distinct element $1 \in F$, such that for all $a \in F$, $1 a = a 1 = a$
- For each $a \in F$, there exists an element $-a \in F$, such that $a + (-a) = 0$
- For each $a \in F$ with $a \ne 0$, there exists an element $a^{-1} \in F$, such that $aa^{-1} = 1$
Vector spaces
Let $F$ be a field, whose elements are called scalars. A vector space over $F$ is a nonempty set $V$, whose elements are called vectors, together with two operations, addition and scalar multiplication. If $V$ is a vector space then the following properties hold:
- For all vectors $u, v, w \in V$, $u + (v + w) = (u + v) + w$
- For all vectors $u, v \in V$, $u + v = v + u$
- There exists a vector $0 \in V$ such that $0 + u = u + 0 = u$, for all $u \in V$
- For each $u \in V$ there exists a vector $-u \in V$, such that $u + (-u) = (-u) + u = 0$
- For all scalars $a,b \in F$ and all vectors $u, v \in V$ ££ a(u + v) = au + av \\ (a + b)u = au + bu \\ (ab)u = a(bu) \\ 1u = u ££
If $V$ is a vector space and $W$ is a subset of $V$ that is itself a vector space under the same operations, then $W$ is called a subspace of $V$.
We use the notation $W \subseteq V$ to indicate that $W$ is a subspace of $V$ and $W \subset V$ to indicate that $W$ is a proper subspace of $V$, that is, $W \ne V$.
Linear Combinations
Let $V$ be a vector space over a field $F$. A linear combination of vectors in $V$ is an expression of the form ££ a_1 v_1 + \dots + a_n v_n ££ where $v_1 \dots v_n \in V$ and $a_1 \dots a_n \in F$.
Linear Span
A set of vectors $S$ spans a vector space $V$ if every vector in $V$ can be written as a linear combination of the vectors in $S$.
The subspace spanned by a nonempty set $S$ of vectors in $V$ is the set of all linear combinations from $S$:
££ \text{span}(S) = \{ a_1 v_1 + \dots + a_n v_n | a_i \in F, v_i \in S \} ££
Linear Independence
Let $V$ be a vector space. A nonempty set $W$ of vectors in $V$ is linearly independent if for any distinct vectors $w_1, \dots, w_n$ in $W$
££ a_1 w_1 + \dots + a_n w_n = 0 \Rightarrow a_i = 0 \text{ for all } i ££
The previous definition is saying that $W$ is linearly independent if the only linear combination of vectors from $W$ that is equal to $0$ (the zero vector) is the trivial linear combination, that is when the coefficients $a_1 \dots a_n$ are all 0. If $W$ is not linearly independent it is linearly dependent.
Example: The set ££ W = \Bigl\{ v_1 = \begin{bmatrix} 1 \\ 2\end{bmatrix}, v_2 = \begin{bmatrix} 2 \\ 4\end{bmatrix} \Bigr\} ££
is linearly dependent because $v_2 - 2 v_1 = 0$. However, the set ££ W = \Bigl\{ v_1 = \begin{bmatrix} 1 \\ 0\end{bmatrix}, v_2 = \begin{bmatrix} 2 \\ 4\end{bmatrix} \Bigr\} ££
is linearly independent because the only linear combination of vectors $v_1, v_2$ that is equal to $0$ is the trivial one: $0 v_1 + 0 v_2 = 0$.
Basis
A set of vectors $E$ in a vector space $V$ is called a basis for $V$ if $E$ is linearly independent and $\text{span}(E) = V$.
Example: The standard or canonical basis for $V = F^n$ is the set of vectors $E = \{ e_1, \dots, e_n \}$ where $e_i$ has a one in the $i$-th coordinate and zeroes everywhere else.
Linear Maps
If $V$ and $W$ are finite-dimensional vector spaces over some field $F$, then a linear map from $V$ to $W$ is a function $T : V \rightarrow W$ with the following property:
££ T (\alpha u + \beta v) = \alpha T u + \beta T v \hskip{0.5em} \text{for all } u,v \in V \text{ and } \alpha, \beta \in F ££
Example: (Identity)
The identity operator $I$ is the linear map that takes every element of $V$ to itself. ££ I v = v ££
Example: (Multiplication by x)
££ T(p(x)) = x p(x) ££ where $p$ is a polynomial in $P_n$, the vector space of polynomials up to degree $n$ (note that $T$ maps $P_n$ into $P_{n+1}$).
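As a quick check that this map really is linear, using the definition above: ££ T(\alpha p(x) + \beta q(x)) = x (\alpha p(x) + \beta q(x)) = \alpha \, x p(x) + \beta \, x q(x) = \alpha T(p(x)) + \beta T(q(x)) ££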
Ordered basis and Coordinate vectors
Important
Notice that I have yet to mention anything about coordinate systems or geometric vectors. This is very intentional: I want to stress that vectors, vector spaces, and linear maps exist independently of coordinate systems, and that vectors are not just ordered lists of numbers. Other examples of vector spaces include:
- $P_n$ : the set of polynomials up to degree $n$.
- $M(F)_{m \times n}$ : the set of $m \times n$ matrices over $F$.
- $L(V,W)$ : the set of linear maps from $V$ to $W$.
Another example of a vector space that we've seen many times throughout this post is the coordinate space $F^n$, the set of ordered $n$-tuples $v = (a_1, \dots, a_n)$ where $a_1, \dots, a_n$ are the coordinates of the vector $v$ relative to some basis.
Let $V$ be a vector space of dimension $n$. An ordered basis for $V$ is an ordered $n$-tuple $(b_1, \dots, b_n)$ of vectors for which the set $\{b_1, \dots, b_n\}$ is a basis for $V$.
If $B = (b_1, \dots, b_n)$ is an ordered basis for V, then for each $v \in V$ there is a unique ordered $n$-tuple $(a_1, \dots, a_n)$ of scalars for which ££ v = a_1 b_1 + \dots + a_n b_n ££
Coordinate maps
We are now ready to define the coordinate map $\Phi_B : V \rightarrow F^n$ by ££ \Phi_B(v) = [v]_B = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} ££ where the column vector (or matrix) $[v]_B$ is called the coordinate vector of $v$ with respect to the ordered basis $B$. Furthermore, this map is linear ££ \Phi_B(a_1 v_1 + \dots + a_n v_n) = a_1 \Phi_B(v_1) + \dots + a_n \Phi_B(v_n) ££
or equivalently, since I will use both notations depending on the situation
££ [a_1 v_1 + \dots + a_n v_n]_B = a_1 [v_1]_B + \dots + a_n [v_n]_B ££
Example
Consider the vector space $P_2$ with polynomials $p(x) = a_0 + a_1 x + a_2 x^2$ and the standard ordered basis $B = (1, x, x^2)$ of $P_2$, then ££ [p(x)]_B = [a_0 + a_1 x + a_2 x^2]_B = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} ££ Now consider instead the specific polynomial $p(x) = x + 2x^2$, then ££ [p(x)]_B = [0 + x + 2 x^2]_B = \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix} ££ How about $[x + 2x^2]_C$ where $C = (1, x - 1, x^2)$? What is the linear combination of the basis vectors in $C$ that equals $p(x) = x + 2x^2$?
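In case you want to check your answer, here is one way to work it out: matching coefficients in $x + 2x^2 = a_0 \cdot 1 + a_1 (x - 1) + a_2 x^2 = (a_0 - a_1) + a_1 x + a_2 x^2$ gives $a_1 = 1$, $a_2 = 2$ and $a_0 = a_1 = 1$, so ££ [x + 2x^2]_C = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} ££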
Conclusion
The only vector space that we will consider going forward is the coordinate space $F^{n}$, but I wanted to drive home a point: in my experience, a first course in linear algebra focuses too much on coordinate vectors in $\mathbb{R^2}$ and $\mathbb{R^3}$ and immediately presents them as arrows from the origin to some point, perhaps for the sake of visualization.
I intentionally picked a different vector space in the examples above in an attempt to make the distinction between a vector and its coordinates with respect to some ordered basis clear.
Matrix Vector multiplication
There are multiple ways of defining matrix-vector multiplication and it's useful to be comfortable with all of them. Here's one that I personally feel should be the standard definition if there ever was to be one; it will also prove the most useful for this post.
££ \begin{bmatrix} m_{11} & m_{21} & m_{31} \\ m_{12} & m_{22} & m_{32} \\ m_{13} & m_{23} & m_{33} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = v_1 \begin{bmatrix} m_{11} \\ m_{12} \\ m_{13} \\ \end{bmatrix} + v_2 \begin{bmatrix} m_{21} \\ m_{22} \\ m_{23} \\ \end{bmatrix} + v_3 \begin{bmatrix} m_{31} \\ m_{32} \\ m_{33} \\ \end{bmatrix} ££ and in general ££ \begin{bmatrix} \vert & \vert & & \vert \\ m_1 & m_2 & \dots & m_k \\ \vert & \vert & &\vert \end{bmatrix} \begin{bmatrix}v_1 \\ v_2 \\ \vdots \\ v_k\end{bmatrix} = v_1 \begin{bmatrix}\vert \\ m_1 \\ \vert\end{bmatrix} + v_2 \begin{bmatrix}\vert \\ m_2 \\ \vert\end{bmatrix} + \dots + v_k \begin{bmatrix}\vert \\ m_k \\ \vert\end{bmatrix} ££ where the vertical bars indicate column vectors.
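To make the column picture concrete, here is a minimal C++ sketch (plain std::array, no library; the Vec3/Mat3 aliases and the mul function are names I made up). It stores a matrix as three columns, which also matches the column-row subscripts used above and glm's column-major layout:

```cpp
#include <array>

using Vec3 = std::array<float, 3>;
using Mat3 = std::array<Vec3, 3>; // three columns, so m[col][row], like glm

// out = v1 * (first column) + v2 * (second column) + v3 * (third column)
Vec3 mul(const Mat3& m, const Vec3& v) {
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (int col = 0; col < 3; ++col)
        for (int row = 0; row < 3; ++row)
            out[row] += v[col] * m[col][row];
    return out;
}
```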
Matrix Matrix multiplication
Suppose $A, B$ are $k \times k$ real matrices. One way of thinking about their product $A B$ is
££ A \begin{bmatrix} \vert & \vert & & \vert \\ b_1 & b_2 & \dots & b_k \\ \vert & \vert & &\vert \end{bmatrix} = \begin{bmatrix} \vert & \vert & & \vert \\ A b_1 & A b_2 & \dots & A b_k \\ \vert & \vert & &\vert \end{bmatrix} ££ where $b_1, \dots, b_k$ are the columns of $B$.
Another way is to think of each entry $c_{i,j}$ of the output matrix $C$ as being the product $a_i^T b_j$ (dot-product), where $i, j = 1 \dots k$ and $a_1^T \dots a_k^T$ are the rows of $A$.
££ \begin{bmatrix} - & a_1^T & - \\ & \vdots & \\ - & a_i^T & - \\ & \vdots & \\ - & a_k^T & - \end{bmatrix} \begin{bmatrix} \vert & & \vert & & \vert \\ b_1 & \dots & b_j & \dots & b_k \\ \vert & & \vert & & \vert \end{bmatrix} = \begin{bmatrix} a_1^T b_1 & \dots & a_1^T b_j & \dots & a_1^T b_k \\ \vdots & \ddots & \vdots & & \vdots \\ a_i^T b_1 & \dots & a_i^T b_j & \dots & a_i^T b_k \\ \vdots & & \vdots & \ddots & \vdots \\ a_k^T b_1 & \dots & a_k^T b_j & \dots & a_k^T b_k \end{bmatrix} ££ This way of thinking about matrix multiplication will prove useful later.
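And the dot-product view in code, using the same column-major storage as the previous sketch (again, the names are mine):

```cpp
#include <array>

using Vec3 = std::array<float, 3>;
using Mat3 = std::array<Vec3, 3>; // same column-major layout as the sketch above

// c_{ij} = (row i of A) dot (column j of B)
Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 c{};
    for (int j = 0; j < 3; ++j)       // column j of B and C
        for (int i = 0; i < 3; ++i) { // row i of A and C
            float dot = 0.0f;
            for (int k = 0; k < 3; ++k)
                dot += a[k][i] * b[j][k]; // A_{ik} * B_{kj}
            c[j][i] = dot;
        }
    return c;
}
```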
Inner product spaces
If you have previous experience with linear algebra or perhaps vector calculus, then you have most certainly come in contact with the standard inner product of $\mathbb{R^n}$, commonly referred to as the "dot-product", defined as ££ u \cdot v = \begin{bmatrix} u_1 \\ \vdots \\ u_n\end{bmatrix} \cdot \begin{bmatrix} v_1 \\ \vdots \\ v_n\end{bmatrix} = u_1 v_1 + \dots + u_n v_n ££
or equivalently
££ u^T v = \begin{bmatrix} u_1 & \dots & u_n\end{bmatrix} \begin{bmatrix} v_1 \\ \vdots \\ v_n\end{bmatrix} = u_1 v_1 + \dots + u_n v_n ££ The second form will prove more useful later.
I will only focus on Real Inner Product spaces in this post, ignoring the complex case.
Let $V$ be a vector space over $F = \mathbb{R}$. $V$ is an inner product space if there is a function $\left< \cdot, \cdot \right> : V \times V \rightarrow F$ with the following properties:
- For all $v \in V$ ££\left< v, v\right> \ge 0££ with equality if and only if $v = 0$ (the zero vector)
- For all $u, v \in V$ ££\left< u, v\right> = \left< v, u\right>££
- For all $u, v, w \in V$ and $a, b \in F$ ££ \left< a u + b v, w\right> = a \left< u, w \right> + b \left< v, w \right> ££
The second property (symmetry) together with the third (linearity in the first argument) means that the inner product (in the real case) is a bilinear function. This will prove useful later.
If $V$ is an inner product space, the length or norm of $v \in V$ is defined by ££ ||v|| = \sqrt{\left< v, v\right>} ££
A vector $v$ in an inner product space $V$ is a unit vector if $ ||v|| = 1$.
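If you want to experiment with these definitions, glm exposes the standard inner product and the induced norm of $\mathbb{R^3}$ directly. A small sketch, assuming glm is available on the include path:

```cpp
#include <glm/glm.hpp>
#include <cstdio>

int main() {
    glm::vec3 u(1.0f, 2.0f, 2.0f);
    glm::vec3 v(2.0f, 0.0f, 1.0f);

    float inner = glm::dot(u, v);        // standard inner product, u^T v = 4 here
    float norm  = glm::length(u);        // ||u|| = sqrt(<u, u>) = 3 here
    glm::vec3 unit = glm::normalize(u);  // u / ||u||, a unit vector

    std::printf("dot = %f, norm = %f, normalized length = %f\n",
                inner, norm, glm::length(unit));
}
```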
Why we would want inner product spaces will become clear in the next section on orthogonality.
Orthogonality
In Linear Algebra, orthogonality is a generalization of perpendicularity from geometry that extends to any inner product space.
If $V$ is an inner product space then vectors $u, v \in V$ are orthogonal if $\left< u, v\right> = 0$.
Example Consider the inner product space $\mathbb{R^2}$ with the standard inner product $\left< u, v\right> = u^T v$ and let ££ u = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \hskip{0.5em} v = \begin{bmatrix} 0 \\ 1 \end{bmatrix} ££ then $u$ and $v$ are orthogonal, because ££ u^T v = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = (0) 1 + (1) 0 = 0 ££
Two unit vectors $u, v$ are orthonormal if they are orthogonal.
Orthonormal matrices
An orthonormal matrix $A$ is a square matrix whose columns are orthonormal.
Proposition: If $A$ is an orthonormal matrix then $A^T = A^{-1}$
Proof: Consider the product $A^T A$. By the definition of matrix-matrix multiplication, $A^T A$ has entries $(A^T A)_{i j} = a^T_i a_j$, where $a_i$ denotes the $i$-th column of $A$; this is the standard inner product of $\mathbb{R^n}$ (also known as the dot-product). Since the columns of $A$ are orthonormal it follows that $(A^T A)_{ij} = 1$ for $i = j$ and $0$ otherwise, hence $A^T A = I$ and $A^T = A^{-1}$ as desired.
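As a quick numerical sanity check of the proposition, we can build a matrix with orthonormal columns (a rotation) and verify that $A^T A$ comes out as the identity. A sketch using glm's rotate helper from the gtc extension:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::rotate
#include <cstdio>

int main() {
    // A rotation matrix has orthonormal columns, so A^T A should be the identity.
    glm::mat3 A = glm::mat3(glm::rotate(glm::mat4(1.0f),
                                        glm::radians(30.0f),
                                        glm::vec3(0.0f, 1.0f, 0.0f)));
    glm::mat3 I = glm::transpose(A) * A;

    for (int col = 0; col < 3; ++col)
        std::printf("%8.5f %8.5f %8.5f\n", I[col][0], I[col][1], I[col][2]);
}
```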
Orthogonal Projection
A basis $E = \{e_1, \dots, e_n\}$ is an orthogonal basis if the vectors in $E$ are mutually orthogonal, i.e. $\left<e_i, e_j\right> = 0$ for $i \ne j$.
Let $V$ be an inner product space and $W$ a subspace of $V$. The orthogonal complement of $W$ is the set ££ W^{\perp} = \{ v \in V \mid \left<v, w\right> = 0 \text{ for all } w \in W \} ££
In words, $W^{\perp}$ is the set of vectors in $V$ that are orthogonal to every vector in $W$.
Let $W$ be a subspace of $V$ and let $v$ be a vector in $V$. Then $v$ can be uniquely decomposed as ££ v = w + p ££ where $w \in W$ and $p \in W^{\perp}$.
$w + p$ is called the orthogonal decomposition of $v$ with respect to $W$, and $w$ is the orthogonal projection of $v$ onto $W$.
Let $V$ be an inner product space and $W$ a subspace of $V$. Suppose we want to find the orthogonal projection of a vector $v \in V$ onto $W$. Let $w$ be the orthogonal projection of $v$ onto $W$ and let the vectors $b_1, \dots, b_n$ form an orthogonal basis for $W$, then
££ w = \lambda_1 b_1 + \dots + \lambda_i b_i + \dots + \lambda_n b_n ££
We now know that $v$ can be written as ££ v = w + p = \lambda_1 b_1 + \dots + \lambda_i b_i + \dots + \lambda_n b_n + p ££ where $p \in W^{\perp}$.
If we apply the inner product with $b_i$ to both sides of this expression we obtain
££ \left< v, b_i \right> = \left< \lambda_1 b_1 + \dots + \lambda_i b_i + \dots + \lambda_n b_n + p, b_i \right> ££ Since the inner product is linear in the first argument we can rewrite this equation as ££ \left< v, b_i \right> = \lambda_1 \left< b_1, b_i \right> + \dots + \lambda_i \left< b_i, b_i \right> + \dots + \lambda_n \left< b_n, b_i \right> + \left< p, b_i \right> ££
Since the vectors $b_1, \dots, b_n$ are mutually orthogonal and $p$ is orthogonal to each of them, all the terms on the right-hand side except for $\lambda_i \left< b_i, b_i \right>$ are equal to 0. Hence ££ \left< v, b_i \right> = \lambda_i \left< b_i, b_i \right> ££ solving for $\lambda_i$ we get ££ \lambda_i = \frac{\left< v, b_i \right>}{\left< b_i, b_i \right>} ££ and then we can write $w$ as ££ w = \frac{\left< v, b_1 \right>}{\left< b_1, b_1 \right>} b_1 + \dots + \frac{\left< v, b_n \right>}{\left< b_n, b_n \right>} b_n ££ or more compactly as ££ w = \sum_{i = 1}^n \frac{\left< v, b_i \right>}{\left< b_i, b_i \right>} b_i ££
We now have a formula for the orthogonal projection of $v$ onto $W$ in terms of the orthogonal basis $\{ b_1, \dots, b_n \}$.
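In code, for $\mathbb{R^3}$ with the standard inner product, the formula becomes a short loop. A sketch, where the function name and example basis are my own choices and the basis passed in is assumed to be orthogonal:

```cpp
#include <glm/glm.hpp>
#include <vector>

// Orthogonal projection of v onto the subspace spanned by an orthogonal basis.
glm::vec3 project(const glm::vec3& v, const std::vector<glm::vec3>& basis) {
    glm::vec3 w(0.0f);
    for (const glm::vec3& b : basis)
        w += (glm::dot(v, b) / glm::dot(b, b)) * b; // lambda_i * b_i
    return w;
}

// Example: projecting (1, 2, 3) onto the xy-plane, spanned by {e1, e2},
// gives w = (1, 2, 0), and p = v - w = (0, 0, 3) lies in the orthogonal complement.
```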
Orthogonalization
Let $V$ be an inner product space, let $B = \{ b_1, \dots, b_n \}$ be a basis for $V$ that is not necessarily orthogonal, and suppose we want to construct an orthogonal basis for $V$.
Let the first vector in our new basis be $v_1 = b_1$. This vector spans a 1-dimensional subspace of $V$, call it $W_1$. Now, since $b_2$ is in $V$ we know that it can be written as ££ b_2 = w_1 + v_2 ££ where $v_2 \in W_1^{\perp}$ and $w_1 \in W_1$ is the orthogonal projection of $b_2$ onto $W_1$. If we subtract the vector $w_1$ from $b_2$ we get a vector $v_2$ that is orthogonal to $W_1$, as desired. ££ v_2 = b_2 - w_1 ££ We also know that, since $w_1$ is the orthogonal projection of $b_2$ onto $W_1$, $v_2$ can be written as ££ v_2 = b_2 - \sum_{i = 1}^1 \frac{\left< b_2, v_i \right>}{\left< v_i, v_i \right>} v_i = b_2 - \frac{\left< b_2, v_1 \right>}{\left< v_1, v_1 \right>} v_1 ££
In a similar fashion, we can write $b_3$ as ££ b_3 = w_2 + v_3 ££ where $w_2$ is the orthogonal projection of $b_3$ onto the subspace $W_2$ spanned by $v_1, v_2$, and $v_3 \in W_2^{\perp}$, hence
££ v_3 = b_3 - \sum_{i = 1}^2 \frac{\left< b_3, v_i \right>}{\left< v_i, v_i \right>} v_i ££ Continuing this way we get the general formula for the next vector in our orthogonal basis (note that we project onto the vectors $v_i$ we have already constructed, not the original $b_i$, since the projection formula requires an orthogonal basis). ££ v_{k + 1} = b_{k+1} - \sum_{i = 1}^k \frac{\left< b_{k+1}, v_i \right>}{\left< v_i, v_i \right>} v_i ££
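This procedure is usually called the Gram-Schmidt process, and it translates almost directly into code. A sketch for $\mathbb{R^3}$ with the standard inner product (the function name is mine, and the input is assumed to be a linearly independent set):

```cpp
#include <glm/glm.hpp>
#include <vector>

// Turn a (not necessarily orthogonal) basis into an orthogonal one: each new
// vector is the original one minus its projection onto the already built vectors.
std::vector<glm::vec3> gramSchmidt(const std::vector<glm::vec3>& b) {
    std::vector<glm::vec3> v;
    for (const glm::vec3& bk : b) {
        glm::vec3 vk = bk;
        for (const glm::vec3& vi : v)
            vk -= (glm::dot(bk, vi) / glm::dot(vi, vi)) * vi;
        v.push_back(vk);
    }
    return v;
}
```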
Change of basis matrices
When writing a linear transformation as a matrix we need to pick a basis since this matrix encodes what the transformation does to the basis vectors.
We previously covered the concept of coordinate maps $\Phi_B : V \rightarrow F^n$ that map vectors in some vector space $V$ to their coordinate vectors relative to some basis $B$.
Consider the vector space $V = \mathbb{R^2}$ and let $E$ be the standard basis. ££ E = \Bigl\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix} \Bigr\} ££ Take any vector $v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \in V$, then $ \Phi_E (v) = [ v ]_E = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} $. Consider now a different basis of $\mathbb{R^2}$, for example
££ C = \Bigl\{ c_1 = 2 e_1 + e_2, c_2 = - e_1 + 2 e_2 \Bigr\} ££ where the basis vectors $c_1, c_2$ are expressed as linear combinations of the standard basis vectors $e_1, e_2$.
How would we write the vector $v$ relative to this new basis? Sure, we could go through all the necessary calculations for every vector $v \in V$ in order to find its coordinate vector $[v]_C$.
However, notice that we know how to express the basis vectors $c_1, c_2$ in terms of the vectors $e_1, e_2$. Is there perhaps some matrix that will convert coordinate vectors $[v]_E$ to their respective coordinate vector $[v]_C$?
We can use the fact that any vector $v \in V$ can be expressed as a linear combination of the basis vectors $c_1, c_2$, because if
££ v = a_1 c_1 + a_2 c_2 ££ then, since the coordinate map $\Phi_E$ is linear, we have ££ [v]_E = [a_1 c_1 + a_2 c_2]_E = a_1 [c_1]_E + a_2 [c_2]_E ££ which we can write as a matrix equation (see the section on matrix-vector multiplication) ££ [v]_E = \begin{bmatrix} \vert & \vert \\ [c_1]_E & [c_2]_E \\ \vert & \vert \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} ££ Now, what are the coefficients $a_1, a_2$? They are the coordinates of the vector $v$ relative to the basis $C$, thus we have
££ [v]_E = \begin{bmatrix} \vert & \vert \\ [c_1]_E & [c_2]_E \\ \vert & \vert \end{bmatrix} [v]_C ££ The matrix with columns $[c_1]_E, [c_2]_E$ is the so-called change of basis matrix from the basis $C$ to $E$, which I will denote as $M_{C,E}$. How is this going to help us in finding $\Phi_C(v) = [v]_C$?
The following diagram could be useful for intuition.
Notice that applying $\Phi_C$ and then $M_{C,E}$ is the same as applying $\Phi_E$, in other words $\Phi_E(v) = M_{C,E} \Phi_C(v)$, thus $\Phi_C(v) = M_{C,E}^{-1} \Phi_E(v) = M_{E,C} \Phi_E(v)$.
We have ££ M_{C,E} = \begin{bmatrix} \vert & \vert \\ [c_1]_E & [c_2]_E \\ \vert & \vert \end{bmatrix} = \begin{bmatrix} 2 & -1\\ 1 & 2 \end{bmatrix} ££
taking its inverse we obtain ££ M_{E,C} = M_{C,E}^{-1} = \begin{bmatrix} 2/5 & 1/5\\ -1/5 & 2/5 \end{bmatrix} ££ Now, if we have some vector $[v]_E$ we can use this matrix to obtain $[v]_C$ ££ [v]_C = \begin{bmatrix} 2/5 & 1/5\\ -1/5 & 2/5 \end{bmatrix} [v]_E ££
Example
Let
££ v = (1) c_1 + (1) c_2 = (1) (2 e_1 + e_2) + (1) (-e_1 + 2 e_2) = e_1 + 3 e_2 ££ then ££ [v]_C = [c_1 + c_2]_C = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \hskip{1em} \text{ and } \hskip{1em} [v]_E = [e_1 + 3 e_2]_E = \begin{bmatrix} 1 \\ 3 \end{bmatrix} ££
Let's verify that applying our change of coordinate matrix produces the expected result
££ [v]_C = M_{E,C} [v]_E = \begin{bmatrix} 2/5 & 1/5\\ -1/5 & 2/5 \end{bmatrix} \begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} ££
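The same computation can be done with glm. A sketch; note that glm stores matrices in column-major order, so the mat2 constructor below receives the columns $[c_1]_E$ and $[c_2]_E$ in order:

```cpp
#include <glm/glm.hpp>
#include <cstdio>

int main() {
    // Columns are [c1]_E = (2, 1) and [c2]_E = (-1, 2).
    glm::mat2 M_CE( 2.0f, 1.0f,   // first column
                   -1.0f, 2.0f);  // second column
    glm::mat2 M_EC = glm::inverse(M_CE);

    glm::vec2 v_E(1.0f, 3.0f);   // [v]_E
    glm::vec2 v_C = M_EC * v_E;  // expected: (1, 1) = [v]_C

    std::printf("[v]_C = (%f, %f)\n", v_C.x, v_C.y);
}
```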
The Look-At function
Consider the vector space of our 3D world, call it $W = \mathbb{R^3}$, whose elements are the vectors that define our 3D objects. $W$ is spanned by the vectors in the standard orthonormal basis $E = \{ e_1, e_2, e_3 \}$.
Let's now consider some notion of a camera that is able to rotate and move around. This camera has its own local coordinate system and thus its own set of orthonormal basis vectors $C = \{f, r, u\}$ that define its orientation in our 3D world.
In order to map the objects in our world space into our view space we need to do two things.
- As the camera rotates, we need to take all vectors, given by their coordinates relative to the basis $E$, and map them to their coordinates relative to the basis $C$.
- As the camera moves around, we need to translate all vectors by adding a displacement vector calculated from the camera's current position.
The Rotation Matrix
As you probably guessed, our first task is to find the change of basis matrix $M_{E,C}$.
Since the basis vectors $f, r, u \in C$ of the camera are linear combinations of the basis vectors $e_1, e_2, e_3$ in the standard Euclidean basis $E$, there exists a change of basis matrix $M_{C,E}$.
Let ££f = f_x e_1 + f_y e_2 + f_z e_3££ ££r = r_x e_1 + r_y e_2 + r_z e_3££ ££u = u_x e_1 + u_y e_2 + u_z e_3££
then
££ [f]_E = \begin{bmatrix} f_x\\ f_y \\ f_z \\ \end{bmatrix}, \hskip{0.5em} [r]_E = \begin{bmatrix} r_x\\ r_y \\ r_z \\ \end{bmatrix}, \hskip{0.5em} [u]_E = \begin{bmatrix} u_x\\ u_y \\ u_z \\ \end{bmatrix} ££ and ££ M_{C,E} = \begin{bmatrix} f_x & r_x & u_x \\ f_y & r_y & u_y \\ f_z & r_z & u_z \end{bmatrix} ££
To find the matrix we want, we simply need to invert this matrix, and since $M_{C,E}$ is an orthonormal matrix we know that $M_{C,E}^{-1} = M_{C,E}^{T}$:
££ M_{E,C} = \begin{bmatrix} f_x & f_y & f_z \\ r_x & r_y & r_z \\ u_x & u_y & u_z \end{bmatrix} ££
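In code, assembling this rotation matrix from the camera's basis vectors could look something like the following sketch (the helper name is mine, the vectors f, r, u are assumed to already be orthonormal, and glm's mat3 constructor takes columns, hence the transpose):

```cpp
#include <glm/glm.hpp>

// Rotation part of the view matrix: the change of basis matrix M_{E,C},
// whose rows are the camera's orthonormal basis vectors f, r, u
// expressed in the standard basis E.
glm::mat3 cameraRotation(const glm::vec3& f, const glm::vec3& r, const glm::vec3& u) {
    // glm::mat3's vector constructor takes columns, so this builds M_{C,E}
    // (columns [f]_E, [r]_E, [u]_E) and transposes it: M_{E,C} = M_{C,E}^T.
    return glm::transpose(glm::mat3(f, r, u));
}
```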
TODO
- Constructing the orthonormal basis $C$
The Translation Matrix
TODO
- Describe affine transformations and homogeneous coordinates
- The rest is trivial
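As a preview of where this is heading once the translation part is written up, here is a rough, hand-rolled sketch that combines the rotation above with a translation that moves the camera position to the origin. It is not claimed to match glm::lookAt exactly (in particular, glm flips the forward direction so the camera looks down the negative z axis); it is just the two steps from the list earlier glued together:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::translate

// Rough look-at sketch: build the camera basis, then rotate the world into it
// after translating so that the camera position (eye) sits at the origin.
glm::mat4 myLookAt(const glm::vec3& eye, const glm::vec3& center, const glm::vec3& worldUp) {
    glm::vec3 f = glm::normalize(center - eye);            // forward
    glm::vec3 r = glm::normalize(glm::cross(f, worldUp));  // right
    glm::vec3 u = glm::cross(r, f);                        // up, already unit length

    // Rotation part: M_{E,C} = M_{C,E}^T, embedded in a 4x4 matrix.
    glm::mat4 R = glm::transpose(glm::mat4(glm::mat3(f, r, u)));

    // Translation part: move eye to the origin before rotating.
    glm::mat4 T = glm::translate(glm::mat4(1.0f), -eye);

    return R * T;
}
```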