Question 3.12

Let \overrightarrow{a_{1} }=\left(-1,1,1\right), \overrightarrow{a_{2} }=\left(1,-1,1\right) and \overrightarrow{a_{3} }=\left(1,1,-1\right).

(1) Try to find linear operators mapping the tetrahedron \Delta \overrightarrow{0}\overrightarrow{a_{1} }\overrightarrow{a_{2} } \overrightarrow{a_{3} } onto the tetrahedron \Delta \overrightarrow{0} \left(-\overrightarrow{a_{1} }\right) \left(-\overrightarrow{a_{2} }\right) \left(-\overrightarrow{a_{3} }\right). See Fig. 3.54(a).

(2) Try to find a linear operator mapping the tetrahedron \Delta \overrightarrow{0}\overrightarrow{a_{1} }\overrightarrow{a_{2} } \overrightarrow{a_{3} } onto the parallelogram \overrightarrow{a_{1} }\overrightarrow{a_{2} }. See Fig. 3.54(b).


(1) There are six such linear operators. The simplest among them, say f_{1}, is the one satisfying

f_{1} \left(\overrightarrow{a_{i} }\right)=-\overrightarrow{a_{i} } for 1 ≤ i ≤ 3.

In the natural basis N=\left\{\overrightarrow{e_{1} },\overrightarrow{e_{2} },\overrightarrow{e_{3} }\right\},

\left[ f_{1}\right] _{N} =\left[\begin{matrix} \overrightarrow{a_{1} } \\ \overrightarrow{a_{2} } \\ \overrightarrow{a_{3} } \end{matrix} \right] ^{-1} \left[\begin{matrix} -1 & & 0 \\ & -1 & \\ 0 & &-1 \end{matrix} \right] \left[\begin{matrix} \overrightarrow{a_{1} } \\ \overrightarrow{a_{2} } \\ \overrightarrow{a_{3} } \end{matrix} \right]=-I_{3}

\Rightarrow f_{1}\left(\overrightarrow{x} \right) =\overrightarrow{x}\left(-I_{3}\right)=-\overrightarrow{x}.
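As a quick numerical check (a minimal numpy sketch; the variable names are ours), the similarity computation above indeed collapses to -I_{3}:

import numpy as np

# Rows of P are a1, a2, a3 (row-vector convention used throughout this text).
P = np.array([[-1,  1,  1],
              [ 1, -1,  1],
              [ 1,  1, -1]])

# [f1]_N = P^{-1} (-I3) P should reduce to -I3.
f1_N = np.linalg.inv(P) @ (-np.eye(3)) @ P
print(np.allclose(f1_N, -np.eye(3)))   # True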

It is also possible to map \overrightarrow{a_{1} } and \overrightarrow{a_{2} } to -\overrightarrow{a_{2} } and -\overrightarrow{a_{1} }, respectively, while \overrightarrow{a_{3} } is mapped to -\overrightarrow{a_{3} }. Denote by f_{2} such a linear operator. Then

f_{2}\left(\overrightarrow{a_{1} }\right) =-\overrightarrow{a_{2} },

f_{2}\left(\overrightarrow{a_{2} }\right) =-\overrightarrow{a_{1} },

f_{2}\left(\overrightarrow{a_{3} }\right) =-\overrightarrow{a_{3} }.          \left(*_{1} \right)

\Rightarrow \left[ f_{2}\right] _{N} = P^{-1} \left[\begin{matrix} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 &-1 \end{matrix} \right] P,  where P=\left[\begin{matrix} \overrightarrow{a_{1} } \\ \overrightarrow{a_{2} } \\ \overrightarrow{a_{3} } \end{matrix} \right]=\left[\begin{matrix} -1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 &-1 \end{matrix} \right]

= \frac{1}{2} \left[\begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{matrix} \right]\left[\begin{matrix} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & -1 \end{matrix} \right]\left[\begin{matrix} -1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 &-1 \end{matrix} \right]=\left[\begin{matrix} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & -1 \end{matrix} \right]

\Rightarrow f_{2}\left(\overrightarrow{x} \right) = \overrightarrow{x}\left[ f_{2}\right] _{N} =\overrightarrow{x}\left[\begin{matrix} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & -1 \end{matrix} \right]=-\overrightarrow{x}\left[\begin{matrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{matrix} \right].  \left(*_{2} \right)
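The computation of \left[ f_{2}\right] _{N} and the action of f_{2} on \overrightarrow{a_{1} }, \overrightarrow{a_{2} }, \overrightarrow{a_{3} } can also be verified numerically; a minimal numpy sketch (variable names are ours, row-vector convention as in the text):

import numpy as np

P = np.array([[-1,  1,  1],
              [ 1, -1,  1],
              [ 1,  1, -1]])            # rows: a1, a2, a3
A = np.array([[ 0, -1,  0],
              [-1,  0,  0],
              [ 0,  0, -1]])

f2_N = np.linalg.inv(P) @ A @ P
print(np.allclose(f2_N, A))             # True: [f2]_N is the matrix in (*_2)

# Row-vector convention of the text: f2(x) = x @ [f2]_N.
a1, a2, a3 = P
print(np.allclose(a1 @ f2_N, -a2),      # True
      np.allclose(a2 @ f2_N, -a1),      # True
      np.allclose(a3 @ f2_N, -a3))      # True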

Notice that \left(*_{1} \right) is equivalent to

-f_{2}\left(\overrightarrow{e}_{1} \right)+f_{2}\left(\overrightarrow{e}_{2} \right)+f_{2}\left(\overrightarrow{e}_{3} \right)=-\overrightarrow{a_{2} }, f_{2}\left(\overrightarrow{e}_{1} \right)-f_{2}\left(\overrightarrow{e}_{2} \right)+f_{2}\left(\overrightarrow{e}_{3} \right)=-\overrightarrow{a_{1} }, f_{2}\left(\overrightarrow{e}_{1} \right)+f_{2}\left(\overrightarrow{e}_{2} \right)-f_{2}\left(\overrightarrow{e}_{3} \right)=-\overrightarrow{a_{3} }

\Rightarrow f_{2}\left(\overrightarrow{e}_{1} \right)+f_{2}\left(\overrightarrow{e}_{2} \right)+f_{2}\left(\overrightarrow{e}_{3} \right)=-\left(\overrightarrow{a_{1} }+\overrightarrow{a_{2} }+\overrightarrow{a_{3} }\right)=-\left(1,1,1\right)

\Rightarrow f_{2}\left(\overrightarrow{e}_{1} \right)=-\overrightarrow{e}_{2}, f_{2}\left(\overrightarrow{e}_{2} \right)=-\overrightarrow{e}_{1} , f_{2}\left(\overrightarrow{e}_{3} \right)=-\overrightarrow{e}_{3} .  \left(*_{3} \right)

This is just \left(*_{2} \right). Since \left[ f_{2}\right] _{N} is symmetric, f_{2} is diagonalizable. Similarly, both

f_{3}\left(\overrightarrow{x} \right) = -\overrightarrow{x} \left[\begin{matrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{matrix} \right]  and f_{4}\left(\overrightarrow{x} \right) = -\overrightarrow{x} \left[\begin{matrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{matrix} \right]

are two more such linear operators.

The last two linear operators are

f_{5}\left(\overrightarrow{x} \right) = -\overrightarrow{x} \left[\begin{matrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{matrix} \right]  and f_{6}\left(\overrightarrow{x} \right) = -\overrightarrow{x} \left[\begin{matrix} 0& 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{matrix} \right].

Neither is diagonalizable over the real field. For details, see Sec. 3.7.8.
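The diagonalizability statements for f_{2}, \dots , f_{6} can be spot-checked numerically. A minimal sketch, listing the five permutation matrices that appear above (variable names are ours):

import numpy as np

# [f_i]_N = -(permutation matrix) for i = 2, ..., 6; f1 is simply -I3.
mats = {
    "f2": np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]]),
    "f3": np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]]),
    "f4": np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]]),
    "f5": np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]]),
    "f6": np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]]),
}
for name, M in mats.items():
    eig = np.linalg.eigvals(-M)
    real = np.allclose(eig.imag, 0)
    print(name, np.round(eig, 3), "all eigenvalues real" if real else "non-real eigenvalues")

# f2, f3, f4 are symmetric (transpositions), hence diagonalizable over R;
# f5, f6 (3-cycles) have a pair of non-real eigenvalues, so they cannot be
# diagonalized over the real field.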

(2) The parallelogram \overrightarrow{a_{1} }\overrightarrow{a_{2} } has vertices \overrightarrow{0},\overrightarrow{a_{1} },\overrightarrow{a_{1} }+\overrightarrow{a_{2} }=2 \overrightarrow{e_{3} } and \overrightarrow{a_{2} }.

Define a linear operator g: R³ → R³ as

g\left(\overrightarrow{a_{1} }\right) =\overrightarrow{a_{1} },

g\left(\overrightarrow{a_{2} }\right) =\overrightarrow{a_{2} },

g\left(\overrightarrow{a_{3} }\right) =\overrightarrow{a_{1} }+\overrightarrow{a_{2} }=2\overrightarrow{e_{3} }.

A computation like that in \left(*_{2} \right) and \left(*_{3} \right) leads to

g\left(\overrightarrow{x }\right)=\overrightarrow{x }\left[ g\right] _{N} =\overrightarrow{x }\left[\begin{matrix} \frac{1}{2} & -\frac{1}{2} & \frac{3}{2} \\ \\ -\frac{1}{2} & \frac{1}{2} & \frac{3}{2} \\ \\  0 & 0 & 1 \end{matrix} \right] or

\left[ g\right] _{B} =P\left[ g\right] _{N}P^{-1}=\left[\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 1 & 0 \end{matrix} \right],

where P is as above and B=\left\{\overrightarrow{a_{1} },\overrightarrow{a_{2} },\overrightarrow{a_{3} }\right\} . g is diagonalizable and

Q\left[ g\right] _{N}Q^{-1}=\left[\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{matrix} \right],  where Q=\left[\begin{matrix} -1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -3 \end{matrix} \right].

g is a projection of R³ onto the subspace \ll \overrightarrow{a_{1} },\overrightarrow{a_{2} }\gg along \ll \left(1,1,-3\right)\gg as can be visualized in Fig. 3.54(b).
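Both matrices of g and the diagonalization above can be verified numerically; a minimal numpy sketch (the variable names are ours):

import numpy as np

P = np.array([[-1,  1,  1],
              [ 1, -1,  1],
              [ 1,  1, -1]])                   # rows: a1, a2, a3
g_N = np.array([[ 0.5, -0.5, 1.5],
                [-0.5,  0.5, 1.5],
                [ 0.0,  0.0, 1.0]])

# g is a projection: g^2 = g.
print(np.allclose(g_N @ g_N, g_N))             # True

# Matrix of g in the basis B = {a1, a2, a3}.
print(P @ g_N @ np.linalg.inv(P))              # [[1,0,0],[0,1,0],[1,1,0]]

# Diagonalization with the row eigenvectors a1, a2 and (1, 1, -3).
Q = np.array([[-1,  1,  1],
              [ 1, -1,  1],
              [ 1,  1, -3]])
print(Q @ g_N @ np.linalg.inv(Q))              # diag(1, 1, 0)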

Readers are urged to find more such linear operators.

One of the main advantages of diagonalizable linear operators or matrices A is that the power

A^{n}

is easy to compute for n ≥ 1, and also for n < 0 when A is invertible. More precisely, suppose

A=P^{-1}\left[\begin{matrix} \lambda _{1} & & 0 \\ &\lambda _{2} & \\ 0 & &\lambda _{3} \end{matrix} \right]P

⇒ 1. det(A) = \lambda _{1} \lambda _{2} \lambda _{3} .

2. A is invertible ⇔ \lambda _{1} \lambda _{2} \lambda _{3} \neq 0 . In this case,

A^{-1}=P^{-1}\left[\begin{matrix} \lambda^{-1} _{1} & & 0 \\ &\lambda^{-1} _{2} & \\ 0 & &\lambda^{-1} _{3} \end{matrix} \right]P.

3. Hence

A^{n}=P^{-1}\left[\begin{matrix} \lambda^{n} _{1} & & 0 \\ &\lambda^{n} _{2} & \\ 0 & &\lambda^{n} _{3} \end{matrix} \right]P.

4. tr(A) = \lambda _{1}+\lambda _{2}+\lambda _{3}.

5. For any polynomial g\left(t\right)\in P _{n}\left(R\right),

g\left(A\right)=P^{-1}\left[\begin{matrix} g\left(\lambda _{1} \right)& & 0 \\ &g\left(\lambda _{2} \right) & \\ 0 & &g\left(\lambda _{3} \right) \end{matrix} \right]P.    (3.7.49)

These results still hold for any diagonalizable matrix of finite order.
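The numbered properties above are easy to verify numerically. A minimal numpy sketch, using a diagonalizable matrix of our own choosing (the matrix P, the eigenvalues 2, 3, 5 and the polynomial g(t) = t² − 4t + 1 below are illustrative assumptions, not taken from the text):

import numpy as np

# An arbitrary diagonalizable example: A = P^{-1} D P.
P = np.array([[-1.,  1.,  1.],
              [ 1., -1.,  1.],
              [ 1.,  1., -1.]])
D = np.diag([2.0, 3.0, 5.0])
A = np.linalg.inv(P) @ D @ P
lam = np.diag(D)

# Property 3: A^n = P^{-1} D^n P (here n = 4).
n = 4
print(np.allclose(np.linalg.matrix_power(A, n),
                  np.linalg.inv(P) @ np.diag(lam**n) @ P))        # True

# Property 2: A^{-1} = P^{-1} D^{-1} P.
print(np.allclose(np.linalg.inv(A),
                  np.linalg.inv(P) @ np.diag(1 / lam) @ P))       # True

# Property 5: for g(t) = t^2 - 4t + 1, g(A) = P^{-1} diag(g(lambda_i)) P.
gA = A @ A - 4 * A + np.eye(3)
print(np.allclose(gA,
                  np.linalg.inv(P) @ np.diag(lam**2 - 4*lam + 1) @ P))  # True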
