
## Q. 3.10

Let

$A=\left[\begin{matrix} 1 & 0 \\ 1 & 1 \\ 1 & -1 \end{matrix} \right].$

Do the same problems as Example 3.9.

## Verified Solution

Perform elementary row operations on

$\left[A\mid \overrightarrow{b^{*} }\mid I_{3} \right] = \left[\begin{array}{cc:c:ccc} 1 & 0 & b_{1} & 1 & 0 & 0 \\ 1 & 1 & b_{2} & 0 & 1 & 0 \\ 1 & -1 & b_{3} & 0 & 0 & 1 \end{array} \right]$

$\underset{\begin{matrix} E_{\left(2\right)-\left(1\right) } \\ E_{\left(3\right)-\left(1\right)} \end{matrix} }{\longrightarrow} \left[\begin{array}{cc:c:ccc} 1 & 0 & b_{1} & 1 & 0 & 0 \\ 0 & 1 & b_{2}-b_{1} & -1 & 1 & 0 \\ 0 & -1 & b_{3}-b_{1} & -1 & 0 & 1 \end{array} \right]$

$\underset{\text{}E_{\left(3\right)+\left(2\right) } }{\longrightarrow} \left[\begin{array}{cc:c:ccc} 1 & 0 & b_{1} & 1 & 0 & 0 \\ 0 & 1 & b_{2}-b_{1} & -1 & 1 & 0 \\ 0 & 0 & b_{2}+b_{3}-2b_{1} & -2 & 1 & 1 \end{array} \right].\left(*_{18} \right)$

From $\left(*_{18} \right)$

$A \overrightarrow{x^{*}}=\overrightarrow{b^{*}}$ has a solution $\overrightarrow{x}=\left(x_{1},x_{2}\right) \in R^{2}$

$\Leftrightarrow b_{2} +b_{3}-2b_{1}=0,$

in which case the solution is $x_{1}=b_{1}, x_{2}=b_{2}-b_{1}.$

The constrained condition $b_{2}+b_{3}-2b_{1}=0$ can also be seen by eliminating $x_{1},x_{2}$ from the set of equations $x_{1}=b_{1}, x_{1}+x_{2}=b_{2},x_{1}-x_{2}=b_{3}.$
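The solvability condition and the solution formula can be checked numerically. The following NumPy sketch (not part of the original solution) uses an arbitrarily chosen consistent right-hand side:

```python
import numpy as np

# Sketch: a right-hand side with b2 + b3 - 2*b1 = 0 is consistent,
# and x1 = b1, x2 = b2 - b1 solves A x* = b*.
A = np.array([[1, 0], [1, 1], [1, -1]], dtype=float)

b = np.array([2.0, 5.0, -1.0])       # satisfies b2 + b3 - 2*b1 = 0
x = np.array([b[0], b[1] - b[0]])    # x1 = b1, x2 = b2 - b1
residual = A @ x - b                 # should be the zero vector
```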

$\left(*_{18} \right)$ also indicates that

$BA=I_{2},$  where $B = \left[\begin{matrix} 1 & 0 & 0 \\ -1 & 1 & 0 \end{matrix} \right]$

i.e. B is a left inverse of A. In general, let $B=\left[\begin{matrix} \overrightarrow{v_{1}} \\ \overrightarrow{v_{2}} \end{matrix} \right]_{2\times3}.$ Then

$BA=\left[\begin{matrix} \overrightarrow{v_{1}} \\ \overrightarrow{v_{2}} \end{matrix} \right] A =\left[\begin{matrix} \overrightarrow{v_{1}}A \\ \overrightarrow{v_{2}}A \end{matrix} \right]=I_{2}$

$\Leftrightarrow \overrightarrow{v_{1}}A=\overrightarrow{e_{1}}$

$\overrightarrow{v_{2}}A=\overrightarrow{e_{2}}.$

Suppose $\overrightarrow{v_{1} }=\left(x_{1},x_{2},x_{3}\right).$ Then

$\overrightarrow{v_{1}}A=\overrightarrow{e_{1}}$

$\Leftrightarrow \left\{\begin{matrix} x_{1}+x_{2}+x_{3}=1 \\ x_{2}-x_{3}=0 \end{matrix} \right.$

$\Leftrightarrow\overrightarrow{v_{1}}=\left(1,0,0 \right)+t_{1}\left(-2,1,1 \right), t_{1}\in R.$

Similarly, let $\overrightarrow{v_{2} }=\left(x_{1},x_{2},x_{3}\right).$ Then

$\overrightarrow{v_{2}}A=\overrightarrow{e_{2}}$

$\Leftrightarrow \left\{\begin{matrix} x_{1}+x_{2}+x_{3}=0 \\ x_{2}-x_{3}=1 \end{matrix} \right.$

$\Leftrightarrow\overrightarrow{v_{2}}=\left(-1,1,0 \right)+t_{2}\left(-2,1,1 \right), t_{2}\in R.$

Thus, the left inverses of A are

$B=\left [ \begin{matrix} 1-2t_{1} & t_{1} & t_{1} \\ -1-2t_{2} & 1+t_{2} & t_{2} \end{matrix} \right ] _{2\times 3}$   for $t_{1},t_{2}\in R. \left(*_{19} \right)$
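The two-parameter family $\left(*_{19}\right)$ can be verified directly: every choice of $t_{1},t_{2}$ yields a left inverse. A NumPy sketch (illustrative, not from the text; the parameter values are arbitrary):

```python
import numpy as np

# Sketch: B(t1, t2) from (*_19) satisfies B A = I_2 for any t1, t2.
A = np.array([[1, 0], [1, 1], [1, -1]], dtype=float)

def left_inverse(t1, t2):
    return np.array([[1 - 2*t1, t1, t1],
                     [-1 - 2*t2, 1 + t2, t2]], dtype=float)

B = left_inverse(0.7, -3.2)   # arbitrary parameters
check = B @ A                 # should be the 2x2 identity
```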

On the other hand, $\left(*_{18} \right)$ says

$E_{\left(3\right)+\left(2\right) }E_{\left(3\right)-\left(1\right)}E_{\left(2\right)-\left(1\right) }A=PA= \left [ \begin{matrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{matrix} \right ]$

(row-reduced echelon matrix and normal form of A),

$P=E_{\left(3\right)+\left(2\right) }E_{\left(3\right)-\left(1\right) }E_{\left(2\right)-\left(1\right) }=\left [ \begin{matrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -2 & 1 & 1 \end{matrix} \right ]$

$\Rightarrow A= E^{-1}_{\left(2\right)-\left(1\right) }E^{-1}_{\left(3\right)-\left(1\right) }E^{-1}_{\left(3\right)+\left(2\right) }\left [ \begin{matrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{matrix} \right ]=P^{-1}\left [ \begin{matrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{matrix} \right ]$

$=\left [ \begin{matrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & -1 & 1 \end{matrix} \right ]\left [ \begin{matrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{matrix} \right ]$ (LU-decomposition).

Refer to (1) in (2.7.70).
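The factorization above is easy to confirm numerically. The sketch below (not part of the original solution) rebuilds the three elementary matrices of $\left(*_{18}\right)$ and recovers both $P$ and $A$:

```python
import numpy as np

# Sketch: P = E_(3)+(2) E_(3)-(1) E_(2)-(1) and A = P^{-1} (normal form).
E21 = np.array([[1., 0, 0], [-1, 1, 0], [0, 0, 1]])   # row2 -= row1
E31 = np.array([[1., 0, 0], [0, 1, 0], [-1, 0, 1]])   # row3 -= row1
E32 = np.array([[1., 0, 0], [0, 1, 0], [0, 1, 1]])    # row3 += row2

P = E32 @ E31 @ E21
N = np.array([[1., 0], [0, 1], [0, 0]])               # normal form of A
A = np.linalg.inv(P) @ N                              # recovers A
```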

To investigate $A^{*}$ and $A^{*}A,$ consider $\overrightarrow{x}A=\overrightarrow{b}$ for $\overrightarrow{x}=\left(x_{1},x_{2},x_{3}\right)\in R^{3}$ and $\overrightarrow{b}=\left(b_{1},b_{2}\right)\in R^{2}.$

By simple computation or by $\left(*_{18} \right)$

$Ker\left(A \right)= \ll \left(-2,1,1\right) \gg = Im \left(A^{*} \right) ^{\bot },$

$Im\left(A \right)=R^{2}=Ker\left(A ^{*}\right)^{\bot },$

$Ker\left(A^{*} \right)=\left\{0\right\} = Im \left(A\right) ^{\bot },$

$Im\left(A^{*} \right)=\left\{\left(x_{1},x_{2},x_{3} \right)\in R^{3} \mid 2x_{1}-x_{2}-x_{3}=0 \right\}=\ll \left(1,2,0\right),\left(1,0,2\right) \gg$

$=\ll \left(1,1,1\right),\left(0,1,-1\right) \gg=Ker\left(A \right)^{\bot },$

and

$A^{*} A=\left[\begin{matrix} 3 & 0 \\ 0 & 2 \end{matrix} \right ],$

$\left(A^{*} A\right)^{-1} = \frac{1}{6} \left[\begin{matrix} 2 & 0 \\ 0& 3 \end{matrix} \right ].$

For any fixed $\overrightarrow{b} \in R^{2}, \overrightarrow{x}A= \overrightarrow{b}$ always has a particular solution $\overrightarrow{b}\left(A^{*} A\right)^{-1}A^{*}$ and the solution set is

$\overrightarrow{b}\left(A^{*} A\right)^{-1}A^{*} + Ker\left(A \right), \left(*_{20} \right)$

which is a one-dimensional affine subspace of R³. Among so many solutions, it is $\overrightarrow{b}\left(A^{*} A\right)^{-1}A^{*}$ that has the shortest distance to the origin $\overrightarrow{0}$ (see Ex. <B> 7 of Sec. 3.7.3). For simplicity, let

$A^{+}=\left(A^{*} A\right)^{-1}A^{*}$

$= \frac{1}{6} \left[\begin{matrix} 2 & 0 \\ 0& 3 \end{matrix} \right ]\left[\begin{matrix} 1 & 1 & 1 \\ 0 & 1 & -1 \end{matrix} \right ]=\frac{1}{6}\left[\begin{matrix} 2 & 2 & 2 \\ 0 & 3 & -3 \end{matrix} \right ]_{2 \times 3} \left(*_{21} \right)$

and can be considered as a linear transformation from R² into R³ with the range space $Im\left(A^{*} \right).$ Since

$A^{+}A=I_{2},$

$A^{+}$ is a left inverse of A.
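Since $A$ has full column rank, this $A^{+}$ coincides with the Moore–Penrose pseudoinverse, which NumPy computes as `np.linalg.pinv`. A quick sketch (not part of the original solution):

```python
import numpy as np

# Sketch: A+ = (A*A)^{-1} A* is a left inverse of A and equals pinv(A)
# because A has full column rank.
A = np.array([[1., 0], [1, 1], [1, -1]])

A_plus = np.linalg.inv(A.T @ A) @ A.T   # (*_21): (1/6) [[2,2,2],[0,3,-3]]
```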

Therefore, it is reasonable to expect that one of the left inverses shown in $\left(*_{19} \right)$ should be $A^{+}.$ Since the range of $A^{+}$ is $Im\left(A^{*} \right),$ then

B in $\left(*_{19} \right)$ is $A^{+}.$

$\Leftrightarrow$ The range space of $B =Im\left(A^{*} \right)$

$\Leftrightarrow \left(1-2t_{1} ,t_{1},t_{1}\right)$ and $\left(-1-2t_{2} ,1+t_{2},t_{2}\right)$ are in $Im\left(A^{*} \right).$

$\Leftrightarrow \left\{\begin{matrix} 2 \left(1-2t_{1}\right)-t_{1}-t_{1}=0 \\ 2 \left(-1-2t_{2}\right)- \left(1+t_{2}\right)-t_{2}=0 \end{matrix} \right.$

$\Rightarrow t_{1}=\frac{1}{3}$ and $t_{2}=-\frac{1}{2}.$

In this case, B in $\left(*_{19} \right)$ is indeed equal to $A^{+}.$ This $A^{+}$ is called the generalized inverse of A.
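A short numerical check (illustrative, not from the text) that the parameters $t_{1}=\frac{1}{3}, t_{2}=-\frac{1}{2}$ in $\left(*_{19}\right)$ indeed produce $A^{+}$:

```python
import numpy as np

# Sketch: substituting t1 = 1/3, t2 = -1/2 into (*_19) gives A+.
A = np.array([[1., 0], [1, 1], [1, -1]])
t1, t2 = 1/3, -1/2
B = np.array([[1 - 2*t1, t1, t1],
              [-1 - 2*t2, 1 + t2, t2]])

A_plus = np.linalg.inv(A.T @ A) @ A.T
```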

How about $AA^{*}?$

$AA^{*}= \left [ \begin{matrix} 1 & 0 \\ 1 & 1 \\ 1 & -1 \end{matrix} \right ] \left [ \begin{matrix} 1 & 1 & 1 \\ 0 & 1 & -1 \end{matrix} \right ] = \left [ \begin{matrix} 1 & 1 & 1 \\ 1 & 2 & 0 \\ 1 & 0 & 2 \end{matrix} \right ] \left(*_{22} \right)$

is a linear operator on R³ with the range space $Im\left(A^{*} \right).$ Actual computation shows that $\left(AA^{*} \right)^{2}\neq AA^{*} .$ Therefore, $AA^{*}$ is not a projection of R³ onto $Im\left(A^{*} \right)$ along $Ker\left(A\right).$ Also

| eigenvalues of $AA^{*}$ | eigenvectors |
|---|---|
| 2 | $\overrightarrow{v_{1} }=\left(0,\frac{1}{\sqrt{2} },-\frac{1}{\sqrt{2} } \right)$ |
| 3 | $\overrightarrow{v_{2} }=\left(\frac{1}{\sqrt{3} },\frac{1}{\sqrt{3} },\frac{1}{\sqrt{3} } \right)$ |
| 0 | $\overrightarrow{v_{3} }=\left(-\frac{2}{\sqrt{6} },\frac{1}{\sqrt{6} },\frac{1}{\sqrt{6} } \right)$ |

indicates that $AA^{*}$ is not a projection (see (3.7.34)). Notice that

$QAA^{*}Q^{-1}=\left [ \begin{matrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 0 \end{matrix} \right ],$  where  $Q= \left[\begin{matrix} \overrightarrow{v_{1} } \\ \overrightarrow{v_{2} } \\ \overrightarrow{v_{3} } \end{matrix} \right]$ is orthogonal.

$\Rightarrow QAA^{*}Q^{*}=\left [ \begin{matrix} \sqrt{2} & 0 & 0 \\ 0 & \sqrt{3} & 0 \\ 0 & 0 & 1 \end{matrix} \right ] \left [ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{matrix} \right ]\left [ \begin{matrix} \sqrt{2} & 0 & 0 \\ 0 & \sqrt{3} & 0 \\ 0 & 0 & 1 \end{matrix} \right ]$

$\Rightarrow R\left(AA^{*} \right)R^{*}=\left[\begin{matrix} I_{2} & 0 \\ 0 & 0 \end{matrix} \right] _{3\times 3},$

where

$R= \left [ \begin{matrix} \frac{1}{\sqrt{2}} & 0 & 0 \\ 0 & \frac{1}{\sqrt{3}} & 0 \\ 0 & 0 & 1 \end{matrix} \right ] \left [ \begin{matrix} 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\ -\frac{2}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \end{matrix} \right ] = \left [ \begin{matrix} 0 & \frac{1}{2} & -\frac{1}{2} \\ \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \\ -\frac{2}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \end{matrix} \right ].$
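These facts about $AA^{*}$ can all be confirmed numerically. The sketch below (not part of the original solution) checks that $AA^{*}$ is not idempotent, that its eigenvalues are $2, 3, 0$, and that the congruence $R\left(AA^{*}\right)R^{*}$ gives $\left[\begin{smallmatrix} I_{2} & 0 \\ 0 & 0 \end{smallmatrix}\right]$:

```python
import numpy as np

# Sketch: AA* is symmetric but not idempotent (so not a projection);
# R (AA*) R^T = diag(1, 1, 0) with R as defined above.
A = np.array([[1., 0], [1, 1], [1, -1]])
AAs = A @ A.T                                # [[1,1,1],[1,2,0],[1,0,2]]

idempotent = np.allclose(AAs @ AAs, AAs)     # False: not a projection
eigvals = np.linalg.eigvalsh(AAs)            # ascending: 0, 2, 3

R = np.array([[0, 1/2, -1/2],
              [1/3, 1/3, 1/3],
              [-2/np.sqrt(6), 1/np.sqrt(6), 1/np.sqrt(6)]])
congruence = R @ AAs @ R.T                   # diag(1, 1, 0)
```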

Thus, the index of $AA^{*}$ is 2 and the signature is equal to 2. By the way, we pose the question: What is the preimage of the unit circle (or disk) $y^{2}_{1} +y^{2}_{2}=1$ (or ≤1) under A? Let $\overrightarrow{y}=\left(y_{1},y_{2}\right)=\overrightarrow{x}A.$ Then

$y^{2}_{1} +y^{2}_{2}=\overrightarrow{y}\overrightarrow{y^{*} } =1$

$\Leftrightarrow \left(\overrightarrow{x}A\right)\left(\overrightarrow{x}A\right)^{*}=\overrightarrow{x}AA^{*}\overrightarrow{x^{*} }$

$=x^{2}_{1} +2x^{2}_{2}+2x^{2}_{3}+2x_{1}x_{2}+2x_{1}x_{3},$ in the natural basis for R³

$= 2x^{\prime2}_{1}+3x^{\prime2}_{2},$ in the basis $\left\{\overrightarrow{v_{1}},\overrightarrow{v_{2}},\overrightarrow{v_{3}}\right\}=B$

$= x^{\prime \prime 2}_{1}+x^{\prime \prime 2}_{2},$ in the basis $\left\{R_{1^{*} }, R_{2^{*} },R_{3^{*} }\right\}=C$

= 1,    $\left(*_{23} \right)$

where $\left(x^{\prime}_{1},x^{\prime}_{2},x^{\prime}_{3}\right)= \left[\overrightarrow{x} \right] _{B} =\overrightarrow{x}Q^{-1}$ and $\left(x^{\prime \prime}_{1},x^{\prime \prime}_{2},x^{\prime \prime}_{3}\right)=\left[\overrightarrow{x} \right] _{C} =\overrightarrow{x}R^{-1}.$ See Fig. 3.53.
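The equality of the quadratic-form expressions in $\left(*_{23}\right)$ can be compared numerically at a sample point (a sketch; the test point is arbitrary):

```python
import numpy as np

# Sketch: x AA* x^T evaluated directly equals 2 x1'^2 + 3 x2'^2
# in the eigenbasis B, where (x1', x2', x3') = x Q^{-1}.
A = np.array([[1., 0], [1, 1], [1, -1]])
AAs = A @ A.T
Q = np.array([[0, 1/np.sqrt(2), -1/np.sqrt(2)],
              [1/np.sqrt(3), 1/np.sqrt(3), 1/np.sqrt(3)],
              [-2/np.sqrt(6), 1/np.sqrt(6), 1/np.sqrt(6)]])

x = np.array([1.0, -2.0, 0.5])               # arbitrary test point
direct = x @ AAs @ x                         # x1^2+2x2^2+2x3^2+2x1x2+2x1x3
xp = x @ Q.T                                 # Q orthogonal: Q^{-1} = Q^T
diagonal = 2*xp[0]**2 + 3*xp[1]**2
```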

Replace $A^{*}$  in  $AA^{*}$  by $A^{+}$ and, in turn, consider

$AA^{+} =\left [ \begin{matrix} 1 & 0 \\ 1 & 1 \\ 1 & -1 \end{matrix} \right ] \cdot \frac{1}{6}\left [ \begin{matrix} 2 & 2 & 2 \\ 0 & 3 & -3 \end{matrix} \right ] = \frac{1}{6}\left [ \begin{matrix} 2 & 2 & 2 \\ 2 & 5 & -1 \\ 2 & -1 & 5 \end{matrix} \right ].$

$AA^{+}$ is symmetric and

$\left(AA^{+}\right)^{2}=AA^{+}AA^{+}=AI_{2}A^{+}=AA^{+}.$

Therefore, $AA^{+}: R^{3} \rightarrow Im \left(A^{*} \right)\subseteq R^{3}$ is the orthogonal projection of R³ onto $Im \left(A^{*} \right)$ along $Ker \left(A \right).$ Note that

$Im \left(AA^{+} \right) = Im \left(A^{+} \right) = Ker \left(A\right) ^{\bot } = Ker\left(AA^{+} \right) ^{\bot },$

$Ker\left(AA^{+} \right) = Ker \left(A \right) = Im \left(A^{*} \right) ^{\bot } = Im \left(AA^{+} \right) ^{\bot }.$
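These projection properties can be confirmed directly (a NumPy sketch, not part of the original solution):

```python
import numpy as np

# Sketch: AA+ is symmetric, idempotent, and annihilates the Ker(A)
# direction (-2, 1, 1) -- an orthogonal projection onto Im(A*).
A = np.array([[1., 0], [1, 1], [1, -1]])
A_plus = np.linalg.inv(A.T @ A) @ A.T
Pproj = A @ A_plus                                 # (1/6)[[2,2,2],[2,5,-1],[2,-1,5]]

symmetric = np.allclose(Pproj, Pproj.T)
idempotent = np.allclose(Pproj @ Pproj, Pproj)
kills_kernel = np.allclose(np.array([-2., 1, 1]) @ Pproj, 0)
```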

What is the reflection or symmetric point of a point $\overrightarrow{x}$ in R³ with respect to the plane $Im \left(A^{*} \right)?$ Notice that (see $\left(*_{16} \right)$)

$\overrightarrow{x} \in R^{3}$

$\rightarrow \overrightarrow{x} AA^{+},$ the orthogonal projection of $\overrightarrow{x}$ on $Im \left(A^{*} \right)$

$\rightarrow \overrightarrow{x} AA^{+} + \left(\overrightarrow{x} AA^{+}- \overrightarrow{x}\right)= \overrightarrow{x}\left(2AA^{+} -I_{3}\right),$ the reflection point. $\left(*_{24} \right)$

Thus, denote the linear operator

$P_{A}= 2AA^{+} -I_{3} = 2 \cdot \frac{1}{6} \left [ \begin{matrix} 2 & 2 & 2 \\ 2 & 5 & -1 \\ 2 & -1 & 5 \end{matrix} \right ]-\left [ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix} \right ]=\left [ \begin{matrix} -\frac{1}{3} & \frac{2}{3} & \frac{2}{3} \\ \\ \frac{2}{3} & \frac{2}{3} & -\frac{1}{3} \\ \\ \frac{2}{3} & -\frac{1}{3} & \frac{2}{3} \end{matrix} \right ].$

$P_{A}$ is symmetric and orthogonal, i.e. $P^{*}_{A}=P^{-1}_{A},$ and is called the reflection of R³ with respect to $Im \left(A^{*} \right).$
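A short check (illustrative, not from the text) that $P_{A}$ is symmetric, orthogonal, and reverses the kernel direction $\left(-2,1,1\right)$:

```python
import numpy as np

# Sketch: P_A = 2 AA+ - I_3 is a symmetric orthogonal involution that
# fixes Im(A*) and negates Ker(A).
A = np.array([[1., 0], [1, 1], [1, -1]])
A_plus = np.linalg.inv(A.T @ A) @ A.T
PA = 2 * (A @ A_plus) - np.eye(3)          # (1/3)[[-1,2,2],[2,2,-1],[2,-1,2]]

kernel_dir = np.array([-2., 1, 1])         # spans Ker(A)
reflected = kernel_dir @ PA                # should be -(-2, 1, 1)
```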

A simple calculation shows that

| eigenvalues of $AA^{+}$ | eigenvalues of $P_{A}$ | eigenvectors |
|---|---|---|
| 1 | 1 | $\overrightarrow{u_{1} }=\left(\frac{1}{\sqrt{5} },\frac{2}{\sqrt{5} },0 \right)$ |
| 1 | 1 | $\overrightarrow{u_{2} }=\left(\frac{2}{\sqrt{30} },-\frac{1}{\sqrt{30} },\frac{5}{\sqrt{30} } \right)$ |
| 0 | -1 | $\overrightarrow{u_{3} }=\left(-\frac{2}{\sqrt{6} },\frac{1}{\sqrt{6} },\frac{1}{\sqrt{6} } \right)$ |

$D=\left\{\overrightarrow{u_{1} } ,\overrightarrow{u_{2} } ,\overrightarrow{u_{3} } \right\}$ is an orthonormal basis for R³. In D,

$\left[AA^{+}\right] _{D}=S\left(AA^{+}\right)S^{-1}=\left [ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{matrix} \right ] ,$   where $S = \left[\begin{matrix} \overrightarrow{u_{1} } \\ \overrightarrow{u_{2} } \\ \overrightarrow{u_{3} } \end{matrix} \right]$ is orthogonal;

$\left[P_{A}\right] _{D}= SP_{A}S^{-1}=\left [ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{matrix} \right ].$
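Both diagonalizations can be verified numerically (a sketch, not part of the original solution; $S$ is the orthogonal matrix with rows $\overrightarrow{u_{1}}, \overrightarrow{u_{2}}, \overrightarrow{u_{3}}$):

```python
import numpy as np

# Sketch: in the orthonormal basis D, the projection AA+ and the
# reflection P_A become diag(1,1,0) and diag(1,1,-1) respectively.
A = np.array([[1., 0], [1, 1], [1, -1]])
A_plus = np.linalg.inv(A.T @ A) @ A.T
Pproj = A @ A_plus
PA = 2 * Pproj - np.eye(3)

S = np.array([[1/np.sqrt(5), 2/np.sqrt(5), 0],
              [2/np.sqrt(30), -1/np.sqrt(30), 5/np.sqrt(30)],
              [-2/np.sqrt(6), 1/np.sqrt(6), 1/np.sqrt(6)]])

proj_D = S @ Pproj @ S.T    # [AA+]_D
refl_D = S @ PA @ S.T       # [P_A]_D
```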

Try to explain $\left[AA^{+}\right] _{D}$ and $\left[P_{A}\right] _{D}$ graphically.

As a counterpart of (3.7.40), we summarize in