bestsauce opened this issue 7 years ago
@cpassos I was expecting a quick solution to Exercise 22 in 3.B using the fundamental theorem of linear maps.
Since $S: V \to W$ and $T: U \to V$, we conclude (by rearranging some inequalities) that $\operatorname{dim \ null} S \geq \operatorname{dim} V - \operatorname{dim} W$ and $\operatorname{dim \ null} T \geq \operatorname{dim} U - \operatorname{dim} V$. Adding the inequalities yields $\operatorname{dim \ null} S + \operatorname{dim \ null} T \geq \operatorname{dim} U - \operatorname{dim} W$. Since the result is true, we must conclude that $\operatorname{dim} U - \operatorname{dim} W \geq \operatorname{dim \ null} ST$.
But $ST : U \to W$ is also a linear map, so the fundamental theorem still applies, and we obtain (again after rearranging) $\operatorname{dim \ null} ST \geq \operatorname{dim} U - \operatorname{dim} W$.
I must've gone wrong somewhere. Where did I go wrong?
Side remark: This is the first time I'm taking a peek at the solutions for 3.B (it's for Exercise 22), and it seems that my initial guess, that Exercise 22 would be solved easily by rearranging inequalities and using the fundamental theorem of linear maps, was wrong; I need to construct a new linear map and then use the fundamental theorem. There's quite a lot of constructing linear maps satisfying some property $X$ going on in these exercises, so it's a useful skill to acquire and polish...
> Since the result is true, we must conclude that $\operatorname{dim} U - \operatorname{dim} W \geq \operatorname{dim \ null} ST$
No. This
$$ \operatorname{dim \ null} S + \operatorname{dim \ null} T \geq \operatorname{dim} U - \operatorname{dim} W $$
together with this
$$ \operatorname{dim \ null} S + \operatorname{dim \ null} T \geq \operatorname{dim \ null} ST $$
is not enough to imply
$$ \operatorname{dim} U - \operatorname{dim} W \geq \operatorname{dim \ null} ST $$
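The gap: both displayed inequalities bound the same quantity from below, and a common lower-bounded quantity says nothing about how its two lower bounds compare. A minimal numeric instance:
$$ 5 \geq 3 \quad \text{and} \quad 5 \geq 4, \quad \text{yet} \quad 3 \not\geq 4. $$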
@cpassos Right! My bad. This is why it's good to have somebody else to turn to, because sometimes you make these obvious mistakes but for some odd reason can't make heads or tails of what went wrong...
@cpassos Problem 24 (the $\implies$ direction) is indeed difficult. I've thought long and hard about it, but can't seem to "see" the way through to constructing the linear map $S$. I took a peek at your solution, but I'm still trying to understand the motivation behind considering direct sums and such. Does it have to do with previous exercises in 3.B?
@MaxisJaisi The direct sum bit in that solution was unnecessary; I've removed it now.
@cpassos Will look into it. I'm doing the exercises in 3.C and 3.D now. For some odd reason, whenever I get to matrices, I have to realign my brain to fix the orientation of columns and rows, even though I understand in principle how they need to be arranged, seeing them as representing linear maps. Also, the proliferation of indices $j, k, l, m$ and the sums over them in general proofs about properties of matrices makes me feel uneasy... I think I can understand how beginning students feel about manipulating tensors.
@cpassos Is there something I'm not getting about Exercise 26? $P(\mathbf{R})$ is infinite-dimensional, so there's no way to argue that $D$ is surjective using the fundamental theorem of linear maps directly. The other way is to show that any polynomial has an antiderivative, but this isn't what Axler wants. For some reason I think that when Axler says $\operatorname{deg} Dp = (\operatorname{deg} p) - 1$, that already implies that $D$ surjects onto $P(\mathbf{R})$.
@MaxisJaisi You can define a linear map $T: P_m(\mathbf{R}) \to P_{m-1}(\mathbf{R})$ such that $Tp = Dp$, and prove that $T$ is surjective. Since $m$ was arbitrary, it follows that $D$ is also surjective.
@cpassos Formatting needs fixing, everything's muddled up.
@cpassos Thanks! This is rather nice, now I see why Axler included it. Did you come up with it on your own?
@MaxisJaisi I have probably seen a solution (in another exercise) that was like this, but I don't remember where.
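To fill in the surjectivity step in that construction, here is one sketch using the fundamental theorem of linear maps together with the exercise's hypothesis that $\operatorname{deg} Dp = (\operatorname{deg} p) - 1$ for every nonconstant $p$. The hypothesis gives $Dp \neq 0$ for every nonconstant $p$, so $\operatorname{null} T$ lies inside the constant polynomials and $\operatorname{dim \ null} T \leq 1$. (It also forces $D$ to map constants to constants: $D(1) = D(x+1) - D(x)$ is a difference of two degree-zero polynomials, hence constant, so $T$ really does map into $P_{m-1}(\mathbf{R})$.) Since $\operatorname{range} T \subseteq P_{m-1}(\mathbf{R})$, the fundamental theorem gives
$$ \operatorname{dim} \operatorname{range} T = (m+1) - \operatorname{dim \ null} T \geq (m+1) - 1 = m = \operatorname{dim} P_{m-1}(\mathbf{R}), $$
so $\operatorname{range} T = P_{m-1}(\mathbf{R})$. Every polynomial lies in some $P_{m-1}(\mathbf{R})$, so $D$ is surjective.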
Chapter 3: Linear Maps (Part 1)
3.A
Exercise 3
We know that the list of $n$ $n$-tuples such that the $j$th $n$-tuple has a $1$ in the $j$th place and $0$ elsewhere forms a basis of $\mathbf{F}^n$. Similarly, the list of $m$ $m$-tuples such that the $i$th $m$-tuple has a $1$ in the $i$th place and $0$ elsewhere forms a basis of $\mathbf{F}^m$. Then
$\begin{align} T(x_1, \dots, x_n) & = T[x_1(1,0,\dots,0)+x_2(0,1,\dots,0)+\dots+x_n(0,0,\dots,1)] \\ & = T(x_1(1,0,\dots,0))+T(x_2(0,1,\dots,0))+\dots+T(x_n(0,0,\dots,1)) \\ & = x_1 T(1,0,\dots,0)+x_2 T(0,1,\dots,0)+\dots+x_n T(0,0,\dots,1) \\ \end{align}$
Now each of $T(1,0,\dots,0), T(0,1,\dots,0), \dots, T(0,0,\dots,1)$ can in turn be expanded in terms of the standard basis of $\mathbf{F}^m$. Let us consider $T(1,0,\dots,0)$ for illustration purposes. In terms of the standard basis of $\mathbf{F}^m$, $T(1,0,\dots,0) = a_{1,1} (1,0,\dots,0) + a_{2,1} (0,1,\dots,0) + \dots + a_{m,1} (0,0,\dots,1)$. Doing this for each of $T(1,0,\dots,0), T(0,1,\dots,0), \dots, T(0,0,\dots,1)$ and substituting into $x_1 T(1,0,\dots,0)+x_2 T(0,1,\dots,0)+\dots+x_n T(0,0,\dots,1)$, we obtain
$\begin{align} T(x_1,\dots,x_n) & = x_1 [a_{1,1} (1,0,\dots,0)+ a_{2,1} (0,1,\dots,0) + \dots + a_{m,1} (0,0,\dots,1)] \\ & + x_2 [a_{1,2} (1,0,\dots,0)+ a_{2,2} (0,1,\dots,0) + \dots + a_{m,2} (0,0,\dots,1)] \\ & + \dots \\ & + x_n [a_{1,n} (1,0,\dots,0)+ a_{2,n} (0,1,\dots,0) + \dots + a_{m,n} (0,0,\dots,1)]. \\ \end{align}$
Collecting, for each $j$, the coefficient of the $j$th standard basis vector of $\mathbf{F}^m$, we obtain the desired result.
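In compact summation notation (same content as the expansion above), the $j$th coordinate of the image is $a_{j,1} x_1 + \dots + a_{j,n} x_n$, i.e.
$$ T(x_1, \dots, x_n) = \Big( \sum_{k=1}^{n} a_{1,k}\, x_k,\; \dots,\; \sum_{k=1}^{n} a_{m,k}\, x_k \Big), $$
which is exactly the form the exercise asks for.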
Exercise 4
Consider $a_1, \dots, a_m$ such that $a_1 v_1 + \dots + a_m v_m = \mathbf{0}$. Applying $T$ and writing $\mathbf{0}'$ for the zero vector of $W$, we get $\mathbf{0}' = T(\mathbf{0}) = T(a_1 v_1 + \dots + a_m v_m) = a_1 T(v_1) + \dots + a_m T(v_m)$. By the linear independence of $T(v_1), \dots, T(v_m)$, we conclude that $a_1 = \dots = a_m = 0$.
Exercise 7
Let $V$ be a $1$-dimensional vector space, and $T$ be a linear map from $V$ to itself. We are required to show that $T$ is of the form $T(v) = \lambda v$, for some $\lambda \in \mathbf{F}$.
Since $V$ is $1$-dimensional, it is generated by a single vector, say $\mathbf{v} \in V$. Note that $T(\mathbf{v}) \in V$, so $T(\mathbf{v}) = \lambda \mathbf{v}$ for some $\lambda \in \mathbf{F}$; this $\lambda$ is fixed once and for all. Now let $\mathbf{x}$ be an arbitrary vector in $V$. Then $\mathbf{x} = a\mathbf{v}$ for some $a \in \mathbf{F}$, and since $T$ is linear, $T(\mathbf{x}) = T(a\mathbf{v}) = a T(\mathbf{v}) = a(\lambda \mathbf{v}) = \lambda(a\mathbf{v}) = \lambda \mathbf{x}$. Thus $T(\mathbf{x}) = \lambda \mathbf{x}$ for every $\mathbf{x} \in V$, with $\lambda$ independent of $\mathbf{x}$, and we are done.
Exercise 8
The "ruler" map $T(\mathbf{v}) = |\mathbf{v}|$ does not work here: it fails the required homogeneity $T(a\mathbf{v}) = aT(\mathbf{v})$, since $|-\mathbf{x}| = |\mathbf{x}| \neq -|\mathbf{x}|$ for $\mathbf{x} \neq \mathbf{0}$. A map that does work is $T(x, y) = (x^3 + y^3)^{1/3}$: it is homogeneous, because $T(a(x,y)) = (a^3 x^3 + a^3 y^3)^{1/3} = a(x^3 + y^3)^{1/3} = aT(x,y)$ for every $a \in \mathbf{R}$, but it is not additive, since $T(1,0) + T(0,1) = 2$ while $T(1,1) = 2^{1/3}$. Hence $T$ is not linear.
Exercise 9
Let $T$ be the "mirror" map, which reflects a complex number across the real axis: $T(z) = \bar{z}$. Conjugation is additive, since $T(u+v) = \overline{u+v} = \bar{u} + \bar{v} = T(u) + T(v)$ for any $u, v \in \mathbf{C}$, but it is not homogeneous over $\mathbf{C}$: $T(i \cdot 1) = -i$ while $i\,T(1) = i$, so $T(av) \neq aT(v)$ in general, and $T$ is not linear.
Exercise 10
$S$ is linear on $U$, so it would be futile to look for a contradiction using elements of $U$ alone. Hence we consider the sum of an element of $U$ and an element of $V - U$.
Pick $u \in U$ with $S(u) \neq 0$ (such a $u$ exists because $S \neq 0$), and pick $v \in V$ with $v \notin U$ (possible because $U \neq V$). Then $u+v$ cannot be in $U$: otherwise $(u+v)+(-u) = v \in U$, a contradiction.
Now suppose $T$ were a linear map. Since $u + v \notin U$, we have $T(u+v) = 0$, because by construction $T(x) = 0$ for any $x \notin U$. But linearity would give $T(u+v) = T(u) + T(v) = S(u) + 0 = S(u)$, so $S(u) = 0$, contradicting our choice of $u$. Hence $T$ is not a linear map after all.
Exercise 11
We construct an extension of $S$ to all of $V$, given by
$T(v)= \begin{cases} S(v), & \text{if $v \in U$} \\ v, & \text{if $v \notin U$} \end{cases} $
Since $S$ is already a linear map on $U$, we only need to check if $T$ is indeed linear for $v \in V-U$. Given $u,v \in V-U$, we clearly have $T(u+v) = u+v = T(u)+T(v)$, and given $\lambda u \in V-U$, we have $T(\lambda u) = \lambda u = \lambda T(u)$.
Addendum: The above "proof" is erroneous. The linearity check is not exhaustive: $V - U$ is not closed under addition, so for $u, v \in V - U$ the sum $u + v$ may well lie in $U$, in which case $T(u+v) = S(u+v)$, and nothing permits me to conclude that $S(u+v)$ equals $T(u) + T(v) = u + v$. (Mixed sums, with one summand in $U$ and one outside, fail for the same reason.)
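A concrete instance of the failure (taking $W = V$, which the definition above tacitly assumes): let $V = \mathbf{R}^2$, let $U$ be the $x$-axis, and let $S$ be the zero map on $U$. For $u = (1,0) \in U$ and $v = (0,1) \notin U$, the sum $u+v = (1,1) \notin U$, so
$$ T(u+v) = T(1,1) = (1,1), \qquad \text{but} \qquad T(u) + T(v) = S(1,0) + (0,1) = (0,1), $$
and this $T$ is not additive.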
I have since learned that the proper way to go about this is to extend a basis of $U$ to a basis of $V$, then define $T(u_j) = S(u_j)$ for each vector $u_j$ in the basis of $U$, and $T(v_k) = 0$ for each vector $v_k$ added to extend that basis to a basis of $V$.
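Spelled out (with $u_1, \dots, u_m$ a basis of $U$ extended by $v_1, \dots, v_k$ to a basis of $V$), the extension acts by
$$ T(a_1 u_1 + \dots + a_m u_m + b_1 v_1 + \dots + b_k v_k) = a_1 S(u_1) + \dots + a_m S(u_m). $$
This is linear because it is defined by its values on a basis, and it agrees with $S$ on $U$ because every $u \in U$ has $b_1 = \dots = b_k = 0$ in this expansion, so $T(u) = a_1 S(u_1) + \dots + a_m S(u_m) = S(u)$ by the linearity of $S$.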
Exercise 12
It suffices to exhibit a sequence of linear maps $T_1, T_2, T_3, \dots$ from $V$ to $W$ such that $T_1, T_2, \dots, T_j$ is linearly independent for every natural number $j$; a finite-dimensional space cannot contain arbitrarily long linearly independent lists.
To do this, note that $V$ is finite-dimensional; let $n$ be its dimension. Then there exists a basis $B_1 = (v_1, v_2, \dots, v_n)$ of $V$. Meanwhile, $W$ being infinite-dimensional implies that there exists a sequence $B_2 = w_1, w_2, \dots$ such that any finite list of distinct vectors from $B_2$ is linearly independent.
I construct the sequence of maps $T_1, T_2, T_3, \dots$ as follows:
$T_i(v_j) = w_{i+j-1}$ for every natural number $i$ (starting from $1$) and for $1 \leq j \leq n$.
For example, $T_3$ is given by $T_3(v_1) = w_3$, $T_3(v_2) = w_4$, $T_3(v_3) = w_5$, $\dots$, $T_3(v_n) = w_{n+2}$.
I claim that $T_1, T_2, \dots, T_j$ must be linearly independent for every natural $j$. To see this, suppose $a_1 T_1 + a_2 T_2 + \dots + a_j T_j$ is the zero map, i.e. $a_1 T_1 (\mathbf{v}) + a_2 T_2 (\mathbf{v}) + \dots + a_j T_j (\mathbf{v}) = 0$ for every $\mathbf{v} \in V$. In particular, this must hold for $\mathbf{v} = v_1$. Thus we have $a_1 T_1 (v_1) + a_2 T_2 (v_1) + \dots + a_j T_j (v_1) = a_1 w_1 + a_2 w_2 + \dots + a_j w_j = 0$. But $w_1, w_2, \dots, w_j$ is linearly independent, hence $a_1 = \dots = a_j = 0$, and we're done.
Exercise 13
This problem stumped me, and I had to look online for clues. I don't want to dwell on one problem so long that I can't proceed further.
Anyway, the heart of this problem (I think) is the fact that attempting to define a linear map on a vector space by specifying where it sends the vectors of a spanning list is not enough, because the vectors in the spanning list might be linearly dependent, and a dependence relation constrains the possible images. Linear independence is what lets us choose the images freely, so a basis is what we need to define a unique linear map taking a given finite list of vectors to another list of the same length.
Suppose we had a linearly dependent list of vectors $v_1, \dots, v_m$ in $V$. That means there exist scalars $a_1, \dots, a_m$, not all zero, such that $a_1 v_1 + \dots + a_m v_m = 0$. Any linear map $T$ must then satisfy $a_1 T(v_1) + \dots + a_m T(v_m) = T(a_1 v_1 + \dots + a_m v_m) = T(0) = 0$. But we are supposedly free to choose the image of each $v_j$ under $T$: take $j$ with $a_j \neq 0$, pick a nonzero $w \in W$ (possible since $W \neq \{0\}$), and demand $T(v_j) = w$ and $T(v_k) = 0$ for every $k \neq j$. Then $a_1 T(v_1) + \dots + a_m T(v_m) = a_j w \neq 0$, a contradiction, so no linear map can take the $v_k$ to these prescribed images.
Exercise 14
We can take $V=\mathbf{R}^3$ and let $S, T$ be rotations of the space about two different coordinate axes. (Rotations about the same axis commute, so the axes must differ.)
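One concrete choice, a quarter turn about the $z$-axis and a quarter turn about the $x$-axis:
$$ S(x,y,z) = (-y, x, z), \qquad T(x,y,z) = (x, -z, y). $$
Then $(ST)(1,0,0) = S(1,0,0) = (0,1,0)$, while $(TS)(1,0,0) = T(0,1,0) = (0,0,1)$, so $ST \neq TS$.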
3.B