It is one-to-one because if $f$ sends two members of the domain to the same image, that is, if $f\left({\begin{pmatrix}a&b\end{pmatrix}}\right)=f\left({\begin{pmatrix}c&d\end{pmatrix}}\right)$, then the definition of $f$ gives that
${\begin{pmatrix}a\\b\end{pmatrix}}={\begin{pmatrix}c\\d\end{pmatrix}}$
and since column vectors are equal only if they have equal components, we have that $a=c$ and that $b=d$. Thus, if $f$ maps two row vectors from the domain to the same column vector then the two row vectors are equal: ${\begin{pmatrix}a&b\end{pmatrix}}={\begin{pmatrix}c&d\end{pmatrix}}$.
To show that $f$ is onto we must show that any member of the codomain $\mathbb {R} ^{2}$ is the image under $f$ of some row vector. That's easy;
${\begin{pmatrix}x\\y\end{pmatrix}}$
is $f\left({\begin{pmatrix}x&y\end{pmatrix}}\right)$.
The computation for preservation of addition is this.
$f\left({\begin{pmatrix}a&b\end{pmatrix}}+{\begin{pmatrix}c&d\end{pmatrix}}\right)=f\left({\begin{pmatrix}a+c&b+d\end{pmatrix}}\right)={\begin{pmatrix}a+c\\b+d\end{pmatrix}}={\begin{pmatrix}a\\b\end{pmatrix}}+{\begin{pmatrix}c\\d\end{pmatrix}}=f\left({\begin{pmatrix}a&b\end{pmatrix}}\right)+f\left({\begin{pmatrix}c&d\end{pmatrix}}\right)$
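The map here simply transposes a row vector into the column vector with the same components. As a quick numeric sketch (not from the text), representing both vectors as Python tuples makes the preservation-of-addition computation easy to spot-check:

```python
def f(row):
    """Map the 2-wide row vector (a, b) to the 2-tall column vector (a, b)."""
    a, b = row
    return (a, b)  # same components, now viewed as a column

def add(v, w):
    """Componentwise vector addition."""
    return tuple(x + y for x, y in zip(v, w))

# Preservation of addition: f(v + w) == f(v) + f(w).
v, w = (1, 2), (3, 4)
assert f(add(v, w)) == add(f(v), f(w))

# One-to-one: distinct row vectors have distinct images.
assert f((1, 2)) != f((1, 3))
```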
Denote the map from Example 1.2 by $f$. To show that it is one-to-one, assume that $f(a_{0}+a_{1}x+a_{2}x^{2})=f(b_{0}+b_{1}x+b_{2}x^{2})$. Then by the definition of the function,
${\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}={\begin{pmatrix}b_{0}\\b_{1}\\b_{2}\end{pmatrix}}$
and so $a_{0}=b_{0}$ and $a_{1}=b_{1}$ and $a_{2}=b_{2}$. Thus $a_{0}+a_{1}x+a_{2}x^{2}=b_{0}+b_{1}x+b_{2}x^{2}$, and consequently $f$ is one-to-one.
The function $f$ is onto because there is a polynomial sent to
${\begin{pmatrix}a\\b\\c\end{pmatrix}}$
by $f$, namely, $a+bx+cx^{2}$.
As for structure, this computation shows that $f$ preserves addition.
$f\left((a_{0}+a_{1}x+a_{2}x^{2})+(b_{0}+b_{1}x+b_{2}x^{2})\right)=f\left((a_{0}+b_{0})+(a_{1}+b_{1})x+(a_{2}+b_{2})x^{2}\right)={\begin{pmatrix}a_{0}+b_{0}\\a_{1}+b_{1}\\a_{2}+b_{2}\end{pmatrix}}={\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}+{\begin{pmatrix}b_{0}\\b_{1}\\b_{2}\end{pmatrix}}=f(a_{0}+a_{1}x+a_{2}x^{2})+f(b_{0}+b_{1}x+b_{2}x^{2})$
To show that the map is one-to-one, assume that $f(a_{1}+b_{1}x)=f(a_{2}+b_{2}x)$. Then, by the definition of $f$, the two image column vectors are equal, and so, since column vectors are equal only when their components are equal, $b_{1}=b_{2}$ and $a_{1}=a_{2}$. That shows that the two linear polynomials are equal, and so $f$ is one-to-one.
To show that $f$ is onto, note that this member of the codomain
${\begin{pmatrix}s\\t\end{pmatrix}}$
is the image of this member of the domain $(s+t)+tx$.
To check that $f$ preserves structure, we can use item 2 of Lemma 1.9.
Show that the natural map $f_{1}$ from Example 1.5 is an isomorphism.
Answer
To verify it is one-to-one, assume that $f_{1}(c_{1}x+c_{2}y+c_{3}z)=f_{1}(d_{1}x+d_{2}y+d_{3}z)$. Then $c_{1}+c_{2}x+c_{3}x^{2}=d_{1}+d_{2}x+d_{3}x^{2}$ by the definition of $f_{1}$. Members of ${\mathcal {P}}_{2}$ are equal only when they have the same coefficients, so this implies that $c_{1}=d_{1}$ and $c_{2}=d_{2}$ and $c_{3}=d_{3}$. Therefore $f_{1}(c_{1}x+c_{2}y+c_{3}z)=f_{1}(d_{1}x+d_{2}y+d_{3}z)$ implies that $c_{1}x+c_{2}y+c_{3}z=d_{1}x+d_{2}y+d_{3}z$, and so $f_{1}$ is one-to-one.
To verify that it is onto, consider an arbitrary member of the codomain $a_{1}+a_{2}x+a_{3}x^{2}$ and observe that it is indeed the image of a member of the domain, namely, it is $f_{1}(a_{1}x+a_{2}y+a_{3}z)$.
(For instance, $0+3x+6x^{2}=f_{1}(0x+3y+6z)$.)
The computation checking that $f_{1}$ preserves addition is this.
$f_{1}\left((c_{1}x+c_{2}y+c_{3}z)+(d_{1}x+d_{2}y+d_{3}z)\right)=f_{1}\left((c_{1}+d_{1})x+(c_{2}+d_{2})y+(c_{3}+d_{3})z\right)=(c_{1}+d_{1})+(c_{2}+d_{2})x+(c_{3}+d_{3})x^{2}=(c_{1}+c_{2}x+c_{3}x^{2})+(d_{1}+d_{2}x+d_{3}x^{2})=f_{1}(c_{1}x+c_{2}y+c_{3}z)+f_{1}(d_{1}x+d_{2}y+d_{3}z)$
No; this map is not one-to-one. In particular, the matrix of all zeroes is mapped to the same image as the matrix of all ones.
Yes, this is an isomorphism.
It is one-to-one:
${\text{if }}f({\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}})=f({\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}}){\text{ then }}{\begin{pmatrix}a_{1}+b_{1}+c_{1}+d_{1}\\a_{1}+b_{1}+c_{1}\\a_{1}+b_{1}\\a_{1}\end{pmatrix}}={\begin{pmatrix}a_{2}+b_{2}+c_{2}+d_{2}\\a_{2}+b_{2}+c_{2}\\a_{2}+b_{2}\\a_{2}\end{pmatrix}}$
gives that $a_{1}=a_{2}$, and that $b_{1}=b_{2}$, and that $c_{1}=c_{2}$, and that $d_{1}=d_{2}$.
It is onto since, given an arbitrary member ${\begin{pmatrix}p\\q\\r\\s\end{pmatrix}}$ of the codomain, it is the image of the matrix ${\begin{pmatrix}s&r-s\\q-r&p-q\end{pmatrix}}$.
To show that the map is one-to-one, assume that $f({\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}})=f({\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}})$. This gives, by the definition of $f$, that $c_{1}+(d_{1}+c_{1})x+(b_{1}+a_{1})x^{2}+a_{1}x^{3}=c_{2}+(d_{2}+c_{2})x+(b_{2}+a_{2})x^{2}+a_{2}x^{3}$ and then the fact that polynomials are equal only when their coefficients are equal gives a set of linear equations
$c_{1}=c_{2}\quad d_{1}+c_{1}=d_{2}+c_{2}\quad b_{1}+a_{1}=b_{2}+a_{2}\quad a_{1}=a_{2}$
that has only the solution $a_{1}=a_{2}$, $b_{1}=b_{2}$, $c_{1}=c_{2}$, and $d_{1}=d_{2}$.
To show that $f$ is onto, we note that $p+qx+rx^{2}+sx^{3}$ is the image under $f$ of this matrix.
${\begin{pmatrix}s&r-s\\p&q-p\end{pmatrix}}$
We can check that $f$ preserves structure by using item 2 of Lemma 1.9.
No, this map does not preserve structure. For instance, it does not send the zero matrix to the zero polynomial.
Problem 5
Show that the map $f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}$ given by $f(x)=x^{3}$ is one-to-one and onto. Is it an isomorphism?
Answer
It is one-to-one and onto, a correspondence, because it has an inverse (namely, $f^{-1}(x)={\sqrt[{3}]{x}}$). However, it is not an isomorphism. For instance, $f(1)+f(1)\neq f(1+1)$.
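A numeric spot-check of this answer (a sketch, not from the text): the cube map has an inverse, namely the real cube root, so it is a correspondence; yet it fails to preserve addition.

```python
def f(x):
    """The cube map on the reals."""
    return x ** 3

def f_inv(x):
    """Real cube root, handling negative inputs explicitly."""
    return -((-x) ** (1 / 3)) if x < 0 else x ** (1 / 3)

# Having an inverse shows f is one-to-one and onto.
assert abs(f_inv(f(-2.0)) - (-2.0)) < 1e-9

# But f does not preserve addition: f(1) + f(1) = 2 while f(1 + 1) = 8.
assert f(1) + f(1) != f(1 + 1)
```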
This exercise is recommended for all readers.
Problem 6
Refer to Example 1.1. Produce two more isomorphisms (of course, that they satisfy the conditions in the definition of isomorphism must be verified).
Verification is straightforward (for the second, to show that it is onto, note that
${\begin{pmatrix}s\\t\\u\end{pmatrix}}$
is the image of $(s-t)+tx+ux^{2}$).
This exercise is recommended for all readers.
Problem 8
Show that, although $\mathbb {R} ^{2}$ is not itself a subspace of $\mathbb {R} ^{3}$, it is isomorphic to the $xy$-plane subspace of $\mathbb {R} ^{3}$.
Answer
The space $\mathbb {R} ^{2}$ is not a subspace of $\mathbb {R} ^{3}$ because it is not a subset of $\mathbb {R} ^{3}$. The two-tall vectors in $\mathbb {R} ^{2}$ are not members of $\mathbb {R} ^{3}$.
The natural isomorphism $\iota :\mathbb {R} ^{2}\to \mathbb {R} ^{3}$ (called the injection map) is this.
${\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}x\\y\\0\end{pmatrix}}$
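Taking the injection map to be $(x,y)\mapsto (x,y,0)$ (the natural choice for the $xy$-plane), this sketch checks that it lands in the $xy$-plane and preserves linear combinations:

```python
def iota(v):
    """Inject a vector of R^2 into the xy-plane of R^3."""
    x, y = v
    return (x, y, 0)

def comb(c1, v1, c2, v2):
    """The linear combination c1*v1 + c2*v2, componentwise."""
    return tuple(c1 * a + c2 * b for a, b in zip(v1, v2))

v1, v2 = (1, 2), (3, -1)
# Preservation of linear combinations.
assert iota(comb(2, v1, 3, v2)) == comb(2, iota(v1), 3, iota(v2))
# The image lies in the xy-plane (third component zero).
assert iota(v1)[2] == 0
```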
For what $k$ is ${\mathcal {P}}_{k}$ isomorphic to $\mathbb {R} ^{n}$?
Answer
If $n\geq 1$ then ${\mathcal {P}}_{n-1}\cong \mathbb {R} ^{n}$. (If we take ${\mathcal {P}}_{-1}$ and $\mathbb {R} ^{0}$ to be trivial vector spaces, then the relationship extends one dimension lower.) The natural isomorphism between them is this.
$a_{0}+a_{1}x+\dots +a_{n-1}x^{n-1}\mapsto {\begin{pmatrix}a_{0}\\a_{1}\\\vdots \\a_{n-1}\end{pmatrix}}$
To finish checking that it is an isomorphism, we apply item 2 of Lemma 1.9 and show that it preserves linear combinations of two polynomials. Briefly, the check goes like this.
$c\cdot (a_{0}+\dots +a_{n-1}x^{n-1})+d\cdot (b_{0}+\dots +b_{n-1}x^{n-1})=(ca_{0}+db_{0})+\dots +(ca_{n-1}+db_{n-1})x^{n-1}\mapsto {\begin{pmatrix}ca_{0}+db_{0}\\\vdots \\ca_{n-1}+db_{n-1}\end{pmatrix}}=c\cdot {\begin{pmatrix}a_{0}\\\vdots \\a_{n-1}\end{pmatrix}}+d\cdot {\begin{pmatrix}b_{0}\\\vdots \\b_{n-1}\end{pmatrix}}$
Why, in Lemma 1.8, must there be a ${\vec {v}}\in V$? That is, why must $V$ be nonempty?
Answer
No vector space has the empty set underlying it. We can take ${\vec {v}}$ to be the zero vector.
Problem 14
Are any two trivial spaces isomorphic?
Answer
Yes; where the two spaces are $\{{\vec {a}}\}$ and $\{{\vec {b}}\}$, the map sending ${\vec {a}}$ to ${\vec {b}}$ is clearly one-to-one and onto, and also preserves what little structure there is.
Problem 15
In the proof of Lemma 1.9, what about the zero-summands case (that is, if $n$ is zero)?
Answer
A linear combination of $n=0$ vectors adds to the zero vector and so Lemma 1.8 shows that the three statements are equivalent in this case.
Problem 16
Show that any isomorphism $f:{\mathcal {P}}_{0}\to \mathbb {R} ^{1}$ has the form $a\mapsto ka$ for some nonzero real number $k$.
Answer
Consider the basis $\langle 1\rangle$ for ${\mathcal {P}}_{0}$ and let $f(1)\in \mathbb {R}$ be $k$. For any $a\in {\mathcal {P}}_{0}$ we have that $f(a)=f(a\cdot 1)=af(1)=ak$ and so $f$'s action is multiplication by $k$. Note that $k\neq 0$ or else the map is not one-to-one. (Incidentally, any such map $a\mapsto ka$ is an isomorphism, as is easy to check.)
This exercise is recommended for all readers.
Problem 17
These prove that isomorphism is an equivalence relation.
Show that the identity map ${\mbox{id}}:V\to V$ is an isomorphism. Thus, any vector space is isomorphic to itself.
Show that if $f:V\to W$ is an isomorphism then so is its inverse $f^{-1}:W\to V$. Thus, if $V$ is isomorphic to $W$ then also $W$ is isomorphic to $V$.
Show that a composition of isomorphisms is an isomorphism: if $f:V\to W$ is an isomorphism and $g:W\to U$ is an isomorphism then so also is $g\circ f:V\to U$. Thus, if $V$ is isomorphic to $W$ and $W$ is isomorphic to $U$, then also $V$ is isomorphic to $U$.
Answer
In each item, following item 2 of Lemma 1.9, we show that the map preserves structure by showing that it preserves linear combinations of two members of the domain.
The identity map is clearly one-to-one and onto. For linear combinations the check is easy.
${\mbox{id}}(c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2})=c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2}=c_{1}\cdot {\mbox{id}}({\vec {v}}_{1})+c_{2}\cdot {\mbox{id}}({\vec {v}}_{2})$
The inverse of a correspondence is also a correspondence (as stated in the appendix), so we need only check that the inverse preserves linear combinations. Assume that ${\vec {w}}_{1}=f({\vec {v}}_{1})$ (so $f^{-1}({\vec {w}}_{1})={\vec {v}}_{1}$) and assume that ${\vec {w}}_{2}=f({\vec {v}}_{2})$. Then
$f^{-1}(c_{1}\cdot {\vec {w}}_{1}+c_{2}\cdot {\vec {w}}_{2})=f^{-1}\left(c_{1}\cdot f({\vec {v}}_{1})+c_{2}\cdot f({\vec {v}}_{2})\right)=f^{-1}\left(f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})\right)=c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2}=c_{1}\cdot f^{-1}({\vec {w}}_{1})+c_{2}\cdot f^{-1}({\vec {w}}_{2})$
The composition of two correspondences is a correspondence (as stated in the appendix), so we need only check that the composition map preserves linear combinations.
$(g\circ f)(c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2})=g\left(f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})\right)=g\left(c_{1}\cdot f({\vec {v}}_{1})+c_{2}\cdot f({\vec {v}}_{2})\right)=c_{1}\cdot g(f({\vec {v}}_{1}))+c_{2}\cdot g(f({\vec {v}}_{2}))=c_{1}\cdot (g\circ f)({\vec {v}}_{1})+c_{2}\cdot (g\circ f)({\vec {v}}_{2})$
Problem 18
Suppose that $f:V\to W$ preserves structure. Show that $f$ is one-to-one if and only if the unique member of $V$ mapped by $f$ to ${\vec {0}}_{W}$ is ${\vec {0}}_{V}$.
Answer
One direction is easy: by definition, if $f$ is one-to-one then for any ${\vec {w}}\in W$ at most one ${\vec {v}}\in V$ has $f({\vec {v}}\,)={\vec {w}}$, and so in particular, at most one member of $V$ is mapped to ${\vec {0}}_{W}$. The proof of Lemma 1.8 does not use the fact that the map is a correspondence and therefore shows that any structure-preserving map $f$ sends ${\vec {0}}_{V}$ to ${\vec {0}}_{W}$.
For the other direction, assume that the only member of $V$ that is mapped to ${\vec {0}}_{W}$ is ${\vec {0}}_{V}$. To show that $f$ is one-to-one assume that $f({\vec {v}}_{1})=f({\vec {v}}_{2})$. Then $f({\vec {v}}_{1})-f({\vec {v}}_{2})={\vec {0}}_{W}$ and so $f({\vec {v}}_{1}-{\vec {v}}_{2})={\vec {0}}_{W}$. Consequently ${\vec {v}}_{1}-{\vec {v}}_{2}={\vec {0}}_{V}$, so ${\vec {v}}_{1}={\vec {v}}_{2}$, and so $f$ is one-to-one.
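A small numeric illustration of this equivalence (a sketch, not from the text): a structure-preserving map from $\mathbb {R} ^{2}$ to $\mathbb {R} ^{2}$ that sends some nonzero vector to zero fails to be one-to-one, exactly as the argument predicts.

```python
def g(v):
    """The projection (x, y) -> (x, 0): linear, but it kills (0, 1)."""
    x, y = v
    return (x, 0)

# A nonzero vector is mapped to the zero vector ...
assert g((0, 1)) == (0, 0)
# ... and, sure enough, g is not one-to-one.
assert g((1, 1)) == g((1, 5))
```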
Problem 19
Suppose that $f:V\to W$ is an isomorphism. Prove that the set $\{{\vec {v}}_{1},\dots ,{\vec {v}}_{k}\}\subseteq V$ is linearly dependent if and only if the set of images $\{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{k})\}\subseteq W$ is linearly dependent.
Answer
We will prove something stronger: not only is the existence of a dependence preserved by isomorphism, but each instance of a dependence is preserved, that is,
$c_{1}{\vec {v}}_{1}+\dots +c_{k}{\vec {v}}_{k}={\vec {0}}_{V}\quad {\text{if and only if}}\quad c_{1}f({\vec {v}}_{1})+\dots +c_{k}f({\vec {v}}_{k})={\vec {0}}_{W}$
Applying the fact that $f$ is one-to-one: for the two vectors ${\vec {v}}_{i}$ and $c_{1}{\vec {v}}_{1}+\dots +c_{i-1}{\vec {v}}_{i-1}+c_{i+1}{\vec {v}}_{i+1}+\dots +c_{k}{\vec {v}}_{k}$ to be mapped to the same image by $f$, they must be equal.
This exercise is recommended for all readers.
Problem 20
Show that each type of map from Example 1.6 is an automorphism.
Dilation $d_{s}$ by a nonzero scalar $s$.
Rotation $t_{\theta }$ through an angle $\theta$.
Reflection $f_{\ell }$ over a line through the origin.
Hint.
For the second and third items, polar coordinates are useful.
Answer
This map is one-to-one because if $d_{s}({\vec {v}}_{1})=d_{s}({\vec {v}}_{2})$ then by definition of the map, $s\cdot {\vec {v}}_{1}=s\cdot {\vec {v}}_{2}$ and so ${\vec {v}}_{1}={\vec {v}}_{2}$, as $s$ is nonzero. This map is onto as any ${\vec {w}}\in \mathbb {R} ^{2}$ is the image of ${\vec {v}}=(1/s)\cdot {\vec {w}}$ (again, note that $s$ is nonzero). (Another way to see that this map is a correspondence is to observe that it has an inverse: the inverse of $d_{s}$ is $d_{1/s}$.)
To finish, note that this map preserves linear combinations.
$d_{s}(c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2})=s(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})=c_{1}\cdot s{\vec {v}}_{1}+c_{2}\cdot s{\vec {v}}_{2}=c_{1}\cdot d_{s}({\vec {v}}_{1})+c_{2}\cdot d_{s}({\vec {v}}_{2})$
As in the prior item, we can show that the map $t_{\theta }$ is a correspondence by noting that it has an inverse, $t_{-\theta }$.
That the map preserves structure is geometrically easy to see. For instance, adding two vectors and then rotating them has the same effect as rotating first and then adding. For an algebraic argument, consider polar coordinates: the map $t_{\theta }$ sends the vector with endpoint $(r,\phi )$ to the vector with endpoint $(r,\phi +\theta )$. Then the familiar trigonometric formulas $\cos(\phi +\theta )=\cos \phi \,\cos \theta -\sin \phi \,\sin \theta$ and $\sin(\phi +\theta )=\sin \phi \,\cos \theta +\cos \phi \,\sin \theta$ show how to express the map's action in the usual rectangular coordinate system.
$t_{\theta }{\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}x\cos \theta -y\sin \theta \\x\sin \theta +y\cos \theta \end{pmatrix}}$
The calculation for preservation of scalar multiplication is similar.
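A numeric spot-check of the rotation argument (a sketch, not from the text): in rectangular coordinates $t_{\theta }$ is $(x,y)\mapsto (x\cos \theta -y\sin \theta ,\,x\sin \theta +y\cos \theta )$, its inverse is $t_{-\theta }$, and it preserves addition.

```python
import math

def rotate(theta, v):
    """Rotate a vector of R^2 through the angle theta."""
    x, y = v
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def close(v, w, tol=1e-9):
    """Componentwise comparison up to floating-point tolerance."""
    return all(abs(a - b) < tol for a, b in zip(v, w))

theta = 0.7
v, w = (1.0, 2.0), (-3.0, 0.5)

# The inverse: rotating by -theta undoes rotating by theta.
assert close(rotate(-theta, rotate(theta, v)), v)

# Preservation of addition: rotate(v + w) == rotate(v) + rotate(w).
vw = (v[0] + w[0], v[1] + w[1])
rv, rw = rotate(theta, v), rotate(theta, w)
assert close(rotate(theta, vw), (rv[0] + rw[0], rv[1] + rw[1]))
```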
This map is a correspondence because it has an inverse (namely, itself).
As in the last item, that the reflection map preserves structure is geometrically easy to see: adding vectors and then reflecting gives the same result as reflecting first and then adding, for instance. For an algebraic proof, suppose that the line $\ell$ has slope $k$ (the case of a line with undefined slope can be done as a separate, but easy, case). We can follow the hint and use polar coordinates: where the line $\ell$ forms an angle of $\phi$ with the $x$-axis, the action of $f_{\ell }$ is to send the vector with endpoint $(r\cos \theta ,r\sin \theta )$ to the one with endpoint $(r\cos(2\phi -\theta ),r\sin(2\phi -\theta ))$.
To convert to rectangular coordinates, we will use some trigonometric formulas, as we did in the prior item. First observe that $\cos \phi$ and $\sin \phi$ can be determined from the slope $k$ of the line: a right triangle with horizontal leg $1$, vertical leg $k$, and hypotenuse ${\sqrt {1+k^{2}}}$ gives that $\cos \phi =1/{\sqrt {1+k^{2}}}$ and $\sin \phi =k/{\sqrt {1+k^{2}}}$. Now the double-angle formulas give $\cos 2\phi =(1-k^{2})/(1+k^{2})$ and $\sin 2\phi =2k/(1+k^{2})$, and expanding $\cos(2\phi -\theta )$ and $\sin(2\phi -\theta )$ shows that the action of $f_{\ell }$ in rectangular coordinates is this.
$f_{\ell }{\begin{pmatrix}x\\y\end{pmatrix}}={\frac {1}{1+k^{2}}}{\begin{pmatrix}(1-k^{2})x+2ky\\2kx+(k^{2}-1)y\end{pmatrix}}$
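Working out the rectangular-coordinate formula for reflection over the line $y=kx$ gives $(x,y)\mapsto \left(((1-k^{2})x+2ky)/(1+k^{2}),\,(2kx+(k^{2}-1)y)/(1+k^{2})\right)$. This sketch checks two sanity properties: reflecting twice is the identity (the map is its own inverse, as noted above), and points on the line itself are fixed.

```python
def reflect(k, v):
    """Reflect a vector of R^2 over the line y = kx."""
    x, y = v
    d = 1 + k * k
    return (((1 - k * k) * x + 2 * k * y) / d,
            (2 * k * x + (k * k - 1) * y) / d)

def close(v, w, tol=1e-9):
    """Componentwise comparison up to floating-point tolerance."""
    return all(abs(a - b) < tol for a, b in zip(v, w))

k = 1.5
v = (2.0, -1.0)
assert close(reflect(k, reflect(k, v)), v)                # involution
assert close(reflect(k, (3.0, k * 3.0)), (3.0, k * 3.0))  # the line is fixed
```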
Produce an automorphism of ${\mathcal {P}}_{2}$ other than the identity map, and other than a shift map $p(x)\mapsto p(x-k)$.
Answer
First, the map $p(x)\mapsto p(x+k)$ doesn't count because it is a version of $p(x)\mapsto p(x-k)$. Here is a correct answer (many others are also correct): $a_{0}+a_{1}x+a_{2}x^{2}\mapsto a_{2}+a_{0}x+a_{1}x^{2}$. Verification that this is an isomorphism is straightforward.
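Representing $a_{0}+a_{1}x+a_{2}x^{2}$ by its coefficient triple, the suggested automorphism cyclically permutes the coefficients. This sketch (not from the text) checks that it is invertible and preserves linear combinations.

```python
def f(p):
    """The map a0+a1*x+a2*x^2 -> a2+a0*x+a1*x^2 on coefficient triples."""
    a0, a1, a2 = p
    return (a2, a0, a1)

def f_inv(p):
    """The inverse cyclic permutation."""
    b0, b1, b2 = p
    return (b1, b2, b0)

def comb(c, p, d, q):
    """The linear combination c*p + d*q of coefficient triples."""
    return tuple(c * a + d * b for a, b in zip(p, q))

p, q = (1, 2, 3), (4, 5, 6)
assert f_inv(f(p)) == p                                  # a correspondence
assert f(comb(2, p, -1, q)) == comb(2, f(p), -1, f(q))   # preserves combinations
```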
Problem 22
Show that a function $f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}$ is an automorphism if and only if it has the form $x\mapsto kx$ for some $k\neq 0$.
Let $f$ be an automorphism of $\mathbb {R} ^{1}$ such that $f(3)=7$. Find $f(-2)$.
Show that a function $f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}$ is an automorphism if and only if it has the form
${\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}ax+by\\cx+dy\end{pmatrix}}$
for some $a,b,c,d\in \mathbb {R}$ with $ad-bc\neq 0$.
For the "only if" half, let $f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}$ be an isomorphism. Consider the basis $\langle 1\rangle \subseteq \mathbb {R} ^{1}$. Designate $f(1)$ by $k$. Then for any $x$ we have that $f(x)=f(x\cdot 1)=x\cdot f(1)=xk$, and so $f$'s action is multiplication by $k$. To finish this half, just note that $k\neq 0$ or else $f$ would not be one-to-one.
For the "if" half we only have to check that such a map is an isomorphism when $k\neq 0$. To check that it is one-to-one, assume that $f(x_{1})=f(x_{2})$ so that $kx_{1}=kx_{2}$ and divide by the nonzero factor $k$ to conclude that $x_{1}=x_{2}$. To check that it is onto, note that any $y\in \mathbb {R} ^{1}$ is the image of $x=y/k$ (again, $k\neq 0$). Finally, to check that such a map preserves combinations of two members of the domain, we have this.
$f(c_{1}x_{1}+c_{2}x_{2})=k(c_{1}x_{1}+c_{2}x_{2})=c_{1}(kx_{1})+c_{2}(kx_{2})=c_{1}\cdot f(x_{1})+c_{2}\cdot f(x_{2})$
By the prior item, $f$'s action is $x\mapsto (7/3)x$. Thus $f(-2)=-14/3$.
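An exact-arithmetic check of this computation: an automorphism of $\mathbb {R} ^{1}$ is multiplication by $k=f(3)/3=7/3$, so $f(-2)=-14/3$.

```python
from fractions import Fraction

# The automorphism determined by f(3) = 7 is multiplication by k = 7/3.
k = Fraction(7, 3)
f = lambda x: k * x

assert f(3) == 7
assert f(-2) == Fraction(-14, 3)
```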
For the "only if" half, assume that $f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}$ is an automorphism. Consider the standard basis ${\mathcal {E}}_{2}$ for $\mathbb {R} ^{2}$. Let
$f({\vec {e}}_{1})={\begin{pmatrix}a\\c\end{pmatrix}}\quad {\text{and}}\quad f({\vec {e}}_{2})={\begin{pmatrix}b\\d\end{pmatrix}}$
Then for any ${\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb {R} ^{2}$ we have $f({\begin{pmatrix}x\\y\end{pmatrix}})=f(x{\vec {e}}_{1}+y{\vec {e}}_{2})=x\cdot f({\vec {e}}_{1})+y\cdot f({\vec {e}}_{2})={\begin{pmatrix}ax+by\\cx+dy\end{pmatrix}}$, and so $f$ has the required form.
To finish this half, note that if $ad-bc=0$, that is, if $f({\vec {e}}_{2})$ is a multiple of $f({\vec {e}}_{1})$, then $f$ is not one-to-one.
For "if" we must check that the map is an isomorphism, under the condition that $ad-bc\neq 0$. The structure-preservation check is easy; we will here show that $f$ is a correspondence. For the argument that the map is one-to-one, assume that $f({\begin{pmatrix}x_{1}\\y_{1}\end{pmatrix}})=f({\begin{pmatrix}x_{2}\\y_{2}\end{pmatrix}})$, that is, that ${\begin{pmatrix}ax_{1}+by_{1}\\cx_{1}+dy_{1}\end{pmatrix}}={\begin{pmatrix}ax_{2}+by_{2}\\cx_{2}+dy_{2}\end{pmatrix}}$. Then, because $ad-bc\neq 0$, the resulting homogeneous linear system
$a(x_{1}-x_{2})+b(y_{1}-y_{2})=0\quad c(x_{1}-x_{2})+d(y_{1}-y_{2})=0$
has a unique solution, namely the trivial one
$x_{1}-x_{2}=0$ and $y_{1}-y_{2}=0$
(this follows from the hint).
The argument that this map is onto is closely related: for any $p,q\in \mathbb {R}$ the system $ax+by=p$, $cx+dy=q$ has a solution if and only if the set $\left\{{\begin{pmatrix}a\\c\end{pmatrix}},{\begin{pmatrix}b\\d\end{pmatrix}}\right\}$ spans $\mathbb {R} ^{2}$, i.e., if and only if this set is a basis (because it is a two-element subset of $\mathbb {R} ^{2}$), i.e., if and only if $ad-bc\neq 0$.
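A concrete check of the determinant condition (a sketch, not from the text): when $ad-bc\neq 0$ the map $(x,y)\mapsto (ax+by,cx+dy)$ can be inverted explicitly with the familiar $2\times 2$ inverse formula, so it is a correspondence.

```python
def make_map(a, b, c, d):
    """The map (x, y) -> (ax + by, cx + dy)."""
    return lambda x, y: (a * x + b * y, c * x + d * y)

def make_inverse(a, b, c, d):
    """The inverse map, valid exactly when the determinant is nonzero."""
    det = a * d - b * c
    assert det != 0, "invertible only when ad - bc is nonzero"
    return lambda p, q: ((d * p - b * q) / det, (-c * p + a * q) / det)

f = make_map(2, 1, 5, 3)        # ad - bc = 6 - 5 = 1, nonzero
g = make_inverse(2, 1, 5, 3)
assert g(*f(4, -7)) == (4, -7)  # the inverse really undoes f
```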
Refer to Lemma 1.8 and Lemma 1.9. Find two more things preserved by isomorphism.
Answer
There are many answers; two are linear independence and subspaces.
To show that if a set $\{{\vec {v}}_{1},\dots ,{\vec {v}}_{n}\}$ is linearly independent then its image $\{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{n})\}$ is also linearly independent, consider a linear relationship among members of the image set.
${\vec {0}}=c_{1}f({\vec {v}}_{1})+\dots +c_{n}f({\vec {v}}_{n})=f(c_{1}{\vec {v}}_{1}+\dots +c_{n}{\vec {v}}_{n})$
Because this map is an isomorphism, it is one-to-one. So $f$ maps only one vector from the domain to the zero vector in the range, that is, $c_{1}{\vec {v}}_{1}+\dots +c_{n}{\vec {v}}_{n}$ equals the zero vector (in the domain, of course). But, if $\{{\vec {v}}_{1},\dots ,{\vec {v}}_{n}\}$ is linearly independent then all of the $c$'s are zero, and so $\{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{n})\}$ is linearly independent also. (Remark. There is a small point about this argument that is worth mention. In a set, repeats collapse, that is, strictly speaking, this is a one-element set: $\{{\vec {v}},{\vec {v}}\}$, because the things listed as in it are the same thing. Observe, however, the use of the subscript $n$ in the above argument. In moving from the domain set $\{{\vec {v}}_{1},\dots ,{\vec {v}}_{n}\}$ to the image set $\{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{n})\}$, there is no collapsing, because the image set does not have repeats, because the isomorphism $f$ is one-to-one.)
To show that if $f:V\to W$ is an isomorphism and if $U$ is a subspace of the domain $V$ then the set of image vectors $f(U)=\{{\vec {w}}\in W\,{\big |}\,{\vec {w}}=f({\vec {u}}){\text{ for some }}{\vec {u}}\in U\}$ is a subspace of $W$, we need only show that it is closed under linear combinations of two of its members (it is nonempty because it contains the image of the zero vector). We have
$c_{1}\cdot f({\vec {u}}_{1})+c_{2}\cdot f({\vec {u}}_{2})=f(c_{1}{\vec {u}}_{1}+c_{2}{\vec {u}}_{2})$
and $c_{1}{\vec {u}}_{1}+c_{2}{\vec {u}}_{2}$ is a member of $U$ because of the closure of a subspace under combinations. Hence the combination of $f({\vec {u}}_{1})$ and $f({\vec {u}}_{2})$ is a member of $f(U)$.
Problem 24
We show that isomorphisms can be tailored to fit in that, sometimes, given vectors in the domain and in the range we can produce an isomorphism associating those vectors.
Let $B=\langle {\vec {\beta }}_{1},{\vec {\beta }}_{2},{\vec {\beta }}_{3}\rangle$ be a basis for ${\mathcal {P}}_{2}$ so that any ${\vec {p}}\in {\mathcal {P}}_{2}$ has a unique representation as ${\vec {p}}=c_{1}{\vec {\beta }}_{1}+c_{2}{\vec {\beta }}_{2}+c_{3}{\vec {\beta }}_{3}$, which we denote in this way.
${\rm {Rep}}_{B}({\vec {p}})={\begin{pmatrix}c_{1}\\c_{2}\\c_{3}\end{pmatrix}}$
Show that the ${\rm {Rep}}_{B}(\cdot )$ operation is a function from ${\mathcal {P}}_{2}$ to $\mathbb {R} ^{3}$ (this entails showing that with every domain vector ${\vec {v}}\in {\mathcal {P}}_{2}$ there is an associated image vector in $\mathbb {R} ^{3}$, and further, that with every domain vector ${\vec {v}}\in {\mathcal {P}}_{2}$ there is at most one associated image vector).
Show that this ${\rm {Rep}}_{B}(\cdot )$ function is one-to-one and onto.
Show that it preserves structure.
Produce an isomorphism from ${\mathcal {P}}_{2}$ to $\mathbb {R} ^{3}$ that fits these specifications.
The operation ${\rm {Rep}}_{B}(\cdot )$ is a function if every member ${\vec {p}}$ of the domain is associated with at least one member of the codomain, and if every member ${\vec {p}}$ of the domain is associated with at most one member of the codomain. The first condition holds because the basis $B$ spans the domain: every ${\vec {p}}$ can be written as at least one linear combination of ${\vec {\beta }}$'s. The second condition holds because the basis $B$ is linearly independent: every member ${\vec {p}}$ of the domain can be written as at most one linear combination of the ${\vec {\beta }}$'s.
For the one-to-one argument, if ${\rm {Rep}}_{B}({\vec {p}})={\rm {Rep}}_{B}({\vec {q}})$, that is, if ${\rm {Rep}}_{B}(p_{1}{\vec {\beta }}_{1}+p_{2}{\vec {\beta }}_{2}+p_{3}{\vec {\beta }}_{3})={\rm {Rep}}_{B}(q_{1}{\vec {\beta }}_{1}+q_{2}{\vec {\beta }}_{2}+q_{3}{\vec {\beta }}_{3})$ then
${\begin{pmatrix}p_{1}\\p_{2}\\p_{3}\end{pmatrix}}={\begin{pmatrix}q_{1}\\q_{2}\\q_{3}\end{pmatrix}}$
and so $p_{1}=q_{1}$ and $p_{2}=q_{2}$ and $p_{3}=q_{3}$, which gives the conclusion that ${\vec {p}}={\vec {q}}$. Therefore this map is one-to-one.
For onto, we can just note that
${\begin{pmatrix}a\\b\\c\end{pmatrix}}$
equals ${\rm {Rep}}_{B}(a{\vec {\beta }}_{1}+b{\vec {\beta }}_{2}+c{\vec {\beta }}_{3})$, and so any member of the codomain $\mathbb {R} ^{3}$ is the image of some member of the domain ${\mathcal {P}}_{2}$.
This map respects addition and scalar multiplication because it respects combinations of two members of the domain (that is, we are using item 2 of Lemma 1.9): where ${\vec {p}}=p_{1}{\vec {\beta }}_{1}+p_{2}{\vec {\beta }}_{2}+p_{3}{\vec {\beta }}_{3}$ and ${\vec {q}}=q_{1}{\vec {\beta }}_{1}+q_{2}{\vec {\beta }}_{2}+q_{3}{\vec {\beta }}_{3}$, we have this.
${\rm {Rep}}_{B}(c\cdot {\vec {p}}+d\cdot {\vec {q}})={\rm {Rep}}_{B}\left((cp_{1}+dq_{1}){\vec {\beta }}_{1}+(cp_{2}+dq_{2}){\vec {\beta }}_{2}+(cp_{3}+dq_{3}){\vec {\beta }}_{3}\right)={\begin{pmatrix}cp_{1}+dq_{1}\\cp_{2}+dq_{2}\\cp_{3}+dq_{3}\end{pmatrix}}=c\cdot {\begin{pmatrix}p_{1}\\p_{2}\\p_{3}\end{pmatrix}}+d\cdot {\begin{pmatrix}q_{1}\\q_{2}\\q_{3}\end{pmatrix}}=c\cdot {\rm {Rep}}_{B}({\vec {p}})+d\cdot {\rm {Rep}}_{B}({\vec {q}})$
Use any basis $B$ for ${\mathcal {P}}_{2}$ whose first two members are $x+x^{2}$ and $1-x$, say $B=\langle x+x^{2},1-x,1\rangle$.
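Computing ${\rm {Rep}}_{B}$ for the basis $B=\langle x+x^{2},1-x,1\rangle$ by hand: matching coefficients of $x^{2}$, $x$, and $1$ gives $c_{1}=p_{2}$, $c_{2}=p_{2}-p_{1}$, and $c_{3}=p_{0}+p_{1}-p_{2}$. This sketch verifies the result by reconstructing the polynomial from its representation.

```python
def rep_B(p):
    """Rep_B of the polynomial p0 + p1*x + p2*x^2 w.r.t. B = <x+x^2, 1-x, 1>."""
    p0, p1, p2 = p
    return (p2, p2 - p1, p0 + p1 - p2)

def from_B(c):
    """Rebuild c1*(x+x^2) + c2*(1-x) + c3*1 as a coefficient triple."""
    c1, c2, c3 = c
    return (c2 + c3, c1 - c2, c1)

p = (4, -1, 2)                        # the polynomial 4 - x + 2x^2
assert from_B(rep_B(p)) == p          # the representation is faithful
assert rep_B((0, 1, 1)) == (1, 0, 0)  # x + x^2 is the first basis vector
```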
Problem 25
Prove that a space is $n$-dimensional if and only if it is isomorphic to $\mathbb {R} ^{n}$. Hint. Fix a basis $B$ for the space and consider the map sending a vector over to its representation with respect to $B$.
Answer
See the next subsection.
Problem 26
(Requires the subsection on Combining Subspaces, which is optional.) Let $U$ and $W$ be vector spaces. Define a new vector space, consisting of the set $U\times W=\{({\vec {u}},{\vec {w}})\,{\big |}\,{\vec {u}}\in U{\text{ and }}{\vec {w}}\in W\}$ along with these operations.
$({\vec {u}}_{1},{\vec {w}}_{1})+({\vec {u}}_{2},{\vec {w}}_{2})=({\vec {u}}_{1}+{\vec {u}}_{2},{\vec {w}}_{1}+{\vec {w}}_{2})\quad {\text{and}}\quad r\cdot ({\vec {u}},{\vec {w}})=(r{\vec {u}},r{\vec {w}})$
This is a vector space, the external direct sum of $U$ and $W$.
Check that it is a vector space.
Find a basis for, and the dimension of, the external direct sum ${\mathcal {P}}_{2}\times \mathbb {R} ^{2}$.
What is the relationship among $\dim(U)$, $\dim(W)$, and $\dim(U\times W)$?
Suppose that $U$ and $W$ are subspaces of a vector space $V$ such that $V=U\oplus W$ (in this case we say that $V$ is the internal direct sum of $U$ and $W$). Show that the map $f:U\times W\to V$ given by
$({\vec {u}},{\vec {w}})\mapsto {\vec {u}}+{\vec {w}}$
is an isomorphism. Thus if the internal direct sum is defined then the internal and external direct sums are isomorphic.
Answer
Most of the conditions in the definition of a vector space are routine. We here sketch the verification of part 1 of that definition.
For closure of $U\times W$, note that because $U$ and $W$ are closed, we have that ${\vec {u}}_{1}+{\vec {u}}_{2}\in U$ and ${\vec {w}}_{1}+{\vec {w}}_{2}\in W$ and so $({\vec {u}}_{1}+{\vec {u}}_{2},{\vec {w}}_{1}+{\vec {w}}_{2})\in U\times W$. Commutativity of addition in $U\times W$ follows from commutativity of addition in $U$ and $W$.
The check for associativity of addition is similar. The zero element is $({\vec {0}}_{U},{\vec {0}}_{W})\in U\times W$ and the additive inverse of $({\vec {u}},{\vec {w}})$ is $(-{\vec {u}},-{\vec {w}})$.
The checks for the second part of the definition of a vector space are also straightforward.
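The componentwise operations on $U\times W$ can be sketched in code, here with ${\mathcal {P}}_{2}$ (coefficient triples) and $\mathbb {R} ^{2}$ (pairs) standing in for $U$ and $W$; the asserts spot-check a few of the vector-space conditions verified above.

```python
def add(v, w):
    """Componentwise addition on P2 x R^2, represented as (triple, pair)."""
    (u1, w1), (u2, w2) = v, w
    return (tuple(a + b for a, b in zip(u1, u2)),
            tuple(a + b for a, b in zip(w1, w2)))

def scale(r, v):
    """Componentwise scalar multiplication on P2 x R^2."""
    u, w = v
    return (tuple(r * a for a in u), tuple(r * a for a in w))

zero = ((0, 0, 0), (0, 0))
v = ((1, 2, 3), (4, 5))
w = ((6, 0, -1), (2, 2))

assert add(v, w) == add(w, v)        # commutativity of addition
assert add(v, zero) == v             # the zero element
assert add(v, scale(-1, v)) == zero  # additive inverses
```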
This set $\langle (1,{\begin{pmatrix}0\\0\end{pmatrix}}),(x,{\begin{pmatrix}0\\0\end{pmatrix}}),(x^{2},{\begin{pmatrix}0\\0\end{pmatrix}}),(0,{\begin{pmatrix}1\\0\end{pmatrix}}),(0,{\begin{pmatrix}0\\1\end{pmatrix}})\rangle$ is a basis for ${\mathcal {P}}_{2}\times \mathbb {R} ^{2}$ because there is one and only one way to represent any member of ${\mathcal {P}}_{2}\times \mathbb {R} ^{2}$ with respect to this set; here is an example.
$(3+2x+x^{2},{\begin{pmatrix}5\\4\end{pmatrix}})=3\cdot (1,{\begin{pmatrix}0\\0\end{pmatrix}})+2\cdot (x,{\begin{pmatrix}0\\0\end{pmatrix}})+1\cdot (x^{2},{\begin{pmatrix}0\\0\end{pmatrix}})+5\cdot (0,{\begin{pmatrix}1\\0\end{pmatrix}})+4\cdot (0,{\begin{pmatrix}0\\1\end{pmatrix}})$
Thus the dimension of ${\mathcal {P}}_{2}\times \mathbb {R} ^{2}$ is five.