Chapter 3 Vector Space

3.1 Subspace of \(\mathbb{R}^n\)

Vector spaces, the focus of this chapter, can first be approached through subsets of \(\mathbb{R}^n\), the space of \(n\)-tuples of real numbers. This section develops that point of view; the formal definition of a vector space is given in Section 3.3.

While row vectors are more compact, column vectors are preferred due to their compatibility with matrix multiplication. For efficiency in written form, we often represent a column vector as \((a_1, \ldots, a_n)^t\) using the transpose operation. As established in Chapter 1, we treat a column vector and a point in \(\mathbb{R}^n\) with identical coordinates as equivalent. Typically, column vectors are indicated by lower-case letters like \(v\) or \(w\). When \(v\) corresponds to \((a_1, \ldots, a_n)^t\), this particular representation is referred to as the coordinate vector of \(v\).

We consider two operations on vectors: \[\begin{align*} & \text{vector addition: } & \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} &+ \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} &= \begin{bmatrix} a_1 + b_1 \\ \vdots \\ a_n + b_n \end{bmatrix}, \\ & \text{scalar multiplication: } & c &\cdot \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} &= \begin{bmatrix} ca_1 \\ \vdots \\ ca_n \end{bmatrix}. \end{align*}\] These operations make \(\mathbb{R}^n\) into a vector space.
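
To see these formulas in action, here is a minimal sketch using NumPy, with 1-D arrays standing in for column vectors; the particular vectors and the scalar \(c = 2.5\) are arbitrary choices for illustration.

```python
import numpy as np

# Column vectors of R^3, represented here as 1-D NumPy arrays.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = 2.5

print(a + b)   # vector addition:       [5. 7. 9.]
print(c * a)   # scalar multiplication: [2.5 5.  7.5]
```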

Definition 3.1 (Subspace) A subset \(W\) of \(\mathbb{R}^n\) is a subspace if it has these properties:

  1. If \(w\) and \(w'\) are in \(W\), then \(w + w'\) is in \(W\).
  2. If \(w\) is in \(W\) and \(c\) is in \(\mathbb{R}\), then \(cw\) is in \(W\).
  3. The zero vector is in \(W\).

There is an equivalent way to state the conditions for a subspace:

Let \(W\) be a non-empty subset of \(\mathbb{R}^n\). Then \(W\) is a subspace if and only if, whenever \(w_1, w_2, \ldots, w_k\) are elements of \(W\) and \(c_1, c_2, \ldots, c_k\) are scalars, the linear combination \(c_1w_1 + c_2w_2 + \cdots + c_kw_k\) is also in \(W\).
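
As a numerical illustration of this criterion (not a proof), the sketch below takes one concrete subspace, the plane \(x + y + z = 0\) in \(\mathbb{R}^3\), and checks that random linear combinations of its elements remain inside it; the helper name `in_W` and the use of NumPy are choices made for this example.

```python
import numpy as np

# A minimal numerical sanity check of the criterion above, for one
# concrete subspace: the plane W = {(x, y, z) : x + y + z = 0}.

def in_W(v, tol=1e-9):
    """Membership test for the illustrative subspace x + y + z = 0."""
    return abs(np.sum(v)) < tol

rng = np.random.default_rng(0)
for _ in range(1000):
    w1, w2 = rng.normal(size=(2, 3))
    w1 -= w1.mean()                  # force coordinate sum to 0, so w1 is in W
    w2 -= w2.mean()                  # likewise for w2
    c1, c2 = rng.normal(size=2)      # arbitrary real scalars
    assert in_W(c1 * w1 + c2 * w2)   # the linear combination stays in W
print("closure held for 1000 random linear combinations")
```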

Definition 3.2 (Nullspace) Given an \(m \times n\) matrix \(A\) with coefficients in \(\mathbb{R}\), the set of vectors in \(\mathbb{R}^n\) whose coordinate vectors solve the homogeneous equation \(AX = 0\) is called the nullspace of \(A\).

Example 3.1 (Example for subspaces) Systems of homogeneous linear equations.

Given an \(m \times n\) matrix \(A\) with coefficients in \(\mathbb{R}\), the set of vectors in \(\mathbb{R}^n\) whose coordinate vectors solve the homogeneous equation \(AX = 0\) is a subspace, called the nullspace of \(A\). Though this is very simple, we will check the conditions for a subspace:

  • Suppose that \(X\) and \(Y\) are solutions of the given homogeneous system, so \(AX=0\) and \(AY=0\). Then \[A(X+Y)=AX+AY=0+0=0.\] Thus \(X+Y\) is a solution.
  • Suppose that \(X\) is a solution of the given homogeneous system, so \(AX=0\), and let \(c\in \mathbb{R}\). Then \[A(cX)=c(AX)=c\cdot 0=0,\] so \(cX\) is a solution.
  • \(A0=0\), so the zero vector \(0\) is a solution.

Therefore the nullspace of \(A\) is a subspace.
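
For a concrete instance of Example 3.1, the sketch below computes a basis of the nullspace of one particular matrix and confirms that each basis vector solves \(AX = 0\); the matrix \(A\) chosen here and the use of SymPy are illustrative assumptions, not part of the text.

```python
from sympy import Matrix

# An illustrative 2x4 matrix with real (here integer) entries.
A = Matrix([[1, 2, 0, -1],
            [0, 1, 1,  2]])

# nullspace() returns a list of column vectors spanning the nullspace of A.
basis = A.nullspace()
for v in basis:
    print(v.T)                           # show each basis vector as a row
    assert A * v == Matrix.zeros(2, 1)   # each basis vector solves AX = 0
```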

Example 3.2 The zero subspace \(W = \{0\}\) and the whole space \(W = \mathbb{R}^n\) are subspaces.

Definition 3.3 (Proper Subspace) A subspace \(W\) of \(\mathbb{R}^n\) is called a proper subspace if it is not the entire space \(\mathbb{R}^n\).

The next proposition describes the proper subspaces of \(\mathbb{R}^2\).

Proposition 3.1 Let \(W\) be a proper subspace of the space \(\mathbb{R}^2\), and let \(w\) be a non-zero vector in \(W\). Then \(W\) consists of the scalar multiples \(cw\) of \(w\). Distinct proper subspaces have only the zero vector in common.

Definition 3.4 The subspace consisting of the scalar multiples \(cw\) of a given nonzero vector \(w\) is called the subspace spanned by \(w\).
Geometrically, it is a line through the origin in the plane \(\mathbb{R}^2\).

Proof (of Proposition 3.1). Since \(W\) is a subspace and \(w \in W\), every scalar multiple \(cw\) lies in \(W\). Conversely, suppose \(W\) contains a vector \(v\) that is not a scalar multiple of \(w\). Then \(w\) and \(v\) are linearly independent, so the \(2 \times 2\) matrix with columns \(w\) and \(v\) is invertible, and every \(u \in \mathbb{R}^2\) can be written as \(u = aw + bv\) with \(a, b \in \mathbb{R}\). By closure under scalar multiplication and addition, \(u \in W\), so \(W = \mathbb{R}^2\), contradicting the assumption that \(W\) is proper. Hence \(W = \{cw : c \in \mathbb{R}\}\). Finally, if two proper subspaces contain a common nonzero vector \(u\), then each of them consists of the scalar multiples of \(u\), so they coincide. Therefore distinct proper subspaces have only the zero vector in common. \(\square\)

3.2 Fields

Definition 3.5 A field \(F\) is a set together with two laws of composition:

  1. Addition: \[\begin{eqnarray} +:F \times F & \rightarrow & F \\ (a,b) & \mapsto & a+b \end{eqnarray}\]
  2. Multiplication: \[\begin{eqnarray} \times:F \times F & \rightarrow & F \\ (a,b) & \mapsto & ab \end{eqnarray}\]

which satisfy these axioms:

  • (F1) Addition makes \(F\) into an abelian group \((F,+)\);
    its identity element is denoted by \(0\).

  • (F2) Multiplication is commutative, and it makes the set of nonzero elements of \(F\) into an abelian group \(F^{\times}\); its identity element is denoted by \(1\).

  • (F3) Distributive law: \[\text{for all } a, b, \text{ and } c \in F, \quad a(b + c) = ab + ac.\]

The first two axioms describe properties of the two laws of composition, addition and multiplication, separately. The third axiom, the distributive law, relates the two laws.

We are familiar with the fact that the real numbers satisfy these axioms, but the fact that these axioms are the only ones needed for the usual algebraic manipulations can only be understood after some experience.

The next lemma explains how the zero element multiplies.

Lemma 3.1 Let \(F\) be a field.

  1. The elements 0 and 1 of \(F\) are distinct.

  2. For all \(a\) in \(F\), \(a0 = 0\) and \(0a = 0\).

  3. Multiplication in \(F\) is associative, and 1 is an identity element.

Proof.

  1. Axiom (F2) implies that \(1\) is not equal to \(0\): the element \(1\) is the identity of the group \(F^{\times}\) of nonzero elements, so \(1\) is nonzero.

  2. Since 0 is the identity for addition, \(0 + 0 = 0\). Then \(a0 + a0 = a(0 + 0)= a0\). Since \((F,+)\) is a group, we can cancel \(a0\) to obtain \(a0 = 0\), and then \(0a = 0\) as well.

  3. Since \(F \setminus \{0\}\) is an abelian group, multiplication is associative when all of the elements involved are nonzero. We need to show that \(a(bc) = (ab)c\) when at least one of the elements is zero, and in that case part (2) shows that both products equal zero. Finally, the element \(1\) is an identity on \(F \setminus \{0\}\), and setting \(a = 1\) in part (2) gives \(1 \cdot 0 = 0\), so \(1\) is an identity on all of \(F\).

Example 3.3

  • Real numbers (\(\mathbb{R}\))
    Easy to verify
  • Complex Numbers (\(\mathbb{C}\))
    Easy to verify
  • Prime field \[\mathbb{F}_p:=\{\overline{0},\overline{1},\ldots,\overline{p-1}\}=\mathbb{Z}/p\mathbb{Z}.\] We saw in the previous chapter that the set \(\mathbb{Z}/n\mathbb{Z}\) of congruence classes modulo an integer \(n\) has laws of addition and multiplication derived from addition and multiplication of integers. All of the axioms for a field hold for the integers except for the existence of multiplicative inverses, and, as noted previously, those axioms carry over to addition and multiplication of congruence classes. But the integers are not closed under division, so there is no reason to expect congruence classes to have multiplicative inverses. The class of \(2\), for example, has no multiplicative inverse modulo \(6\). It is somewhat surprising that when \(p\) is a prime integer, every nonzero congruence class modulo \(p\) has an inverse, and therefore \(\mathbb{Z}/p\mathbb{Z}\) is a field. In the next theorem, we prove the existence of a multiplicative inverse for every nonzero element of the prime field.

Theorem 3.1 Let \(p\) be a prime integer. Every nonzero congruence class modulo \(p\) has a multiplicative inverse, and therefore \(\mathbb{F}_p\) is a field of order \(p\).

We discuss the theorem before giving the proof.
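
As a first piece of that discussion, here is a small computational illustration of the theorem (not a proof), using Python's built-in modular exponentiation; the prime \(p = 7\) and the helper name `inverse_mod_p` are assumptions made for this sketch, and Python 3.8+ is assumed for `pow(a, -1, p)`.

```python
def inverse_mod_p(a: int, p: int) -> int:
    """Return the inverse of the class of a modulo the prime p (Python 3.8+)."""
    return pow(a, -1, p)

p = 7  # an illustrative prime
for a in range(1, p):
    inv = inverse_mod_p(a, p)
    assert (a * inv) % p == 1
    print(f"{a}^(-1) = {inv} (mod {p})")

# By contrast, the class of 2 has no inverse modulo 6 (6 is not prime):
# pow(2, -1, 6) raises ValueError.
```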

3.3 Vector Spaces

With some examples and the concept of a field in hand, we proceed to the definition of a vector space.

Definition 3.6 A vector space \(V\) over a field \(F\) is a set together with two laws of composition:

  1. Addition: \(V \times V \to V\), written \(v, w \mapsto v + w\), for \(v\) and \(w \in V\).
  2. Scalar multiplication by elements of the field: \(F \times V \to V\), written \(c, v \mapsto cv\), for \(c \in F\) and \(v \in V\).

These laws are required to satisfy the following axioms:

  • Addition makes \(V\) into a commutative group \(V^+\), with identity denoted by \(0\).
  • \(1 \cdot v = v\), for all \(v \in V\).
  • Associative law: \((ab)v = a(bv)\), for all \(a, b \in F\) and all \(v \in V\).
  • Distributive laws: \((a + b)v = av + bv\) and \(a(v + w) = av + aw\), for all \(a, b \in F\) and all \(v, w \in V\).

The space \(F^n\) of column vectors with entries in the field \(F\) forms a vector space over \(F\), when addition and scalar multiplication are defined componentwise as in Section 3.1.
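
To illustrate \(F^n\) over a field other than \(\mathbb{R}\), the following sketch performs componentwise addition and scalar multiplication in \(\mathbb{F}_5^3\); representing vectors as tuples of residues modulo \(5\) and the helper names `add` and `scale` are choices made for this example.

```python
# Vectors in F_5^3, represented as tuples of residues mod p = 5.
p = 5

def add(v, w):
    """Componentwise addition in F_p^n."""
    return tuple((a + b) % p for a, b in zip(v, w))

def scale(c, v):
    """Scalar multiplication by the class of c in F_p."""
    return tuple((c * a) % p for a in v)

v = (1, 4, 2)
w = (3, 3, 4)
print(add(v, w))    # (4, 2, 1)
print(scale(3, v))  # (3, 2, 1)
```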

Example 3.4

  1. Let \(V = \mathbb{C}\) be the set of complex numbers. Forget about multiplication of two complex numbers. Remember only addition \(x + y\) and multiplication \(rx\) of a complex number \(x\) by a real number \(r\). These operations make \(V\) into a real vector space.

    Verification

    1. Closure under addition: If \(x, y \in \mathbb{C}\), then \(x + y \in \mathbb{C}\) since the sum of two complex numbers is a complex number.
    2. Associativity of addition: For \(x, y, z \in \mathbb{C}\), \((x + y) + z = x + (y + z)\).
    3. Existence of additive identity: The complex number \(0 + 0i\) serves as the additive identity, as \(x + 0 = x\) for all \(x \in \mathbb{C}\).
    4. Existence of additive inverses: For any \(x \in \mathbb{C}\), there exists \(-x \in \mathbb{C}\) such that \(x + (-x) = 0\).
    5. Closure under scalar multiplication: If \(x \in \mathbb{C}\) and \(r \in \mathbb{R}\), then \(rx \in \mathbb{C}\).
    6. Distributive properties: For \(r, s \in \mathbb{R}\) and \(x, y \in \mathbb{C}\),
      • \(r(x + y) = rx + ry\).
      • \((r + s)x = rx + sx\).
    7. Compatibility of scalar multiplication: For \(r, s \in \mathbb{R}\) and \(x \in \mathbb{C}\), \((rs)x = r(sx)\).
    8. Identity for scalar multiplication: For \(x \in \mathbb{C}\), \(1 \cdot x = x\).

    All axioms hold, so \(V\) is a real vector space.

  2. The set of real polynomials \(p(x) = a_nx^n + \dots + a_0\) is a real vector space, with addition of polynomials and multiplication of polynomials by real numbers as its laws of composition.

    Verification

    We verify the vector space axioms for the set of polynomials:

    1. Closure under addition: The sum of two polynomials is a polynomial.
    2. Associativity of addition: Addition of polynomials is associative.
    3. Existence of additive identity: The zero polynomial, \(0(x) = 0\), serves as the additive identity.
    4. Existence of additive inverses: For any polynomial \(p(x)\), the polynomial \(-p(x)\) satisfies \(p(x) + (-p(x)) = 0\).
    5. Closure under scalar multiplication: Multiplying a polynomial by a real scalar produces a polynomial.
    6. Distributive properties: For \(r, s \in \mathbb{R}\) and polynomials \(p(x)\), \(q(x)\),
      • \(r(p(x) + q(x)) = rp(x) + rq(x)\).
      • \((r + s)p(x) = rp(x) + sp(x)\).
    7. Compatibility of scalar multiplication: \((rs)p(x) = r(sp(x))\).
    8. Identity for scalar multiplication: \(1 \cdot p(x) = p(x)\).

    All axioms hold, so the set of polynomials is a real vector space.

  3. The set of continuous real-valued functions on the real line is a real vector space, with addition of functions \(f + g\) and multiplication of functions by real numbers as its laws of composition.

    Verification

    We check the vector space axioms for the set \(C(\mathbb{R})\) of continuous functions \(f : \mathbb{R} \to \mathbb{R}\):

    1. Closure under addition: If \(f, g \in C(\mathbb{R})\), then \(f + g\) is continuous, so \(f + g \in C(\mathbb{R})\).
    2. Associativity of addition: Function addition is associative.
    3. Existence of additive identity: The zero function, \(f(x) = 0\), satisfies \(f + 0 = f\).
    4. Existence of additive inverses: For \(f \in C(\mathbb{R})\), the function \(-f\) satisfies \(f + (-f) = 0\).
    5. Closure under scalar multiplication: If \(r \in \mathbb{R}\) and \(f \in C(\mathbb{R})\), then \(rf \in C(\mathbb{R})\).
    6. Distributive properties: For \(r, s \in \mathbb{R}\) and \(f, g \in C(\mathbb{R})\),
      • \(r(f + g) = rf + rg\).
      • \((r + s)f = rf + sf\).
    7. Compatibility of scalar multiplication: \((rs)f = r(sf)\).
    8. Identity for scalar multiplication: \(1 \cdot f = f\).

    All axioms hold, so \(C(\mathbb{R})\) is a real vector space.

  4. The set \(S\) of solutions of the differential equation \(\frac{dy}{dx} = -y\) is a real vector space. Every solution has the form \(y(x) = Ce^{-x}\) for a constant \(C \in \mathbb{R}\), and the verification below uses this form; a short symbolic check follows the example.

    1. Closure under addition: If \(y_1(x) = C_1e^{-x}\) and \(y_2(x) = C_2e^{-x}\), then: \[ y_1(x) + y_2(x) = C_1e^{-x} + C_2e^{-x} = (C_1 + C_2)e^{-x}. \] Since \(C_1 + C_2 \in \mathbb{R}\), \(y_1 + y_2 \in S\). Hence, closure under addition holds.

    2. Associativity of addition: For \(y_1(x) = C_1e^{-x}\), \(y_2(x) = C_2e^{-x}\), and \(y_3(x) = C_3e^{-x}\): \[ (y_1 + y_2) + y_3 = [(C_1 + C_2)e^{-x}] + C_3e^{-x} = (C_1 + C_2 + C_3)e^{-x}. \] Similarly: \[ y_1 + (y_2 + y_3) = C_1e^{-x} + [(C_2 + C_3)e^{-x}] = (C_1 + C_2 + C_3)e^{-x}. \] Thus, addition is associative.

    3. Existence of additive identity: The zero solution \(y(x) = 0\) (where \(C = 0\)) satisfies: \[ y(x) + 0 = y(x), \quad \text{for all } y(x) \in S. \] Hence, an additive identity exists.

    4. Existence of additive inverses: For \(y(x) = Ce^{-x}\), the function \(-y(x) = -Ce^{-x}\) satisfies: \[ y(x) + (-y(x)) = Ce^{-x} + (-Ce^{-x}) = 0. \] Thus, every element has an additive inverse in \(S\).

    5. Closure under scalar multiplication: For \(r \in \mathbb{R}\) and \(y(x) = Ce^{-x}\), we have: \[ ry(x) = r(Ce^{-x}) = (rC)e^{-x}. \] Since \(rC \in \mathbb{R}\), \(ry(x) \in S\). Hence, closure under scalar multiplication holds.

    6. Distributive property (scalar over function addition): For \(r \in \mathbb{R}\) and \(y_1(x), y_2(x) \in S\), we have: \[ r(y_1(x) + y_2(x)) = r[(C_1 + C_2)e^{-x}] = (rC_1 + rC_2)e^{-x}. \] On the other hand: \[ ry_1(x) + ry_2(x) = (rC_1)e^{-x} + (rC_2)e^{-x} = (rC_1 + rC_2)e^{-x}. \] Thus, this property is satisfied.

    7. Distributive property (function over scalar addition): For \(r, s \in \mathbb{R}\) and \(y(x) = Ce^{-x}\), we have: \[ (r + s)y(x) = (r + s)(Ce^{-x}) = (rC + sC)e^{-x}. \] On the other hand: \[ ry(x) + sy(x) = (rC)e^{-x} + (sC)e^{-x} = (rC + sC)e^{-x}. \] Thus, this property is satisfied.

    8. Compatibility of scalar multiplication: For \(r, s \in \mathbb{R}\) and \(y(x) = Ce^{-x}\), we have: \[ (rs)y(x) = (rs)(Ce^{-x}) = (r(sC))e^{-x}. \] On the other hand: \[ r(sy(x)) = r[(sC)e^{-x}] = (r(sC))e^{-x}. \] Thus, this property is satisfied.

    9. Identity for scalar multiplication: For \(y(x) = Ce^{-x}\), we have: \[ 1 \cdot y(x) = 1 \cdot (Ce^{-x}) = Ce^{-x}. \] Thus, the identity property holds.

    Since all of the required axioms are verified, the set \(S\) is indeed a real vector space.
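
To complement the verification in Example 3.4(4), the sketch below checks symbolically that an arbitrary linear combination of two solutions of \(\frac{dy}{dx} = -y\) is again a solution; the symbol names and the use of SymPy are choices made for this illustration.

```python
import sympy as sp

x, C1, C2, r, s = sp.symbols('x C1 C2 r s')

# Two arbitrary solutions of dy/dx = -y.
y1 = C1 * sp.exp(-x)
y2 = C2 * sp.exp(-x)

# A general linear combination with real scalars r and s.
y = r * y1 + s * y2

# The combination satisfies the same equation: dy/dx + y simplifies to 0.
assert sp.simplify(sp.diff(y, x) + y) == 0
print("r*y1 + s*y2 solves dy/dx = -y")
```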