Notebook 11 – Math 2121, Fall 2020

In practice, (abstract) vector spaces that are not readily identified with subspaces of Rⁿ often arise as function spaces, like the vector space of functions R → R. Let's examine how we can think of vectors, vector addition, and scalar multiplication in such spaces.


Recall from a few weeks ago how we can visualize vector operations in R².


First, create some vectors.

u
2×1 Array{Int64,2}:
 7
 2
v
2×1 Array{Int64,2}:
 -1
  7

Now choose some scalars.

a = 2 b = -1.9

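The plotting code for this cell is hidden in the notebook. As a rough sketch of what such a cell computes (the use of Plots.jl and its quiver recipe is an assumption here, not the notebook's actual helper), the linear combination a·u + b·v can be formed and drawn like this:

```julia
using Plots  # assumption: the hidden plotting cells likely use Plots.jl or similar

u = [7, 2]
v = [-1, 7]
a, b = 2, -1.9

w = a*u + b*v   # the linear combination a·u + b·v = [15.9, -9.3]

# Draw u, v, and a·u + b·v as arrows based at the origin.
quiver([0, 0, 0], [0, 0, 0], quiver = ([u[1], v[1], w[1]], [u[2], v[2], w[2]]))
```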

Visualizing the vector space operations for

Fun(R, R) = {functions f : R → R}

is not so different.

plot_functions (generic function with 1 method)
plot_combinations (generic function with 1 method)
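
The definitions of plot_functions and plot_combinations are hidden in the notebook. Here is a minimal sketch of what a plot_combinations-style helper might look like, assuming Plots.jl; the name, signature, and keyword arguments are illustrative rather than the notebook's actual code:

```julia
using Plots  # assumption

# Plot f, g, and the linear combination a*f + b*g on the interval [lo, hi].
function plot_combinations(f, g, a, b; lo = -5, hi = 5)
    xs = range(lo, hi, length = 500)
    plot(xs, f.(xs), label = "f")
    plot!(xs, g.(xs), label = "g")
    plot!(xs, a .* f.(xs) .+ b .* g.(xs), label = "a*f + b*g")
end
```

Calling plot_combinations(f, g, a, b) then overlays the two functions and their combination, just as one would overlay arrows for vectors in R².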

Create some functions R → R.

f (generic function with 1 method)
g (generic function with 1 method)

The sinc function is defined by sinc(x) = sin(πx)/(πx).

You may remember from calculus that lim_{x→0} sin(x)/x = 1.

The sinc function is therefore defined at all x ∈ R and has sinc(0) = 1.

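Julia's Base library already exports a sinc function with exactly this normalization. A hand-rolled version (shown only for illustration; the notebook presumably just uses Base's sinc) has to treat x = 0 as a special case:

```julia
# sinc(x) = sin(πx)/(πx), extended by its limit value 1 at x = 0
mysinc(x) = x == 0 ? one(float(x)) : sin(π*x) / (π*x)

mysinc(0)     # 1.0
mysinc(0.5)   # ≈ 0.6366, i.e. 2/π
sinc(0.5)     # Base.sinc agrees, since it uses the same sin(πx)/(πx) convention
```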

Choose some scalars in R:

a = 2.7 b = -4.1


The function space Fun(R, R) = {functions f : R → R} is really big, too big for many natural operations like differentiation and integration to make sense for all elements.

The space Fun(R, R) contains many natural subspaces that may be more suitable for applications:

  • The space L¹(R) = {f ∈ Fun(R, R) : ∫ |f(x)| dx < ∞} of integrable functions.

  • The space L²(R) = {f ∈ Fun(R, R) : f² ∈ L¹(R)} of square-integrable functions (a numerical membership check is sketched after this list).

  • The space L∞(R) = {f ∈ Fun(R, R) : there is an N with |f(x)| < N for all x} of bounded functions.
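
As a quick numerical sanity check of membership in L²(R), one can approximate the integral of f² with a quadrature routine. The sketch below assumes the QuadGK.jl package; the notebook's hidden cells may do this differently:

```julia
using QuadGK  # assumption: not necessarily what the notebook uses

# sinc is square-integrable: ∫ sinc(x)² dx = 1, and a wide finite window gets close
quadgk(x -> sinc(x)^2, -200, 200)[1]   # ≈ 0.9995, i.e. ≈ 1 up to truncation

# cos is not: the integral of cos(x)² over [-L, L] grows like L as the window widens
quadgk(x -> cos(x)^2, -100, 100)[1]    # ≈ 100
```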

Let's think about linear transformations between these spaces.


A linear transformation T : L²(R) → L²(R) is a function that takes a square-integrable function f as input and gives another square-integrable function T(f) as output, such that

T(f + g) = T(f) + T(g)   and   T(cf) = c·T(f)

for all f, g ∈ L²(R) and c ∈ R.

Perhaps the most famous linear transformation L²(R) → L²(R) is the Fourier transform FT.

One definition of this (technically, the Fourier cosine transform) is

FT(f) = (the function with formula x ↦ ∫ f(y) cos(2πxy) dy).

This transformation is linear because of the rules for integration from calculus, which allow us to expand sums and scalar multiples of functions under the integral sign.

Let's see how we can implement FT in Julia and explore some examples.

FT (generic function with 2 methods)
plot_fourier_transform (generic function with 2 methods)
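
The notebook's actual FT implementation is hidden. One plausible sketch, assuming QuadGK.jl for the integral and a finite truncation window (both the package and the window keyword are assumptions made here for illustration), is:

```julia
using QuadGK  # assumption

# Fourier cosine transform: FT(f) is the function x ↦ ∫ f(y) cos(2πxy) dy.
# The integral is truncated to [-window, window], which is fine for functions
# that vanish or decay outside a bounded region.
function FT(f; window = 100.0)
    return x -> quadgk(y -> f(y) * cos(2π * x * y), -window, window)[1]
end
```

Linearity of FT is inherited from linearity of the integral: up to quadrature error, FT(y -> 2f(y) + 3g(y))(x) agrees with 2·FT(f)(x) + 3·FT(g)(x).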

Our definition of the Fourier transform does not make sense for cos(x), since this function is not square-integrable: ∫ |cos(x)|² dx = ∞.

But we can apply FT to things like the truncated cosine function

cos(x)·1_{[-2π, 2π]}(x)

where 1_{[a,b]}(x) is the function that takes the value 1 for a ≤ x ≤ b and 0 for all other x.
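
As a hedged sketch (reusing the FT sketch above, or the notebook's own FT), the indicator function and the truncated cosine can be written as:

```julia
# 1_{[a,b]}(x): equal to 1 for a ≤ x ≤ b and 0 otherwise
indicator(a, b) = x -> (a <= x <= b ? 1.0 : 0.0)

# cos(x) truncated to [-2π, 2π]
truncated_cos(x) = cos(x) * indicator(-2π, 2π)(x)

# Its Fourier transform can now be evaluated pointwise, e.g. FT(truncated_cos)(0.1);
# values near x = ±1/(2π) ≈ ±0.159 should stand out from the rest.
```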


The result is the sum of two 'pulses' at x = ±1/(2π).

---

What about the Fourier transform of cos(x²)·1_{[-4π², 4π²]}(x)?


Ignoring error, the Fourier transform of the above function is exactly √π·cos(π²x² - π/4).


An interesting example comes from sinc(x), whose Fourier transform looks exactly like 1_{[-1/2, 1/2]}(x).


The Fourier transform of 1_{[-1/2, 1/2]}(x), in turn, looks just like sinc(x).


We can use these examples to construct eigenvectors of FT.

If T : V → V is a linear transformation from a vector space to itself, then an eigenvector is a nonzero element v ∈ V such that T(v) = λv for some scalar λ ∈ R.

In this case λ is the corresponding eigenvalue.

---

Now, if FT(f) = g and FT(g) = f where f ≠ g, then by linearity

FT(f + g) = FT(f) + FT(g) = g + f = f + g

so f + g is an eigenvector with eigenvalue 1. Similarly,

FT(f - g) = FT(f) - FT(g) = g - f = -(f - g)

so f - g is an eigenvector with eigenvalue -1.

We can apply this to f(x) = sinc(x) and g(x) = 1_{[-1/2, 1/2]}(x).
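
The notebook's plots of these combinations are produced by hidden cells. A rough numerical version of the same check, reusing the FT and indicator sketches above (and redefining f and g here just for illustration), might look like:

```julia
f(x) = sinc(x)
g(x) = -1/2 <= x <= 1/2 ? 1.0 : 0.0   # 1_{[-1/2, 1/2]}(x)

hplus(x)  = f(x) + g(x)   # expected eigenvalue +1
hminus(x) = f(x) - g(x)   # expected eigenvalue -1

# Up to quadrature and truncation error we expect, at sample points x,
#   FT(hplus)(x) ≈ hplus(x)      and      FT(hminus)(x) ≈ -hminus(x).
```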


This shows that λ = 1 and λ = -1 are both eigenvalues of FT.

Here are some other interesting eigenvectors of FT for these eigenvalues:


Are there any other eigenvalues λ for FT besides λ = 1 and λ = -1? Yes, λ = 0:


The numbers -1, 0, 1 are the only eigenvalues of FT. Here's why.

A function f : R → R is even if f(-x) = f(x) for all x and odd if f(-x) = -f(x) for all x.

If f ∈ L²(R) is odd then FT(f) = 0, as in the example above.

If f ∈ L²(R) is even then FT(FT(f)) = f, as we saw with f(x) = sinc(x).

Every f ∈ L²(R) can be written as f = f_even + f_odd where

f_even(x) = (f(x) + f(-x))/2   and   f_odd(x) = (f(x) - f(-x))/2.

Also FT(f_even) = FT(f)_even.

This means FT(f) = FT(f_even) = FT(f)_even, so FT(f) is always an even function.

So the only way we can have FT(f) = λf is if

  • the function f is odd and λ = 0, or

  • the function f is even and λ² = 1, since then λ²f = FT(FT(f)) = f.
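
A brief sketch of the even/odd decomposition in code (the helper names are made up for illustration, and the FT claim at the end again refers to the earlier FT sketch):

```julia
# Even and odd parts of a function f
even_part(f) = x -> (f(x) + f(-x)) / 2
odd_part(f)  = x -> (f(x) - f(-x)) / 2

# Example: f(x) = exp(-x^2) + x*exp(-x^2) splits into an even and an odd piece
f(x) = exp(-x^2) + x * exp(-x^2)
even_part(f)(1.5) ≈ exp(-1.5^2)         # true: the even part is exp(-x²)
odd_part(f)(1.5)  ≈ 1.5 * exp(-1.5^2)   # true: the odd part is x·exp(-x²)

# Numerically, FT(odd_part(f)) is the zero function, while FT(f) agrees with
# FT(even_part(f)) and is itself an even function.
```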

---

Because the Fourier transform is a linear transformation L²(R) → L²(R), which is infinite-dimensional, there is no corresponding standard matrix as there would be for a linear transformation Rⁿ → Rᵐ.

As such, although we can figure out the eigenvalues of FT, it is much less straightforward to find the eigenvectors (or a basis for each eigenspace): we cannot do this just by finding the null space of a certain matrix.

Solving this problem is something covered in a more advanced course on Fourier/harmonic analysis.


Very briefly, here is why the Fourier transform is an important/useful linear transformation:

  • The Fourier transform converts waves to bits. More precisely, FT converts (decaying) sinusoidal signals to discrete signals. For this reason, FT is the foundation of signal processing.

  • Though not emphasized much in our discussion, FT is also useful in algebra because it transforms differential equations into polynomial equations, which are easier to solve.

  • Finally, and significantly: there are efficient algorithms to compute FT (see the sketch below).
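
The last bullet refers to fast Fourier transform (FFT) algorithms. As a hedged illustration of the discrete analogue (assuming the FFTW.jl package, which the notebook may or may not load), a sampled signal is transformed like this:

```julia
using FFTW  # assumption: provides fft via the FFTW library

# One second of a signal with sinusoidal components at 5 Hz and 12 Hz
n  = 256
ts = (0:n-1) ./ n
signal = cos.(2π .* 5 .* ts) .+ 0.5 .* cos.(2π .* 12 .* ts)

spectrum = fft(signal)   # discrete Fourier transform in O(n log n) operations
# abs.(spectrum) has sharp peaks at the array indices corresponding to 5 Hz and 12 Hz
```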
