I didn’t post last week because I was on vacation. But on vacation I decided to write about something a little out of my comfort zone: Fourier series. (Yeah. Try being my wife.)

Fourier series (and the related Fourier transform) made some sense to me before, but I never really learned how to derive them, so they always seemed a bit magical. As I was going through Arthur Mattuck’s excellent differential equations course on MIT OpenCourseWare, the Fourier series clicked for me, so I thought I’d distill it here.

I’m tempted to categorize this post under “things Isaac should have learned in school (had he been paying attention)”, but I don’t think I ever had a course that taught this. If *you’re* still paying attention, I will assume that you recall your basic linear algebra and have some idea of (and interest in) what a Fourier series is. Given that, this isn’t hard.

## It’s All Linear Algebra

Recall that the Fourier series represents *periodic* functions as a sum of sines and cosines. In this post, we’ll deal with functions of period $2\pi$ over the interval $[-\pi, \pi]$. The generalization to arbitrary periods is straightforward, but this interval illustrates the scheme well.

The fundamental “click” for me was that this was all linear algebra. The Fourier series:

- Looks at functions over an interval as a vector space with an inner product;
- Picks an orthonormal basis for the space; and
- Represents an arbitrary function in this basis by projecting it onto the basis vectors.

Given that we’re working over the interval $[-\pi, \pi]$, we’ll define a vector space $V$ where the scalars are taken from $\mathbb{R}$ and the vectors are functions over the interval $[-\pi, \pi]$. In addition, we’ll define an inner product by:

$\langle f, g \rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,g(x)\,dx$

I’ll leave it to you to check that these meet the requirements of a vector space and an inner product. This is worth doing, particularly if using functions as vectors seems odd to you.

In any case, that’s all for step 1.
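If the definitions feel abstract, here’s a quick numerical sketch of this inner product. The code (and the helper name `inner`) is mine, not the course’s; it just approximates the integral with a midpoint Riemann sum.

```python
import math

# A numerical sketch of the inner product defined above:
# <f, g> = (1/2pi) * integral_{-pi}^{pi} f(x) g(x) dx.
def inner(f, g, steps=10_000):
    """Approximate <f, g> with a midpoint Riemann sum over [-pi, pi]."""
    dx = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        x = -math.pi + (i + 0.5) * dx  # midpoint of the i-th subinterval
        total += f(x) * g(x) * dx
    return total / (2 * math.pi)

one = lambda x: 1.0
print(inner(one, one))            # <1, 1> should be 1
print(inner(math.sin, math.sin))  # <sin, sin> should be 1/2
print(inner(math.sin, math.cos))  # sin and cos should be orthogonal
```

Note that $\langle \sin, \sin \rangle$ comes out to $\frac{1}{2}$, not $1$; that $\frac{1}{2\pi}$ in front of the integral will matter when we normalize the basis.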

## Step 2: Choose an Orthonormal Basis

We’re going to choose an orthonormal basis $B$ for our vector space $V$. The vectors of $V$ are functions, so our basis will be a set of functions. The basis will be infinite, meaning that $V$ is infinite-dimensional.

To build our basis, we’re going to use the constant function $1(x) = 1$ as well as sines and cosines of positive integral multiples of $x$. In particular, our basis is:

$B = \{1(x)\} \cup \{\sqrt{2}\sin(nx) \mid n \in \mathbb{Z}^+\} \cup \{\sqrt{2}\cos(nx) \mid n \in \mathbb{Z}^+\}$

For these to be an orthonormal basis, we first have to show that any two of these are orthogonal, i.e., that $\langle u, v \rangle = 0$ for all $u, v \in B$ with $u \neq v$. There are a few cases to check:

- If one of $u$ and $v$ is $1(x)$, then the inner product is just an integral of a sine or cosine function over a whole number of periods, which is zero.
- If $u$ is a sine and $v$ is a cosine, then their product is an odd function, so the integral over $[-\pi, \pi]$ is zero.
- If $u$ and $v$ are both sines (or both cosines) with *different* multiples of $x$, then the inner product is also zero. As Mattuck suggests, you can work this out via complex exponentials or trig identities; he does it in his lecture via differential equations and a nice symmetry argument.

We don’t need to show linear independence separately because it’s implied by orthogonality. But we do need to check that all of the vectors have unit length:

- For $1(x)$, we find that $|1(x)|^2 = \langle 1(x),1(x)\rangle = \frac{1}{2\pi}\int_{-\pi}^\pi 1\, dx = 1$.
- For $\sqrt{2}\sin(nx)$, we find that $|\sqrt{2}\sin(nx)|^2 = \frac{1}{2\pi}\int_{-\pi}^\pi 2\sin^2(nx)\, dx = \frac{1}{2\pi}\int_{-\pi}^\pi \big(1 - \cos(2nx)\big)\, dx = 1$. So our sine vectors are normalized.
- This same argument holds for the cosines.

So $B$ is an orthonormal set. We haven’t shown that $B$ actually spans the space of functions. I’ll mention this again, but for now, it’s sufficient to know that it spans *some* space.
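We can spot-check the orthonormality numerically: the Gram matrix of pairwise inner products should be the identity. This is a finite sanity check, not a proof, and the code is my own sketch; it assumes the basis $\{1,\ \sqrt{2}\sin(nx),\ \sqrt{2}\cos(nx)\}$, which is what makes the norms come out to $1$ under the $\frac{1}{2\pi}$ inner product.

```python
import math

# Check, numerically, that the basis functions are pairwise orthogonal and have
# unit length: the Gram matrix gram[i][j] = <B[i], B[j]> should be the identity.
def inner(f, g, steps=10_000):
    """<f, g> = (1/2pi) * integral over [-pi, pi], via a midpoint sum."""
    dx = 2 * math.pi / steps
    return sum(
        f(-math.pi + (i + 0.5) * dx) * g(-math.pi + (i + 0.5) * dx)
        for i in range(steps)
    ) * dx / (2 * math.pi)

B = [lambda x: 1.0]
for n in range(1, 4):  # 1(x), plus sines and cosines up to n = 3
    B.append(lambda x, n=n: math.sqrt(2) * math.sin(n * x))
    B.append(lambda x, n=n: math.sqrt(2) * math.cos(n * x))

gram = [[inner(u, v) for v in B] for u in B]
ok = all(
    abs(gram[i][j] - (1.0 if i == j else 0.0)) < 1e-6
    for i in range(len(B)) for j in range(len(B))
)
print("orthonormal:", ok)
```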

## Step 3: Represent a Function in this Basis

Now that we have a basis, we can take an arbitrary vector $f \in V$ and write it as a linear combination of the basis vectors:

$f(x) = \sum_{v \in B} c_v\, v(x)$

All we need to do is figure out the constants $c_v$. But since $V$ is an inner product space, we can find the coefficient for each $v$ as:

$c_v = \langle f, v \rangle$

Instead of using $c_v$, we’ll use $a_n$ as the constant for the $\sqrt{2}\cos(nx)$ term, and $b_n$ for the $\sqrt{2}\sin(nx)$ term. For now, we’ll use $c_0$ as the constant for the $1(x)$ term.

Now:

$f(x) = c_0 + \sum_{n=1}^{\infty}\big(a_n\cos(nx) + b_n\sin(nx)\big)$

Where:

$c_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx, \qquad a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx$

And that’s the Fourier series. Note that we’ve folded the two factors of $\sqrt{2}$ (one from the coefficient $\langle f, v \rangle$, one from the basis vector $v$ itself) into the $a_n$ and $b_n$ terms, which is why the sines and cosines above are missing the leading $\sqrt{2}$, and why $a_n$ and $b_n$ have $\frac{1}{\pi}$ out front instead of $\frac{1}{2\pi}$.
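As a sanity check on these formulas, here’s a sketch (my own example, not from the post) computing the coefficients for the square wave $f(x) = \mathrm{sign}(x)$, whose classical Fourier series is $\frac{4}{\pi}\left(\sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \cdots\right)$.

```python
import math

# Compute c0, a_n, b_n numerically for the square wave f(x) = sign(x).
# We expect c0 = 0, every a_n = 0, and b_n = 4/(pi*n) for odd n, 0 for even n.
def integrate(f, steps=20_000):
    """Midpoint-rule integral of f over [-pi, pi]."""
    dx = 2 * math.pi / steps
    return sum(f(-math.pi + (i + 0.5) * dx) for i in range(steps)) * dx

square = lambda x: 1.0 if x > 0 else -1.0

c0 = integrate(square) / (2 * math.pi)
a = [integrate(lambda x, n=n: square(x) * math.cos(n * x)) / math.pi for n in range(1, 5)]
b = [integrate(lambda x, n=n: square(x) * math.sin(n * x)) / math.pi for n in range(1, 5)]

print("c0  =", round(c0, 6))
print("b_1 =", round(b[0], 4), "vs 4/pi =", round(4 / math.pi, 4))
print("b_2 =", round(b[1], 6))
```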

This isn’t quite the form you usually see. We can tweak this slightly by noticing that $1(x) = \cos(0 \cdot x)$, so we can replace the constant term and its associated constant $c_0$ with a cosine term; we just have to watch out for a factor of $\frac{1}{2}$, since $a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\,dx = 2c_0$. Doing so, we get the usual formulation:

$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\big(a_n\cos(nx) + b_n\sin(nx)\big)$

Where the $a_n$ and $b_n$ terms are as above.
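To watch the usual formulation converge, here’s one more sketch (again my own example) evaluating partial sums of the series for the sawtooth $f(x) = x$ on $[-\pi, \pi]$. For this $f$, every $a_n = 0$ and $b_n = \frac{2(-1)^{n+1}}{n}$, a standard exercise with the integrals above.

```python
import math

# Partial sums of the Fourier series for the sawtooth f(x) = x on [-pi, pi],
# using its textbook coefficients a_n = 0 and b_n = 2*(-1)**(n+1)/n.
def partial_sum(x, terms):
    """Evaluate the first `terms` terms of the sawtooth's Fourier series at x."""
    return sum(2 * (-1) ** (n + 1) / n * math.sin(n * x) for n in range(1, terms + 1))

x = math.pi / 2  # f(x) = pi/2 here
for terms in (1, 10, 100, 1000):
    print(terms, partial_sum(x, terms))  # should creep toward pi/2 as terms grows
```

The convergence is slow (the coefficients only decay like $\frac{1}{n}$, a symptom of the sawtooth’s jump at $\pm\pi$), but it does get there.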

## A Few Loose Ends

One thing I glossed over is the space spanned by our basis. A quick counting argument shows that there are

$2^{\mathfrak{c}}$

functions in our vector space, but only

$\mathfrak{c}$

functions in the span of our “basis” (where $\mathfrak{c}$ is the cardinality of $\mathbb{R}$). Since $\mathfrak{c} < 2^{\mathfrak{c}}$, our basis doesn’t actually span the space of functions. We’re missing a lot of functions. But which ones? It turns out that our basis actually spans the set of square integrable functions, but I’m afraid that showing this is still beyond me.

Also still beyond me is the extension of this to the full real line: the Fourier *transform*. Hopefully I’ll be back sometime to explain that one, too.