Motivated by e-mail from Pat W.
Wavelets ...and orthonormal expansions
Consider a vector W in 2-dimensional space, as illustrated in Figure 1.
It can be generated by adding together multiples of two "basis" vectors, u and v.
Indeed, any vector in this space can be so generated.
So what's so special about the two vectors u and v?
They're perpendicular to each other ... or orthogonal.
They each have a length equal to 1 ... they're normalized.
Indeed, they're said to be orthonormal.
Now consider an arbitrary vector: W = a u + b v.
To determine the value of the two numbers a and b we introduce a "dot product" or "scalar product" in this 2-D space, namely: u·v
which takes two vectors and returns a (scalar) number, like so:
u·v = ||u|| ||v|| cos θ
where ||u|| and ||v|| denote the lengths of vectors u and v and θ is the angle between them.
Note that u·u = ||u||² cos θ. For our particular vector, ||u|| = 1 and θ = 0 so u·u = 1
... since cos 0 = 1.
Since our vectors u and v are orthonormal, then ||u|| = ||v|| = 1 and θ = π/2 (or 90 degrees)
and u·v = 0.
Note that u·v = v·u.
Writing W = a u + b v we consider:
W·u = a u·u + b v·u = a (1) + b (0) = a
and W·v = a u·v + b v·v = a (0) + b (1) = b.
In other words, using the "scalar" product (sometimes called the "inner product") we can extract the coefficients a and b
if we're given W.
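Here's a tiny numerical sketch of that extraction (the coefficients a = 3 and b = 4 are just made up for illustration):

```python
# Orthonormal basis vectors in 2-D: perpendicular, each of length 1
u = (1.0, 0.0)
v = (0.0, 1.0)

def dot(p, q):
    """Scalar product p·q = p1 q1 + p2 q2 (equal to ||p|| ||q|| cos θ)."""
    return p[0]*q[0] + p[1]*q[1]

# Build W = a u + b v with made-up coefficients a = 3, b = 4 ...
a, b = 3.0, 4.0
W = (a*u[0] + b*v[0], a*u[1] + b*v[1])

# ... then recover them with the scalar product
print(dot(W, u))  # prints 3.0, i.e. a
print(dot(W, v))  # prints 4.0, i.e. b
```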
>And what if that W had more components? I mean, what if ...?
If we had an n-dimensional space with orthonormal basis vectors u1, u2, ... un, then we could write an arbitrary vector as:
W = a1 u1 + a2 u2 +
... + an un
and we could extract each coefficient using the "inner product" like so:
a1 = W·u1 and
a2 = W·u2 etc. etc.
We can then write, for any vector in our n-dimensional space:
W = (W·u1) u1
+ (W·u2) u2
+ ... + (W·un) un
= Σ (W·uj) uj
The n numbers (W·uj) (with j going from 1 to n)
are the coefficients in the expansion of W in terms of the orthonormal basis.
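The same bookkeeping in n dimensions, using the standard basis as one convenient orthonormal basis (just a sketch):

```python
import random

n = 5

# One convenient orthonormal basis: the standard basis u1 ... un
basis = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def dot(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

# An arbitrary vector W ...
W = [random.uniform(-1.0, 1.0) for _ in range(n)]

# ... its coefficients are the n scalar products W·uj ...
coeffs = [dot(W, u) for u in basis]

# ... and Σ (W·uj) uj rebuilds W exactly
rebuilt = [sum(c * u[i] for c, u in zip(coeffs, basis)) for i in range(n)]
assert all(abs(w - r) < 1e-12 for w, r in zip(W, rebuilt))
```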
>And that's interesting ... to you?
NOW comes the interesting part:
The guy we're calling W could be a function of time, W(t).
If we're careful we can choose
an "orthonormal basis" which will be a collection of functions and we can express W(t) as a sum of these basis functions uj(t).
W(t) = a1 u1(t) + a2 u2(t) + a3 u3(t) + ......
Well-known examples include the Fourier Series expansion of a function.
The function W shown in Figure 2 is a sum: W(t) = u1(t) - (0.5)u2(t) - (0.1)u3(t)
... where u1(t) = sin(t), u2(t) = sin(2t) and u3(t) = sin(3t).
The collection of functions sin(t), sin(2t), sin(3t) etc. etc. can be used as a basis to generate (almost) any function defined on 0 ≤ t ≤ 2π.
Note that W must have an average value of 0 since all the sine-functions do.
Indeed, if W(t) is a periodic function (like a note played on a violin) then the components u1, u2, u3 etc.
are the harmonics (or overtones) associated with that note and ...
>Yeah, I remember listening to the Beatles and ...
A more "complete" basis would be: 1, cos(t), sin(t), cos(2t), sin(2t), cos(3t), sin(3t), etc.
For example, if W(0) weren't 0 and/or W(-t) weren't the negative of W(t)
then using only sine functions wouldn't work.
Indeed, sin(0) = 0 and sin(-t) = - sin(t) and, of course, all sine and cosine functions have an average value of 0 over an interval such as (-π, π) or (0, 2π)
... which explains the need to include that first constant function: 1 (in case W has an average different than 0).
For example, here W(t) = 1 + sin(t) - (0.5)cos(2t) - (0.1)sin(3t).
Our objective is to show that, for our Stock Prices, we can use Haar Wavelets as our basis functions.
Alfréd Haar (1885 - 1933), a Hungarian mathematician, mentioned these guys in an appendix to his doctoral thesis.
>You said you needed some dot product, or scalar or inner product ... or whatever you call it.
Oh, yes, I almost forgot.
For our sines and cosines we'd use:
u·v = ∫ uj(t) vk(t) dt
where uj(t) and vk(t) are sin(jt) or cos(jt) or sin(kt) or cos(kt) and the integration is over, say, 0 ≤ t ≤ 2π.
In fact, ∫ sin(jt) sin(kt) dt = ∫ sin(jt) cos(kt) dt = ∫ cos(jt) cos(kt) dt = 0 if j ≠ k.
>So they're orthonormal?
Uh ... not exactly. They're orthogonal, to be sure, but the scalar product of a basis with itself isn't "1", because
∫ sin²(jt) dt = ∫ cos²(jt) dt = π.
>So divide each by π1/2 and they'd be orthonormal, right?
Yes, but it's messy having that π1/2 hanging around. We'll stick it in when we need it.
>But what about that very first basis function ... the "1"?
Aah, yes. If we let u0(t) = 1, then
∫ u0(t) uk(t) dt = 0 if k ≠ 0
since ∫ sin(kt) dt = ∫ cos(kt) dt = 0
and ∫ u0²(t) dt = 2π.
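A quick numerical check of these orthogonality statements (a sketch; the crude Riemann sum below happens to be essentially exact here, since the integrands are trigonometric polynomials sampled over a full period):

```python
import math

def integrate(g, N=4096):
    # Riemann sum over 0 ≤ t ≤ 2π
    h = 2*math.pi / N
    return sum(g(i*h) for i in range(N)) * h

print(integrate(lambda t: math.sin(2*t)*math.sin(3*t)))  # ≈ 0  (j ≠ k)
print(integrate(lambda t: math.sin(3*t)**2))             # ≈ π  (j = k)
print(integrate(lambda t: 1.0*math.cos(2*t)))            # ≈ 0  (u0 against a cosine)
print(integrate(lambda t: 1.0**2))                       # ≈ 2π (u0 against itself)
```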
Anyway, we can now write for (almost) any function f(t):
f(t) = a0 + a1cos(t) + b1sin(t) + a2cos(2t) + b2sin(2t) + a3cos(3t) + b3sin(3t) + ...
To find the constants a0, a1, b1, etc. we use this expansion (with an infinite number of basis functions) and the scalar product defined by the integral above.
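As a numerical sketch, we can recover the coefficients of the earlier example W(t) = 1 + sin(t) - (0.5)cos(2t) - (0.1)sin(3t) by taking scalar products (the Riemann-sum integrator is just for illustration; note a0 divides by 2π since ∫ 1² dt = 2π, while the others divide by π):

```python
import math

def f(t):
    # the example function from above
    return 1 + math.sin(t) - 0.5*math.cos(2*t) - 0.1*math.sin(3*t)

def integrate(g, N=4096):
    # Riemann sum over 0 ≤ t ≤ 2π (very accurate for smooth periodic functions)
    h = 2*math.pi / N
    return sum(g(i*h) for i in range(N)) * h

a0 = integrate(f) / (2*math.pi)
b1 = integrate(lambda t: f(t)*math.sin(t)) / math.pi
a2 = integrate(lambda t: f(t)*math.cos(2*t)) / math.pi
print(a0, b1, a2)  # ≈ 1.0, 1.0, -0.5
```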
Now, with Haar Wavelets ...
>How about an example ... a Fourier series with some sexy f(t)?
Okay, let's try f(t) a square wave, like so:
Our f(t) is an "odd" function (meaning that f(-t) = - f(t)) so we only need the sine functions in our basis.
We write f(t) = Σ bk sin(kt).
Multiply each side by sin(mt) (for an arbitrary integer m) and integrate from t = 0 to t = 2π.
Then ∫ f(t) sin(mt) dt = Σ bk ∫ sin(mt) sin(kt) dt.
That's equivalent to taking the scalar product, eh?
All the terms on the right side integrate to 0 except the one where k = m.
That integration (of the right side) then gives bm ∫ sin²(mt) dt = π bm.
For the left side, we split the integral into two parts:
∫1sin(mt) f(t) dt + ∫2 sin(mt) f(t) dt
The first integration is from t = 0 to t = π (where f(t) = 1) and the second is from t = π to t = 2π (where f(t) = -1).
That gives: ∫1 sin(mt) dt
- ∫2 sin(mt) dt
= (-1/m) [cos(mπ) - cos(0)] - (-1/m) [cos(2mπ) - cos(mπ)] = (1/m) ( 2 - 2cos(mπ) ) = 0 unless m is an odd integer
(which makes cos(mπ) = -1).
When m is an odd integer, we get: 4/m.
Finally, then, we have 4/m = π bm when m = 1, 3, 5, etc., and bm = 0 otherwise (for m an even integer).
That makes bm = 4/(mπ) for m = 1, 3, 5 etc. and our square wave is then represented as the Fourier Series:
f(t) = (4/π)[ sin(t) + (1/3) sin(3t) + (1/5) sin(5t) + (1/7) sin(7t) + ... ]
If we take just ten terms of the Fourier Series, namely
(4/π)[ sin(t) + (1/3) sin(3t) + (1/5) sin(5t) + (1/7) sin(7t) + ... + (1/19) sin(19t)]
... we'd get Figure 3.
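That ten-term partial sum is easy to evaluate directly (a sketch; at t = π/2, in the middle of the half-period where f(t) = 1, ten terms already land within a few percent of 1):

```python
import math

def partial_sum(t, terms=10):
    # first ten terms of (4/π)[ sin(t) + (1/3)sin(3t) + ... + (1/19)sin(19t) ]
    return (4/math.pi) * sum(math.sin(m*t)/m for m in range(1, 2*terms, 2))

print(partial_sum(math.pi/2))  # within a few percent of 1
```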
>The Fourier Series is periodic, but what if f(t) isn't?
Aah, if you use a series with these sines and/or cosines, you're going to get a periodic representation.
Of course, with Haar Wavelets ...
>Maybe it's time to switch to Haar, eh?
There's a spreadsheet to play with which lets you gaze in awe at the sum of terms in a Fourier Series (up to 10 terms).
It looks like this (Click on the picture to download):