Risk ... how to measure it

motivated by an article by Mandelbrot & Taleb
The article noted above comments on various parameters involved in measuring or using Risk, such as variance,
Sharpe Ratio, correlation, alpha, value-at-risk, Black-Scholes, etc. ... in what they call "the pseudoscience of finance".
>So, what's wrong with that?
The point that the authors make is that the "finance gurus" too often use some well-behaved, tame and placid distribution
(such as Normal or Lognormal) where returns far from the Mean occur infrequently.
Since large deviations are not that rare (in real life), then ...
>Huh? Not that rare?
Remember that picture from here?
From 5000+ daily returns for the S&P500, we
delete the 10 largest returns (either positive or negative, replacing them by 0%)
and compare with the same ritual using a Normal distribution of returns
(with the same mean and variance).
>Yes, I remember. The real life returns have all those outliers, far from the mean!
 Figure 1

Yes.
For example, if we plot the last 1000+ weekly S&P returns we get something like Figure 2.
See those large returns, indicated by wee arrows? A Normal distribution which presumably "fits" the data is the
red curve.
>And it gives little weight to the returns far from the mean, right?
Right.
So let's think about a measure of Risk which gives more weight to them thar outliers.
 Figure 2

Remember how to calculate the Standard Deviation?
1. We look at a bunch of returns: R_{1}, R_{2}, ... R_{n}.
2. We calculate their Mean: M = (1/n) (R_{1} + R_{2} + ... + R_{n})
3. We calculate all the deviations from that Mean and average their squares:
VAR = (1/n) [ (R_{1}-M)^{2} + (R_{2}-M)^{2} + ... + (R_{n}-M)^{2} ]
That gives the Variance ... and Standard Deviation = Variance^{1/2}
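If you'd rather see those steps in code than in symbols, here's a wee Python sketch (the returns are made-up numbers, just for illustration):

```python
# the steps above, in plain Python (made-up returns, just for illustration)
returns = [0.02, -0.01, 0.03, -0.04, 0.01]

n = len(returns)
M = sum(returns) / n                              # the Mean
VAR = sum((R - M) ** 2 for R in returns) / n      # average of squared deviations
SD = VAR ** 0.5                                   # Standard Deviation = Variance^(1/2)
```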
Let's consider step #3 where we do the average thing.
If we wanted to put more weight on the returns far from the mean, we'd introduce weights which increase as we move away from the mean.
>Huh?
By a "weighted" mean of the numbers a_{1}, a_{2}, ... a_{n}, we mean**
[ W_{1}a_{1} + W_{2}a_{2} +...+ W_{n}a_{n} ] /
[ W_{1} + W_{2} + ... + W_{n} ]
** sorry
The W's are the weights. If they were all the same, say "c", we'd get the garden-variety mean:
[ ca_{1} + ca_{2} +...+ ca_{n} ] /
[ c + c + ... + c ] = (a_{1} + a_{2} +...+ a_{n}) / n
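In code, with some made-up numbers, that looks like so ... and equal weights do indeed give back the garden-variety mean:

```python
def weighted_mean(values, weights):
    # [ W1*a1 + W2*a2 + ... + Wn*an ] / [ W1 + W2 + ... + Wn ]
    return sum(W * a for W, a in zip(weights, values)) / sum(weights)

a = [1.0, 2.0, 3.0, 4.0]
print(weighted_mean(a, [7.0] * len(a)))   # equal weights c = 7: prints 2.5
print(sum(a) / len(a))                    # the garden-variety mean: also 2.5
```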
>So now you make the weights larger for ... uh, larger returns, eh?
Yes. Let's take our modified Variance as:
[ W_{1}(R_{1}-M)^{2} + W_{2}(R_{2}-M)^{2} + ... + W_{n}(R_{n}-M)^{2} ]
/ [ W_{1} + W_{2} + ... + W_{n} ] =
Σ W_{k}(R_{k}-M)^{2} / Σ W_{k}
and we'll take W_{k} proportional to some power of ABS(R_{k} - M) = |R_{k} - M|:
W_{k} = |R_{k} - M|^{g}
That'd give us:
gVAR = Σ |R_{k}-M|^{2+g} / Σ |R_{k}-M|^{g}

If g = 0, we'd get the standard definition of Variance.
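Here's a wee Python sketch of that gVAR recipe (the returns are made-up numbers, just to show the mechanics):

```python
def gvar(returns, g):
    """gVAR = sum |R_k - M|^(2+g) / sum |R_k - M|^g  (weights W_k = |R_k - M|^g)."""
    n = len(returns)
    M = sum(returns) / n
    num = sum(abs(R - M) ** (2 + g) for R in returns)
    den = sum(abs(R - M) ** g for R in returns)
    return num / den

returns = [0.02, -0.01, 0.03, -0.04, 0.01]   # made-up monthly returns
ordinary = gvar(returns, 0)     # g = 0: every weight is 1, so this is plain Variance
tail_heavy = gvar(returns, 2)   # g = 2: big deviations get much more say
```

Note that tail_heavy comes out larger than ordinary, since the outliers carry more weight.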
>And what'd you choose for g?
I'm not sure ... yet.
As an example, Figure 3 shows the size of weights as a function of the deviations from the mean
(the mean is the red dot),
for various g-values.
>Yeah, but what'd you choose?
Okay, let's look at the monthly S&P returns for the past twenty years.
 Figure 3

The S&P looked like this:
The Variance (for the twenty years of monthly returns) was 0.0020,
corresponding to a Standard Deviation of monthly returns of SQRT(0.0020) (about 4.5%)
and an annualized standard deviation of ...
>Don't you have a picture showing how gVAR depends upon g and ...?
Yes. In order to make the numbers more familiar I'll multiply the variances by 12 and take the square root to get the standard deviations
... to get "annualized" numbers. Then, for example, a gVAR = 0.002 becomes 12(0.002) = 0.024 and sqrt(0.024) = 0.155 or 15.5% (annualized) so ...
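That annualizing arithmetic, as a snippet:

```python
# annualizing: monthly variance times 12, then the square root
monthly_var = 0.0020
annual_var = 12 * monthly_var      # 0.024
annual_sd = annual_var ** 0.5      # about 0.155, i.e. 15.5% annualized
```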
>Got a picture?
Yes. Remember that choosing g = 0 would be the "standard" definition of ... uh, "standard" deviation.
A bigger gvalue gives a bigger volatility and ...
>Yeah, but what'd you choose?
I have no idea.
>And what would your modified distribution look like?
 Figure 4

I'd probably use the distribution which (compared to Normal), looks like Figure 5:
Figure 5: Mean = 0, SD = 1
>Huh? Another distribution?
Yeah, it's described here.
Note that, for large values of x, this "other" distribution is still pretty significant.
Indeed, the probability of lying within 2 standard deviations of the Mean is 95% for a Normal distribution, but only 85% for our "other" distribution.
(See the arrows?)
>But, in that "other" distribution, there's some kvalue, right?
Yes. You start off, near the Mean, with the standard Normal distribution and when you're k standard deviations from the Mean, you give
up the Normal (which decays like e^{-x^{2}/2}) and adopt a simple exponential decay
(which decays like e^{-|x|}). In Figure 5 we've used k = 1.
>So what's the best kvalue?
I have no idea.
>Do you ever have an idea?
I have no idea.
Inventing Distributions ... with large tails

Because it's the tails of the distribution which greatly affect Risk, we invent distributions with large tails and ...
>Didn't you just do that?
You mean switching from Normal to Another, when the return is a certain distance from the mean? Yes, we did ... but it's clumsy.
We've already talked about such a distribution in the link we gave above: click.
We could, for example, use this guy:
When x is close to the mean m
in terms of standard deviations, so (x-m)/s is small,
f(x)
decays like e^{-(1/2)((x-m)/s)^{2}}
just like the Normal distribution.

When x is far from the mean m
in terms of standard deviations, so (x-m)/s is large,
f(x)
decays like e^{-|x-m|/s}
just like our Other distribution.
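Here's one way to sketch that piecewise decay in Python: an unnormalized f (no 1/sqrt(2π) factor, since we're just comparing shapes), with the mean m, standard deviation s and cut-over point k as in the text. The tail piece is shifted so the two pieces join up at k standard deviations:

```python
import math

def f(x, m=0.0, s=1.0, k=1.0):
    """Unnormalized density: Normal-style decay within k standard
    deviations of the mean, exponential decay beyond that.
    The tail is scaled by e^{k^2/2} so the two pieces join continuously."""
    z = abs(x - m) / s
    if z <= k:
        return math.exp(-0.5 * z * z)        # e^{-(1/2)((x-m)/s)^2}, like the Normal
    return math.exp(0.5 * k * k - k * z)     # e^{-k|x-m|/s} decay in the tails
```

A pleasant bonus: for any k the slopes of the two pieces also match at the joint, so the curve is smooth there, not just continuous.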

>So where's the spreadsheet?
Click on this picture:
You can download stock data and play with the value of s until you get a good match with stock returns.
Each time you press F9 you get a bunch of random selections from your invented distribution ... to compare to the stock returns.
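If you'd rather roll your own F9, here's one way (an assumption on my part ... the spreadsheet's actual method may differ) to draw random numbers from that invented distribution, via rejection sampling against a double-exponential proposal:

```python
import math
import random

def f_unnorm(z, k=1.0):
    # Normal core within k SDs, exponential tails beyond (continuous at the joint)
    if abs(z) <= k:
        return math.exp(-0.5 * z * z)
    return math.exp(0.5 * k * k - k * abs(z))

def sample(k=1.0, rng=random):
    """Draw one value via rejection sampling.  The double-exponential
    envelope e^{k^2/2 - k|z|} dominates f_unnorm for every z, because
    -z^2/2 <= k^2/2 - k|z|  (equivalently, (|z| - k)^2 >= 0)."""
    while True:
        z = -math.log(1.0 - rng.random()) / k   # Exponential(rate k) magnitude
        if rng.random() < 0.5:
            z = -z                              # random sign -> double-exponential
        if rng.random() * math.exp(0.5 * k * k - k * abs(z)) <= f_unnorm(z, k):
            return z

# each "press of F9": a fresh batch of random returns from the invented distribution
batch = [sample() for _ in range(100)]
```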
>And ... uh, what do you do with it when you got it?
I have no idea.
