motivated by email from Peter D.
I never really liked the Sharpe Ratio
>Huh? Everybuddy likes the Sharpe Ratio!
Not me.
In particular, I don't like the use of the word "risk" to mean portfolio volatility.
What's volatility got to do with risk?
If my monthly returns are 20% greater than yours, my volatility is the same as yours, but is my "risk" the same?
Is that a reasonable notion for the idea of "risk"?
I don't think that ...
>So what else would you use?
Well, for one thing, I'd like greater "risk" to imply a greater probability of losing money or having negative returns.
When I read that:
"The portfolio with the largest Sharpe Ratio has the highest return-to-risk ratio," it makes me ill. Return-to-Risk?
>Okay! So what else would you use?
I'll remind you that the Sharpe Ratio measures the expected (or Mean) excess return, over and above some benchmark or risk-free return,
divided by the volatility (which is too often associated with "risk"!):
[1A] Sharpe Ratio = Mean[ Return − Riskfree ] / SD[Return]
where SD is the Standard Deviation (or volatility) of portfolio returns.
Note that we could also consider:
[1B] Sharpe Ratio = Mean[ Return − Benchmark ] / VAR^{1/2}[Return − Benchmark]
where VAR is the Variance (or SD^{2}) of portfolio returns.
If the benchmark is a constant, then VAR^{1/2}[Return − Benchmark] = VAR^{1/2}[Return] = SD[Return]
and [1A] and [1B] are identical.
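Since a constant benchmark drops out of the Standard Deviation, [1A] and [1B] have to agree numerically. A quick sketch (the monthly numbers here are made up for illustration, not real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly returns and a constant risk-free rate (invented numbers).
returns = rng.normal(loc=0.008, scale=0.04, size=120)   # 10 years of monthly returns
riskfree = 0.003                                         # 0.3% per month

# [1A]: mean excess return divided by the volatility of returns
excess = returns - riskfree
sharpe_1a = excess.mean() / returns.std(ddof=1)

# [1B]: with a constant benchmark, SD[Return - Benchmark] = SD[Return],
# so the two ratios coincide
sharpe_1b = excess.mean() / excess.std(ddof=1)

print(sharpe_1a, sharpe_1b)
```

Subtracting a constant shifts every return by the same amount, so the two Standard Deviations (hence the two ratios) match.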
Okay, so we'll now look at returns compared to some benchmark or investor-selected target return.
We then look at the probability that these returns will not exceed this target ... that'd be bad, eh?
We then choose a portfolio which makes this probability decay to zero as our time horizon increases.
And what's the maximum possible decay rate?
That's the Stutzer Index.
>Huh?
the Stutzer Index revealed

Here we follow Stutzer (University of Colorado):
 If r is a return (weekly, monthly ... it doesn't matter), we consider the gain factor to be 1+r.
Note that r = 0.123 refers to a 12.3% return.
 Consider a sequence of portfolio gain factors: p_{1}, p_{2}, p_{3} ... etc.
 The value of a $1.00 portfolio after n periods is
[2A] P(n) = p_{1} p_{2} ... p_{n}.
 If the $1.00 were invested in some "benchmark" portfolio with gain factors b_{1}, b_{2} ... this portfolio would be worth
[2B] B(n) = b_{1} b_{2} ... b_{n}.
 To compare the two portfolios, we take the difference in their logarithms:
[2C] log(P(n)) − log(B(n)) = log(P(n) / B(n))
= Σ log(p_{k}) − Σ log(b_{k})
... the sum going from k = 1 to k = n
 The average difference is then given by:
[2D] S = (1/n) log(P(n) / B(n))
= (1/n) Σ [log(p_{k}) − log(b_{k})]
... the sum going from k = 1 to k = n
 Note that our portfolio will outperform the benchmark portfolio (over n periods) if P(n) > B(n), hence if S > 0.
 We assign to our portfolio a probability that S > 0. We call this Pr[S > 0].
We hope to rank various portfolio strategies according to their probabilities: Pr[S > 0].
Alas, the ranking of portfolios may vary, depending upon the time horizon, measured by n.
So we develop a ranking which holds for sufficiently long time periods ... meaning sufficiently large values for n.
Note that the probability that a portfolio will outperform the benchmark portfolio is Pr[S>0].
The probability that it will underperform is then Pr[S≤0].
Again, following Stutzer, we'll avoid portfolios for which Pr[S≤0] fails to decay to zero as n gets large.
>Huh?
If we underperform the benchmark, that's bad. Consider the average investor.
If her portfolio has a small chance of beating the benchmark
(meaning that Pr[S>0] is small), she'll just invest in the benchmark itself, right?
So (again following Stutzer) we rank portfolio strategies for which the limit of S, as n goes to infinity, is greater than 0.
That is:
lim_{n→∞} (1/n) log(P(n) / B(n))
= lim_{n→∞} (1/n) Σ [log(p_{k}) − log(b_{k})] > 0.
>Like investing in the S&P 500?
Yes.
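To see that underperformance probability shrink, here's a Monte Carlo sketch of Pr[S ≤ 0] using equation [2D]. Everything here is an assumption for illustration: normally distributed per-period returns, invented means and volatilities, and a clip at −99% (my own addition) so the gain factors 1+r stay positive.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_underperform(n, trials=20000, mu_p=0.10, sd_p=0.30, mu_b=0.04, sd_b=0.08):
    """Monte Carlo estimate of Pr[S <= 0] after n periods, with per-period
    portfolio and benchmark returns drawn from (assumed) normal distributions."""
    # clip at -99% so the gain factors 1+r stay positive
    p = np.clip(rng.normal(mu_p, sd_p, size=(trials, n)), -0.99, None)
    b = np.clip(rng.normal(mu_b, sd_b, size=(trials, n)), -0.99, None)
    # S = (1/n) * sum of [log(1+p_k) - log(1+b_k)] ... equation [2D]
    S = np.mean(np.log1p(p) - np.log1p(b), axis=1)
    return float(np.mean(S <= 0))

# The underperformance probability should shrink as the horizon n grows.
probs = [prob_underperform(n) for n in (1, 5, 25, 100)]
print(probs)
```

With these made-up numbers the probability drifts down as n grows ... slowly, which is exactly why the *rate* of decay is worth measuring.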
 Consider the probability of underperformance, namely Pr[S≤0], and identify those portfolios for which this decays to zero as
n → ∞.
 The faster that Pr[S≤0] → 0, the better the portfolio
... since then the probability of beating the benchmark, Pr[S>0], is greater.
 The underperformance rate of decay is then the Stutzer Index.
Our problem now is to explain the decay rate of the underperformance probability obtained by Stutzer :
[3] D_{p} = max over γ of: − lim_{n→∞} (1/n) log E[ (P(n) / B(n))^{−γ} ]

where E[ ] means Expected (or average) value.
So to get D_{p}, the rate of decay for our portfolio (compared to a benchmark portfolio), we do this:
1. Pick some positive number γ and, at the n-period time, evaluate the ratio R(n) = P(n) / B(n).
2. Determine the logarithm of the Expected value of R(n)^{−γ}.
3. Divide this log by n, change the sign, and let n → ∞.
4. Then repeat steps 1 - 3 to find that value of γ which maximizes the number obtained in step 3.
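Those steps can be sketched like so. It's a toy version: simulated normal returns (invented means and volatilities) stand in for a real portfolio, a large fixed n stands in for n → ∞, and a simple grid search does the maximizing over γ.

```python
import numpy as np

rng = np.random.default_rng(2)

def decay_rate(gamma, n=100, trials=20000,
               mu_p=0.10, sd_p=0.30, mu_b=0.04, sd_b=0.08):
    """Steps 1-3: for a given gamma, estimate -(1/n) log E[R(n)^(-gamma)],
    where R(n) = P(n)/B(n), using simulated (assumed-normal) returns."""
    # clip at -99% so the gain factors 1+r stay positive
    p = np.clip(rng.normal(mu_p, sd_p, size=(trials, n)), -0.99, None)
    b = np.clip(rng.normal(mu_b, sd_b, size=(trials, n)), -0.99, None)
    log_R = np.sum(np.log1p(p) - np.log1p(b), axis=1)   # log R(n) = n*S
    # E[R^(-gamma)] = E[exp(-gamma * log R)], via log-sum-exp for stability
    x = -gamma * log_R
    m = np.max(x)
    log_E = m + np.log(np.mean(np.exp(x - m)))
    return -log_E / n

# Step 4: maximize over gamma with a simple grid search
gammas = np.linspace(0.0, 2.0, 21)
rates = [decay_rate(g) for g in gammas]
best = max(rates)
print(best)
```

At γ = 0 the rate is exactly zero (E[R^{0}] = 1), and for a portfolio that beats its benchmark on average, some small positive γ gives a positive decay rate.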
>zzzZZZ
Look carefully:
 R(n) = P(n) / B(n) is the ratio of portfolios after n periods.
 According to our definition of S in 2D, above, this ratio can also be written like so:
R(n) = P / B = e^{nS}
 We can then rewrite the decay, D_{p}, in terms of the maximum of the limit of the Expected value of:
e^{−γnS}
 Suppose that γ is the γ-value that maximizes the limit as n → ∞.
Then we'd have, for large n-values, D_{p} ≈ −(1/n) log E[e^{−γnS}].
 This can also be written like so: e^{−n Dp} ≈ E[e^{−γnS}]
= E[R^{−γ}]
 We note that, for large n, the left side of this equation decreases as D_{p} increases, so larger decay rates mean a smaller
probability of underperformance of our portfolio (compared to the benchmark portfolio).
>This is all greek to me! Where on earth did that expression [3] for D_{p} come from?
Uh ... that's the solution to the problem of finding Pr[S≤0] and it's called the Gärtner-Ellis Large Deviations Theorem.
This theorem gives the decay rate of the underperformance probability.
>You're kidding, right?
We'll get to Gärtner-Ellis later ... maybe.
>Warn me when you do.
Notice that we're using Gärtner-Ellis in connection with a ratio of portfolios: ours and some
benchmark which may be the S&P 500 or, if we're a fund manager, it could be the average portfolio of other fund managers
(which we wish to better).
Notice, too, that our objective is to look at the probability that our portfolio will underperform, compared to the benchmark,
then make this probability decay to zero as time increases (so, hopefully, we're unlikely to underperform), then maximize the
rate of decay of this probability.
The faster the decay, the better our portfolio strategy ... so we want that portfolio strategy which provides the maximum decay rate.
>zzzZZZ
Logarithmic and other Utilities

Remember when we talked about utility functions?
Investors don't attempt to maximize their expected gain but rather their utility and each investor has a personal utility function and ...
>Huh?
There's that Bernoulli example:
You toss a coin umpteen times.
When a heads comes up you stop.
If a head comes up in m tosses, you win $2^{m}.
Your expected gain is infinite, but would you pay a jillion dollars to play this game?
No, but you might pay, say $25.
If we take not the gain of 2^{m} but rather a "logarithmic utility", 20 log(2^{m}) ,
then the expected value of this utility function is $27.73.
>Why 20 log(2^{m})? Why not 47 log(2^{m})?
You pick your own utility function. It's a personal thing. For investments, it'd depend upon your aversion to risk. Figure 1A shows some possible utility functions.
Notice that they are increasing functions, so if the utility is larger, you're a happy camper.
However, a logarithmic utility function is quite popular.
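By the way, that $27.73 figure is easy to check, assuming the log in 20 log(2^{m}) is the natural log: the first head lands on toss m with probability (1/2)^{m}, so the expected utility is Σ (1/2)^{m} · 20 log(2^{m}) = 40 log(2) ≈ 27.73.

```python
import math

# The first head appears on toss m with probability (1/2)^m; the raw expected
# winnings sum_m (1/2)^m * 2^m diverge, but the expected utility converges:
#   sum_m (1/2)^m * 20*log(2^m) = 20*log(2) * sum_m m/2^m = 40*log(2)
expected_utility = sum((0.5 ** m) * 20 * math.log(2 ** m) for m in range(1, 200))
print(round(expected_utility, 2))   # about 27.73
```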
For our particular problem, an exponential utility function seems to be the way to go.
If the value of our portfolio is $W, we consider the utility function − e^{−kW}
... as in Figure 1B.
We'd like to make it as large as possible.
>That's the reason for maximizing over γ, right?
Yes, as in equation [3] ... but we wish to maximize the Expected value of the utility function.
You'll notice that we rearranged things, above, like so:
D_{p} ≈ −(1/n) log E[e^{−γnS}]
so e^{−n Dp} ≈ E[e^{−γnS}]
= E[R^{−γ}]
>And up pops an exponential utility, right?
Yes, −e^{−n Dp} is like Figure 1B.
However, we wish to maximize the Expected value
E[− e^{−γnS}]
... and we can do that by maximizing D_{p}, the Stutzer Index.
>And this all makes sense to you?
Barely ...

Figure 1A
Figure 1B.

Some observations:
 If we want our portfolio, P, to do better than a benchmark portfolio, B, we could use
an exponential utility and simply maximize:
− E[e^{−γ(P − B)}]
 Stutzer, on the other hand, uses a power utility (− R^{−γ}) over long time periods, maximizing:
− (1/n) log E[e^{−γ log(P/B)}]
= − (1/n) log E[R^{−γ}]
 Note: the logarithm of the ratio of portfolios after n periods is log(P) − log(B) = log[ P / B ].
 If the annualized returns are p_{n} and b_{n} respectively, then $1.00 portfolios would grow to:
P(n) = (1+ p_{n})^{n} and B(n) = (1+ b_{n})^{n}
 The log of the ratio is then:
log[ P(n) / B(n) ] = n log[(1+ p_{n}) / (1+ b_{n})]
 The Stutzer Index is then the maximum, over γ, of:
− (1/n) log E[e^{−γ n log[(1+ p_{n}) / (1+ b_{n})]}]
 This can be written as: − (1/n) log E[ [(1+ p_{n}) / (1+ b_{n})] ^{−γ n} ]
>Does that make things easier?
Maybe.
Suppose we ignore that Expected value and just look at the behaviour of
− (1/n) log[ [(1+ p_{n}) / (1+ b_{n})] ^{−γ n}]
= γ log[ (1+ p_{n}) / (1+ b_{n})]
as n gets larger.
Figure 2 shows such a graph, where the portfolio returns are randomly selected from a normal distribution with
mean return = 10% and volatility = 30% and
the benchmark mean = 4% with volatility = 8%.
>And γ?
Uh ... I think it was 0.2 or thereabouts.

Figure 2
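For the curious, here's a sketch that regenerates the sort of data behind Figure 2: one random path, using the 10%/30% and 4%/8% numbers quoted above with γ = 0.2. The clip at −99% is my own addition (it keeps the gain factors positive); the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

gamma, n_max = 0.2, 200
# per-period returns: portfolio ~ N(10%, 30%), benchmark ~ N(4%, 8%),
# clipped at -99% so gain factors stay positive
p = np.clip(rng.normal(0.10, 0.30, n_max), -0.99, None)
b = np.clip(rng.normal(0.04, 0.08, n_max), -0.99, None)

# log[P(n)/B(n)] after n periods, for n = 1 ... 200
log_ratio = np.cumsum(np.log1p(p) - np.log1p(b))
n = np.arange(1, n_max + 1)

# since n log[(1+p_n)/(1+b_n)] = log[P(n)/B(n)], the plotted quantity
# gamma * log[(1+p_n)/(1+b_n)] is just gamma * log_ratio / n
curve = gamma * log_ratio / n
print(curve[-1])
```

Plot `curve` against `n` and you get a jittery line that settles down as n grows ... the sort of graph shown in Figure 2.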

More observations:
 We choose a utility function like − R^{−γ}. It'd look like Figure 3.
R is a measure of our success: our portfolio value or our annualized gain or ... whatever.
 We take R to be the ratio of our portfolio to the benchmark portfolio, after n time periods.
 We wish to maximize the Expected value of this utility, namely E[− R^{−γ}].
 We define a thing called D such that: − e^{−nD} = E[− R^{−γ}].
As a function of D, − e^{−nD} would look like Figure 1B.
 Instead of maximizing E[− R^{−γ}], we can maximize D instead (for large n).
 The maximum of this guy we're calling D is the Stutzer Index.
 Figure 3 
>Will there be a spreadsheet?
Uh ... maybe.
Here's what we'll do:
 We download prices for three stocks (or mutual funds) and calculate the weekly returns for each.
 We pick some allocation, like 70% Asset A, 20% Asset B and 10% Asset C and calculate the weekly returns for this portfolio.
 We also download the weekly prices (and calculate the returns) for our benchmark: the S&P 500.
 We calculate the ratio of the two portfolio values after n weeks, namely R(n), for n = 1, 2, 3 ... 200.
 We pick some value for gamma (that's γ) and calculate (for each n) − R^{−γ}.
 We calculate the average of − R^{−γ} over the 200 weeks: call it AVG.
(It's our Expected value and it'll be a negative number.)
 We calculate D = − (1/200) log[− AVG].
 We vary gamma (that's γ) in order to get the largest value of D.
(Expressed as a percentage, that's our Stutzer Index for this portfolio.)
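The whole spreadsheet recipe, sketched in code. Simulated weekly returns stand in for downloaded prices here ... every number (the asset means, volatilities, and seed) is invented for illustration, not real data.

```python
import numpy as np

rng = np.random.default_rng(4)

n_weeks = 200
weights = np.array([0.70, 0.20, 0.10])   # 70% Asset A, 20% B, 10% C

# stand-ins for downloaded weekly returns (three assets + the S&P 500);
# these are simulated numbers, not real data
assets = rng.normal([0.003, 0.002, 0.004], [0.05, 0.03, 0.06], size=(n_weeks, 3))
bench = rng.normal(0.001, 0.02, size=n_weeks)

port = assets @ weights                                # weekly portfolio returns
log_R = np.cumsum(np.log1p(port) - np.log1p(bench))    # log R(n), n = 1 ... 200

def D_of(gamma):
    # AVG = average of -R^(-gamma) over the 200 weeks (a negative number),
    # then D = -(1/200) log[-AVG]
    avg = np.mean(-np.exp(-gamma * log_R))
    return -np.log(-avg) / n_weeks

# vary gamma from 0 to 200 in steps of 5 and keep the largest D
gammas = np.arange(0, 205, 5)
Ds = [D_of(g) for g in gammas]
print(max(Ds), gammas[int(np.argmax(Ds))])
```

In a real spreadsheet you'd replace the simulated returns with downloaded weekly prices and let a button (or Solver) do the gamma search.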
>N = 200? Is that infinite?
Well, we really want the limit as n → infinity, so we'll plot the D-value versus n to see where it's going.
We get this:
>You picked gamma = 75?
Well, I actually tried all values from 0 to 200 in steps of 5 and that gave me the biggest D-value.
>And that 70% + 20% + 10% is the best allocation?
Well, no. I also calculated a bunch of allocations and found the "best" (in the sense of the largest D-value) is like so:
>Hmmm ... sounds like a lot of work, to find the "best".
Not really.
There are buttons ...
>And when you vary gamma, you're sure you got yourself a maximum?
There's a picture ...
>So how do I get the spreadsheet?
Click here and if that don't hardly work, try a RIGHTclick and Save Link or Save Target.
Note: There are no guarantees about this spreadsheet. It may (or may not!) work proper ... but it's great fun
for Part II
