One often attempts to find some optimal strategy by varying certain parameters.

- Perhaps the problem is to find the __optimal__ allocation of assets between stocks and bonds, given the Mean and Volatility of each asset.
- Perhaps it's to decide the __optimal__ rebalancing period (6 months? 12 months? 18 months?), given the characteristics of the individual assets.
- Perhaps it's to find the __optimal__ ...
>Can you just give us an example?
>Gain Factor? Explain, please.
Okay. Suppose:

- Stock #1 will give us **G**_{1} = $2.00 (after one year) with a probability of **p** = 0.75 (meaning a 75% probability) ... and 25% of the time you'd get nothing.
- Stock #2 will give us **G**_{2} = $3.00 with a probability of **q** = 0.40, so 40% of the time you'd get $3.00 and 60% of the time you'd get nothing.
- What stock would you choose, to invest your $1.00?
>Uh ... I have no idea. Besides, it's ridiculous to get nothing and ...
- If you invested in stock #1, the Expected Value of your $1.00, after one year, is **p G**_{1} = 0.75 x 2.00 = $1.50.
- If you invested in stock #2, the Expected Value of your $1.00 is **q G**_{2} = 0.40 x 3.00 = $1.20.
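As a quick check, the comparison above can be sketched in a few lines of Python (the function name is just for illustration):

```python
def expected_value(gain, prob):
    """Expected payoff of $1.00 invested: you receive `gain` with
    probability `prob`, and nothing otherwise."""
    return gain * prob

ev1 = expected_value(2.00, 0.75)   # stock #1: $1.50
ev2 = expected_value(3.00, 0.40)   # stock #2: $1.20
print(ev1, ev2)                    # stock #1 has the larger Expected Value
```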
>That's easy, then: stock #1.

You have a choice:

- You can receive $1,500 guaranteed - that's 100% probability.
- You can receive $4,000 with a probability of 25%.
>Let me do it: the Expected Values are 1.00 x $1,500 = $1,500 and 0.25 x $4,000 = $1,000, so I choose #1.
Very good. Now here's another problem:

- You can receive $1,000 with a probability of 95%.
- You can receive $3,000 with a probability of 40%.
>The Expected Values are 0.95 x $1,000 = $950 and 0.40 x $3,000 = $1,200, so I choose #2.
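Both choice problems come down to the same arithmetic; here's a minimal sketch (the labels are mine):

```python
# Expected Value of each option: payoff times its probability.
choices = {
    "$1,500 guaranteed": 1.00 * 1_500,   # problem 1, option 1
    "$4,000 at 25%":     0.25 * 4_000,   # problem 1, option 2
    "$1,000 at 95%":     0.95 * 1_000,   # problem 2, option 1
    "$3,000 at 40%":     0.40 * 3_000,   # problem 2, option 2
}
for name, ev in choices.items():
    print(f"{name}: Expected Value = ${ev:,.0f}")
```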
>Well, in real life, I'd probably take the $1,000 ...

Okay, now our last problem: the famous Petersburg Game, considered by one of the Bernoullis in the 18th century. We toss a coin. As soon as it comes up heads, the game ends, and if the first head appears on toss n, you win $2^{n}. What would you pay to play this game?
>Uh ... I give up.

- The 1-toss value is $2.00 and the probability of ending after 1 toss is 1/2, so the Expected Value after 1 toss is "**Probability x Value**" = (1/2)$2 = $1.00.
- The 2-toss value is $4.00 and the probability of ending after 2 tosses is 1/4, so the Expected Value after 2 tosses is "**Probability x Value**" = (1/4)$4 = $1.00.
- The 3-toss value is $8.00 and the probability of ending after 3 tosses is 1/8, so the Expected Value after 3 tosses is "**Probability x Value**" = (1/8)$8 = $1.00.
>Okay, so the expected winnings is ... uh ... it's ...
>If I'm allowed n tosses, the Expected Value is $n, right?
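That observation is easy to verify: each of the n terms contributes (1/2^k)·$2^k = $1, so the truncated game is worth exactly $n. A sketch:

```python
def truncated_petersburg_ev(n):
    """Exact Expected Value of the Petersburg game limited to n tosses:
    the k-toss prize $2^k arrives with probability 1/2^k, so each of
    the n terms contributes exactly $1."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n + 1))

print(truncated_petersburg_ev(10))   # 10.0 -- i.e. $n for n tosses
```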
>20? Why 20?

You can pick your own number, but one often uses Utility Theory to compare two or more possible choices, so the actual numbers aren't as important as their ratios: is the utility of this greater than the utility of that? Anyway, using 20, the sum is:

20 { (1/2) log(2) + (1/2^{2}) log(2^{2}) + (1/2^{3}) log(2^{3}) + ... } = 20 Σ (1/2^{n}) log(2^{n}) = 20 log(2) Σ n/2^{n} = 40 log(2) = $27.73

(since Σ n/2^{n} = 2), so, would you pay $27.73?

>I assume you're talking log_{e}, but yeah, sure. I'd pay $27.73 and, in fact, I'd go for $30 and maybe even ...
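The $27.73 figure can be checked numerically; a quick sketch, truncating the series once the terms are negligible:

```python
import math

# 20 * sum over n of (1/2^n) * log(2^n); the tail past n = 60 is negligible.
utility = 20 * sum((0.5 ** n) * math.log(2 ** n) for n in range(1, 60))
print(round(utility, 2))   # 27.73, i.e. 40 * log(2)
```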
Well, then your personal bias is consistent with what Bernoulli said, in the 18th century.
The amount you'd pay (to play the Petersburg Game) would then be:

(1/2) sqrt(2) + (1/2^{2}) sqrt(2^{2}) + (1/2^{3}) sqrt(2^{3}) + ...

which (as an infinite geometric series) adds to about $2.41, so ...
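That sum is geometric in 1/sqrt(2), and its closed form is 1/(sqrt(2) - 1) = sqrt(2) + 1 ≈ 2.41; a quick check:

```python
import math

# sum over n of (1/2^n) * sqrt(2^n) = sum of (1/sqrt(2))^n, a geometric series
bernoulli_sum = sum((0.5 ** n) * math.sqrt(2 ** n) for n in range(1, 200))
print(round(bernoulli_sum, 2))   # 2.41, i.e. sqrt(2) + 1
```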
>I'd pay $25 to $30. I already said that.

So, pick your own personalized Utility! Besides, we're trying to explain human behaviour as it relates to risk and return. One is unwilling to pay a great sum for the chance to win a HUGE sum if the chances are slim. Lotteries which pay millions don't charge hundreds of dollars. So, it's not the Expected Value that governs our behaviour, but the Utility.
>That Risk seeking curve? It's not concave down!

That's okay, so long as it doesn't increase too rapidly. We'll talk about that ... later. In the meantime, we'll assume it's always increasing (meaning "more is always better").