We're talking about generating joint distributions for two variables, say x and y, with prescribed properties. >Just two?
For example, suppose we do this, assuming that both sets look like they are normally distributed: >So use Copulas!
We'll talk about Archimedean Copulas, which are generated via a generator function. For a given x-value, say U, what's the probability that x ≤ U? It's F_{1}(U). And what's the probability that y ≤ V? It's F_{2}(V).
This describes a joint distribution function evaluated at (U, V), where the individual distributions (called the *marginal* distributions) are F_{1} and F_{2}. >Wait! If F ...
>So why all the fuss about copulas and joint distributions and ...?
>Why didn't you say that before?
>Okay, but in [B1], you're assuming you ...
>And you'd get a ...
>If I just ...
>You're kidding, right? No matter what joint distribution I invent?
- A **C**opula connects a given joint distribution **F**unction to its marginal distributions ... them's the individual distribution functions F_{1}, F_{2}.
- The **C**opula describes the *dependence* between variables, regardless of their individual distributions.
- **C**opulas were invented (in 1959?) in order to isolate the dependence structure between variables.
- No matter what F_{1} and F_{2} are, the numbers F_{1}(U) and F_{2}(V) will be *uniformly distributed* on [0,1]**!**
- Since the arguments of **C** are just uniformly distributed random variables (namely F_{1}(U) and F_{2}(V)), what's left is their dependence.
- In fact, we can link together *any* two marginal distributions (F_{1}, F_{2}) and *any* **C**opula and we'll get a valid joint distribution **F**unction.
- In fact ...
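The claim that F_{1}(U) is uniformly distributed on [0,1] (the so-called probability integral transform) is easy to check numerically. Here's a minimal sketch, assuming a standard normal marginal purely for illustration, using only the Python standard library:

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF, written via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # draws from F1
us = [norm_cdf(x) for x in xs]                         # u = F1(x) for each draw

# If u is uniform on [0,1], its mean is ~0.5 and about a quarter
# of the values fall below 0.25:
mean_u = sum(us) / len(us)
frac_below_quarter = sum(u < 0.25 for u in us) / len(us)
print(round(mean_u, 3), round(frac_below_quarter, 3))
```

The same check works for any continuous marginal: push the draws through their own CDF and the result looks uniform.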
We've noted that u = F_{1}(U) and v = F_{2}(V). Further, u and v are uniformly distributed in [0,1], so we can go the other way via the inverse functions: U = F_{1}^{-1}(u) and V = F_{2}^{-1}(v). >Inverse what?
>Yeah ... I got it.
>zzzZZZ
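That inverse is what lets us turn uniform random numbers into draws from whatever marginal we like. A minimal sketch, assuming (for illustration only) an exponential marginal F(x) = 1 - e^{-x}, whose inverse has a closed form:

```python
import math
import random

def exp_inv_cdf(u, lam=1.0):
    # Inverse of the exponential CDF F(x) = 1 - exp(-lam * x)
    return -math.log(1.0 - u) / lam

random.seed(1)
us = [random.random() for _ in range(100_000)]  # uniform u in [0,1)
xs = [exp_inv_cdf(u) for u in us]               # X = F^{-1}(u): exponential draws

# An exponential with rate lam = 1 has mean 1/lam = 1:
mean_x = sum(xs) / len(xs)
print(round(mean_x, 3))
```

When the inverse has no closed form (as for the normal), one falls back on a numerical inverse, but the idea is the same.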
Indeed, we can now conclude (via Sklar's theorem) that:

- If we know F_{1} and F_{2}, the distributions of x and y (where F_{1}(U) is the probability that x ≤ U and F_{2}(V) is the probability that y ≤ V), and we pick our favourite **C**opula, then the joint distribution **F** is given by **[B1]**: **F**(U,V) = **C**(F_{1}(U), F_{2}(V))
- If we know the joint distribution **F** and the marginal distributions F_{1} and F_{2}, then there is a unique **C**opula according to **[B2]**, namely: **C**(u, v) = **F**(F_{1}^{-1}(u), F_{2}^{-1}(v))
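Formula [B1] can be exercised directly. A minimal sketch, assuming a normal marginal for x, an exponential marginal for y, and the simplest possible Copula, the independence Copula C(u, v) = u·v (all three choices are assumptions made just for illustration):

```python
import math

def norm_cdf(x):
    # F1: standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exp_cdf(y, lam=1.0):
    # F2: exponential CDF
    return 1.0 - math.exp(-lam * y)

def indep_copula(u, v):
    # Independence Copula: C(u, v) = u * v
    return u * v

def joint_F(U, V):
    # [B1]: F(U, V) = C(F1(U), F2(V))
    return indep_copula(norm_cdf(U), exp_cdf(V))

# Sanity checks: as V -> infinity the joint reduces to the x-marginal,
# and as U -> -infinity the joint probability vanishes.
print(round(joint_F(0.0, 1e9), 6))   # ~ F1(0) = 0.5
print(round(joint_F(-1e9, 1.0), 6))  # ~ 0
```

Swapping `indep_copula` for any other valid Copula still yields a legitimate joint distribution with the same two marginals; only the dependence changes.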
>Huh? The x-variable may be normally distributed and the y-variable may be lognormal or maybe a t-distribution or maybe uniform or maybe ...
>Yeah, but which Copula, and how do you change the dependence?
Having chosen your favourite Copula, you change its parameter, such as the d in Clayton's Copula:
**C**(u, v) = [ u^{-d} + v^{-d} - 1 ]^{-1/d}

Aah, but ...
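A quick numeric check of how d controls the dependence in Clayton's Copula (the evaluation point (0.3, 0.4) and the two d values below are arbitrary choices for illustration): as d grows, the Copula value climbs from the independence value u·v toward the perfect-dependence bound min(u, v).

```python
def clayton(u, v, d):
    # Clayton Copula: C(u, v) = (u^-d + v^-d - 1)^(-1/d), for d > 0
    return (u ** (-d) + v ** (-d) - 1.0) ** (-1.0 / d)

u, v = 0.3, 0.4
indep  = u * v               # independence Copula value
weak   = clayton(u, v, 0.5)  # mild positive dependence
strong = clayton(u, v, 5.0)  # strong positive dependence

print(round(indep, 4), round(weak, 4), round(strong, 4))
```

For Clayton's Copula the strength of dependence has a tidy summary: Kendall's tau equals d/(d+2), so d = 0.5 gives tau = 0.2 while d = 5 gives tau ≈ 0.71.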