Posted on April 16, 2013 @ 05:51:00 AM by Paul Meagher
There are many ways to compute a conditional probability such as P(H|E).
The simplest way to compute P(H|E) is:
P(H|E) = P(H & E) / P(E)
In my last blog introducing Bayes Theorem, I showed how to re-arrange terms so that you could compute P(H|E) using a version of the conditional probability formula called Bayes Theorem:
P(H|E) = P(E|H) * P(H) / P(E)
I also showed that this equation could be further simplified to:
P(H|E) ~ P(E|H) * P(H)
Where the symbol ~ means "is proportional to". The equation says that the probability of a hypothesis given evidence, P(H|E), is proportional to
the likelihood of the evidence given the hypothesis, P(E|H), multiplied by a prior assessment of the probability of our hypothesis, P(H).
The likelihood term plays a critical role in updating our prior beliefs. So how is it computed and what does it mean? That is what
will be discussed today.
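Before turning to the data, here is a minimal sketch of Bayes Theorem in code. The prior of .12 echoes the first-time startup success rate discussed below; the two likelihood values are hypothetical placeholders chosen only to illustrate the formula:

```python
# A minimal sketch of Bayes Theorem.
# The likelihood values below are hypothetical, not from the data table.
p_H = 0.12               # prior P(H): first-time startup success rate
p_E_given_H = 0.50       # likelihood P(E|H), hypothetical
p_E_given_notH = 0.10    # likelihood P(E|~H), hypothetical

# P(E) via the law of total probability
p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)

# Bayes Theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_H_given_E = p_E_given_H * p_H / p_E

print(round(p_H_given_E, 2))  # 0.41
```

Note that P(E) in the denominator is just a normalizing constant, which is why the proportional form P(H|E) ~ P(E|H) * P(H) is often all we need for comparing hypotheses.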
Below I have fabricated a data table consisting of 10,000 startups classified as successful S (1200 instances) or unsuccessful U (8800 instances). In a previous blog, I reported a finding that claimed the success rate of first-time startups is 12%, which equates to 1200 instances out of 10,000. The data table also includes the outcomes of two diagnostic tests: ++ denotes a positive outcome on both tests, -- denotes a negative outcome on both, and +- and -+ denote mixed outcomes. Each cell displays a joint frequency value and, in parentheses, the corresponding likelihood value for that combination of diagnostic test outcomes and startup outcome.
| Outcome | # Startups | ++        | +-        | -+        | --         |
|---------|------------|-----------|-----------|-----------|------------|
| S       | 1200       | 650 (.54) | 250 (.21) | 250 (.21) | 50 (.04)   |
| U       | 8800       | 100 (.01) | 450 (.05) | 450 (.05) | 7800 (.89) |
| Total   | 10,000     |           |           |           |            |
Computing a likelihood from this data table is actually a simple calculation involving the formula:
P(E|H) = P(H & E) / P(H)
To calculate the likelihood of two positive tests given that a startup is successful, P(E=++|H=S), we divide the joint frequency of the evidence E=++ when a startup is successful H=S (which is 650) by the frequency of startup success H=S (which is 1200). So 650/1200 equals .54, which is the value in parentheses beside 650 in the table above. Similarly, to calculate the likelihood of two positive tests given that a startup is unsuccessful, P(E=++|H=U), we divide the joint frequency of the evidence E=++ when a startup is unsuccessful H=U (which is 100) by the frequency that a first-time startup is unsuccessful H=U (which is 8800). So 100/8800 equals .01, which is the value in parentheses beside 100 in the table above.
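The two likelihood calculations above can be reproduced in a few lines (the variable names are my own):

```python
# Frequencies from the ++ column of the data table
n_S = 1200       # successful startups
n_U = 8800       # unsuccessful startups
n_pp_S = 650     # successful startups with two positive tests
n_pp_U = 100     # unsuccessful startups with two positive tests

# Likelihood P(E|H) = P(H & E) / P(H), computed here as
# (joint frequency) / (hypothesis frequency)
likelihood_S = n_pp_S / n_S   # P(E=++|H=S)
likelihood_U = n_pp_U / n_U   # P(E=++|H=U)

print(round(likelihood_S, 2))  # 0.54
print(round(likelihood_U, 2))  # 0.01
```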
The likelihood calculation tells us which hypothesis makes the evidence most likely. In this case, the hypothesis that the startup is successful makes the positive outcome of our two diagnostic tests (E=++) more likely (.54) than the hypothesis that the startup is unsuccessful (.01). We can examine the likelihood values in each column to determine
which hypothesis makes the diagnostic evidence more likely. You can see why the likelihood values are important in updating our prior beliefs about the probability of startup success. We can also appreciate why some would argue that likelihood values are sufficient for making decisions - just compare the relative likelihoods of the different hypotheses given the evidence.
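To see how these likelihoods update the 12% prior, here is a sketch that applies the proportional form P(H|E) ~ P(E|H) * P(H) to the E=++ evidence column and then normalizes:

```python
# Priors: 12% first-time startup success rate
p_S, p_U = 0.12, 0.88

# Likelihoods of two positive tests, from the data table
p_pp_given_S = 650 / 1200   # P(E=++|H=S), roughly .54
p_pp_given_U = 100 / 8800   # P(E=++|H=U), roughly .01

# Unnormalized posteriors: P(H|E) ~ P(E|H) * P(H)
post_S = p_pp_given_S * p_S
post_U = p_pp_given_U * p_U

# Normalize so the two posteriors sum to 1
p_S_given_pp = post_S / (post_S + post_U)

print(round(p_S_given_pp, 2))  # 0.87
```

So two positive diagnostic tests would raise our belief in startup success from the 12% prior to roughly 87%. The likelihood ratio p_pp_given_S / p_pp_given_U (about 48 here) is the comparison that those who argue likelihoods alone suffice for decision making would focus on.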