- a hypothesis is a statement about the value of a pop parameter, developed for the purpose of testing a theory or belief
- state hypothesis
- choose test statistic e.g. mean
- choose level of significance
- state decision rule
- collect sample/calculate sample statistics
- decision on hypothesis
- investment decision
- null hypothesis H0 is the thing you want to reject; it is usually what it would mean to end up outside the confidence interval, and might be μ = 5 or μ ≤ 6 (the null always includes the equality)
- alternative hypothesis Ha is the thing you want to "prove"
- use one tailed when the alternative is strictly greater than or less than
- use two tailed when the null is an equality (i.e. the alternative is "not equal to")
- most are two tailed
OK. I found this a little confusing - usually my problem was that I couldn't quite figure out what I was proving at the end of it all. Must focus on a good rejection rule and applying it well. I'm going to spend some time here and include an example from Schweser because I found this confusing when I read it:
Example (two tailed):
We have data on daily returns on a portfolio of call options over a 250-day period. The mean daily return has been 0.1% (0.001) and the sample standard deviation of portfolio returns is 0.25% (0.0025). The researcher believes that the mean daily return is not equal to zero (so our null hypothesis is that it is equal to zero):
1. Ho: μ= 0, Ha: μ ≠ 0
Since this is an equality, use two tailed test for mean (#2). At 5% significance, we look up standard deviations in z table for 95% which is ±1.96 (#3). So we can come up with our decision rule:
4. Reject Ho if the test statistic is greater than +1.96 or less than -1.96
5. Calculate the test statistic by standardising: divide the difference between the sample mean and the hypothesized mean (zero) by the standard error. This converts it into standard deviations, which we can compare against our rule. The standard error is 0.0025/sq.rt. 250 = 0.000158, so the test stat is 0.001/0.000158 ≈ 6.33
6. Look back at decision rule in #4 and since 6.33 > 1.96 we reject null hypothesis.
7. We can conclude that the mean return is significantly different from zero given the sample's standard deviation and size, i.e. the two values differ even after allowing for the variation in the sample
Common error - I tend to misread 0.25% as 0.25 instead of 0.0025, etc.
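The calculation above can be checked with a few lines of Python (a sketch using the numbers from the Schweser example):

```python
import math

# Schweser example: 250 daily returns, sample mean 0.1%, sample std dev 0.25%
n = 250
sample_mean = 0.001
sample_std = 0.0025

# Standard error of the mean = s / sqrt(n)
std_error = sample_std / math.sqrt(n)

# Test statistic: (sample mean - hypothesized mean of 0) / standard error
z = (sample_mean - 0.0) / std_error
print(round(std_error, 6))  # 0.000158
print(round(z, 2))          # 6.32 (6.33 in the notes, which round the standard error first)

# Decision rule at 5% significance, two-tailed: reject if |z| > 1.96
print(abs(z) > 1.96)  # True: reject Ho
```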
LOS 11b. define and interpret a test statistic, a Type I and a Type II error, and a significance level, and explain how significance levels are used in hypothesis testing;
Hypothesis testing involves two statistics: the test statistic calculated from the sample and the critical value of the test stat
- The test statistic is the difference between the sample stat (e.g. sample mean) and the hypothesized stat (the one stated in the null hyp.), divided by the standard error
- Type I error: rejection of null when it is actually true
- Type II error: Failure to reject null when it is actually false
- significance level α is the probability of a Type I error, e.g. a 5% sig. level corresponds to 95% confidence
LOS 11c. define and interpret a decision rule and the power of a test, and explain the relation between confidence intervals and hypothesis tests;
- Decision rule is to reject or fail to reject null
- decision is based on distribution of the test stat
- typical decision rule: if test stat is (greater/less than) the value X, reject the null
- power of a test is the prob of correctly rejecting null when it is false
- power of test is all about rejecting: the probability of incorrectly rejecting a true null is α (Type I), the probability of failing to reject a false null is β (Type II), and power = 1 − β
- For any α, you can only decrease the prob of a Type II error (and increase the power of the test) by increasing sample size
Confidence Intervals and Hypothesis Tests
- confidence intervals are about comparing in actual units rather than standard deviations
- [sample stat-(crit. value)(stand. error)]≤ pop parameter ≤ [sample stat.+(crit. value)(stand. error)]
- this is intuitive: the pop parameter lies in a range of values whose width depends on the confidence level. So if the standard error is 0.0158 and the sample mean is 0.1 and we want to be 95% confident, we multiply 1.96 (the critical z for 95%) by the standard error and get 0.030968, the deviation from the mean. So we know the pop parameter lies somewhere in the range 0.1 ± 0.030968
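That range can be computed directly (a sketch with the numbers from the bullet above, sample stat in percent):

```python
# Numbers from the note above: sample mean 0.1 (%), standard error 0.0158 (%)
sample_mean = 0.1
std_error = 0.0158
critical_z = 1.96  # 95% confidence, two-tailed

half_width = critical_z * std_error
low, high = sample_mean - half_width, sample_mean + half_width
print(round(half_width, 6))            # 0.030968
print(round(low, 6), round(high, 6))   # 0.069032 0.130968
```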
LOS 11d. distinguish between a statistical result and an economically meaningful result;
This concept is basic. Even though you may get a statistically significant positive return, transaction costs may erase or diminish it, so your investment decision must take this into account
LOS 11e. explain and interpret the p-value as it relates to hypothesis testing;
- p-value is the prob of obtaining a test stat at least as extreme as the one observed if the null is actually true; it is the smallest significance level at which the null can be rejected
- It is the bit leftover in the tail(s)... so if we find our test stat in the tail past the critical value, the leftover bit(s) is the p-value (in two tailed tests, we sum the leftover bit in each)
- If our test stat falls at the 99th percentile (at 95% conf., 2.5% in each tail) we would reject the null, but the 1% left in the tail beyond it (a 2% p-value for a two-tailed test) is the chance that we are wrong to do so
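A p-value can be read straight off the standard normal CDF; a minimal sketch using Python's `statistics.NormalDist` (z = 2.0 here is an illustrative test stat, not the 6.33 from the earlier example, whose p-value is effectively zero):

```python
from statistics import NormalDist

z = 2.0  # illustrative test statistic

# Two-tailed p-value: probability of a z at least this extreme under the null
p_two_tailed = 2 * (1 - NormalDist().cdf(z))
print(round(p_two_tailed, 4))  # 0.0455

# 0.0455 < 0.05, so at 5% significance we (barely) reject the null
print(p_two_tailed < 0.05)  # True
```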
LOS 11f. identify the appropriate test statistic and interpret the results for a hypothesis test concerning the population mean of both large and small samples when the population is normally or approximately distributed and the variance is 1) known or 2) unknown;
- Rule of thumb: when in doubt use t statistic
- Use t-test if pop variance is unknown and either small sample but normal or large sample
- As we said earlier, a small sample with unknown variance and a nonnormal distribution means we can't rely on the answer at all
- calculate t stat as before in hyp testing i.e. diff between sample stat and hyp stat divided by standard error
- the z stat and the t stat are calculated the same way; the difference is that the critical t values depend on the degrees of freedom
Critical Z values - important!
two tailed test significance levels and corresponding critical z values
- 0.10 is ± 1.65
- 0.05 is ± 1.96
- 0.01 is ± 2.58
one tailed test
- 0.10 is +1.28 or -1.28
- 0.05 is +1.65 or -1.65
- 0.01 is + 2.33 or -2.33
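These table values can be reproduced from the inverse normal CDF (a sketch; `statistics.NormalDist` is in the Python standard library from 3.8):

```python
from statistics import NormalDist

inv = NormalDist().inv_cdf

# Two-tailed: put alpha/2 in each tail
for alpha in (0.10, 0.05, 0.01):
    print(alpha, round(inv(1 - alpha / 2), 2))
# 0.1 1.64  (tables round 1.645 up to 1.65)
# 0.05 1.96
# 0.01 2.58

# One-tailed: all of alpha in one tail
for alpha in (0.10, 0.05, 0.01):
    print(alpha, round(inv(1 - alpha), 2))
# 0.1 1.28
# 0.05 1.64  (1.65 in the table)
# 0.01 2.33
```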
LOS 11g. identify the appropriate test statistic and interpret the results for a hypothesis test concerning the equality of the population means of two at least approximately normally distributed populations, based on independent random samples with 1) equal or 2) unequal assumed variances;
I sort of skimmed this because the formula is ridiculous but the summary is this:
- for two independent samples from two normally dist. pop's, the difference in means can be tested (to see if it equals zero, i.e. the means are the same) with a t-stat
- if variances are assumed equal the denominator is based on the variance of the pooled samples
- when variances are assumed unequal, the denominator is based on each sample's own variance combined
- otherwise testing of the hypo is done the same way once you have your result i.e. does it lie within or outside the chosen prob range
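The "ridiculous formula" for the equal-variances case boils down to a pooled variance in the denominator; a minimal sketch (the sample data is made up for illustration):

```python
import math
from statistics import mean, variance

def pooled_t(sample1, sample2):
    """Test stat for Ho: mean1 - mean2 = 0, assuming equal population variances."""
    n1, n2 = len(sample1), len(sample2)
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((n1 - 1) * variance(sample1) + (n2 - 1) * variance(sample2)) / (n1 + n2 - 2)
    return (mean(sample1) - mean(sample2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Made-up samples
t = pooled_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
print(t)  # -1.0
```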
LOS 11h. identify the appropriate test statistic and interpret the results for a hypothesis test concerning the mean difference of two normally distributed populations (paired comparisons test);
- paired comparisons are done when the variables being tested are dependent in some way but the sample distributions are still normal
- A t-stat is used in this case
- t = (avg difference of the n paired observations − hypothesized difference) / standard error of the differences
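A sketch of the paired-comparisons stat (the data is made up; the hypothesized mean difference is zero):

```python
import math
from statistics import mean, stdev

def paired_t(before, after, hyp_diff=0.0):
    """t = (mean difference - hypothesized difference) / standard error of differences."""
    diffs = [a - b for a, b in zip(after, before)]
    std_error = stdev(diffs) / math.sqrt(len(diffs))
    return (mean(diffs) - hyp_diff) / std_error

# Made-up paired observations (e.g. returns before/after a strategy change)
t = paired_t(before=[10, 12, 9, 11], after=[11, 13, 9, 12])
print(t)  # 3.0
```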
LOS 11i. identify the appropriate test statistic and interpret the results for a hypothesis test concerning 1) the variance of a normally distributed population, and 2) the equality of the variances of two normally distributed populations, based on two independent random samples;
Chi Square test of variances (normal distribution)
- Chi square test is used for hyp tests concerning the variance of a normally distributed population, e.g. to test a belief about the variance a, set Ho: σ² = a and Ha: σ² ≠ a (the null is the equality)
- chi-squared distribution is asymmetrical and approaches normal as df increases
- looks lognormal i.e. bounded by zero, humped to the left
- different prob in left tail than right tail - Chi-squared table captures this and is used by matching the df with the prob in the appropriate tail (NB divide sig level by two for two tailed tests)
- Chi squared stat = (df × s²)/σ²hyp
- i.e. degrees of freedom (n − 1) multiplied by the sample variance, divided by the hypothesized variance
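The formula above in a few lines (the numbers are made up):

```python
# Chi-squared stat = (n - 1) * sample variance / hypothesized variance
n = 30               # made-up sample size, so df = 29
sample_var = 0.0016  # made-up sample variance
hyp_var = 0.0025     # hypothesized population variance (Ho: sigma^2 = 0.0025)

chi2 = (n - 1) * sample_var / hyp_var
print(round(chi2, 2))  # 18.56
# Compare against the chi-squared table value for df = 29 at the chosen significance
```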
F-test for equality of variances of two normally distributed pops (independent random samples)
- F-stat is just the ratio of the two sample variances, with the larger one on top
- Use F-table to support or reject hypothesis (match degrees of freedom for each sample - numerator df is on the top of the table)
- rejection region is in the right side of the table
- if you are testing whether one industry's dispersion of earnings is greater than another's, use a one-tailed (greater than) rule
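And the F-stat itself, larger variance on top (made-up numbers):

```python
# F-stat: ratio of the two sample variances, larger on top so F >= 1
var_a = 0.0036  # made-up sample variance, industry A
var_b = 0.0025  # made-up sample variance, industry B

f_stat = max(var_a, var_b) / min(var_a, var_b)
print(round(f_stat, 2))  # 1.44
# Compare against the F-table value for (n_a - 1, n_b - 1) degrees of freedom;
# the rejection region is in the right tail
```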
LOS 11j. distinguish between parametric and nonparametric tests and describe the situations in which the use of nonparametric tests may be appropriate.
- parametric tests rely on assumptions regarding distribution of population and are specific to pop parameters (hence the name)
- non-parametric tests either do not consider a particular pop parameter or make few assumptions about the population that is sampled
- non-parametric tests are used when there is concern about quantities other than the parameters of a distribution or when the assumptions of parametric tests cannot be supported e.g. ranked observations