## drgwen.org tutorials


INFERENTIAL STATISTICS AND HYPOTHESIS TESTING

I. Inferential Statistics and Hypothesis Testing

a. Overview

1) Research is conducted either to answer questions or to test hypotheses. It becomes possible to state hypotheses once the factors in a given situation and their relationships have been described.

2) Using hypotheses allows us to apply more powerful statistical techniques, which are more likely to detect a significant difference or relationship if one exists.

3) To state hypotheses, you must make clear in the review of literature what theoretical structure underlies the hypotheses and the deductions that led to them.

4) To answer questions at the descriptive level, collect data at the qualitative or quantitative level, or both. Qualitative data may be reported as frequencies as well as verbal descriptions. Quantitative data are often presented in graphs and summarized with descriptive statistics. For questions about relationships, correlational techniques are often used.

5) At higher levels of inquiry, hypotheses are stated about the differences between groups and about relationships among variables. Statistical techniques such as ANOVA, correlation, and regression are used to test these hypotheses.

6) Descriptive Statistics: used to report what we observe in a sample.

7) Inferential Statistics: allow generalization from the sample to the population.

8) The sample needs to represent the population to which we want to generalize, and the study must be designed to reduce the chances of error and distortion of the results.
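The descriptive/inferential distinction above can be sketched numerically: descriptive statistics summarize the sample itself, while an inferential step (here, an approximate 95% confidence interval for the population mean) generalizes beyond it. The exam scores below are made up for illustration.

```python
import math
from statistics import mean, stdev

# Hypothetical sample of 10 exam scores (illustrative data only).
scores = [72, 85, 78, 90, 66, 81, 77, 88, 74, 83]

# Descriptive statistics: report what we observe in the sample.
m = mean(scores)   # sample mean
s = stdev(scores)  # sample standard deviation (n - 1 divisor)

# Inferential statistics: generalize to the population.
# Approximate 95% confidence interval for the population mean,
# using the normal critical value 1.96.
se = s / math.sqrt(len(scores))
low, high = m - 1.96 * se, m + 1.96 * se

print(f"sample mean = {m:.1f}, sd = {s:.1f}")
print(f"95% CI for the population mean: ({low:.1f}, {high:.1f})")
```

The interval, not the sample mean alone, is the inferential claim: it states a range of plausible values for the population mean, with a known risk of being wrong.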

II. Hypothesis Testing

a. Introduction: We want to see whether the data support the hypotheses. We do not claim to prove that a hypothesis is true, because one study can never prove anything; it is always possible that some error has distorted the findings.

b. Statistical Significance: Underlying the concept of statistical significance is the notion of probability. Because the researcher wants to generalize beyond the sample, he or she needs to know how likely it is that the results are a matter of chance. Statistics tell us how likely it is that the observed differences resulted from chance alone.

c. Null Hypothesis: Often written as H0. It proposes that there is no difference. The null hypothesis is the basis of the statistical test: if a "significant" difference is found, the null hypothesis is rejected; if no difference is found, we "fail to reject" the null hypothesis.
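The reject / fail-to-reject decision can be sketched as a one-sample z-test. The sample, the null-hypothesis mean of 100, and the known population standard deviation are all made up for this sketch; a real study with an unknown population standard deviation would use a t-test instead.

```python
import math
from statistics import NormalDist, mean

# Hypothetical data: 8 test scores; H0 says the population mean is 100.
sample = [102, 98, 105, 101, 99, 103, 100, 104]
mu0 = 100     # mean under the null hypothesis H0
sigma = 2.0   # population sd, assumed known for this sketch
alpha = 0.05  # level of significance

# z statistic: how far the sample mean falls from mu0, in standard errors.
z = (mean(sample) - mu0) / (sigma / math.sqrt(len(sample)))

# Two-tailed p-value: probability of a result at least this extreme by chance,
# if the null hypothesis were true.
p = 2 * (1 - NormalDist().cdf(abs(z)))

decision = "reject H0" if p < alpha else "fail to reject H0"
print(f"z = {z:.2f}, p = {p:.3f} -> {decision}")
```

Note the wording of the two outcomes: we "reject" or "fail to reject" the null; we never "prove" the research hypothesis.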

d. Types of Error: Types of "error" are defined in terms of the null hypothesis. After analyzing the data, the researcher accepts the null hypothesis if there are no significant results, and rejects the null if there are indeed significant results. Rejecting the null means that significant differences have been found. Because no study is perfect, there is always a chance for error; perhaps this is one of the five chances in 100 (p < .05) that such an extreme result has happened by chance. There are two potential errors that can be made.

• A Type I error is rejecting a true null hypothesis. This occurs when the data indicate a statistically significant result when, in fact, there is no difference in the population. The probability of making a Type I error is called alpha and can be decreased by altering the level of significance: set p at .01 instead of .05, and there is only one chance in 100 that a result termed "significant" could occur by chance alone. However, this makes it more difficult to find a significant result; that is, it decreases the power of the test and increases the risk of a Type II error.
• A Type II error is accepting a false null hypothesis. If the data show no significant results, the researcher accepts the null hypothesis; if there were in fact significant differences, a Type II error has been made. To avoid a Type II error, you could make the level of significance less extreme: there is a greater chance of finding significant results if you are willing to risk 10 chances in 100 of being wrong (p = .10). Other ways to decrease the likelihood of a Type II error are to increase the sample size, decrease sources of extraneous variation, and increase the effect size. Effect size is the magnitude of the impact of the independent variable on the dependent variable; for example, if Group A scored 10 points higher on the final than Group B, the effect size would be 10. Decreasing the likelihood of a Type II error increases the chance of a Type I error.
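The meaning of alpha can be checked by simulation: if both groups are drawn from the same population (so the null hypothesis is true by construction) and we test at p < .05, about 5 in every 100 comparisons should still come out "significant" by chance. This sketch uses a two-sample z-test with a known population standard deviation for simplicity; the sample sizes and seed are arbitrary.

```python
import math
import random
from statistics import NormalDist, mean

random.seed(1)  # reproducible runs
nd = NormalDist()
n, trials, alpha = 50, 2000, 0.05

false_positives = 0
for _ in range(trials):
    # Both groups come from the SAME population: H0 is true by construction.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Two-sample z statistic, known sigma = 1 in each group.
    z = (mean(a) - mean(b)) / math.sqrt(2 / n)
    p = 2 * (1 - nd.cdf(abs(z)))
    if p < alpha:  # a "significant" result despite a true H0: a Type I error
        false_positives += 1

rate = false_positives / trials
print(f"Type I error rate over {trials} trials: {rate:.3f}")  # close to alpha
```

Lowering alpha to .01 in the same simulation would cut the false-positive rate to about 1 in 100, at the cost of missing more real effects (more Type II errors).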

e. One- and Two-Tailed Tests

• Tails refer to the ends of the probability curve. When we test for statistical significance, we are asking whether the difference or relationship is so extreme, so far out in the tail of the distribution, that it is unlikely to have occurred by chance alone. When we hypothesize the direction of the difference, we indicate in which tail of the distribution we expect to find it.
• A one-tailed test of significance is used when a directional hypothesis is stated; a two-tailed test is used in all other situations. The advantage of a one-tailed test is that it is more powerful.
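The extra power comes from the fact that, for the same test statistic, the one-tailed p-value is half the two-tailed one: only one end of the curve counts as "extreme." A borderline z of 1.75 (a made-up value, in the predicted direction) illustrates how the same result can be significant one-tailed but not two-tailed at the .05 level.

```python
from statistics import NormalDist

nd = NormalDist()
z = 1.75  # hypothetical test statistic, in the hypothesized direction

# One-tailed: only the upper tail counts as extreme.
one_tailed_p = 1 - nd.cdf(z)

# Two-tailed: extremes in either direction count, so the p-value doubles.
two_tailed_p = 2 * (1 - nd.cdf(abs(z)))

print(f"one-tailed p = {one_tailed_p:.4f}")  # below .05: significant
print(f"two-tailed p = {two_tailed_p:.4f}")  # above .05: not significant
```

The price of this power is commitment: if the difference turns out to lie in the unpredicted tail, a one-tailed test cannot call it significant.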

f. Degrees of Freedom: Degrees of freedom are related to the number of scores, items, or other units in a data set, and to the idea of freedom to vary. Three scores (1, 5, 6) have three degrees of freedom, one for each independent item: each score is "free to vary," in that before collecting the data we do not know what any of the scores will be. Once we calculate the mean, we lose one of those degrees of freedom: given the mean and any two of the scores, the third is determined and is no longer free to vary. In calculating the variance or standard deviation, we are calculating how much the scores vary around the sample mean. Since the sample mean is known, one degree of freedom is lost, and the degrees of freedom become n - 1, the number of items in the set less one.
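The n - 1 idea can be shown with the same three scores: once the mean is fixed, only two scores are free to vary, and the sample variance divides by n - 1 accordingly. Python's `statistics.variance` already uses the n - 1 divisor, so a by-hand calculation matches it.

```python
from statistics import mean, variance

scores = [1, 5, 6]
m = mean(scores)  # the mean of 1, 5, 6 is 4

# Given the mean and any two scores, the third is determined:
third = len(scores) * m - scores[0] - scores[1]
print(f"mean = {m}, implied third score = {third}")

# Sample variance divides by n - 1 = 2 degrees of freedom.
by_hand = sum((x - m) ** 2 for x in scores) / (len(scores) - 1)
print(f"variance = {variance(scores)} (by hand: {by_hand})")
```

Both calculations give 7, the sum of squared deviations (9 + 1 + 4) divided by the 2 remaining degrees of freedom.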

III. Parametric vs. Nonparametric Tests

1. Parametric Tests: Parametric tests estimate at least one population parameter from the sample statistics. They make certain assumptions, one being that the variable measured in the sample is normally distributed in the population to which we generalize the findings. They are considered more powerful and flexible: we can study the effects of several independent variables on the dependent variable, as well as their interactions. Data should be at the ordinal level or higher.

2. Nonparametric Tests: Nonparametric tests make no assumptions about the distribution of the variable in the population, and so are often called distribution-free. Small samples and seriously distorted data lead researchers to nonparametric techniques.
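As a sketch of the distribution-free idea, here is a hand-rolled Mann-Whitney U test (rank-sum), one common nonparametric alternative to the two-sample t-test. It ranks the pooled scores instead of assuming normality. The data are made up, and the normal approximation to U's distribution is rough for samples this small (and this sketch has no correction for tied values); it is for illustration only.

```python
import math
from statistics import NormalDist

def mann_whitney_u(a, b):
    """Mann-Whitney U with a normal approximation (assumes no tied values)."""
    combined = sorted(a + b)
    # Rank the pooled scores (1 = smallest).
    rank = {v: i + 1 for i, v in enumerate(combined)}
    r1 = sum(rank[v] for v in a)        # rank sum for group a
    n1, n2 = len(a), len(b)
    u = r1 - n1 * (n1 + 1) / 2          # U statistic for group a
    mu = n1 * n2 / 2                    # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p-value
    return u, p

# Hypothetical small samples with no ties; every a-score is below every b-score.
group_a = [1.1, 2.3, 2.9, 3.8]
group_b = [4.0, 4.5, 5.1, 6.2]
u, p = mann_whitney_u(group_a, group_b)
print(f"U = {u}, p = {p:.3f}")
```

Because only the ranks enter the calculation, extreme scores or a badly skewed distribution cannot distort the result the way they can with a parametric test.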