**10.3.2.1 Spearman’s Rho**

Spearman's rho is a non-parametric measure of correlation between ranked variables.

**Criteria for Use**

**Data Type:** Best suited for ordinal data or when relationships are non-linear.

**Sample Size:** Effective for smaller samples, as it is less sensitive to outliers.

**Distribution:** Ideal for non-normally distributed data.

**Appropriate Contexts**

Useful in psychology for correlating ranks, such as preference rankings in a survey.

Applied in situations where data violate the assumptions of Pearson's r, like skewed distributions.
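
As a quick illustration, ranked data can be correlated in a few lines with SciPy (the preference rankings below are invented for illustration):

```python
from scipy import stats

# Two judges' preference rankings of the same 8 survey items (hypothetical).
judge_a = [1, 2, 3, 4, 5, 6, 7, 8]
judge_b = [2, 1, 4, 3, 6, 5, 8, 7]

# spearmanr returns the rank-correlation coefficient and a p-value.
rho, p = stats.spearmanr(judge_a, judge_b)
print(f"rho = {rho:.3f}, p = {p:.4f}")
```

Because the judges' rankings agree closely, rho comes out strongly positive (about 0.90 here).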

**10.3.2.2 Pearson’s r**

Pearson's r is a measure of linear correlation between two variables.

**Criteria for Use**

**Data Type:** Optimal for interval or ratio data.

**Assumptions:** Requires normal distribution, a linear relationship, and homoscedasticity (constant variance).

**Sample Size:** More accurate with larger samples, as small samples may lead to misleading results.

**Appropriate Contexts**

Frequently used in experimental psychology to examine relationships between variables, such as the correlation between stress levels and performance.

Ideal for continuous data, like measuring time or scores on standardised tests.
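
A minimal sketch with SciPy shows how Pearson's r is computed for continuous data (the sleep and test-score figures below are invented):

```python
from scipy import stats

# Hypothetical interval data: hours of sleep and scores on a standardised test.
sleep_hours = [5.0, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0]
test_score  = [52, 58, 60, 64, 66, 70, 71, 75]

# pearsonr returns the linear correlation coefficient and a p-value.
r, p = stats.pearsonr(sleep_hours, test_score)
print(f"r = {r:.3f}, p = {p:.4f}")
```

The near-linear relationship in this toy dataset yields an r close to 1; with real data, a scatterplot should be inspected first to confirm linearity.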

**10.3.2.3 Wilcoxon Signed-Rank Test**

This test compares two related samples or repeated measurements.

**Criteria for Use**

**Data Type:** Suitable for ordinal data or non-normally distributed interval data.

**Sample Condition:** Ideal for paired data, such as pre-test and post-test scenarios.

**Assumptions:** Assumes differences between pairs are symmetrically distributed.

**Appropriate Contexts**

Applied when comparing before-and-after measurements in a single group, like cognitive scores before and after an educational intervention.

Useful in clinical psychology to assess treatment effects over time.
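
A before-and-after comparison of this kind can be sketched with SciPy's `wilcoxon` function (the cognitive scores below are invented for illustration):

```python
from scipy import stats

# Hypothetical cognitive scores before and after an educational intervention.
pre  = [20, 22, 19, 24, 25, 21, 18, 23]
post = [21, 25, 23, 29, 31, 28, 26, 32]

# wilcoxon ranks the paired differences and tests whether they centre on zero.
stat, p = stats.wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")
```

Every participant improved in this toy dataset, so the test rejects the hypothesis of no change even with only eight pairs.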

**10.3.2.4 Mann-Whitney U Test**

A non-parametric test for comparing two independent groups.

**Criteria for Use**

**Data Type:** Used for ordinal or continuous data that do not follow a normal distribution.

**Group Independence:** Requires the two groups to be independent of each other.

**Sample Size:** Effective even with small sample sizes.

**Appropriate Contexts**

Employed to compare differences between groups in observational studies, like comparing stress levels in different professions.

Useful in comparing outcomes of different therapeutic approaches in clinical settings.
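
A comparison of two independent groups can be sketched with SciPy's `mannwhitneyu` (the stress ratings for the two professions are invented):

```python
from scipy import stats

# Hypothetical ordinal stress ratings (1-10) for two independent groups.
nurses   = [7, 8, 6, 9, 7, 8]
teachers = [5, 4, 6, 5, 3, 4]

# mannwhitneyu compares the rank distributions of the two groups.
u, p = stats.mannwhitneyu(nurses, teachers)
print(f"U = {u}, p = {p:.4f}")
```

Because nearly every nurse's rating exceeds every teacher's, the U statistic is close to its maximum and the difference is significant despite the small samples.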

**10.3.2.5 Related t-test**

Also known as the paired sample t-test, this test compares means of two related groups.

**Criteria for Use**

**Data Type:** Best for interval or ratio data.

**Assumptions:** Requires the differences between pairs to be normally distributed.

**Pairing:** Suitable for data involving matched pairs or repeated measures.

**Appropriate Contexts**

Commonly used to assess the impact of an intervention by comparing pre- and post-intervention data.

Appropriate for experimental designs where participants are tested under different conditions.
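
A pre/post comparison with the related t-test looks like this in SciPy (the intervention scores below are invented for illustration):

```python
from scipy import stats

# Hypothetical scores before and after an intervention (roughly normal data).
pre  = [68, 72, 75, 70, 66, 74, 71, 69]
post = [72, 75, 79, 74, 70, 77, 75, 72]

# ttest_rel tests whether the mean of the paired differences is zero.
t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Each participant's score rises by a similar amount in this toy dataset, so the paired differences are consistent and the test is highly significant.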

**10.3.2.6 Unrelated t-test**

This test is used to compare means of two independent groups.

**Criteria for Use**

**Data Type:** Suitable for interval or ratio data.

**Assumptions:** Necessitates normal distribution, equal variances (homogeneity of variance), and independent samples.

**Sample Size:** More reliable with larger samples, as the greater statistical power reduces the risk of Type II errors.

**Appropriate Contexts**

Used in comparative studies, such as examining behavioural differences between genders or age groups.

Applicable in assessing the effectiveness of different psychological treatments in separate groups.
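
Comparing two independent groups with the unrelated t-test is a one-liner in SciPy (the scores for the two treatment groups are invented):

```python
from scipy import stats

# Hypothetical outcome scores for two independent treatment groups.
treatment_a = [78, 82, 75, 80, 77, 83, 79, 81]
treatment_b = [70, 74, 68, 72, 71, 69, 73, 75]

# ttest_ind compares the means of two independent samples.
t, p = stats.ttest_ind(treatment_a, treatment_b)
print(f"t = {t:.2f}, p = {p:.4f}")
```

If the equal-variance assumption is doubtful, passing `equal_var=False` runs Welch's t-test instead.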

**10.3.2.7 Chi-Squared Test**

A test to determine the association between two categorical variables.

**Criteria for Use**

**Data Type:** Ideal for nominal data.

**Sample Size:** Requires a sufficient sample size so that expected frequencies in the contingency table are adequate (generally at least 5 per cell).

**Independence:** Assumes each observation is independent and not influenced by the others.

**Appropriate Contexts**

Used in social psychology to explore relationships between categorical variables like marital status and happiness.

Suitable for large-scale surveys or observational studies where variables are categorical, such as presence or absence of a symptom.
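
Given a contingency table of counts, the test can be run with SciPy's `chi2_contingency` (the symptom counts below are invented for illustration):

```python
from scipy import stats

# Hypothetical 2x2 table: symptom present / absent in two groups.
observed = [[30, 10],   # group A
            [15, 25]]   # group B

# chi2_contingency also returns the degrees of freedom and expected counts,
# which let you check the "at least 5 per cell" rule of thumb.
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

Here all expected frequencies are well above 5, so the test's sample-size requirement is met and the association is significant.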

In summary, the selection of a statistical test in psychology hinges on the data's nature, the research design, and the specific hypotheses. A deep understanding of each test's unique features and appropriate application contexts ensures that the conclusions drawn from psychological research are both valid and reliable.

## FAQ

**Why is it important to check whether data are normally distributed before choosing a statistical test?**

Checking for normal distribution is vital because many statistical tests, like the parametric tests (e.g., Pearson's r, t-tests), are based on the assumption that the data are normally distributed. This assumption is crucial because it impacts the test's accuracy in estimating population parameters. Normal distribution implies that most data points cluster around the mean, decreasing symmetrically on either side, forming a bell-shaped curve. If the data deviate significantly from this pattern, parametric tests may produce unreliable results, leading to incorrect conclusions. For example, using a t-test on highly skewed data might suggest a significant difference where none exists. In cases where normality is violated, non-parametric tests like Spearman's rho or the Mann-Whitney U test, which do not assume normal distribution, become more appropriate. They are more robust against outliers and skewed distributions, thus providing more valid results in such circumstances.
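
Normality can be checked formally before a test is chosen; a minimal sketch using SciPy's Shapiro-Wilk test (the skewed response times below are invented):

```python
from scipy import stats

# Hypothetical response times (seconds) with a heavy right skew.
response_times = [0.3, 0.3, 0.4, 0.4, 0.4, 0.5, 0.5, 0.6,
                  0.7, 0.8, 1.1, 1.5, 2.4, 4.0]

# shapiro tests the null hypothesis that the sample came from a normal
# distribution; a small p-value suggests normality is violated.
stat, p = stats.shapiro(response_times)
looks_normal = p > 0.05
print(f"W = {stat:.3f}, p = {p:.4f}, looks_normal = {looks_normal}")
```

For this skewed sample the test rejects normality, signalling that a non-parametric alternative would be the safer choice.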

**What are Type I and Type II errors, and how does the choice of statistical test affect them?**

Type I and Type II errors are concepts crucial to understanding the reliability of statistical tests. A Type I error occurs when the test incorrectly rejects a true null hypothesis (i.e., it indicates a significant effect when there is none), while a Type II error happens when the test fails to reject a false null hypothesis (i.e., it misses a significant effect that is present). The choice of statistical test impacts the likelihood of these errors. For instance, parametric tests, which assume normal distribution and homogeneity of variance, might be more prone to Type I errors if these assumptions are violated. Conversely, non-parametric tests, while less susceptible to these assumptions, may have a higher chance of Type II errors due to their generally lower statistical power. Researchers need to balance these risks, choosing a test that minimises the chance of errors while being appropriate for their data's characteristics. This decision is crucial in ensuring that the conclusions drawn from a study are both accurate and trustworthy.

**How does the level of measurement of data influence the choice of statistical test?**

The level of measurement of data (nominal, ordinal, interval, or ratio) greatly influences the choice of statistical test. Nominal data, which consist of categories without any inherent order (e.g., gender, race), are typically analysed using tests like the Chi-Squared test, which can determine associations between categorical variables. Ordinal data, which represent categories with a logical order but uneven intervals (e.g., satisfaction ratings), are more suited to non-parametric tests like Spearman's rho or the Mann-Whitney U test, as these tests can handle data where the distance between ranks isn't consistent. For interval and ratio data, which have equal intervals between measurements (e.g., test scores, age), parametric tests like Pearson's r or the t-test are often used. These tests assume that the data follow a normal distribution and can provide more precise information about relationships or differences. Choosing the correct statistical test based on data level ensures the accuracy and validity of the analysis.

**When is a Wilcoxon Signed-Rank Test preferred over a Related t-test?**

A Wilcoxon Signed-Rank Test is preferred over a Related t-test in situations where the assumptions of the t-test, particularly the normal distribution of differences, are not met. It is used for ordinal data or when interval/ratio data are significantly skewed. This non-parametric test is ideal for small sample sizes or when the data include outliers that could significantly affect the results. For instance, in a psychological study comparing the effectiveness of a therapy session by measuring patient stress levels before and after the session, if the data are not normally distributed or if there are outliers (e.g., a few patients with exceptionally high stress levels), the Wilcoxon Signed-Rank Test would provide a more accurate analysis than the Related t-test. This test ranks the differences between paired observations before comparing their sums, making it less sensitive to non-normal distributions and outliers.

**Can an unrelated t-test be used with ordinal data?**

The unrelated t-test, also known as the independent samples t-test, is not suitable for ordinal data. This test is designed for interval or ratio data that follow a normal distribution. Ordinal data, by contrast, represent ordered categories that do not necessarily have equal intervals between them (e.g., rankings, Likert scale ratings). The distances between ordinal categories are not consistent or quantifiable in the same way as interval or ratio data. Applying an unrelated t-test to ordinal data could lead to misinterpretation of the results, as the mathematical operations underlying the t-test assume equal intervals and a linear relationship between values. For ordinal data from independent groups, non-parametric alternatives like the Mann-Whitney U test are more appropriate, as they compare ranks between groups without assuming normally distributed interval data.

## Practice Questions

**Describe a scenario in a psychological study where using a Mann-Whitney U test would be more appropriate than a t-test. Explain your reasoning.**

In a psychological study, a Mann-Whitney U test would be more appropriate than a t-test in a scenario where researchers are comparing the levels of anxiety between two independent groups, such as students and teachers, with non-normally distributed data. The Mann-Whitney U test is a non-parametric alternative to the t-test and is better suited for ordinal or continuous data that do not follow a normal distribution. This test is also preferred when the sample size is relatively small, reducing the risk of inaccuracies due to the distributional assumptions required by the t-test.

**Why would a researcher choose to use Spearman's rho instead of Pearson's r in a study? Provide an example in your explanation.**

A researcher would choose Spearman's rho instead of Pearson's r when dealing with ordinal data or when the relationship between variables is non-linear. For example, in a study investigating the correlation between ranked anxiety levels (ordinal data) and the number of social interactions, Spearman's rho is more appropriate. This test is used when the assumptions of Pearson's r, such as normality and linearity of data, are not met. Spearman's rho provides a measure of a monotonic relationship, which is ideal for understanding correlations in ranked data or in situations where the relationship is not strictly linear.