Tutorial 7.7 - The power of contrasts
15 Aug 2017
Context
- Recent statistical advice has progressively advocated the use of effect sizes and confidence intervals over the traditional p-values of hypothesis tests.
- It is argued that the size of a p-value is regularly, and incorrectly, inferred to represent the strength of an effect.
- Proponents argue that effect sizes (the magnitude of the differences between treatment levels or trajectories) and associated measures of confidence or precision of these estimates provide far more informative summaries.
- Furthermore, the case for these recommendations strengthens as model complexity increases (particularly with the introduction of nesting or repeated measures), where appropriate degrees of freedom on which to base p-value calculations become increasingly tenuous.
- Consequently, there have been numerous pieces of work that oppose/criticize/impugn the use of p-values and advocate the use of effect sizes.
- Unfortunately, few offer much guidance on how such effect sizes can be calculated, presented and interpreted.
- The reader is therefore left questioning how they should perform and present their statistical findings, knowing that if they continue to present their results in the traditional framework, they are likely to be challenged during the review process.
- This paper aims to address these issues by illustrating the following:
- The use of contrast coefficients to calculate effect sizes, marginal means and associated confidence intervals for a range of common statistical designs.
- The techniques apply equally to models of any complexity and model fitting procedure (including Bayesian models).
- Graphical presentation of effect sizes
- All examples are backed up with full R code (in appendix)
All of the following data sets are intentionally artificial. By creating data sets with specific characteristics (samples drawn from exactly known populations), we can better assess the accuracy of the techniques.
Contrasts in Single Factor ANOVA
Linear model effects parameterization for single factor ANOVA looks like: $$y_i = \mu + \beta x_i + \varepsilon_i \hspace{1cm} \varepsilon_i \sim \mathcal{N}(0,\sigma^2) $$
Motivating example
Let's say we had measured a response (e.g. weight) from four individuals (replicates) treated in one of three ways (three treatment levels):
- three groups (n.groups = 3)
- four replicates per group (n = 4)
- the first (control) group has a baseline mean response of 40
- groups two and three have a mean response 10 and 15 units higher than the control
- the noise is drawn from a normal distribution with a mean of 0 and a standard deviation of 3 ($\sigma^2=9$)
set.seed(1)
n.groups <- 3
n <- 4
FactA <- gl(n = n.groups, k = n, len = n.groups * n,
    lab = paste("a", 1:n.groups, sep = ""))
baseline <- 40
all.effects <- c(baseline, 10, 15)
sigma <- 3
X <- as.matrix(model.matrix(~FactA))
eps <- round(rnorm(n.groups * n, 0, sigma), 2)
Response <- as.numeric(as.matrix(X) %*% as.matrix(all.effects) + eps)
data <- data.frame(Response, FactA)
head(data)
  Response FactA
1    38.12    a1
2    40.55    a1
3    37.49    a1
4    44.79    a1
5    50.99    a2
6    47.54    a2
(Figures: the raw data plotted alongside a table of the treatment means.)
Two alternative model parameterizations (based on treatment contrasts) would be:
(Tables: the design matrices for the means parameterization and the effects parameterization, side by side.)
Whilst the above two are equivalent for simple designs (such as that illustrated), the means parameterization (that does not include an intercept, or overall mean) is limited to simple single factor designs. Designs that involve additional predictors can only be accommodated via effects parameterization.
Let's now fit the above means- and effects-parameterized models and contrast the estimated parameters:
Means parameterization:

summary(lm(Response ~ -1 + FactA, data))$coef

        Estimate Std. Error  t value     Pr(>|t|)
FactAa1  40.2375   1.300481 30.94046 1.885834e-10
FactAa2  50.5500   1.300481 38.87022 2.453386e-11
FactAa3  56.6300   1.300481 43.54541 8.871078e-12

Effects parameterization:

summary(lm(Response ~ FactA, data))$coef

            Estimate Std. Error   t value     Pr(>|t|)
(Intercept)  40.2375   1.300481 30.940464 1.885834e-10
FactAa2      10.3125   1.839159  5.607184 3.312139e-04
FactAa3      16.3925   1.839159  8.913043 9.243186e-06
The model on the left omits the intercept and thus estimates the treatment means. The parameters estimated by the model on the right are the mean of the first treatment level and the differences between each of the other treatment means and the first treatment mean. If there were only two treatment levels, or if the first treatment was a control (to which you wish to compare the other treatment means), then the effects model provides exactly what statisticians are advocating - estimates of the effect sizes along with estimates of precision (standard errors from which confidence intervals can easily be derived).
These parameters represent model coefficients that can be substituted back into a linear model. They represent the set of linear constants in the linear model. We can subsequently multiply these parameters to produce alternative linear combinations in order to estimate other values from the linear model.
$$ \begin{align*} y_i &= \mu + \beta x_i\\ y_i &= \mu + \beta_2 x_{2i} + \beta_3 x_{3i}\\ y_i &= 40.24\times Intercept + 10.31\times FactAa2 + 16.39\times FactAa3\\ \end{align*} $$For example, if we wanted to estimate the effect size for the difference between treatments a2 and a3, we could use the coefficients a1=0, a2=1 and a3=-1. That is, we multiply our model parameters by a contrast matrix of (0, 1, -1); $$ y_i = \begin{bmatrix}\mu&\beta_2&\beta_3\end{bmatrix}\times \begin{bmatrix}a1\\a2\\a3\end{bmatrix} $$ where the matrix $\begin{bmatrix}a1\\a2\\a3\end{bmatrix}$ is a matrix of contrast coefficients. Each row corresponds to a column in the parameter matrix (vector). The contrast matrix can have multiple columns, each corresponding to a different derived comparison.
$$ \begin{align*} comparison &= 40.24\times a1 + 10.31\times a2 + 16.39\times a3\\ comparison &= 40.24\times 0 + 10.31\times 1 + 16.39\times -1\\ comparison &= 10.31 - 16.39\\ comparison &= -6.08\\ \end{align*} $$So the difference in means between group 2 and group 3 is -6.08.
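This arithmetic can be reproduced in R by multiplying the fitted coefficients by the contrast vector. A minimal sketch, re-creating the simulated single-factor data from the start of this tutorial:

```r
# Re-create the single-factor data simulated at the start of the tutorial
set.seed(1)
n.groups <- 3
n <- 4
FactA <- gl(n = n.groups, k = n, len = n.groups * n,
            lab = paste("a", 1:n.groups, sep = ""))
all.effects <- c(40, 10, 15)  # baseline mean, then offsets for a2 and a3
X <- model.matrix(~FactA)
Response <- as.numeric(X %*% all.effects + round(rnorm(n.groups * n, 0, 3), 2))
data <- data.frame(Response, FactA)

# Fit the effects model and apply the contrast coefficients (0, 1, -1)
data.lm <- lm(Response ~ FactA, data)
cc <- c(0, 1, -1)               # a2 vs a3
est <- sum(coef(data.lm) * cc)  # equivalently: coef(data.lm) %*% cc
round(est, 2)                   # -6.08
```

The element-wise multiply-and-sum is exactly what the matrix product `coef(data.lm) %*% cc` performs.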
Some researchers might be familiar with the use of contrasts in performing so-called 'planned comparisons' or 'planned contrasts'. For simple comparisons, this can involve redefining the actual parameters used in the linear model to reflect more meaningful comparisons amongst groups. For example, rather than compare each group mean against the mean of the first treatment group, we may wish to include a comparison of the first treatment group (perhaps a control group) against the average of the two other treatment groups.
contrasts(data$FactA) <- cbind('a1 vs (a2 & a3)' = c(0, 1, 1))
summary(lm(Response ~ FactA, data))
Call:
lm(formula = Response ~ FactA, data = data)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.0100 -2.2256  0.2062  1.0975  4.5525 

Coefficients:
                     Estimate Std. Error t value Pr(>|t|)    
(Intercept)            40.238      1.300  30.940 1.89e-10 ***
FactAa1 vs (a2 & a3)   13.352      1.593   8.383 1.52e-05 ***
FactA                   4.299      1.300   3.306  0.00914 ** 
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 2.601 on 9 degrees of freedom
Multiple R-squared: 0.9002, Adjusted R-squared: 0.8781
F-statistic: 40.6 on 2 and 9 DF, p-value: 3.13e-05
summary(aov(Response ~ FactA, data), split = list(FactA = list('a1 vs (a2 & a3)'=1, 2)))
                         Df Sum Sq Mean Sq F value   Pr(>F)    
FactA                     2  549.4   274.7   40.60 3.13e-05 ***
  FactA: a1 vs (a2 & a3)  1  475.4   475.4   70.28 1.52e-05 ***
  FactA:                  1   73.9    73.9   10.93  0.00914 ** 
Residuals                 9   60.9     6.8                     
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
In the above code, we first re-parameterized the linear model (via setting the contrasts) such that $\beta_2$ represented the difference between group 1 and the average of groups 2 and 3. Remember that the first parameter is the mean of the first group, and thus all other parameters are relative to this group. However, most other comparisons become very difficult to parameterize this way.
It is much easier to fit the model as either an effects-parameterized model or a means-parameterized model and then apply a set of estimable contrasts post fit. For example, to explore the following two contrasts:
- group 1 vs the average of groups 2 and 3
- group 2 vs 3
contrasts(data$FactA) <- contr.treatment
data.lm <- lm(Response ~ FactA, data)
library(multcomp)
(contr <- rbind('a1 vs (a2 & a3)' = c(1, -0.5, -0.5),
                'a2 vs a3' = c(0, 1, -1)))
                [,1] [,2] [,3]
a1 vs (a2 & a3)    1 -0.5 -0.5
a2 vs a3           0  1.0 -1.0
(contr.m <- cbind(0, contr %*% contr.treatment(3)))
                     2    3
a1 vs (a2 & a3) 0 -0.5 -0.5
a2 vs a3        0  1.0 -1.0
# Technically we should first check that all contrasts are orthogonal
# (independent of one another); if they are, the off-diagonal elements
# of the cross-product matrix should all be zero
crossprod(contr.m)
       2     3
  0 0.00  0.00
2 0 1.25 -0.75
3 0 -0.75 1.25
# In this case, the two contrasts that we have defined are not independent
# of one another and cannot both be estimated (without then applying
# adjustments for multiple comparisons).
# Nevertheless, in the interest of a demonstration, we will proceed.
summary(glht(data.lm, linfct = contr.m))
Simultaneous Tests for General Linear Hypotheses

Fit: lm(formula = Response ~ FactA, data = data)

Linear Hypotheses:
                     Estimate Std. Error t value Pr(>|t|)    
a1 vs (a2 & a3) == 0  -13.352      1.593  -8.383    3e-05 ***
a2 vs a3 == 0          -6.080      1.839  -3.306   0.0177 *  
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)
confint(glht(data.lm, linfct=contr.m))
Simultaneous Confidence Intervals

Fit: lm(formula = Response ~ FactA, data = data)

Quantile = 2.6567
95% family-wise confidence level

Linear Hypotheses:
                     Estimate      lwr      upr
a1 vs (a2 & a3) == 0 -13.3525 -17.5840  -9.1210
a2 vs a3 == 0         -6.0800 -10.9661  -1.1939
# OR
summary(glht(data.lm, linfct = mcp(FactA = contr)))
Simultaneous Tests for General Linear Hypotheses

Multiple Comparisons of Means: User-defined Contrasts

Fit: lm(formula = Response ~ FactA, data = data)

Linear Hypotheses:
                     Estimate Std. Error t value Pr(>|t|)    
a1 vs (a2 & a3) == 0  -13.352      1.593  -8.383    3e-05 ***
a2 vs a3 == 0          -6.080      1.839  -3.306   0.0177 *  
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)
# Alternatively, for individual comparisons, there is also the contrast()
# function in the package of the same name
library(contrast)
contrast(data.lm, a = list(FactA = 'a2'), b = list(FactA = 'a3'))
lm model parameter contrast

  Contrast     S.E.     Lower     Upper     t df Pr(>|t|)
1    -6.08 1.839159 -10.24047 -1.919534 -3.31  9   0.0091
In so doing, this technique provides a relatively easy way to test other non-standard hypotheses (linear effects).
However, a greater understanding of contrasts has even greater benefits when it comes to calculating effect sizes, marginal means and their associated confidence intervals - features that are considered of greater interpretive value than hypothesis tests. Contrasts can also be used to derive a range of other parameters from the fitted model.
Calculating treatment means from the effects model
A simple way to illustrate the use of contrasts to derive new parameters from a linear model is in the estimation of the population treatment means from the effects model. $$ y_i = \begin{bmatrix}\mu&\beta_2&\beta_3\end{bmatrix}\times \begin{bmatrix}1&1&1\\0&1&0\\0&0&1\end{bmatrix} $$

contrasts(data$FactA) <- contr.treatment
data.lm <- lm(Response ~ FactA, data)
cmat <- cbind(c(1, 0, 0), c(1, 1, 0), c(1, 0, 1))
colnames(cmat) <- c("a1", "a2", "a3")
data.mu <- data.lm$coef %*% cmat
data.mu
          a1    a2    a3
[1,] 40.2375 50.55 56.63
And to obtain the standard error and thus confidence intervals for each group.
data.se <- sqrt(diag(t(cmat) %*% vcov(data.lm) %*% cmat))
data.se
      a1       a2       a3 
1.300481 1.300481 1.300481
data.ci <- as.numeric(data.mu) + outer(data.se, qt(c(0.025, 0.975), df = data.lm$df.resid))
data.ci
       [,1]     [,2]
a1 37.29561 43.17939
a2 47.60811 53.49189
a3 53.68811 59.57189
Calculating all the pairwise effects from the effects model
$$ y_i = \begin{bmatrix}\mu&\beta_2&\beta_3\end{bmatrix}\times \begin{bmatrix}0&0&0\\1&0&-1\\0&1&1\end{bmatrix} $$

contrasts(data$FactA) <- contr.treatment
data.lm <- lm(Response ~ FactA, data)
cmat <- t(rbind(c(0, 1, 0), c(0, 0, 1), c(0, -1, 1)))
# OR
newdata <- data.frame(FactA = levels(data$FactA))
tuk.mat <- contrMat(n = table(newdata$FactA), type = "Tukey")
cmat <- tuk.mat %*% model.matrix(~FactA, data = newdata)
colnames(cmat) <- c("a1", "a2", "a3")
cmat <- t(cmat)
data.mu <- data.lm$coef %*% cmat
data.mu
     a2 - a1 a3 - a1 a3 - a2
[1,] 10.3125 16.3925    6.08
And to obtain the standard error and thus confidence intervals for each group.
data.se <- sqrt(diag(t(cmat) %*% vcov(data.lm) %*% cmat))
data.se
 a2 - a1  a3 - a1  a3 - a2 
1.839159 1.839159 1.839159
data.ci <- as.numeric(data.mu) + outer(data.se, qt(c(0.025, 0.975), df = data.lm$df.resid))
data.ci
            [,1]     [,2]
a2 - a1  6.152034 14.47297
a3 - a1 12.232034 20.55297
a3 - a2  1.919534 10.24047
Factorial ANOVA
Two-factor Main effects
Theoretical means
We will confirm the theoretical means of this design matrix (based on the defined effects).
 | b1 | b2 | b3 |
---|---|---|---|
a1 | 40.00 | 45.00 | 50.00 |
a2 | 30.00 | 33.00 | 44.00 |
a3 | 35.00 | 43.00 | 45.00 |
a4 | 45.00 | 50.00 | 58.00 |
a5 | 50.00 | 59.00 | 58.00 |
And now we can add noise to the response to create data with variability. We will do this by adding to each observation a value randomly drawn from a normal distribution with a mean of zero and a standard deviation of 3.
Let's just examine the first six rows of these data to confirm that a little noise has been added.
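The data-generation code for this factorial example did not survive extraction. The following is a minimal sketch that produces data consistent with the theoretical means table above; the three replicates per cell and the random seed are assumptions, so the exact values will differ from the tutorial's:

```r
# Rebuild factorial data from the theoretical cell means table
# (assumptions: three replicates per cell, set.seed(1))
set.seed(1)
means <- rbind(a1 = c(40, 45, 50),
               a2 = c(30, 33, 44),
               a3 = c(35, 43, 45),
               a4 = c(45, 50, 58),
               a5 = c(50, 59, 58))
colnames(means) <- c("b1", "b2", "b3")
n <- 3  # replicates per cell (assumption)
data <- expand.grid(rep = 1:n,
                    FactB = factor(colnames(means)),
                    FactA = factor(rownames(means)))[, c("FactA", "FactB")]
# Look up each observation's theoretical mean, then add N(0, 3) noise
data$Response <- means[cbind(data$FactA, data$FactB)] +
    rnorm(nrow(data), mean = 0, sd = 3)
head(data)
```

Indexing `means` with `cbind(data$FactA, data$FactB)` uses the factor codes as (row, column) pairs, so each row picks up its own cell's theoretical mean.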
Sample means
We will now calculate the sample means of each FactA by FactB combination. These are called 'cell means' as they are the means of each cell in a table in which the levels of FactA and FactB are the rows and columns (as was depicted in the table above for the theoretical means). If the samples are collected in an unbiased manner (whereby the collected samples reflect the populations), then the cell means should roughly approximate the true means of the underlying populations from which the observations were drawn.
Raw sample group means:

Here is a side-by-side comparison of the theoretical means, the raw sample means and the cell differences:
(Tables: the theoretical cell means, the sample cell means, and the differences between them, side by side.)
Model term | Summarized model parameter | Interpretation |
---|---|---|
µ | (Intercept) | (a1b1) |
β1 | FactAa2 | (a2b1 - a1b1) |
β2 | FactAa3 | (a3b1 - a1b1) |
β3 | FactAa4 | (a4b1 - a1b1) |
β4 | FactAa5 | (a5b1 - a1b1) |
β5 | FactBb2 | (a1b2 - a1b1) |
β6 | FactBb3 | (a1b3 - a1b1) |
β7 | FactAa2:FactBb2 | (a2b2+a1b1) - (a2b1+a1b2) |
β8 | FactAa3:FactBb2 | (a3b2+a1b1) - (a3b1+a1b2) |
β9 | FactAa4:FactBb2 | (a4b2+a1b1) - (a4b1+a1b2) |
β10 | FactAa5:FactBb2 | (a5b2+a1b1) - (a5b1+a1b2) |
β11 | FactAa2:FactBb3 | (a2b3+a1b1) - (a2b1+a1b3) |
β12 | FactAa3:FactBb3 | (a3b3+a1b1) - (a3b1+a1b3) |
β13 | FactAa4:FactBb3 | (a4b3+a1b1) - (a4b1+a1b3) |
β14 | FactAa5:FactBb3 | (a5b3+a1b1) - (a5b1+a1b3) |
But this is where the good stuff begins. As the parameters are defined to encapsulate all of the patterns amongst the groups, the linear combination of parameters can now be used to calculate a whole range of related derived estimates (and their standard errors etc.) including:
- population cell means
- marginal means (row and column means)
- effect sizes (differences between marginal means)
- effect sizes (deviations from the overall mean)
In order to achieve this wizardry, we must indicate which combination of the parameter estimates is relevant for each of the required derived estimates. And this is where contrast coefficients come in. The contrast coefficients are a set of numbers (equal in length to the number of model parameters) that determine the combination and weighting of the model parameters appropriate for each derived estimate. Hence, when the set of model parameters is multiplied by a single set of contrast coefficients and the products summed, the result is a new derived estimate.
By way of example, the following contrast coefficients would specify that the derived estimate should include only (and all of) the first model parameter (and thus, according to the table above, estimate the mean of cell FactA level a1 and FactB level b1). In R, %*% is a special operator that performs matrix multiplication (each element of one matrix is multiplied by the corresponding element of the other and the products are summed).
Only the first model parameter is included as all the others are multiplied by zeros.
We can calculate multiple derived estimates at once, by multiplying the model parameters by multiple sets of contrast coefficients (represented as a matrix). Hence to also calculate the cell mean of FactA level a2 and FactB b1 (a2b1), we would make the first (a1b1 cell mean) and second (difference between a2b1 and a1b1 cell means) contrast coefficients ones (1) and the others zeros. The cbind() function binds together vectors into the columns of a matrix.
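The code for these two steps was lost in extraction; here is a self-contained sketch of the mechanics. The simulated data are stand-ins (any 5 x 3 factorial will do), not the tutorial's exact values:

```r
# Stand-in 5 x 3 factorial data, purely to illustrate the mechanics
set.seed(1)
data <- expand.grid(rep = 1:3,
                    FactB = factor(paste0("b", 1:3)),
                    FactA = factor(paste0("a", 1:5)))
data$Response <- 40 + as.numeric(data$FactA) + 2 * as.numeric(data$FactB) +
    rnorm(nrow(data), 0, 3)
data.lm <- lm(Response ~ FactA * FactB, data)

# A single set of contrast coefficients selecting only the first of the
# 15 model parameters: %*% multiplies element-wise and sums, so this
# returns the a1b1 cell mean
cc <- c(1, rep(0, 14))
coef(data.lm) %*% cc

# Multiple sets at once: bind the contrast vectors into matrix columns
ccmat <- cbind(a1b1 = c(1, rep(0, 14)),
               a2b1 = c(1, 1, rep(0, 13)))  # intercept plus FactAa2
coef(data.lm) %*% ccmat
```

Because the model is saturated (full interaction), these derived estimates coincide exactly with the observed cell means.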
At this point we could go on and attempt to work out which combinations of model parameters would be required to define each of the cell means. Whilst it is reasonably straightforward to do this for cells a1b1, a2b1, a3b1, a4b1, a5b1, a1b2 and a1b3 (those along the top row and down the left column), it requires more mental gymnastics to arrive at the entire set. However, there is a function in R called model.matrix() that helps us achieve this in a relatively pain-free manner.
Modeled population means

Hence, the entire contrast matrix for deriving cell mean estimates would be (note this is transposed to make it more readable):
 | a1b1 | a1b2 | a1b3 | a2b1 | a2b2 | a2b3 | a3b1 | a3b2 | a3b3 | a4b1 | a4b2 | a4b3 | a5b1 | a5b2 | a5b3 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
µ | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
β1 | 0.00 | 0.00 | 0.00 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
β2 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
β3 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00 | 0.00 |
β4 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 1.00 | 1.00 |
β5 | 0.00 | 1.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 1.00 | 0.00 |
β6 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 1.00 |
β7 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
β8 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
β9 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.00 | 0.00 |
β10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 |
β11 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
β12 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
β13 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.00 |
β14 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 |
The estimated population cell means are thence derived by multiplying the model parameters by this transposed contrast matrix:
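The derivation code itself did not survive extraction; a self-contained sketch follows, using model.matrix() to build the cell-mean contrast matrix and then computing means, standard errors and confidence intervals (the simulated data are stand-ins, as in the sketches above):

```r
# Stand-in 5 x 3 factorial data
set.seed(1)
data <- expand.grid(rep = 1:3,
                    FactB = factor(paste0("b", 1:3)),
                    FactA = factor(paste0("a", 1:5)))
data$Response <- 40 + as.numeric(data$FactA) + 2 * as.numeric(data$FactB) +
    rnorm(nrow(data), 0, 3)
data.lm <- lm(Response ~ FactA * FactB, data)

# One row of the model matrix per cell = one set of contrast coefficients
newdata <- expand.grid(FactB = levels(data$FactB), FactA = levels(data$FactA))
cmat <- model.matrix(~FactA * FactB, newdata)
mu <- as.numeric(cmat %*% coef(data.lm))
se <- sqrt(diag(cmat %*% vcov(data.lm) %*% t(cmat)))
ci <- mu + outer(se, qt(c(0.025, 0.975), df = data.lm$df.residual))
cbind(mu, se, lwr = ci[, 1], upr = ci[, 2])
```

Each row of `cmat` plays the same role as a column of the hand-built contrast matrix shown above.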
Here is a side-by-side comparison of the theoretical means, the raw sample means and the model-derived cell means:
(Tables: the theoretical cell means, the sample cell means, and the model-derived cell means, side by side.)
 | mu | se | lwr | upr |
---|---|---|---|---|
a1b1 | 40.24 | 1.35 | 37.52 | 42.96 |
a1b2 | 45.55 | 1.35 | 42.83 | 48.27 |
a1b3 | 51.63 | 1.35 | 48.91 | 54.35 |
a2b1 | 28.68 | 1.35 | 25.96 | 31.40 |
a2b2 | 34.76 | 1.35 | 32.04 | 37.48 |
a2b3 | 43.84 | 1.35 | 41.12 | 46.56 |
a3b1 | 34.20 | 1.35 | 31.48 | 36.92 |
a3b2 | 43.90 | 1.35 | 41.18 | 46.62 |
a3b3 | 43.91 | 1.35 | 41.19 | 46.63 |
a4b1 | 46.06 | 1.35 | 43.34 | 48.78 |
a4b2 | 50.63 | 1.35 | 47.91 | 53.35 |
a4b3 | 57.80 | 1.35 | 55.08 | 60.52 |
a5b1 | 50.42 | 1.35 | 47.70 | 53.14 |
a5b2 | 60.97 | 1.35 | 58.25 | 63.69 |
a5b3 | 57.27 | 1.35 | 54.55 | 59.99 |
If we add columns to this frame that indicate the levels of each of the factors, we can then create an interaction plot that incorporates confidence intervals in addition to the cell mean estimates.
Unfortunately, for more complex designs (particularly those that are hierarchical), identifying or even approximating sensible estimates of degrees of freedom (and thus the t-distributions from which to estimate confidence intervals) becomes increasingly problematic (and, some argue, borders on guesswork). As a result, more complex models do not provide any estimates of residual degrees of freedom and thus it is necessary to either base confidence intervals on the normal distribution or provide your own degrees of freedom (and associated justification). Furthermore, as a result of these complications, the popMeans() function only supports models fitted via lm.
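When no defensible residual degrees of freedom are available, a normal-based interval is the usual fallback. A minimal sketch, using (rounded) values from the single-factor example earlier:

```r
# Normal-based 95% CI: estimate +/- z * SE, with z the 0.975 quantile
est <- 10.31  # effect size (FactAa2 from the single-factor example, rounded)
se  <- 1.84   # its standard error (rounded)
ci <- est + qnorm(c(0.025, 0.975)) * se
round(ci, 2)
```

Compared with the t-based interval on 9 df, this interval is narrower, which is exactly the risk of assuming normality when the residual df are small.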
Marginal means
The true flexibility of contrasts is the ability to derive other estimates, such as the marginal means for the levels of each of the factors. The contrasts for calculating marginal means are the averages of the cell-mean contrasts across the levels of the other factor(s).
- FactA marginal means
library(plyr)
mat <- model.matrix(~FactA * FactB, data)
dat <- cbind(data, mat)
cmat <- ddply(dat, ~FactA, function(df) mean(df[, -1:-3]))
cmat <- as.matrix(cmat[, -1])
FactAs <- data.lm$coef %*% t(cmat)
se <- sqrt(diag(cmat %*% vcov(data.lm) %*% t(cmat)))
ci <- as.numeric(FactAs) + outer(se, qt(df = data.lm$df, c(0.025, 0.975)))
colnames(ci) <- c("lwr", "upr")
mat2 <- cbind(mu = as.numeric(FactAs), se, ci)
rownames(mat2) <- levels(data$FactA)

Contrast matrix:
                   1    2    3    4    5
(Intercept)     1.00 1.00 1.00 1.00 1.00
FactAa2         0.00 1.00 0.00 0.00 0.00
FactAa3         0.00 0.00 1.00 0.00 0.00
FactAa4         0.00 0.00 0.00 1.00 0.00
FactAa5         0.00 0.00 0.00 0.00 1.00
FactBb2         0.33 0.33 0.33 0.33 0.33
FactBb3         0.33 0.33 0.33 0.33 0.33
FactAa2:FactBb2 0.00 0.33 0.00 0.00 0.00
FactAa3:FactBb2 0.00 0.00 0.33 0.00 0.00
FactAa4:FactBb2 0.00 0.00 0.00 0.33 0.00
FactAa5:FactBb2 0.00 0.00 0.00 0.00 0.33
FactAa2:FactBb3 0.00 0.33 0.00 0.00 0.00
FactAa3:FactBb3 0.00 0.00 0.33 0.00 0.00
FactAa4:FactBb3 0.00 0.00 0.00 0.33 0.00
FactAa5:FactBb3 0.00 0.00 0.00 0.00 0.33

Marginal means:
      mu   se   lwr   upr
a1 45.81 0.78 44.24 47.38
a2 35.76 0.78 34.19 37.33
a3 40.67 0.78 39.10 42.24
a4 51.50 0.78 49.93 53.07
a5 56.22 0.78 54.65 57.79

- FactB marginal means
cmat <- ddply(dat, ~FactB, function(df) mean(df[, -1:-3]))
cmat <- as.matrix(cmat[, -1])
FactBs <- data.lm$coef %*% t(cmat)
se <- sqrt(diag(cmat %*% vcov(data.lm) %*% t(cmat)))
ci <- as.numeric(FactBs) + outer(se, qt(df = data.lm$df, c(0.025, 0.975)))
colnames(ci) <- c("lwr", "upr")
mat2 <- cbind(mu = as.numeric(FactBs), se, ci)
rownames(mat2) <- levels(data$FactB)

Contrast matrix:
                   1    2    3
(Intercept)     1.00 1.00 1.00
FactAa2         0.20 0.20 0.20
FactAa3         0.20 0.20 0.20
FactAa4         0.20 0.20 0.20
FactAa5         0.20 0.20 0.20
FactBb2         0.00 1.00 0.00
FactBb3         0.00 0.00 1.00
FactAa2:FactBb2 0.00 0.20 0.00
FactAa3:FactBb2 0.00 0.20 0.00
FactAa4:FactBb2 0.00 0.20 0.00
FactAa5:FactBb2 0.00 0.20 0.00
FactAa2:FactBb3 0.00 0.00 0.20
FactAa3:FactBb3 0.00 0.00 0.20
FactAa4:FactBb3 0.00 0.00 0.20
FactAa5:FactBb3 0.00 0.00 0.20

Marginal means:
      mu   se   lwr   upr
b1 39.92 0.60 38.70 41.14
b2 47.16 0.60 45.94 48.38
b3 50.89 0.60 49.67 52.11

- FactA by FactB interaction marginal means (NOTE: as there are only two factors, this gives the marginal means of the highest-order interaction - i.e. the cell means)
cmat <- ddply(dat, ~FactA * FactB, function(df) mean(df[, -1:-3]))
cmat <- as.matrix(cmat[, -1:-2])
FactAFactBs <- data.lm$coef %*% t(cmat)
se <- sqrt(diag(cmat %*% vcov(data.lm) %*% t(cmat)))
ci <- as.numeric(FactAFactBs) + outer(se, qt(df = data.lm$df, c(0.025, 0.975)))
colnames(ci) <- c("lwr", "upr")
mat2 <- cbind(mu = as.numeric(FactAFactBs), se, ci)
rownames(mat2) <- interaction(with(data, expand.grid(levels(FactB), levels(FactA))[, c(2, 1)]), sep = "")

Contrast matrix:
                   1    2    3    4    5    6    7    8    9   10   11   12   13   14   15
(Intercept)     1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
FactAa2         0.00 0.00 0.00 1.00 1.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
FactAa3         0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00
FactAa4         0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 0.00 0.00 0.00
FactAa5         0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00
FactBb2         0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00
FactBb3         0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00 0.00 0.00 1.00
FactAa2:FactBb2 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
FactAa3:FactBb2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
FactAa4:FactBb2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00
FactAa5:FactBb2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00
FactAa2:FactBb3 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
FactAa3:FactBb3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00
FactAa4:FactBb3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00
FactAa5:FactBb3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00

Marginal means:
        mu   se   lwr   upr
a1b1 40.24 1.35 37.52 42.96
a1b2 45.55 1.35 42.83 48.27
a1b3 51.63 1.35 48.91 54.35
a2b1 28.68 1.35 25.96 31.40
a2b2 34.76 1.35 32.04 37.48
a2b3 43.84 1.35 41.12 46.56
a3b1 34.20 1.35 31.48 36.92
a3b2 43.90 1.35 41.18 46.62
a3b3 43.91 1.35 41.19 46.63
a4b1 46.06 1.35 43.34 48.78
a4b2 50.63 1.35 47.91 53.35
a4b3 57.80 1.35 55.08 60.52
a5b1 50.42 1.35 47.70 53.14
a5b2 60.97 1.35 58.25 63.69
a5b3 57.27 1.35 54.55 59.99
Again, it is possible to use the popMeans function to get these marginal means:
Effect sizes - differences between means
To derive effect sizes, you simply subtract one contrast set from another and use the result as the contrast for the derived parameter. For example, to derive the effect size (difference between two parameters) of a1 vs a2 for b1:
(Tables: the contrast matrices for the a1b1 and a2b1 cell means; subtracting the second from the first yields the contrast, and hence the effect size, for a1b1 - a2b1.)
Whilst it is possible to go through and calculate all the rest of the effects individually, we can loop through and create a matrix that contains all the appropriate contrasts.
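A self-contained sketch of such a loop, building every pairwise FactA comparison at level b1 by subtracting rows of the cell-mean contrast matrix (stand-in simulated data, as in the earlier sketches):

```r
# Stand-in 5 x 3 factorial data
set.seed(1)
data <- expand.grid(rep = 1:3,
                    FactB = factor(paste0("b", 1:3)),
                    FactA = factor(paste0("a", 1:5)))
data$Response <- 40 + as.numeric(data$FactA) + 2 * as.numeric(data$FactB) +
    rnorm(nrow(data), 0, 3)
data.lm <- lm(Response ~ FactA * FactB, data)

# Cell-mean contrast matrix: one row per cell
newdata <- expand.grid(FactB = levels(data$FactB), FactA = levels(data$FactA))
Xcell <- model.matrix(~FactA * FactB, newdata)
rownames(Xcell) <- with(newdata, paste0(FactA, FactB))
b1 <- Xcell[grepl("b1$", rownames(Xcell)), ]  # the five b1 cells

# Subtract one cell's contrast row from another for every pair
pairs <- combn(rownames(b1), 2)
cmat <- t(apply(pairs, 2, function(p) b1[p[1], ] - b1[p[2], ]))
rownames(cmat) <- apply(pairs, 2, paste, collapse = " - ")
eff <- as.numeric(cmat %*% coef(data.lm))
se <- sqrt(diag(cmat %*% vcov(data.lm) %*% t(cmat)))
cbind(effect = eff, se)
```

The same subtraction trick extends to any pair of derived estimates, not just cell means.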
Effect sizes of FactA marginal means
(Tables: the contrast matrix and the resulting FactA marginal-mean effect sizes.)
Effect sizes plot
Effect sizes of FactB marginal means
Contrast matrix | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Marginal means | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
|
But what do we do in the presence of an interaction, when effect sizes of marginal means will clearly under-represent the patterns? The presence of an interaction implies that the effect sizes differ according to the combinations of the factor levels.
Compare each level of FactB at each level of FactA
Deviations from the values predicted by ADDITIVE model
Not surprisingly, for simple designs (like simple linear models), a bright member of the R community has written a function that derives the marginal means as well as the standard errors of the marginal mean effect sizes. This function is called model.tables().
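A minimal sketch of model.tables() in action on an aov fit (the simulated data are stand-ins for the tutorial's factorial data):

```r
# Stand-in 5 x 3 factorial data, fitted with aov()
set.seed(1)
data <- expand.grid(rep = 1:3,
                    FactB = factor(paste0("b", 1:3)),
                    FactA = factor(paste0("a", 1:5)))
data$Response <- 40 + as.numeric(data$FactA) + 2 * as.numeric(data$FactB) +
    rnorm(nrow(data), 0, 3)
data.aov <- aov(Response ~ FactA * FactB, data)

# Grand, marginal and cell means, plus standard errors for
# differences of means (for this balanced design)
model.tables(data.aov, type = "means", se = TRUE)
```

For balanced designs the reported tables are just the raw group means, which makes model.tables() a quick cross-check on the contrast-based derivations above.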