The traditional structural (effects) model of one-way within-subject (repeated-measures) ANOVA is

Yij = μ + αi + βj + εij                                                                               (I)

Yij dependent (response) variable;
μ constant – grand mean;
αi constants subject to Σαi = 0 – simple effect of factor A at level i, i = 1, 2, ..., a;
βj independent N(0, σp2) – random effect of subject j, j = 1, 2, ..., b (σp2 – population variance);
εij independent N(0, σ2) – random error, i.e. within-subject variability or the interaction between the factor of interest and subject (σ2 – variance of the sampling error).
Assumptions are:

E(Yij) = μ + αi, Var(Yij) = σp2 + σ2, Cov(Yij, Yi'j) = σp2 (i ≠ i'), Cov(Yij, Yi'j') = 0 (j ≠ j');
The correlation between any two levels of factor A is σp2/(σp2 + σ2).
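As an illustration of this covariance structure (not part of the original question), here is a small simulation sketch in Python; all parameter values are made up:

import numpy as np

# Simulation check of model (I): Var(Yij) = sigma_p^2 + sigma^2,
# Cov(Yij, Yi'j) = sigma_p^2, and the intraclass correlation
# sigma_p^2 / (sigma_p^2 + sigma^2).  All numbers below are arbitrary.
rng = np.random.default_rng(0)
a, b = 4, 12                                    # levels of A, subjects
mu = 10.0
alpha = np.array([1.0, -0.5, 0.5, -1.0])        # fixed effects, sum to 0
sig_p, sig = 2.0, 1.5                           # subject SD, error SD

n_rep = 20000
Y = np.empty((n_rep, a, b))
for r in range(n_rep):
    beta = rng.normal(0.0, sig_p, size=b)       # random subject effects
    eps = rng.normal(0.0, sig, size=(a, b))     # within-subject error
    Y[r] = mu + alpha[:, None] + beta[None, :] + eps

var_11 = Y[:, 0, 0].var()                       # Var(Y1j) over replications
cov_12 = np.cov(Y[:, 0, 0], Y[:, 1, 0])[0, 1]   # Cov(Y1j, Y2j), same subject
print(var_11, sig_p**2 + sig**2)                # both about 6.25
print(cov_12, sig_p**2)                         # both about 4.00
print(cov_12 / var_11, sig_p**2 / (sig_p**2 + sig**2))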

Suppose we instead want to fit a new one-way within-subject (repeated-measures) ANOVA model:

Yij = μi + βj + εij                                                                                     (II)

where all terms have the same meaning as in model (I) except for the factor effects (constants) {μi, i = 1, 2, ..., a}. The assumptions are the same as before except that E(Yij) = μi.

There is no constant in model (II) because we do not want a common mean removed from each effect; we would like to test the following null hypothesis

H0: μ1 = 0, μ2 = 0, ..., and μa = 0                                                          (III)

instead of the main effect test of factor A in model (I),

H0: α1 = 0, α2 = 0, ..., and αa = 0   (no difference among factor levels)
Hypothesis (III) is an extension of the one-sample t test to a whole set of factor effects rather than a single simple effect μi.

I failed to find any discussion of model (II) and hypothesis (III) in any textbooks I could access, so I tried the following myself. Solving (II) as a general linear model is very straightforward by coding factor A with dummy variables, and we get an F test with F = {[(residual SS of the reduced model) - (residual SS of the full model)]/DF1}/[(residual SS of the full model)/DF2]. The nice thing about this test is that there is only one F and the degrees of freedom are exact.
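For concreteness, a rough sketch of that full-versus-reduced computation in Python; the particular dummy coding (one indicator column per level of A, sum-to-zero columns for the subjects) is only one possible choice and is an assumption for illustration, not a definitive implementation:

import numpy as np
from scipy import stats

def cell_mean_F_test(Y):
    """F test of H0: mu_1 = ... = mu_a = 0 in Yij = mu_i + beta_j + eps_ij.

    Y is an a x b array (levels of A by subjects).  Subjects are coded with
    sum-to-zero (deviation) columns so they cannot absorb a grand mean; this
    coding choice is an assumption made for this sketch."""
    a, b = Y.shape
    y = Y.reshape(-1)                                 # level-major stacking

    A = np.kron(np.eye(a), np.ones((b, 1)))           # one column per level of A
    S_dev = np.vstack([np.eye(b - 1), -np.ones((1, b - 1))])
    S = np.kron(np.ones((a, 1)), S_dev)               # subject deviation columns

    def rss(X):
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ coef
        return r @ r

    rss_full = rss(np.hstack([A, S]))                 # full model (II)
    rss_red = rss(S)                                  # reduced model (IV)
    df1 = a                                           # a constraints: all mu_i = 0
    df2 = a * b - (a + b - 1)                         # = (a - 1)(b - 1)
    F = ((rss_red - rss_full) / df1) / (rss_full / df2)
    return F, df1, df2, stats.f.sf(F, df1, df2)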

However, I would like to analyze it in a more computationally economical way, by calculating sums of squares as in the traditional ANOVA approach.

First, I started with the reduced model under the null hypothesis (III):

Yij = βj + εij                                                                                            (IV)

The errors under models (II) and (IV) are, respectively,

εij = Yij - μi - βj

εij = Yij - βj

But I could not go any further from here, and thus failed to work out a test for hypothesis (III). Any suggestions about this approach?

Then I tried the following method.

Intuitively we would pick SSA* = bΣiYi.2 (hereafter a dot in place of an index indicates the mean over that index) as the sum of squares for hypothesis (III). However, SSA* is not really a perfect candidate for hypothesis (III), since

Yi. = (1/b)ΣjYij = μi + β. + εi.,

which still contains the subject effect β., but I failed to come up with a better solution using some combination of Yi., Y.j, Yij, and Y... The expected value of SSA* can be derived as follows:

E(SSA*) = bE(ΣiYi.2)

= bE[Σi(μi + β. + εi.)2]

= bE[Σi(μi2 + β.2 + εi.2 + 2μiβ. + 2μiεi. + 2β.εi.)]

= bΣi[μi2 + E(β.2) + E(εi.2) + 2E(μiβ.) + 2E(μiεi.) + 2E(β.εi.)]

= bΣi(μi2 + σp2/b + σ2/b)

= bΣiμi2 + a(σp2 + σ2)

As the degrees of freedom for SSA* are a, we have

E(MSA*) = (bΣiμi2)/a + σp2 + σ2

A similar derivation can be applied to MSS (the mean square for subjects) and MSAS (the mean square for the interaction between factor A and subjects), which are the same as in the traditional one-way within-subject ANOVA:

E(MSS) = aσp2 + σ2,

E(MSAS) = σ2
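As a quick numerical sanity check on these three expected mean squares, a small Monte Carlo sketch (again only an illustration, with arbitrary parameter values):

import numpy as np

rng = np.random.default_rng(42)
a, b = 3, 8
mu = np.array([0.5, -0.2, 0.8])                  # arbitrary cell means
sig_p, sig = 1.2, 0.9

n_rep = 20000
msa_star = np.empty(n_rep)
mss = np.empty(n_rep)
msas = np.empty(n_rep)
for r in range(n_rep):
    beta = rng.normal(0.0, sig_p, size=b)
    Y = mu[:, None] + beta[None, :] + rng.normal(0.0, sig, size=(a, b))
    Yi, Yj, Ydd = Y.mean(axis=1), Y.mean(axis=0), Y.mean()
    msa_star[r] = b * np.sum(Yi**2) / a                         # SSA*/a
    mss[r] = a * np.sum((Yj - Ydd)**2) / (b - 1)                # SS_Subject/(b-1)
    msas[r] = np.sum((Y - Yi[:, None] - Yj[None, :] + Ydd)**2) / ((a - 1) * (b - 1))

print(msa_star.mean(), b * np.sum(mu**2) / a + sig_p**2 + sig**2)   # E(MSA*)
print(mss.mean(), a * sig_p**2 + sig**2)                            # E(MSS)
print(msas.mean(), sig**2)                                          # E(MSAS)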

Based on the above three mean squares, we can construct the following F statistics for (III):

F1 = aMSA*/[MSS + (a-1)MSAS]
F2 = (aMSA* - MSS)/[(a-1)MSAS]

F2 is not a good candidate since its numerator might be negative. For the quasi-F statistic F1, which has the composite mean square MSS + (a-1)MSAS in its denominator, the denominator degrees of freedom can be approximated as

dfdenom = [MSS + (a-1)MSAS]2/{MSS2/(b-1) + (a-1)2MSAS2/[(a-1)(b-1)]}        (V)

F1 is then referred to an F(a, dfdenom) distribution.
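Putting the pieces together, a small helper that simply restates F1 and formula (V) (a sketch only):

from scipy import stats

def quasi_F1_test(msa_star, mss, msas, a, b):
    """Quasi-F test of (III) from the three mean squares defined above."""
    F1 = a * msa_star / (mss + (a - 1) * msas)
    # Denominator df per (V); the (a-1)^2 arises from squaring the weight
    # on MSAS, as in the usual Satterthwaite-type approximation.
    num = (mss + (a - 1) * msas) ** 2
    den = mss**2 / (b - 1) + (a - 1)**2 * msas**2 / ((a - 1) * (b - 1))
    df_denom = num / den
    return F1, a, df_denom, stats.f.sf(F1, a, df_denom)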


(1) Is everything above correct?
(2) I am not so sure about the multiplier (a-1)2 in (V): is it correct?
(3) One thing I am not comfortable with in this approach is that there may be multiple choices of F, and I have to approximate the degrees of freedom for the composite mean squares. I am also not sure whether this approach is equivalent to the general linear model one.
(4) Are there any better choices of F statistic for hypothesis (III)?
(5) Hypothesis (III) would be rejected if either
      (i) the main effect of factor A in model (I) is significant (because this rejects that the αi are all equal, i.e., all zero), or
      (ii) the hypothesis that the grand mean μ in model (I) is zero is rejected?

This seems very nice because it is much simpler than the above approach with composite mean squares, but my sub-questions are:

A. Yes, hypothesis (III) would be rejected if either (i) or (ii) is rejected, but is (III) equivalent to the combination of the following two hypotheses in the traditional ANOVA? In other words, is it true that rejecting (III) would lead to at least one of the following two hypotheses for model (I) being rejected?

H0 : the main effect of factor A is 0, or αi = 0, i = 1, 2, ..., a


H0 : μ = 0

B. One annoying part of this new approach is that I would have to run two separate tests, losing the nice feature of a single statistic for the hypothesis.
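For reference, a minimal sketch of this two-test route (the standard within-subject main-effect F plus a one-sample t test of the grand mean based on the subject means; sphericity corrections are ignored here, and the function is only an illustration):

import numpy as np
from scipy import stats

def two_test_approach(Y):
    """Y is an a x b array (levels of A by subjects)."""
    a, b = Y.shape
    Yi, Yj, Ydd = Y.mean(axis=1), Y.mean(axis=0), Y.mean()

    # (i) main effect of A: MSA / MSAS with (a-1) and (a-1)(b-1) df
    ssa = b * np.sum((Yi - Ydd) ** 2)
    ssas = np.sum((Y - Yi[:, None] - Yj[None, :] + Ydd) ** 2)
    F_A = (ssa / (a - 1)) / (ssas / ((a - 1) * (b - 1)))
    p_A = stats.f.sf(F_A, a - 1, (a - 1) * (b - 1))

    # (ii) grand mean: one-sample t test on the b subject means Y.j
    t_mu, p_mu = stats.ttest_1samp(Yj, 0.0)

    return (F_A, p_A), (t_mu, p_mu)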

