# The Basics of FMRI Group Analysis

`Terminology`

- Factor (categorical predictor variable) and level
A factor is a categorical variable whose nominal values are called the levels of the factor. Gender, stimulus type, and subject group are typical factors in FMRI studies.

- Fixed/random factor
In studies where the interest hinges on the effects of specific factor levels rationally and systematically chosen by the investigator, such a factor is considered **fixed**. The deliberately selected levels are fixed in the sense that they don't change randomly from one replication to another; examples include different categorizations of stimuli/tasks/conditions, genders, genotypes, etc.

In studies with a **random** factor, its levels are a sample from a larger population of potential factor levels, and inferences center on the whole population. The levels of a random factor are chosen unsystematically by random sampling (e.g., from a volunteering list) and tend to vary across replications. For a random factor there is usually no interest in the effect of any particular level per se, but rather in the entire population; each level carries no particular meaning by itself, and the levels are treated as anonymous representatives of an imagined population. Specifically, in FMRI studies the investigator wants to CAPTURE the variability of such a random factor (usually subject), which is normally irrelevant to the effect of interest, so that conclusions (factor effects and contrasts) can be generalized to the whole population while reducing the size of the error term.

**Fixed effects model** is used by some people to refer to an ANOVA with all factors fixed, while in a **random effects model** the investigator treats all factors as random, with interest centering on the variability of the factor effects. In FMRI studies we more often see **mixed effects models** with a couple of fixed factors plus one random factor (e.g., subject), such as type 3 in 3dANOVA2 and types 4 and 5 in 3dANOVA3.

However, by "random effects model" and "fixed effects model" some people (e.g., the SPM community) mean something slightly different from the conventional usage. Their emphasis is not on the nature of a factor, but on whether subject variability is modeled or not; this is related to their recommendation to bring contrasts into group analysis. As long as subject variability is accounted for properly (e.g., running a correct *t* test with 3dttest or treating subject as a random factor in ANOVA), it is perfectly OK to run group analysis directly on regression coefficients in AFNI. If the user does not want to model subject variability, it will go into the residual error: either treat subjects as multiple samples, or go through the following steps to run an analysis similar to what is done in SPM with the so-called "fixed effects model":

(1) Concatenate all subjects into one big file (for example, via 3dTcat).

(2) Shift the stimulus timing files so that one subject follows another in their concatenation order. In other words, treat those subjects as one continuous scan from beginning to end.

(3) Label those regressors differently for each subject. In other words, if you have 3 regressors for each subject, then you would have 30 regressors for 10 subjects.

(4) Run 3dDeconvolve on this big concatenated file with those time-shifted regressors.

(5) Obtain each fixed effect (including intergroup comparisons) through an appropriate contrast vector setup.
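The timing shift in step (2) can be sketched in Python (a minimal illustration; the function name, run length, and onset times are hypothetical, and real runs may have unequal lengths):

```python
# Hypothetical sketch of step (2): shift each subject's stimulus onsets so
# that the subjects appear as one continuous scan after concatenation.

def shift_timings(timings_per_subject, run_length_s):
    """Offset each subject's stimulus onsets (in seconds) by the total
    duration of all subjects concatenated before it."""
    shifted = []
    for i, onsets in enumerate(timings_per_subject):
        offset = i * run_length_s  # subject i starts after i earlier runs
        shifted.append([t + offset for t in onsets])
    return shifted

# Three subjects, each scanned for 300 s, with the same nominal onsets:
onsets = [[10.0, 50.0, 120.0]] * 3
print(shift_timings(onsets, 300.0))
# subject 2's onsets become 310.0, 350.0, 420.0; subject 3's start at 610.0
```

Each shifted timing file then pairs with its own uniquely labeled regressors in step (3).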

- Crossed (factorial)/nested (hierarchical) design

If a design includes ALL possible combinations of factor levels, those factors are said to be **crossed** or **factorial**. Such a design makes it possible for the investigator to explore any potential interactions between/among factors. For example, in a two-way ANOVA AXB, the effect of factor A may vary with the levels of factor B, which can be tested as the interaction between A and B.

If each level of one factor A contains a UNIQUE set of levels of another factor B, we call this type of design **nested** or **hierarchical**. A is called the nesting or outer factor, while B is the nested or inner factor.

Most of the time a nested design occurs in FMRI studies when subjects are classified into different categories, such as gender, genotype, disease, age group, etc.

A **mixed** design contains both crossed and nested factors, such as type 5 in 3dANOVA3, type 3 in 3-way ANOVA, and types 3, 4, and 5 of 4-way ANOVA in the Matlab package.

- Simple effect, main effect, interaction, and contrast in factorial design

**Simple effect** refers to the effect of a term with at least one factor fixed at one specific level. It allows the investigator to tease apart various interactions, and can be obtained through 3dttest or options such as -amean in 3dANOVA2.

The **main effect** of a factor is the average of its simple effects across all levels of that factor. It is expressed through the sum of squares of the factor, obtained by averaging the differences between pairs of factor-level effects, and tested with an *F*-statistic called an **omnibus** or **overall** test. Options such as -fa in 3dANOVA2 are provided for testing the significance of main effects. In the Matlab package all main effect tests are generated automatically. If a factor has only two levels, the contrast between the two levels is essentially the same as the main effect of that factor.

An **interaction** between two factors exists if the effect of one factor depends on the specific levels of the other; it assesses the extent to which the simple effect of one factor varies with the levels of another, and measures the deviation from two-factor additivity. In other words, the main effects alone can't fully explain the results of a crossed design; rather, the effects of each factor only make sense when combined with the levels of the other factor.

An interaction between two factors exists when

(1) the effects of factor A vary at the different levels of factor B;

(2) the values of one or more contrasts in factor A vary at the different levels of factor B;

(3) the simple effects of factor A are not the same at all levels of factor B;

(4) the differences among the cell means representing the effect of factor A at one level of factor B don't equal the corresponding differences at another level of factor B.

This definition of interaction can be generalized to the situation of interactions for more than two factors. Only in a crossed design can the presence of interaction be tested, and the **order** of an interaction is defined as the number of factors whose levels are fixed.
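These criteria can be illustrated numerically for a 2x2 design (the cell means below are hypothetical); the two-factor interaction reduces to a "difference of differences":

```python
# Hypothetical 2x2 cell-mean table: factor A (levels a1, a2) crossed
# with factor B (levels b1, b2).
cells = {("a1", "b1"): 2.0, ("a1", "b2"): 4.0,
         ("a2", "b1"): 3.0, ("a2", "b2"): 9.0}

# Simple effect of A at each level of B:
simple_b1 = cells[("a2", "b1")] - cells[("a1", "b1")]  # 1.0
simple_b2 = cells[("a2", "b2")] - cells[("a1", "b2")]  # 5.0

# Interaction as the difference of differences: a nonzero value means the
# effect of A changes with the level of B.
interaction = simple_b2 - simple_b1  # 4.0
print(simple_b1, simple_b2, interaction)
```

Here the simple effects of A differ across the levels of B (1.0 vs. 5.0), so A and B interact.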

A **contrast** (or **analytical comparison**) refers to a comparison between two or more factor effects (e.g., cell means, simple effects, ...). The coefficients or weights for a contrast are assigned based on the null hypothesis (usually tested with a *t* statistic) the investigator wants to test, and they are usually chosen so that they sum to zero.
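As a hypothetical illustration, a contrast comparing one level against the average of two others uses the weights (1, -1/2, -1/2), which sum to zero (all numbers below are made up):

```python
from statistics import mean

# Hypothetical per-subject estimates for three levels of a factor.
a1 = [1.2, 0.8, 1.5, 1.1]
a2 = [0.9, 0.7, 1.0, 0.8]
a3 = [0.3, 0.5, 0.2, 0.4]

# Contrast weights: compare a1 against the average of a2 and a3.
# Weights must sum to zero for a valid contrast.
w = [1.0, -0.5, -0.5]
assert abs(sum(w)) < 1e-12

contrast = sum(wi * mean(ai) for wi, ai in zip(w, (a1, a2, a3)))
print(round(contrast, 3))  # → 0.55
```

The null hypothesis being tested is that this weighted combination of level means equals zero.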

- Effect size

The power of a statistical test depends on the degree of overlap between the sampling distributions under the null hypothesis and its alternative. This overlap is a function of both the distance between the null and alternative distributions and the standard error. In the context of regression or group analysis, **effect size** is an overall dimensionless measure of the magnitude of a condition or contrast, indicating the size of the effect independent of certain details (e.g., sample size) of the experiment. It measures the magnitude of the effect in units of standard deviation.

For a *t* test of a condition or contrast *c*, the effect size is the standardized contrast: contrast/(pooled standard deviation), or *t*/sqrt(df). For an *F* test of a factor main effect, the effect size is the square of the correlation ratio, R^{2} = (*a*-1)*F*/[(*a*-1)*F*+*a*(*n*-1)] = (variability explained by the model)/(total variability) = 1 - (unexplained variability)/(total variability), where *a* is the number of levels of the factor and *n* the number of observations per level.
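The relation between R^{2} and *F* can be checked numerically for a one-way layout with *a* levels and *n* observations per level (all numbers hypothetical):

```python
from statistics import mean

# Hypothetical one-way layout: a = 3 levels, n = 4 observations per level.
groups = [[1.0, 1.2, 0.8, 1.0],
          [2.0, 2.1, 1.9, 2.0],
          [3.0, 2.9, 3.1, 3.0]]
a, n = len(groups), len(groups[0])

grand = mean(x for g in groups for x in g)
ss_between = n * sum((mean(g) - grand) ** 2 for g in groups)   # explained
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)  # unexplained
F = (ss_between / (a - 1)) / (ss_within / (a * (n - 1)))

# R^2 computed directly as explained/total variability...
r2_direct = ss_between / (ss_between + ss_within)
# ...matches the formula in terms of F:
r2_from_F = (a - 1) * F / ((a - 1) * F + a * (n - 1))
assert abs(r2_direct - r2_from_F) < 1e-9
```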

- Between-subjects and within-subject can refer to either designs or factors.

**Between-subjects** design: each subject is exposed to a single factor level combination; all sources of variability extracted in ANOVA represent differences between (or among) subjects.

**Within-subject** (or **repeated-measures**) design: each subject is exposed to SOME or ALL the factor level combinations; effects are based on differences among the multiple measures taken from each subject.

Between-subjects and within-subject factors can occur in a single design, forming a **mixed** design, also called a **split-plot** design in agricultural studies. It combines the advantages of both the between-subjects and within-subject designs.

`Group Analysis`

The search for cause-effect relations is part of the fun of doing science. The purpose of running group analysis is to partition the final effect (percent signal change) in the group into various potential causes (main effects and interactions) and test their significance. Depending on how many potential causes (factors) are involved, there are different group analysis tools for different experiment design types, as discussed below in more detail.

- One-sample t test

A one-sample t test (3dttest -base1 ... -set2 ...) can test the effect of some condition (regression coefficient) or of some contrast across a group of subjects. In the latter case, with a contrast between two coefficients, it is equivalent to a paired two-sample t test (3dttest -set1 ... -set2 ...) on the two separate coefficients, and to a two-way ANOVA (single-factor within-subject design AXS: A fixed with two levels; S, subject, random) with 3dANOVA2 -type 3, because the mean square of the interaction between A and S (MSAS) in the two-way ANOVA is the same as (1/2) s^{2} in the paired t test.

**Note** Suppose a factor A has multiple levels (conditions) with a group of subjects (single-factor within-subject design AXS), and the user is interested in testing the significance of one specific level. There are two alternatives for running a statistical test in this situation: (1) run a one-sample *t* test for this level; (2) obtain the simple effect of this level by running a two-way ANOVA AXS with 3dANOVA2 -type 3. You will most likely get two slightly different results. Don't be surprised by this difference (not big unless heterogeneous variances exist): only the variance of this level is considered in the one-sample *t* test, while the two-way ANOVA AXS uses the pooled variance.
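The equivalence between a one-sample *t* test on a contrast and a paired *t* test on the separate coefficients can be illustrated with hypothetical per-subject coefficients (a paired *t* test is, by definition, a one-sample *t* test on the per-subject differences):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-subject regression coefficients for two conditions.
cond1 = [1.1, 0.9, 1.4, 1.2, 1.0]
cond2 = [0.8, 0.7, 1.1, 0.9, 0.9]

# One-sample t test on the per-subject contrast (cond1 - cond2);
# this is exactly the paired t test on the two separate coefficients.
diff = [x - y for x, y in zip(cond1, cond2)]
t_one_sample = mean(diff) / (stdev(diff) / sqrt(len(diff)))
print(round(t_one_sample, 3))  # → 6.0
```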

- Single-factor design

In a single-factor design, the factor of interest may be studied either between subjects (a between-subjects design, analyzed with one-way ANOVA) or within each subject (a within-subject design, analyzed with a two-way ANOVA in which subject is a random factor). The difference between the two is similar to that between unpaired and paired *t* tests. In the second situation, a within-subject (repeated-measures) factor is any factor whose levels are crossed with the individual observational units (subjects, in most FMRI research designs).

For example, if we want to test whether a condition (beta coefficient from individual subject analysis) is the same between two groups of subjects (e.g., male and female, young and old, or patient and normal), then the factor of interest (group) is a between-subjects factor (single-factor between-subjects design, or one-way ANOVA), and the following two analyses are essentially the same: 3dttest with a two-sample (unpaired) *t* test, or 3dANOVA -levels 2 ... (one factor, group, with two levels), which allows unequal sample sizes. Both are so-called random effects models in the sense that the variability across subjects contains both within-subject and between-subjects variance.

Another situation is to compare two conditions across a group of subjects. The factor of interest (condition) is a within-subject factor (single-factor within-subject design, or two-way ANOVA AXS with S random). Both 3dttest (paired two-sample *t* test) and 3dANOVA2 -type 3 with two factors (condition: fixed with two levels; subject: random) give the same result.

**Note** Suppose a factor A has more than two levels (conditions) with a group of subjects (single-factor within-subject design AXS), and the user is interested in testing the contrast between two specific levels. There are two alternatives for running a statistical test in this situation: (1) run a paired *t* test for the contrast; (2) analyze the contrast by running a two-way ANOVA AXS with 3dANOVA2 -type 3. You will most likely get two slightly different results. Don't be surprised by this difference (not big unless heterogeneous variances exist): only the variance pooled between the two levels is considered in the paired *t* test, while the two-way ANOVA AXS uses the variance pooled over all levels.
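The difference between the two error terms can be illustrated with hypothetical AXS data: the paired *t* test uses only the variance of the differences between the two contrasted levels, while MSAS pools the interaction residuals over ALL levels. (With only two levels, the variance of the differences equals exactly 2·MSAS, consistent with the earlier note on 3dANOVA2 -type 3.)

```python
from statistics import mean, variance

# Hypothetical AxS data: 3 levels of factor A, 5 subjects.
a1 = [1.0, 1.2, 0.9, 1.1, 1.0]
a2 = [0.8, 0.9, 0.7, 0.9, 0.8]
a3 = [0.2, 0.6, 0.1, 0.5, 0.3]

# (1) Paired t test error term: variance of the a1-a2 differences only.
d12 = [x - y for x, y in zip(a1, a2)]
var_paired = variance(d12)

# (2) ANOVA AxS error term (MSAS): interaction residuals pooled over
# ALL three levels, not just the two being contrasted.
levels = [a1, a2, a3]
ns = len(a1)
grand = mean(x for lv in levels for x in lv)
subj_means = [mean(lv[s] for lv in levels) for s in range(ns)]
lvl_means = [mean(lv) for lv in levels]
ss_as = sum((lv[s] - lvl_means[i] - subj_means[s] + grand) ** 2
            for i, lv in enumerate(levels) for s in range(ns))
ms_as = ss_as / ((len(levels) - 1) * (ns - 1))

# These generally differ, hence the slightly different test results.
print(var_paired, 2 * ms_as)
```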

- Multiple-factor design

Suppose that we have two ways (A and B) to categorize stimuli and that scanning is conducted with each subject taking all stimuli. This is a completely within-subject two-factor design (three-way ANOVA) AXBXS: both A and B are fixed while S random. Either 3dANOVA3 -type 4 or 3-way ANOVA (type 3) with the Matlab package is appropriate.

Suppose we have two factors of interest: A (gender) and B (condition). Factor A varies between subjects and factor B within subjects. This is a mixed design, with the notation BXS(A) (S nested within A). Here we can run either 3dANOVA3 -type 5 or 3-way ANOVA (type 3) with the Matlab package.

If the effect of interest is only a contrast of two coefficients between two groups (male and female, patient and normal, young and old, etc.), the following three approaches should be equivalent:

(1) a two-sample unpaired *t* test with 3dttest on the contrast;

(2) one-way ANOVA with the factor of interest being group (two levels);

(3) two-factor mixed design (three-way ANOVA) BXS(A) on the two separate coefficients (not the contrasts) with 3dANOVA3 -type 5 or 3-way ANOVA (type 3) with the Matlab package.

- Analysis of Covariance

See Running ANCOVA

`Multiple comparison correction`

In FMRI studies data analysis is usually done voxel-wise, with all statistical tests conducted separately and simultaneously. Although these voxel-by-voxel tests increase the precision of the conclusions in terms of clusters, they also increase the chance that at least one of them is wrong: the probability of at least one error (type I, or false positive) among multiple tests is greater than that of an error on an individual test. To control this error, some form of **multiple comparison correction** is desirable during group analysis.

The Bonferroni correction is one such multiple-comparison correction, used when multiple dependent or independent statistical tests are performed simultaneously, but it is overly conservative because it doesn't consider the spatial correlation among neighboring voxels. It would work well only if the brain behaved like Brownian motion of water molecules; of course the brain has various well-organized structures.
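A small numerical sketch (illustrative numbers, not from the original text) of why correction is needed and what Bonferroni does:

```python
# With n independent tests each run at an uncorrected alpha, the chance of
# at least one false positive grows quickly:
p_any = 1 - (1 - 0.001) ** 1000  # ~0.63 for 1000 tests at p = 0.001

# Bonferroni controls the family-wise error by dividing alpha by the
# number of tests -- here, one test per voxel:
alpha_family = 0.05
n_voxels = 50000
alpha_per_voxel = alpha_family / n_voxels
print(p_any, alpha_per_voxel)  # per-voxel threshold ≈ 1e-06
```

Such a tiny per-voxel threshold is what makes Bonferroni so conservative for spatially correlated FMRI data.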

One approach to dealing with this problem in AFNI is to run a Monte Carlo simulation (program AlphaSim) to obtain a corrected type I error for the group analysis. Such dual thresholding on both type I error (i.e., alpha) and cluster size gives a reasonable correction for simultaneous tests during the group analysis. Refer to the guideline page for more details.

Another approach is to control the false discovery rate with the program 3dFDR.

`Related Manuals`

- 3dANOVA, 3dANOVA2, and 3dANOVA3
- Matlab package for ANOVA
- Matlab package for ANCOVA
- 3dRegAna
- 3dclust
- AlphaSim
- 3dFDR

Last modified 2007-05-21 16:17