14.2.13. Taylor et al. (2018). FMRI processing with AFNI: Some comments and corrections …
Introduction
Here we present commands used in the following paper:
- Taylor PA, Chen G, Glen DR, Rajendra JK, Reynolds RC, Cox RW (2018). FMRI processing with AFNI: Some comments and corrections on ‘Exploring the Impact of Analysis Software on Task fMRI Results’. bioRxiv 308643; doi:10.1101/308643
Abstract: A recent study posted on bioRxiv by Bowring, Maumet and Nichols aimed to compare results of FMRI data that had been processed with three commonly used software packages (AFNI, FSL and SPM). Their stated purpose was to use “default” settings of each software’s pipeline for task-based FMRI, and then to quantify overlaps in final clustering results and to measure similarity/dissimilarity in the final outcomes of packages. While in theory the setup sounds simple (implement each package’s defaults and compare results), practical realities make this difficult. For example, different softwares would recommend different spatial resolutions of the final data, but for the sake of comparisons, the same value must be used across all. Moreover, we would say that AFNI does not have an explicit default pipeline available: a wide diversity of datasets and study designs are acquired across the neuroimaging community, often requiring bespoke tailoring of basic processing rather than a “one-size-fits-all” pipeline. However, we do have strong recommendations for certain steps, and we are also aware that the choice of a given step might place requirements on other processing steps. Given the very clear reporting of the AFNI pipeline used in Bowring et al. paper, we take this opportunity to comment on some of these aspects of processing with AFNI here, clarifying a few mistakes therein and also offering recommendations. We provide point-by-point considerations of using AFNI’s processing pipeline design tool at the individual level, afni_proc.py, along with supplementary programs; while specifically discussed in the context of the present usage, many of these choices may serve as useful starting points for broader processing. It is our intention/hope that the user should examine data quality at every step, and we demonstrate how this is facilitated in AFNI, as well.
Study keywords: task-block, EPI, MPRAGE, human, control, adult, MNI space, nonlinear align
Main programs: afni_proc.py, 3dmask_tool, recon-all (FS), @SUMA_Make_Spec_FS
Download scripts

To download, either:

... click the link(s) in the following table (perhaps Right-click -> "Save Link As..."):

- BMN-AFNI processing with afni_proc.py
- NIMH-AFNI processing: skullstripping and alignment to standard space (via @SSwarper)
- NIMH-AFNI processing with afni_proc.py
- NIMH-AFNI group level processing: 3dMEMA, 3dClustSim, clusterizing

... or copy+paste into a terminal:

curl -O https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/codex/fmri/media/2018_TaylorEtal/s.bmn_subject_level_02_ap.tcsh
curl -O https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/codex/fmri/media/2018_TaylorEtal/s.nimh_subject_level_01_qwarp.tcsh
curl -O https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/codex/fmri/media/2018_TaylorEtal/s.nimh_subject_level_02_ap.tcsh
curl -O https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/codex/fmri/media/2018_TaylorEtal/s.nimh_group_level_02_mema.tcsh
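Equivalently, the four downloads can be scripted in a loop; this is just a convenience sketch (the base URL and file names are taken from the curl commands above, and the loop syntax is tcsh, matching the scripts on this page):

```shell
# Fetch all four scripts from the AFNI codex media directory.
set base = https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/codex/fmri/media/2018_TaylorEtal

foreach script ( s.bmn_subject_level_02_ap.tcsh     \
                 s.nimh_subject_level_01_qwarp.tcsh \
                 s.nimh_subject_level_02_ap.tcsh    \
                 s.nimh_group_level_02_mema.tcsh )
    curl -O ${base}/${script}
end
```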
View scripts
These scripts describe different approaches for processing FMRI data with AFNI. Please read the comments at the tops of the scripts carefully, as well as the bioRxiv papers associated with each, in order to understand the steps.
These scripts were run on the NIH's Biowulf HPC cluster, hence the presence of variables like $SLURM_CPUS_PER_TASK, $SLURM_JOBID, etc.
Script: s.bmn_subject_level_02_ap.tcsh

Comment: Several aspects of this particular script's command would not be recommended for processing (e.g., the nonlinear alignment program, the selection of the volreg reference, and more). Please see the main paper for further details.
#!/usr/bin/env tcsh

# ----------------------------------------------------------------------
# Script: s.bmn_subject_level_02_ap.tcsh
# Run on: openfmri/ds001_R2.0.4
# Date  : April, 2018
#
# Time series analysis for one subject, basically unmodified from the
# BMN afni_proc.py command.  (Does not require pre-running of any
# script.)
#
# Used for "BMN-AFNI" processing in:
#
#   Some comments and corrections on FMRI processing with AFNI in
#   "Exploring the Impact of Analysis Software on Task fMRI Results"
#
#   Paul A. Taylor, Gang Chen, Daniel R. Glen, Justin K. Rajendra,
#   Richard C. Reynolds, Robert W. Cox.
#   Scientific and Statistical Computing Core, NIMH/NIH/DHHS,
#   Bethesda, MD, USA.
#
# NOTE: This afni_proc.py command follows the Bowring et al. (2018)
# command, described on bioRxiv.  Variable names have been adjusted
# for convenience of running on local computers, but not selected
# options.  See their bioRxiv draft and github page for reference.
#
# ----------------------------------------------------------------------
#
# To run for a single subject, for example:
#
#   tcsh -ef s.bmn_subject_level_02_ap.tcsh "01"
#
# After this, one can run the following for QC and checking processing
# steps:
#
#   @ss_review_basic
#   @ss_review_driver
#
# ... and after all subjects in the group have been processed, the
# following can be run to combine the single subject processing
# numbers into one table:
#
#   gen_ss_review_table.py \
#       -tablefile review_table_bmn.xls \
#       -infiles data_proc_bmn/sub*/out.ss_review*
#
#########################################################################

# ===================== Set inputs and paths =========================

# The only input argument is the subject number (zero-padded, as
# "01", "02", etc., if that is how the file naming conventions are)
if ( $#argv < 1 ) then
    echo "** ERROR: no command line argument?"
    exit 1
endif

# Set subject ID from input
set ss = $argv[1]
set subj = sub-${ss}
set grp = "bmn"

# Set reference template for standard space (location found below)
set itemp = MNI_avg152T1+tlrc.HEAD

# Make path/file structure variables for readability.

# Top level directory
set p_0 = /data/NIMH_SSCC
set path_main = ${p_0}/openfmri/ds001_R2.0.4
# Inputs
set prep_onset = ${path_main}/data_basic/ONSETS
set path_func = ${path_main}/data_basic/sub-${ss}/func
set path_anat = ${path_main}/data_basic/sub-${ss}/anat
# Outputs
set path_ogrp = ${path_main}/data_proc_${grp}
set path_out = ${path_ogrp}/sub-${ss}

mkdir -p ${path_ogrp}

# =================== Environment variables ===========================

# NB: need this because of initial data's qform code
setenv AFNI_NIFTI_VIEW orig

# ================== Optional: Biowulf cluster =========================

# This section for possibly running on Biowulf cluster.

# Set thread count
if ( $?SLURM_CPUS_PER_TASK ) then
    setenv OMP_NUM_THREADS $SLURM_CPUS_PER_TASK
endif

# Set temporary output directory; then requires using something like
# this on the swarm command line: --sbatch '--gres=lscratch:100'.
# These variables used again *after* afni_proc.py command, if Biowulfing.
if ( $?SLURM_JOBID ) then
    set tempdir = /lscratch/$SLURM_JOBID/${subj}
    set usetemp = 1
else
    set tempdir = $path_out
    set usetemp = 0
endif

# ================== Find standard space template ======================

# Selecting/finding reference template (for defining final standard
# space); the chosen template file is specified above

set tpath = `@FindAfniDsetPath $itemp`
set basedset = $tpath/$itemp

if ( "$tpath" == "" ) then
    echo "** ERROR: can't find my template $itemp"
    exit 1
endif

if ( ! -f $basedset ) then
    echo "** ERROR: template $basedset is not present"
    exit 1
endif

# ========================= Specify processing =========================

# [PT: July 10, 2019] AP command amended, to have ${tempdir} provided
# to the '-out_dir ..' opt; allows actual use of scratch disk, if
# asked for above. No change in processing results, just speed of
# processing in some cases. Thanks, Yoichi M!

# FINALLY: the afni_proc.py command itself, which makes a single
# subject processing script
afni_proc.py \
    -scr_overwrite \
    -subj_id sub${ss} \
    -script proc_${grp}.sub${ss} \
    -out_dir ${tempdir} \
    -blocks tshift align tlrc volreg blur mask scale regress \
    -copy_anat ${path_anat}/sub-${ss}_T1w.nii \
    -dsets \
        ${path_func}/sub-${ss}_task-balloonanalogrisktask_run-01_bold.nii.gz \
        ${path_func}/sub-${ss}_task-balloonanalogrisktask_run-02_bold.nii.gz \
        ${path_func}/sub-${ss}_task-balloonanalogrisktask_run-03_bold.nii.gz \
    -tcat_remove_first_trs 2 \
    -align_opts_aea -cost mi -ginormous_move -check_flip \
    -anat_uniform_method unifize \
    -tlrc_base ${basedset} \
    -volreg_warp_dxyz 2 \
    -volreg_align_to third \
    -volreg_align_e2a \
    -volreg_tlrc_warp \
    -blur_size 5.0 \
    -regress_stim_times \
        ${prep_onset}/sub-${ss}_combined_cash_demean_afni.1d \
        ${prep_onset}/sub-${ss}_combined_cash_RT_afni.1d \
        ${prep_onset}/sub-${ss}_combined_control_pumps_demean_afni.1d \
        ${prep_onset}/sub-${ss}_combined_control_pumps_RT_afni.1d \
        ${prep_onset}/sub-${ss}_combined_explode_demean_afni.1d \
        ${prep_onset}/sub-${ss}_combined_pumps_demean_afni.1d \
        ${prep_onset}/sub-${ss}_combined_pumps_RT_afni.1d \
    -regress_stim_labels \
        cash_demean cash_RT control_pumps_demean \
        control_pumps_RT explode_demean pumps_demean \
        pumps_RT \
    -regress_basis_multi \
        'BLOCK(0.772,1)' 'dmBLOCK' 'BLOCK(0.772,1)' 'dmBLOCK' \
        'BLOCK(0.772,1)' 'BLOCK(0.772,1)' 'dmBLOCK' \
    -regress_stim_types \
        AM2 AM1 AM2 AM1 AM2 AM2 AM1 \
    -regress_censor_motion 0.3 \
    -regress_opts_3dD \
        -gltsym 'SYM: pumps_demean -control_pumps_demean' \
        -glt_label 1 pumps_demean_vs_ctrl_demean \
    -regress_make_ideal_sum sum_ideal.1D \
    -regress_est_blur_epits \
    -regress_est_blur_errts \
    -execute

# =============================================================================

# Again, Biowulf-running considerations: if processing went fine and
# we were using a temporary directory, copy data back.
if ( $usetemp && -d $tempdir ) then
    echo "Copying data from $tempdir to $path_out"
    mkdir -p $path_out
    \cp -pr $tempdir/* $path_out/
endif

# If it worked, run some volreg snapshots and compress outputs:
# compare chosen EPI reference volume (from alignment to anatomical)
# to the final anatomical.
if ( -d $path_out ) then
    cd $path_out
    @snapshot_volreg \
        anat_final.${subj}+tlrc.HEAD \
        final_epi_vr_base*+tlrc.HEAD \
        ${subj}
    cd ..
endif

# =============================================================================

echo "++ DONE with afni_proc for subject: $subj"

time ; exit 0
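To process a whole group, the single-subject script above can simply be looped over subject IDs, followed by the gen_ss_review_table.py call from the script's header comments; the subject ID list here is hypothetical (on Biowulf, one would more likely submit one invocation per subject via swarm):

```shell
# Loop the subject-level script over a (hypothetical) set of
# zero-padded subject IDs.
foreach ss ( 01 02 03 )
    tcsh -ef s.bmn_subject_level_02_ap.tcsh "$ss"
end

# Combine the single subject QC numbers into one table.
gen_ss_review_table.py \
    -tablefile review_table_bmn.xls \
    -infiles data_proc_bmn/sub*/out.ss_review*
```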
Script: s.nimh_subject_level_01_qwarp.tcsh

Comment: @SSwarper syntax has changed considerably since this script was written.
#!/usr/bin/env tcsh

# ----------------------------------------------------------------------
# Script: s.nimh_subject_level_01_qwarp.tcsh
# Run on: openfmri/ds001_R2.0.4
# Date  : April, 2018
#
# This script runs @SSwarper for one subject, whose cognomen (ID) is
# the sole command line argument.  The results are used in the
# analysis script.
#
# From @SSwarper's help:
#   Script to skull-strip and warp to the MNI 2009 template.
#
# Used for "NIMH-AFNI" processing in:
#
#   Some comments and corrections on FMRI processing with AFNI in
#   "Exploring the Impact of Analysis Software on Task fMRI Results"
#
#   Paul A. Taylor, Gang Chen, Daniel R. Glen, Justin K. Rajendra,
#   Richard C. Reynolds, Robert W. Cox.
#   Scientific and Statistical Computing Core, NIMH/NIH/DHHS,
#   Bethesda, MD, USA.
#
# ----------------------------------------------------------------------
#
# To run for a single subject, for example:
#
#   tcsh -ef s.nimh_subject_level_01_qwarp.tcsh "01"
#
#########################################################################

# The only input argument is the subject number (zero-padded, as
# "01", "02", etc., if that is how the file naming conventions are)
if ( $#argv < 1 ) then
    echo "** ERROR: no command line argument?"
    exit 1
endif

# Set subject ID
set ss = $argv[1]

# Make path/file structure variables for readability.

# Top level directory
set p_0 = /data/NIMH_SSCC
set path_main = ${p_0}/openfmri/ds001_R2.0.4
# Inputs
set prep_onset = ${path_main}/data_basic/ONSETS
set path_func = ${path_main}/data_basic/sub-${ss}/func
set path_anat = ${path_main}/data_basic/sub-${ss}/anat

# ========================================================================

# This section for possibly running on Biowulf cluster.

# Set thread count
if ( $?SLURM_CPUS_PER_TASK ) then
    setenv OMP_NUM_THREADS $SLURM_CPUS_PER_TASK
endif

# ========================================================================

# Check for the subject's anat, then switch to its directory
if ( ! -e $path_anat/sub-${ss}_T1w.nii.gz ) then
    echo "** ERROR: can't find file $path_anat/sub-${ss}_T1w.nii.gz"
    exit 1
endif

cd $path_anat

# Do the work
@SSwarper sub-${ss}_T1w.nii.gz sub-${ss}

# =============================================================================

echo "++ DONE with warping for subject: sub-${ss}"

time ; exit 0
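As noted in the comment above, the positional @SSwarper call in this script reflects the 2018 interface; current versions take named options instead. A hedged sketch of a roughly modern equivalent follows (please consult `@SSwarper -help` for the authoritative usage; the `-base` file name assumes the SSW template currently distributed with AFNI):

```shell
# Approximate modern equivalent of the positional @SSwarper call above,
# using named options for the input anatomical, base template and
# subject ID.
@SSwarper \
    -input  sub-${ss}_T1w.nii.gz            \
    -base   MNI152_2009_template_SSW.nii.gz \
    -subid  sub-${ss}
```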
Script: s.nimh_subject_level_02_ap.tcsh
#!/usr/bin/env tcsh

# ----------------------------------------------------------------------
# Script: s.nimh_subject_level_02_ap.tcsh
# Run on: openfmri/ds001_R2.0.4
# Date  : April, 2018
#
# Time series analysis for one subject, modified from the BMN
# afni_proc.py command to adhere to current AFNI usage recommendations.
# The accompanying "s.nimh_subject_level_01_qwarp.tcsh" script must
# have been run previously to get the warped-to-MNI files for the
# current subject via @SSwarper.
#
# Used for "NIMH-AFNI" processing in:
#
#   Some comments and corrections on FMRI processing with AFNI in
#   "Exploring the Impact of Analysis Software on Task fMRI Results"
#
#   Paul A. Taylor, Gang Chen, Daniel R. Glen, Justin K. Rajendra,
#   Richard C. Reynolds, Robert W. Cox.
#   Scientific and Statistical Computing Core, NIMH/NIH/DHHS,
#   Bethesda, MD, USA.
#
# NOTE: This afni_proc.py command includes several features that would
# have been chosen in preference to those of BMN, as described in the
# above text.  *However*, it is not a complete set.  Further features
# should also likely be adjusted, such as the GLT specification (see
# points A-5 and A-6 in the accompanying text), and upsampling to
# 2x2x2 mm^3 would also not be chosen.  These features were not
# changed *here*, for the purposes of matching the BMN specifications
# more closely.  In general, please see the descriptions in the above
# bioRxiv text for details on settings/options that were chosen.
#
# ----------------------------------------------------------------------
#
# To run for a single subject, for example:
#
#   tcsh -ef s.nimh_subject_level_02_ap.tcsh "01"
#
# After this, one can run the following for QC and checking processing
# steps:
#
#   @ss_review_basic
#   @ss_review_driver
#
# ... and after all subjects in the group have been processed, the
# following can be run to combine the single subject processing
# numbers into one table:
#
#   gen_ss_review_table.py \
#       -tablefile review_table_nimh.xls \
#       -infiles data_proc_nimh/sub*/out.ss_review*
#
#########################################################################

# ===================== Set inputs and paths =========================

# The only input argument is the subject number (zero-padded, as
# "01", "02", etc., if that is how the file naming conventions are)
if ( $#argv < 1 ) then
    echo "** ERROR: no command line argument?"
    exit 1
endif

# Set subject ID from input
set ss = $argv[1]
set subj = sub-${ss}
set grp = "nimh"

# Set reference template for standard space (location found below)
set itemp = MNI152_2009_template.nii.gz

# Make path/file structure variables for readability.

# Top level directory
set p_0 = /data/NIMH_SSCC
set path_main = ${p_0}/openfmri/ds001_R2.0.4
# Inputs
set prep_onset = ${path_main}/data_basic/ONSETS
set path_func = ${path_main}/data_basic/sub-${ss}/func
set path_anat = ${path_main}/data_basic/sub-${ss}/anat
# Outputs
set path_ogrp = ${path_main}/data_proc_${grp}
set path_out = ${path_ogrp}/sub-${ss}

mkdir -p ${path_ogrp}

# =================== Environment variables ===========================

# NB: need this because of initial data's qform code
setenv AFNI_NIFTI_VIEW orig

# ================== Optional: Biowulf cluster =========================

# This section for possibly running on Biowulf cluster.

# Set thread count
if ( $?SLURM_CPUS_PER_TASK ) then
    setenv OMP_NUM_THREADS $SLURM_CPUS_PER_TASK
endif

# Set temporary output directory; then requires using something like
# this on the swarm command line: --sbatch '--gres=lscratch:100'.
# These variables used again *after* afni_proc.py command, if Biowulfing.
if ( $?SLURM_JOBID ) then
    set tempdir = /lscratch/$SLURM_JOBID/${subj}
    set usetemp = 1
else
    set tempdir = $path_out
    set usetemp = 0
endif

# ================== Find standard space template ======================

# Selecting/finding reference template (for defining final standard
# space); the chosen template file is specified above

set tpath = `@FindAfniDsetPath $itemp`
set basedset = $tpath/$itemp

if ( "$tpath" == "" ) then
    echo "** ERROR: can't find my template $itemp"
    exit 1
endif

if ( ! -f $basedset ) then
    echo "** ERROR: template $basedset is not present"
    exit 1
endif

# ========================= Specify processing =========================

# [PT: July 10, 2019] AP command amended, to have ${tempdir} provided
# to the '-out_dir ..' opt; allows actual use of scratch disk, if
# asked for above. No change in processing results, just speed of
# processing in some cases. Thanks, Yoichi M!

# FINALLY: the afni_proc.py command itself, which makes a single
# subject processing script
afni_proc.py \
    -scr_overwrite \
    -subj_id sub${ss} \
    -script proc_${grp}.sub${ss} \
    -out_dir ${tempdir} \
    -blocks tshift align tlrc volreg blur mask scale regress \
    -copy_anat ${path_anat}/anatSS.${subj}.nii \
    -anat_has_skull no \
    -dsets \
        ${path_func}/sub-${ss}_task-balloonanalogrisktask_run-01_bold.nii.gz \
        ${path_func}/sub-${ss}_task-balloonanalogrisktask_run-02_bold.nii.gz \
        ${path_func}/sub-${ss}_task-balloonanalogrisktask_run-03_bold.nii.gz \
    -tcat_remove_first_trs 2 \
    -align_opts_aea -cost lpc+ZZ -ginormous_move -check_flip \
    -volreg_warp_dxyz 2 \
    -volreg_align_to MIN_OUTLIER \
    -volreg_align_e2a \
    -volreg_tlrc_warp \
    -tlrc_base $basedset \
    -tlrc_NL_warp \
    -tlrc_NL_warped_dsets \
        $path_anat/anatQQ.${subj}.nii \
        $path_anat/anatQQ.${subj}.aff12.1D \
        $path_anat/anatQQ.${subj}_WARP.nii \
    -blur_size 5.0 \
    -regress_stim_times \
        ${prep_onset}/sub-${ss}_combined_cash_demean_afni.1d \
        ${prep_onset}/sub-${ss}_combined_cash_RT_afni.1d \
        ${prep_onset}/sub-${ss}_combined_control_pumps_demean_afni.1d \
        ${prep_onset}/sub-${ss}_combined_control_pumps_RT_afni.1d \
        ${prep_onset}/sub-${ss}_combined_explode_demean_afni.1d \
        ${prep_onset}/sub-${ss}_combined_pumps_demean_afni.1d \
        ${prep_onset}/sub-${ss}_combined_pumps_RT_afni.1d \
    -regress_stim_labels \
        cash_demean cash_RT control_pumps_demean \
        control_pumps_RT explode_demean pumps_demean \
        pumps_RT \
    -regress_basis_multi \
        'BLOCK(0.772,1)' 'dmBLOCK' 'BLOCK(0.772,1)' 'dmBLOCK' \
        'BLOCK(0.772,1)' 'BLOCK(0.772,1)' 'dmBLOCK' \
    -regress_stim_types \
        AM2 AM1 AM2 AM1 AM2 AM2 AM1 \
    -regress_censor_motion 0.3 \
    -regress_motion_per_run \
    -regress_opts_3dD \
        -gltsym 'SYM: pumps_demean -control_pumps_demean' \
        -glt_label 1 pumps_demean_vs_ctrl_demean \
    -regress_make_ideal_sum sum_ideal.1D \
    -regress_3dD_stop \
    -regress_reml_exec \
    -regress_est_blur_epits \
    -regress_est_blur_errts \
    -execute

# =============================================================================

# Again, Biowulf-running considerations: if processing went fine and
# we were using a temporary directory, copy data back.
if ( $usetemp && -d $tempdir ) then
    echo "Copying data from $tempdir to $path_out"
    mkdir -p $path_out
    \cp -pr $tempdir/* $path_out/
endif

# If it worked, run some volreg snapshots and compress outputs:
# compare chosen EPI reference volume (from alignment to anatomical)
# to the final anatomical.
if ( -d $path_out ) then
    cd $path_out
    @snapshot_volreg \
        anat_final.${subj}+tlrc.HEAD \
        final_epi_vr_base*+tlrc.HEAD \
        ${subj}
    cd ..
endif

# =============================================================================

echo "++ DONE with afni_proc for subject: $subj"

time ; exit 0
Script: s.nimh_group_level_02_mema.tcsh

Comment: Instead of using 3dclust here, the more modern and convenient 3dClusterize would now be recommended.
#!/bin/tcsh

# ----------------------------------------------------------------------
# Script: s.nimh_group_level_02_mema.tcsh
# Run on: openfmri/ds001_R2.0.4
# Date  : April, 2018
#
# Take a group of subjects processed for task-based FMRI, run 3dMEMA
# on their final statistical results.  Clustering is also performed
# here, using 3dClustSim (with the ACF blur estimation).  Some
# individual volumes of t-statistics and effect estimate maps are also
# output.
#
# Used for "NIMH-AFNI" processing in:
#
#   Some comments and corrections on FMRI processing with AFNI in
#   "Exploring the Impact of Analysis Software on Task fMRI Results"
#
#   Paul A. Taylor, Gang Chen, Daniel R. Glen, Justin K. Rajendra,
#   Richard C. Reynolds, Robert W. Cox.
#   Scientific and Statistical Computing Core, NIMH/NIH/DHHS,
#   Bethesda, MD, USA.
#
# NOTE: This set of commands includes some features that would have
# been chosen in preference to those of BMN, as described in the above
# text (e.g., using the '-missing_data 0' option in 3dMEMA).
# *However*, there are still some things that should be altered, such
# as the pairing of one-sided testing with what is effectively a
# two-sided search for clusters; the two-sided test should be used
# for correctness.
#
# ----------------------------------------------------------------------
#
# To run, for example:
#
#   tcsh -ef s.nimh_group_level_02_mema.tcsh
#
# ----------------------------------------------------------------------

# ================ Set up paths and many params ======================

set here = $PWD
set p_0 = /data/NIMH_SSCC
set path_main = ${p_0}/openfmri/ds001_R2.0.4
set path_proc = ${path_main}/data_proc_nimh
set odir = GROUP_LEVEL_nimh
set path_mema = ${path_proc}/${odir}

# Running 3dMEMA
set script_mema = do_group_level_nimh.tcsh  # generated command file name
set omema = mema_results.nii.gz             # output effect+stats filename
set omema_mskd = mema_results_mskd.nii.gz   # masked, stats in brain

# Cluster parameters
set csim_NN = "NN1"        # neighborhood; could be NN1, NN2, NN3
set csim_sided = "1sided"  # could be 1sided, 2sided or bisided
set csim_pthr = 0.01       # voxelwise thr in 3dClustSim
set csim_alpha = 0.05      # nominal FWE

# ================== Optional: Biowulf cluster =========================

# This section for possibly running on Biowulf cluster.

#### The following 2 lines could be uncommented to load these modules;
#### that would likely not be necessary outside of NIH's Biowulf cluster.
# source /etc/profile.d/modules.csh
# module load afni R

# set thread count if we are running SLURM
if ( $?SLURM_CPUS_PER_TASK ) then
    setenv OMP_NUM_THREADS $SLURM_CPUS_PER_TASK
endif

# Set temporary output directory; then requires using something like
# this on the swarm command line: --sbatch '--gres=lscratch:50'.
# These variables used again *after* afni_proc.py command, if running
# on Biowulf.
if ( $?SLURM_JOBID ) then
    set tempdir = /lscratch/$SLURM_JOBID/${odir}
    set usetemp = 1
else
    set tempdir = $path_mema
    set usetemp = 0
endif

mkdir -p $tempdir

# ====================== Voxelwise modeling ==========================

# Create and execute 3dMEMA command.  Note the inclusion of the
# "-options -missing_data 0" flags, to avoid some numerical badness of
# possible FOV edge effects, etc.
gen_group_command.py \
    -command 3dMEMA \
    -write_script ${tempdir}/${script_mema} \
    -prefix ${tempdir}/${omema} \
    -dsets ${path_proc}/sub*/stats.sub*_REML+tlrc.HEAD \
    -set_labels "controls" \
    -subs_betas 'pumps_demean_vs_ctrl_demean#0_Coef' \
    -subs_tstats 'pumps_demean_vs_ctrl_demean#0_Tstat' \
    -options -missing_data 0

tcsh -ef ${tempdir}/${script_mema}
echo "++ MEMAed!"

cd ${tempdir}
echo "++ Now in the group stat dir: $PWD"

# ====================== Make group mask ==========================

# Make a group level mask for reporting results.  This also determines
# the volume over which multiple comparisons corrections for voxelwise
# stats are done.
3dmask_tool \
    -prefix mask.nii.gz \
    -input `ls ${path_proc}/sub-*/mask_epi_anat.*.HEAD` \
    -frac 1.0

# ==================== Calc average smoothness =======================

# Get the line from each blur estimate file that has our desired
# parameters; we are running REML here and using the ACF smoothness
# estimation, so the gpattern variable reflects both.  For N subjects,
# the output *1D file contains N rows of the 3 ACF parameters.
set gpattern = 'err_reml ACF'
grep -h "$gpattern" ${path_proc}/sub*/blur_est* \
    | cut -d' ' -f1-3 \
    > group_ACF_ests.1D

# Get the group average ACF parameters.
set blur_est = ( `3dTstat -mean -prefix - group_ACF_ests.1D\'` )
echo "++ The group average ACF params are: $blur_est"

# ==================== Cluster simulations =======================

# Simulations for FWE corrected cluster-size inference
3dClustSim \
    -both \
    -mask mask.nii.gz \
    -acf $blur_est \
    -prefix ClustSim

# Get the volume threshold value from the ClustSim results, based on
# user's choice of parameters as set above.  The "-verb 0" means that
# just the threshold number of voxels is returned, so we can save that
# as a variable.
set clust_thrvol = `1d_tool.py -verb 0 \
                        -infile ClustSim.${csim_NN}_${csim_sided}.1D \
                        -csim_pthr $csim_pthr \
                        -csim_alpha "$csim_alpha"`

# Get the statistic value equivalent to the desired voxelwise p-value,
# for thresholding purposes.  Using the same p-value and sidedness
# that were selected in the ClustSim results.  This program also gets
# the number of degrees of freedom (DOF) from the header of the volume
# containing the statistic.  The "-quiet" means that only the
# associated statistic value is returned, so we can save it to a
# variable.
set voxstat_thr = `p2dsetstat -quiet \
                        -pval $csim_pthr \
                        "-${csim_sided}" \
                        -inset ${omema}'[controls:t]'`

echo "++ The final cluster volume threshold is: $clust_thrvol"
echo "++ The voxelwise stat value threshold is: $voxstat_thr"

# ================== Make cluster maps =====================

# Apply WB mask to the statistics results, so we only find clusters
# that lie within the mask of interest.
3dcalc \
    -a ${omema} \
    -b mask.nii.gz \
    -expr 'a*b' \
    -prefix ${omema_mskd}

# Following the previous paper, these commands extract the "positive"
# and "negative" cluster info, respectively, into separate files.
# Each applies clustering parameters above and makes: a *map* of
# cluster ROIs (regions numbered 1, 2, 3, ...); a map of the effect
# estimate (EE) values within those clusters; and a table report of
# the clusters.
# Technical note: '-1tindex 1' means that the threshold is applied to
# the [1]st volume of the input dset, which here holds the statistic
# values; '-1dindex 0' means that data values saved in '-prefix ...'
# come from the [0]th volume of the input dset, which here holds the
# EE values.
set csim_pref = "clust_tpos"
3dclust \
    -1Dformat -nosum -1tindex 1 -1dindex 0 \
    -2thresh -1e+09 $voxstat_thr -dxyz=1 \
    -savemask ${csim_pref}_map.nii.gz \
    -prefix ${csim_pref}_EE.nii.gz \
    1.01 ${clust_thrvol} ${omema_mskd} > ${csim_pref}_table.1D
# Also make a mask of t-stats (not necessary, but matching previous
# work)
3dcalc \
    -a ${csim_pref}_map.nii.gz \
    -b ${omema}'[1]' \
    -expr "step(a)*b" \
    -prefix ${csim_pref}_t.nii.gz \
    -float

set csim_pref = "clust_tneg"
3dclust \
    -1Dformat -nosum -1tindex 1 -1dindex 0 \
    -2thresh -$voxstat_thr 1e+09 -dxyz=1 \
    -savemask ${csim_pref}_map.nii.gz \
    -prefix ${csim_pref}_EE.nii.gz \
    1.01 ${clust_thrvol} ${omema_mskd} > ${csim_pref}_table.1D
# Also make a mask of t-stats (not necessary, but matching previous
# work)
3dcalc \
    -a ${csim_pref}_map.nii.gz \
    -b ${omema}'[1]' \
    -expr "step(a)*b" \
    -prefix ${csim_pref}_t.nii.gz \
    -float

cd $here

# ===================================================================

# Again, Biowulf-running considerations: if processing went fine and
# we were using a temporary directory, copy data back.
if ( $usetemp && -d $tempdir ) then
    echo "Copying data from $tempdir to $path_mema"
    mkdir -p $path_mema
    \cp -pr $tempdir/* $path_mema/
endif

# ===================================================================

echo "\n++ DONE with group level stats + clustering!\n"

time ; exit 0
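As noted in the comment at the top of this script, 3dClusterize now supersedes 3dclust. A hedged sketch of how the pair of one-tailed 3dclust calls could be replaced by a single bisided 3dClusterize call, reusing the script's variables (the output prefixes here are hypothetical; please check `3dClusterize -help` before use):

```shell
# Threshold the t-stat volume [1] bisided at +/- $voxstat_thr, keep
# clusters of at least $clust_thrvol voxels (NN1), and write out a
# cluster-ROI map, the effect estimates within clusters (volume [0]),
# and a cluster table.
3dClusterize \
    -inset ${omema_mskd} \
    -ithr 1 -idat 0 \
    -NN 1 \
    -bisided -${voxstat_thr} ${voxstat_thr} \
    -clust_nvox ${clust_thrvol} \
    -pref_map clust_map.nii.gz \
    -pref_dat clust_EE.nii.gz \
    > clust_table.1D
```

This also keeps the positive- and negative-tail clusters in a single map and table, rather than in separate tpos/tneg files.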