Cox's Brain Blog
The direction of projection is rotated around, and the result is a series of 2D images that show in each pixel the brightest (or darkest) voxel along the projection ray. The vessels stand out as 1D tubular structures in this type of animation. It is a very useful way to look at MRA data.
This technique could also be used to detect vessels automatically. The idea is to do many such projections from directions covering the ½-sphere and, for each voxel, keep track of how many directions it is the max (or min) along a projection. Voxels that are often the extreme value are likely to be in a vessel. Then some sort of connectivity criterion can be used to exclude isolas and/or spackle over gaps.
Each projection, on a 256x256x128 volume, should take less than 0.5 s, so in a couple minutes the basic process could be carried out. The 'extract_' functions in AFNI's cox_render.c could be modified to carry out this computation relatively easily. Someday ....
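The counting idea can be sketched in a few lines of numpy. This toy version uses only the three cardinal projection directions (a real version would rotate over the ½-sphere, as in cox_render.c), and the volume and "vessel" are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
vol = rng.normal(0.0, 1.0, size=(16, 16, 8))   # fake MRA volume
vol[8, 8, :] += 10.0                           # plant a bright "vessel" line

counts = np.zeros(vol.shape, dtype=int)
for axis in range(3):                          # cardinal directions only
    idx = vol.argmax(axis=axis)                # brightest voxel along each ray
    mark = np.zeros(vol.shape, dtype=bool)
    np.put_along_axis(mark, np.expand_dims(idx, axis), True, axis=axis)
    counts += mark                             # this voxel "won" one direction
```

Voxels in the planted vessel win along nearly every direction, while background voxels rarely win more than once; thresholding `counts` is the vessel detector.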
Tom Nichols believes that the smoothness should be measured locally, not globally. So my idea is to use the method of the previous post, with the mask from 3dAutomask, to
- incrementally blur a little
- test the smoothness in each region
- go back and continue blurring where the image isn't smooth enough yet
The obvious thing to do is to treat Gaussian blurring as the solution to the heat/diffusion PDE in boundaryless space. We then add boundaries. Neumann boundary conditions are appropriate, since otherwise the edge values are fixed and unsmoothed. If we think of diffusion as an averaging over the values visited by an ensemble of Brownian particles, then Neumann BCs on the PDE come from assuming reflecting BCs for the particle paths (Dirichlet BCs come from assuming absorbing paths).
In the discrete world of image processing, we deal with random walks, not Brownian motions. One way to implement diffusion is with an Euler step using a finite-difference stencil. In 1D space, the stencil is:
1 -2 1
so that the evolution of some grid-function u() is
u(n,t+1) = u(n,t) + ε [ u(n-1,t) - 2⋅u(n,t) + u(n+1,t) ]
u(n,t+1) = (1-2ε)u(n,t) + εu(n-1,t) + εu(n+1,t)
where ε is some small quantity, n is the grid index, and t is some artificial 'time'. However, at a mask boundary, we don't want to include the n-1 or n+1 value. The Neumann BC says that when a particle leaves the boundary, it is reflected back. So if u(n-1,t) is the value we don't want to include, but u(n+1,t) is OK to keep, then we modify the evolution at the n^{th} grid point to be
u(n,t+1) = (1-ε)u(n,t) + εu(n+1,t)
since the u(n-1,t) 'exploration' by the wandering particle is reflected back to u(n,t). (Implicitly, this is setting the boundary half-way between the n-1 and n grid points.)
In higher dimensions, we modify the stencil similarly, taking the weights that would have come from the non-central points that are outside the mask, and lumping those back into the central point. Then, to get more than a tiny amount of blurring, we iterate the process repeatedly. Obviously, this will not be possible to do in FFT space. Also obviously, this will be a slow way to get a lot of blurring, and is really only useful when a limited amount of blurring is required. That is the case for what we need this procedure for, in the AFNI program 3danisosmooth.
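Here is a toy 1D version of the masked Euler step described above, with the out-of-mask weights lumped back into the central point. This is purely illustrative (it is not the 3danisosmooth code, and the mask and values are made up):

```python
import numpy as np

def masked_blur_1d(u, mask, eps=0.2, iters=50):
    """Iterated Euler steps of the 1 -2 1 diffusion stencil inside a mask.
    Stencil weights that would fall on out-of-mask neighbors are lumped
    back into the central point -- the Neumann (reflecting) BC."""
    u = u.astype(float).copy()
    inside = np.flatnonzero(mask)
    for _ in range(iters):
        nxt = u.copy()
        for n in inside:
            w_center, acc = 1.0, 0.0
            for m in (n - 1, n + 1):
                if 0 <= m < u.size and mask[m]:
                    acc += eps * u[m]       # in-mask neighbor contributes
                    w_center -= eps
            nxt[n] = w_center * u[n] + acc  # lumped weight keeps total = 1
        u = nxt
    return u

mask = np.array([0, 1, 1, 1, 1, 1, 0], dtype=bool)
u0 = np.array([9.0, 0.0, 0.0, 6.0, 0.0, 0.0, 9.0])
u = masked_blur_1d(u0, mask)
```

The out-of-mask 9s are never touched and never leak in, and the in-mask total is conserved while the central spike spreads out toward the in-mask mean.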
As pointed out by Larry Frank (UCSD), the rate at which we gather the data is plenty fast to avoid aliasing in time from the heartbeat and respiration -- say, one image every 100 ms -- but the problem is that the images are spread out among various slices, so in any given voxel, the TR is 2000 ms or more.
However, the cardiac and respiratory cycles are the same for all voxels. Admittedly, their impact on each voxel's time series will be different. So what we'd like to estimate, from the reconstructed magnitude EPI time series alone, is two things:
(t) the time series for the heartbeat and respiration (e.g., their "phase", as in Gary Glover's RETROICOR)
and
(x) the coefficients in each voxel that measure the effect of the (t) estimates on the time series of data in that voxel.
Numerology regarding (t): if we have N volumes in time with a given TR, then our frequency resolution is 1/(N⋅TR). The heartbeat (for most people) is centered at 1.1 Hz ± 0.1 Hz, so let's say we need a bandwidth of 0.2 Hz for this, which is 0.4 Hz when we consider negative frequencies, which is 0.4⋅N⋅TR values to estimate. If there are V voxels in the brain, we have N⋅V data values to work with. As long as V >> 0.4⋅TR ≈ 1, this is "trivial" (except for the details). A similar calculation applies to estimating the respiration. The spatial estimate of the (x) coefficients is similar to RETROICOR, at least in principle.
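The arithmetic, with made-up but typical numbers (N, TR, and V here are assumptions for illustration, not from any particular scan):

```python
# assumed scan parameters -- illustrative only
N, TR, V = 200, 2.0, 30_000        # volumes, seconds, brain voxels

freq_resolution = 1.0 / (N * TR)   # Hz: here 1/400 = 0.0025 Hz
bandwidth = 0.2                    # Hz, around the cardiac peak
unknowns = 2 * bandwidth * N * TR  # = 0.4*N*TR time-series values to estimate
data_values = N * V                # magnitude EPI samples available
```

With these numbers there are 160 cardiac unknowns against 6,000,000 data values, which is the sense in which the estimation is "trivial" (except for the details).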
Of course, both (t) and (x) have to be estimated at once. This is reminiscent of spatial factorizing as in principal components, so I suppose one model would lead to some kind of eigenvalue/vector problem. Alternatively, it might be useful to pre-select only a given set of voxels with extra-large variance, as those most likely to evidence the physiological fluctuations. Then the (t) data can be estimated from these (somehow), and afterwards RETROICOR directly applied to get the (x) stuff.
All the above is without equations. The next step is the hard part -- figuring out a model that is plausible and that leads to an algorithm. Volunteers?
Unfortunately, this goal doesn't appear possible. However, I was thinking about the following mathematical factoid:
y(x) = x − a⋅sin(x)
has the solution (for |a|<1, which is the well-posed case)
x(y) = y + ∑_{n=1}^{∞} (2/n)⋅J_{n}(a⋅n)⋅sin(ny)
(This type of series of Bessel functions is known as a Kapteyn series; also see Chapter 17 of GN Watson, A Treatise on the Theory of Bessel Functions, 2nd Edition.) Furthermore, J_{n}(a⋅n) decreases exponentially fast as n→∞. This means that for warps represented as low-order finite Fourier series, the inverse functions, although requiring an infinite number of terms to be exact, are pretty well represented themselves by low-order finite Fourier series.
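A quick numerical check of the factoid. The Bessel function is computed from its standard integral representation so nothing beyond the Python standard library is needed; a and x_true are arbitrary test values:

```python
import math

def bessel_j(n, x, steps=2000):
    """J_n(x) from the integral J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt,
    by the trapezoid rule (very accurate here: the integrand's derivative
    vanishes at both endpoints)."""
    h = math.pi / steps
    s = 0.5 * (1.0 + math.cos(n * math.pi))   # endpoint terms (sin 0 = sin pi = 0)
    for k in range(1, steps):
        t = k * h
        s += math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

def invert(y, a, nterms=12):
    """Kapteyn-series inverse of y = x - a*sin(x), valid for |a| < 1."""
    return y + sum((2.0 / n) * bessel_j(n, n * a) * math.sin(n * y)
                   for n in range(1, nterms + 1))

a, x_true = 0.3, 1.3
y = x_true - a * math.sin(x_true)
x_est = invert(y, a)
```

Because J_n(a⋅n) shrinks exponentially with n, a dozen terms already recover x to several decimal places, which is the point about low-order Fourier warps having (effectively) low-order Fourier inverses.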
In higher dimensions, the obvious generalization is a tensor-product Fourier series; this would be reasonable over a rectangular domain. Slightly less obvious, over the interior of a spherical domain, a better thing to do would be spherical harmonic series based on Y_{nm}(θ,φ)j_{n}(λ_{p}r). On the surface, drop the half-order Bessel functions. Must investigate numerics.
The problem is that image registration is ultimately based on comparing the two (or more) images as the software wiggles them around, and stopping when the comparison is good. The comparison might be least squares (e.g., correlation), mutual information, or whatnot. But if the images aren't that comparable in detail, then the registration might be off, and it is hard to judge how much.
For images going into the DWI-to-DTI calculation, one solution is the DWI-to-DTI calculation itself. That is, the goal after registration is then to compute the DT from the DW data at each voxel, and this computation is itself a fitting of the DT to the DW values. At the end, in each voxel we have residual = what's left after we subtract the DT fit from the DW data. A precise registration method would be to wiggle the DW images around until the overall residuals from the DT fit are small. Schematically:
(1) Do some preliminary standard registration method, just to get things off to a good start;
(2) Compute the DWI-to-DTI fit at each voxel;
(3) Compute the residual variance (or MAD) at each voxel, then average these (or median) over the brain as a measure of the overall goodness-of-fit;
(4) Wiggle the DW images around and go back to step (2), with the goal of minimizing the overall residual value computed in (3).
There are many details to be worked out, such as the method used in step (4) to decide upon the image movement parameters.
The principle is to directly attack the problem: misregistration produces bad DTI fits, so try to minimize this badness.
This is probably a Ph.D. level problem, taking a couple years to get working well, I'd guess, for a decent student.
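To make the scheme concrete, here is a toy 1-D version of steps (2)-(4), where the per-voxel mean across images stands in for the DT fit and a brute-force shift search stands in for the wiggling. All names, sizes, and shifts are illustrative, not a real registration method:

```python
n = 64
truth = [1.0 if 20 <= i < 40 else 0.0 for i in range(n)]   # toy object

def shift(img, s):
    """circularly shift a 1-D 'image' by s voxels"""
    return [img[(i - s) % len(img)] for i in range(len(img))]

images = [truth[:] for _ in range(5)]
images[3] = shift(truth, 4)            # one badly registered "DW image"

def overall_residual(imgs):
    """steps (2)+(3): per-voxel fit (here just the mean across images,
    standing in for the DT fit) and the summed squared residual about it"""
    res = 0.0
    for i in range(n):
        vals = [img[i] for img in imgs]
        fit = sum(vals) / len(vals)
        res += sum((v - fit) ** 2 for v in vals)
    return res

# step (4): wiggle image 3 over candidate shifts, keeping the one that
# minimizes the overall residual from step (3)
best = min(range(-8, 9),
           key=lambda s: overall_residual(images[:3] + [shift(images[3], s)]
                                          + images[4:]))
```

The search lands on the shift that undoes the misregistration, driving the residual to zero; the real problem replaces the mean with the nonlinear DT fit and the shift with full movement parameters.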
It is relatively easy to make an image of the static magnetic field itself, using two acquisitions with TE shifting. However, making an image of the gradient fields is harder, since the gradients themselves are what is used to make an image in the first place.
One method is to make an image of a fiducial object, something with some spatial structure that has been measured outside the scanner. Then take an image of this object: its distortions will map the gradient field nonlinearities.
Without such an object, another way is to use an internal reference: the diffusion tensor of pure water (OK, water doped with a T1 relaxation agent to make the signal brighter). This tensor has a known value. Now use a DTI sequence to make an image of this tensor. Its deviation from being a spatially constant multiple of the identity is a measure of the nonlinearity of the gradient fields. Noise is a problem, but since the gradient fields are smooth, we could fit them to the DTI data by an expansion in spherical harmonics.
The advantage of this method is that the only phantom required is a bottle of water, the plainer the better. It would be important to make sure the water was still, so the phantom should be placed in the scanner bed carefully and then left for a while (overnight?) to let internal currents damp out. To calibrate the gradients precisely, the temperature of the phantom should be uniform and known, as well. Details, details, details.
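The fitting step might look like the following sketch, with low-order 1-D polynomials standing in for the spherical-harmonic expansion and an entirely made-up nonlinearity and noise level:

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(-1.0, 1.0, 200)            # position along the bore (a.u.)
true_field = 1.0 + 0.05 * z**2             # smooth gradient nonlinearity
measured = true_field + rng.normal(0.0, 0.02, z.size)  # noisy per-voxel DTI value

# least-squares fit with a smooth low-order basis (1-D polynomials here,
# standing in for the spherical harmonics suggested above)
basis = np.vstack([np.ones_like(z), z, z**2]).T
coef, *_ = np.linalg.lstsq(basis, measured, rcond=None)
smooth_fit = basis @ coef
```

Because the basis is smooth and low-order, the fit averages the per-voxel noise away and recovers the underlying nonlinearity to well within the noise level.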
This would be a nice M.S. or postbac project, but is probably too small for a Ph.D. project (unless it can be tricked out with more wrinkles).
- qt_space_short.pdf = the short explanation (60 KB, 1 page, very terse)
- qt_space_long.pdf = the long explanation (900+ KB, multiple pages)
What are the fundamental principles, that most people could agree with, on which a theory of FMRI data analysis could be built? I can think of two (so far):
- Hemodynamic Response Function: FMRI data is too noisy, so we have to repeat many trials of each stimulus class, so we have to assume that the results are the "same" each time and then do some statistics. The "sameness" assumption, plus the assumption of additivity, is the HRF principle.
There are many potential ramifications of this principle for analysis, the simplest of which is the current favorite analysis for FMRI papers ("GLM"). Other possibilities include:
- Assuming the HRF is the same unknown function across the whole brain, with unknown amplitudes in each voxel; leads to a kind of principal components deconvolution.
- Assuming the HRF varies randomly a little between stimulus repetitions, either independently or coherently between voxels.
- Blobs: That is to say, we are looking for regions of activation, not randomly scattered junk. I'm not sure of the ramifications of this, other than that we need to come up with a coherent scheme for describing possible spatial activation patterns; then the job of choosing amongst them will be soundly based. Wavelets? On the cortical gray-matter shell/surface?
(i) a coherent set of models for FMRI data upon which
(ii) a coherent set of algorithms for extracting information can be based?
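The "GLM" consequence of the HRF principle can be sketched in miniature: build a regressor by convolving the stimulus train with an assumed HRF shape (the gamma-variate form and every number below are assumptions for illustration), then estimate the per-voxel amplitude by least squares:

```python
import math

TR, n = 2.0, 100                       # assumed: 2 s TR, 100 time points
stim = [1.0 if t % 20 == 0 else 0.0 for t in range(n)]  # a trial every 40 s

def hrf(t):                            # assumed gamma-variate shape, peak near 5 s
    return (t ** 4) * math.exp(-t / 1.3) if t > 0 else 0.0

kernel = [hrf(k * TR) for k in range(12)]
peak = max(kernel)
kernel = [v / peak for v in kernel]    # normalize to unit peak

# the "sameness + additivity" assumption: response = stimulus train
# convolved with one fixed HRF, scaled by an unknown per-voxel amplitude
regressor = [sum(stim[t - k] * kernel[k] for k in range(min(t + 1, len(kernel))))
             for t in range(n)]

true_beta = 2.5
data = [true_beta * r for r in regressor]   # noiseless toy voxel

beta_hat = (sum(d * r for d, r in zip(data, regressor))
            / sum(r * r for r in regressor))
```

In this noiseless toy the least-squares amplitude recovers the true value exactly; the other possibilities listed above amount to relaxing the "one fixed kernel" assumption in different ways.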
Reluctantly, I have decided that it is now necessary to fully support oblique 3D volumes in AFNI, since more and more people are acquiring oblique partial-coverage EPI volumes and then want to overlay the FMRI results from these onto non-oblique anatomicals. There are essentially 2 approaches:
- (a) Transform everything to Talairach or MNI space and then everything is aligned (or at least, you can't overlay anything until you Talairach-ize). As far as I understand, this is what SPM and FSL do.
- (b) Allow arbitrarily oriented volumes to be overlaid and processed.
The more drastic (b) is what I lean towards, since it fits into the AFNI philosophy better. This approach leads to a host of problems and issues, some harder than others, and some as yet unforeseen:
- orientation in space will now be defined by a matrix, rather than a set of codes
- must deal with old datasets whose headers don't contain the matrix, just the codes (easy)
- must fix all dataset input and creation functions to make sure the matrix gets defined from all the formats AFNI supports (including MINC, NIFTI, CTF, ANALYZE)
- must examine all 1400+ places in the AFNI source code, in 100+ files, where the "daxes" sub-struct of a dataset is accessed in order to determine what the impact is at that location
- what to do about 3dZeropad? 3dresample? these are some of the programs that make special use of the orientation (e.g., '3dZeropad -I 7' adds 7 planes of 0s on the Inferior edge, but if the dataset is oblique, do we do the 'inferior-most' edge or just complain?)
- must fix the EDIT_dset_items() and EDIT_*_copy() functions so that defining and changing the axes is coherent
- 3drefit needs some changes
- what to do with an orientation matrix that is not a diagonal multiple of an orthogonal matrix (i.e., the 3 axes of the dataset are not perpendicular)? Complain? Use the nearest orthogonal set of directions? Or actually allow non-perpendicular axes (skewed volumes)? I do not like this last choice, Sam I am.
- the Matlab I/O functions must be modified to match the AFNI dataset header.
- the few non-SSCC users who write AFNI compatible code need to be informed so they can adapt their programs.
- Effect on realtime scanning?
I have started this process: last week I added a matrix 'ijk_to_dicom' to the daxes sub-struct, a function to load this matrix from the old orientation codes, one to load the old orientation codes from the matrix, and an AFNI header attribute to store the matrix. The code is mostly in thd_matdaxes.c for now, but as more than 100 AFNI source code files refer to the daxes sub-struct, there will be a lot of changes coming down the pike. This will take a while, a long while, so I'm hoping to be able to incorporate the changes gradually. That's why I have the function that takes the matrix and computes the old orientation codes -- this will mean that compatibility with non-oblique datasets will be there even with as-yet unmodified code.
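For illustration, here is one way such a matrix-to-codes compatibility function might look. This mirrors the idea, not the actual thd_matdaxes.c code, and the sign convention assumes DICOM axes (+x toward Left, +y toward Posterior, +z toward Superior):

```python
CODES = {(0, +1): "R-L", (0, -1): "L-R",
         (1, +1): "A-P", (1, -1): "P-A",
         (2, +1): "I-S", (2, -1): "S-I"}

def orientation_codes(mat):
    """Nearest old-style orientation codes for a 3x3 ijk-to-xyz matrix:
    for each dataset axis (column), take the dominant spatial axis (row)
    and its sign.  For a non-oblique matrix this is exact; for an oblique
    one it is the 'nearest cardinal orientation' approximation."""
    codes = []
    for col in range(3):
        column = [mat[row][col] for row in range(3)]
        row = max(range(3), key=lambda r: abs(column[r]))
        sign = +1 if column[row] >= 0 else -1
        codes.append(CODES[(row, sign)])
    return codes

# an axial 2 mm dataset with a slight obliquity
mat = [[2.0, 0.1, 0.0],
       [0.1, 2.0, 0.0],
       [0.0, 0.0, 2.0]]
```

Running `orientation_codes(mat)` on this slightly oblique matrix yields the cardinal codes, which is exactly the fallback that keeps unmodified code working on non-oblique (and nearly non-oblique) datasets.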
The next few steps I'm planning are geared towards overlaying obliques properly. There's some more invisible structural support that needs building, and then the actual overlay business -- the return of warp-on-demand, since oblique datasets need to be interpolated to be displayed in the cardinal orientations of the AFNI windows. This in turn means I have to revisit the warp-on-demand code that I've carefully left untouched for about 9 years (I created it, but now I'm afraid of it -- the Frankenstein's monster of AFNI).
I don't know how long this will take. My enthusiasm for this is low, but I now feel it is required, and that I'm the only one who can do it since all the coordinate guts of AFNI are mine, built in from the very beginning in 1994.