Cox's Brain Blog

Random thoughts about whatever, where "random" means that which occurs to me while I'm walking to/from the NIH, and "whatever" means something loosely related to brains, AFNI, MRI, mathematics, physics, biology, .... However, theology will be eschewed.

Submitted by bpittman. on 2006-01-03 08:59.
It is common in MRA to display the 3D results by a Maximum Intensity Projection (MIP) for 'white blood' imaging, and a minimum Intensity Projection (mIP) for 'black blood' imaging.

The direction of projection is rotated around, and the result is a series of 2D images that show in each pixel the brightest (or darkest) voxel along the projection ray. The vessels stand out as 1D tubular structures in this type of animation. It is a very useful way to look at MRA data.

This technique could also be used as a way of detecting vessels automatically. The idea is to do many such projections from a lot of directions covering the ½-sphere, and for each voxel, keep track of how many directions along which it is the max (or min) of its projection ray. Voxels that are quite often the extreme value are likely to be in a vessel. Then some sort of connectivity criterion can be used to exclude isolas and/or to spackle over gaps.

Each projection, on a 256x256x128 volume, should take less than 0.5 s, so in a couple minutes the basic process could be carried out. The 'extract_' functions in AFNI's cox_render.c could be modified to carry out this computation relatively easily. Someday ....
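
A minimal sketch of the counting step, restricted to the 3 cardinal directions for clarity; the real scheme would resample along many rotated ray directions covering the ½-sphere. None of this is AFNI code -- the names are mine:

  /* For each voxel, count how often it is the maximum along a
     projection ray.  Cardinal-axis version, for illustration only.
     count[] must be zero-initialized by the caller. */
  #define IJK(i,j,k) ((i) + (j)*nx + (k)*nx*ny)

  void mip_count_cardinal(const float *vol, int nx, int ny, int nz, int *count)
  {
      for (int k = 0; k < nz; k++)            /* rays parallel to x */
          for (int j = 0; j < ny; j++) {
              int im = 0;
              for (int i = 1; i < nx; i++)
                  if (vol[IJK(i,j,k)] > vol[IJK(im,j,k)]) im = i;
              count[IJK(im,j,k)]++;
          }
      for (int k = 0; k < nz; k++)            /* rays parallel to y */
          for (int i = 0; i < nx; i++) {
              int jm = 0;
              for (int j = 1; j < ny; j++)
                  if (vol[IJK(i,j,k)] > vol[IJK(i,jm,k)]) jm = j;
              count[IJK(i,jm,k)]++;
          }
      for (int j = 0; j < ny; j++)            /* rays parallel to z */
          for (int i = 0; i < nx; i++) {
              int km = 0;
              for (int k = 1; k < nz; k++)
                  if (vol[IJK(i,j,k)] > vol[IJK(i,j,km)]) km = k;
              count[IJK(i,j,km)]++;
          }
  }

Voxels whose count is large after all directions are processed would then be thresholded and passed to the connectivity step.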


Comments: 1 
Submitted by bpittman. on 2005-12-29 13:18.
Lee Friedman suggested that EPI images should be smoothed to a given smoothness prior to time series analysis.

Tom Nichols believes that the smoothness should be measured locally, not globally. So my idea is to use the method of the previous post, with the mask from 3dAutomask, to

  • incrementally blur a little
  • test the smoothness in each region
  • go back and continue blurring where the image isn't smooth enough yet
Details need to be worked out.
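
A skeletal cash-out of the loop, just to fix ideas; estimate_local_fwhm() and blur_step() are hypothetical placeholders, not AFNI functions:

  #include <stdlib.h>

  /* hypothetical: local smoothness estimate (FWHM, mm) around voxel i */
  extern float estimate_local_fwhm(const float *img,
                                   const unsigned char *mask, int i);
  /* hypothetical: apply one small masked blur only where todo[i] != 0 */
  extern void  blur_step(float *img, const unsigned char *mask,
                         const unsigned char *todo, int nvox);

  void blur_to_fwhm(float *img, const unsigned char *mask,
                    int nvox, float target_fwhm)
  {
      unsigned char *todo = malloc(nvox);
      int nleft = 1;
      while (nleft > 0) {              /* until smooth enough everywhere */
          nleft = 0;
          for (int i = 0; i < nvox; i++) {
              todo[i] = mask[i] && estimate_local_fwhm(img, mask, i) < target_fwhm;
              nleft  += todo[i];
          }
          if (nleft) blur_step(img, mask, todo, nvox);  /* blur a little more */
      }
      free(todo);
  }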


Comments: 1 
Submitted by bpittman. on 2005-12-28 09:46.
Gaussian blurring, as in AFNI's edt_blur.c functions, is easily implemented using FFTs and/or convolution. But if there is a mask involved, and we don't want to include data from outside the mask, what to do?

The obvious thing to do is to treat Gaussian blurring as the solution to the heat/diffusion PDE in boundaryless space. We then add boundaries. Neumann boundary conditions are appropriate, since otherwise the edge values are fixed and unsmoothed. If we think of diffusion as an averaging over the values visited by an ensemble of Brownian particles, then Neumann BCs on the PDE come from assuming reflecting BCs for the particle paths (Dirichlet BCs come from assuming absorbing paths).

In the discrete world of image processing, we deal with random walks, not Brownian motions. One way to implement diffusion is with an Euler step using a finite-difference stencil. In 1D space, the stencil is:
  1   -2   1
so that the evolution of some grid-function u() is
  u(n,t+1) = u(n,t) + ε [ u(n-1,t) - 2⋅u(n,t) + u(n+1,t) ]
  u(n,t+1) = (1-2ε)u(n,t) + εu(n-1,t) + εu(n+1,t)
where ε is some small quantity, n is the grid index, and t is some artificial 'time'. However, at a mask boundary, we don't want to include the n-1 or n+1 value. The Neumann BC says that when a particle leaves the boundary, it is reflected back. So if u(n-1,t) is the value we don't want to include, but u(n+1,t) is OK to keep, then we modify the evolution at the nth grid point to be
  u(n,t+1) = (1-ε)u(n,t) + εu(n+1,t)
since the u(n-1,t) 'exploration' by the wandering particle is reflected back to u(n,t). (Implicitly, this is setting the boundary half-way between the n-1 and n grid points.)

In higher dimensions, we modify the stencil similarly, taking the weights that would have come from the non-central points that are outside the mask, and lumping those back into the central point. Then, to get more than a tiny amount of blurring, we iterate the process repeatedly. Obviously, this will not be possible to do in FFT space. Also obviously, this will be a slow way to get a lot of blurring, and is really only useful when a limited amount of blurring is required. That is the case for the application we have in mind: the AFNI program 3danisosmooth.
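
One Euler step of the masked 1D scheme above, as a minimal sketch (not the edt_blur.c code; the names are mine):

  /* Out-of-mask neighbor weights are lumped back into the center point,
     which implements the reflecting (Neumann) boundary condition.
     Stability requires eps <= 1/2 in 1D. */
  void masked_diffuse_step(const float *u, float *unew,
                           const unsigned char *mask, int n, float eps)
  {
      for (int i = 0; i < n; i++) {
          if (!mask[i]) { unew[i] = u[i]; continue; }
          float acc = 0.0f, w = 0.0f;
          if (i > 0   && mask[i-1]) { acc += u[i-1]; w += 1.0f; }
          if (i < n-1 && mask[i+1]) { acc += u[i+1]; w += 1.0f; }
          unew[i] = (1.0f - w*eps)*u[i] + eps*acc;
      }
  }

Note that the interior case reproduces (1-2ε)u(n,t) + εu(n-1,t) + εu(n+1,t), and the one-sided case reproduces (1-ε)u(n,t) + εu(n+1,t), as above.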


Comments: 1 
Submitted by bpittman. on 2005-12-23 08:56.
Estimation of the physiological noise without external monitoring should be possible.

As pointed out by Larry Frank (UCSD), the rate at which we gather the data is plenty fast to avoid aliasing in time from the heartbeat and respiration -- say, one image every 100 ms -- but the problem is that the images are spread out among various slices, so in any given voxel, the TR is 2000 ms or more.

However, the cardiac and respiratory cycles are the same for all voxels. Admittedly, their impact on each voxel's time series will be different. So what we'd like to estimate, from the reconstructed magnitude EPI time series alone, is two things:
  (t) the time series for the heartbeat and respiration (e.g., their "phase", as in Gary Glover's RETROICOR)
and
  (x) the coefficients in each voxel that measure the effect of the (t) estimates on the time series of data in that voxel.

Numerology regarding (t): if we have N volumes in time with a given TR, then our frequency resolution is 1/(N⋅TR). The heartbeat (for most people) is centered at 1.1 Hz ± 0.1 Hz, so let's say we need a bandwidth of 0.2 Hz for this, which is 0.4 Hz when we consider negative frequencies, which is 0.4⋅N⋅TR values to estimate. If there are V voxels in the brain, we have N⋅V data values to work with. As long as V >> 0.4⋅TR ≈ 1, this is "trivial" (except for the details). A similar calculation applies to estimating the respiration. The spatial estimate of the (x) coefficients is similar to RETROICOR, at least in principle.
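
For scale, with made-up but typical numbers: N = 200 volumes at TR = 2 s gives a frequency resolution of 1/(N⋅TR) = 0.0025 Hz, so the 0.4 Hz cardiac band needs 0.4⋅N⋅TR = 160 unknowns, while a brain of V = 30000 voxels supplies N⋅V = 6,000,000 data values.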

Of course, both (t) and (x) have to be estimated at once. This is reminiscent of spatial factorizing as in principal components, so I suppose one model would lead to some kind of eigenvalue/vector problem. Alternatively, it might be useful to pre-select only a given set of voxels with extra-large variance, as those most likely to evidence the physiological fluctuations. Then the (t) data can be estimated from these (somehow), and afterwards RETROICOR directly applied to get the (x) stuff.

All the above is without equations. The next step is the hard part -- figuring out a model that is plausible and that leads to an algorithm. Volunteers?


Comments: 0 
Submitted by bpittman. on 2005-12-22 16:29.
It would be nice to have a family of simple high-order multi-dimensional warping functions that were easily invertible -- ideally, the family would contain its own inverse functions.

Unfortunately, this goal doesn't appear possible. However, I was thinking about the following mathematical factoid:
  y(x) = x − a⋅sin(x)
has the solution (for |a|<1, which is the well-posed case)
  x(y) = y + ∑n=1…∞ (2/n)⋅Jn(n⋅a)⋅sin(n⋅y)
(This type of series of Bessel functions is known as a Kapteyn series; also see Chapter 17 of GN Watson, A Treatise on the Theory of Bessel Functions, 2nd Edition.) Furthermore, Jn(a⋅n) decreases exponentially fast as n→∞. This means that for warps represented as low-order finite Fourier series, the inverse functions, although requiring an infinite number of terms to be exact, are pretty well represented themselves by low-order finite Fourier series.
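
A quick numerical check of the series (throwaway code; jn() is the standard POSIX Bessel function in math.h, so link with -lm):

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      double a = 0.5, x0 = 1.2;
      double y = x0 - a*sin(x0);            /* forward warp y(x)        */
      double x = y;                         /* truncated Kapteyn series */
      for (int n = 1; n <= 30; n++)
          x += (2.0/n) * jn(n, n*a) * sin(n*y);
      printf("true x = %.12f   recovered x = %.12f\n", x0, x);
      return 0;
  }

Even with this modest truncation the recovered x should agree with x0 to high accuracy, illustrating the exponential decay of the coefficients.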

In higher dimensions, the obvious generalization is a tensor-product Fourier series; this would be reasonable over a rectangular domain. Slightly less obvious, over the interior of a spherical domain, a better thing to do would be spherical harmonic series based on Ynm(θ,φ)⋅jn(λnp⋅r), with the λnp chosen radial wavenumbers. On the surface, drop the half-order Bessel functions. Must investigate numerics.

Comments: 0 
Submitted by bpittman. on 2005-12-21 09:43.
Registering DWI images for DTI calculations has the problem that the different gradient weightings produce markedly different images. The result is that the Gx-weighted image looks quite a lot different from the Gy-weighted image (say). This poses problems for standard registration methods.

The problem is that image registration is ultimately based on comparing the two (or more) images as the software wiggles them around, and stopping when the comparison is good. The comparison might be least squares (e.g., correlation), mutual information, or whatnot. But if the images aren't that comparable in detail, then the registration might be off, and it is hard to judge how much.

For images going into the DWI-to-DTI calculation, one solution is the DWI-to-DTI calculation itself. That is, the goal after registration is to compute the DT from the DW data at each voxel, and this computation is itself a fitting of the DT to the DW values. At the end, in each voxel we have residual = what's left after we subtract the DT fit from the DW data. A precise registration method would be to wiggle the DW images around until the overall residuals from the DT fit are small. Schematically:
  (1) Do some preliminary standard registration method, just to get things off to a good start;
  (2) Compute the DWI-to-DTI fit at each voxel;
  (3) Compute the residual variance (or MAD) at each voxel, then average these (or median) over the brain as a measure of the overall goodness-of-fit;
  (4) Wiggle the DW images around and go back to step (2), with the goal of minimizing the overall residual value computed in (3).

There are many details to be worked out, such as the method used in step (4) to decide upon the image movement parameters.
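
A skeleton of the loop over steps (2)-(4), treating the whole-brain residual as the registration cost; every function here is a hypothetical placeholder, not AFNI API:

  extern void  resample_dwi(const double *motion_params);   /* step (4) */
  extern void  fit_tensor_all_voxels(void);                 /* step (2) */
  extern float median_abs_residual(void);                   /* step (3) */

  /* cost function to hand to a derivative-free minimizer
     (e.g., Powell's method) over the image movement parameters */
  float dti_registration_cost(const double *motion_params)
  {
      resample_dwi(motion_params);      /* wiggle the DW images around */
      fit_tensor_all_voxels();          /* refit the tensor everywhere */
      return median_abs_residual();     /* overall goodness-of-fit     */
  }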

The principle is to attack the problem directly: misregistration produces bad DTI fits, so try to minimize this badness.

This is probably a Ph.D. level problem, taking a couple years to get working well, I'd guess, for a decent student.


Comments: 1 
Submitted by bpittman. on 2005-12-21 09:29.
Measuring the gradient fields in MRI is actually tricky without using some sort of external fiducial object. Here's one way to avoid this problem.

It is relatively easy to make an image of the static magnetic field itself, using two acquisitions with TE shifting. However, making an image of the gradient fields is harder, since the gradients themselves are what is used to make an image in the first place.

One method is to make an image of a fiducial object, something with some spatial structure that has been measured outside the scanner. Then take an image of this object: its distortions will map the gradient field nonlinearities.

Without such an object, another way is to use an internal reference: the diffusion tensor of pure water (OK, water doped with a T1 relaxation agent to make the signal brighter). This tensor has a known value. Now use a DTI sequence to make an image of this tensor. Its deviation from being a spatially constant multiple of the identity is a measure of the nonlinearity of the gradient fields. Noise is a problem, but since the gradient fields are smooth, we could fit them to the DTI data by an expansion in spherical harmonics.
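
One way to write the model (my shorthand, borrowing the usual gradient-nonlinearity formalism; the exact transpose convention depends on how L is defined): if L(x) is the 3×3 matrix relating the actual local gradient fields to the nominal ones, the apparent tensor measured in the phantom is
  Dapp(x) = L(x)⋅D0⋅L(x)ᵀ
with D0 the known isotropic diffusivity of water at the phantom's temperature. Expanding the entries of L(x) in low-order spherical harmonics and fitting to the measured Dapp(x) would then recover the nonlinearity.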

The advantage of this method is that the only phantom required is a bottle of water, the plainer the better. It would be important to make sure the water was still, so the phantom should be placed in the scanner bed carefully and then left for a while (overnight?) to let internal currents damp out. To calibrate the gradients precisely, the temperature of the phantom should be uniform and known, as well. Details, details, details.

This would be a nice M.S. or postbac project, but is probably too small for a Ph.D. project (unless it can be tricked out with more wrinkles).


Comments: 0 
Submitted by bpittman. on 2005-12-20 14:25.
An attempt to unify the various theories of DWI (q-space and tensors). Not complete, and maybe not correct.


Comments: 0 
Submitted by bpittman. on 2005-12-20 14:01.
Don't we wish.

What are the fundamental principles, ones that most people could agree with, on which a theory of FMRI data analysis could be built? I can think of two (so far):
  • Hemodynamic Response Function: FMRI data is too noisy to interpret single events, so we have to repeat many trials of each stimulus class, and so we have to assume that the results are the "same" each time and then do some statistics. The "sameness" assumption, plus the assumption of additivity, is the HRF principle.

    There are many potential ramifications of this principle for analysis, the simplest of which is the current favorite analysis for FMRI papers ("GLM"). Other possibilities include:

    • Assuming the HRF is the same unknown function across the whole brain, with unknown amplitudes in each voxel; leads to a kind of principal components deconvolution.
    • Assuming the HRF varies randomly a little between stimulus repetitions, either independently or coherently between voxels.

  • Blobs: That is to say, we are looking for regions of activation, not randomly scattered junk. I'm not sure of the ramifications of this, other than that we need to come up with a coherent scheme for describing possible spatial activation patterns; then the job of choosing amongst them will be soundly based. Wavelets? On the cortical gray-matter shell/surface?
And how to combine these two principles into
  (i) a coherent set of models for FMRI data upon which
  (ii) a coherent set of algorithms for extracting information can be based?
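
For concreteness, the simplest cash-out of the HRF principle (my shorthand): with stimulus times τk, "sameness" plus additivity says each voxel's signal is
  y(t) = ∑k β⋅h(t − τk) + baseline(t) + noise(t)
and the standard "GLM" analysis fixes the shape of h() and solves for the amplitude β in each voxel by linear regression; the variants above change what is assumed known about h().
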
Comments: 0 
Submitted by bpittman. on 2005-12-20 13:52.
Oblique datasets in AFNI - the work begins!

Reluctantly, I have decided that it is now necessary to fully support oblique 3D volumes in AFNI, since more and more people are acquiring oblique partial-coverage EPI volumes and then want to overlay the FMRI results from these onto non-oblique anatomicals. There are essentially 2 approaches:

  • (a) Transform everything to Talairach or MNI space and then everything is aligned (or at least, you can't overlay anything until you Talairach-ize). As far as I understand, this is what SPM and FSL do.
  • (b) Allow arbitrarily oriented volumes to be overlaid and processed.

The more drastic (b) is what I lean towards, since it fits into the AFNI philosophy better. This approach leads to a host of problems and issues, some harder than others, and some as yet unforeseen:

  • orientation in space will now be defined by a matrix, rather than a set of codes
  • must deal with old datasets whose headers don't contain the matrix, just the codes (easy)
  • must fix all dataset input and creation functions to make sure the matrix gets defined from all the formats AFNI supports (including MINC, NIFTI, CTF, ANALYZE)
  • must examine all 1400+ places in the AFNI source code, in 100+ files, where the "daxes" sub-struct of a dataset is accessed in order to determine what the impact is at that location
  • what to do about 3dZeropad? 3dresample? these are some of the programs that make special use of the orientation (e.g., '3dZeropad -I 7' adds 7 planes of 0s on the Inferior edge, but if the dataset is oblique, do we do the 'inferior-most' edge or just complain?)
  • must fix the EDIT_dset_items() and EDIT_*_copy() functions so that defining and changing the axes is coherent
  • 3drefit needs some changes
  • what to do with an orientation matrix that is not a diagonal multiple of an orthogonal matrix (i.e., the 3 axes of the dataset are not perpendicular)? Complain? Use the nearest orthogonal set of directions? Or actually allow non-perpendicular axes (skewed volumes)? I do not like this last choice, Sam I am.
  • the Matlab I/O functions must be modified to match the AFNI dataset header.
  • the few non-SSCC users who write AFNI compatible code need to be informed so they can adapt their programs.
  • Effect on realtime scanning?

I have started this process: last week I added a matrix 'ijk_to_dicom' to the daxes sub-struct, a function to load this matrix from the old orientation codes, one to compute the old orientation codes from the matrix, and an AFNI header attribute to store the matrix. The code is mostly in thd_matdaxes.c for now, but as more than 100 AFNI source code files refer to the daxes sub-struct, there will be a lot of changes coming down the pike. This will take a while, a long while, so I'm hoping to be able to incorporate the changes gradually. That's why I have the function that takes the matrix and computes the old orientation codes -- it means that compatibility with non-oblique datasets will be maintained even in as-yet unmodified code.
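
A toy version of loading such a matrix from the old codes, just to show the shape of the thing; the enum values and function here are hypothetical, not the thd_matdaxes.c code:

  #include <string.h>

  /* hypothetical per-axis orientation codes; DICOM/RAI convention:
     +x = Left, +y = Posterior, +z = Superior */
  typedef enum { R2L, L2R, A2P, P2A, I2S, S2I } oricode;

  /* Fill the 3x4 ijk-to-DICOM affine from per-axis codes, grid
     spacings del[] (mm, positive), and org[] = DICOM coordinate of
     voxel (0,0,0).  Columns 0..2 multiply (i,j,k); column 3 is the
     origin. */
  void ijk_to_dicom_from_codes(const oricode code[3], const double del[3],
                               const double org[3], double mat[3][4])
  {
      static const int    row[6] = {  0,  0,  1,  1,  2,  2 };
      static const double sgn[6] = { +1, -1, +1, -1, +1, -1 };

      memset(mat, 0, 3*4*sizeof(double));
      for (int a = 0; a < 3; a++)                /* dataset axes i,j,k */
          mat[row[code[a]]][a] = sgn[code[a]] * del[a];
      for (int r = 0; r < 3; r++)
          mat[r][3] = org[r];
  }

An oblique dataset is then simply one whose matrix columns are not parallel to the DICOM axes -- exactly what the old codes cannot express.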

The next few steps I'm planning are geared towards overlaying obliques properly. There's some more invisible structural support that needs building, and then the actual overlay business -- the return of warp-on-demand, since oblique datasets need to be interpolated to be displayed in the cardinal orientations of the AFNI windows. This in turn means I have to revisit the warp-on-demand code that I've carefully left untouched for about 9 years (I created it, but now I'm afraid of it -- the Frankenstein's monster of AFNI).

I don't know how long this will take. My enthusiasm for this is low, but I now feel it is required, and that I'm the only one who can do it since all the coordinate guts of AFNI are mine, built in from the very beginning in 1994.


Comments: 0 