# 3D cell nuclei segmentation based on gradient flow tracking

Gang Li^{1, 2}, Tianming Liu^{1, 3}, Ashley Tarokh^{1, 3}, Jingxin Nie^{1, 2}, Lei Guo^{2}, Andrew Mara^{4}, Scott Holley^{4} and Stephen TC Wong^{1, 3}

*BMC Cell Biology* **8**:40

**DOI:** 10.1186/1471-2121-8-40

© Li et al; licensee BioMed Central Ltd. 2007

**Received:** 17 January 2007

**Accepted:** 04 September 2007

**Published:** 04 September 2007

## Abstract

### Background

Reliable segmentation of cell nuclei from three-dimensional (3D) microscopic images is an important task in many biological studies. We present a novel, fully automated method for the segmentation of cell nuclei from 3D microscopic images, designed specifically to segment nuclei in images where the nuclei are closely juxtaposed or touching each other. The segmentation approach has three stages: 1) a gradient diffusion procedure, 2) gradient flow tracking and grouping, and 3) local adaptive thresholding.

### Results

Both qualitative and quantitative results on synthesized and original 3D images are provided to demonstrate the performance and generality of the proposed method. Both the over-segmentation and under-segmentation percentages of the proposed method are around 5%. The volume overlap, compared to expert manual segmentation, is consistently over 90%.

### Conclusion

The proposed algorithm is able to segment closely juxtaposed or touching cell nuclei obtained from 3D microscopy imaging with reasonable accuracy.

## Background

Reliable segmentation of cell nuclei from three-dimensional (3D) microscopic images is an important task in many biological studies, as it is required for any subsequent comparison or classification of the nuclei. For example, zebrafish somitogenesis is governed by a clock that generates oscillations in gene expression within the presomitic mesoderm [1, 2]. The subcellular localization of oscillating mRNA in each nucleus, imaged through multi-channel microscopy, can be used to identify different phases within the oscillation. To automate the classification of the phase of an individual nucleus, each nucleus within the presomitic mesoderm first needs to be accurately segmented.

In recent years, there has been significant effort towards the development of automated methods for 3D cell or cell nuclei image segmentation [3–9, 15, 16]. Thresholding-, watershed-, and active surface-based methods are among the most commonly used techniques. Unfortunately, thresholding-based methods often have difficulty with images that lack a well-defined, constant contrast between objects and background, and consequently with images containing clustered or juxtaposed nuclei. Watershed-based methods are also popular for segmenting clustered cell nuclei [3–5, 10], but they often over-segment the clusters; to address this, heuristic region-merging rules have been developed as a post-processing step [3–5]. Segmentation has also been approached with active surface-based methods [8, 9, 15, 16]. However, such algorithms depend inherently on the initial guess: if the initial guess is wrong, they struggle with clustered cell nuclei.

Despite active research and progress in the literature, developing a fully automated and robust computational algorithm for 3D cell nuclei segmentation remains a challenge in the presence of significant inherent variations in nuclei shape and size. Examples include cases where the contrast between nuclei and background is low, where nuclei differ in shape and size, and where the 3D images are of low quality [3, 4, 6–8]. Complications also arise when nuclei are juxtaposed or connected to one another, increasing the rate of over-segmentation or under-segmentation.

In this paper, we present a novel automated method that aims to tackle the aforementioned challenges in segmenting clustered or connected 3D cell nuclei. We approach the segmentation problem by first generating the gradient vector field of the 3D volume image, and then diffusing this field with an elastic deformable transform. The elastic deformable transform smoothes the noisy gradient vector field and propagates gradient vectors with large magnitudes into areas with weak gradient vectors. This gradient diffusion procedure results in a gradient flow field, in which the gradient vectors flow smoothly towards, or outwards from, the centers of the nuclei. Subsequently, a gradient flow tracking procedure is performed from each vector point to find the center to which that point flows. We group all points that flow to the same center into one region, which we call the attraction basin of that center. Once gradient flow tracking is complete, the boundaries between juxtaposed nuclei form naturally, and hence the juxtaposed nuclei are divided. The final step performs local thresholding in each attraction basin to extract the nuclei from their corresponding background. We have evaluated and validated this algorithm and present results attesting to its validity.

## Results

In this section, a series of experiments are designed to evaluate and validate the gradient flow tracking method for segmentation of 3D images with juxtaposed nuclei. Both qualitative and quantitative results on synthesized and original 3D images are provided to demonstrate the performance and general applicability of the proposed method.

### Validation using synthesized 3D image

where *R*_{a} is the automatically extracted region, *R*_{g} is the ground truth region, the ⋂ operator takes the intersection of two regions, and *S*(·) denotes the volume of a region.

We synthesized seven 3D nuclei images. As a quantitative validation, the average volume overlap over the seven cases is approximately 0.971, with a standard deviation of 0.014. As these results show, the proposed segmentation method achieves substantial volume overlap with the ground truth, indicating the accurate performance of the gradient flow tracking method.
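For illustration, the overlap measurement can be computed from boolean volumes. This is a minimal NumPy sketch: since the overlap equation is given only through the ⋂ and *S*(·) operators above, the Dice-style form 2·*S*(*R*_{a} ⋂ *R*_{g})/(*S*(*R*_{a}) + *S*(*R*_{g})) used below is an assumption, and `volume_overlap` is a hypothetical helper name.

```python
import numpy as np

def volume_overlap(auto_mask, truth_mask):
    """Dice-style volume overlap between the automatically extracted
    region R_a and the ground-truth region R_g (boolean volumes):
    2 * S(R_a intersect R_g) / (S(R_a) + S(R_g)),
    where S(.) counts voxels.  (The exact formula used in the paper
    may differ; this Dice form is an illustrative assumption.)"""
    inter = np.logical_and(auto_mask, truth_mask).sum()
    return 2.0 * inter / (auto_mask.sum() + truth_mask.sum())

# toy example: two partially overlapping 6x6x6 cubes in a 10x10x10 volume
a = np.zeros((10, 10, 10), dtype=bool)
a[2:8, 2:8, 2:8] = True
g = np.zeros((10, 10, 10), dtype=bool)
g[3:9, 3:9, 3:9] = True
print(round(volume_overlap(a, g), 3))  # → 0.579
```

A perfect segmentation yields an overlap of 1.0, and disjoint regions yield 0.0.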

### Validation using 3D *C. elegans* embryo images

We further validated the method using 3D *C. elegans* embryo images, provided by Dr. Sean Megason of the California Institute of Technology. The nuclei are labelled with histone-EGFP. The microscope used for the *C. elegans* image set is a Zeiss LSM510 with a 40× C-Apochromat objective and a 488 nm laser. The original voxel size is 0.14 × 0.14 × 0.74 micron in the x, y and z directions; in our experiment, the voxels are re-sampled to be isotropic in all directions. The details of the experimental and imaging settings are provided at http://www.digitalfish.org/beta/samples/.

Two metrics, over-segmentation and under-segmentation, were used to evaluate the segmentation method, in comparison to the visual inspection of an expert. Over-segmentation indicates that a nucleus has been separated into more than one object, or that an extracted object was not labeled as a nucleus. Under-segmentation indicates that a cluster of nuclei was not appropriately divided, or that a nucleus marked by visual inspection was not extracted at all.

Four 3D images of *C. elegans* embryos were used to evaluate the proposed segmentation method based on these two metrics. Table 1 shows the performance of the proposed segmentation method. On average, the over-segmentation and under-segmentation rates are 1.59% and 0.39% respectively, indicating a desirable performance of our segmentation method. The errors are probably caused by the interpolation and re-sampling, and by inherent noise in the images. For convenient visual inspection of the segmentation results, we provide the volume rendering of the original 3D image and the surface rendering of the segmentation results. As an example, Figure 7a shows the volume rendering of the original juxtaposed 3D nuclei; the segmentation results, represented by surface boundaries, are shown in Figure 7b. To validate the segmentation results, two experts each manually segmented the nuclei, and we computed the volume overlap between the automated result and that of each expert, as well as the volume overlap between the two experts. The mean volume overlap exceeds 95% for both expert 1 and expert 2, and the standard deviation is around 0.02 for both, indicating that the automated 3D cell nuclei segmentation results are comparable to manual segmentation results.

**Table 1.** The segmentation results of 3D *C. elegans* embryo images

| Image Index | True Number | Over-segmentation Number | Over-segmentation Percentage | Under-segmentation Number | Under-segmentation Percentage |
|---|---|---|---|---|---|
| 1 | 187 | 2 | 1.07% | 0 | 0.00% |
| 2 | 187 | 3 | 1.60% | 1 | 0.53% |
| 3 | 185 | 3 | 1.62% | 0 | 0.00% |
| 4 | 192 | 4 | 2.08% | 2 | 1.04% |
| Average | 187.75 | 3 | 1.59% | 0.75 | 0.39% |

### Validation using 3D zebrafish nuclei images

**Table 2.** The segmentation results of 3D cell nuclei images of zebrafish

| Image Index | True Number | Over-segmentation Number | Over-segmentation Percentage | Under-segmentation Number | Under-segmentation Percentage |
|---|---|---|---|---|---|
| 1 | 283 | 13 | 4.59% | 15 | 5.30% |
| 2 | 309 | 15 | 4.85% | 16 | 5.48% |
| 3 | 320 | 17 | 5.31% | 15 | 4.39% |
| 4 | 293 | 14 | 4.78% | 13 | 4.44% |
| 5 | 325 | 17 | 5.23% | 19 | 5.85% |
| 6 | 306 | 15 | 4.90% | 17 | 5.56% |
| 7 | 332 | 13 | 3.92% | 19 | 5.72% |
| 8 | 304 | 16 | 5.26% | 15 | 4.93% |
| 9 | 311 | 16 | 5.14% | 13 | 4.18% |
| 10 | 320 | 17 | 5.31% | 17 | 5.31% |
| Average | 310.3 | 15.3 | 4.93% | 15.6 | 5.03% |

## Discussion

Our method has several advantages over previous approaches. The major advantage is its ability to robustly segment densely packed, touching, or connected nuclei. Additionally, no sophisticated rules are used; the only assumption is that the centers of nuclei are brighter or darker than a nearby region. The fundamental difference between our method and existing methods lies in the diffused gradient vector information. Existing methods such as thresholding or watershed rely on intensity alone, which makes them sensitive to image noise and usually results in over-segmentation. In contrast, our gradient vector diffusion procedure propagates gradient vectors with large magnitudes into areas with weak gradient vectors and smoothes the noisy gradient vector field, while preserving the potential structural information of the field. For example, when two nuclei touch each other, the diffused gradient vectors still point toward the corresponding centers of the two nuclei. This step contributes greatly to the successful segmentation of touching nuclei. A disadvantage of the method is that it may have difficulty processing images of textured blob objects, since in that situation the gradient vectors at the centers of nuclei are cluttered and the brightness assumption is violated. Currently, the method is implemented in C/C++ without any external libraries. Without any optimization, it takes less than 50 seconds on an Intel Pentium 4 2.4 GHz machine with 1 GB of memory to segment a volume image of size 230 × 162 × 80. The running time can be reduced further with a multi-resolution implementation and code optimization. After evaluating this method on larger and more diverse image datasets, we intend to release the algorithm to the cell biology community.

## Conclusion

We presented a novel, automated algorithm for 3D cell nuclei segmentation based on gradient flow tracking. To validate the efficacy and performance of the proposed segmentation algorithm, we evaluated it by using synthesized and real biological images. The results show that the algorithm is able to segment juxtaposed nuclei correctly, a persistent problem in the field of cellular image analysis.

## Methods

### Gradient vector diffusion by elastic deformation transformation

Gradient information is an important factor in three-dimensional (3D) nuclei segmentation because, in any given nuclei image, the gradient vectors either point towards the central area of a bright nucleus or outwards from the central area of a dark nucleus. In practice, however, the gradient magnitude near the central area of a nucleus is very small, and the direction of the gradient vector there is usually untrustworthy due to image noise. Additionally, for nuclei of irregular shapes, the gradient vectors tend to be cluttered. Motivated by these facts, we introduce a physical model that diffuses the gradient vectors from the boundaries into the interior of the image to generate a smooth gradient field. Our gradient vector diffusion procedure propagates gradient vectors with large magnitudes into areas with weak gradient vectors and smoothes the noisy gradient vector field [11]; for a detailed introduction to gradient vector diffusion, we refer to [11]. To achieve this diffusion, we adopt an elastic deformation transformation, under which the image is modeled as an elastic sheet warped by an external force field. This model has previously been employed for image registration [12, 13], where the deformations of boundary points are fixed and the deformation field is then propagated to the inner regions of the image by solving the elastic model equation. Here, we extend this model to analyze 3D microscopic nuclei images.

The diffused gradient vector field **v**(*x*, *y*, *z*) = (*u*(*x*, *y*, *z*), *v*(*x*, *y*, *z*), *w*(*x*, *y*, *z*)), where *u*, *v* and *w* are the components of the diffused gradient vector along the *x*, *y* and *z* axes respectively, is defined in a 3D image as the solution to the partial differential equation (PDE) describing the deformation of an elastic sheet (the Navier equation of linear elasticity) [13]:

*μ*∇^{2}**v** + (*λ* + *μ*)∇*div*(**v**) + *q* ⋅ (∇*f* - **v**) = 0, (1)

where ∇^{2} is the Laplacian operator, *div* is the divergence operator, ∇ is the gradient operator, ∇*f* is the original gradient vector field, and the Lamé coefficients *μ* and *λ* describe the elastic properties of the material. In this paper, we aim to diffuse the gradient vectors toward the central areas of the nuclei to obtain a gradient flow field. Therefore, *f* is set to be

*f*(*x*, *y*, *z*) = *G*_{σ}(*x*, *y*, *z*) ∗ *I*(*x*, *y*, *z*),

where *I*(*x*, *y*, *z*) is the 3D intensity image, *G*_{σ}(*x*, *y*, *z*) is a 3D Gaussian function with standard deviation *σ*, and ∗ denotes convolution. Note that before computing the convolution and the gradient vectors, the images should be interpolated using a spline-based method and re-sampled to isotropic voxel sizes.

*q* is an indicator function specifying whether or not the displacement is pre-fixed at a position. In our method, the indicator function is set as

*q*(*x*, *y*, *z*) = 1 if ||∇*f*(*x*, *y*, *z*)|| > *Threshold*, and 0 otherwise,

where *Threshold* is set to 0. When *Threshold* is large, gradient vectors with small magnitudes are omitted, including both noisy gradient vectors and some useful ones; the *Threshold* therefore trades off keeping useful gradient vectors against removing noisy ones. The model is solved by treating *u*, *v* and *w* as functions of time:

**v**_{t}(*x*, *y*, *z*, *t*) = *μ*∇^{2}**v**(*x*, *y*, *z*, *t*) + (*λ* + *μ*)∇*div*(**v**(*x*, *y*, *z*, *t*)) + *q*(*x*, *y*, *z*)(∇*f*(*x*, *y*, *z*) - **v**(*x*, *y*, *z*, *t*)), (2)

where **v**_{t}(*x*, *y*, *z*, *t*) denotes the partial derivative of **v**(*x*, *y*, *z*, *t*) with respect to time *t*. The equation is decoupled as:

*u*_{t}(*x*, *y*, *z*, *t*) = *μ*∇^{2}*u*(*x*, *y*, *z*, *t*) + (*λ* + *μ*)(∇*div*(**v**(*x*, *y*, *z*, *t*)))_{x} + *q*(*x*, *y*, *z*)((∇*f*(*x*, *y*, *z*))_{x} - *u*(*x*, *y*, *z*, *t*))

*v*_{t}(*x*, *y*, *z*, *t*) = *μ*∇^{2}*v*(*x*, *y*, *z*, *t*) + (*λ* + *μ*)(∇*div*(**v**(*x*, *y*, *z*, *t*)))_{y} + *q*(*x*, *y*, *z*)((∇*f*(*x*, *y*, *z*))_{y} - *v*(*x*, *y*, *z*, *t*))

*w*_{t}(*x*, *y*, *z*, *t*) = *μ*∇^{2}*w*(*x*, *y*, *z*, *t*) + (*λ* + *μ*)(∇*div*(**v**(*x*, *y*, *z*, *t*)))_{z} + *q*(*x*, *y*, *z*)((∇*f*(*x*, *y*, *z*))_{z} - *w*(*x*, *y*, *z*, *t*))

By setting the spatial intervals Δ*x*, Δ*y*, Δ*z* and the time interval Δ*t* all to 1, and letting the indices *i*, *j*, *k* and *n* correspond to *x*, *y*, *z* and *t* respectively, the equations are approximated with finite differences.

**v** represents velocity; hence, by hydromechanics, the second term in Equation 2 denotes the compression level of a compressible fluid, and setting *div*(**v**) = 0 corresponds to an incompressible fluid. The terms *μ* and *λ* in Equation 2 determine the tradeoff between conformability to the pre-fixed deformation vectors and the smoothness of the deformation field [13]: when *μ* and *λ* are small, the pre-fixed deformation vectors are preserved, whereas large values of *μ* and *λ* yield a smoother deformation field. As an example, Figure 1 compares the zoomed diffused gradient vector field obtained with the elastic deformation transformation against the original gradient vector field of a slice from a 3D nucleus image. As is clear from Figure 1, the diffused vector field flows more smoothly towards the central areas of the nuclei than the original gradient vector field does. Moreover, even though the nuclei are closely juxtaposed, the diffused flow field splits along a clear boundary and flows towards the corresponding central area of each nucleus. This property contributes greatly to the success of 3D nucleus segmentation.

### Gradient flow tracking

Starting from any point **x** = (*x*, *y*, *z*), the next point **x**' = (*x'*, *y'*, *z'*) that **x** flows to in the diffused gradient field is computed as:

**x**' = round(**x** + **v**(**x**)/||**v**(**x**)||), (3)

where **v**(**x**) is the diffused gradient vector at point **x**, and "round" returns the nearest integer, so that **x**' is the neighboring voxel along the local flow direction. The angle between the diffused gradient vectors of these two adjacent points is determined as:

*θ* = arccos(**v**(**x**) ⋅ **v**(**x**') / (||**v**(**x**)|| ||**v**(**x**')||)). (4)

The algorithm of the gradient flow tracking is summarized as follows.

1. Randomly select a point **x** as the initial point **x**^{0}.

2. Obtain **x**^{*n*+1} (*n* = 0, 1, 2, ...) from **x**^{*n*} using Equation 3.

3. Compute the angle *θ*_{*n*} between the diffused gradient vectors at **x**^{*n*+1} and **x**^{*n*} with Equation 4. If *θ*_{*n*} is larger than $\frac{\pi}{2}$, stop.

4. Replace **x**^{*n*} with **x**^{*n*+1} and return to step 2.

Gradient flow tracking is applied to each point in the image. All points in the same attraction basin are grouped into the same cluster. Since it is time consuming to run the tracking algorithm for every point, in order to improve the performance of our method, gradient flow tracking is not applied to the points that have already been on the gradient flow trajectory of a previously processed pixel. Instead, these visited points are directly associated with the sink to which the path flows. This improvement not only speeds up the segmentation, but also yields reproducible segmentation results.
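The tracking-and-grouping procedure, including the speed-up of reusing visited trajectories, can be sketched as follows. This is a minimal NumPy sketch; the boundary clamping and the `max_steps` guard are illustrative safeguards not specified in the text.

```python
import numpy as np

def track_gradient_flow(u, v, w, max_steps=10000):
    """Follow the diffused gradient field from every voxel to its sink
    (the stepping and stopping rules of Equations 3 and 4); voxels whose
    trajectories reach the same sink receive the same label.  Voxels that
    already lie on a processed trajectory inherit that trajectory's sink."""
    shape = u.shape
    labels = -np.ones(shape, dtype=int)   # -1: not yet visited
    sinks = {}                            # sink coordinate -> label id
    for start in np.ndindex(shape):
        if labels[start] >= 0:
            continue
        path, x = [], start
        while labels[x] < 0 and len(path) < max_steps:
            path.append(x)
            vec = np.array([u[x], v[x], w[x]])
            norm = np.linalg.norm(vec)
            if norm < 1e-12:
                break                     # zero vector: x is a sink
            # Equation 3: step one voxel along the local flow direction
            nxt = tuple(int(round(c + d / norm)) for c, d in zip(x, vec))
            nxt = tuple(min(max(c, 0), s - 1) for c, s in zip(nxt, shape))
            nvec = np.array([u[nxt], v[nxt], w[nxt]])
            # Equation 4 stop rule: angle > pi/2  <=>  negative dot product
            if nxt == x or np.dot(vec, nvec) < 0:
                break
            x = nxt
        # reuse an existing label if the path hit a visited voxel
        label = labels[x] if labels[x] >= 0 else sinks.setdefault(x, len(sinks))
        for p in path:
            labels[p] = label
    return labels
```

Because every voxel on a trajectory is labeled once, each voxel is processed O(1) times, which is the source of the speed-up and of the reproducibility noted above.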

### Local adaptive thresholding

After the gradient flow tracking step, the image is segmented into smaller regions, each of which is expected to contain only a single nucleus. The nuclei segmentation problem is thus reduced to a binary classification problem: distinguishing the nuclei from the background within each small region. Therefore, an intensity thresholding method is appropriate for extracting the nuclei from the background. One option is the method of Otsu [14], which can extract the nucleus from each attraction basin; another is a more involved method employing techniques such as graph cut or level sets. Here, we employ the locally adaptive method of Otsu [14] because it can handle situations where the intensities of nuclei and background are not constant across an image. In each segmented region, pixels whose intensities are larger than the automatically determined local Otsu threshold are grouped as nuclei; the remaining pixels are grouped as background. Finally, an optional procedure is performed after extracting the nuclei to eliminate small regions that contain fewer pixels than a threshold.
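The local adaptive thresholding step can be sketched as follows. This is a minimal NumPy sketch: `otsu_threshold` re-implements Otsu's histogram criterion [14] from scratch, and `extract_nuclei`, `min_size`, and the 256-bin histogram are illustrative choices.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance w0*w1*(mu0 - mu1)^2 of the histogram."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                  # class-0 weight per cut point
    w1 = w0[-1] - w0                      # class-1 weight
    m0 = np.cumsum(hist * centers)        # class-0 cumulative mass
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = m0 / w0
        mu1 = (m0[-1] - m0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.nanargmax(between)]

def extract_nuclei(image, basins, min_size=20):
    """Apply Otsu's threshold inside each attraction basin; keep only
    foreground masks larger than min_size voxels (the optional step)."""
    nuclei = np.zeros(image.shape, dtype=bool)
    for label in np.unique(basins):
        region = basins == label
        t = otsu_threshold(image[region])
        mask = region & (image > t)
        if mask.sum() >= min_size:
            nuclei |= mask
    return nuclei
```

Computing the threshold per basin, rather than globally, is what makes the step robust to intensity variation across the image.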

### Summary of 3D cell nuclei segmentation method

The algorithm of the 3D cell nuclei segmentation method based on gradient flow tracking is summarized as follows.

1. Obtain the diffused gradient vector field using the elastic deformation transformation.

2. From each point, run the gradient flow tracking procedure, and label each passed pixel with a converged sink position.

3. Combine the attraction basins of the sinks whose distance is less than three pixels.

4. Assign the same label to the points in the same attraction basin.

5. Perform local adaptive thresholding in each attraction basin to extract the nucleus.

6. Optional procedure: eliminate regions with fewer pixels than a threshold *T*.

### Running example

## Declarations

### Acknowledgements

This work is funded by a Bioinformatics Research Center Program Grant from Harvard Center for Neurodegeneration and Repair, Harvard Medical School (STCW). We would like to thank Dr. Sean Megason of California Institute of Technology for providing the *C. elegans* embryo images.


## References

- Holley SA, Geisler R, Nüsslein-Volhard C: Control of her1 expression during zebrafish somitogenesis by a Delta-dependent oscillator and an independent wavefront activity. Genes & Dev. 2000, 14: 1678-1690.
- Jülich D, Hwee LC, Round J, Nicoliaje C, Scroeder J, Davies A, Geisler R, Lewis J, Jiang YJ, Holley SA: beamter/deltaC and the role of Notch ligands in the zebrafish somite segmentation, hindbrain neurogenesis and hypochord differentiation. Dev Biol. 2005, 286: 391-404. 10.1016/j.ydbio.2005.06.040.
- Lin G, Adiga U, Olson K, Guzowski J, Barnes C, Roysam B: A hybrid 3-D watershed algorithm incorporating gradient cues and object models for automatic segmentation of nuclei in confocal image stacks. Cytometry. 2003, 56A: 23-36. 10.1002/cyto.a.10079.
- Lin G, Chawla MK, Olson K, Guzowski JF, Barnes CA, Roysam B: Hierarchical, model-based merging of multiple fragments for improved three-dimensional segmentation of nuclei. Cytometry. 2005, 63A: 20-33. 10.1002/cyto.a.20099.
- Umesh Adiga PS, Chaudhuri BB: An efficient method based on watershed and rule-based merging for segmentation of 3-D histo-pathological images. Pattern Recognition. 2001, 34: 1449-1458. 10.1016/S0031-3203(00)00076-5.
- Belien JAM, Ginkel HAHM, Tekola P, Ploeger LS, Poulin NM, Baak JPA, Diest PJ: Confocal DNA Cytometry: A Contour-Based Segmentation Algorithm for Automated Three-Dimensional Image Segmentation. Cytometry. 2002, 49: 12-21. 10.1002/cyto.10138.
- Wahlby C, Sintorn IM, Erlandsson F, Borgefors G, Bengtsson E: Combining intensity, edge and shape information for 2D and 3D segmentation of cell nuclei in tissue sections. J Microsc. 2004, 215: 67-76. 10.1111/j.0022-2720.2004.01338.x.
- Dufour A, Shinin V, Tajbakhsh S, Guillen-Aghion N, Olivo-Marin JC, Zimmer C: Segmenting and Tracking Fluorescent Cells in Dynamic 3-D Microscopy with Coupled Active Surfaces. IEEE Trans Image Processing. 2005, 14: 1396-1410. 10.1109/TIP.2005.852790.
- Sarti A, de Solorzano CO, Locket S, Malladi R: A Geometric Model for 3-D Confocal Image Analysis. IEEE Trans Biomedical Engineering. 2000, 47: 1600-1609. 10.1109/10.887941.
- Vincent L, Soille P: Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans Pattern Anal Mach Intell. 1991, 13: 583-598. 10.1109/34.87344.
- Xu C, Prince J: Snakes, shapes, and gradient vector flow. IEEE Trans Image Processing. 1998, 7: 359-369. 10.1109/83.661186.
- Bajcsy R, Kovacic S: Multiresolution elastic matching. Computer Vision, Graphics, and Image Processing. 1989, 46: 1-21. 10.1016/S0734-189X(89)80014-3.
- Davatzikos C, Prince J, Bryan R: Image Registration Based on Boundary Mapping. IEEE Trans Med Imaging. 1996, 15: 112-115. 10.1109/42.481446.
- Otsu N: A threshold selection method from gray-level histograms. IEEE Trans Systems Man Cybernetics. 1979, 9: 62-66.
- Ortiz de Solorzano C, Garcia Rodriguez E, Jones A, Pinkel D, Gray JW, Sudar D, Lockett SJ: Segmentation of confocal microscope images of cell nuclei in thick tissue sections. J Microsc. 1999, 193 (Pt 3): 212-226. 10.1046/j.1365-2818.1999.00463.x.
- De Solorzano CO, Malladi R, Lelievre SA, Lockett SJ: Segmentation of nuclei and cells using membrane related protein markers. J Microsc. 2001, 201 (Pt 3): 404-415. 10.1046/j.1365-2818.2001.00854.x.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.