MRI PROSTATE SEGMENTATION

PROJECT COLLABORATORS

NIH, Clinical Center, Imaging Biomarkers and CAD Laboratory:
Ronald M. Summers (MD, PhD, Chief, Director), Holger R. Roth (PhD), Nathan Lay (PhD), Le Lu (PhD)
NCI, Molecular Imaging Program:
Peter Choyke (MD, Chief, Director)
NCI, Center for Cancer Research, Urologic Oncology Branch:
Peter Pinto (MD, Chief, Director)
NIH, Clinical Center, Center for Interventional Oncology:
Brad Wood (MD, Chief, Director)
CIT, OIR, Signal Processing and Instrumentation Section:
Tom Pohida (PhD, Chief)

PUBLICATION

Cheng, R., Roth, H.R., Lu, L., Turkbey, B., Gandler, W., McCreedy, E.S., Choyke, P., McAuliffe, M.J., Summers, R.M. Automatic MR prostate segmentation by deep learning with holistically-nested networks. (Accepted, 2017). Journal of Medical Imaging.

Cheng, R., Roth, H.R., Lay, N., Lu, L., Turkbey, B., Gandler, W., McCreedy, E.S., Choyke, P., Summers, R. M., McAuliffe, M. J. (Feb, 2017). Automatic MR Prostate Segmentation by Deep Learning with Holistically-Nested Network. SPIE Medical Imaging.

Cheng, R., Lay, N., Roth, H.R., Lu, L., Turkbey, B., Mertan, F., Gandler, W., McCreedy, E.S., Choyke, P., McAuliffe, M.J., Summers, R.M. Deep Learning with Orthogonal Volumetric HED Prostate Segmentation and 3D Surface Reconstruction Model of Prostate MRI. (April, 2017). IEEE ISBI.

Cheng, R., Roth, H.R., Lu, L., Wang, S., Turkbey, B., Gandler, W., McCreedy, E.S., Agarwal, H.K., Choyke, P., Summers, R.M., McAuliffe, M.J. Active Appearance Model and Deep Learning for More Accurate Prostate Segmentation on MRI. (Feb, 2016). SPIE Medical Imaging.

Cheng, R., Turkbey, B., Gandler, W., Agarwal, H.K., Shah, V.P., Bokinsky, A., McCreedy, E.S., Wang, S., Sankineni, S., Bernardo, M., Pohida, T., Choyke, P., McAuliffe, M.J. Atlas Based AAM and SVM Model for Fully Automatic MRI Prostate Segmentation. (Aug, 2014). IEEE EMBC.

Cheng, R., Bernardo, M., Senseney, J., Bokinsky, A., Gandler, W., Turkbey, B., Pohida, T., Choyke, P., McAuliffe, M.J. Segmentation and Surface Reconstruction Model of Prostate MRI to Improve Prostate Cancer Diagnosis. (April, 2013). IEEE ISBI.

Cheng, R., Turkbey, B., Senseney, J., Bokinsky, A., Gandler, W., McCreedy, E.S., Pohida, T., Choyke, P., McAuliffe, M.J. (Feb, 2013). SPIE Medical Imaging.

PROJECT BRIEF

Multi-parametric MRI (mpMRI) of the prostate has been shown to be effective for detecting likely regions of prostate cancer. These regions are then targeted for biopsy to confirm and grade the cancer. A plethora of computer-aided detection (CAD) systems have been developed and evaluated to assist radiologists in detecting prostate cancer in mpMRI. These systems require or benefit from whole-prostate and central-gland segmentation, which restricts the region of interest a CAD system is trained or evaluated on and sometimes provides additional anatomical information. There is also a niche application of prostate segmentation in which 3D slicing molds are prepared from MRI images prior to prostatectomy. The molds are then used to slice the prostate in an orientation consistent with the MRI for later analysis. Manual segmentation of the prostate and its anatomical structures can be prohibitively time-consuming as well as error-prone. Fully automatic segmentation systems are expected to improve efficiency and reduce error.

We developed a system for automatic prostate and central-gland segmentation in T2W MRI using Holistically-Nested Networks (HNNs), which have also been successfully applied to segmenting lymph nodes, the pancreas, and other organs. The automatic segmentation proceeds as follows:

  1. An HNN model is used on each 2D axial slice of a T2W image to produce a 2D segmentation mask of the prostate and central gland.
  2. Optionally, specialized HNN models may be used on 2D sagittal and/or coronal image slices to produce sagittal and/or coronal 2D segmentation masks.
  3. The 2D segmentation masks are stacked to produce a volume.
  4. Rolling-ball smoothing is applied to the mask to smooth the 3D mesh (or merge the other optional masks), followed by segmentation-mesh smoothing and hole filling by solving a Laplace system of equations.

The result is a 3D segmentation mesh or binary mask of the prostate and central gland.
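The stacking and smoothing steps (3 and 4) above can be sketched in Python. This is a minimal illustration, not MIPAV's implementation: scipy's binary morphology with a spherical kernel stands in for the rolling-ball smoothing, the Laplace-based mesh processing is not reproduced, and the toy masks stand in for real HNN output.

```python
# Sketch of steps 3-4: stack per-slice 2D masks into a volume, then apply
# rolling-ball style smoothing (morphological closing with a spherical
# kernel) and hole filling.
import numpy as np
from scipy import ndimage

def spherical_structure(radius):
    """Boolean ball of the given voxel radius, used as the rolling-ball kernel."""
    r = int(radius)
    z, y, x = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return x**2 + y**2 + z**2 <= r**2

def masks_to_volume(slice_masks, radius=2):
    """Stack 2D axial masks (step 3) and smooth/fill the result (step 4)."""
    vol = np.stack(slice_masks, axis=0).astype(bool)   # (slices, H, W)
    ball = spherical_structure(radius)
    vol = ndimage.binary_closing(vol, structure=ball)  # rolling-ball style smoothing
    vol = ndimage.binary_fill_holes(vol)               # fill interior holes
    return vol

# Toy example: a stack of square 2D "prostate" masks with one interior hole.
masks = [np.zeros((32, 32), dtype=bool) for _ in range(8)]
for m in masks:
    m[8:24, 8:24] = True
masks[4][15, 15] = False                               # interior hole to fill
volume = masks_to_volume(masks)
print(volume.shape, bool(volume[4, 15, 15]))           # hole is filled
```

The result of this sketch, like the real pipeline, is a 3D binary mask from which a surface mesh can be extracted.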

METHODS

We developed applications using deep convolutional neural networks (DCNNs), which have become an important area of medical imaging research. Using MIPAV as a foundation, we implemented a new machine learning component inside the MIPAV software. We applied two generations of DCNN models in our research, targeting two MICCAI challenge problems (MRI prostate and knee), and achieved superior performance over traditional methods. We first applied the first-generation AlexNet [1] DCNN model as a second-tier refinement framework for VOI contours segmented by an atlas-based AAM (Active Appearance Model). The model performs 2D patch-based pixel prediction, as shown in Figure 1. The overall architecture of the DCNN is depicted in Figure 2.

Figure 1. AlexNet patches generation and VOIs contours prediction. (a) patches along normal line, (b) probability map, (c) final contour from probability.
Figure 2. Alex-net DCNN architecture
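
As an illustration of the patch-based refinement in Figure 1, the sketch below samples candidate patch centers along each contour point's normal line and keeps the most probable one. The `classify` callable is a stand-in for the trained AlexNet, and the toy image and contour are hypothetical.

```python
import numpy as np

def contour_normals(points):
    """Unit normals of a closed 2D contour via central differences of the tangent."""
    tangents = np.roll(points, -1, axis=0) - np.roll(points, 1, axis=0)
    normals = np.stack([tangents[:, 1], -tangents[:, 0]], axis=1).astype(float)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    return normals

def extract_patch(image, center, size):
    """Square patch around an integer (row, col) center, clipped at borders."""
    h = size // 2
    r, c = center
    return image[max(r - h, 0):r + h + 1, max(c - h, 0):c + h + 1]

def refine_contour(image, points, classify, half_range=5, patch=7):
    """Move each AAM contour point to the most probable patch center
    sampled along its normal line (cf. Figure 1)."""
    normals = contour_normals(points)
    refined = []
    for p, n in zip(points, normals):
        offsets = np.arange(-half_range, half_range + 1)
        centers = np.round(p + offsets[:, None] * n).astype(int)
        probs = [classify(extract_patch(image, c, patch)) for c in centers]
        refined.append(centers[int(np.argmax(probs))])
    return np.array(refined)

# Toy example: bright square "prostate", circular initial contour, and a
# dummy classifier (patch mean intensity) in place of the AlexNet.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
pts = np.stack([32 + 10 * np.cos(theta), 32 + 10 * np.sin(theta)], axis=1)
out = refine_contour(img, pts, classify=lambda patch: patch.mean())
print(out.shape)  # one refined (row, col) point per input point
```

In the actual system the probabilities come from the AlexNet's per-patch prediction, yielding the probability map and final contour in Figure 1(b)-(c).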

Additionally, we investigated the feasibility of applying the Holistically-Nested Network (HNN) [2] deep-learning model to MRI segmentation of both the central gland and the whole prostate. The HNN architecture was adapted from the VGGNet-16 [3] architecture by adding a side-output layer to each convolutional stage, as shown in Figure 3. It has 5 stages at different scale levels, depicted as the colored boxes. HNN computes image-to-image (pixel-to-pixel) edge-map predictions holistically. Figure 4 shows the predicted probability map from the HNN model. As pre-processing, we applied N4 bias-field correction [4], cropping, and histogram and resolution equalization to the MRI images to improve image contrast and quality. We then trained a single HNN model on MRI slices and CED (coherence-enhancing diffusion) slices together. This approach achieved state-of-the-art performance compared to other results in the literature. We used this model as the pre-processing step to segment the whole prostate and the central gland before prostate cancer detection in the MICCAI ProstateX 2017 challenge. With more than 200 teams in the competition, our method, in collaboration with Dr. Summers's group (CC), was awarded 3rd place.

Figure 3. Schematic view of enhanced HNN architecture


Figure 4. Probability maps from the enhanced HNN model
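
The multi-scale side-output fusion at the heart of HNN can be illustrated with a small numpy sketch. Nearest-neighbour upsampling stands in for the network's learned deconvolutions, the logits are random placeholders, and the uniform fusion weights are an assumption; in the real network both the side outputs and the fusion weights are learned.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def upsample_nearest(a, factor):
    """Nearest-neighbour upsampling stand-in for HNN's deconvolution layers."""
    return a.repeat(factor, axis=0).repeat(factor, axis=1)

def fuse_side_outputs(side_logits, fuse_weights):
    """Upsample each stage's side-output logits to full resolution and
    combine them with the fusion weights into one probability map."""
    full = side_logits[0].shape[0]
    maps = [upsample_nearest(s, full // s.shape[0]) for s in side_logits]
    fused = sum(w * m for w, m in zip(fuse_weights, maps))
    return sigmoid(fused)

# Five stages at strides 1, 2, 4, 8, 16 on a 64x64 slice (random logits).
rng = np.random.default_rng(0)
sides = [rng.standard_normal((64 // s, 64 // s)) for s in (1, 2, 4, 8, 16)]
prob = fuse_side_outputs(sides, fuse_weights=[0.2] * 5)
print(prob.shape)  # full-resolution probability map, values in (0, 1)
```

The fused map plays the role of the probability maps shown in Figure 4, which are then thresholded into per-slice masks.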

Another project uses volumetric Holistically-Nested Edge Detection (HED) segmentation and a 3D surface reconstruction model for MR prostate images. We applied HED segmentation to orthogonal prostate images and generated a high-resolution 3D prostate surface from the low-resolution MR images. The SolidWorks CAD system takes the 3D surface as input and generates the 3D prostate mold shown in Figure 5, which is then fabricated via 3D printing. The mold is used to slice the prostate specimen so that pathology images can be generated in an orientation consistent with the MRI. Radiologists compare the pathology images with MR images and other multi-modal images to detect prostate cancer. The contributions of this work are: 1) using 2D-based volumetric HED segmentation to achieve state-of-the-art performance, better and faster than 3D DCNN counterparts; 2) keeping the processing time under 1 minute per test image, a significant improvement over the clinical processing time of 2 to 3 hours per image; 3) correcting HED segmentation errors through orthogonal compensation; and 4) contributing the whole application freely to the medical research community.
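
The orthogonal compensation idea can be sketched as a per-voxel average of the three view-wise probability volumes, so that an error in one view is outvoted by the other two. The axis conventions and the majority threshold below are illustrative assumptions, not the exact implementation.

```python
import numpy as np

def merge_orthogonal(p_axial, p_sagittal, p_coronal, thresh=0.5):
    """Average the three per-view HED probability volumes in a common
    (z, y, x) frame and apply a majority-style threshold."""
    sag = np.moveaxis(p_sagittal, 0, 2)   # assumed (x, z, y) -> (z, y, x)
    cor = np.moveaxis(p_coronal, 0, 1)    # assumed (y, z, x) -> (z, y, x)
    return (p_axial + sag + cor) / 3.0 >= thresh

# Toy cube segmented in all three views; the sagittal view misses one voxel.
ax = np.zeros((8, 8, 8))
ax[2:6, 2:6, 2:6] = 1.0
sag = np.transpose(ax, (2, 0, 1)).copy()  # same cube in (x, z, y) order
cor = np.transpose(ax, (1, 0, 2)).copy()  # same cube in (y, z, x) order
sag[3, 3, 3] = 0.0                        # a false negative in one view
merged = merge_orthogonal(ax, sag, cor)
print(merged.shape, bool(merged[3, 3, 3]))  # the missed voxel is recovered
```

The merged mask then feeds the 3D surface reconstruction used for mold generation.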

Figure 5. Prostate 3D mold creation and 3D printing


REFERENCES

  1. Krizhevsky, A., Sutskever, I., Hinton, G. ImageNet classification with deep convolutional neural networks. (2012). Advances in Neural Information Processing Systems 25, NIPS.
  2. Xie, S., Tu, Z. Holistically-nested edge detection. (2015). Proceedings of the IEEE ICCV, 1395-1403.
  3. Simonyan, K., Zisserman, A. Very deep convolutional networks for large-scale image recognition. (2015). ICLR.
  4. Tustison, N.J., Avants, B.B., Cook, P.A., Zheng, Y., Egan, A., Yushkevich, P.A., and Gee, J.C. N4ITK: Improved N3 Bias Correction. (June 2010). IEEE Transactions on Medical Imaging, 29(6):1310-1320.
