EYE TRACKING SYSTEM FOR CANCER DETECTION

PROJECT COLLABORATORS

NCI, Molecular Imaging Program:
Peter Choyke (MD, Chief, Director), Baris Turkbey (MD)
NIH, Clinical Center, Center of Interventional Oncology:
Brad Wood (MD, Chief, Director), Haydar Celik (PhD)
Center of Research in Computer Vision, University of Central Florida:
Ulas Bagci (PhD)

PUBLICATION

Bagci, U., Celik, H., Turkbey, B., Cheng, R., McCreedy, E.S., Choyke, P., McAuliffe, M.J., Wood, B. Developing Eye Tracking Environment for Prostate Cancer Diagnosis Using Multi-parametric MRI. (April, 2017). ISMRM.

Bagci, U., Celik, H., Turkbey, B., Cheng, R., McCreedy, E.S., Choyke, P., McAuliffe, M.J., Wood, B. Gaze2Segment: A Pilot Study for Integrating Eye-Tracking Technology into Medical Image Segmentation. (Oct, 2016). MICCAI workshop.

PROJECT BRIEF

We are currently developing a unique MIPAV application with an eye-tracking system, which integrates with machine learning algorithms for the automatic diagnosis and quantification of diseases such as lung and prostate cancer, as shown in the figure. This is a collaborative project with groups led by Dr. Brad Wood and Dr. Peter Choyke.

In this study, we developed a novel system that integrates biological and computer vision techniques to support radiologists' reading experience with an automatic image segmentation task. During the diagnostic assessment of lung CT or MRI scans, the radiologists' gaze information was used to create a visual attention map. This map was combined with a computer-derived saliency map extracted from the CT or MRI images. The visual attention map served as an input roughly indicating the location of a region of interest, i.e., the cancer region. The computer-derived saliency information, in turn, was used to find foreground and background cues for the object of interest found in the previous step. These cues initiate a seed-based delineation process. The proposed system achieved a segmentation accuracy of 86% Dice similarity coefficient and 1.45 mm Hausdorff distance. To the best of our knowledge, the system is the first true integration of eye-tracking technology into a medical image segmentation task without the need for any further user interaction.
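The combination of the gaze-derived attention map with the computer-derived saliency map to obtain foreground/background seed cues can be sketched as below. This is a minimal illustration, not the published method: the thresholds, the elementwise-product fusion, and the function name `seed_cues` are all assumptions.

```python
import numpy as np

def seed_cues(attention_map, saliency_map, fg_thresh=0.8, bg_thresh=0.2):
    """Derive foreground/background seed cues for seed-based delineation.

    attention_map: gaze-derived visual attention map, normalized to [0, 1]
    saliency_map:  computer-derived saliency of the image, normalized to [0, 1]
    Thresholds and the product-based fusion are illustrative choices.
    """
    # Agreement between where the radiologist looked and where the
    # image itself is salient.
    combined = attention_map * saliency_map
    peak = combined.max()
    fg_seeds = combined >= fg_thresh * peak   # likely inside the object
    bg_seeds = combined <= bg_thresh * peak   # likely background
    return fg_seeds, bg_seeds
```

The resulting boolean masks would then initialize a seed-based segmentation (e.g., graph cut or region growing) over the CT or MRI slice.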

MIPAV EYE-TRACKER APPLICATIONS FOR CANCER DETECTION

Prostate cancer detection.

The system enables visual search/perception studies using multi-parametric MRI of prostate cancer. Four different image types used by molecular imaging radiologists were synchronized in the system: T2-weighted, diffusion-weighted, apparent diffusion coefficient map, and dynamic contrast enhanced images. A gaze map was successfully created for each image type separately using the eye-tracker data, and radiologists' gaze information was successfully extracted from the MIPAV multi-window MRI viewer.
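Attributing each eye-tracker sample to one of the synchronized viewer windows, and converting it into that image's coordinate system, could look like the following sketch. The viewport layout, function name, and linear screen-to-pixel mapping are assumptions for illustration; MIPAV's actual viewer geometry is not shown here.

```python
def gaze_to_image_coords(gx, gy, viewport, image_shape):
    """Map an eye-tracker screen sample (gx, gy) into image pixel coordinates.

    viewport:    (x0, y0, width, height) of one MRI window on screen (assumed)
    image_shape: (rows, cols) of the displayed slice
    Returns (row, col), or None if the gaze falls outside this window
    (e.g., on another of the four synchronized views).
    """
    x0, y0, w, h = viewport
    if not (x0 <= gx < x0 + w and y0 <= gy < y0 + h):
        return None
    rows, cols = image_shape
    # Linear mapping from window-relative screen position to pixel indices.
    col = int((gx - x0) / w * cols)
    row = int((gy - y0) / h * rows)
    return row, col
```

Running every sample through each window's viewport yields a per-image-type stream of gaze points, from which the separate gaze maps can be accumulated.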

Lung cancer detection.

Five steps are performed for the segmentation task; the input is inferred from the eye-tracking data.

Step 1: Real-time tracking of the radiologist's eye movements to extract gaze information and map it onto the CT scans (i.e., converting eye-tracker data into the image coordinate system).
Step 2: Jitter removal to filter out unwanted eye movements and stabilize the gaze information.
Step 3: Creating visual attention maps from the gaze information and locating the object of interest at the most important attention points.
Step 4: Obtaining computer-derived local saliency and gradient information from the gray-scale CT images to identify foreground and background cues for the object of interest.
Step 5: Segmenting the object of interest (identified in Step 3) based on the inferred cues (identified in Step 4).
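Step 2 above can be illustrated with a simple sliding-median filter over the gaze samples, which suppresses isolated jitter spikes while preserving slower, deliberate eye movements. This is only a sketch of one plausible jitter-removal scheme; the window length and the median choice are assumptions, not the filter used in the published system.

```python
import numpy as np

def stabilize_gaze(samples, window=5):
    """Jitter-removal sketch: sliding median over a gaze trajectory.

    samples: (N, 2) array of gaze points in image coordinates
    window:  odd window length (illustrative choice)
    """
    samples = np.asarray(samples, dtype=float)
    half = window // 2
    out = np.empty_like(samples)
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        # Median of the neighborhood, taken per coordinate (x and y).
        out[i] = np.median(samples[lo:hi], axis=0)
    return out
```

A single outlier sample inside the window is replaced by the median of its neighbors, so brief tracker glitches do not distort the downstream attention map.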