Publication
MIPR 2019
Conference paper
A Novel Framework for 3D-2D Vertebra Matching
Abstract
3D-2D medical image matching is a crucial task in image-guided surgery, image-guided radiation therapy, and minimally invasive surgery. The task relies on identifying the correspondence between a 2D reference image and the 2D projection of the 3D target image. In this paper, we propose a novel image-matching framework between 3D CT projections and 2D X-ray images, tailored for vertebra images. The main idea is to learn a vertebra detector by means of a deep neural network. Each detected vertebra is represented by a bounding box in the 3D CT projection. Next, the bounding box annotated by the doctor on the X-ray image is matched to the corresponding box in the 3D CT projection. We evaluate the proposed method on a 3D-2D registration dataset that we collected ourselves. The experimental results show that our framework outperforms state-of-the-art neural-network-based keypoint matching methods.
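The abstract does not specify how the doctor-annotated box on the X-ray is matched to a detected box in the CT projection. As an illustrative sketch only (the selection criterion, intersection-over-union, and all function names below are assumptions, not taken from the paper), one could pick the detected box with the greatest spatial overlap:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlap rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_annotated_box(annotated_box, detected_boxes):
    """Return (index, score) of the detected box that best overlaps
    the annotated box. Hypothetical helper, not from the paper."""
    scores = [iou(annotated_box, b) for b in detected_boxes]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best, scores[best]

# Example: one annotated box on the X-ray, three detections in the
# CT projection; the second detection overlaps it most closely.
annotated = (10, 10, 50, 50)
detections = [(100, 100, 140, 140), (12, 8, 52, 48), (0, 0, 20, 20)]
idx, score = match_annotated_box(annotated, detections)
print(idx)  # → 1
```

In practice the paper's learned detector would supply `detections`, and a purely geometric criterion like IoU is only one possible choice; appearance features from the network could equally drive the matching.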