A Novel Framework for 3D-2D Vertebra Matching
3D-2D medical image matching is a crucial task in image-guided surgery, image-guided radiation therapy, and minimally invasive surgery. The task relies on identifying the correspondence between a 2D reference image and the 2D projection of a 3D target image. In this paper, we propose a novel image matching framework between 3D CT projections and 2D X-ray images, tailored for vertebra images. The main idea is to learn a vertebra detector by means of a deep neural network. Each detected vertebra is represented by a bounding box in the 3D CT projection. Next, the bounding box annotated by the doctor on the X-ray image is matched to the corresponding box in the 3D CT projection. We evaluate the proposed method on a 3D-2D registration dataset that we collected ourselves. The experimental results show that our framework outperforms state-of-the-art neural network-based keypoint matching methods.
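The abstract does not specify the criterion used to match the doctor-annotated X-ray box to a detected box in the CT projection. A common and simple choice for such box-to-box assignment is intersection-over-union (IoU); the sketch below illustrates that idea only, and the function names (`iou`, `match_box`) are hypothetical, not taken from the paper.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Overlap area is zero when the boxes do not intersect.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_box(annotated_box, detected_boxes):
    """Return (index, score) of the detected box with the highest IoU
    against the annotated reference box."""
    scores = [iou(annotated_box, b) for b in detected_boxes]
    best = int(np.argmax(scores))
    return best, scores[best]
```

In practice, an assignment like this would be computed after the detector has produced candidate vertebra boxes in the CT projection; thresholding the best IoU score can reject spurious matches.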