Tree-loss function for training neural networks on weakly-labelled datasets
Neural networks are powerful tools for medical image classification and segmentation. However, existing network structures and training procedures assume that the output classes are mutually exclusive and equally important. Many medical image datasets do not satisfy these conditions. For example, some skin disease datasets contain images labelled only with a coarse-grained class (such as Benign) in addition to images with fine-grained labels (such as a Benign subclass called Blue Nevus), and conventional neural networks cannot leverage such additional data for training. Moreover, in clinical decision making, some classes (such as skin cancer, e.g. Melanoma) often carry more importance than other lesion types. We propose a novel Tree-Loss function for training and fine-tuning a neural network classifier using all available labelled images. The key step is the definition of a class taxonomy tree, which describes the relations between labels. The tree can also be adjusted to reflect the desired importance of each class. These steps can be performed by a domain expert without detailed knowledge of machine learning techniques. Our experiments demonstrate improved performance compared with the conventional approach, even without using additional data.
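The abstract does not give the exact form of the Tree-Loss, but one natural reading of a taxonomy-based loss is: a fine-grained label contributes the usual negative log-likelihood of its leaf, while a coarse-grained label contributes the negative log of the summed probability of its descendant leaves, so weakly-labelled images still provide a training signal. The sketch below illustrates this idea; the `TAXONOMY` mapping, class names, and function names are illustrative assumptions, not the paper's actual formulation.

```python
import math

# Illustrative two-level taxonomy (assumed, not from the paper):
# coarse classes map to their fine-grained leaf classes.
TAXONOMY = {
    "Benign": ["Blue Nevus", "Dermatofibroma"],
    "Malignant": ["Melanoma", "Basal Cell Carcinoma"],
}
LEAVES = [leaf for fines in TAXONOMY.values() for leaf in fines]

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def tree_loss(logits, label):
    """Negative log-likelihood of the labelled taxonomy node.

    A fine-grained label scores its own leaf probability; a coarse
    label scores the summed probability of its descendant leaves,
    so an image labelled only "Benign" can still be trained on.
    """
    probs = softmax(logits)
    if label in TAXONOMY:  # coarse label: sum over descendant leaves
        p = sum(probs[LEAVES.index(leaf)] for leaf in TAXONOMY[label])
    else:                  # fine-grained leaf label
        p = probs[LEAVES.index(label)]
    return -math.log(p)
```

Under this reading, the loss for a coarse label is never larger than the loss for any one of its fine-grained children, since the coarse label aggregates probability mass over all of its descendants.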