Portrait segmentation has attracted increasing attention in recent years due to the popularity of selfie images. Compared to general semantic segmentation problems, portrait segmentation focuses on facial areas and imposes higher accuracy requirements, especially along the boundaries. To improve segmentation performance, we propose a boundary-sensitive deep neural network (BSN) that achieves better accuracy around portrait boundaries. BSN introduces three novel techniques. First, an individual boundary-sensitive mask is constructed by dilating the contour line and assigning the boundary pixels multi-class labels. Second, a global boundary-sensitive mask is employed as a position-sensitive prior to further constrain the overall shape of the segmentation map. Third, a boundary-sensitive attribute classifier is trained jointly with the segmentation network to reinforce the network with semantic boundary-shape information. We evaluate BSN on the state-of-the-art public portrait segmentation dataset, i.e., the PFCN dataset, as well as on portrait images collected from three other popular image segmentation datasets: COCO, COCO-Stuff, and PASCAL VOC. Our method achieves superior quantitative and qualitative performance over state-of-the-art methods on the evaluated datasets, and in particular produces better visual quality in the portrait boundary regions.