Characterization of lesions by artificial intelligence (AI) has been the subject of extensive research. In recent years, many studies have demonstrated the ability of convolutional neural networks (CNNs) to distinguish between malignant and benign breast lesions in mammography (MG) images. However, to date, no study has assessed the specific sub-type of lesions in MG images, as detailed in histopathology reports. We present a method for finer classification of breast lesions in MG images into multiple pathology sub-types. Our approach integrates into radiologists’ diagnostic workflow and uses data already available in radiology reports. The proposed Dual-Radiology Dual-Resolution Network (Du-Rad Du-Res Net) receives dual input from the radiologist and dual image resolutions. The radiologist input comprises an annotation of the lesion area and semantic radiology features; the dual image resolutions comprise a low-resolution view of the entire mammogram and a high-resolution view of the lesion area. The network estimates the likelihood of malignancy, as well as the associated pathological sub-type. We show that the combined input of the lesion region of interest (ROI) and the entire mammogram is important for optimizing the model’s performance. We tested the AI in a reader study on a dataset of 100 held-out cases. The AI outperformed three breast radiologists in the task of lesion histopathology sub-typing.
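To make the dual-input, dual-resolution design concrete, the sketch below mocks a late-fusion forward pass: features from a low-resolution whole-mammogram branch, a high-resolution lesion-ROI branch, and the radiologist's semantic features are concatenated and fed to two output heads (malignancy and pathology sub-type). All layer sizes, the number of sub-types, the fusion strategy, and every function name here are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, out_dim):
    """Toy linear layer with small random weights (stand-in for a trained layer)."""
    w = rng.standard_normal((x.shape[-1], out_dim)) * 0.01
    return x @ w

def softmax(z):
    # Numerically stable softmax for the sub-type head.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def du_rad_du_res_forward(full_mammogram_feat, roi_feat, semantic_feat,
                          n_subtypes=5):
    # Hypothetical late fusion of the three input streams described in the
    # abstract: whole-image (low-res) branch, lesion-ROI (high-res) branch,
    # and semantic radiology features from the report.
    fused = np.concatenate([full_mammogram_feat, roi_feat, semantic_feat],
                           axis=-1)
    hidden = np.tanh(linear(fused, 64))
    malignancy_prob = sigmoid(linear(hidden, 1))         # binary head
    subtype_probs = softmax(linear(hidden, n_subtypes))  # sub-type head
    return malignancy_prob, subtype_probs

# Example call with random stand-in feature vectors.
m, s = du_rad_du_res_forward(rng.standard_normal(512),
                             rng.standard_normal(512),
                             rng.standard_normal(16))
```

In a real network each feature vector would come from a CNN encoder over its respective image, and both heads would be trained jointly; the point of the sketch is only the shared fused representation feeding two task-specific outputs.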