Automatic detection of findings and their locations in chest x-ray studies is an important research area for AI applications in healthcare. Whereas image-level labeling suffices for finding classification tasks, detection of finding locations requires additional annotation in the form of bounding boxes. However, locally marking findings on chest x-ray images is both time-consuming and costly, as it must be performed by radiologists. To address this problem, weakly supervised approaches have been employed to depict finding locations using attention maps produced by convolutional networks trained for finding classification. However, these approaches have shown little promise so far and have raised concerns about whether the networks actually focus on the correct abnormality regions. With this in mind, in this paper we propose an automatic approach for labeling chest x-ray images with findings and their locations by leveraging radiology reports. Our labeling approach is anatomically standardized to the upper, middle, and lower lung zones of the left and right lungs, and consists of two stages. In the first stage, we use a lung segmentation U-Net model and an atlas of normal patients to mark the six lung zones on the image with standardized bounding boxes. In the second stage, the associated radiology report is used to label each lung zone as positive or negative for a finding, yielding a set of six labeled bounding boxes per image. Using this approach, we automatically annotated a dataset of 13,911 CXR images in a matter of hours, with an average annotation recall of 0.881 and precision of 0.896 when evaluated on 300 dually validated images. Finally, we used this 'silver' bounding-box dataset to train an opacity detection model with a RetinaNet architecture, and obtained localization results on par with the state of the art.
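The two-stage labeling pipeline above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a binary lung mask is already available (e.g. from a segmentation model), splits each lung's bounding box into upper/middle/lower thirds as a stand-in for the atlas-based standardization, and takes the set of report-derived positive zones as a precomputed input rather than parsing report text. The function names `lung_bbox`, `zone_boxes`, and `label_zones` are illustrative.

```python
import numpy as np

# Zone names per lung, matching the six standardized lung zones in the text.
ZONES = ("upper", "middle", "lower")

def lung_bbox(mask: np.ndarray):
    """Tight bounding box (top, bottom, left, right) of a binary lung mask."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return int(top), int(bottom), int(left), int(right)

def zone_boxes(mask: np.ndarray, side: str):
    """Stage 1 (simplified): split one lung's bounding box into three
    vertical zones. The paper uses an atlas of normal patients for this;
    equal thirds are a crude placeholder."""
    top, bottom, left, right = lung_bbox(mask)
    height = bottom - top + 1
    boxes = {}
    for i, zone in enumerate(ZONES):
        z_top = top + i * height // 3
        z_bot = bottom if i == 2 else top + (i + 1) * height // 3 - 1
        boxes[f"{side} {zone}"] = (z_top, z_bot, left, right)
    return boxes

def label_zones(boxes: dict, positive_zones: set):
    """Stage 2 (simplified): mark each zone box positive/negative, given the
    zone mentions already extracted from the radiology report."""
    return {name: (box, name in positive_zones) for name, box in boxes.items()}
```

For a full image, `zone_boxes` would be called once per lung, and the two dictionaries merged to obtain the six labeled boxes described in the text.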