The flourishing of machine learning for cognitive tasks has driven increased demand for large annotated training datasets. In the medical imaging domain, such datasets are scarce, and labeling them is costly, error-prone, and requires high expertise. Unsupervised learning is therefore an attractive approach for analyzing unlabeled medical images. In this paper, we describe an unsupervised analysis method consisting of feature learning by Stacked Auto-Encoders, K-means clustering for building a data model, and encoding of new images using that model. We apply this method to image-level and patch-level analysis of breast mammograms. At the image level, we demonstrate that our cluster-based image encoding can identify outlier images, such as images with implants or non-standard acquisition views. At the patch level, we show that image signatures built from patch clustering can be used for unsupervised semantic segmentation of breast tissues, as well as for separating mammograms with high and low breast density. We evaluate the suggested methods on large datasets and discuss potential applications for data curation, machine-guided annotation, and automatic interpretation of medical images.
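The three stages named above (auto-encoder feature learning, K-means model building, and encoding of new data against the cluster model) can be sketched as follows. This is a minimal, illustrative NumPy sketch, not the paper's implementation: it uses synthetic 8x8 "patches", a single tied-weight auto-encoder layer in place of a full stacked model, and a simple farthest-point K-means initialization; all dimensions and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for mammogram patches: two groups of flattened 8x8 patches
# whose pixel intensities differ (hypothetical data, not from the paper).
def make_patches(n, offset):
    return rng.normal(loc=offset, scale=0.1, size=(n, 64))

X = np.vstack([make_patches(50, 0.2), make_patches(50, 0.8)])

# --- Stage 1: one auto-encoder layer (a stacked model repeats this step,
# feeding each layer's hidden activations to the next layer) ---
def train_autoencoder(X, n_hidden, epochs=200, lr=0.1):
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, n_hidden))
    b = np.zeros(n_hidden)          # encoder bias
    c = np.zeros(d)                 # decoder bias
    for _ in range(epochs):
        H = np.tanh(X @ W + b)      # encode
        R = H @ W.T + c             # decode with tied weights
        err = R - X                 # reconstruction error
        dH = (err @ W) * (1 - H**2) # back-prop through tanh
        W -= lr * (X.T @ dH + err.T @ H) / n
        b -= lr * dH.sum(0) / n
        c -= lr * err.sum(0) / n
    return W, b

W, b = train_autoencoder(X, n_hidden=16)
features = np.tanh(X @ W + b)       # learned patch features

# --- Stage 2: K-means clustering on the learned features ---
def kmeans(F, k, iters=20):
    # simple farthest-point initialization for determinism
    centers = [F[0]]
    for _ in range(k - 1):
        dist = np.min([((F - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(F[np.argmax(dist)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((F[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = F[labels == j].mean(0)
    return centers, labels

centers, labels = kmeans(features, k=2)

# --- Stage 3: encode a new patch by its nearest cluster; histograms of
# such codes over an image form a cluster-based image signature ---
new_patch = make_patches(1, 0.8)
new_feat = np.tanh(new_patch @ W + b)
code = int(np.argmin(((new_feat - centers) ** 2).sum(-1)))
```

In this sketch, outlier detection at the image level would flag images whose signatures sit far from every cluster center, while patch-level codes give per-region labels usable for unsupervised segmentation.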