Image aesthetics assessment has been challenging due to its subjective nature. Inspired by Chatterjee's visual neuroscience model, we design the Deep Chatterjee's Machine (DCM), tailored for this task. DCM first learns attributes through parallel supervised pathways, over a variety of selected feature dimensions. A high-level synthesis network is then trained to associate and transform these attributes into an overall aesthetics rating. Since aesthetic ratings often vary among individuals, we further extend DCM to predict the distribution of human ratings rather than a single score. We also present a first-of-its-kind study of label-preserving transformations in the context of aesthetics assessment, which leads to an effective data augmentation approach. Experimental results on the AVA dataset show that DCM achieves significant performance improvements over other state-of-the-art models.