Automated ontology learning systems are now practical and used in a variety of domains. With these systems, subject matter experts (SMEs) and ontology designers can readily construct very large ontologies, comprising tens of thousands of concepts and their relations, from a corpus. However, ontologies of this size are extremely challenging for SMEs to understand and further tune. Prior studies have proposed techniques for concept ranking based solely on analyzing the structure of the ontology graph. In this paper, we propose a novel approach that additionally exploits a word-level summarization technique applied to the source documents from which the ontology was generated. Using this summarization technique, we devise features that measure concept importance based on the source documents from which the concepts are extracted. We demonstrate the effectiveness of our approach by comparing it with existing ranking methods and by devising a scalable evaluation process inspired by the document retrieval domain.