Software metrics computation and presentation are considered an important feature of many software design and development tools. The System Grokking Technology, developed by IBM Research, enables the investigation, validation, and evolution of complex software systems at a level of abstraction suitable for human comprehension. As part of our ongoing effort to improve the tool and offer more useful abstractions, we considered adorning the presented information with software metrics. The difficulty lies in selecting among the legions of metrics competing for both scarce screen space and the architect's attention. In this paper, we describe a new criterion for evaluating the competing metrics, based on a normalized version of Shannon's information-theoretical content. We also report the values of this criterion for a large set of metrics computed over a large software corpus. Based on our measurements and this criterion, we recommend the presentation of two metrics: module centrality, as measured by a variant of Google's classical page-ranking algorithm, and module size, as measured by Chidamber and Kemerer's WMC metric. © 2011 ACM.
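To illustrate the kind of criterion the abstract refers to, the following is a minimal sketch of a normalized Shannon entropy over a metric's empirical value distribution. The function name, the choice of normalizing by the maximum entropy of the observed distinct values, and the example WMC-like data are all assumptions for illustration; the paper's exact formulation is not given in the abstract.

```python
import math
from collections import Counter

def normalized_entropy(values):
    """Normalized Shannon entropy of a metric's value distribution.

    Hypothetical sketch: H = -sum(p * log2 p) over the empirical
    distribution of metric values, divided by H_max = log2(k) for
    k distinct values. Ranges from 0 (the metric is constant across
    all modules, hence uninformative) to 1 (uniform over its values).
    """
    counts = Counter(values)
    n = len(values)
    if len(counts) <= 1:
        return 0.0  # a constant metric carries no information
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts))

# Illustrative only: a WMC-like metric over ten modules
wmc = [3, 3, 7, 12, 3, 7, 25, 3, 12, 7]
print(round(normalized_entropy(wmc), 3))  # → 0.923
```

Under a criterion of this shape, a metric whose values are spread broadly across modules scores high (it discriminates between modules), while one that assigns nearly every module the same value scores near zero and would be a poor use of screen space.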