The performance and scalability of cloud applications depend heavily on the same characteristics of the underlying network. Several factors complicate cloud network evaluation, most notably 1) large scale, 2) a practically infinite set of application workloads, 3) a large set of potential metrics, 4) a wide variety of available configurations within and across clouds, and 5) a lack of visibility into the underlying network. There is a pressing need for a cloud network evaluation methodology that overcomes these challenges, is practical to apply, and produces predictive results. We describe our work to develop such a methodology and a tool (Euclid) that automates its application. Euclid leverages a set of widely used traffic generators to evaluate throughput and latency between endpoint pairs across a cloud network. These microbenchmarks are coordinated between and among different sets of endpoints to reflect several well-known application traffic patterns. Large result sets are distilled into a tractable collection of metrics and associated graphs that make the results easy for cloud operators and users to consume. We report results from applying Euclid to seventeen cloud configurations spanning three public clouds and one private cloud. The results demonstrate the practicality of our approach and highlight some interesting differences across clouds.