System noise, or jitter, refers to activity from hardware, firmware, operating system, runtime system, and management software events. It has been shown to disproportionately impact application performance on current-generation large-scale clustered systems running general-purpose operating systems (GPOS). Jitter mitigation techniques such as co-scheduling jitter events across operating system instances improve application performance, but their effectiveness on future petascale systems is unknown. To determine whether existing jitter mitigation solutions enable scalable petascale performance, we construct two complementary jitter models based on a detailed analysis of system noise from the nodes of a large-scale system running a GPOS. We validate both models using experimental data from a system consisting of 256 GPOS instances with 8192 CPUs. Based on our models, we project a minimum slowdown of 1.8%, 4.1%, and 6.5% for applications executing on a comparable one-petaflop system running 1024 GPOS instances and performing global synchronization operations once every 100 ms, 10 ms, and 1 ms, respectively. Our projections indicate that, although existing mitigation solutions enable scalable petascale performance, additional techniques are required to contain the impact of jitter on multi-petaflop systems, especially for tightly synchronized applications. © 2013 Elsevier Ltd. All rights reserved.
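The intuition behind the projected slowdowns can be sketched with a toy bulk-synchronous simulation. This is not one of the paper's two models; it is a minimal illustration, with hypothetical noise parameters, of why shorter intervals between global synchronization operations amplify the relative cost of jitter: a barrier completes only when the slowest of many nodes finishes, so even rare per-node noise events are felt on almost every step at scale.

```python
import random

def simulated_slowdown(nodes, interval_s, noise_prob, noise_s,
                       steps=2000, seed=0):
    """Toy bulk-synchronous model (illustrative only, not the paper's model).

    Each step, every node computes for `interval_s` seconds and is hit by a
    jitter event of `noise_s` seconds with probability `noise_prob`; a global
    barrier then waits for the slowest node. Returns the fractional slowdown
    relative to a noise-free run. All parameter values are hypothetical.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(steps):
        # The barrier completes when the slowest node finishes its interval.
        worst = max(
            noise_s if rng.random() < noise_prob else 0.0
            for _ in range(nodes)
        )
        total += interval_s + worst
    ideal = steps * interval_s
    return (total - ideal) / ideal

# Shorter synchronization intervals amplify the same absolute noise:
for interval in (0.1, 0.01, 0.001):  # 100 ms, 10 ms, 1 ms
    s = simulated_slowdown(nodes=1024, interval_s=interval,
                           noise_prob=0.01, noise_s=0.0005)
    print(f"{interval * 1000:>5.0f} ms interval: {s:.1%} slowdown")
```

With 1024 nodes, at least one node is delayed on nearly every step, so the absolute per-step penalty is roughly constant while the useful compute per step shrinks with the interval, reproducing the qualitative trend in the projections above.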