Publication
SPLASH 2012
Conference paper
Does better throughput require worse latency?
Abstract
Let throughput denote the amount of application-level work performed in unit time, normalized to the amount of work that would be accomplished with perfect linear scaling. Let latency denote the mean time required for a thread on one core to observe a change effected by a thread on another core, normalized to the best latency possible for the given platform. Might it be true that algorithms that improve application-level throughput worsen inter-core application-level latency? As techniques for improving performance have evolved from mutexes-and-locks to race-and-repair, each seems to have offered more throughput at the expense of increased latency. Copyright © 2012 ACM.
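The two normalized measures defined at the start of the abstract can be restated explicitly; the symbols below are introduced here only for illustration and are not notation from the paper:

% Hedged restatement of the abstract's definitions (symbols are ours).
% Throughput: work W_n completed per unit time on n cores, normalized to
% perfect linear scaling of the single-core rate W_1.
\[
  T_{\mathrm{norm}} = \frac{W_n}{n \, W_1}
\]
% Latency: mean time L_{obs} for a thread on one core to observe a change made
% by a thread on another core, normalized to the platform's best achievable
% inter-core latency L_{min}.
\[
  \Lambda_{\mathrm{norm}} = \frac{L_{\mathrm{obs}}}{L_{\mathrm{min}}}
\]

Under this reading, perfect scaling gives \(T_{\mathrm{norm}} = 1\) and the hardware's best-case communication gives \(\Lambda_{\mathrm{norm}} = 1\); the paper's question is whether techniques that push the first ratio toward 1 tend to push the second ratio above it.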