Publication
SPLASH 2012
Conference paper

Does better throughput require worse latency?

Abstract

Let throughput denote the amount of application-level work performed in unit time, normalized to the amount of work that would be accomplished with perfect linear scaling. Let latency denote the mean time required for a thread on one core to observe a change effected by a thread on another core, normalized to the best latency possible for the given platform. Might it be true that algorithms that improve application-level throughput worsen inter-core application-level latency? As techniques for improving performance have evolved from mutex-and-locks to race-and-repair, each seems to have offered more throughput at the expense of increased latency. Copyright © 2012 ACM.
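
One way to read these two definitions more formally (the notation here is illustrative, not taken from the paper): let $W(n)$ be the application-level work completed per unit time on $n$ cores. Then

$$ \mathrm{throughput_{norm}}(n) = \frac{W(n)}{n \cdot W(1)}, \qquad \mathrm{latency_{norm}} = \frac{\mathbb{E}[t_{\mathrm{observe}}]}{t_{\min}}, $$

where $t_{\mathrm{observe}}$ is the time for a thread on one core to observe a change effected by a thread on another core, and $t_{\min}$ is the best inter-core latency achievable on the given platform.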

Date

01 Dec 2012
