Evaluating the performance of multithreading and prefetching in multiprocessors
Abstract
This paper presents new analytical models of the performance benefits of multithreading and prefetching, together with experimental measurements of parallel applications on the MIT Alewife multiprocessor. For the first time, both techniques are evaluated on a real machine rather than through simulation. The models determine the region of the parameter space where each technique is most effective, while the measurements determine the region where the applications lie. We find that these regions do not always overlap significantly. The multithreading model shows that only 2–4 contexts are necessary to maximize this technique's potential benefit on current multiprocessors. For these multiprocessors, multithreading improves execution time by less than 10% for most of the applications we examined. The model also shows that multithreading can significantly improve the performance of the same applications on multiprocessors with longer latencies, and that reducing context-switch overhead is not crucial. The software prefetching model shows that allowing 4 outstanding prefetches is sufficient to achieve most of this technique's potential benefit on current multiprocessors. Prefetching is effective over a wide range of parameters, improving execution time by as much as 20–50% even on current multiprocessors. A comparison of the two models shows that prefetching has a significant advantage over multithreading on machines with low memory latencies and/or for applications with high cache miss rates, because issuing a prefetch instruction consumes less time than a context switch.
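To give a rough sense of the software prefetching technique being modeled, the sketch below shows non-binding prefetches issued a fixed distance ahead of the demand accesses in a simple reduction loop. It uses GCC's __builtin_prefetch and an arbitrary prefetch distance chosen purely for illustration; it is not the instrumentation or code used in the paper's Alewife experiments.

```c
#include <stddef.h>

/* Illustrative sketch only: software prefetching in a reduction loop.
 * The prefetch distance (PF_DIST) and the use of GCC's __builtin_prefetch
 * are assumptions for this example, not details taken from the paper. */
#define PF_DIST 16  /* elements ahead; intended to cover roughly one miss latency */

double sum_with_prefetch(const double *a, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + PF_DIST < n) {
            /* Issue a non-binding prefetch several iterations ahead so the
             * cache miss overlaps with useful work; the issue cost is a
             * single instruction, far cheaper than a context switch taken
             * when a demand load misses. */
            __builtin_prefetch(&a[i + PF_DIST], /* read */ 0, /* locality */ 1);
        }
        sum += a[i];
    }
    return sum;
}
```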