From flop to megaflops: Java for technical computing
Abstract
Although there has been some experimentation with Java as a language for numerically intensive computing, many perceive the language as unsuited for such work because of performance deficiencies. In this article we show how optimizing array bounds checks and null pointer checks creates loop nests on which aggressive optimizations can be used. Applying these optimizations by hand to a simple matrix-multiply test case leads to Java-compliant programs whose performance exceeds 500 Mflops on a four-processor 332 MHz RS/6000 model F50 computer. We also report the effect that various optimizations have on the performance of six floating-point-intensive benchmarks. Through these optimizations we have been able to achieve with Java at least 80% of the peak Fortran performance on the same benchmarks. Since all of these optimizations can be automated, we conclude that Java will soon be a serious contender for numerically intensive computing.
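
For illustration, the listing below gives a minimal hand-written sketch of one way check-free loop nests of the kind described above can be obtained: loop versioning, in which a single up-front test establishes that all array references in the matrix-multiply nest are non-null and in bounds, so that a semantically identical "safe" version of the nest can be optimized aggressively. The class and method names (MatMul, multiply) are hypothetical, and in practice the transformation is performed by the compiler rather than written by the programmer; the abstract does not specify that this is the authors' exact transformation.

    // Hand-written sketch of loop versioning for bounds-check and
    // null-check elimination. Names are illustrative only; a compiler
    // would generate the guard and the two loop versions automatically.
    public final class MatMul {

        // C += A * B for n x n matrices stored as arrays of rows.
        static void multiply(double[][] a, double[][] b, double[][] c, int n) {
            // Up-front null and bounds tests: if they all pass, every access
            // in the loop nest below is provably safe, so a compiler may run
            // a version of the nest with no per-iteration checks.
            boolean safe = a != null && b != null && c != null
                    && a.length >= n && b.length >= n && c.length >= n;
            for (int i = 0; safe && i < n; i++) {
                safe = a[i] != null && a[i].length >= n
                        && b[i] != null && b[i].length >= n
                        && c[i] != null && c[i].length >= n;
            }

            if (safe) {
                // "Fast" version: identical semantics, but now eligible for
                // aggressive loop optimizations (tiling, unrolling, FMA).
                for (int i = 0; i < n; i++) {
                    for (int k = 0; k < n; k++) {
                        double aik = a[i][k];
                        for (int j = 0; j < n; j++) {
                            c[i][j] += aik * b[k][j];
                        }
                    }
                }
            } else {
                // "Slow" version: same loop nest; the mandatory JVM checks
                // remain and throw the exceptions the language requires.
                for (int i = 0; i < n; i++) {
                    for (int k = 0; k < n; k++) {
                        double aik = a[i][k];
                        for (int j = 0; j < n; j++) {
                            c[i][j] += aik * b[k][j];
                        }
                    }
                }
            }
        }
    }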