Publication
SIAM Journal on Scientific Computing
Paper
How Accurately Should I Compute Implicit Matrix-Vector Products When Applying the Hutchinson Trace Estimator?
Abstract
The Hutchinson estimator defines an estimate of the trace of a matrix M, based on a bilinear form with independent random vectors y whose entries are zero-mean, unit-variance, and uncorrelated. This technique is particularly useful when M is only implicitly given but the matrix-vector product My can be efficiently computed without forming M explicitly. Well-known examples in practice are M = A^{-1} and, more generally, M = f(A). We study in this paper the conditions under which the numerical error incurred in computing My is comparable with the statistical uncertainty caused by the randomness of y. Under these conditions, we also derive the number of random vectors sufficient to guarantee a relative error bound with any desired probability. For the purpose of obtaining easily computable conditions, we focus on the use of random vectors with normal entries, a precursor technique attributed to Girard by Hutchinson. As demonstrated in many practical scenarios, normal variables are as effective as symmetric Bernoulli variables (the more common choice under the name of Hutchinson) but are advantageous in that they allow simultaneous estimation of the estimator's variance.
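The estimator described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes only an implicit matrix-vector product `matvec(y)` is available, uses Gaussian probe vectors as the abstract advocates, and the function and variable names are hypothetical.

```python
import numpy as np

def hutchinson_trace(matvec, dim, num_samples, rng):
    """Estimate tr(M) as the sample mean of the bilinear form y^T (M y)
    over independent Gaussian probe vectors y (zero mean, unit variance).
    Returns the trace estimate and an estimate of its variance; the latter
    is available "for free" when Gaussian probes are used, as the abstract
    notes."""
    samples = np.empty(num_samples)
    for i in range(num_samples):
        y = rng.standard_normal(dim)   # probe with zero-mean, unit-variance entries
        samples[i] = y @ matvec(y)     # bilinear form y^T M y; M never formed
    # Variance of the mean, estimated from the same set of samples.
    return samples.mean(), samples.var(ddof=1) / num_samples

# Illustrative use: M is given only through a matvec closure here, but in
# practice matvec might apply M = A^{-1} via a linear solve, or M = f(A).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
M = A @ A.T                            # symmetric test matrix with known trace
est, est_var = hutchinson_trace(lambda y: M @ y, 50, 2000, rng)
print(f"estimate: {est:.1f}, exact: {np.trace(M):.1f}, std: {est_var**0.5:.2f}")
```

In the implicit settings the paper targets, each call to `matvec` carries its own numerical error (e.g., from an inexact linear solve), and the paper's question is how accurate that call must be relative to the statistical error reflected in the returned variance estimate.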