How accurately should I compute implicit matrix-vector products when applying the Hutchinson trace estimator?
The Hutchinson estimator estimates the trace of a matrix M through the bilinear form y^T M y, averaged over independent random vectors y whose entries are zero-mean, unit-variance, and uncorrelated. This technique is particularly useful when M is given only implicitly but the matrix-vector product My can be computed efficiently without forming M explicitly. Well-known examples in practice are M = A^{-1} and, more generally, M = f(A). We study in this paper the conditions under which the numerical error incurred in computing My is comparable to the statistical uncertainty caused by the randomness of y. Under these conditions, we also derive the number of random vectors sufficient to guarantee a given relative error bound with any desired probability. For the purpose of obtaining easily computable conditions, we focus on random vectors with standard normal entries, a precursor technique attributed by Hutchinson to Girard. As demonstrated in many practical scenarios, normal variables are as effective as symmetric Bernoulli variables (the variant more commonly associated with Hutchinson's name) but are advantageous in that they yield a simultaneous estimate of the estimator's variance.
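As a minimal sketch of the estimator described above (using NumPy, with the Girard/Gaussian variant; the function name and parameters are illustrative, not from the paper), the trace is approximated by averaging bilinear forms y^T(My), and the same samples supply the variance estimate mentioned in the abstract:

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=1000, seed=None):
    """Estimate tr(M), where matvec(y) returns the implicit product My.

    Uses probe vectors with standard normal entries (Girard's variant),
    so the sample of bilinear forms y^T M y also yields an estimate of
    the estimator's own variance.
    """
    rng = np.random.default_rng(seed)
    samples = np.empty(num_samples)
    for k in range(num_samples):
        y = rng.standard_normal(n)    # zero-mean, unit-variance entries
        samples[k] = y @ matvec(y)    # bilinear form y^T (My)
    trace_est = samples.mean()
    # Sample variance of the mean: a by-product of the normal-variable choice.
    var_est = samples.var(ddof=1) / num_samples
    return trace_est, var_est

# Example: M is the identity (tr = n), accessed only through its matvec.
est, var = hutchinson_trace(lambda y: y, n=100, num_samples=2000, seed=0)
```

In a realistic setting, `matvec` would wrap, say, a linear solve (for M = A^{-1}) or a polynomial/rational approximation of f(A), and the numerical accuracy of that inner computation is precisely what the paper weighs against the statistical error above.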