A Cellular Neural Network is a multiprocessor computing architecture in which each processor is directly connected only to nearby processors. This locality creates a trade-off between the number of connections between processors and the number of steps needed to perform a global computation. We consider such a locally connected computing architecture, present some preliminary analysis of this trade-off, and study the architecture's applicability to the specific problem of matrix multiplication, including linear-transform applications such as the 1-D and 2-D DCT and DWT. We illustrate that, in general, there is a trade-off among the following three parameters: the number of iterations needed to perform the global computation, the amount of memory in each processor, and the connectedness of the graph. This last parameter is expressed as the relative diameter of the computer-architecture graph with respect to the problem graph. © 2011 IEEE.
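The connectivity-versus-iterations trade-off can be illustrated with a toy simulation (a minimal sketch, not the paper's model: the line topology, the `reach` parameter, and `propagation_steps` are our assumptions). Each processor on a line of n nodes can exchange data per step only with neighbors at most `reach` hops away; the number of steps for information from one end to cover the whole line shrinks as connectivity grows:

```python
import math

def propagation_steps(n, reach):
    """Steps for data held by processor 0 to reach all n processors
    on a line, when each step only crosses links of length <= reach."""
    informed = {0}          # processors that currently hold the data
    steps = 0
    while len(informed) < n:
        # each informed processor forwards to neighbors within `reach`
        informed |= {min(n - 1, i + d)
                     for i in informed
                     for d in range(1, reach + 1)}
        steps += 1
    return steps

# Denser connectivity (larger reach) lowers the step count: the step
# count equals ceil((n-1)/reach), i.e. the diameter of the link graph.
for reach in (1, 2, 4):
    assert propagation_steps(16, reach) == math.ceil(15 / reach)
```

The step count is exactly the diameter of the communication graph, which is why the abstract measures connectedness by the relative diameter of the architecture graph with respect to the problem graph.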