This study explores efficient structures of artificial neural networks for associative memories. Motivated by the structure of real brains and by the demand for energy efficiency in hardware implementations, we consider neural networks with sparse modular structures. Numerical experiments are performed to clarify how the storage capacity of associative memory depends on the regularity and randomness of the network structure. We first show that a fully regular network, well suited to hardware design, has poor recall performance, whereas a fully random network, undesirable for hardware implementation, yields excellent recall performance. In search of a network structure with both good performance and high implementability, we consider four modular networks constructed from different combinations of regularity and randomness. The results of associative memory tests on these networks show that the combination of random intramodule connections and regular intermodule connections outperforms the other cases. Our results suggest that the parallel use of regularity and randomness in network structures could be beneficial for developing energy-efficient neural networks.
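The kind of associative memory test described above can be sketched as follows. This is a minimal illustration, not the authors' actual experiment: it assumes a standard Hopfield-type network with Hebbian weights, restricted to a sparse random topology, and all parameters (network size, number of patterns, connection density, noise level) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300        # number of neurons (illustrative)
P = 5          # number of stored patterns (illustrative)
density = 0.3  # fraction of connections kept, i.e. network sparsity

# Random +-1 patterns to store.
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix, masked to a sparse symmetric random topology.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)
mask = np.triu(rng.random((N, N)) < density, 1)
W = W * (mask | mask.T)

def recall(state, steps=20):
    """Synchronous sign updates until a fixed point or the step limit."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Cue the network with a stored pattern corrupted by 10% bit flips.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1

# Overlap with the stored pattern: 1.0 means perfect recall.
overlap = recall(cue) @ patterns[0] / N
```

Measuring such overlaps while varying the number of stored patterns gives an estimate of storage capacity; repeating this over network topologies with different mixes of regular and random connections is the style of comparison the abstract refers to.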