Fully Homomorphic Encryption (FHE) enables arbitrary computation on encrypted data without decryption, thereby protecting data in cloud computing scenarios. However, FHE adoption has been slow due to the significant computation and memory overhead it introduces. This overhead is particularly challenging for end-to-end machine learning processes, including training and inference of conventional neural networks on FHE-encrypted data. Moreover, machine learning tasks exhibit abundant data-level parallelism and therefore demand high-throughput systems, yet existing FHE accelerators utilize only a single SoC, disregarding the importance of scalability. In this work, we address these challenges through two key innovations. First, at the algorithmic level, we combine Hyperdimensional Computing (HDC) with FHE. HDC, a brain-inspired machine learning model, relies on lightweight operations that are inherently well-suited to FHE computation. Consequently, the resulting FHE-HD formulation has significantly lower complexity while maintaining accuracy comparable to the state-of-the-art. Second, we propose an efficient and scalable FHE system for FHE-based machine learning. The proposed system adopts a novel interconnect network between multiple FHE accelerators, along with an automated scheduling and data allocation framework that optimizes throughput and hardware utilization. We evaluate the proposed FHE-HD system on the MNIST dataset and demonstrate that its expected training time is 4.7 times faster than state-of-the-art FHE-based MLP training. Furthermore, our system framework exhibits up to 38.2 times speedup and 13.8 times higher energy efficiency over baseline scalable FHE systems that use the conventional data-parallel processing flow.
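To illustrate why HDC operations are lightweight and FHE-friendly, the sketch below shows the standard HDC primitives (binding, bundling, and similarity) in plain NumPy. The hypervector dimension, the bipolar {-1, +1} encoding, and all function names are illustrative assumptions, not details taken from this paper; the point is that everything reduces to elementwise multiplies and adds, which map naturally onto the SIMD-style arithmetic that FHE schemes support.

```python
import numpy as np

# Illustrative HDC primitives. The dimension D and the bipolar
# {-1, +1} encoding are assumptions for this sketch, not parameters
# from the FHE-HD paper.
D = 10000
rng = np.random.default_rng(0)

def random_hypervector():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiplication (associates two hypervectors)."""
    return a * b

def bundle(vectors):
    """Bundling: elementwise addition followed by a sign threshold."""
    s = np.sum(vectors, axis=0)
    return np.where(s >= 0, 1, -1)

def similarity(a, b):
    """Normalized dot product (cosine similarity for bipolar vectors)."""
    return float(a @ b) / D

# A class prototype is a bundle of encoded samples; inference is a
# similarity search against the prototypes -- only multiplies and adds.
x1, x2, x3 = (random_hypervector() for _ in range(3))
prototype = bundle([x1, x2, x3])
```

A bundled prototype stays similar to its constituents, while a bound pair becomes nearly orthogonal to either input; both properties follow from the elementwise arithmetic alone, which is why such a formulation avoids the deep multiplicative circuits that make conventional neural networks expensive under FHE.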