Rangachari Anand, Kishan Mehrotra, et al.
IEEE Transactions on Neural Networks
In this paper, we present a self-generating modular neural network architecture for supervised learning. In this architecture, any kind of feedforward neural network can be employed as a component net. For a given task, a tree-structured modular neural network is generated automatically by a growing algorithm that recursively partitions the input space, avoiding the need for a pre-determined structure. Owing to the divide-and-conquer principle underlying the proposed architecture, the modular neural network yields both good performance and significantly faster training. The proposed architecture has been applied to several supervised learning tasks and has achieved satisfactory results.
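The abstract describes the growing algorithm only at a high level. As a rough illustration of the divide-and-conquer idea (not the authors' algorithm), the sketch below grows a tree of small feedforward component nets by recursively splitting the input space whenever a region's local net remains inaccurate. All concrete choices here are assumptions made for brevity: scikit-learn MLPClassifier as the component net, a median split on the highest-variance input dimension, and the error/size/depth thresholds.

```python
# Minimal sketch of a self-growing tree of component nets (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier

class ModuleNode:
    """One tree node: a leaf holding a small component net, or an internal
    node that routes inputs to two child regions of the input space."""

    def __init__(self, max_error=0.05, min_samples=50, max_depth=4, depth=0):
        self.max_error, self.min_samples = max_error, min_samples
        self.max_depth, self.depth = max_depth, depth
        self.net = None
        self.const = None                 # label memorized for a pure region
        self.split_dim = self.split_val = None
        self.left = self.right = None

    def fit(self, X, y):
        if len(np.unique(y)) == 1:        # pure region: no component net needed
            self.const = y[0]
            return self
        self.net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                 random_state=0).fit(X, y)
        err = 1.0 - self.net.score(X, y)
        # Grow children only if the local net is not accurate enough and the
        # region still holds enough data (assumed split criterion).
        if (err > self.max_error and len(X) >= 2 * self.min_samples
                and self.depth < self.max_depth):
            self.split_dim = int(np.argmax(X.var(axis=0)))
            self.split_val = float(np.median(X[:, self.split_dim]))
            mask = X[:, self.split_dim] <= self.split_val
            if mask.all() or (~mask).all():
                return self               # degenerate split: stay a leaf
            child = lambda: ModuleNode(self.max_error, self.min_samples,
                                       self.max_depth, self.depth + 1)
            self.left = child().fit(X[mask], y[mask])
            self.right = child().fit(X[~mask], y[~mask])
        return self

    def predict(self, X):
        if self.const is not None:
            return np.full(len(X), self.const)
        if self.left is None:             # leaf: answer with the local net
            return self.net.predict(X)
        out = np.empty(len(X), dtype=self.net.classes_.dtype)
        mask = X[:, self.split_dim] <= self.split_val
        if mask.any():
            out[mask] = self.left.predict(X[mask])
        if (~mask).any():
            out[~mask] = self.right.predict(X[~mask])
        return out
```

Usage would be `tree = ModuleNode().fit(X_train, y_train); y_pred = tree.predict(X_test)`; each leaf then handles only its own sub-region of the input space, which is the source of the speed-up the abstract attributes to divide-and-conquer.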
Benjamin N. Grosof
AAAI-SS 1993
Yale Song, Zhen Wen, et al.
IJCAI 2013
Tim Erdmann, Stefan Zecevic, et al.
ACS Spring 2024