Distributed learning strategy based on chips for classification with large-scale dataset
Abstract
Learning from very large-scale datasets is often necessary when solving real-world problems with artificial neural networks. However, how to balance computing efficiency and learning stability remains an open question, since traditional neural networks require a large amount of running time and memory to solve problems with large-scale training datasets. In this paper, we report the first evaluation of neural-network distributed-learning strategies for large-scale classification of protein secondary structure. Our contributions include: (1) an architectural analysis of distributed learning; (2) the development of a scalable distributed system for large-scale dataset classification; (3) the description of a novel distributed-learning strategy based on chips; (4) a theoretical analysis of structure-distributed and data-distributed learning strategies; (5) an investigation and experimental evaluation of the chip-based distributed-learning strategy with respect to time complexity and its effect on the classification accuracy of artificial neural networks. We demonstrate that the novel distributed-learning strategy achieves a better balance between parallel computing efficiency and stability than previous algorithms. Its application to protein secondary structure prediction shows that the method is feasible and effective in practice. © World Scientific Publishing Company.
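The abstract does not detail the chip-based strategy itself. As a rough illustration only, a data-distributed scheme of this kind might partition the training set into "chips", train an independent learner on each chip (in parallel in a real system), and combine their predictions by majority vote. The sketch below is a minimal, hypothetical version of that idea; the toy per-class-centroid learner stands in for a neural network, and all names are illustrative, not the paper's actual implementation.

```python
import random
from collections import Counter

def split_into_chips(dataset, n_chips):
    """Partition (feature, label) pairs into n_chips roughly equal subsets ("chips")."""
    shuffled = dataset[:]
    random.shuffle(shuffled)
    return [shuffled[i::n_chips] for i in range(n_chips)]

def train_centroid_classifier(chip):
    """Toy stand-in for a neural network: learn the per-class mean of 1-D features."""
    sums, counts = {}, {}
    for x, y in chip:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    return min(model, key=lambda y: abs(model[y] - x))

def ensemble_predict(models, x):
    """Combine the chip-level learners by majority vote."""
    votes = Counter(predict(m, x) for m in models)
    return votes.most_common(1)[0][0]

# Usage: two well-separated classes; each chip is trained independently
# (on separate workers in a real distributed system).
random.seed(0)
data = ([(random.gauss(0, 1), "a") for _ in range(300)]
        + [(random.gauss(5, 1), "b") for _ in range(300)])
models = [train_centroid_classifier(chip) for chip in split_into_chips(data, 4)]
print(ensemble_predict(models, 0.2))
print(ensemble_predict(models, 4.8))
```

Because each chip is trained independently, this scheme parallelizes with no communication during training; the trade-off the paper evaluates is how such partitioning affects classification accuracy and stability.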