Learning without Forgetting: A New Framework for Network Cyber Security Threat Detection

Abstract

Progressive learning addresses the problem of incrementally learning new tasks without compromising the prediction accuracy of previously learned tasks. In the context of artificial neural networks, several algorithms exist for achieving the progressive learning goal of learning without forgetting. However, these algorithms have traditionally been tested on well-known, widely available datasets from image understanding and computer vision. Little work has explored the suitability of progressive learning algorithms in the important area of network threat detection. On a more fundamental level, progressive learning algorithms still face the challenge of predicting the ultimate ability of a given neural network architecture to add more tasks to its repertoire without undergoing catastrophic forgetting. The goal of this paper is to address this challenge in the context of cyber security threat detection. It does so by providing a unified conceptual and computational framework in which progressive learning algorithms can be analyzed, compared, and contrasted in terms of their learning capacity and prediction accuracy on specific datasets from the cloud cyber security domain. In particular, this paper provides rigorous metrics for predicting the onset of catastrophic forgetting in the cyber security domain and contrasts them with their usage in the imaging domain. Our extensive numerical results show that progressive learning, together with the proposed criteria for catastrophic forgetting, provides a structured framework for automating network threat detection as new threats emerge throughout network operation.
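
The abstract only names the learning-without-forgetting idea rather than spelling it out. As a rough, illustrative sketch (not the paper's actual method), the following Python/PyTorch snippet shows the standard distillation-based formulation: a shared backbone with one output head per task is trained on a new task while a knowledge-distillation term keeps the previously learned heads' outputs close to those recorded before the update. All identifiers (MultiHeadNet, lwf_step, lam) are hypothetical, and the snippet assumes PyTorch is available.

import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(new_logits, recorded_logits, temperature=2.0):
    # Knowledge-distillation term: keep the updated network's old-task outputs
    # close to the outputs recorded before training on the new task.
    old_probs = F.softmax(recorded_logits / temperature, dim=1)
    new_log_probs = F.log_softmax(new_logits / temperature, dim=1)
    return -(old_probs * new_log_probs).sum(dim=1).mean()

class MultiHeadNet(nn.Module):
    # Shared feature extractor with one output head per learned task.
    def __init__(self, in_features, hidden, task_classes):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, c) for c in task_classes])

    def forward(self, x):
        z = self.backbone(x)
        return [head(z) for head in self.heads]

def lwf_step(model, old_model, x, y, new_task_idx, optimizer, lam=1.0):
    # One training step on a new-task batch (x, y):
    # cross-entropy on the new head plus distillation on every older head.
    # old_model is a frozen copy of the network taken before the new head was added.
    optimizer.zero_grad()
    outputs = model(x)
    with torch.no_grad():
        recorded = old_model(x)
    loss = F.cross_entropy(outputs[new_task_idx], y)
    for t in range(new_task_idx):
        loss = loss + lam * distillation_loss(outputs[t], recorded[t])
    loss.backward()
    optimizer.step()
    return loss.item()

The weight lam trades off plasticity on the new threat-detection task against retention of the older tasks; how far such a multi-head network can be extended before forgetting sets in is the capacity question the paper's metrics are aimed at.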

Date

01 Jan 2021

Publication

IEEE Access
