Detecting Intrusions by Voting Diverse Machine Learners: Is It Really Worth? / Tommaso Zoppi, Andrea Bondavalli, Andrea Ceccarelli. - Electronic. - 2021:(2021), pp. 57-66. (Paper presented at the 2021 IEEE 26th Pacific Rim International Symposium on Dependable Computing (PRDC), held in AUS in 2021) [10.1109/PRDC53464.2021.00017].
Detecting Intrusions by Voting Diverse Machine Learners: Is It Really Worth?
Tommaso Zoppi; Andrea Bondavalli; Andrea Ceccarelli
2021
Abstract
Recent years have seen an astounding growth in the adoption of Machine Learning algorithms to classify data gathered through monitoring activities. These algorithms can effectively classify data such as system indicators, network packets, and logs according to a model they infer during training. This way, they provide sophisticated means to conduct intrusion detection by flagging anomalous values of those features that may be due to attacks. Additionally, Meta-Learners such as Bagging and Boosting build ensembles of homogeneous classifiers that are known to improve classification performance, with a positive impact on intrusion detection. On the other hand, it is not yet clear whether ensembles of heterogeneous, or diverse, classifiers can build better intrusion detectors. To this end, we first recap n-version programming, k-out-of-m (k-o-o-m) systems, and the role of diversity. Then, we present k-o-o-m systems of classifiers for intrusion detection, expanding on meta-learning and on diversity measures to be applied to classifiers. This paves the way for an experimental campaign that exercises supervised and unsupervised classifiers as well as k-o-o-m voting ensembles. After presenting and discussing the results, we conclude that voting ensembles of diverse classifiers do not improve intrusion detection. Therefore, while voting has been acknowledged for decades as a staple of n-version programming for reliable systems engineering, it is not as effective as a meta-learner for improving the classification performance of intrusion detectors.
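For context, a k-out-of-m (k-o-o-m) voting ensemble labels a data point as anomalous when at least k of its m base classifiers do. Below is a minimal sketch of this idea with three heterogeneous (diverse) scikit-learn classifiers on synthetic, imbalanced data; the specific learners, the choice k = 2 of m = 3, and the synthetic dataset are illustrative assumptions, not the configuration evaluated in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split


def koom_predict(classifiers, X, k):
    """Label a sample as an attack (1) if at least k of the m classifiers vote 1."""
    votes = np.sum([clf.predict(X) for clf in classifiers], axis=0)
    return (votes >= k).astype(int)


# Synthetic stand-in for monitored feature data (0 = normal, 1 = attack),
# skewed toward the normal class as intrusion datasets typically are.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Three heterogeneous base learners; with m = 3, k = 2 is majority voting.
classifiers = [RandomForestClassifier(random_state=0), KNeighborsClassifier(), GaussianNB()]
for clf in classifiers:
    clf.fit(X_train, y_train)

y_pred = koom_predict(classifiers, X_test, k=2)
```

Lowering k makes the ensemble more sensitive (any single alarm suffices when k = 1), while k = m requires unanimity; the paper's question is whether any such combination of diverse learners actually beats the individual classifiers.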
| File | Size | Format | |
|---|---|---|---|
| Detecting_Intrusions_by_Voting_Diverse_Machine_Learners_Is_It_Really_Worth.pdf (Closed access; Publisher's PDF, Version of record; License: All rights reserved) | 526.2 kB | Adobe PDF | Request a copy |
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.