Distance-based directional depth classifiers: a robustness study
Demni, Houyem; Porzio, Giovanni Camillo
2021-01-01
Abstract
Contaminated training sets can severely affect the performance of classification rules. For this reason, robust supervised classifiers have been introduced. Among the many, this work focuses on depth-based classifiers, a class of methods that has been shown to enjoy some robustness properties. However, no robustness studies are available for them within a directional data framework. Here, their performance under some directional contamination schemes is evaluated. A comparison with the directional Bayes rule is also provided. Different directional-specific contamination scenarios are introduced and discussed: antipodality and orthogonality of the contaminated distribution mean, and the directional mean shift outlier model.
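To make the setting concrete, the following is a minimal sketch of a distance-based directional depth classifier under antipodal contamination. It assumes the cosine distance depth, D(x; F) = 2 − E_F[1 − x′X], combined with a max-depth classification rule; the simple projected-normal data generator, the contamination rate, and the function names (`sample_directions`, `max_depth_classify`) are illustrative assumptions, not the simulation design of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_directions(mean, kappa_like, n, rng):
    """Hypothetical generator: Gaussian perturbation of a mean direction,
    projected back onto the unit sphere (a projected-normal style
    approximation, not a von Mises-Fisher sampler)."""
    mean = mean / np.linalg.norm(mean)
    x = mean + rng.normal(scale=1.0 / np.sqrt(kappa_like), size=(n, mean.size))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def cosine_distance_depth(x, sample):
    """Empirical cosine distance depth: D(x; F_n) = 2 - mean(1 - x'X_i)."""
    return 2.0 - np.mean(1.0 - sample @ x)

def max_depth_classify(x, training_sets):
    """Assign x to the class whose training sample gives it maximal depth."""
    depths = [cosine_distance_depth(x, s) for s in training_sets]
    return int(np.argmax(depths))

# Two clean classes on the sphere S^2
mu0 = np.array([1.0, 0.0, 0.0])
mu1 = np.array([0.0, 1.0, 0.0])
train0 = sample_directions(mu0, kappa_like=20, n=200, rng=rng)
train1 = sample_directions(mu1, kappa_like=20, n=200, rng=rng)

# Contaminate class 0 with points centred at the antipodal mean -mu0
eps = 0.10                       # illustrative contamination rate
n_bad = int(eps * len(train0))
bad = sample_directions(-mu0, kappa_like=20, n=n_bad, rng=rng)
train0_cont = np.vstack([train0[n_bad:], bad])

# Classify a test point from class 0 with clean vs contaminated training data
x_test = sample_directions(mu0, kappa_like=20, n=1, rng=rng)[0]
print("clean training:          ", max_depth_classify(x_test, [train0, train1]))
print("antipodal contamination: ", max_depth_classify(x_test, [train0_cont, train1]))
```

Repeating the classification over many test points, and replacing the antipodal mean with an orthogonal one, gives a rough feel for the contamination scenarios compared in the study.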
| File | Description | Type | License | Size | Format |
|---|---|---|---|---|---|
| Demni_Messaoud_Porzio_2021_Communications_in_Statistics.pdf | Journal article | Publisher's version (PDF) | Publisher's copyright | 3.08 MB | Adobe PDF |

Access: authorized users only (request a copy).
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.