A Counterexample to Theorems of Cox and Fine
Joseph Y. Halpern
AAAI 1996
We introduce three character degradation models into a boosting algorithm for training an ensemble of character classifiers. We also compare the boosting ensemble with a standard ensemble of networks trained independently with the same character degradation models. An interesting finding of our comparison is that although the boosting ensemble is slightly more accurate than the standard ensemble at zero reject rate, the advantage of boosting over independent training quickly disappears as more patterns are rejected; eventually, the standard ensemble outperforms the boosting ensemble at high reject rates. An explanation of this phenomenon is provided in the paper.
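The comparison above hinges on accuracy measured at different reject rates, i.e. discarding the least-confident patterns before scoring. Below is a minimal sketch of confidence-based rejection for a probability-averaging ensemble; the function name `accuracy_at_reject_rates`, the array shapes, and the random toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def accuracy_at_reject_rates(member_probs, labels, reject_rates):
    """Evaluate an averaged ensemble at several reject rates.

    member_probs: shape (n_members, n_samples, n_classes), each member's
                  predicted class probabilities (illustrative input format).
    labels:       true class indices, shape (n_samples,).
    reject_rates: fractions of the least-confident samples to reject.
    Returns a dict mapping reject rate -> accuracy on the accepted samples.
    """
    ensemble_probs = member_probs.mean(axis=0)   # average the member outputs
    confidence = ensemble_probs.max(axis=1)      # ensemble confidence per sample
    predictions = ensemble_probs.argmax(axis=1)
    order = np.argsort(confidence)               # least confident samples first
    n = len(labels)
    results = {}
    for r in reject_rates:
        kept = order[int(r * n):]                # drop the r least-confident fraction
        results[r] = float((predictions[kept] == labels[kept]).mean())
    return results

# Toy usage with random data (purely illustrative).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=(5, 1000))  # 5 members, 1000 samples, 10 classes
labels = rng.integers(0, 10, size=1000)
print(accuracy_at_reject_rates(probs, labels, [0.0, 0.1, 0.3]))
```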