Improving simple models with confidence profiles
Amit Dhurandhar, Ronny Luss, et al.
NeurIPS 2018
An ensemble of random decision trees is a popular classification technique, known especially for its ability to scale to large domains. In this paper, we provide an efficient strategy for computing bounds on the moments of this technique's generalization error, where the moments are taken over all datasets of a particular size drawn from an underlying distribution. Being able to estimate these moments gives insight into the model's performance. As we show in the experimental section, these bounds tend to be significantly tighter than the state-of-the-art Breiman bounds based on strength and correlation, and hence more useful in practice.