Publication
ICDAR 2003
Conference paper
Probability table compression for handwritten character recognition
Abstract
This paper presents a new probability table memory compression method based on mixture models and its application to N-tuple recognizers and N-gram character language models. Joint probability tables are decomposed into lower-dimensional probability components and their mixtures. The maximum likelihood parameters of the mixture models are trained by the Expectation-Maximization (EM) algorithm and quantized to one-byte integers. Probability elements that the mixture models do not estimate reliably are kept separately. Experimental results with on-line handwritten UNIPEN uppercase and lowercase characters show that the total memory size of an on-line scanning N-tuple recognizer is reduced from 12.3MB to 0.66MB, while the recognition rate drops from 91.64% to 91.13% for uppercase characters and from 88.44% to 87.31% for lowercase characters. The N-gram character language model was compressed from 73.6MB to 0.58MB with minimal reduction in performance.
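The abstract does not spell out the exact decomposition or how the unreliable elements are handled, so the following is only a minimal sketch of the general idea: a two-dimensional joint probability table is approximated as a mixture of products of one-dimensional distributions, fitted by EM and then quantized to one-byte integers with a shared scale. The function names `em_mixture_decompose` and `quantize_u8`, the component count `K`, and the shared-scale quantization scheme are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def em_mixture_decompose(P, K=8, iters=50, seed=0):
    """Approximate a joint table P[x, y] by sum_k w[k] * a[k, x] * b[k, y]
    via EM. This is a generic mixture-of-products fit, not the paper's
    exact decomposition (which is not specified in the abstract)."""
    rng = np.random.default_rng(seed)
    X, Y = P.shape
    # Random initialization; each component's marginals are distributions.
    w = np.full(K, 1.0 / K)
    a = rng.random((K, X)); a /= a.sum(axis=1, keepdims=True)
    b = rng.random((K, Y)); b /= b.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # E-step: responsibility of component k for each cell (x, y).
        joint = w[:, None, None] * a[:, :, None] * b[:, None, :]  # (K, X, Y)
        r = joint / np.maximum(joint.sum(axis=0, keepdims=True), 1e-12)
        # M-step: reweight responsibilities by the table mass, renormalize.
        weighted = r * P[None, :, :]
        w = weighted.sum(axis=(1, 2))
        a = weighted.sum(axis=2)
        a /= np.maximum(a.sum(axis=1, keepdims=True), 1e-12)
        b = weighted.sum(axis=1)
        b /= np.maximum(b.sum(axis=1, keepdims=True), 1e-12)
        w /= w.sum()
    return w, a, b

def quantize_u8(p):
    """Quantize probabilities to one-byte integers using one shared scale
    (an assumed scheme; the paper only states one-byte quantization)."""
    scale = p.max() / 255.0 if p.max() > 0 else 1.0
    return np.round(p / scale).astype(np.uint8), scale

# Toy usage: compress a 64x64 joint table into K=8 product components,
# shrinking storage from X*Y entries to roughly K*(X + Y) bytes.
P = np.random.default_rng(1).random((64, 64))
P /= P.sum()
w, a, b = em_mixture_decompose(P, K=8)
qa, scale_a = quantize_u8(a)
qb, scale_b = quantize_u8(b)
```

The memory saving comes from storing K weights plus K quantized one-dimensional tables instead of the full joint table; cells the mixture reconstructs poorly would, per the abstract, be stored separately alongside the compressed model.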