Publication
ICDAR 2003
Conference paper

Probability table compression for handwritten character recognition

Abstract

This paper presents a new probability table memory compression method based on mixture models and its application to N-tuple recognizers and N-gram character language models. Joint probability tables are decomposed into lower-dimensional probability components and their mixtures. The maximum likelihood parameters of the mixture models are trained by the Expectation Maximization (EM) algorithm and quantized to one-byte integers. Probability elements that the mixture models do not estimate reliably are kept separately. Experimental results with on-line handwritten UNIPEN uppercase and lowercase characters show that the total memory size of an on-line scanning N-tuple recognizer is reduced from 12.3MB to 0.66MB, while the recognition rate drops from 91.64% to 91.13% for uppercase characters and from 88.44% to 87.31% for lowercase characters. The N-gram character language model was compressed from 73.6MB to 0.58MB with minimal reduction in performance.
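The core idea can be sketched in code: approximate a joint probability table P(a, b) by a mixture of products of one-dimensional components, train the mixture with EM, quantize the parameters to one-byte integers, and store poorly approximated entries in a separate exception table. The sketch below is an illustrative reconstruction under these assumptions, not the authors' implementation; all names (fit_mixture, quantize, the component count K, the exception threshold) are hypothetical.

```python
# Minimal sketch: P(a,b) ~ sum_k w[k] * p[k,a] * q[k,b], trained by EM,
# with one-byte quantization and an exception table for unreliable entries.
import numpy as np

def fit_mixture(counts, K=8, iters=50, seed=0):
    """EM for a mixture-of-products model fitted to a joint count table."""
    rng = np.random.default_rng(seed)
    A, B = counts.shape
    w = np.full(K, 1.0 / K)                  # mixture weights
    p = rng.dirichlet(np.ones(A), size=K)    # p[k, a]: first-variable components
    q = rng.dirichlet(np.ones(B), size=K)    # q[k, b]: second-variable components
    for _ in range(iters):
        # E-step: responsibilities r[k, a, b] proportional to w[k] p[k,a] q[k,b]
        joint = w[:, None, None] * p[:, :, None] * q[:, None, :]
        r = joint / np.maximum(joint.sum(axis=0, keepdims=True), 1e-300)
        # M-step: re-estimate parameters from responsibility-weighted counts
        wc = r * counts[None, :, :]
        mass = wc.sum(axis=(1, 2))
        w = mass / mass.sum()
        p = wc.sum(axis=2) / np.maximum(mass[:, None], 1e-300)
        q = wc.sum(axis=1) / np.maximum(mass[:, None], 1e-300)
    return w, p, q

def quantize(x):
    """Quantize a probability array to one-byte integers with a shared scale."""
    scale = x.max() / 255.0 if x.max() > 0 else 1.0
    return np.round(x / scale).astype(np.uint8), scale

# Toy usage: compress a 64x64 count table into K components plus exceptions.
counts = np.random.default_rng(1).poisson(5.0, size=(64, 64)).astype(float)
target = counts / counts.sum()
w, p, q = fit_mixture(counts)
approx = np.einsum("k,ka,kb->ab", w, p, q)
# Entries the mixture estimates poorly are stored verbatim as exceptions
# (here flagged by a log-probability error threshold, chosen arbitrarily).
bad = np.abs(np.log(np.maximum(approx, 1e-12)) -
             np.log(np.maximum(target, 1e-12))) > 1.0
exceptions = {tuple(idx): target[tuple(idx)] for idx in np.argwhere(bad)}
(p_q, p_scale), (q_q, q_scale) = quantize(p), quantize(q)
print(f"exceptions stored: {len(exceptions)} of {counts.size} table entries")
```

The memory saving comes from replacing the full A x B table with K*(A + B) one-byte parameters plus the (ideally small) exception list, which mirrors the trade-off reported in the abstract between table size and recognition rate.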
