Image Manipulation via Neuro-Symbolic Networks
Harman Singh, Poorva Garg, et al.
NeurIPS 2022
For large-vocabulary handwriting-recognition applications, such as note-taking, word-level language modeling is of key importance: it constrains the recognizer's search and contributes to the scoring of hypothesized texts. We discuss the creation of a word-unigram language model, which associates probabilities with individual words. Typically, such models are derived from a large, diverse text corpus. We describe a three-stage algorithm for deriving a word-unigram model from such a corpus. First is tokenization, the segmenting of the corpus into words. Second, we select for the model a subset of the distinct words found during tokenization. Complexities of these stages are discussed. Finally, we create recognizer-specific data structures for the word set and unigram. Applying our method to a 600-million-word corpus, we generate a 50,000-word model which eliminates 45% of word-recognition errors made by a baseline system employing only a character-level language model. © 2001 IEEE.
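The three stages described in the abstract (tokenization, word-subset selection, and building the unigram structure) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the regex tokenizer, the `vocab_size` cutoff, and the plain-dict output structure are all simplifying assumptions; the paper's tokenizer and recognizer-specific data structures handle complexities this sketch omits.

```python
from collections import Counter
import re

def build_unigram(corpus: str, vocab_size: int) -> dict:
    # Stage 1: tokenization -- segment the corpus into words.
    # (Assumption: a simple lowercase regex split; real tokenizers
    # must handle punctuation, numbers, and case far more carefully.)
    tokens = re.findall(r"[a-z']+", corpus.lower())

    # Stage 2: select a subset of the distinct words for the model --
    # here, the vocab_size most frequent tokens.
    counts = Counter(tokens)
    vocab = [word for word, _ in counts.most_common(vocab_size)]

    # Stage 3: build the model structure: word -> unigram probability,
    # normalized over the selected subset.
    total = sum(counts[word] for word in vocab)
    return {word: counts[word] / total for word in vocab}

model = build_unigram("the cat sat on the mat the cat", vocab_size=2)
```

With this toy corpus, "the" (3 occurrences) and "cat" (2) survive the cutoff, and their probabilities are renormalized over the 5 retained tokens. At the paper's scale (a 600-million-word corpus reduced to 50,000 words), both the counting and the final lookup structure would need to be far more memory-conscious.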
Yang Wang, Zicheng Liu, et al.
CVPR 2007
Daniel A. Vaquero, Rogerio S. Feris, et al.
WACV 2009
W.D. Little, R. Williams
SIGGRAPH 1976