Markov logic networks (MLNs) have been successfully applied to many problems in AI, but the computational complexity of their inference procedures has limited their application. Previous work on lifted inference, lazy inference, and cutting-plane inference has identified cases where the entire ground network need not be constructed. However, these approaches are tied to particular inference procedures and apply well only to certain classes of problems. We introduce focused grounding, a method that can use either general-purpose or domain-specific heuristics to produce only the most relevant ground formulas. Although a solution to the focused grounding is not, in general, a solution to the complete grounding, we show empirically that the smaller search space of a focused grounding makes it easier to locate a good solution. We evaluate focused grounding on two diverse domains: joint entity resolution and abductive plan recognition. For entity resolution, focused grounding improves results and decreases computation cost relative to a complete grounding; for abductive plan recognition, it produces state-of-the-art results in a domain where complete grounding proved intractable. Copyright © 2012, Association for the Advancement of Artificial Intelligence. All rights reserved.
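To make the idea concrete, the following is a minimal illustrative sketch (not the paper's implementation) of the contrast between complete and focused grounding: a first-order formula template is instantiated over all constants, and a heuristic then selects only the most relevant ground formulas. The template syntax, the `focused_grounding` function, and the query-biased heuristic are all hypothetical, chosen only to demonstrate the concept.

```python
from itertools import product

def ground_formula(template, variables, constants):
    """Complete grounding: enumerate every instance of the template."""
    for binding in product(constants, repeat=len(variables)):
        yield template.format(**dict(zip(variables, binding)))

def focused_grounding(template, variables, constants, heuristic, k):
    """Focused grounding: keep only the k groundings the heuristic
    scores as most relevant (a stand-in for the paper's general-purpose
    or domain-specific heuristics)."""
    grounded = list(ground_formula(template, variables, constants))
    grounded.sort(key=heuristic, reverse=True)  # stable sort, best first
    return grounded[:k]

# Hypothetical example: prefer groundings that mention a query constant.
constants = ["alice", "bob", "carol"]
heuristic = lambda g: 1.0 if "alice" in g else 0.0
top = focused_grounding("Knows({x},{y}) => Trusts({x},{y})",
                        ["x", "y"], constants, heuristic, k=4)
# `top` holds 4 of the 9 possible groundings, all involving "alice".
```

The search space handed to the inference procedure shrinks from all 9 groundings to the 4 the heuristic deems relevant; in a real MLN this pruning is what makes the otherwise intractable ground network manageable.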