In this work, we propose a framework for resolving ambiguity in user-generated natural language queries. We use pragmatics to formalize the refinement of an incoming query into a set of possible interpretations, which we call a response graph. The pragmatics framework assigns each interpretation a likelihood of being correct, as well as a Quality of Information (QoI) score that quantifies how useful we expect the corresponding response to be. We discuss two schemes for traversing the response graph to determine the querent's intended meaning: an up-front one-shot algorithm ("static") and an iterative runtime algorithm ("dynamic"). We analyze the performance of these two schemes using data from simulated conversations between a querent and the system over randomly generated response graphs. We show that both schemes significantly reduce the cost of retrieving the desired information, allowing such a system to make more intelligent decisions about how to handle and respond to natural language queries.
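As a rough illustration of the distinction between the two schemes, consider the following minimal sketch. All names, the toy response graph, the confirmation oracle, and the likelihood-times-QoI scoring rule are illustrative assumptions for exposition, not the formulation developed in this work.

```python
# Hypothetical sketch: static (one-shot) vs. dynamic (iterative) traversal
# of a toy response graph. Scoring and pruning rules are assumptions.

def static_scheme(graph):
    """One-shot: commit up front to the interpretation with the highest
    expected value, scored here as likelihood * QoI."""
    return max(graph, key=lambda n: graph[n]["likelihood"] * graph[n]["qoi"])

def dynamic_scheme(graph, oracle, max_rounds=3):
    """Iterative: propose the current best interpretation each round; if
    the querent rejects it, prune it and retry with the remainder."""
    candidates = dict(graph)
    for _ in range(max_rounds):
        if not candidates:
            break
        best = max(candidates,
                   key=lambda n: candidates[n]["likelihood"] * candidates[n]["qoi"])
        if oracle(best):       # querent confirms this interpretation
            return best
        del candidates[best]   # prune the rejected interpretation
    return None

# Toy response graph: three interpretations of the ambiguous query "java".
graph = {
    "java_language": {"likelihood": 0.5, "qoi": 0.4},
    "java_island":   {"likelihood": 0.3, "qoi": 0.9},
    "java_coffee":   {"likelihood": 0.2, "qoi": 0.7},
}

print(static_scheme(graph))                                     # → java_island
print(dynamic_scheme(graph, oracle=lambda n: n == "java_coffee"))  # → java_coffee
```

The static scheme commits once to the highest-scoring interpretation, while the dynamic scheme trades extra conversational rounds for the ability to recover when its first guess is wrong.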