Credit card fraud analysis is almost entirely automated. However, there are occasions when a human analyst must intervene. In this paper, we consider situations in which a transaction triggers an automated alert, but not with sufficient confidence to allow an automated response. On such occasions, the automated analysis recommends a fraud pattern that may have been identified, and the human analyst decides whether this recommendation is correct and what action to take. To support the analyst, a 'dashboard' can be used to display information relevant to the fraud pattern. Thus, a computer could analyze transaction data, classify it as a known fraud pattern, and then present the analyst with a dashboard illustrating how the transaction might fit that pattern. We explore the efficiency with which people respond to the computer's recommendation, and whether the computer's stated confidence affects this response. We define efficiency in terms of information search: the user will either have the relevant information on screen or will need to drill into the dashboard (i.e., open additional windows for information). The results show that participants adapt their decision making to the confidence of the automated support and, although they drill down even when not required, are efficient in terms of time spent looking at relevant information.