Erik Altman, Jovan Blanusa, et al.
NeurIPS 2023
The Turing Test (TT) is claimed by many to be a way to test for the presence, in computers, of such "deep" phenomena as thought and consciousness. Unfortunately, attempts to build computational systems able to pass TT (or at least restricted versions of this test) have devolved into shallow symbol manipulation designed to, by hook or by crook, trick. The human creators of such systems know all too well that they have merely tried to fool those people who interact with their systems into believing that these systems really have minds. And the problem is fundamental: The structure of the TT is such as to cultivate tricksters. A better test is one that insists on a certain restrictive epistemic relation between an artificial agent (or system) A, its output o, and the human architect H of A - a relation which, roughly speaking, obtains when H cannot account for how A produced o. We call this test the "Lovelace Test" in honor of Lady Lovelace, who believed that only when computers originate things should they be believed to have minds.
Yehuda Naveh, Michal Rimon, et al.
AAAI/IAAI 2006
R. Sebastian, M. Weise, et al.
ECPPM 2022
Arnon Amir, Michael Lindenbaum
IEEE Transactions on Pattern Analysis and Machine Intelligence