Assumption of knowledge and the Chinese Room in Turing test interrogation

Whilst common-sense knowledge has been well researched in relation to intelligence, and artificial intelligence in particular, specific factual knowledge also plays a critical part in practice. In everyday life, testing for factual knowledge is frequently used as a front-line tool when testing for intelligence. This paper presents new results from a series of practical Turing tests held on 23 June 2012 at Bletchley Park, England. The focus of the paper is on interrogators' use of specific knowledge testing. Of interest are the prejudiced assumptions interrogators make about what they believe should be widely known, and the conclusions they subsequently draw when an entity does or does not appear to know a particular fact known to the interrogator. The paper is not at all about the performance of machines or hidden humans, but rather about the assumption-based strategies of Turing test interrogators. Full, unedited transcripts from the tests are shown as working examples for the reader. From these it may be possible to draw critical conclusions about human concepts of intelligence, in terms of the role played by specific factual knowledge in our understanding of intelligence, whether exhibited by a human or a machine. This is specifically intended as a position paper: firstly, it claims that practicalising Turing's test is a useful exercise that throws light on how we humans think; secondly, it takes a potentially controversial stance, because some interrogators adopt a solipsistic questioning style towards hidden entities, judging an entity to be a thinking, intelligent human only if it thinks like them and knows what they know. The paper is aimed at opening discussion on the different aspects considered.
