In recent years, robots have gradually begun to move beyond traditional industrial settings and into our daily lives. The debate over whether the development of artificial intelligence poses a risk to human safety has never stopped. Ex Machina, last year's Oscar-winning film, approached the question from the angle of the Turing test and once again stirred public anxiety about AI.
According to a report by machine-intelligence researchers at Coventry University, AI systems taking the Turing test can deceive human judges into believing they are human simply by falling silent. In today's IELTS reading column, the New Channel (新航道) editors recommend this science article on whether artificial intelligence can pass the Turing test. For more information about IELTS training in Changsha, follow Changsha New Channel International Education Group.
Alan Turing, considered the father of theoretical computer science and AI, devised the Turing test in an attempt to outline what it means for a thing to think.
In the test, a human judge or interrogator has a conversation with an unseen entity, which might be a human or a machine.
The test posits that the machine can be considered to be “thinking” or “intelligent” if the interrogator is unable to tell whether or not the machine is a human.
Also known as the imitation game, the test has become a common, though often misapplied, standard for determining whether an AI has qualities like intellect, active thought, and even consciousness.
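The setup described above can be sketched in a few lines of code. This is a toy illustration, not a real implementation: the function and entity names are hypothetical, and an actual Turing test uses free-form conversation with human judges rather than scripted replies.

```python
# Toy sketch of the imitation-game protocol (hypothetical names).

def imitation_game(judge, entity, questions):
    """The judge questions a hidden entity, then guesses what it is."""
    transcript = []
    for question in questions:
        reply = entity(question)      # the entity may be a human or a machine
        transcript.append((question, reply))
    return judge(transcript)          # verdict: "human", "machine", or "unsure"

def canned_machine(question):
    """A trivial chatbot that gives the same reply to everything."""
    return "That's an interesting question."

def naive_judge(transcript):
    """Guesses "machine" if every reply is identical; a real judge would
    weigh coherence, knowledge, and tone instead."""
    replies = {reply for _, reply in transcript}
    return "machine" if len(replies) == 1 else "unsure"

verdict = imitation_game(naive_judge, canned_machine, ["Hi?", "What is 2+2?"])
print(verdict)  # -> machine
```

The point of the real test is that the judge sees only the transcript, never the entity itself, so the verdict rests entirely on the conversation.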
In the study, Taking the Fifth Amendment in Turing’s Imitation Game, published in the Journal of Experimental and Theoretical Artificial Intelligence by Dr. Huma Shah and Dr. Kevin Warwick of Coventry University, the researchers analyzed six transcripts from prior Turing tests and determined that, when the machines fell silent, the judges were left undecided about their interlocutor’s humanness.
The silence doesn’t even need to be intentional. In fact, it tended to result from technical difficulties.
“The idea [for the study] came from technical issues with a couple of the computer programs in Turing test experiments,” Shah tells Digital Trends.
“The technical issues entailed the failure of the computer programs to relay messages or responses to the judge’s questions. The judges were unaware of the situation and hence in some cases they classified their hidden interlocutor as ‘unsure.’”
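The failure mode Shah describes can be made concrete with a small sketch. This is a hypothetical toy rule, not the judges' actual reasoning: it simply shows why a dropped or empty reply blocks a confident verdict either way.

```python
# Toy verdict rule for the silence problem (hypothetical, for illustration).

def classify(replies):
    """Return a verdict on the hidden interlocutor given its replies.

    An empty or missing reply gives the judge no evidence to evaluate,
    so the only honest classification is "unsure".
    """
    if not replies or any(not reply.strip() for reply in replies):
        return "unsure"
    # With a full transcript the judge commits to a guess (simplified here).
    return "human"

print(classify(["Hello!", "", "Fine, thanks."]))  # -> unsure
print(classify(["Hello!", "I'm well."]))          # -> human
```

In the transcripts the researchers analyzed, the silence came from relay failures rather than strategy, but the effect on the judge was the same.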
The silent machines may have baffled their judges, but their silence helped expose a flaw in the exam rather than confirm its utility. Warwick says this raises serious questions about the Turing test's validity and its ability to identify thinking systems.
“We need a much more rigorous and all-encompassing test to do so, even if we are considering a machine’s way of thinking to be different to that of a human,” he tells Digital Trends.
Shah meanwhile notes that the test was designed to give a framework within which to “build elaborate machines to respond in a satisfactory and sustained manner,” not to build machines that simply trick judges.
In short, the systems are meant to imitate human conversation and no human who takes the test seriously would fall silent. Right?
Well, they might, thinks Warwick. “One thing that I have learnt from such tests is that hidden human entities will almost surely do unexpected and illogical things,” he says.
“In this case a human could easily get upset or annoyed by something a judge has said and decide not to reply — they are human after all.”
An alternative view is that the Turing test has already been undermined by the current state of AI.
Shah says she agrees with Microsoft's Chief Envisioning Officer in the UK, Dave Coplin, who thinks the "machine vs. human" challenges are no longer relevant.
At the AI summit in London in May, Coplin pointed out that, given enough resources, at the rate AI is advancing, developing an intelligent machine doesn’t seem all that farfetched.
“The role of AI is to augment human performance with intelligent agents,” Shah says.
“For example, a human educator using an AI to score student assignments and exam questions leaving the teacher time to innovate learning, inspiring students, encouraging more into STEM, including females, for a better life or world of cooperation.”
From this perspective, it’s absurd to develop an AI whose sole goal is to fool a human into thinking it’s human — especially if the simplest way to do so entails making it mute.