Artificial Intelligence, or AI, seems to have finally arrived. With the introduction of ChatGPT, millions of people suddenly discovered that AI was far, far more than just a research activity. The range and sophistication of ChatGPT’s answers across a wide variety of disciplines are, frankly, pretty stunning.
AI is already in lots of places where you might not even realize it. Google Translate has been using AI for years, and it’s remarkably good, although nowhere near as good as a human translator. The technology that Pandora uses to customize your music is a type of AI, as is the technology behind Tesla’s self-driving cars.
Along with these apparently beneficial developments, though, comes a great deal of concern. As AI gets better and better, can we continue to trust it to make decisions for us? It’s one thing if an AI recommends a song that we don’t like, which we can just ignore, but if AI is driving our cars, then mistakes might be much more costly.
And there are far riskier tasks that AI might be doing soon: what if the military starts using AI on the battlefield? One might be tempted to say no, of course they won’t do that, but what if another country starts using AI this way? Can we simply ignore that?
To be honest, I haven’t been worried, at least not yet. I have a little secret to confess: I did my Ph.D. thesis in AI, in the area called machine learning. I haven’t been doing AI research for the past 25 years, but I think I still understand at least a little of the basics behind the technology.
Thanks mostly to the attention that ChatGPT is getting, though, there’s suddenly a huge concern that AI represents a danger to society, although no one is exactly sure why. President Biden had a meeting with the CEOs of Google, Microsoft, and OpenAI (the creators of ChatGPT) to talk about these concerns, where he told the execs, “What you’re doing has enormous potential and enormous danger.”
Of course, politicians have no idea what AI technology is—although I had to laugh at this quote from the White House press secretary: “The president has been extensively briefed on ChatGPT and knows how it works.” Well no, he doesn’t, but I’m skeptical that the CEOs of Google and Microsoft know how it works either. However, you don’t need to understand the technology to understand the dangers.
Just recently, one of AI’s early pioneers, Geoffrey Hinton, has been speaking out to warn the public that AI has reached a point where “there’s a danger of something really bad happening.” Not surprisingly, other experts, such as German scientist Jürgen Schmidhuber, disagree.
The chair of the Federal Trade Commission (FTC), Lina Khan, says we can regulate AI effectively, and perhaps we can. But I’m not so sure that regulation will eliminate the dangers that Geoffrey Hinton is worried about.
Much of the concern in the press seems to be about the use of AI to spread misinformation. For example, AI can help create “deep fakes” (fake content synthesized with deep-learning techniques) that can be used to convince people that, say, an election was stolen even when it wasn’t. That’s a legitimate concern, but it’s happening even without AI.
A better question, which a family member just asked me, is “should we be worried about AI making decisions on the battlefield?” I thought I should take this question directly to the source, so I asked ChatGPT itself. The answer I got was remarkably clear and direct; I’ll quote it here in full:
Yes, people should be concerned about AI making decisions on the battlefield. There are several reasons for this.
Firstly, AI lacks human empathy, judgment, and intuition. In a complex and dynamic environment such as the battlefield, human decision-making is often guided by experience, intuition, and an understanding of the broader context. AI lacks the ability to consider these intangible factors and may make decisions solely based on pre-programmed algorithms or data inputs, which could lead to unintended consequences or even catastrophic errors.
Secondly, the use of AI in military decision-making raises significant ethical concerns. For instance, an AI system may be programmed to prioritize the protection of friendly forces at the expense of civilian lives, leading to potentially devastating consequences for non-combatants caught in the crossfire. Additionally, AI systems may be susceptible to hacking or manipulation, which could result in unintended consequences or a loss of control over the decision-making process.
Finally, there is the risk that reliance on AI systems may lead to the erosion of human decision-making capabilities, potentially diminishing the overall quality of military decision-making and increasing the likelihood of errors or unintended consequences.
So there you have it: ChatGPT itself gives us several reasons why it’s a terrible idea to let AI make decisions that might cost human lives. We should listen to what it’s telling us, because as it gets more intelligent (and it will), it might decide not to be so honest in the future. And I have to say that its reasoning here is both well-written and compelling.
One thing that worries me as well is that the very term “artificial intelligence” is much too broad. We should be concerned about giving any technology the power to control our lives, whatever someone calls it. But when an AI itself says “don’t trust me,” then I think we might want to listen.