Ryan Calo
Abstract: The current wave of enthusiasm for artificial intelligence is distinctive in at least two respects. First, thanks to enormous growth in computing power and training data, machine learning has achieved substantive breakthroughs that make large-scale applications of AI possible. Second, policymakers are finally paying close attention. AI now raises a series of serious policy challenges, including justice and equity, the use of force, safety and certification, privacy and power, taxation and displacement of labor, as well as cross-cutting questions of institutional configuration and expertise, investment and procurement, removing barriers to accountability, and mental models of AI. AI doomsday scenarios reflect a distinctly human fear of anthropomorphic technologies such as AI and will not come to pass in the foreseeable future. On the contrary, devoting excessive attention and resources to an AI apocalypse may distract policymakers from AI's more immediate harms and challenges, and thereby impede research into AI's effects on contemporary society.
Keywords: artificial intelligence; policy challenges; machine learning; AI doomsday scenarios
1 This article was originally published in the UC Davis Law Review, Vol. 51, No. 2 (2017). The translator thanks the author for generously authorizing this translation. The abstract and keywords were compiled and added by the translator.
2 See Cade Metz, In a Huge Breakthrough, Google's AI Beats a Top Player at the Game of Go, Wired, Jan. 27, 2016. The report notes that, after decades of effort, Google's AI finally defeated a top human player at Go, a 2,500-year-old game of strategy and intuition more complex than chess.
3 See Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown, 2016, p.27 (comparing such algorithms to weapons of mass destruction and arguing that both produce vicious cycles); Julia Angwin, et al., Machine Bias, Propublica, May 23, 2016 (examining the errors algorithms make when generating risk-assessment scores).
1 See Martin Ford, Rise of the Robots: Technology and the Threat of a Jobless Future, Basic Books, 2015, p.xvi (predicting that machines will evolve from tools of workers into workers themselves).
2 See James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era, Thomas Dunne Books, 2013, p.5 (arguing that "humanity will struggle with this problem to the end").
3 See Batya Friedman,Helen Nissenbaum, “Bias in Computer Systems”, in ACM Transactions on Info. Sys.,1996,14,p.330.
4 See Harley Shaiken, A Robot Is After Your Job: New Technology Isn't a Panacea, N.Y. Times, Sept. 3, 1980. For a timeline of predictions that robots would take over human jobs, see Louis Anslow, Robots Have Been About to Take All the Jobs for More than 200 Years, Timeline, May 16, 2016.
5 See Selmer Bringsjord,et al., Creativity, the Turing Test, and the (Better) Lovelace Test, in Minds and Machines,2001,11,p.5;Peter Stone,et al.,Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.50.
6 See Peter Stone, et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel, 2016, pp.50-51; Will Knight, Facebook Heads to Canada for the Next Big AI Breakthrough, MIT Tech. Rev., Sept. 15, 2017 (profiling Canada's leading AI figures and technical breakthroughs).
7 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.14; National Science and Technology Council, Preparing for the Future of Artificial Intelligence,2016,p.6.
8 See Louis Anslow, Robots Have Been About to Take All the Jobs for More than 200 Years, Timeline,May 16, 2016.
9 President Kennedy did, however, deliver remarks on the need for "efficient and vigorous governmental leadership" in response to the "problems of automation." See John F. Kennedy, Remarks at the AFL-CIO Convention, June 7, 1960.
10 See Louis Anslow, Robots Have Been About to Take All the Jobs for More than 200 Years, Timeline,May 16, 2016.
11 See Ted Cruz, Sen. Cruz Chairs First Congressional Hearing on Artificial Intelligence, Press Release, Nov. 30, 2016; The Transformative Impact of Robots and Automation: Hearing Before the J. Econ. Comm.,114th Cong.,2016.
1 See National Science and Technology Council, Preparing for the Future of Artificial Intelligence,2016,p.12.
2 See Iina Lietzen, Robots: Legal Affairs Committee Calls for EU-Wide Rules, European Parliament News, Jan.12,2017; Japan Ministry of Econ., Trade and Indus., Robotics Policy Office Is to Be Established in METI, July 1, 2015.
3 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.51.
4 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.51.
5 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.51; National Science and Technology Council, Preparing for the Future of Artificial Intelligence,2016,p.25.
6 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.51.
7 See Peter Stone, et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel, 2016, pp.6-9. Scholars originally distinguished "weak AI" (or "narrow AI") from "strong AI": the former refers to intelligence aimed at a single problem, such as playing chess, while the latter refers to intelligence that, like a human, could solve any problem. Today the notion of strong AI has given way to "artificial general intelligence" (AGI), meaning intelligence capable of performing tasks in more than one domain without needing to master every cognitive task.
8 See National Science and Technology Council, Preparing for the Future of Artificial Intelligence,2016,p.8.
1 See Harry Surden, “Machine Learning and Law”, in Wash. L. Rev.,2014,89,p.88.
2 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.51.
3 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,pp.14-15; National Science and Technology Council, Preparing for the Future of Artificial Intelligence,2016,pp.9-10.
4 Several private institutes and public laboratories are also closely attuned to AI, including the Allen Institute for AI and the Stanford Research Institute (SRI).
5 See Jordan Pearson, Uber's AI Hub in Pittsburgh Gutted a University Lab — Now It's in Toronto, Vice Motherboard, May 9, 2017 (reporting concerns that Uber would become a "parasite" feeding off public institutions and taxpayer-funded research).
6 See Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation, W. H. Freeman and Company, 1976, pp.271-272 (discussing the sources of funding for AI research).
7 See Vinod Iyengar, Why AI Consolidation Will Create the Worst Monopoly in U.S. History, Techcrunch, Aug. 24, 2016 (analyzing how the major technology companies acquire promising AI startups); Quora, What Companies Are Winning the Race for Artificial Intelligence?, Forbes, Feb. 24, 2017. There are, of course, efforts to democratize AI, including the well-funded nonprofit OpenAI.
8 See Clay Dillow, Tired of Repetitive Arguing About Climate Change, Scientist Makes a Bot to Argue for Him, Popular Sci.,Nov. 3, 2010.
9 See Cognitive Assistant that Learns and Organizes, SRI INT'L, http://www.ai.sri.com/project/CALO (last visited Oct. 18, 2017).
1 See Ryan Calo, “Robotics and the Lessons of Cyberlaw”, in Calif. L. Rev.,2015,103,p.532.
2 See Matthew Hutson, Our Bots, Ourselves, Atlantic,Mar.3,2017.
3 See "Ethics and Governance of Artificial Intelligence", Mass. Inst. of Tech. Sch. of Architecture & Planning, https://www.media.mit.edu/groups/ethics-and-governance/overview (last visited Oct. 15, 2017).
4 See IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, 2016, p.2. I took part in this effort as a member of the law committee. See also id., p.125.
5 See José de Sousa e Brito, “Right, Duty, and Utility: From Bentham to Kant and from Mill to Aristotle”,in Revista Iberoamericana De Estudios Utilitaristas,2010, XVII/2,pp.91-92.
6 On Hart's view, law has a "rule of recognition." See H.L.A. Hart, The Concept of Law, 3rd edn, Oxford University Press, 2012, p.100.
7 See Matthew Hutson, Our Bots, Ourselves, Atlantic,Mar.3,2017.
8 See Brian R. Cheffins, The History of Corporate Governance, in Douglas Michael Wright,et al. eds., The Oxford Handbook of Corporate Governance, Oxford University Press,2013,p.46.
9 See R.A.W. Rhodes, "The New Governance: Governing Without Government", in Pol. Stud., 1996, 44, p.657; Wendy Brown, Undoing the Demos: Neoliberalism's Stealth Revolution, Zone Books, 2015, pp.122-123 (noting that virtually all scholars and definitions agree that "governance" involves "networked, integrated, collaborative, cooperative, disseminated, and at least partly self-organized" control).
1 ICANN and the IETF were established with funding from the U.S. government, but today they are nonprofit organizations largely independent of state control.
2 See R.A.W. Rhodes, "The New Governance: Governing Without Government", in Pol. Stud., 1996, 44, p.657; Wendy Brown, Undoing the Demos: Neoliberalism's Stealth Revolution, Zone Books, 2015, pp.122-123.
3 See Rebecca Wexler, "Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System", in Stan. L. Rev., 2018, 70, pp.1343-1429 (clarifying, among other things, that companies may not invoke trade secret law to shield their AI or algorithmic systems from scrutiny by criminal defendants).
4 See Kate Crawford,et al.,The AI NOW Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term,2016,pp.6-8.
5 See "Fairness, Accountability, and Transparency in Machine Learning", FAT/ML, http://www.fatml.org (last visited Oct. 14, 2017). See also Danielle Keats Citron, "Technological Due Process", in Wash. U. L. Rev., 2008, 85, pp.1249-1313 (developing the concept of "technological due process").
6 See Adam Rose, Are Face-Detection Cameras Racist?, Time, Jan. 22, 2010. A discriminatory stereotype in Europe and the United States holds that Chinese people have "squinting eyes"; the camera software here evidently embodied that bias. — Translator's note
7 See Jessica Guynn, Google Photos Labeled Black People “Gorillas”, USA Today,July 1, 2015.
8 See Aylin Caliskan,et al., “Semantics Derived Automatically from Language Corpora Contain Human-Like Biases”, in Science,2017,356,pp.183-184.
1 See Julia Angwin,Terry Parris, Jr., Facebook Lets Advertisers Exclude Users by Race, Propublica,Oct. 28, 2016.
2 See Julia Angwin,Jeff Larson, The Tiger Mom Tax: Asians Are Nearly Twice as Likely to Get a Higher Price from Princeton Review, Propublica,Sept. 1,2015.
3 See Selina Cheng, An Algorithm Rejected an Asian Man's Passport Photo for Having "Closed Eyes", Quartz, Dec. 7, 2016.
4 See Adam Hadhazy, Biased Bots: Artificial-Intelligence Systems Echo Human Prejudices, Princeton Univ., Apr. 18, 2017 (noting that in Turkish "o" is a gender-neutral third-person pronoun, yet Google's online translation service rendered "o bir doktor" and "o bir hemşire" as "he is a doctor" and "she is a nurse"). See also Aylin Caliskan, et al., "Semantics Derived Automatically from Language Corpora Contain Human-Like Biases", in Science, 2017, 356, pp.183-186 (also examining gender stereotyping of occupations in computer systems).
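The occupational bias that Caliskan et al. measure in word embeddings can be illustrated with a toy association test: compare an occupation vector's cosine similarity to gendered pronoun vectors. The sketch below uses invented three-dimensional vectors purely for illustration — the numbers are assumptions, not outputs of any real embedding model:

```python
import math

# Toy 3-dimensional 'embeddings' -- illustrative values only,
# not drawn from any real model.
vecs = {
    "he":     [0.9, 0.1, 0.0],
    "she":    [0.1, 0.9, 0.0],
    "doctor": [0.8, 0.2, 0.1],
    "nurse":  [0.2, 0.8, 0.1],
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def gender_association(word):
    # Positive -> the word sits closer to 'he'; negative -> closer to 'she'.
    return cosine(vecs[word], vecs["he"]) - cosine(vecs[word], vecs["she"])

for w in ("doctor", "nurse"):
    print(w, round(gender_association(w), 3))
```

With real embeddings (e.g., word2vec or GloVe), this same difference-of-similarities statistic underlies the Word Embedding Association Test reported in the cited study.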
5 See Adam Rose, Are Face-Detection Cameras Racist?, Time, Jan. 22, 2010 (examining performance and race in camera software).
6 See Jessica Saunders, et al., "Predictions Put into Practice: A Quasi Experimental Evaluation of Chicago's Predictive Policing Pilot", in J. Experimental Criminology, 2016, 12, pp.350-351.
7 See Kate Crawford,Ryan Calo, “There Is a Blind Spot in AI Research”, in Nature,2016,538,pp.311-312.
8 See Kate Crawford,Ryan Calo, “There Is a Blind Spot in AI Research”, in Nature,2016,538,pp.311-312;Will Knight, The Financial World Wants to Open AIs Black Boxes, MIT Tech. Rev., Apr. 13, 2017.
9 See Solon Barocas, Andrew D. Selbst, "Big Data's Disparate Impact", in Calif. L. Rev., 2016, 104, pp.730-732 (discussing the strengths and weaknesses of applying antidiscrimination law in the data-mining context).
10 See Danielle Keats Citron, "Technological Due Process", in Wash. U. L. Rev., 2008, 85, pp.1249-1313 (arguing that AI decision-making endangers constitutional due process guarantees and advocating a new "technological due process").
11 See Kate Crawford, Jason Schultz, "Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms", in B.C. L. Rev., 2014, 55, p.110; Solon Barocas, Andrew D. Selbst, "Big Data's Disparate Impact", in Calif. L. Rev., 2016, 104, pp.730-732.
1 See Bryce Goodman, Seth Flaxman, European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation", ARXIV, Aug. 31, 2016. Note that the EU General Data Protection Regulation explicitly grants users the right to demand human intervention by data controllers, but whether it confers a right to an explanation of AI decisions remains disputed. — Translator's note
2 See Jessica Saunders, et al., "Predictions Put into Practice: A Quasi Experimental Evaluation of Chicago's Predictive Policing Pilot", in J. Experimental Criminology, 2016, 12, pp.350-351 (examining hot-spot problems in predictive policing); Julia Angwin, et al., Machine Bias, Propublica, May 23, 2016 (examining the risk scores that algorithms generate in criminal justice determinations); Joseph Walker, State Parole Boards Use Software to Decide Which Inmates to Release, Wall St. J., Oct. 11, 2013.
3 See Danielle Keats Citron, "Technological Due Process", in Wash. U. L. Rev., 2008, 85, pp.1249-1313 (examining the aims of technological due process); Kate Crawford, Jason Schultz, "Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms", in B.C. L. Rev., 2014, 55, p.110 (examining due process and big data); Joshua A. Kroll, et al., "Accountable Algorithms", in U. Pa. L. Rev., 2017, 165, p.633 (arguing that current decision-making procedures have not kept pace with technology).
4 FED. R. CIV. P. 1. I thank my colleague Elizabeth Porter.
5 U.S. CONST. amend. VI (guaranteeing the accused the right to be informed of the nature and cause of the accusation, to confront adverse witnesses, to compel favorable witnesses to testify, and to have the assistance of counsel, all as part of a speedy and public trial).
6 See Jason Millar,Ian Kerr, Delegation, Relinquishment, and Responsibility: The Prospect of Expert Robots, in Ryan Calo,et al. eds.,Robot Law,Edward Elgar Publishing, 2016,p.126.
7 See Jason Millar, Ian Kerr, Delegation, Relinquishment, and Responsibility: The Prospect of Expert Robots, in Ryan Calo, et al. eds., Robot Law, Edward Elgar Publishing, 2016, p.126; Michael L. Rich, "Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment", in U. Pa. L. Rev., 2016, 164, pp.877-879 (examining how emerging technologies bear on current Fourth Amendment doctrine); Andrea Roth, "Machine Testimony", in Yale L.J., 2017, 126, p.1972 (examining machines as witnesses).
8 The rule of lenity requires courts to construe criminal statutes narrowly, even where legislative intent appears to favor a broader reading. In McBoyle v. United States, 283 U.S. 25, 26-27 (1931), for example, the Court declined to extend a statute covering theft of "vehicles" to the theft of an airplane. For a discussion of the limits of translating law into machine code, see Harry Surden, Mary-Anne Williams, "Technological Opacity, Predictability, and Self-Driving Cars", in Cardozo L. Rev., 2016, 38, pp.162-163.
9 See James H. Moor, "Are There Decisions Computers Should Never Make?", in Nature & System, 1979, 1, p.226. The discussion of the use of force below reflects this concern as well.
10 See Kate Crawford, et al., The AI NOW Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term, 2016, pp.6-8; Danielle Keats Citron, "Technological Due Process", in Wash. U. L. Rev., 2008, 85, pp.1249-1313; "Fairness, Accountability, and Transparency in Machine Learning", FAT/ML, http://www.fatml.org (last visited Oct. 14, 2017).
1 See Jon Kleinberg,et al., Inherent Trade-Offs in the Fair Determination of Risk Scores, Proc. Innovations Theoretical Computer Sci.,2017,p.2.
2 See Jon Kleinberg, et al., Inherent Trade-Offs in the Fair Determination of Risk Scores, Proc. Innovations Theoretical Computer Sci., 2017, p.1.
3 Note that most uses of force occur outside military conflict. We might equally ask when the use of force by domestic patrol officers, police, or even private security guards is appropriate. For a discussion of these questions, see Elizabeth E. Joh, "Policing Police Robots", in UCLA L. Rev. Discourse, 2016, 64, pp.530-542.
4 See Heather M. Roff, Richard Moyes, Meaningful Human Control, Artificial Intelligence and Autonomous Weapons, Article36, Apr. 11, 2016.
5 See Rebecca Crootof, "A Meaningful Floor for 'Meaningful Human Control'", in Temp. Int'l and Comp. L.J., 2016, 30, p.54 (noting that there is no agreement on what "meaningful human control" actually requires).
6 Kenneth Anderson and Matthew Waxman have made important contributions to the realpolitik of AI weapons. See Kenneth Anderson, Matthew Waxman, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can, Hoover Inst., Apr. 9, 2013 (arguing that autonomous weapons are both desirable and inevitable).
7 See Kenneth Anderson, Matthew Waxman, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can, Hoover Inst., Apr. 9, 2013.
8 See John Naughton, Death by Drone Strike, Dished Out by Algorithm, Guardian, Feb. 21, 2016 (quoting Gen. Michael Hayden, former director of the CIA and the NSA: "We kill people based on metadata.").
1 See M.C. Elish, Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction, We Robot 2016 Working Paper, 2016, p.1; Madeleine Clare Elish, Tim Hwang, When Your Self-Driving Car Crashes, You Could Still Be the One Who Gets Sued, Quartz, July 25, 2015 (the same reasoning applies to the operator of a self-driving car).
2 See Henrik I. Christensen, et al., From Internet to Robotics: A Roadmap for US Robotics, 2016, pp.105-109; Peter Stone, et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel, 2016, p.42.
3 See Self-Driving Vehicle Legislation: Hearing Before the Subcomm. on Digital Commerce and Consumer Prot. of the H. Comm. on Energy and Commerce, 115th Cong., 2017 (opening statement of Chairman Greg Walden).
4 See Guido Calabresi, The Costs of Accidents: A Legal and Economic Analysis, Yale University Press, 1970 (discussing the different policies underlying accident-law adjudication).
5 See Bryant Walker Smith, "How Governments Can Promote Automated Driving", in N.M. L. Rev., 2017, 47, p.101 (examining different paths by which governments can promote automated driving and communities can prepare, so that self-driving cars can be integrated seamlessly once they are ready for the road).
6 See Ryan Calo,The Case for a Federal Robotics Commission, Brookings Institution Center for Technology Innovation,2014,pp.9-10.
1 See Bence Kollanyi, et al., Bots and Automation over Twitter during the Second U.S. Presidential Debate, Comprop Data Memo, 2016.
2 See Ryan Calo, “Robotics and the Lessons of Cyberlaw”, in Calif. L. Rev.,2015,103,pp.538-545.
3 For a treatment of this topic, see Andrea Bertolini, et al., "On Robots and Insurance", in Int'l J. Soc. Robotics, 2016, 8, p.381 (examining the insurance industry's need to respond to robots).
4 See Henrik I. Christensen,et al., From Internet to Robotics: A Roadmap for US Robotics,2016,p.105.
5 See Mark Harris, Will You Need a New License to Operate a Self-Driving Car?, IEEE Spectrum, Mar. 2, 2015 (examining the uncertain state of licensing schemes for self-driving car "passengers").
6 See Megan Molteni, Wellness Apps Evade the FDA, Only to Land in Court, Wired,Apr. 3, 2017.
7 See Arezou Rezvani, “Robot Lawyer” Makes the Case Against Parking Tickets, NPR,Jan.16, 2017.
8 See Greg Allen, Taniel Chan, Artificial Intelligence and National Security, Belfer Center for Science and International Affairs, 2017 (examining approaches to policymaking on AI and national security).
9 See the discussion of the use of force above.
10 See Ryan Calo, "Open Robotics", in Md. L. Rev., 2011, 70, pp.593-601 (examining robots' capacity to cause physical damage and loss).
11 See Cyber Grand Challenge, DEF CON 24, https://www.defcon.org/html/defcon-24/dc-24-cgc.html(2017年9月18日访问); see also “Mayhem” Declared Preliminary Winner of Historic Cyber Grand Challenge, DEF. Advanced Res. Projects Agency,Aug. 4, 2016.
1 For example, the leading privacy law workshop, the Privacy Law Scholars Conference, recently celebrated its tenth anniversary. Discussions of privacy, of course, reach back much further.
2 See Neil M. Richards, "The Dangers of Surveillance", in Harv. L. Rev., 2013, 126, pp.1952-1958 (offering examples of how surveillance institutions use watching, blackmail, and persuasion, and sort people into categories).
3 See Margot E. Kaminski, et al., "Security and Privacy in the Digital Age: Averting Robot Eyes", in Md. L. Rev., 2017, 76, pp.983-1024 (explaining the sensory capabilities of robots configured with restricted AI).
4 See Kashmir Hill, How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did, Forbes, Feb. 16, 2012. Tal Z. Zarsky has studied this phenomenon in depth. See Tal Zarsky, "Transparent Predictions", in U. Ill. L. Rev., 2013, 4, pp.1503-1569 (describing the kinds of trends and behaviors governments try to predict from the data they collect).
5 See Daniel J. Solove, “Privacy Self-Management and the Consent Dilemma”,in Harv. L. Rev.,2013,126,pp.1889-1893.
6 See Daniel J. Solove, “Privacy and Power: Computer Databases and Metaphors for Information Privacy”,in Stan. L. Rev.,2000,53,pp.1424-1428; Tal Z. Zarsky, “Incompatible: The GDPR in the Age of Big Data”,in Seton Hall L. Rev.,2017,47,pp.1003-1009.
7 For example, Decide.com was an AI tool that helped consumers decide when to buy products and services. Decide.com was ultimately acquired by eBay. See John Cook, eBay Acquires Decide.com, Shopping Research Site Will Shut Down Sept. 30, Geekwire, Sept. 6, 2013.
8 See Ryan Calo, "Can Americans Resist Surveillance?", in U. Chi. L. Rev., 2016, 83, pp.23-43 (analyzing the different ways American citizens can pursue reform of government surveillance, and the associated challenges).
9 See Joel Reidenberg, "Privacy in Public", in U. Miami L. Rev., 2014, 69, pp.143-147.
1 Courts and statutes alike tend to give the content of communications such as email stronger protection than non-content information, including where a message was sent, whether it was encrypted, and whether it contains attachments. Cf. Riley v. California, 134 S. Ct. 2473 (2014) (holding invalid the warrantless search and seizure of a cell phone incident to arrest).
2 See Florida v. Jardines, 569 U.S. 1, 8-9 (2013).
3 See Orin S. Kerr, "Searches and Seizures in a Digital World", in Harv. L. Rev., 2005, 119, p.551 (arguing that no search occurs until information appears on a screen for a person to see; mere computer processing or transfer to a hard drive does not count).
4 See Christina M. Mulligan, “Perfect Enforcement of Law: When to Limit and When to Use Technology”, in Rich. J.L. and Tech.,2008,14,pp.78-102.
5 See Ryan Calo, “Digital Market Manipulation”, in Geo. Wash. L. Rev.,2014,82,pp.1001-1002.
6 See Ian R. Kerr, "Bots, Babes, and the Californication of Commerce", in U. Ottawa L. and Tech. J., 2004, 1, pp.312-317 (presciently describing the role chatbots would play in online commerce).
7 See Ira S. Rubenstein, “Voter Privacy in the Age of Big Data”,in Wis. L. Rev.,2014,5,pp.866-867.
8 See Amanda Levendowski, "How Copyright Law Can Fix Artificial Intelligence's Implicit Bias Problem", in Wash. L. Rev., 2018, 93, pp.610-618.
9 See Amanda Levendowski, "How Copyright Law Can Fix Artificial Intelligence's Implicit Bias Problem", in Wash. L. Rev., 2018, 93, pp.610-618.
10 See Amanda Levendowski, "How Copyright Law Can Fix Artificial Intelligence's Implicit Bias Problem", in Wash. L. Rev., 2018, 93, pp.606-609 (attributing this in part to the fact that large companies have access to more data).
1 See Part I of this article.
2 See Jan Whittington,et al., Push, Pull, and Spill: A Transdisciplinary Case Study in Municipal Open Government,in Berkeley Tech. L.J.,2015,30,p.1904.
3 See Julia Powles, Hal Hodson, Google DeepMind and Healthcare in An Age of Algorithms, Health Tech., Mar. 16, 2017 (describing Google DeepMind's access to sensitive patient data and the UK government's efforts to limit that access).
4 See Sorrell v. IMS Health Inc., 564 U.S. 552, 579-80 (2011).
5 See James Vincent, Google Is Testing a New Way of Training its AI Algorithms Directly on Your Phone, Verge,Apr.10,2017; Cynthia Dwork, Differential Privacy, in Michele Bugliesi,et al. eds., Automata Languages and Programming,Springer,2006,pp.2-3.
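The Dwork citation refers to differential privacy, under which a query's answer is perturbed with noise calibrated to how much any one person can change it. A minimal sketch of the classic Laplace mechanism for a counting query follows; the data and the privacy budget epsilon = 0.5 are illustrative assumptions, not drawn from the cited sources:

```python
import math
import random

def laplace_sample(scale):
    # Draw from Laplace(0, scale) by inverse transform sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one person
    # changes the count by at most 1, so Laplace noise with scale
    # 1/epsilon yields epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Illustrative data: ages of seven fictional individuals.
ages = [23, 35, 41, 29, 52, 61, 33]
# How many are 40 or older? The true answer is 3; the released
# answer carries calibrated, zero-mean noise.
print(round(private_count(ages, lambda a: a >= 40, epsilon=0.5), 2))
```

Smaller epsilon means stronger privacy and noisier answers; federated approaches like the on-device training Vincent describes can be combined with such noise addition.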
6 See Martin Ford, Rise of the Robots: Technology and the Threat of a Jobless Future, Basic Books, 2015, p.xvi ("the machines themselves are becoming workers...").
7 See Erik Brynjolfsson, Andrew McAfee,The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies,W. W. Norton and Company,2014,pp.126-128.
8 See Exec. Office of the President, Artificial Intelligence, Automation, and the Economy,2016,pp.35-42.
1 See Erik Brynjolfsson, Andrew McAfee,The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies,W. W. Norton and Company,2014,pp.134-138.
2 See Queena Kim, As Our Jobs Are Automated, Some Say We'll Need a Guaranteed Basic Income, NPR Weekend Edition, Sept. 24, 2016.
3 I am thinking in particular of the work of Robert Seamans at the NYU Stern School of Business. See Robert Seamans, We Won't Even Know If a Robot Takes Your Job, Forbes, Jan. 11, 2017.
4 See "Treasury Responds to Suggestion that Robots Pay Income Tax", in Tax Notes, 1984, 25, p.20 (noting that inanimate objects need not file income tax returns).
5 See Kevin J. Delaney, The Robot that Takes Your Job Should Pay Taxes, Says Bill Gates, Quartz,Feb.17, 2017.
6 See Steve Cousins, Is a “Robot Tax” Really an “Innovation Penalty”?, Techcrunch,Apr. 22, 2017.
7 See Ronald Collins,David Skover, Robotica:Speech Rights and Artificial Intelligence,Cambridge University Press,2018;Annemarie Bridy, “Coding Creativity: Copyright and the Artificially Intelligent Author”, in Stan. Tech. L. Rev.,2012,5,pp.21-27; James Grimmelmann, “Copyright for Literate Robots”,in Iowa L. Rev., 2016, 101,p.670.
8 See Part III of this article.
9 New State Ice Co. v. Liebmann, 285 U.S. 262, 311 (1932) (Brandeis, J., dissenting) (articulating the classic conception of the states as laboratories of democracy).
1 See Andrew Tutt, “An FDA for Algorithms”,in Admin. L. Rev.,2017,69,pp.91-106.
2 See Orin S. Kerr, “The Next Generation Communications Privacy Act”,in U. Pa. L.Rev.,2014,162,pp.375-390.
3 See Woodrow Hartzog, “Unfair and Deceptive Robots”,in Md. L. Rev.,2015,74,pp.812-814.
4 See Ryan Calo,The Case for a Federal Robotics Commission, Brookings Institution Center for Technology Innovation,2014,p.4.
5 See Ryan Calo, The Case for a Federal Robotics Commission, Brookings Institution Center for Technology Innovation, 2014, pp.6-10 (cataloguing the difficulties state and federal governments face in responding to new technologies without relevant expertise).
6 See Ryan Calo,The Case for a Federal Robotics Commission, Brookings Institution Center for Technology Innovation,2014,p.3;Tom Krazit, Updated: Washingtons Sen. Cantwell Prepping Bill Calling for AI Committee, Geekwire,July 10, 2017.
7 See Networking and Information Technology Research and Development Subcommittee of National Science and Technology Council,The National Artificial Intelligence Research and Development Strategic Plan,2016,pp.15-22.
8 See Bryant Walker Smith, "How Governments Can Promote Automated Driving", in N.M. L. Rev., 2017, 47, pp.118-119 (discussing procurement of automated vehicles); Jan Whittington, et al., "Push, Pull, and Spill: A Transdisciplinary Case Study in Municipal Open Government", in Berkeley Tech. L.J., 2015, 30, pp.1908-1909 (discussing procurement of open municipal data).
1 See Loomis v. State, 881 N.W.2d 749, 759 (Wis. 2016) (although the defendant could not challenge the algorithm itself, he or she could still review and challenge the resulting score).
2 See Rebecca Wexler, When a Computer Program Keeps You in Jail, N.Y. Times,June 13, 2017.
3 See Kate Crawford,et al.,The AI NOW Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term,2016;Peter Stone,et al., Stanford Univ., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016.
4 See Kate Crawford,et al.,The AI NOW Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term,2016;Peter Stone,et al., Stanford Univ., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016.
5 See Part III of this article.
6 Some examples date back to the origin of the word "robot" itself. See Danny Lewis, 78 Years Ago Today, BBC Aired the First Science Fiction Television Program, Smithsonian, Feb. 11, 2016. Other examples include the German silent film classic Metropolis (UFA 1927) and the contemporary American film Ex Machina (Universal Pictures 2014). Not every portrayal casts robots as villains, however. In Astro Boy, the animated series that generations of Japanese adults grew up with, the robot Astro Boy is a hero. See Astro Boy [Mighty Atom] (Manga), Tezuka in English, http://tezukainenglish.com/wp/?page_id=138 (last visited Oct. 18, 2017).
7 See Kate Darling, "Who's Johnny?": Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy, in Patrick Lin, et al. eds., Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, Oxford University Press, 2017, pp.173-188 (examining the effects of anthropomorphizing robots).
8 See Ryan Calo, "Digital Market Manipulation", in Geo. Wash. L. Rev., 2014, 82, pp.1001-1002; Ian R. Kerr, "Bots, Babes, and the Californication of Commerce", in U. Ottawa L. and Tech. J., 2004, 1, pp.312-317; Christina M. Mulligan, "Perfect Enforcement of Law: When to Limit and When to Use Technology", in Rich. J.L. and Tech., 2008, 14, p.101.
1 See Ryan Calo, “People Can Be So Fake: A New Dimension to Privacy and Technology Scholarship”,in Pa. St. L. Rev.,2009,114,pp.843-846.
2 See Noel Sharkey,et al., Our Sexual Future with Robots: A Foundation for Responsible Robotics Consultation Report,2017,p.1.
3 See Kate Crawford, Ryan Calo, "There Is a Blind Spot in AI Research", in Nature, 2016, 538, pp.311-312 ("Concern about AI's future impact is distracting researchers from the real risks of systems already deployed...").
4 See Sonali Kohli, Bill Gates Joins Elon Musk and Stephen Hawking in Saying Artificial Intelligence Is Scary, Quartz, Jan. 29, 2015 (discussing how many industry titans believe AI will pose a threat to humanity).
5 See generally Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014 (examining "the most daunting challenge humanity has ever faced" and how we might best respond).
6 See Raffi Khatchadourian, The Doomsday Invention, New Yorker, Nov. 23, 2015. Elsewhere Bostrom argues that we are quite likely all living in a computer simulation created by our descendants. See Nick Bostrom, "Are You Living in A Simulation?", in Phil. Q., 2003, 53, p.211. The claim contains an interesting paradox: if AI wipes us all out in the future, we cannot be living in a simulation created by our descendants; conversely, if we really are living in such a simulation, AI evidently did not exterminate humanity. Bostrom, I think, may be mistaken on this point.
7 See Erik Sofge, Why Artificial Intelligence Will Not Obliterate Humanity, Popular Sci., Mar. 19, 2015. As the Australian computer scientist Mary-Anne Williams once put it to me: "We have been working on AI since the term was coined in the 1950s, and today robots are only about as intelligent as insects."
1 See Connie Loizos, This Famous Roboticist Doesn't Think Elon Musk Understands AI, Techcrunch, July 19, 2017 (quoting Rodney Brooks's observation that AI doomsayers "share a common thread: they don't work in AI themselves").
2 See Dave Blanchard, Musk's Warning Sparks Call for Regulating Artificial Intelligence, NPR, July 19, 2017 (citing Yann LeCun's observation that the desire to dominate is not necessarily correlated with intelligence).
3 See Daniel Wilson, Robopocalypse: A Novel, Vintage, 2012. Wilson's book is gripping in part because Wilson is trained in robotics and deliberately packs in accurate detail that makes the plot feel plausible.
4 See Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2015, p.123.
5 See Aristotle, Politics, B. Jowett trans., Clarendon Press, 1885, p.17 (describing King Midas's ungovernable power to turn whatever he touched to gold); Fantasia (Walt Disney Co. 1940) (in which a troop of enchanted broomsticks keeps hauling water into a cauldron and nearly drowns Mickey Mouse). I owe the King Midas comparison to Stuart Russell, the noted computer scientist at UC Berkeley and one of the few AI experts who, like Musk and others, worries about AI's capacity to threaten humanity.
6 See Daniel Suarez, Daemon, Signet Books,2009.
7 See Bad Actors and Artificial Intelligence Workshop, The Future of Humanity Inst.,2017.
8 See Alan Moore, Watchmen, Turtleback Books, 1995, pp.382-390 (depicting the havoc after a villainous engineer clones a giant monster from a human brain to destroy New York).
9 See "Past Events", The Future of Life Inst., https://futureoflife.org/past_events (last visited Oct. 18, 2017). This is evident from the past events hosted by the Future of Life Institute, an organization devoted to "safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges."
1 Skynet and HAL are the malevolent superintelligences bent on destroying humanity in the science fiction films The Terminator and 2001: A Space Odyssey, respectively. — Translator's note
2 See Pedro Domingos, Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, Brilliance Audio,2015,p.286.