Methods to Guide Ethical Research and Design
A/IS for Sustainable Development
Embedding Values into Autonomous and Intelligent Systems
Policy
Law
II. General Principles
The ethical and values-based design, development, and implementation of autonomous and intelligent systems should be guided by the following General Principles:
1. Human Rights
A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
2. Well-being
A/IS creators shall adopt increased human well-being as a primary success criterion for development.
3. Data Agency
A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
4. Effectiveness
A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
5. Transparency
The basis of a particular A/IS decision should always be discoverable.
6. Accountability
A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
7. Awareness of Misuse
A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
8. Competence
A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation.
III. Ethical Foundations
Classical Ethics
By drawing from over two thousand five hundred years of classical ethics traditions, the authors of Ethically Aligned Design explored established ethics systems, addressing both scientific and religious approaches, including secular philosophical traditions, to human morality in the digital age. Through reviewing the philosophical foundations that define autonomy and ontology, this work addresses the alleged potential for autonomous capacity of intelligent technical systems and morality in amoral systems, and asks whether decisions made by amoral systems can have moral consequences.
IV. Areas of Impact
A/IS for Sustainable Development
Through affordable and universal access to communications networks and the Internet, autonomous and intelligent systems can be made available to and benefit populations anywhere. They can significantly alter institutions and institutional relationships toward more human-centric structures, and they can address humanitarian and sustainable development issues, resulting in increased individual, societal, and environmental well-being. Such efforts could be facilitated through the recognition of and adherence to established indicators of societal flourishing, such as the United Nations Sustainable Development Goals, so that human well-being is utilized as a primary success criterion for A/IS development.
Personal Data Rights and Agency Over Digital Identity
People have the right to access, share, and benefit from their data and the insights it provides. Individuals require mechanisms to help create and curate the terms and conditions regarding access to their identity and personal data, and to control its safe, specific, and finite exchange. Individuals also require policies and practices that make them explicitly aware of consequences resulting from the aggregation or resale of their personal information.
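The "safe, specific, and finite exchange" described above can be made concrete as a machine-readable record of an individual's sharing terms. The sketch below is purely illustrative: `DataSharingGrant` and its fields are hypothetical names, not an EAD1e or IEEE data format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone, timedelta

@dataclass(frozen=True)
class DataSharingGrant:
    """Hypothetical record of an individual's terms for one specific,
    finite exchange of personal data."""
    recipient: str        # who may access the data
    purpose: str          # the specific use consented to
    attributes: frozenset # which data fields are covered
    expires: datetime     # the grant is finite: it lapses

    def permits(self, recipient: str, purpose: str, attribute: str,
                now: datetime) -> bool:
        """Access is allowed only for the named recipient, the stated
        purpose, a covered attribute, and before expiry."""
        return (recipient == self.recipient
                and purpose == self.purpose
                and attribute in self.attributes
                and now < self.expires)

now = datetime(2019, 1, 1, tzinfo=timezone.utc)
grant = DataSharingGrant(
    recipient="clinic.example",
    purpose="appointment-scheduling",
    attributes=frozenset({"email", "phone"}),
    expires=now + timedelta(days=30),
)

assert grant.permits("clinic.example", "appointment-scheduling", "email", now)
# Aggregation or resale falls outside the stated purpose, so it is denied.
assert not grant.permits("broker.example", "resale", "email", now)
# The grant is finite: after expiry, nothing is permitted.
assert not grant.permits("clinic.example", "appointment-scheduling",
                         "email", now + timedelta(days=31))
```

A default-deny check of this kind is one way a mechanism could make access terms explicit and enforceable rather than implicit in a platform's behavior.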
Legal Frameworks for Accountability
The convergence of autonomous and intelligent systems and robotics technologies has led to the development of systems with attributes that simulate those of human beings in terms of partial autonomy, ability to perform specific intellectual tasks, and even a human physical appearance. The issue of the legal status of complex autonomous and intelligent systems thus intertwines with broader legal questions regarding how to ensure accountability and allocate liability when such systems cause harm. It is clear that:
Autonomous and intelligent technical systems should be subject to the applicable regimes of property law.
Government and industry stakeholders should identify the types of decisions and operations that should never be delegated to such systems. These stakeholders should adopt rules and standards that ensure effective human control over those decisions and how to allocate legal responsibility for harm caused by them.
The manifestations generated by autonomous and intelligent technical systems should, in general, be protected under national and international laws.
Standards of transparency, competence, accountability, and evidence of effectiveness should govern the development of autonomous and intelligent systems.
Policies for Education and Awareness
Effective policy addresses the protection and promotion of human rights, safety, privacy, and cybersecurity, as well as the public understanding of the potential impact of autonomous and intelligent technical systems on society. To ensure that they best serve the public interest, policies should:
Support, promote, and enable internationally recognized legal norms.
Develop government expertise in related technologies.
Ensure governance and ethics are core components in research, development, acquisition, and use.
Regulate to ensure public safety and responsible system design.
Educate the public on societal impacts of related technologies.
V. Implementation
Well-being Metrics
For autonomous and intelligent systems to provably advance a specific benefit for humanity, there need to be clear indicators of that benefit. Common metrics of success include profit, gross domestic product, consumption levels, and occupational safety. While important, these metrics fail to encompass the full spectrum of well-being for individuals, the environment, and society. Psychological, social, economic-fairness, and environmental factors matter. Well-being metrics include such factors, allowing the benefits arising from technological progress to be more comprehensively evaluated and providing opportunities to test for unintended negative consequences that could diminish human well-being. A/IS can improve the capture and analysis of pertinent data, which in turn could help identify where these systems would increase human well-being, providing new routes to societal and technological innovation.
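As an illustration of how metrics beyond a single economic indicator might be combined, the sketch below aggregates scores across the psychological, social, economic-fairness, and environmental dimensions named above. The dimensions, weights, and scoring scheme are all hypothetical, not a metric defined by EAD1e.

```python
# Hypothetical dimensions of well-being, each scored in [0, 1].
DIMENSIONS = ("psychological", "social", "economic_fairness", "environmental")

def well_being_index(scores: dict, weights: dict) -> float:
    """Weighted average over all dimensions. A dimension missing from
    `scores` counts as 0, so an evaluation cannot quietly ignore a
    factor and still score well."""
    total_weight = sum(weights[d] for d in DIMENSIONS)
    return sum(weights[d] * scores.get(d, 0.0) for d in DIMENSIONS) / total_weight

weights = {d: 1.0 for d in DIMENSIONS}   # equal weighting, for illustration
narrow = {"economic_fairness": 0.9}      # strong on one metric only
broad = {d: 0.7 for d in DIMENSIONS}     # moderate on every dimension

# A system optimized for a single metric scores worse than one that
# attends to the full spectrum of well-being.
assert well_being_index(broad, weights) > well_being_index(narrow, weights)
```

The point of the sketch is structural: once the neglected dimensions appear in the denominator, a high score on one metric can no longer mask unintended negative consequences elsewhere.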
Embedding Values into Autonomous and Intelligent Systems
If machines engage in human communities as quasi-autonomous agents, then those agents must be expected to follow the community’s social and moral norms. Embedding norms in such quasi-autonomous systems requires a clear delineation of the community in which they are to be deployed. Further, even within a particular community, different types of technical embodiments will demand different sets of norms. The first step is to identify the norms of the specific community in which the systems are to be deployed and, in particular, norms relevant to the kinds of tasks that they are designed to perform.
Methods to Guide Ethical Research and Design
To create autonomous and intelligent technical systems that enhance and extend human well-being and freedom, values-based design methods must put human advancement at the core of development of technical systems. This must be done in concert with the recognition that machines should serve humans and not the other way around. Systems developers should employ values-based design methods in order to create sustainable systems that can be evaluated in terms of not only providing increased economic value for organizations but also of broader social costs and benefits.
Affective Computing
Affect is a core aspect of intelligence. Drives and emotions such as anger, fear, and joy are often the foundations of actions throughout our lives. To ensure that intelligent technical systems will be used to help humanity to the greatest extent possible in all contexts, autonomous and intelligent systems that participate in or facilitate human society should not cause harm by either amplifying or dampening human emotional experience.
Acknowledgements
Our progress and the ongoing positive influence of this work are due to the volunteer experts serving on all our Committees and IEEE P7000™ Standards Working Groups, along with the IEEE professional staff who support our efforts. Thank you for your dedication toward defining, designing, and inspiring the ethical principles and standards that will ensure that autonomous and intelligent systems and the technologies associated with them will positively benefit humanity.
We wish to thank the Executive Committee and Committees of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems:
Executive Committee Officers
Raja Chatila, Chair
Kay Firth-Butterfield, Vice Chair
John C. Havens, Executive Director
Executive Committee Members
Dr. Greg Adamson, Karen Bartleson, Virginia Dignum, Danit Gal, Malavika Jayaram, Sven Koenig, Eileen M. Lach, Raj Madhavan, Richard Mallah, AJung Moon, Monique Morrow, Francesca Rossi, Alan Winfield, and Hagit Messer Yaron
Committee Chairs
General Principles: Mark Halverson and Peet van Biljon
Embedding Values into Autonomous Intelligent Systems: Francesca Rossi and Bertram F. Malle
Methodologies to Guide Ethical Research and Design: Raja Chatila and Corinne Cath
Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI): Malo Bourgon and Richard Mallah
Personal Data and Individual Agency: Katryna Dow and John C. Havens
Reframing Autonomous Weapons Systems: Peter Asaro
Sustainable Development: Elizabeth Gibbons
Law: Nicolas Economou and John Casey
Affective Computing: John Sullins and Joanna J. Bryson
Classical Ethics in A/IS: Jared Bielby
Policy: Peter Brooks and Mina Hannah
Extended Reality: Monique Morrow and Jay Iorio
Well-being: Laura Musikanski and John C. Havens
Editing: Karen Bartleson and Eileen M. Lach
Outreach: Maya Zuckerman and Ali Muzaffar
Communications: Leanne Seeto and Mark Halverson
High School: Tess Posner
Global Coordination: Victoria Wang, Arisa Ema, and Pavel Gotovtsev
Programs and Projects Inspired by The IEEE Global Initiative:
Ethically Aligned Design University Consortium: Hagit Messer, Chair
Ethically Aligned Design Community: Lisa Morgan, Program Director, Content and Community
Ethics Certification Program for Autonomous and Intelligent Systems: Meeri Haataja, Chair; Ali Hessami, Vice-Chair
Glossary: Sara M. Jordan, Chair
People
We would like to warmly recognize the leadership and constant support of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems by Dr. Ing. Konstantinos Karachalios, Managing Director of the IEEE Standards Association.
We would also like to thank Stephen Welby, Executive Director and Chief Operating Officer of IEEE for his generous and insightful support of the Ethically Aligned Design, First Edition process and The IEEE Global Initiative overall.
We would especially like to thank Eileen M. Lach, the former IEEE General Counsel and Chief Compliance Officer, whose heartfelt conviction that there is a pressing need to focus the global community on highlighting ethical considerations in the development of autonomous and intelligent systems served as a strong catalyst for the development of the Initiative within IEEE.
Finally, we would like to also acknowledge the ongoing work of three Committees of The IEEE Global Initiative regarding their chapters of Ethically Aligned Design that, for timing reasons, we were not able to include in Ethically Aligned Design, First Edition. These Committees include: Reframing Autonomous Weapons Systems, Extended Reality (formerly Mixed Reality), and Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI). We would like to thank Peter Asaro, Monique Morrow and Jay Iorio, and Malo Bourgon and Richard Mallah for their leadership in these groups, along with all their Committee Members. Once these chapters have completed their review and been accepted by IEEE, they could either be included in Ethically Aligned Design, published by The IEEE Global Initiative, or in other publications of IEEE.
For information on disclaimers associated with EAD1e, see How the Document Was Prepared.
From Principles to Practice: Ethically Aligned Design Conceptual Framework
Ethically Aligned Design, First Edition (EAD1e) represents more than a comprehensive report, distilling the consensus of its vast community of creators into a set of high-level ethical principles, key issues, and practical recommendations. EAD1e is an in-depth seminal work, a one-of-a-kind treatise, intended not only to inform a broader public but also to inspire its audience and readership of academics, engineers, policy makers, and manufacturers of autonomous and intelligent systems¹ (A/IS) to take action.
This Chapter, “From Principles to Practice”, provides a mapping of the conceptual framework of Ethically Aligned Design. It outlines the logic behind “Three Pillars” that form the basis of EAD1e, and it connects the Pillars to high-level “General Principles” which guide all manner of ethical A/IS design. Following this, the content of the Chapters of EAD1e is mapped to the Principles. Finally, examples of EAD1e already in practice are described.
Sections in this Chapter:
The Three Pillars of the Ethically Aligned Design Conceptual Framework
The General Principles of Ethically Aligned Design
Mapping the Pillars to the Principles
Mapping the Principles to the Content of the Chapters
From Principles to Practice
Ethically Aligned Design in Implementation
The Three Pillars of the Ethically Aligned Design Conceptual Framework
The Pillars of the Ethically Aligned Design Conceptual Framework fall broadly into three areas, reflecting anthropological, political, and technical aspects:
Universal Human Values: A/IS can be an enormous force for good in society provided they are designed to respect human rights, align with human values, and holistically increase well-being while empowering as many people as possible. They should also be designed to safeguard our environment and natural resources. These values should guide policy makers as well as engineers, designers, and developers. Advances in A/IS should be in the service of all people, rather than benefiting solely small groups, a single nation, or a corporation.
Political Self-Determination and Data Agency: A/IS, if designed and implemented properly, have great potential to nurture political freedom and democracy, in accordance with the cultural precepts of individual societies, when people have access to and control over the data constituting and representing their identity. These systems can improve government effectiveness and accountability, foster trust, and protect our private sphere, but only when people have agency over their digital identity and their data is provably protected.
Technical Dependability: Ultimately, A/IS should deliver services that can be trusted.² This trust means that A/IS will reliably, safely, and actively accomplish the objectives for which they were designed while advancing the human-driven values they were intended to reflect. Technologies should be monitored to ensure that their operation meets predetermined ethical objectives aligning with human values and respecting codified rights. In addition, validation and verification processes, including aspects of explainability, should be developed that could lead to better auditability and to certification³ of A/IS.
The General Principles of Ethically Aligned Design
The General Principles of Ethically Aligned Design have emerged through the continuous work of dedicated, open communities in a multi-year, creative, consensus-building process. They articulate high-level principles that should apply to all types of autonomous and intelligent systems (A/IS). Created to guide behavior and inform standards and policy making, the General Principles define imperatives for the ethical design, development, deployment, adoption, and decommissioning of autonomous and intelligent systems. The Principles consider the role of A/IS creators, i.e., those who design and manufacture, of operators, i.e., those with expertise specific to use of A/IS, other users, and any other stakeholders or affected parties.
The General Principles⁴ of Ethically Aligned Design
Human Rights: A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
Well-being: A/IS creators shall adopt increased human well-being as a primary success criterion for development.
Data Agency: A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
Effectiveness: A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
Transparency: The basis of a particular A/IS decision should always be discoverable.
Accountability: A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
Awareness of Misuse: A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
Competence: A/IS creators shall specify, and operators shall adhere to, the knowledge and skill required for safe and effective operation.
Mapping the Pillars to the Principles
Whereas the Pillars of the Ethically Aligned Design Conceptual Framework represent broad anthropological, political, and technical aspects relating to autonomous and intelligent systems, the General Principles provide contextual filters for deeper analysis and pragmatic implementation.
It is also important to recognize that the General Principles do not live in isolation from EAD’s Pillars, and vice versa. While the General Principle of “Transparency” may inform the design of a specific autonomous or intelligent system, the A/IS must also account for universal human values, political self-determination, and data agency. Moreover, Transparency goes beyond technical features; it is an important requirement for the processes of policy and lawmaking as well. In this way, EAD1e’s Pillars form the holistic ethical grounding upon which the Principles can build, and the latter may apply in various spheres of human activity.
EAD1e Pillars Mapped to General Principles

| EAD General Principles | Universal Human Values | Political Self-Determination and Data Agency | Technical Dependability |
| :--- | :---: | :---: | :---: |
| Human Rights | ● | ● | |
| Well-being | ● | ● | |
| Data Agency | ● | ● | ● |
| Effectiveness | | | ● |
| Transparency | ● | ● | ● |
| Accountability | ● | ● | ● |
| Awareness of Misuse | | | ● |
| Competence | | | ● |

● Indicates General Principle mapped to Pillar.
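For readers who want to work with the mapping programmatically, it can be captured as a small lookup structure. The sketch below encodes the marks as recovered from the printed table layout, so treat it as indicative rather than authoritative; the names are taken verbatim from the Pillars and Principles above.

```python
# The three Pillars of the EAD1e Conceptual Framework.
PILLARS = (
    "Universal Human Values",
    "Political Self-Determination and Data Agency",
    "Technical Dependability",
)

# General Principle -> set of Pillars it maps to, per the table above.
MAPPING = {
    "Human Rights":        {PILLARS[0], PILLARS[1]},
    "Well-being":          {PILLARS[0], PILLARS[1]},
    "Data Agency":         {PILLARS[0], PILLARS[1], PILLARS[2]},
    "Effectiveness":       {PILLARS[2]},
    "Transparency":        {PILLARS[0], PILLARS[1], PILLARS[2]},
    "Accountability":      {PILLARS[0], PILLARS[1], PILLARS[2]},
    "Awareness of Misuse": {PILLARS[2]},
    "Competence":          {PILLARS[2]},
}

# Sanity checks: every Principle maps to at least one Pillar, and
# every Pillar is grounded by at least one Principle.
assert all(MAPPING.values())
assert set().union(*MAPPING.values()) == set(PILLARS)
```

Such a structure makes it easy, for example, to list all Principles relevant to a given Pillar when scoping a design review.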
Mapping the Principles to the Content of the Chapters
The Chapters of Ethically Aligned Design provide in-depth subject matter expertise that allows readers to move from the General Principles to more deeply analyze ethical A/IS issues within the context of their specific work.
The mapping, or indexing, provided in the table below serves as a directional starting point, since elements of a Principle like “Competence” may resonate in several EAD1e Chapters. In addition, where core subjects are primarily covered by specific Chapters, we have done our best to indicate this via our mapping below.
EAD1e General Principles Mapped to Chapters

EAD Chapters: Classical Ethics in A/IS; Well-being; Affective Computing; Personal Data & Individual Agency; Methods to Guide Ethical Research and Design; A/IS for Sustainable Development; Embedding Values into A/IS; Policy; Law.

EAD General Principles: Human Rights; Well-being; Data Agency; Effectiveness; Transparency; Accountability; Awareness of Misuse; Competence.
It is at this step of the Ethically Aligned Design Conceptual Framework that readers will be able to identify the Principles and Chapters of key relevance to their work. Content provided in EAD1e Chapters is organized by “Issues” identified as the most pressing ethical matters surrounding A/IS design to address today and “Recommendations” on how it should be done. By reviewing these Issues and Recommendations in light of a specific A/IS product, service, or system being designed, readers are provided with a simple form of impact assessment and due diligence process to help put their “Principles into Practice” for themselves. Of course, more fine-tuned customization and adaptation of the content of EAD1e to fit specific sectors or applications are possible and will be pursued in the near future. See below for some implementation examples already happening.
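The "simple form of impact assessment" described above can be sketched as a checklist pass over Issues and Recommendations. Everything in this snippet, including the function name, the input shapes, and the sample issues, is hypothetical; EAD1e defines no such data format.

```python
def impact_assessment(chapter_issues: dict, addressed: set) -> list:
    """Minimal due-diligence sketch: for each relevant EAD1e Chapter,
    walk its Issues and flag any that the design under review does
    not yet address via the corresponding Recommendations."""
    gaps = []
    for chapter, issues in chapter_issues.items():
        for issue in issues:
            if issue not in addressed:
                gaps.append((chapter, issue))
    return gaps

# Illustrative use for a hypothetical product review.
chapter_issues = {
    "Personal Data and Individual Agency": [
        "consent terms for data access",
        "awareness of aggregation and resale",
    ],
    "Well-being": ["well-being as a success criterion"],
}
addressed = {
    "consent terms for data access",
    "well-being as a success criterion",
}

assert impact_assessment(chapter_issues, addressed) == [
    ("Personal Data and Individual Agency",
     "awareness of aggregation and resale"),
]
```

The value of even this trivial loop is that it turns the Chapter content into an explicit checklist, so an unaddressed Issue surfaces as a named gap rather than an omission.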
Ethically Aligned Design in Implementation
Ethically Aligned Design, First Edition represents the culmination of a three-year process guided bottom-up since 2015 by the rigor and standards of the engineering profession and by a globally open and iterative process involving hundreds of global experts. The analysis of the Principles, Issues, and Recommendations generated as part of an iterative process have already inspired the creation of fourteen IEEE Standardization Projects, a Certification Program, A/IS Ethics Courses, and multiple other action-oriented programs currently in development.
In its earlier manifestations, Ethically Aligned Design informed collaborations on A/IS governance with a broad range of governmental and civil society organizations, including the United Nations, the European Commission, the Organization for Economic Cooperation and Development, and many national and municipal governments and institutions.⁵ Moreover, the engagement in all of these arenas and with such partners has put the collective knowledge and creativity of The IEEE Global Initiative in the service of global policy-making with tangible and visible results. Beyond inspiring the policy arena, EAD1e and this growing body of work has also been influencing the development of industry-related resources.⁶
It is time to move “From Principles to Practice” in society regarding the governance of emerging autonomous and intelligent systems. The implementation of ethical principles must be validated by dependable applications of A/IS in practice while honoring our desire for political self-determination and data agency. To achieve societal progress, the autonomous and intelligent systems we create must be trustworthy, provable, and accountable and must align to our explicitly formulated human values.
It is our hope that Ethically Aligned Design and this conceptual framework will provide action-oriented inspiration for your work as well.
Endnotes
1. We prefer not to use, as far as possible, the vague term “AI” and use instead the term autonomous and intelligent systems (A/IS). This terminology is applied throughout Ethically Aligned Design, First Edition to ensure the broadest possible application of ethical considerations in the design of the addressed technologies and systems.
² See also Draft Ethics Guidelines for Trustworthy AI of The European Commission's High Level Expert Group on AI. ² 另见欧盟委员会人工智能高级别专家组发布的《可信人工智能伦理指南草案》。
³ A/IS should be subject to specific certification procedures by competent and qualified agencies with participation or control of public authorities in the same way other technical systems require certification before deployment. The IEEE has launched one of the world's first programs dedicated to creating A/IS certification processes. The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) offers processes by which organizations can seek certified A/IS products, systems, and services. It is being developed through an extensive and open public-private collaboration. ³ 自主与智能系统应接受由具备资质的专业机构实施的特定认证程序,公共权力机构需参与或监督该过程,其认证要求应与其他技术系统的部署前认证保持一致。IEEE 已启动全球首批专门制定 A/IS 认证流程的项目之一——自主与智能系统伦理认证计划(ECPAIS),该计划为组织提供获取 A/IS 产品、系统及服务认证的流程,目前正通过广泛的公私合作机制进行开发。
⁴ For their overall framing, see the "General Principles" Chapter. ⁴ 关于整体框架,请参阅"通用原则"章节。
⁵ As an example, the recently published report Draft Ethics Guidelines for Trustworthy AI of The European Commission's High Level Expert Group on AI explicitly mentions EAD as a major source of their inspiration. EAD has also been guiding policy creation for efforts of the United Nations and the Organization for Economic Cooperation and Development. ⁵ 例如,欧盟委员会人工智能高级别专家组最新发布的报告《可信人工智能伦理指南草案》明确将 EAD 列为其主要灵感来源之一。EAD 还一直为联合国和经济合作与发展组织的政策制定工作提供指导。
⁶ Everyday Ethics for Artificial Intelligence: A Practical Guide for Designers and Developers ⁶ 《人工智能日常伦理:设计师与开发者实用指南》
General Principles 通用原则
The General Principles of Ethically Aligned Design articulate high-level ethical principles that apply to all types of autonomous and intelligent systems (A/IS), regardless of whether they are physical robots, such as care robots or driverless cars, or software systems, such as medical diagnosis systems, intelligent personal assistants, or algorithmic chat bots, in real, virtual, contextual, and mixed-reality environments. 《伦理对齐设计通用原则》阐述了适用于各类自主与智能系统(A/IS)的高层级伦理准则,无论其是实体机器人(如护理机器人或无人驾驶汽车),还是软件系统(如医疗诊断系统、智能个人助理或算法聊天机器人),亦或是存在于现实、虚拟、情境化及混合现实环境中的系统。
The General Principles define imperatives for the design, development, deployment, adoption, and decommissioning of autonomous and intelligent systems. The Principles consider the role of A/IS creators, i.e., those who design and manufacture, of operators, i.e., those with expertise specific to use of A/IS, other users, and any other stakeholders or affected parties. 该通用原则界定了自主与智能系统在设计、开发、部署、采用及退役全生命周期中的伦理要求。这些原则考量了 A/IS 创造者(即设计与制造者)、操作者(即具备 A/IS 使用专长的人员)、其他使用者以及所有利益相关方或受影响群体的角色定位。
We have created these ethical General Principles for A/IS that: 我们为自主与智能系统制定的核心伦理原则旨在:
Embody the highest ideals of human beneficence within human rights. 在人权框架内体现人类福祉的最高理想。
Prioritize benefits to humanity and the natural environment from the use of A/IS over commercial and other considerations. Benefits to humanity and the natural environment should not be at odds; the former depends on the latter. Prioritizing human well-being does not mean degrading the environment. 优先考虑人工智能/智能系统(A/IS)应用对人类和自然环境的益处,而非商业或其他考量。对人类与自然环境的裨益不应相互冲突——前者依赖于后者。以人类福祉为先并不意味着以环境退化为代价。
Mitigate risks and negative impacts, including misuse, as A/IS evolve as socio-technical systems, in particular by ensuring actions of A/IS are accountable and transparent. 随着 A/IS 作为社会技术系统的发展,须降低包括滥用在内的风险与负面影响,尤其应确保 A/IS 行为的可问责性与透明度。
These General Principles are elaborated in subsequent sections of this chapter of Ethically Aligned Design, with specific contextual, cultural, and pragmatic explorations which impact their implementation. 这些通用原则将在《伦理对齐设计》本章后续章节中详细阐述,包含影响其实施的具体情境、文化及务实考量。
General Principles as Imperatives 作为要务的通用原则
We offer high-level General Principles in Ethically Aligned Design that we consider to be imperatives for creating and operating A/IS that further human values and ensure trustworthiness. In summary, our General Principles are: 我们提出《伦理对齐设计》中的高层次通用原则,这些原则对于创建和运作用于增进人类价值并确保可信度的人工智能/智能系统(A/IS)至关重要。简而言之,我们的通用原则包括:
Human Rights-A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights. 人权原则——人工智能/智能系统的创建和运行应当尊重、促进并保护国际公认的人权。
Well-being-A/IS creators shall adopt increased human well-being as a primary success criterion for development. 福祉原则——人工智能/智能系统的开发者应将提升人类福祉作为研发的主要成功标准。
Data Agency-A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity. 数据自主权——人工智能/智能系统(A/IS)创建者应赋予个人访问并安全共享其数据的能力,以保持人们对其身份的控制权。
Effectiveness-A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS. 有效性——人工智能/智能系统(A/IS)创建者和运营者应提供证据,证明其系统的有效性和适用性。
Transparency-The basis of a particular A/IS decision should always be discoverable. 透明度——特定人工智能/智能系统(A/IS)决策的依据应始终可追溯。
Accountability-A/IS shall be created and operated to provide an unambiguous rationale for all decisions made. 问责制——人工智能/智能系统(A/IS)的创建和运行应能为所有决策提供明确的依据。
Awareness of Misuse-A/IS creators shall guard against all potential misuses and risks of A/IS in operation. 防范滥用意识——人工智能/智能系统(A/IS)的创造者应防范其运行中所有潜在的滥用和风险。
Competence-A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation. 能力要求——A/IS 创造者须明确规范,操作者须遵守确保安全有效运行所需的知识与技能标准。
Principle 1-Human Rights 原则 1-人权保障
A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights. A/IS 的创建与运行必须尊重、促进并保护国际公认的人权准则。
Background 背景
Human benefit is a crucial goal of A/IS, as is respect for human rights set out in works including, but not limited to: The Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, the Convention on the Rights of the Child, the Convention on the Elimination of all forms of Discrimination against Women, the Convention on the Rights of Persons with Disabilities, and the Geneva Conventions. 人类福祉是人工智能/智能系统(A/IS)的核心目标,同时需要尊重包括但不限于以下文献所载的人权:《世界人权宣言》、《公民权利和政治权利国际公约》、《儿童权利公约》、《消除对妇女一切形式歧视公约》、《残疾人权利公约》以及《日内瓦公约》。
Such rights need to be fully taken into consideration by individuals, companies, professional bodies, research institutions, and governments alike to reflect the principle that A/IS should be designed and operated in a way that both respects and fulfills human rights, freedoms, human dignity, and cultural diversity. 这些权利需要被个人、企业、专业机构、研究机构和政府充分考量,以体现 A/IS 的设计与运作应尊重并实现人权、自由、人类尊严及文化多样性的原则。
While their interpretation may change over time, "human rights", as defined by international law, provide a unilateral basis for creating any A/IS, as these systems affect humans, their emotions, data, or agency. While the direct coding of human rights in A/IS may be difficult or impossible based on contextual use, newer guidelines from The United Nations provide methods to pragmatically implement human rights ideals within business or corporate contexts that could be adapted for engineers and technologists. In this way, technologists can take into account human rights in the way A/IS are developed, operated, tested, and validated. In short, human rights should be part of the ethical risk assessment of A/IS. 尽管对人权的诠释可能随时间演变,但国际法所定义的"人权"为任何 A/IS 的创建提供了普适基础,因为这些系统影响着人类及其情感、数据或代理权。虽然根据具体使用情境,直接在人工智能/智能系统(A/IS)中编码人权可能困难或无法实现,但联合国最新指南提供了在商业或企业环境中务实落实人权理念的方法,这些方法可被工程师和技术专家借鉴。通过这种方式,技术专家可以在 A/IS 的开发、运行、测试和验证过程中纳入人权考量。简言之,人权应成为 A/IS 伦理风险评估的组成部分。
Recommendations 建议
To best respect human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans. Specifically: 为最大限度尊重人权,社会必须确保 A/IS 的安全性和可靠性,使其设计和运行方式造福人类。具体而言:
Governance frameworks, including standards and regulatory bodies, should be established to oversee processes which ensure that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and which ensure traceability. This will contribute to building public trust in A/IS. 应建立包括标准与监管机构在内的治理框架,监督相关流程以确保 A/IS 的使用不侵犯人权、自由、尊严和隐私,并确保可追溯性。这将有助于建立公众对 A/IS 的信任。
A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for diverse cultural norms as well as differing legal and regulatory frameworks. 需要一种方法将现有及未来的法律义务转化为明智的政策与技术考量。这种方法应能兼容多元文化规范以及不同的法律与监管框架。
A/IS should always be subordinate to human judgment and control. 自主/智能系统(A/IS)应始终服从人类判断与管控。
For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights. 在可预见的未来,不应赋予自主/智能系统(A/IS)等同于人类权利的权限与特权。
Further Resources 延伸阅读资源
The following documents and organizations are provided both as references and examples of the types of work that can be emulated, adapted, and proliferated regarding ethical best practices around A/IS to best honor human rights: 以下文献与机构既可作为参考,也提供了可效仿、调整及推广的伦理实践典范,旨在围绕 A/IS 领域更好地维护人权:
The Universal Declaration of Human Rights, 1948. 《世界人权宣言》,1948 年
N. Wiener, The Human Use of Human Beings, New York: Houghton Mifflin, 1954. N·维纳,《人有人的用途》,纽约:霍顿米夫林出版社,1954 年
The International Covenant on Civil and Political Rights, 1966. 《公民权利和政治权利国际公约》,1966 年
The International Covenant on Economic, Social and Cultural Rights, 1966. 《经济、社会及文化权利国际公约》,1966 年
The International Convention on the Elimination of All Forms of Racial Discrimination, 1965. 《消除一切形式种族歧视国际公约》,1965 年
The Convention on the Rights of the Child, 1990. 《儿童权利公约》,1990 年
The Convention on the Elimination of All Forms of Discrimination against Women, 1979. 《消除对妇女一切形式歧视公约》,1979 年。
The Convention on the Rights of Persons with Disabilities, 2006. 《残疾人权利公约》,2006 年。
The Geneva Conventions and Additional Protocols, 1949. 《日内瓦公约》(1949 年)及其附加议定书
IRTF’s Research into Human Rights Protocol Considerations, 2018. 互联网研究任务组(IRTF)关于人权协议考量的研究,2018 年。
The UN Guiding Principles on Business and Human Rights, 2011. 联合国工商业与人权指导原则,2011 年。
British Standards Institute BS8611:2016, Robots and Robotic Devices. Guide to the Ethical Design and Application of Robots and Robotic Systems 英国标准协会 BS8611:2016《机器人与机器人设备 机器人及机器人系统伦理设计与应用指南》
Principle 2-Well-being 原则二:福祉提升
A/IS creators shall adopt increased human well-being as a primary success criterion for development. 人工智能/智能系统(A/IS)开发者应将提升人类福祉作为研发的主要成功标准。
Background 背景说明
For A/IS technologies to demonstrably advance benefit for humanity, we need to be able to define and measure the benefit we wish to increase. But often the only indicators utilized in determining success for A/IS are avoiding negative unintended consequences and increasing productivity and economic growth for customers and society. Today, these are largely measured by gross domestic product (GDP), profit, or consumption levels. 要使 A/IS 技术切实促进人类福祉,我们需要明确界定并量化期望提升的效益。然而当前衡量 A/IS 成功与否的指标往往仅限于避免意外负面后果,以及提高客户和社会的生产力与经济增长。这些指标目前主要通过国内生产总值(GDP)、利润或消费水平来衡量。
Well-being, for the purpose of Ethically Aligned Design, is based on the Organization for Economic Co-operation and Development's (OECD) "Guidelines on Measuring Subjective Well-being" perspective that, "Being able to measure people's quality of life is fundamental when assessing the progress of societies." There is now widespread acknowledgement that measuring subjective well-being is an essential part of measuring quality of life alongside other social and economic dimensions, as identified within Nussbaum-Sen's capability approach, whereby well-being is objectively defined in terms of the human capabilities necessary for functioning and flourishing. 在《伦理对齐设计》框架下,福祉的概念基于经济合作与发展组织(OECD)《主观福祉测量指南》的视角:"衡量人们生活质量的能力是评估社会进步的基础"。目前学界普遍认同,测量主观福祉是衡量生活质量的重要组成部分,需结合努斯鲍姆-森能力理论中界定的其他社会经济维度——该理论将福祉客观定义为人类实现功能与繁荣所必需的能力集合。
Since modern societies will be largely constituted of A/IS users, we believe these considerations to be relevant for A/IS creators. 鉴于现代社会主要由人工智能/智能系统(A/IS)用户构成,我们认为这些考量因素对 A/IS 创造者具有重要参考价值。
A/IS technologies can be narrowly conceived from an ethical standpoint. They can be legal, profitable, and safe in their usage, yet not positively contribute to human and environmental well-being. This means technologies created with the best intentions, but without considering well-being, can still have dramatic negative consequences on people’s mental health, emotions, sense of themselves, their autonomy, their ability to achieve their goals, and other dimensions of well-being. 从伦理角度狭义地理解,A/IS 技术可能仅满足合法、盈利和使用安全等基本要求,却未能积极促进人类与环境福祉。这意味着,即便技术出于最佳意图而设计,若未将福祉纳入考量,仍可能对人们的心理健康、情感状态、自我认知、自主权、目标实现能力及其他福祉维度造成显著的负面影响。
Recommendation 建议
A/IS should prioritize human well-being as an outcome in all system designs, using the best available and widely accepted well-being metrics as their reference point. A/IS 系统设计应始终将人类福祉作为核心目标,并以当前最优且广受认可的福祉衡量标准作为设计依据。
Further Resources 延伸阅读资源
The Measurement of Economic Performance and Social Progress, now commonly referred to as "The Stiglitz Report", commissioned by the then President of the French Republic, 2009. From the report: "…the time is ripe for our measurement system to shift emphasis from measuring economic production to measuring people's well-being … emphasizing well-being is important because there appears to be an increasing gap between the information contained in aggregate GDP data and what counts for common people's well-being." 《经济表现与社会进步衡量报告》(现通称"斯蒂格利茨报告"),由时任法兰西共和国总统于 2009 年委托编撰。报告指出:"……当前正是测量体系将重点从衡量经济生产转向衡量民众福祉的恰当时机……强调福祉至关重要,因为综合 GDP 数据所包含的信息与影响普通民众福祉的要素之间似乎存在着日益扩大的鸿沟。"
OECD Guidelines on Measuring Subjective Well-being, 2013. 《经合组织主观幸福感测量指南》,2013 年。
OECD Better Life Index, 2017. 《经合组织美好生活指数》,2017 年。
World Happiness Reports, 2012 - 2018. 《世界幸福报告》,2012 至 2018 年。
United Nations Sustainable Development Goal (SDG) Indicators, 2018. 《联合国可持续发展目标(SDG)指标》,2018 年。
Beyond GDP, European Commission, 2018. From the site: “The Beyond GDP initiative is about developing indicators that are as clear and appealing as GDP, but more inclusive of environmental and social aspects of progress.” 超越 GDP,欧盟委员会,2018 年。网站说明:"超越 GDP 倡议旨在开发与 GDP 同样清晰且具有吸引力的指标,但更全面地涵盖环境和社会层面的发展进步。"
Genuine Progress Indicator, State of Maryland (first developed by Redefining Progress), 2015. 真实发展指标,马里兰州(最初由"重新定义进步"组织开发),2015 年。
The International Panel on Social Progress, Social Justice, Well-Being and Economic Organization, 2018. 国际社会进步委员会,《社会正义、福祉与经济组织》,2018 年。
R. Veenhoven, World Database of Happiness, Erasmus University Rotterdam, The Netherlands. Accessed 2018 at: http://worlddatabaseofhappiness.eur.nl. R·维恩霍芬,《世界幸福数据库》,荷兰鹿特丹伊拉斯姆斯大学,2018 年访问网址:http://worlddatabaseofhappiness.eur.nl
Royal Government of Bhutan, The Report of the High-Level Meeting on Wellbeing and Happiness: Defining a New Economic Paradigm, New York: The Permanent Mission of the Kingdom of Bhutan to the United Nations, 2012. 不丹王国政府,《福祉与幸福高级别会议报告:定义新经济范式》,纽约:不丹王国常驻联合国代表团,2012 年。
Principle 3-Data Agency 原则 3-数据自主权
A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity. 人工智能/智能系统创造者应赋予个人访问及安全共享其数据的能力,以保障人们对其身份的控制权。
Background 背景
Digital consent is a misnomer in its current manifestation. Terms and conditions or privacy policies are largely designed to provide legally accurate information regarding the usage of people’s data to safeguard institutional and corporate interests, while often neglecting the needs of the people whose data they process. “Consent fatigue”, the constant request for agreement to sets of long and unreadable data handling conditions, causes a majority of users to simply click and accept terms in order to access the services they wish to use. General obfuscation regarding privacy policies, and scenarios like the Cambridge Analytica scandal in 2018, demonstrate that even when individuals provide consent, the understanding of the value regarding their data and its safety is out of an individual’s control. "数字同意"在当前表现形式下实为误称。服务条款和隐私政策的设计主要旨在提供法律上准确的信息,以保护机构和企业的利益,却常常忽视数据主体的实际需求。"同意疲劳"现象——即用户不断被要求同意冗长难懂的数据处理条款——导致大多数用户仅为获取所需服务而机械点击接受。隐私政策的普遍模糊性,以及 2018 年剑桥分析公司丑闻等事件表明,即便个人给予了同意,其对自身数据价值及安全性的理解仍超出个人掌控范围。
This existing model of data exchange has eroded human agency in the algorithmic age. People don’t know how their data is being used at all times or when predictive messaging is honoring their existing preferences or manipulating them to create new behaviors. 这种现存的数据交换模式已逐渐消解算法时代中的人类自主权。人们无从知晓自己的数据如何被实时利用,也无法辨别预测性信息推送是在尊重既有偏好,还是在操纵行为塑造新习惯。
Regulations like the EU General Data Protection Regulation (GDPR) will help improve this lack of clarity regarding the exchange of personal data. But compliance with existing models of consent is not enough to safeguard people’s agency regarding their personal information. In an era where A/IS are already pervasive in society, governments must recognize that limiting the misuse of personal data is not enough. 欧盟《通用数据保护条例》(GDPR)等法规将有助于改善个人数据交换方面的不明确性。但仅遵守现有的同意模式不足以保障人们对自身个人信息的主体权利。在人工智能/信息系统已遍布社会的时代,各国政府必须认识到,仅限制个人数据的滥用是远远不够的。
Society must also recognize that human rights in the digital sphere don’t exist until individuals globally are empowered with means-including tools and policies-that ensure their dignity through some form of sovereignty, agency, symmetry, or control regarding their identity and personal data. These rights rely on individuals being able to make their choices, outside of the potential influence of biased algorithmic messaging or bad actors. Society also needs to be confident that those who are unable to provide legal informed consent, including minors and people with diminished capacity to make informed decisions, do not lose their dignity due to this. 社会还必须意识到,只有当全球个体被赋予包括工具和政策在内的手段——通过某种形式的主权、主体性、对称性或对其身份及个人数据的控制权来确保其尊严时,数字领域的人权才能真正存在。这些权利依赖于个人能够在不受有偏见的算法信息或恶意行为者潜在影响的情况下做出选择。社会还需要确信,那些无法提供法律意义上知情同意的人(包括未成年人和决策能力受限者)不会因此而丧失尊严。
Recommendation 建议
Organizations, including governments, should immediately explore, test, and implement technologies and policies that let individuals specify their online agent for case-by-case authorization decisions as to who can process what personal data for what purpose. For minors and those with diminished capacity to make informed decisions, current guardianship approaches should be reviewed to determine their suitability in this context. 包括政府在内的各类组织应立即探索、测试并实施相关技术与政策,使个人能够指定其在线代理,针对具体案例授权决定哪些主体出于何种目的可处理哪些个人数据。对于未成年人及决策能力受限者,应审查现行监护制度在此情境下的适用性。
The general solution to give agency to the individual is meant to anticipate and enable individuals to own and fully control autonomous and intelligent (as in capable of learning) technology that can evaluate data use requests by external parties and service providers. This technology would then provide a form of “digital sovereignty” and could issue limited and specific authorizations for processing of the individual’s personal data wherever it is held in a compatible system. 赋予个人能动性的通用解决方案旨在预见并确保个体能够拥有并完全控制具备自主性与智能(即学习能力)的技术,该技术可评估外部机构与服务提供商的数据使用请求。此类技术将提供一种"数字主权"形式,并能在兼容系统中对个人数据(无论存储于何处)的处理发布有限且具体的授权。
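The case-by-case authorization described above can be illustrated with a minimal sketch of such an agent. This is a hedged illustration only: the names (`DataRequest`, `PolicyAgent`) and the rule format are hypothetical and are not drawn from IEEE P7006 or any other standard.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DataRequest:
    """One request by an external party to process personal data (hypothetical schema)."""
    requester: str  # e.g., "maps.example"
    category: str   # e.g., "location", "health"
    purpose: str    # e.g., "navigation", "advertising"


class PolicyAgent:
    """Evaluates data-use requests against user-authored rules, case by case.

    Anything not explicitly allowed is denied, and every decision is
    logged so the individual can audit who asked to process what, and why.
    """

    def __init__(self, allowed_pairs):
        self._allowed = set(allowed_pairs)  # {(category, purpose), ...}
        self.audit_log = []                 # [(DataRequest, bool), ...]

    def authorize(self, req: DataRequest) -> bool:
        decision = (req.category, req.purpose) in self._allowed
        self.audit_log.append((req, decision))
        return decision


# The individual allows location data for navigation only.
agent = PolicyAgent(allowed_pairs={("location", "navigation")})
ok = agent.authorize(DataRequest("maps.example", "location", "navigation"))
denied = agent.authorize(DataRequest("ads.example", "location", "advertising"))
print(ok, denied)  # → True False
```

A deny-by-default rule set plus an audit trail is one simple way to realize the "limited and specific authorizations" the paragraph describes; a learning agent could refine the rule set over time under the individual's control.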
Further Resources 延伸资源
The following resources are designed to provide governments and other organizations-corporate, for-profit, not-for-profit, B Corp, or any form of public institution-basic information on services designed to provide user agency and/or sovereignty over their personal data. 以下资源旨在为政府及其他组织——包括企业、营利机构、非营利组织、共益企业或任何形式的公共机构——提供关于增强用户数据自主权及控制权的服务基础信息。
The European Data Protection Supervisor defines personal information management systems (PIMS) as: 欧洲数据保护监督官将个人信息管理系统(PIMS)定义为:
“…systems that help give individuals more control over their personal data…allowing individuals to manage their personal data in secure, local or online storage systems and share them when and with whom they choose. Providers of online services and advertisers will need to interact with the PIMS if they plan to process individuals’ data. This can enable a human centric approach to personal information and new business models.” For further information and ongoing research regarding PIMS, visit Ctrl-Shift’s PIMS monthly archive. "...帮助个人更好地掌控其个人数据的系统...允许用户通过安全的本机或在线存储系统管理个人数据,并自主决定分享对象与时机。在线服务提供商与广告主若需处理用户数据,则必须与 PIMS 系统交互。这种模式可实现以人为核心的个人信息管理方式,并催生新型商业模式。"欲获取 PIMS 相关最新研究动态,请访问 Ctrl-Shift 的 PIMS 月度档案库。
IEEE P7006™, IEEE Standards Project for Personal Data Artificial Intelligence (AI) Agent, describes the technical elements required to create and grant access to a personalized Artificial Intelligence that will comprise inputs, learning, ethics, rules, and values controlled by individuals. IEEE P7006 标准项目《个人数据人工智能(AI)代理技术规范》阐述了构建个性化人工智能代理所需的技术要素,该代理系统将包含由个体控制的输入参数、学习机制、伦理准则、规则体系及价值取向,并规范其访问权限管理。
IEEE P7012™, IEEE Standards Project for Machine Readable Personal Privacy Terms, is designed to provide individuals with a means to proffer their own terms respecting personal privacy in ways that can be read, acknowledged, and agreed to by machines operated by others in the networked world. IEEE P7012 标准项目《机器可读个人隐私条款》旨在为个人提供一种方式,使其能够在网络世界中以机器可读、可确认并被他人操作的机器所接受的形式,提出关于个人隐私的自主条款。
Principle 4-Effectiveness 原则 4-有效性
Creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS. 创建者和运营者应提供证据证明人工智能/智能系统(A/IS)的有效性及目标适用性。
Background 背景
The responsible adoption and deployment of A/IS are essential if such systems are to realize their many potential benefits to the well-being of both individuals and societies. A/IS will not be trusted unless they can be shown to be effective in use. Harms caused by A/IS, from harm to an individual through to systemic damage, can undermine the perceived value of A/IS and delay or prevent its adoption. 若要让人工智能/智能系统(A/IS)充分发挥其对个人和社会福祉的诸多潜在益处,就必须以负责任的态度采用和部署这些系统。除非能够证明 A/IS 在实际应用中的有效性,否则它们将无法获得信任。A/IS 造成的危害,从对个人的伤害到系统性破坏,都可能削弱其感知价值,并延迟或阻碍其被采用。
Operators and other users will therefore benefit from measurement of the effectiveness of the A/IS in question. To be adequate, effective measurements need to be both valid and accurate, as well as meaningful and actionable. And such measurements must be accompanied by practical guidance on how to interpret and respond to them. 因此,操作者和其他用户将从对相关 A/IS 有效性的测量中受益。为确保测量充分,测量方法必须兼具效度与准确性,并且有意义、可付诸行动。同时,这些测量结果必须辅以关于如何解读和应对的实用指南。
Recommendations 建议
Creators engaged in the development of A/IS should seek to define metrics or benchmarks that will serve as valid and meaningful gauges of the effectiveness of the system in meeting its objectives, adhering to standards and remaining within risk tolerances. Creators building A/IS should ensure that the results when the defined metrics are applied are readily obtainable by all interested parties, e.g., users, safety certifiers, and regulators of the system. 参与开发 A/IS 的创建者应致力于定义相关指标或基准,这些指标应能有效且有意义地衡量系统在实现目标、遵守标准及保持风险容忍度方面的表现。构建 A/IS 的创建者还应确保所有相关方(如系统用户、安全认证机构和监管机构)能够便捷获取所定义指标的应用结果。
Creators of A/IS should provide guidance on how to interpret and respond to the metrics generated by the systems. A/IS 的创建者应提供关于如何解读和响应系统生成指标的指导说明。
To the extent warranted by specific circumstances, operators of A/IS should follow the guidance on measurement provided with the systems, i.e., which metrics to obtain, how and when to obtain them, how to respond to given results, and so on. 在特定情况需要时,A/IS 的操作者应遵循系统提供的测量指南,包括获取哪些指标、如何及何时获取、如何对给定结果作出响应等。
To the extent that measurements are sample-based, they should account for the scope of sampling error, e.g., the reporting of confidence intervals associated with the measurements. Operators should be advised how to interpret the results. 对于基于抽样的测量,测量过程应考虑抽样误差范围(例如报告与测量相关的置信区间)。应向操作者说明如何解读这些结果。
Creators of A/IS should design their systems such that metrics on specific deployments of the system can be aggregated to provide information on the effectiveness of the system across multiple deployments. For example, in the case of autonomous vehicles, metrics should be generated both for a specific instance of a vehicle and for a fleet of many instances of the same kind of vehicle. 人工智能/智能系统(A/IS)的开发者应确保系统设计能够聚合特定部署场景的指标数据,从而提供跨多场景部署的系统效能评估。例如在自动驾驶领域,既需生成单车运行指标,也需汇总同型号车辆组成的车队整体数据。
In interpreting and responding to measurements, allowance should be made for variation in the specific objectives and circumstances of a given deployment of A/IS. 在解读和响应测量数据时,应充分考虑人工智能/智能系统在特定部署场景中目标与环境的差异性。
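The sample-based measurement point above, reporting a confidence interval alongside an effectiveness metric, can be sketched in code. This is a minimal illustration assuming a success-proportion metric and a normal-approximation interval; the function name and the example figures are hypothetical.

```python
import math


def success_rate_with_ci(successes: int, trials: int, z: float = 1.96):
    """Point estimate plus a normal-approximation 95% confidence
    interval for an effectiveness metric expressed as a proportion.

    The interval width shrinks as the sample grows, quantifying the
    sampling error an operator should weigh when reading the metric.
    """
    p = successes / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)


# Hypothetical figures: a diagnosis aid correct on 940 of 1,000 sampled cases.
rate, low, high = success_rate_with_ci(940, 1000)
print(f"{rate:.3f} [{low:.3f}, {high:.3f}]")  # → 0.940 [0.925, 0.955]
```

Per-deployment results computed this way can also be pooled (summing successes and trials across a fleet) to give the aggregate, multi-deployment view the recommendations call for.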
To the extent possible, industry associations or other organizations, e.g., IEEE and ISO, should work toward developing standards for the measurement and reporting on the effectiveness of A/IS. 行业协会及其他组织(如 IEEE 和 ISO)应尽可能推动制定人工智能/智能系统效能测量与报告的标准规范。
Further Resources 延伸阅读资源
R. Dillmann, KA 1.10 Benchmarks for Robotics Research, 2010. R. Dillmann,《机器人研究基准 KA 1.10》,2010 年。
A. Steinfeld, T.W. Fong, D. Kaber, J. Scholtz, A. Schultz, and M. Goodrich, “Common Metrics for Human-Robot Interaction”, 2006 Human-Robot Interaction Conference, March, 2006. A. Steinfeld、T.W. Fong、D. Kaber、J. Scholtz、A. Schultz 与 M. Goodrich 合著,《人机交互通用评价指标》,2006 年人机交互会议,2006 年 3 月。
R. Madhavan, E. Messina, and E. Tunstel, Eds., Performance Evaluation and Benchmarking of Intelligent Systems, Boston, MA: Springer, 2009. R. Madhavan、E. Messina 与 E. Tunstel 主编,《智能系统性能评估与基准测试》,马萨诸塞州波士顿:施普林格出版社,2009 年。
IEEE Robotics & Automation Magazine, Special Issue on Replicable and Measurable Robotics Research, Volume 22, No. 3, September 2015. 《IEEE 机器人与自动化杂志》,可复现与可测量的机器人研究专刊,第 22 卷第 3 期,2015 年 9 月。
C. Flanagin, A Survey on Robotics Systems and Performance Analysis, 2011. C. 弗拉纳根,《机器人系统与性能分析综述》,2011 年。
Transaction Processing Performance Council (TPC) Establishes Artificial Intelligence Working Group (TPC-AI) tasked with developing industry standard benchmarks for both hardware and software platforms associated with running Artificial Intelligence (AI) based workloads, 2017. 事务处理性能委员会(TPC)成立人工智能工作组(TPC-AI),负责制定与运行人工智能(AI)工作负载相关的硬件和软件平台的行业标准基准,2017 年。
Principle 5-Transparency 原则 5——透明度
The basis of a particular A/IS decision should always be discoverable. 特定 A/IS 决策的依据必须始终可追溯。
Background 背景
A key concern over autonomous and intelligent systems is that their operation must be transparent to a wide range of stakeholders for different reasons, noting that the level of transparency will necessarily be different for each stakeholder. Transparent A/IS are ones in which it is possible to discover how and why a system made a particular decision, or in the case of a robot, acted the way it did. The term “transparency” in the context of A/IS also addresses the concepts of traceability, explainability, and interpretability. 关于自主与智能系统的核心关切在于:基于不同原因,其运行机制必须对多元利益相关方保持透明,且需注意透明度要求将因利益相关方而异。透明的 A/IS 系统应满足以下条件:能够追溯系统作出特定决策的机制与动因,或机器人采取特定行为的内在逻辑。在 A/IS 语境中,"透明度"这一术语还涵盖可追踪性、可解释性与可诠释性等概念维度。
A/IS will perform tasks that are far more complex and have more effect on our world than prior generations of technology. Where the task is undertaken in a non-deterministic manner, it may defy simple explanation. This reality will be particularly acute with systems that interact with the physical world, thus raising the potential level of harm that such a system could cause. For example, some A/IS already have real consequences to human safety or well-being, such as medical diagnosis or driverless car autopilots. Systems such as these are safety-critical systems. 相较于前代技术,A/IS 将执行更复杂且对现实世界影响更显著的任务。当任务以非确定性方式执行时,可能无法进行简单归因解释。这一特性在与物理世界交互的系统中尤为突出,从而提高了系统可能造成伤害的潜在风险等级。例如,部分 A/IS 已对人类安全或福祉产生实质影响,如医疗诊断系统或无人驾驶汽车自动驾驶系统,此类系统均属于安全关键系统。
At the same time, the complexity of A/IS technology and the non-intuitive way in which it may operate will make it difficult for users of those systems to understand the actions of the A/IS that they use, or with which they interact. This opacity, combined with the often distributed manner in which the A/IS are developed, will complicate efforts to determine and allocate responsibility when something goes wrong. Thus, lack of transparency increases the risk and magnitude of harm when users do not understand the systems they are using, or there is a failure to fix faults and improve systems following accidents. Lack of transparency also increases the difficulty of ensuring accountability (see Principle 6-Accountability). 与此同时,A/IS 技术的复杂性及其可能以非直观方式运行的特性,将使得系统用户难以理解其所使用或交互的 A/IS 的行为。这种不透明性,加上 A/IS 开发往往采用分布式方式,将使得在出现问题时确定和分配责任的工作变得复杂。因此,当用户不理解所使用系统或事故后未能修复缺陷和改进系统时,缺乏透明度会加大危害的风险和程度。透明度不足还增加了确保问责的难度(参见原则 6-问责制)。
Achieving transparency, which may involve a significant portion of the resources required to develop the A/IS, is important to each stakeholder group for the following reasons: 实现透明度可能需要投入开发 A/IS 所需资源的相当大部分,这对各利益相关方群体都很重要,原因如下:
For users, what the system is doing and why. 对用户而言,需要了解系统在做什么及其原因。
For creators, including those undertaking the validation and certification of A/IS, the systems’ processes and input data. 对创建者(包括负责 A/IS 验证和认证的人员)而言,需要了解系统的处理流程和输入数据。
For an accident investigator, if accidents occur. 对于事故调查员而言,当事故发生时。
For those in the legal process, to inform evidence and decision-making. 对于法律程序参与者而言,为证据和决策提供依据。
For the public, to build confidence in the technology. 对于公众而言,旨在建立对该技术的信心。
Recommendation 建议
Develop new standards that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined. For designers, such standards will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency. The mechanisms by which transparency is provided will vary significantly, including but not limited to, the following use cases: 制定新标准以描述可测量、可测试的透明度等级,使系统能够被客观评估并确定合规程度。对设计者而言,此类标准将为开发过程中的透明度自评提供指南,并提出改进透明度的机制建议。提供透明度的机制将存在显著差异,包括但不限于以下应用场景:
For users of care or domestic robots, a “why-did-you-do-that button” which, when pressed, causes the robot to explain the action it just took. 对于护理或家用机器人的使用者,设置"为何如此操作"按钮,按下后机器人将解释其刚执行的动作。
For validation or certification agencies, the algorithms underlying the A/IS and how they have been verified. 对于验证或认证机构,需提供人工智能/智能系统(A/IS)的底层算法及其验证方式。
For accident investigators, secure storage of sensor and internal state data comparable to a flight data recorder or black box. 对于事故调查人员而言,需确保传感器与内部状态数据的安全存储,其功能应类似于飞行数据记录仪或黑匣子。
IEEE P7001™, IEEE Standard for Transparency of Autonomous Systems, is one such standard, developed in response to this recommendation. IEEE P7001 标准(自主系统透明度标准)正是响应这一建议而制定的相关规范之一。
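The accident-investigator use case above, secure storage of sensor and internal state data comparable to a flight data recorder, can be sketched as a hash-chained, append-only log: because each record embeds the digest of its predecessor, any later alteration of the stored data becomes detectable. This is an illustrative sketch under stated assumptions, not a mechanism specified by IEEE P7001; the class and field names are hypothetical.

```python
import hashlib
import json
import time


class BlackBoxRecorder:
    """Append-only, hash-chained log of sensor and internal state data,
    in the spirit of a flight data recorder for an A/IS."""

    GENESIS = "0" * 64  # placeholder hash for the first record

    def __init__(self):
        self._records = []  # [(entry_dict, sha256_hex), ...]
        self._last_hash = self.GENESIS

    def record(self, sensors: dict, state: dict) -> None:
        entry = {
            "t": time.time(),
            "sensors": sensors,
            "state": state,
            "prev": self._last_hash,  # chains this entry to the one before
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._records.append((entry, digest))
        self._last_hash = digest

    def verify_chain(self) -> bool:
        """Recompute every digest; returns False if any record was altered."""
        prev = self.GENESIS
        for entry, digest in self._records:
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev or recomputed != digest:
                return False
            prev = digest
        return True


box = BlackBoxRecorder()
box.record({"lidar_m": 12.3}, {"mode": "cruise"})
box.record({"lidar_m": 4.1}, {"mode": "braking"})
print(box.verify_chain())  # → True
```

A production recorder would add tamper-resistant storage and signatures, but even this minimal chain lets an investigator confirm that the record they are reading is the record that was written.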
Further Resources 延伸阅读资源
C. Cappelli, P. Engiel, R. Mendes de Araujo, and J. C. Sampaio do Prado Leite, “Managing Transparency Guided by a Maturity Model,” 3rd Global Conference on Transparency Research 1 no. 3, pp. 1-17, Jouy-en-Josas, France: HEC Paris, 2013. C. Cappelli、P. Engiel、R. Mendes de Araujo 与 J. C. Sampaio do Prado Leite 合著,《基于成熟度模型的透明度管理》,第三届全球透明度研究大会论文集第 1 卷第 3 期,第 1-17 页,法国茹伊昂若萨:巴黎 HEC 商学院,2013 年。
J.C. Sampaio do Prado Leite and C. Cappelli, “Software Transparency.” Business & Information Systems Engineering 2, no. 3, pp. 127-139, 2010. J.C. Sampaio do Prado Leite 和 C. Cappelli,《软件透明度》,《商业与信息系统工程》第 2 卷第 3 期,第 127-139 页,2010 年。
A. Winfield and M. Jirotka, “The Case for an Ethical Black Box,” Lecture Notes in Artificial Intelligence 10454, pp. 262-273, 2017. A. Winfield 和 M. Jirotka,《伦理黑盒的论证》,《人工智能讲义》第 10454 卷,第 262-273 页,2017 年。
R. R. Wortham, A. Theodorou, and J. J. Bryson, “What Does the Robot Think? Transparency as a Fundamental Design Requirement for Intelligent Systems,” IJCAI-2016 Ethics for Artificial Intelligence Workshop, New York, 2016. R. R. Wortham、A. Theodorou 和 J. J. Bryson,《机器人如何思考?透明度作为智能系统的基本设计需求》,IJCAI-2016 人工智能伦理研讨会,纽约,2016 年。
Machine Intelligence Research Institute, “Transparency in Safety-Critical Systems,” August 25, 2013. 机器智能研究所,《安全关键系统中的透明度》,2013 年 8 月 25 日。
M. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Harvard Journal of Law & Technology 29, no. 2, 2015. M. 舍雷尔,《人工智能系统监管:风险、挑战、能力与策略》,《哈佛科技法期刊》第 29 卷第 2 期,2015 年。
U.K. House of Commons, “Decision Making Transparency,” Report of the U.K. House of Commons Science and Technology Committee on Robotics and Artificial Intelligence, pp. 17-18, September 13, 2016. 英国下议院,《决策透明度》,英国下议院科学技术委员会关于机器人技术与人工智能的报告,第 17-18 页,2016 年 9 月 13 日。
General Principles
Principle 6-Accountability
A/IS shall be created and operated to provide an unambiguous rationale for decisions made.
Background
The programming, output, and purpose of A/IS are often not discernible by the general public. Based on the cultural context, application, and use of A/IS, people and institutions need clarity around the manufacture and deployment of these systems to establish responsibility and accountability, and to avoid potential harm. Additionally, manufacturers of these systems must be accountable in order to address legal issues of culpability. It should, if necessary, be possible to apportion culpability among responsible creators (designers and manufacturers) and operators to avoid confusion or fear within the general public.
Neither full nor partial accountability is possible without transparency; this principle is therefore closely linked with Principle 5-Transparency.
Recommendations
To best address issues of responsibility and accountability:
Legislatures/courts should clarify responsibility, culpability, liability, and accountability for A/IS, where possible, prior to development and deployment, so that manufacturers and users understand their rights and obligations.
Designers and developers of A/IS should remain aware of, and take into account, the diversity of existing cultural norms among the groups of users of these A/IS.
Multi-stakeholder ecosystems, including creators and government, civil, and commercial stakeholders, should be developed to help establish norms where they do not exist because A/IS-oriented technologies and their impacts are too new. These ecosystems would include, but not be limited to, representatives of civil society, law enforcement, insurers, investors, manufacturers, engineers, lawyers, and users. The norms can mature into best practices and laws.
Systems for registration and record-keeping should be established so that it is always possible to find out who is legally responsible for a particular A/IS. Creators, including manufacturers, along with operators of A/IS should register key, high-level parameters, including:
Intended use,
Training data and training environment, if applicable,
Sensors and real world data sources,
Algorithms,
Process graphs,
Model features, at various levels,
User interfaces,
Actuators and outputs, and
Optimization goals, loss functions, and reward functions.
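A registration record covering parameters like those listed above could be captured in a simple, immutable record type keyed by system identifier. This is a minimal sketch under stated assumptions: the field names are invented and cover only a subset of the listed parameters:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AISRegistration:
    """Key high-level parameters of a registered A/IS (field names hypothetical)."""
    responsible_party: str                  # who is legally responsible
    intended_use: str
    training_data: str                      # provenance of training data, "" if none
    sensors_and_data_sources: tuple[str, ...]
    algorithms: tuple[str, ...]
    optimization_goals: str


# A registry keyed by system identifier.
registry: dict[str, AISRegistration] = {}


def register(system_id: str, entry: AISRegistration) -> None:
    """Record a system so its responsible parties can always be traced."""
    registry[system_id] = entry


def responsible_for(system_id: str) -> str:
    """Answer the core accountability question for a deployed A/IS."""
    return registry[system_id].responsible_party


register("care-robot-01", AISRegistration(
    responsible_party="Acme Robotics Ltd.",  # hypothetical manufacturer
    intended_use="domestic elder care",
    training_data="in-house dialogue corpus",
    sensors_and_data_sources=("camera", "microphone"),
    algorithms=("intent classifier",),
    optimization_goals="minimize unassisted fall incidents",
))
```

The frozen dataclass makes each registered record tamper-resistant at the language level; a real registry would of course need durable, audited storage rather than an in-memory dictionary.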
Further Resources
B. Shneiderman, "Human Responsibility for Autonomous Agents," IEEE Intelligent Systems 22, no. 2, pp. 60-61, 2007.
A. Matthias, "The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata," Ethics and Information Technology 6, no. 3, pp. 175-183, 2004.
A. Hevelke and J. Nida-Rümelin, "Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis," Science and Engineering Ethics 21, no. 3, pp. 619-630, 2015.
An example of good practice (in relation to Recommendation #3) can be found in Sciencewise, the U.K. national center for public dialogue in policy-making involving science and technology issues.
Principle 7-Awareness of Misuse
Creators shall guard against all potential misuses and risks of A/IS in operation.
Background
New technologies give rise to greater risk of deliberate or accidental misuse, and this is especially true for A/IS. A/IS increase the impact of risks such as hacking, misuse of personal data, system manipulation, or exploitation of vulnerable users by unscrupulous parties. Cases of A/IS hacking have already been widely reported, with driverless cars, for example. The Microsoft Tay AI chatbot was famously manipulated when it mimicked deliberately offensive users. In an age where these powerful tools are easily available, a new kind of education is needed to sensitize citizens to the risks associated with the misuse of A/IS. The EU's General Data Protection Regulation (GDPR) provides measures to remedy the misuse of personal data.
Responsible innovation requires A/IS creators to anticipate, reflect, and engage with users of A/IS. Thus citizens, lawyers, governments, etc., all have a role to play, through education and awareness, in developing accountability structures (see Principle 6), in addition to guiding new technology proactively toward beneficial ends.
Recommendations
Creators should be aware of methods of misuse, and they should design A/IS in ways that minimize the opportunity for these.
Raise public awareness around the issues of potential A/IS technology misuse in an informed and measured way by:
Providing ethics education and security awareness that sensitizes society to the potential risks of misuse of A/IS. For example, provide "data privacy warnings" that some smart devices will collect their users' personal data.
Delivering this education in scalable and effective ways, including having experts with the greatest credibility and impact who can minimize unwarranted fear about A/IS.
Educating government, lawmakers, and enforcement agencies about these issues of A/IS so citizens can work collaboratively with these agencies to understand safe use of A/IS. For example, in the same way that police officers give public safety lectures in schools, they could provide workshops on safe use of, and interaction with, A/IS.
Further Resources
A. Greenberg, "Hackers Fool Tesla S's Autopilot to Hide and Spoof Obstacles," Wired, August 2016.
C. Wilkinson and E. Weitkamp, Creative Research and Communication: Theory and Practice, Manchester, UK: Manchester University Press, 2016 (in relation to Recommendation #2).
Engineering and Physical Sciences Research Council, "Anticipate, Reflect, Engage and Act (AREA)," Framework for Responsible Research and Innovation, accessed 2018.
Principle 8-Competence
Creators shall specify, and operators shall adhere to, the knowledge and skill required for safe and effective operation.
Background
A/IS can and often do make decisions that previously required human knowledge, expertise, and reason. Algorithms can potentially make even better decisions, by accessing more information, more quickly, and without the error, inconsistency, and bias that can plague human decision-making. As the use of algorithms becomes common and the decisions they make become more complex, however, the more normal and natural such decisions appear.
Operators of A/IS can become less likely, and potentially less able, to question the decisions that algorithms make. Operators will not necessarily know the sources, scale, accuracy, and uncertainty that are implicit in applications of A/IS. As the use of A/IS expands, more systems will rely on machine learning, where actions are not preprogrammed and might not leave a clear record of the steps that led the system to its current state. Even if those records do exist, operators might not have access to them or the expertise necessary to decipher them.
Standards for the operators are essential. Operators should be able to understand how A/IS reach their decisions, the information and logic on which the A/IS rely, and the effects of those decisions. Even more crucially, operators should know when they need to question A/IS and when they need to overrule them.
Creators of A/IS should take an active role in ensuring that operators of their technologies have the knowledge, experience, and skill necessary not only to use A/IS, but also to use them safely and appropriately, toward their intended ends. Creators should make provisions for operators to override A/IS in appropriate circumstances.
While standards for operator competence are necessary to ensure the effective, safe, and ethical application of A/IS, these standards are not the same for all forms of A/IS. The level of competence required for the safe and effective operation of A/IS will range from elementary, such as "intuitive" use guided by design, to advanced, such as fluency in statistics.
Recommendations
1. Creators of A/IS should specify the types and levels of knowledge necessary to understand and operate any given application of A/IS. In specifying the requisite types and levels of expertise, creators should do so both for the individual components of A/IS and for the systems as a whole.
2. Creators of A/IS should integrate safeguards against the incompetent operation of their systems. Safeguards could include issuing notifications or warnings to operators in certain conditions, limiting functionality for different levels of operators (e.g., novice vs. advanced), shutting the system down in potentially risky conditions, etc.
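The safeguards just described, warnings, operator-level gating, and shutdown under risk, can be illustrated with a small sketch. The function names, competence levels, and gating rules here are all hypothetical:

```python
from enum import IntEnum


class OperatorLevel(IntEnum):
    """Ordered operator competence levels (hypothetical tiers)."""
    NOVICE = 1
    ADVANCED = 2
    EXPERT = 3


# Hypothetical mapping from system functions to the minimum competence each requires.
REQUIRED_LEVEL = {
    "view_status": OperatorLevel.NOVICE,
    "adjust_parameters": OperatorLevel.ADVANCED,
    "override_decision": OperatorLevel.EXPERT,
}


def invoke(function: str, operator: OperatorLevel, risky_condition: bool = False) -> str:
    """Gate each function on operator competence; shut down under risky conditions."""
    if risky_condition:
        # Safeguard 3: system shut-down in potentially risky conditions.
        return "SHUTDOWN: potentially risky condition detected"
    required = REQUIRED_LEVEL[function]
    if operator < required:
        # Safeguards 1 and 2: warn the operator and withhold the function.
        return f"WARNING: '{function}' requires {required.name}-level competence"
    return f"'{function}' permitted"
```

Because `IntEnum` members compare numerically, the single `operator < required` check implements the tiered gating; a real system would also log each refused invocation for later accountability review.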
3. Creators of A/IS should provide the parties affected by the output of A/IS with information on the role of the operator, the competencies required, and the implications of operator error. Such documentation should be accessible and understandable to both experts and the general public.
4. Entities that operate A/IS should create documented policies to govern how A/IS should be operated. These policies should include the real-world applications for such A/IS, any preconditions for their effective use, who is qualified to operate them, what training is required for operators, how to measure the performance of the A/IS, and what should be expected from the A/IS. The policies should also specify the circumstances in which it might be necessary for the operator to override the A/IS.
5. Operators of A/IS should, before operating a system, make sure that they have access to the requisite competencies. The operator need not be an expert in all the pertinent domains but should have access to individuals with the requisite kinds of expertise.
Further Resources
S. Barocas and A. D. Selbst, "The Intuitive Appeal of Explainable Machines," Fordham Law Review, 2018.
W. Smart, C. Grimm, and W. Hartzog, "An Education Theory of Fault for Autonomous Systems," 2017.
Thanks to the Contributors
We wish to acknowledge all of the people who contributed to this chapter.
The General Principles Committee
Alan Winfield (Founding Chair) - Professor, Bristol Robotics Laboratory, University of the West of England; Visiting Professor, University of York
Mark Halverson (Co-Chair) - Founder and CEO at Precision Autonomy
Peet van Biljon (Co-Chair) - Founder and CEO at BMNP Strategies LLC; advisor on strategy, innovation, and business transformation; Adjunct Professor at Georgetown University; business ethics author
Shahar Avin - Research Associate, Centre for the Study of Existential Risk, University of Cambridge
Bijilash Babu - Senior Manager, Ernst and Young, EY Global Delivery Services India LLP
Richard Bartley - Senior Director Analyst, Security & Risk Management, Gartner, Toronto, Canada; Security Principal Director, Accenture, Toronto, Canada
R. R. Brooks - Professor, Holcombe Department of Electrical and Computer Engineering, Clemson University
Nicolas Economou - Chief Executive Officer, H5; Chair, Science, Law and Society Initiative at The Future Society; Chair, Law Committee, Global Governance of AI Roundtable; Member, Council on Extended Intelligence (CXI)
Hugo Giordano - Engineering Student at Texas A&M University
Alexei Grinbaum - Researcher at CEA (French Alternative Energies and Atomic Energy Commission) and Member of the French Commission on the Ethics of Digital Sciences and Technologies (CERNA)
Jia He - Independent Researcher; graduate of Delft University of Technology in Engineering and Public Policy; project member within the United Nations, ICANN, and ITU; Executive Director of Toutiao Research (Think Tank), Bytedance Inc.
Bruce Hedin - Principal Scientist, H5
Cyrus Hodes - Advisor, AI Office, UAE Prime Minister's Office; Co-founder and Senior Advisor, The AI Initiative @ The Future Society; Member, AI Expert Group at the OECD; Member, Global Council on Extended Intelligence
Nathan F. Hutchins - Applied Assistant Professor, Department of Electrical and Computer Engineering, The University of Tulsa
Narayana GPL. Mandaleeka ("MGPL") - Vice President & Chief Scientist; Head, Business Systems & Cybernetics Centre, Tata Consultancy Services Ltd.
George T. Matthew - Chief Medical Officer, North America, DXC Technology
Nicolas Miailhe - Co-Founder & President, The Future Society; Member, AI Expert Group at the OECD; Member, Global Council on Extended Intelligence; Senior Visiting Research Fellow, Program on Science, Technology and Society at Harvard Kennedy School; Lecturer, Paris School of International Affairs (Sciences Po); Visiting Professor, IE School of Global and Public Affairs
Rupak Rathore - Principal Consultant at ATCS for Telematics, Connected Car, and Internet of Things; advisor on strategy, innovation, and transformation journey management; Senior Member, IEEE
Peter Teneriello - Investment Analyst, Private Equity and Venture Capital, TMRS
Niels ten Oever - Head of Digital, Article 19; Co-chair, Research Group on Human Rights Protocol Considerations in the Internet Research Task Force (IRTF)
Alan R. Wagner - Assistant Professor, Department of Aerospace Engineering, and Research Associate, The Rock Ethics Institute, The Pennsylvania State University
For a full listing of all IEEE Global Initiative Members, visit standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf.
For information on disclaimers associated with EAD1e, see How the Document Was Prepared.
Classical Ethics in A/IS
We applied classical ethics methodologies to considerations of algorithmic design in autonomous and intelligent systems (A/IS), where machine learning may or may not reflect ethical outcomes that mimic human decision-making. To meet this goal, we drew from classical ethics theories and the disciplines of machine ethics, information ethics, and technology ethics.
As direct control over tools becomes further removed, creators of autonomous systems must ask themselves how cultural and ethical presumptions bias artificially intelligent creations. Such introspection is more necessary than ever because the precise and deliberate design of algorithms in self-sustained digital systems will result in responses based on that design.
By drawing from over two thousand years' worth of classical ethics traditions, we explore established ethics systems, including both philosophical traditions (utilitarianism, virtue ethics, and deontological ethics) and religious and culture-based ethical systems (Buddhism, Confucianism, African Ubuntu traditions, and Japanese Shinto), and their stance on human morality in the digital age. In doing so, we critique assumptions around concepts such as good and evil, right and wrong, virtue and vice, and we attempt to carry these inquiries into artificial systems' decision-making processes.
Through reviewing the philosophical foundations that define autonomy and ontology, we address the potential for autonomous capacity of artificially intelligent systems, posing questions of morality in amoral systems and asking whether decisions made by amoral systems can have moral consequences. Ultimately, we address notions of responsibility and accountability for the decisions made by autonomous systems and other artificially intelligent technologies.
Section 1-Definitions for Classical Ethics in Autonomous and Intelligent Systems Research
Issue: Assigning Foundations for Morality, Autonomy, and Intelligence
Background
Classical theories of economy in the Western tradition, starting with Plato and Aristotle, embrace three domains: the individual, the family, and the polis. The formation of the individual character (ethos) is intrinsically related to the others, as well as to the tasks of administration of work within the family (oikos). Eventually, this all expands into the framework of the polis, or public space. When we discuss ethical issues of A/IS, it becomes crucial to consider these three traditional economic dimensions, since Western classical ethics was developed from this foundation and has evolved in modernity into an individual morality disconnected from economics and politics. This disconnection has been questioned and explored by thinkers such as Adam Smith, Georg W. F. Hegel, and Karl Marx. In particular, Immanuel Kant's ethics located morality within the subject (see: the categorical imperative) and separated morality from the outside world and the consequences of being a part of it. The moral autonomous subject of modernity thus became a worldless, isolated subject. This process is important to understand in terms of ethics for A/IS since it is, paradoxically, the kind of autonomy that is supposed to be achieved by intelligent machines as humans evolve into digitally networked beings.
There lies a danger in uncritically attributing classical concepts of anthropomorphic autonomy to machines, including using the term "artificial intelligence" to describe them, since, in the attempt to make them "moral" by programming moral rules into their behavior, we run the risk of assuming economic and political dimensions that do not exist, or that are not in line with contemporary human societies. While the concepts of artificial intelligence and autonomy are mainly used metaphorically as technical terms in computer science and technology, general and popular discourse may not share the same nuanced understanding, and political and societal discourse may become distorted or misleading. The question of whether A/IS and the terminology used to describe them will have any kind of impact on our conception of autonomy depends on our policy toward it. For example, the commonly held fear that A/IS will relegate humanity to mere spectators or slaves, whether realistic or not, is informed by our view of, and terminology around, A/IS. Such attitudes are flexible and can be negotiated. As noted above, present human societies are being redefined in terms of digital citizenship via online social networks. The present public debate about the replaceability of human work by "intelligent" machines is a symptom of this lack of awareness of the economic and political dimensions as defined by classical ethics, reducing ethical thinking to the "morality" of a worldless and isolated machine.
There is still value to be gained by considering how Western ethical traditions can be either integrated into A/IS public awareness campaigns or supplemented in engineering and science education programs, as noted under the issue "Presenting ethics to the creators of A/IS". Below is a short overview of how four different traditions can add value.
Virtue ethics: Aristotle argues, using the concept of telos, or goal, that the ultimate goal of humans is "eudaimonia", roughly translated as "flourishing". A moral agent achieves "flourishing", since it is an action, not a state, by constantly balancing factors including social environment, material provisions, friends, family, and one's own self. One cultivates the self through habituation, practicing and strengthening virtuous action as the "golden mean" (a principle of rationality). Such cultivation requires an appropriate balance between the extremes of excess and deficiency, which Aristotle identifies as vices. In the context of A/IS, virtue ethics has two immediate values. First, it provides a model for iterative learning and growth, with moral value informed by context and practice, not just compliance with a given, static ruleset. Second, it provides those who develop and implement A/IS a framework to counterbalance tendencies toward excess, which are common in economically driven environments.
Deontological ethics: As developed by the 18th-century German philosopher Immanuel Kant, the basic premise of deontological ethics addresses the concept of duty. Humans have a rational capacity to create and abide by rules that allow for duty-based ethics to emerge. Rules that produce duties are said to have value in themselves, without requiring a greater-good justification. Such rules are fundamental to our existence, self-worth, and to creating conditions that allow for peaceful coexistence and interaction, e.g., the duty not to harm others; the duty not to steal. To identify rules that can be universalized and made duties, Kant uses the categorical imperative: "Act only on that maxim through which you can at the same time will that it should become a universal law." This means the rule must be inherently desirable, doable, and valuable, and others must be able to understand and follow it. Rules based merely on personal choice, without wider appeal, are not capable of universalization. There is mutual reciprocity in rule-making and rule adherence; if you "will" that a rule should become universal law, you not only contribute to rule creation but also agree to be bound by the same rule. The rule should be action-guiding, i.e., recommending, prescribing, limiting, or proscribing action. Kant also uses the humanity formulation of the categorical imperative: "Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end." This produces duties to respect humanity and human dignity, and not to treat either as a means to an end.
In the context of A/IS, one consideration is to wonder if developers are acting with the best interests of humanity and human dignity in mind. This could possibly be extended to A/IS whereby they are assisting humanity as an instrument of action that has an impact on decision-making capabilities, despite being based on neural machine learning or set protocols. The humanity formulation of the categorical imperative has implications for various scenarios. The duty to respect human dignity may require some limitations on the functions and capability of A/IS so that they do not completely replace humans, human functions, and/or "human central thinking activities" such as judgment, discretion, and reasoning. Privacy and safeguarding issues around A/IS assisting humans, e.g., healthcare robots, may require programming certain values so that A/IS do not divulge personal information to third parties, or compromise a human's physical or mental well-being. It may also involve preventing A/IS from deceiving or manipulating humans.
Potential benefits and financial incentives from exploiting A/IS may provide ends-means justifications for their use, while disregarding the treatment of humanity as an end in itself, e.g., cutting back on funding rigorous testing of A/IS before they reach the market and society. Maintaining human agency in human-machine interaction is a manifestation of the duty to respect human dignity. For example, a human has the right to know when they are interacting with A/IS, and may require consent for any A/IS interaction.
Utilitarian ethics: Also called consequentialist ethics, this code of ethics refers to the consequences of one's decisions and actions. According to the utility principle, the right course of action is the one that maximizes the utility (utilitarianism) or pleasure (hedonism) for the greatest number of people. This ethics theory does, however, warn against superficial and short-term evaluations of utility or pleasure. Therefore, it is the responsibility of the A/IS developers to consider long-term effects. Social justice is paramount in this instance, thus it must be ascertained if the implementation of A/IS will contribute to humanity, or negatively impact employment or other capabilities. Indeed, where it is deemed A/IS can supplement humanity, it should be designed in such a way that the benefits are obvious to all the stakeholders.
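The utility principle, choose the action that maximizes total utility across stakeholders, can be written as a one-line decision rule. This is a deliberately toy sketch: the actions, stakeholders, and utility numbers are invented and stand in for the long-term evaluations the theory actually demands:

```python
def best_action(actions, stakeholders, utility):
    """Choose the action with the greatest total utility across all stakeholders."""
    return max(actions, key=lambda a: sum(utility(a, s) for s in stakeholders))


# Invented utilities for a developer's deployment decision (not real data).
utilities = {
    ("deploy with rigorous testing", "users"): 8,
    ("deploy with rigorous testing", "manufacturer"): 5,
    ("rush to market", "users"): 2,
    ("rush to market", "manufacturer"): 9,
}

choice = best_action(
    ["deploy with rigorous testing", "rush to market"],
    ["users", "manufacturer"],
    lambda a, s: utilities[(a, s)],
)
# Rigorous testing wins (total 13 vs. 11), even though rushing benefits
# the manufacturer most, illustrating the aggregate, not partisan, criterion.
```

The hard part the theory warns about is hidden inside the `utility` function: superficial or short-term scores would mechanically change the answer, which is why the recommendation above places the burden of long-term evaluation on A/IS developers.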
Ethics of care: Generally viewed as an instance of feminist ethics, this approach emphasizes the importance of relationships, which are context-bound. Relationships are ontologically basic to humanity, according to Nel Noddings, feminist and philosopher of education; to care for other human beings is one of our basic human attributes. For such a theory to have relevance in this context, one needs to consider two criteria: 1) the relationship with the other person, or entity, must already exist or must have the potential to exist, and 2) the relationship should have the potential to grow into a caring relationship. Applied to A/IS, an interesting question comes to the foreground: Can one care for humans and their interests in tandem with non-human entities? If one expects A/IS to be beneficial to humanity, as in the instance of robots assisting with care of the elderly, then can one deduce the possibility of humans caring for A/IS? If that possibility exists, do principles of social justice become applicable to A/IS?
Recommendations

By returning to classical ethics foundations, expand the discussion on ethics in A/IS to include a critical assessment of anthropomorphic presumptions of ethics and moral rules for A/IS. Keep in mind that machines do not, in terms of classical autonomy, comprehend the moral or legal rules they follow; they move according to their programming, following rules that humans have designed to be moral.

Expand the discussion on ethics for A/IS to include an exploration of the classical foundations of economy, outlined above, as potentially influencing current views and assumptions around machines achieving isolated autonomy.
Further Resources

J. Bielby, Ed., “Digital Global Citizenship,” International Review of Information Ethics, vol. 23, pp. 2-3, Nov. 2015.

O. Bendel, “Towards Machine Ethics,” in Technology Assessment and Policy Areas of Great Transitions: Proceedings from the PACITA 2013 Conference in Prague, Prague, March 13-15, 2013, T. Michalek, L. Hebáková, L. Hennen, C. Scherz, L. Nierling, and J. Hahn, Eds. Prague: Technology Centre ASCR, 2014, pp. 321-326.

O. Bendel, “Considerations about the Relationship between Animal and Machine Ethics,” AI & Society, vol. 31, no. 1, pp. 103-108, Feb. 2016.

N. Berberich and K. Diepold, “The Virtuous Machine - Old Ethics for New Technology?” arXiv:1806.10322 [cs.AI], June 2018.

R. Capurro, M. Eldred, and D. Nagel, Digital Whoness: Identity, Privacy and Freedom in the Cyberworld. Berlin: Walter de Gruyter, 2013.

D. Chalmers, “The Singularity: A Philosophical Analysis,” Journal of Consciousness Studies, vol. 17, pp. 7-65, 2010.

D. Davidson, “Representation and Interpretation,” in Modelling the Mind, K. A. M. Said, W. H. Newton-Smith, R. Viale, and K. V. Wilkes, Eds. New York: Oxford University Press, 1990, pp. 13-26.

N. Noddings, Caring: A Relational Approach to Ethics and Moral Education. Oakland, CA: University of California Press, 2013.

O. Ulgen, “Kantian Ethics in the Age of Artificial Intelligence and Robotics,” QIL, vol. 43, pp. 59-83, Oct. 2017.

O. Ulgen, “The Ethical Implications of Developing and Using Artificial Intelligence and Robotics in the Civilian and Military Spheres,” House of Lords Select Committee, Sept. 6, 2017, UK.

O. Ulgen, “Human Dignity in an Age of Autonomous Weapons: Are We in Danger of Losing an ‘Elementary Consideration of Humanity’?” in How International Law Works in Times of Crisis, I. Ziemele and G. Ulrich, Eds. Oxford: Oxford University Press, 2018.
Issue: The Distinction between Agents and Patients

Background

Of particular concern when understanding the relationship between human beings and A/IS is the uncritically applied anthropomorphic approach toward A/IS that many in industry and policymaking are using today. This approach erroneously blurs the distinction between moral agents and moral patients, i.e., subjects, otherwise understood as a distinction between “natural” self-organizing systems and artificial, non-self-organizing devices. As noted above, A/IS cannot, by definition, become autonomous in the sense that humans or living beings are autonomous. That said, autonomy in machines, when critically defined, designates how machines act and operate independently in certain contexts, through a consideration of implemented order generated by laws and rules. In this sense, A/IS can, by definition, qualify as autonomous, especially in the case of genetic algorithms and evolutionary strategies. However, attempts to implant true morality and emotions, and thus accountability, i.e., autonomy, into A/IS blur the distinction between agents and patients and may encourage anthropomorphic expectations of machines by human beings when designing and interacting with A/IS.

Thus, an adequate assessment of the expectations and language used to describe the human-A/IS relationship becomes critical in the early stages of its development, where analyzing subtleties is necessary. Definitions of autonomy need to be clearly drawn, both in terms of A/IS and in terms of human autonomy. On one hand, A/IS may in some cases manifest seemingly ethical and moral decisions, resulting for all intents and purposes in efficient and agreeable moral outcomes. On the other hand, many human traditions can and have manifested as fundamentalism under the guise of morality. Such is the case with many religious moral foundations, where established cultural mores are neither questioned nor assessed. In such scenarios, one must consider whether there is any functional difference between the level of autonomy in A/IS and that of assumed agency, the ability to choose and act, in humans via blind adherence to religious, traditional, or habitual mores. The relationships between assumed moral customs, the ethical critique of those customs, and the law are important distinctions.
The above misunderstanding in definitions of autonomy arises in part because of the tendency for humans to shape artificial creations in their own image, and our desire to lend our human experience to shaping a morphology of artificially intelligent systems. This is not to say that such terminology cannot be used metaphorically, but the difference must be maintained, especially as A/IS begin to resemble human beings more closely. It is possible for terms like “artificial intelligence” or “morality of machines” to be used as metaphors without resulting in misunderstanding. This is how language works and how humans try to understand their natural and artificial environment.
However, the critical difference between human autonomy and autonomous systems involves questions of free will, predetermination, and being (ontology). The questions of critical ontology currently being applied to machines are not new questions to ethical discourse and philosophy; they have been thoroughly applied to the nature of human being as well. John Stuart Mill, for example, is a determinist and claims that human actions are predicated on predetermined laws. He does, however, argue for a reconciliation of human free will with determinism through a theory of compatibility. Millian ethics provides a detailed and informed foundation for defining autonomy that could serve to help overcome general assumptions of anthropomorphism in A/IS and thereby address the uncertainty therein (Mill, 1999).

Recommendations

When addressing the nature of “autonomy” in autonomous systems, it is recommended that the discussion first consider free will, civil liberty, and society from a Millian perspective in order to better grasp definitions of autonomy and to address general assumptions of anthropomorphism in A/IS.
Further Resources

R. Capurro, “Toward a Comparative Theory of Agents,” AI & Society, vol. 27, no. 4, pp. 479-488, Nov. 2012.

W. J. King and J. Ohya, “The Representation of Agents: Anthropomorphism, Agency, and Intelligence,” in Conference Companion on Human Factors in Computing Systems. Vancouver: ACM, 1996, pp. 289-290.

W. Hofkirchner, “Does Computing Embrace Self-Organisation?” in Information and Computation: Essays on Scientific and Philosophical Understanding of Foundations of Information and Computation, G. Dodig-Crnkovic and M. Burgin, Eds. London: World Scientific, 2011, pp. 185-202.

International Center for Information Ethics, 2018.

J. S. Mill, On Liberty. London: Longman, Roberts & Green, 1869.

P. P. Verbeek, What Things Do: Philosophical Reflections on Technology, Agency, and Design. University Park, PA: Pennsylvania State University Press, 2005.
Issue: The Need for an Accessible, Classical Ethics Vocabulary

Background

Philosophers and ethicists are trained in vocabulary relating to philosophical concepts and terminology. There is an intrinsic value placed on these concepts when discussing ethics and A/IS, since the layered meaning behind the terminology used is foundational to these discussions and is grounded in a subsequent entrenchment of values. Unfortunately, using philosophical terminology in cross-disciplinary settings, e.g., a conversation between technologists and policymakers, is often ineffective, since not everyone has the education needed to grasp the abstracted layers of meaning contained in philosophical terminology.

However, not understanding a philosophical definition does not detract from the necessity of its utility. While ethical and philosophical theories should not be over-simplified for popular consumption, being able to adequately translate the essence of the rich history of ethics will go a long way toward supporting a constructive dialogue on ethics and A/IS. With access and accessibility concerns intricately linked with education in communities as well as in secondary and tertiary institutions, society needs to take a vested interest in creating awareness among government officials, rural communities, and school teachers. Creating a more “user-friendly” vocabulary raises awareness of the necessity and application of classical ethics to digital societies.

Identifying terms that will be intelligible to all relevant audiences is pragmatic, but care should be taken not to dilute or misrepresent concepts that are familiar to moral philosophy and ethics. One way around this is to engage in applied ethics: illustrate how a particular concept would work in an A/IS context or scenario. Another way is to determine whether terminology used across different disciplines actually has the same or similar meaning and effect, which can then be expressed accordingly.
Recommendations

Support and encourage the efforts of groups raising awareness for social and ethics committees, whose roles are to support ethics dialogue within their organizations, seeking approaches that are both aspirational and values-based. A/IS technologists should engage in cross-disciplinary exchanges whereby philosophy scholars and ethicists attend and present in non-philosophical courses. This will both raise awareness and sensitize non-philosophical scholars and practitioners to the vocabulary.
Further Resources

R. T. Ames, Confucian Role Ethics: A Vocabulary. Hong Kong: Chinese University Press, 2011.

R. Capurro, “Towards an Ontological Foundation of Information Ethics,” Ethics and Information Technology, vol. 8, no. 4, pp. 175-186, 2006.

S. Mattingly-Jordan, R. Day, B. Donaldson, P. Gray, and L. M. Ingram, “Ethically Aligned Design, First Edition Glossary,” prepared for The IEEE Global Initiative for Ethically Aligned Design, Feb. 2019.

B. M. Lowe, Emerging Moral Vocabularies: The Creation and Establishment of New Forms of Moral and Ethical Meanings. Lanham, MD: Lexington Books, 2006.

D. J. Flinders, “In Search of Ethical Guidance: Constructing a Basis for Dialogue,” International Journal of Qualitative Studies in Education, vol. 5, no. 2, pp. 101-115, 1992.

G. S. Saldanha, “The Demon in the Gap of Language: Capurro, Ethics and Language in Divided Germany,” in Information Cultures in the Digital Age. Wiesbaden, Germany: Springer Fachmedien, 2016, pp. 253-268.

J. Van Den Hoven and G. J. Lokhorst, “Deontic Logic and Computer-Supported Computer Ethics,” Metaphilosophy, vol. 33, no. 3, pp. 376-386, April 2002.
Issue: Presenting Ethics to the Creators of Autonomous and Intelligent Systems

Background

The question arises as to whether or not classical ethics theories can be used to produce meta-level orientations to data collection and data use in decision-making. Keeping in mind that the task of philosophical ethics is to examine good and evil, ethics should examine values, not prescribe them. Laws, which arise from ethics, are entrenched mores that have been critically assessed so that they may prescribe.

The key is to embed ethics into engineering in a way that does not make ethics a servant, but instead a partner in the process. In addition to an ethics-in-practice approach, providing students and engineers with the tools necessary to build a similar orientation into their inventions further entrenches ethical design practices. In the abstract this is not so difficult to describe, but it is very difficult to encode into systems. This problem can be addressed by providing students with job aids such as checklists, flowcharts, and matrices that will help them select and use a principal ethical framework, and then exercising the use of those devices with steadily more complex examples. In such an iterative process, students will start to determine for themselves which examples do not allow for perfectly clear decisions and, in fact, require some interaction between frameworks. Produced outcomes such as videos, essays, and other formats, including project-based learning activities, allow for a didactic strategy that proves effective in artificial intelligence ethics education.
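A job aid of the kind described above can be as simple as a scripted decision matrix. The sketch below is a toy illustration only: the scenario feature names and the mapping to frameworks are invented for this example, not drawn from the text. It routes a coarsely described design scenario toward a principal ethical framework as a first pass; the ambiguous cases, where several features apply at once, are exactly the ones the text says require interaction between frameworks.

```python
# Toy decision aid (feature names and mapping are illustrative, not canonical):
# route a design scenario to a principal ethical framework to start from.

def suggest_framework(scenario):
    """Pick a starting ethical framework from coarse scenario features.

    `scenario` is a dict of booleans describing the design situation.
    This is only a first pass; hard cases need several frameworks at once.
    """
    if scenario.get("rights_or_consent_at_stake"):
        return "deontological (Kantian) ethics"
    if scenario.get("aggregate_welfare_tradeoff"):
        return "utilitarian ethics"
    if scenario.get("ongoing_care_relationship"):
        return "ethics of care"
    # Default: focus on the character and habits of the designer.
    return "virtue ethics"

print(suggest_framework({"aggregate_welfare_tradeoff": True}))
# prints "utilitarian ethics"
```

A classroom exercise could then feed the aid progressively harder scenarios until students see where the single-framework answer breaks down.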
The goal is to provide students a means to use ethics in a manner analogous to how they are being taught to use engineering principles and tools. In other words, the goal is to help engineers tell the story of what they are doing.

Ethicists should use information flows and consider at a meta-level what information flows do and what they are supposed to do.

Engineers should then build a narrative that outlines the iterative process of ethical considerations in their design. Intentions are part of the narrative and provide a base to reflect back on those intentions.
The process then allows engineers to better understand their assumptions and adjust their intentions and design processes accordingly. They can only get to these by asking targeted questions.

This process, one with which engineers are quite familiar, is basically Kantian and Millian ethics in play.

The aim is to produce what is referred to in the computer programming lexicon as a macro. A macro is code that takes other code as its input(s) and produces unique outputs. This macro is built using the Western ethics tradition of virtue ethics.
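In languages without a macro system proper, the closest everyday analogue of "code that takes other code as its input and produces unique outputs" is a higher-order function. The sketch below is a hypothetical illustration of that shape, not an implementation from the source: the wrapper name and the allow-list policy are invented for this example.

```python
# Illustrative higher-order function: a loose analogue of the "macro"
# described above -- code that takes other code (action_fn) as its input
# and produces new, screened behavior as its output.

ALLOWED_ACTIONS = {"recommend", "summarize"}  # hypothetical policy list

def ethics_screen(action_fn):
    """Wrap action_fn so every call is checked against a policy first."""
    def wrapped(*args, **kwargs):
        if action_fn.__name__ not in ALLOWED_ACTIONS:
            raise PermissionError(f"'{action_fn.__name__}' is not permitted")
        return action_fn(*args, **kwargs)
    return wrapped

@ethics_screen
def recommend(item):
    return f"recommended: {item}"

@ethics_screen
def delete_records(table):
    return f"deleted: {table}"

print(recommend("article"))  # prints "recommended: article"
```

Calling `delete_records("users")` raises `PermissionError`, because the wrapper, not the wrapped action, decides what runs; the ethical orientation lives in the enclosing code rather than in each individual function.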
This further underscores the importance of education and training on ethical considerations relating to A/IS. Such courses should be developed for and presented to students of engineering, A/IS, computer science, and other relevant fields. These courses do not add value a posteriori; they should be embedded from the beginning, both to allow absorption of the underlying ethical considerations and to allow critical thinking to come to fruition once students graduate. Various approaches can be considered at the tertiary level:

A parallel (information) ethics program presented together with the science program during the course of undergraduate and postgraduate study;

Embedded (information) ethics modules within the science program, i.e., one module per semester;

Short (information) ethics courses specifically designed for the science program that can be attended by current students, alumni, or professionals. These will function as introductory, refresher, or specialized courses.

Courses can also be blended to include students and/or practitioners from diverse backgrounds rather than following the more traditional practice of homogeneous groups, such as engineering students, continuing education programs directed at a specific specialization, and the like.
Recommendations

Find ways to present ethics such that the methodologies used are familiar to engineering students. As engineering is taught as a collection of techno-science, logic, and mathematics, embedding ethical sensitivity into these objective and non-objective processes is essential. Curriculum development is crucial in each approach. In addition to research articles and best practices, it is recommended that engineers and practitioners come together with social scientists and philosophers to develop case studies, interactive virtual reality games, and additional course interventions that are relevant to students.
Further Resources

T. W. Bynum and S. Rogerson, Computer Ethics and Professional Responsibility. Malden, MA: Wiley-Blackwell, 2003.

E. G. Seebauer and R. L. Barry, Fundamentals of Ethics for Scientists and Engineers. New York: Oxford University Press, 2001.

C. Whitbeck, “Teaching Ethics to Scientists and Engineers: Moral Agents and Moral Problems,” Science and Engineering Ethics, vol. 1, no. 3, pp. 299-308, Sept. 1995.

B. Zevenbergen et al., “Philosophy Meets Internet Engineering: Ethics in Networked Systems Research,” GTC Workshop Outcomes Paper. Oxford: Oxford Internet Institute, University of Oxford, 2015.

M. Alvarez, “Teaching Information Ethics,” International Review of Information Ethics, vol. 14, pp. 23-28, Dec. 2010.

P. P. Verbeek, Moralizing Technology: Understanding and Designing the Morality of Things. Chicago, IL: University of Chicago Press, 2011.

K. A. Joyce, K. Darfler, D. George, J. Ludwig, and K. Unsworth, “Engaging STEM Ethics Education,” Engaging Science, Technology, and Society, vol. 4, pp. 1-7, 2018.
Issue: Accessing Classical Ethics by Corporations and Companies

Background

Many companies, from startups to tech giants, understand that ethical considerations in tech design are increasingly important, but are not sure how to incorporate ethics into their tech design agenda. How can ethical considerations in tech design become an integrated part of the agenda of companies, public projects, and research consortia? Corporate workshops and exercises will need to go beyond opinion-gathering exercises to embed ethical considerations into structures, environments, training, and development.

As it stands, classical ethics is not accessible enough to corporate endeavors in ethics and, as such, is not applicable to tech projects. There is often, but not always, a big discrepancy between the output of engineers, lawyers, and philosophers when dealing with computer science issues; there is also a large difference in how various disciplines approach these issues. While this is not true in all cases, and there are now several interdisciplinary approaches in robotics and machine ethics as well as a growing number of scientists who hold double and interdisciplinary degrees, there remains a vacuum in the wider understanding of classical ethics theories in the interdisciplinary setting. Such an understanding includes that of the philosophical language used in ethics and the translation of that language across disciplines.

If we take, for instance, the terminology and usage of the concept of “trust” in reference to technology, the term “trust” has specific philosophical, legal, and engineering connotations. It is not an abstract concept. It is attributable to humans, and relates to claims and actions people make. Machines, robots, and algorithms lack the ability to make claims and so cannot be attributed with trust. They cannot determine whether something is trustworthy or not. Software engineers may refer to “trusting” the data, but this relates to the data’s authenticity and veracity to ensure software performance. In the context of A/IS, “trust” means “functional reliability”; it means there is confidence in the technology’s predictability, reliability, and security against hackers or impersonators of authentic users.
Recommendations

In order to achieve multicultural, multidisciplinary, and multi-sectoral dialogues between technologists, philosophers, and policymakers, a nuanced understanding of philosophical and technical language, which is critical to digital society in matters ranging from the Internet of Things (IoT), privacy, and cybersecurity to issues of Internet governance, must be made available to technicians and policymakers who may not grasp the nuances of the terminology in philosophical, legal, and engineering contexts. It is therefore recommended that the critical-thinking terminology of philosophers, policymakers, and other stakeholders on A/IS be translated into norms accessible to technicians.
Further Resources

A. Bhimani, “Making Corporate Governance Count: The Fusion of Ethics and Economic Rationality,” Journal of Management & Governance, vol. 12, no. 2, pp. 135-147, June 2008.

A. B. Carroll, “A History of Corporate Social Responsibility,” in The Oxford Handbook of Corporate Social Responsibility, A. Chrisanthi, R. Mansell, D. Quah, and R. Silverstone, Eds. Oxford, U.K.: Oxford University Press, 2008.

W. Lazonick, “Globalization of the ICT Labor Force,” in The Oxford Handbook of Information and Communication Technologies, A. Chrisanthi, R. Mansell, D. Quah, and R. Silverstone, Eds. Oxford, U.K.: Oxford University Press, 2006.

IEEE P7000™, IEEE Standards Project for Model Process for Addressing Ethical Concerns During System Design, will provide engineers and technologists with an implementable process aligning innovation management processes, IT system design approaches, and software engineering methods to minimize ethical risk for their organizations, stakeholders, and end users.
Issue: The Impact of Automated Systems on the Workplace

Background

The impact of A/IS on the workplace, and the changing power relationships between workers and employers, requires ethical guidance. Issues of data protection and privacy arising from big data, in combination with employers’ use of autonomous systems, are increasing, as decisions made via aggregate algorithms directly impact employment prospects. The uncritical use of A/IS in the workplace, and its impact on employee-employer relations, is of utmost concern due to the high chance of error and biased outcomes.

The concept of responsible research and innovation (RRI) is a growing area, particularly within the EU. It offers potential solutions to workplace bias and is being adopted by several research funders, such as the Engineering and Physical Sciences Research Council (EPSRC), which includes RRI core principles in its mission statement. RRI is an umbrella concept that draws on classical ethics theory to provide tools for addressing ethical concerns from the outset of a project, from the design stage onwards.
Quoting Rene Von Schomberg, science and technology studies specialist and philosopher: “Responsible Research and Innovation is a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society).”²

When RRI methodologies are used in the ethical considerations of A/IS design, especially in response to the potential bias of A/IS in the workplace, theoretical deficiencies are often exposed that would not otherwise have come to light, allowing room for improvement in design at the development stage rather than retroactively. RRI in design increases the chances of both relevance and strength in ethically aligned design.

This emerging concept also aims to push boundaries by incorporating relevant stakeholders whose influence on responsible research operates on a global stage. While the concept initially focuses on the workplace setting, success will only be achieved through the active involvement of private companies in industry, AI institutes, and those at the forefront of A/IS design. Responsible research and innovation will be achieved through careful research and innovation governance that ensures research purposes, processes, and outcomes are acceptable, sustainable, and even desirable. It will be incumbent on RRI experts to engage at a level where private companies will feel empowered and will embrace this concept as practical to implement and enact.
Recommendations
It is recommended, through the application of RRI as founded in classical ethics theory, that research in A/IS design utilize available tools and approaches to better understand the design process, addressing ethical concerns from the very beginning of the design stage of the project, thus maintaining a stronger, more efficient methodological accountability throughout.
Further Resources
M. Burget, E. Bardone, and M. Pedaste, “Definitions and Conceptual Dimensions of Responsible Research and Innovation: A Literature Review,” Science and Engineering Ethics, vol. 23, no. 1, pp. 1-9, 2016.
European Commission Communication, “Artificial Intelligence for Europe,” COM 237, April 2018.
R. Von Schomberg, “Prospects for Technology Assessment in a Framework of Responsible Research and Innovation,” in Technikfolgen Abschätzen Lehren: Bildungspotenziale Transdisziplinärer Methode. Wiesbaden, Germany: Springer VS, 2011, pp. 39-61.
B. C. Stahl, G. Eden, M. Jirotka, and M. Coeckelbergh, “From Computer Ethics to Responsible Research and Innovation in ICT: The Transition of Reference Discourses Informing Ethics-Related Research in Information Systems,” Information & Management, vol. 51, no. 6, pp. 810-818, September 2014.
Classical Ethics in A/IS
B. C. Stahl, M. Obach, E. Yaghmaei, V. Ikonen, K. Chatfield, and A. Brem, “The Responsible Research and Innovation (RRI) Maturity Model: Linking Theory and Practice,” Sustainability, vol. 9, no. 6, June 2017.
IEEE P7005™, the Standards Project for Transparent Employer Data Governance, is designed to provide organizations with a set of clear guidelines and certifications guaranteeing they are storing, protecting, and utilizing employee data in an ethical and transparent way.
Section 2-Classical Ethics from Globally Diverse Traditions
Issue: The Monopoly on Ethics by Western Ethical Traditions
Background
As human creators, our most fundamental values are imposed on the systems we design. It becomes incumbent on the global community to recognize which sets of values guide the design, and whether or not A/IS will generate problematic, i.e., discriminatory, consequences without consideration of non-Western values. There is an urgent need to broaden traditional ethics in its contemporary form of “responsible innovation” (RI) beyond the scope of “Western” ethical foundations, such as utilitarianism, deontology, and virtue ethics. There is also a need to include other traditions of ethics in RI, such as those inherent to Buddhism, Confucianism, and Ubuntu traditions.
However, this venture poses problematic assumptions even before the issue above can be explored. In classifying Western values, we group together thousands of years of independent and disparate ideas originating from the Greco-Roman philosophical tradition with their Christian-infused cultural heritage and then the break from that heritage with the Enlightenment. What is it that one refers to by the term “Western ethics”? Does one refer to philosophical ethics (ethics as a scientific discipline) or is the reference to Western morality?
The “West”, however it may be defined, is an individualistic society, arguably more so than much of the rest of the world, and thus, in some aspects, should be even less collectively defined than “Eastern” ethical traditions. If one refers to Western values, one must designate which values and to whom they belong. Additionally, there is a danger in the field of intercultural information ethics, however
unconsciously or instinctively propagated, of not only grouping together all Western traditions under a single banner, but of negatively designating any and all Western influence in global exchange as representing an abusive collective of colonial-influenced ideals. Just because there exists a monopoly of influence by one system over another does not mean that said monopoly is devoid of value, even for systems outside itself. In the same way that culturally diverse traditions have much to offer Western tradition(s), so, too, do they have much to gain from them.
In order to establish mutually beneficial connections in addressing globally diverse traditions, it is of critical importance to first properly distinguish between subtleties in Western ethics as a discipline and morality as its object or subject matter. It is also important to differentiate between philosophical or scientific ethics and theological ethics. As noted above, the relationship between assumed moral customs, the ethical critique of those customs, and the law is an established methodology in scientific communities. Western and Eastern philosophy are very different, just like Western and Eastern ethics. Western philosophical ethics use scientific methods such as the logical, discursive, and dialectical approach (models of normative ethics) alongside the analytical and hermeneutical approaches. The Western tradition is not about education and teaching of social and moral values, but rather about the application of fundamentals, frameworks, and explanations. However, several contemporary globally relevant community mores are based in traditional and theological moral systems, requiring a conversation around how best to collaborate in
the design and programming of ethics in A/IS amidst differing ethical traditions.
While experts in intercultural information ethics, such as Pak-Hang Wong, highlight the dangers of the dominance of “Western” ethics in A/IS design, noting specifically the appropriation of ethics by liberal democratic values to the exclusion of other value systems, it should be noted that those same liberal democratic values are put in place and specifically designed to accommodate such differences. However, while the accommodation of differences is, in theory, accounted for in dominant liberal value systems, the reality of the situation reveals a monopoly of, and a bias toward, established Western ethical value systems, especially when it comes to standardization. As Wong notes:
Standardization is an inherently value-laden project, as it designates the normative criteria for inclusion to the global network. Here, one of the major adverse implications of the introduction of value-laden standard(s) of responsible innovation (RI) appears to be the delegitimization of the plausibility of RI based on local values, especially when those values come into conflict with the liberal democratic values, as the local values (or, the RI based on local values) do not enable scientists and technology developers to be recognized as members of the global network of research and innovation (Wong, 2016).
It does, however, become necessary for those who do not work within the parameters of accepted value monopolies to find alternative methods of accommodating different value systems. Liberal values arose out of conflicts
of cultural and subcultural differences and are designed to be accommodating enough to include a rather wide range of differences.
RI enables policymakers, scientists, technology developers, and the public to better understand and respond to the social, ethical, and policy challenges raised by new and emerging technologies. Given the historical context from which RI emerges, it should not be surprising that the current discourse on RI is predominantly based on liberal democratic values. Yet, the bias toward liberal democratic values will inevitably limit the discussion of RI, especially in the cases where liberal democratic values are not taken for granted. Against this background, it is important to recognize the problematic consequences of RI solely grounded on, or justified by, liberal democratic values.
In addition, many non-Western ethics traditions, including the Buddhist and Ubuntu traditions highlighted below, view “relationship” as a foundationally important concept to ethical discourse. One of the key parameters of intercultural information ethics and RI research must be to identify main commonalities of “relationship” approaches from different cultures and how to operationalize them for A/IS to complement classical methodologies of deontological and teleological ethics. Different cultural perceptions of time may influence “relationship” approaches and impact how A/IS are perceived and integrated, e.g., technology as part of linear progress in the West; inter-generational needs and principles of respect and benevolence in Chinese culture determining current and future use of technology.
Recommendations
In order to enable a cross-cultural dialogue of ethics in technology, discussions on ethics and A/IS must first return to normative foundations of RI to address the notion of “responsible innovation” from a range of value systems not predominant in Western classical ethics. Together with acknowledging differences, a special focus on commonalities in the intercultural understanding of the concept of “relationship” must complement the process.
Further Resources
J. Bielby, “Comparative Philosophies in Intercultural Information Ethics,” Confluence: Journal of World Philosophies, vol. 2, 2016.
W. B. Carlin and K. C. Strong, “A Critique of Western Philosophical Ethics: Multidisciplinary Alternatives for Framing Ethical Dilemmas,” Journal of Business Ethics, vol. 14, no. 5, pp. 387-396, May 1995.
C. Ess, “‘Lost in translation’?: Intercultural dialogues on privacy and information ethics (introduction to special issue on privacy and data privacy protection in Asia),” Ethics and Information Technology, vol. 7, no. 1, pp. 1-6, March 2005.
S. Hongladarom, “Intercultural Information Ethics: A Pragmatic Consideration,” in Information Cultures in the Digital Age. Wiesbaden, Germany: Springer Fachmedien, 2016, pp. 191-206.
L. G. Rodríguez and M. Á. P. Álvarez, Ética Multicultural y Sociedad en Red. Madrid: Fundación Telefónica, 2014.
P. H. Wong, “What Should We Share?: Understanding the Aim of Intercultural Information Ethics,” ACM SIGCAS Computers and Society, vol. 39, no. 3, pp. 50-58, Dec. 2009.
S. A. Wilson, “Conformity, Individuality, and the Nature of Virtue: A Classical Confucian Contribution to Contemporary Ethical Reflection,” The Journal of Religious Ethics, vol. 23, no. 2, pp. 263-289, 1995.
P. H. Wong, “Responsible Innovation for Decent Nonliberal Peoples: A Dilemma?” Journal of Responsible Innovation, vol. 3, no. 2, pp. 154-168, July 2016.
R. B. Zeuschner, Classical Ethics, East and West: Ethics from a Comparative Perspective. Boston, MA: McGraw-Hill, 2000.
S. Mattingly-Jordan, “Becoming a Leader in Global Ethics,” IEEE, 2017.
Issue: The Application of Classical Buddhist Ethical Traditions to A/IS Design
Background
According to Buddhism, the field of ethics is concerned with behaving in such a way that the subject ultimately realizes the goal of liberation. The question, “How should I act?” is answered straightforwardly; one should act in such a way that one realizes liberation (nirvana)
in the future, achieving what in Buddhism is understood as “supreme happiness”. Thus Buddhist ethics are clearly goal-oriented. In the Buddhist tradition, people attain liberation when they no longer endure any unsatisfactory conditions, when they have attained the state where they are completely free from any passions, including desire, anger, and delusion, to name the traditional three, that ensnare one’s self against freedom. In order to attain liberation, one engages oneself in mindful behavior (ethics), concentration (meditation), and what is deemed in Buddhism as “wisdom”, a term that remains ambiguous in Western scientific approaches to ethics.
Thus ethics in Buddhism are concerned exclusively with how to attain the goal of liberation, or freedom. In contrast to Western ethics, Buddhist ethics are not concerned with theoretical questions on the source of normativity or what constitutes the good life. What makes an action a “good” action in Buddhism is always concerned with whether the action leads, eventually, to liberation or not. In Buddhism, there is no questioning why liberation is a good thing. It is simply assumed. Such an assumption places Buddhism, and ethical reflection from a Buddhist perspective, in the camp of mores rather than scientifically led ethical discourse, and it is approached as an ideology or a worldview.
While it is critically important to consider, understand, and apply accepted ideologies such as Buddhism in A/IS, it is both necessary to differentiate the methodology from that of Western ethics, and respectful to the Buddhist tradition, not to require that it be considered in a scientific
context. Such assumptions put it at odds with the Western foundation of ethical reflection on mores. From a Buddhist perspective, one does not ask why supreme happiness is a good thing; one simply accepts it. The relevant question in Buddhism is not about methodological reflection, but about how to attain liberation from the necessity for such reflection.
Thus, Buddhist ethics contain potential for conflict with Western ethical value systems which are founded on ideas of questioning moral and epistemological assumptions. Buddhist ethics are different from, for example, utilitarianism, which operates via critical analysis toward providing the best possible situation to the largest number of people, especially as it pertains to the good life. These fundamental differences between the traditions need to be, first and foremost, mutually understood and then addressed in one form or another when designing A/IS that span cultural contexts.
The main difference between Buddhist and Western ethics is that Buddhism is based upon a metaphysics of relation. Buddhist ethics emphasizes how action leads to achieving a goal, or in the case of Buddhism, the final goal. In other words, an action is considered a good one when it contributes to the realization of the goal. It is relational when the value of an action is relative to whether or not it leads to the goal, the goal being the reduction and eventual cessation of suffering. In Buddhism, the self is constituted through the relationship between the synergy of bodily parts and mental activities. In Buddhist analysis, the self does not actually exist as a self-subsisting entity. Liberation, or nirvana, consists in realizing that what is known to be the
self actually consists of nothing more than these connecting episodes and parts. To exemplify the above, one can draw from the concept of privacy as often explored via intercultural information ethics. The Buddhist perspective understands privacy as a protection, not of self-subsisting individuals, because such do not exist ultimately speaking, but of certain values that are found to be necessary for a well-functioning society to prosper in the globalized world.
The secular formulation of the supreme happiness mentioned above is that of the reduction of the experience of suffering, or reduction of the metacognitive state of suffering. Such a state is the result of lifelong discipline and meditation aimed at achieving proper relationships with others and with the world. This notion of the reduction of suffering is something that can resonate well with certain Western traditions, such as Epicurean ataraxia, i.e., freedom from fear through reason and discipline, and versions of consequentialist ethics that are more focused on the reduction of harm. It also encompasses the concept of phronesis, or practical wisdom, from virtue ethics.
Relational ethical boundaries promote ethical guidance that focuses on creativity and growth rather than solely on mitigation of consequence and avoidance of error. If the goal of the reduction of suffering can be formulated in a way that is not absolute, but collaboratively defined, this leaves room for many philosophies and related approaches as to how this goal can be accomplished. Intentionally making space for ethical pluralism is one potential antidote to dominance of the conversation by liberal thought, with its legacy of Western colonialism.
Recommendations
In considering the nature of interactions between human and autonomous systems, the above notion of “proper relationships” through Buddhist ethics can provide a useful platform that results in ethical statements formulated in a relational way, instead of an absolutist way. It is recommended as an additional methodology, along with Western-value methodologies, to address human/computer interactions.
Further Resources
R. Capurro, “Intercultural Information Ethics: Foundations and Applications,” Journal of Information, Communication & Ethics in Society, vol. 6, no. 2, pp. 116-126, 2008.
C. Ess, “Ethical Pluralism and Global Information Ethics,” Ethics and Information Technology, vol. 8, no. 4, pp. 215-226, Nov. 2006.
S. Hongladarom, “Intercultural Information Ethics: A Pragmatic Consideration,” in Information Cultures in the Digital Age, K. M. Bielby, Ed. Wiesbaden, Germany: Springer Fachmedien Wiesbaden, 2016, pp. 191-206.
S. Hongladarom and J. Britz, “Intercultural Information Ethics,” International Review of Information Ethics, vol. 13, pp. 2-5, Oct. 2010.
M. Nakada, “Different Discussions on Roboethics and Information Ethics Based on Different Contexts (Ba): Discussions on Robots, Informatics and Life in the Information Era in Japanese Bulletin Board Forums and Mass Media,” Proceedings of Cultural Attitudes towards Communication and Technology, pp. 300-314, 2010.
M. Mori, The Buddha in the Robot. Suginami-ku, Japan: Kosei Publishing, 1989.
Issue: The Application of Ubuntu Ethical Traditions to A/IS Design
Background
In his article, “African Ethics and Journalism Ethics: News and Opinion in Light of Ubuntu”, Thaddeus Metz frames the following question: “What does a sub-Saharan ethic focused on the good of community, interpreted philosophically as a moral theory, entail for the duties of various agents with respect to the news/opinion media?” (Metz, 2015, 1). In applying that question to A/IS, it reads: “If an ethic focused on the good of community, interpreted philosophically as a moral theory, is applied to A/IS, what would the implications be on the duties of various agents?” Agents, in this regard, would therefore be the following:
Members of the A/IS research community
Ubuntu is a sub-Saharan philosophical tradition. Its basic tenet is that a person is a person through other persons. It develops further in the notions of caring and sharing as well as identity and belonging, whereby people experience their lives as bound up with their community. A person is defined in relation to the community since the sense of being is intricately linked with belonging. Therefore, community exists through shared experiences and values. It is a commonly held maxim in the Ubuntu tradition that, “to be is to belong to a community and participate.” As the saying goes, motho ke motho ka batho babang, or, “a person is a person because of other people.”
Very little research, if any at all, has been conducted in light of Ubuntu ethics and A/IS, but its focus will be within the following moral domains:
Among the members of the A/IS research community
Between the A/IS community/programmers/computer scientists and the end-users
Between the A/IS community/programmers/computer scientists and A/IS
Between the end-users and A/IS
Between A/IS and A/IS
Considering a future where A/IS will become more entrenched in our everyday lives, one must keep in mind that an attitude of sharing one’s experiences with others and caring for their well-being will be impacted. Also, by trying to ensure solidarity within one’s community, one
must identify factors and devices that will form part of their lifeworld. If so, will the presence of A/IS inhibit the process of partaking in a community, or does it create more opportunities for doing so? One cannot classify A/IS as only a negative or disruptive force; it is here to stay and its presence will only increase. Ubuntu ethics must come to grips with, and contribute to, the body of knowledge by establishing a platform for mutual discussion and understanding. Ubuntu, as collective human dignity, may offer a way of understanding the impact of A/IS on humankind, e.g., the need for human moral and legal agency; human life and death decisions to be taken by humans rather than A/IS.
Such analysis fleshes out the following suggestive comments of Desmond Tutu, renowned former chair of South Africa’s Truth and Reconciliation Commission, when he says of Africans, “(We say) a person is a person through other people… I am human because I belong” (Tutu, 1999). As Tutu notes, “Harmony, friendliness, and community are great goods. Social harmony is for us the summum bonum-the greatest good. Anything that subverts or undermines this sought-after good is to be avoided” (2015:78).
In considering the above, it is fair to state that community remains central to Ubuntu. In situating A/IS within this moral domain, they will have to adhere to the principles of community, identity, and solidarity with others. On the other hand, they will also need to be cognizant of, and sensitive toward, the potential for community-based ethics to exclude individuals on the basis that they do not belong or fail to meet communitarian standards. For example,
would this mean the excluded individual lacks personhood and as a consequence would not be able to benefit from community-based A/IS initiatives? How would community-based A/IS programming avoid such biases against individuals?
While virtue ethics question the goal or purpose of A/IS and deontological ethics question the duties, the fundamental question asked by Ubuntu would be, “How does A/IS affect the community in which it is situated?” This question links with the initial question concerning the duties of the various moral agents within the specific community. Motivation becomes very important, because if A/IS seek to detract from community, they will be detrimental to the identity of this community when it comes to job losses, poverty, and gaps in education and skills training. However, should A/IS seek to supplement the community by means of ease of access, support systems, and more, then it cannot be argued that they will be detrimental. In between these two motivators is a safeguarding issue about how to avoid excluding individuals from accessing community-based A/IS initiatives. It therefore becomes imperative that whoever designs the systems must work closely both with ethicists and the target community, audience, or end-user to ascertain whether their needs are identified and met.
Recommendations
It is recommended that a concerted effort be made toward the study and publication of literature addressing potential relationships between Ubuntu and other instances of African ethical traditions and A/IS value design. A/IS
designers and programmers must work closely with the end-users and target communities to ensure their design objectives, products, and services are aligned with the needs of the end-users and target communities.
Further Resources
D. W. Lutz, “African Ubuntu Philosophy and Global Management,” Journal of Business Ethics, vol. 84, pp. 313-328, Oct. 2009.
T. Metz, “African Ethics and Journalism Ethics: News and Opinion in Light of Ubuntu,” Journal of Media Ethics: Exploring Questions of Media Morality, vol. 30, no. 2, pp. 74-90, April 2015.
T. Metz, “Ubuntu as a Moral Theory and Human Rights in South Africa,” African Human Rights Law Journal, vol. 11, no. 2, pp. 532-559, 2011.
R. Nicolson, Persons in Community: African Ethics in a Global Culture. Scottsville, South Africa: University of KwaZulu-Natal Press, 2008.
A. Shutte, Ubuntu: An Ethic for a New South Africa. Dorpspruit, South Africa: Cluster Publications, 2001.
D. Tutu, No Future without Forgiveness. London: Rider, 1999.
O. Ulgen, “Human Dignity in an Age of Autonomous Weapons: Are We in Danger of Losing an ‘Elementary Consideration of Humanity’?” in How International Law Works in Times of Crisis, I. Ziemele and G. Ulrich, Eds. Oxford: Oxford University Press, 2018, pp. 242-272.
Classical Ethics in A/IS
Issue: The Application of Shinto-Influenced Traditions to A/IS Design
Background
Alongside the burgeoning Ubuntu reflections on A/IS in Africa, other indigenous technoethical traditions offer an extensive engagement with these questions. One such tradition is Japanese Shinto indigenous spirituality, or Kami no michi, often cited as the catalyst for Japanese robot and autonomous systems culture, a culture that stems naturally from the traditional Japanese concept of karakuri ningyo (automata). Popular Japanese artificial intelligence, robot, and videogaming culture can be directly connected to indigenous Shinto tradition, from the existence of kami (spirits) to puppets and automata.
The relationship between A/IS and a human being is a personal relationship in Japanese culture and, one could argue, a very natural one. The phenomenon of "relationship" in Japan between humans and automata stands out as unique among technological relationships in world cultures, since the Shinto tradition is arguably the only animistic and naturalistic tradition that can be directly connected to contemporary digital culture and A/IS. From the Shinto perspective, the existence of A/IS, whether manifested through robots or other technological autonomous systems, is as natural to the world as rivers, forests, and thunderstorms. As noted by Spyros G. Tzafestas, author of Roboethics: A Navigating Overview, "Japan's harmonious feeling for intelligent machines and robots, particularly for humanoid ones" (Tzafestas, 2015, 155) colors and influences technological development in Japan, especially robot culture.
The word "Shinto" can be traced to two Japanese concepts: shin, meaning spirit, and to, the philosophical path. The modern concept of the android, by contrast, can be traced to three sources: first, its Greek etymology, which combines andras ("ἄνδρας"), or man, with gyni ("γυνή"), or woman (as in "gynoid"); second, the automatons and toys of U.S. patent developers in the 1800s; and third, Japan, where both historical and technological foundations for android development have dominated the market since the 1970s. Japanese Shinto-influenced technology culture is perhaps the most authentic representation of the human-automaton interface.
Shinto tradition is an animistic religious tradition, positing that everything is created with, and maintains, its own spirit (kami) and is animated by that spirit, an idea that goes a long way toward defining autonomy in robots from a Japanese viewpoint. This includes, on one hand, everything that Western culture might deem natural, including rivers, trees, and rocks, and on the other hand, everything artificially (read: artfully) created, including vehicles, homes, and automata (robots). Artifacts are as much a part of nature in Shinto as animals, and they are considered naturally beautiful rather than falsely artificial.
A potential conflict between Western and Japanese concepts of nature and artifact arises when the two traditions are compared and contrasted, especially in the exploration of artificial intelligence. While in Shinto the artifact as "artificial" represents creation and authentic being, with implications for defining autonomy, the same artifact is designated as secondary, and often as unnatural, false, and counterfeit, in the Western ethical philosophical tradition, dating back to Platonic and Christian ideas of the separation of form and spirit. In both traditions, culturally presumed biases define our relationships with technology. While disparate in origin and foundation, both Western classical ethics traditions and Shinto ethical influences in modern A/IS have similar goals and outlooks for ethics in A/IS, goals that are centered in "relationship".
Recommendations
Where Japanese culture leads the way in the synthesis of traditional value systems and technology, we recommend that people involved in A/IS ethics efforts explore the Shinto paradigm as representative of, though not necessarily directly applicable to, global efforts to understand and apply traditional and classical ethics methodologies to A/IS.
Further Resources
R. M. Geraci, "Spiritual Robots: Religion and Our Scientific View of the Natural World," Theology and Science, vol. 4, no. 3, pp. 229-246, 2006.
D. F. Holland-Minkley, "God in the Machine: Perceptions and Portrayals of Mechanical Kami in Japanese Anime." Ph.D. dissertation, University of Pittsburgh, Pittsburgh, PA, 2010.
C. B. Jensen and A. Blok, "Techno-Animism in Japan: Shinto Cosmograms, Actor-Network Theory, and the Enabling Powers of Non-Human Agencies," Theory, Culture & Society, vol. 30, no. 2, pp. 84-115, March 2013.
F. Kaplan, "Who Is Afraid of the Humanoid? Investigating Cultural Differences in the Acceptance of Robots," International Journal of Humanoid Robotics, vol. 1, no. 3, pp. 465-480, 2004.
S. G. Tzafestas, Roboethics: A Navigating Overview. Cham, Switzerland: Springer, 2015.
G. Veruggio and K. Abney, "Roboethics: The Applied Ethics for a New Science," in Robot Ethics: The Ethical and Social Implications of Robotics, ch. 22. Cambridge, MA: MIT Press, 2011, p. 347.
Section 3: Classical Ethics for a Technical World
Issue: Maintaining Human Autonomy
Background
A/IS present the possibility of a digitally networked intellectual capacity that imitates, matches, and supersedes human intellectual capacity, including, among other things, general skills, discovery, and computing functions. In addition, A/IS can potentially acquire functionality in areas traditionally captured under the rubric of what we deem uniquely human and social ability. While the larger question of ethics and A/IS looks at the implications of the influence of autonomous systems in these areas, the pertinent issue is the possibility of autonomous systems imitating, influencing, and then determining the norms of human autonomy. This occurs through the eventual negation of independent human thinking and decision-making, where algorithms begin to inform, through targeted feedback loops, what it is we are and what it is we should decide. How, then, can the academic rigor of traditional ethics speak to the question of maintaining human autonomy in light of algorithmic decision-making?
How will A/IS influence human autonomy in ways that may or may not be advantageous to the good life, and that perhaps, even if advantageous, may be detrimental at the same time? How do these systems affect human autonomy and decision-making through the use of algorithms, when those algorithms tend to inform ("in-form") via targeted feedback loops?
Consider, for example, Google's autocomplete tool, where algorithms attempt to determine one's search parameters from the user's initial keyword input, offering suggestions based on several criteria, including search patterns. In this scenario, autocomplete suggestions influence, in real time, the parameters by which the user phrases their search, often reshaping the user's perceived notion of what they were looking for in the first place, versus what they might actually have originally intended.
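As a hedged illustration of this feedback loop, the sketch below shows a frequency-ranked autocomplete in miniature: suggestions are drawn from the aggregate behavior of past users, so the majority's queries shape what the next user is prompted to search for. The class name and data are invented for illustration and do not reflect Google's actual system.

```python
# Minimal sketch of a feedback loop in autocomplete (illustrative only).
from collections import Counter

class Autocomplete:
    def __init__(self):
        self.query_counts = Counter()  # aggregate behavior of past users

    def record(self, query: str) -> None:
        """Each completed search feeds back into future suggestions."""
        self.query_counts[query.lower()] += 1

    def suggest(self, prefix: str, k: int = 3) -> list[str]:
        """Rank matching past queries by popularity, not by this user's intent."""
        prefix = prefix.lower()
        matches = [(q, n) for q, n in self.query_counts.items() if q.startswith(prefix)]
        matches.sort(key=lambda qn: -qn[1])
        return [q for q, _ in matches[:k]]

ac = Autocomplete()
for q in ["ethics of ai", "ethics of ai", "ethics of ai", "ethics committee"]:
    ac.record(q)
# The majority's past queries now shape what the next user is offered.
print(ac.suggest("ethics"))
```

Even in this toy form, the user's stated prefix is completed by what others searched for, which is precisely the "in-forming" dynamic the text describes.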
Targeted algorithms also inform, as per the emerging IoT, applications that monitor the user's routines and habits in the analog world. Consider, for example, that our bioinformation is, or soon will be, available for interpretation by autonomous systems. What happens when autonomous systems can inform the user in ways the user is not even aware of, using one's bioinformation in targeted advertising campaigns that seek to influence the user in real-time feedback loops based on the user's biological reactions, such as pupil dilation, body temperature, and emotional reaction, whether positive or negative, to that very same advertising, using information about our being to in-form and re-form our being? On the other hand, it is important not to adopt dystopian assumptions about autonomous machines threatening human autonomy.
The tendency to think only in negative terms presupposes a case for interactions between autonomous machines and human beings that is not necessarily based in evidence. Ultimately, the behavior of algorithms rests solely in their design, and that design rests solely in the hands of those who designed them. Perhaps more important, however, is the matter of choice in how the user chooses to interact with the algorithm. Users often do not know whether an algorithm is interacting with them directly or with their data, which acts as a proxy for their identity. Should there be a precedent for the A/IS user to know when they are interacting with an algorithm? What about consent?
The responsibility for the behavior of algorithms remains with the designer, the user, and a set of well-designed guidelines that guarantee the importance of human autonomy in any interaction. As machine functions become more autonomous and begin to operate in a wider range of situations, any notion of those machines working for or against human beings becomes contested. Does the machine work for someone in particular, or for particular groups but not others? Who decides on the parameters? Is it the machine itself? Such questions become key factors in conversations around ethical standards.
Recommendations
A two-step process is recommended to maintain human autonomy in A/IS. The creation of an ethics-by-design methodology is the first step to addressing human autonomy in A/IS, where a critically applied ethical design of autonomous systems preemptively considers how and where autonomous systems may or may not dissolve human autonomy. The second step is the creation of a pointed and widely applied education curriculum that spans grade school through university, one based on a classical ethics foundation that focuses on providing choice and accountability toward digital being as a priority in information and knowledge societies.
Further Resources
B. van den Berg and J. de Mul, "Remote Control: Human Autonomy in the Age of Computer-Mediated Agency," in Law, Human Agency and Autonomic Computing: The Philosophy of Law Meets the Philosophy of Technology, M. Hildebrandt and A. Rouvroy, Eds. London: Routledge, 2011, pp. 46-63.
L. Costa, "A World of Ambient Intelligence," in Virtuality and Capabilities in a World of Ambient Intelligence. Cham, Switzerland: Springer International, 2016, pp. 15-41.
P. P. Verbeek, "Subject to Technology: On Autonomic Computing and Human Autonomy," in The Philosophy of Law Meets the Philosophy of Technology: Autonomic Computing and Transformations of Human Agency, M. Hildebrandt and A. Rouvroy, Eds. New York: Routledge, 2011.
D. Reisman, J. Schultz, K. Crawford, and M. Whittaker, "Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability," AI Now, April 2018.
A. Chaudhuri, "Philosophical Dimensions of Information and Ethics in the Internet of Things (IoT) Technology," EDPACS, vol. 56, no. 4, pp. 7-18, Nov. 2017.
Issue: Implications of Cultural Migration in A/IS
Background
In addition to developing an understanding of A/IS via different cultures, it is crucial to understand how A/IS are shaped and reshaped, how they affect and are affected by human mobility and cultural diversity through active immigration. The effect of human mobility on state systems reliant on A/IS impacts the State structure itself, and thus the systems that the structure relies on, in the end influencing everything from democracy to citizenship. Where the State, through A/IS, invests in and gathers big data through mechanisms for the registration and identification of people, mainly immigrants, human mobility becomes a foundational component in a system geared toward the preservation of human dignity.
Traditional national concerns reflect two information foundations: information produced for human rights and information produced for national sovereignty. In the second foundation, State borders are considered the limits from which political governance is defined in terms of security. The preservation of national sovereignty depends on the production and domination of knowledge. In the realm of migratory policies, knowledge is created to measure people in transit: collecting, processing, and transferring information about territory and society.
Knowledge organization has been the paramount pillar of scientific thought and scientific practice since the beginning of written civilization. Scientific and technological development has only been possible through information policies that include the establishment of management processes to systematize knowledge and the codification of language. For the Greeks, this process was closely associated with the concept of arete, the excellence of one's self in politics as congregated in the polis. The notion of the polis is as relevant as ever in the digital age, given the development of digital technologies and the discussions around morality in A/IS. Where the systematization of knowledge can potentially be freely created, the advent of the Internet and its flows is difficult to control. Ethical issues about the production of information are becoming paramount in our digital society.
The advancement of science and technology has not been followed by innovations in the political community, and the technical community has repeatedly set aside academic discussions about the hegemony of technocracy over policy issues, restricting the space of the policy arena and excessively valorizing technical solutions to human problems. This monopoly alters conceptions of morality, relocating the locus of the Kantian "Categorical Imperative" and causing the tension among different social and political contexts to become more pervasive.
Current global migration dynamics have been met by unfavorable public opinion rooted in ideas of crisis and emergency, a response vastly disproportionate to what statistics show to be the reality. In response to these views, A/IS are currently designed and applied to measure, calculate, identify, register, systematize, normalize, and frame both human rights and security policies. This process is largely no different from what has been practiced since the period of colonialism, and it includes the creation and implementation of a set of ancient and new technologies. Throughout history, mechanisms have been created first to identify and select individuals who share a certain biological heritage, and second to classify individuals and social groups according to their characteristics, including biological ones.
Information is only possible when materialized as an infrastructure supported by ideas in action, a "communicative act" that Habermas (1968) identifies in Hegel's work, converging three elements in human-in-the-world relationships: symbol, language, and labor. Information policies reveal the importance and the strength with which technologies influence economic, social, cultural, identity, and ethnic interactions.
Traditional mechanisms used to control migration, such as the passport, are associated with globally established walls and fences. The more intense human mobility becomes, the more amplified are the discourses to discourage it, restricting human migrations and deepening the need for an ethics related to the conditions of citizenship. Together with the building of walls, other remote technologies are developed to monitor and surveil borders, buildings, and streets, also impacting ideas and moral presumptions of citizenship. Closed-circuit television (CCTV), unmanned aerial vehicles (UAVs), and satellites allow data transfer in real time to databases, cementing the backbone that A/IS draw from, often with bias reflecting the expectations of developed countries. This centrality of data sources for A/IS expresses a divide between developed and underdeveloped countries, particularly as relevant to the refugee.
Information links languages, habits, customs, and identification and registration technologies. It provokes a reshaping of immigrants' and refugees' citizenship, and of their value as people in terms of that citizenship, as they seek ways of surviving within, and against, the restrictions imposed by A/IS surveillance and monitoring in an enlarged and more complex cosmopolis.
An understanding of the impact of A/IS on migration and mobile populations, as used in state systems, is a critical first step if systems are to become truly autonomous and intelligent, especially beyond the guidance of human deliberation. Digital technology systems used to register and identify human mobility, including refugees and other displaced populations, are not autonomous in the intelligent sense and depend on the biases of worldviews around immigration. In this respect, language is the locus where this dichotomy must be considered in order to understand the diversity of morals when different cultures come into contact.
Recommendations
It is recommended that the State become a proactive player in the globalized processes of A/IS for migrant and mobile populations, introducing a series of mechanisms that limit the segregation of social spaces and groups and that take into account the biases inherent in surveillance for control.
Further Resources
I. About and V. Denis, Histoire de l'identification des personnes. Paris: La Découverte, 2010.
I. About, J. Brown, and G. Lonergan, Identification and Registration Practices in Transnational Perspective: People, Papers and Practices. London: Palgrave Macmillan, 2013, pp. 1-13.
D. Bigo, "Security and Immigration: Toward a Critique of the Governmentality of Unease," Alternatives, Special Issue, no. 27, pp. 63-92, 2002.
R. Capurro, "Citizenship in the Digital Age," in Information Ethics, Globalization and Citizenship, T. Samek and L. Schultz, Eds. Jefferson, NC: McFarland, 2017, pp. 11-30.
R. Capurro, "Intercultural Information Ethics," in Localizing the Internet: Ethical Aspects in Intercultural Perspective, R. Capurro, J. Frühbauer, and T. Hausmanninger, Eds. Munich: Fink, 2007, pp. 21-38.
UN High Commissioner for Refugees (UNHCR), Policy on the Protection of Personal Data of Persons of Concern to UNHCR, May 2015.
Issue: Applying Goal-Directed Behavior (Virtue Ethics) to Autonomous and Intelligent Systems
Background
Initial concerns regarding A/IS also include questions of function, purpose, identity, and agency, a continuum of goal-directed behavior with function being the most primitive expression. How can classical ethics act as a regulating force in autonomous technologies as goal-directed behavior transitions from being externally set by operators to being internally set? The question is important not just for safety reasons, but for mutual productivity. If autonomous systems are to be our trusted, creative partners, then we need to be confident that we possess mutual anticipation of goal-directed action in a wide variety of circumstances.
A virtue ethics approach has merits for accomplishing this even without having to posit a "character" in an autonomous technology, since it places emphasis on habitual, iterative action focused on achieving excellence in a chosen domain or in accord with a guiding purpose. At points on the goal-directed continuum associated with greater sophistication, virtue ethics becomes even more useful by providing a framework for prudent decision-making that is in keeping with the autonomous system's purpose, but allows for creativity in how to achieve that purpose in a way that still permits a degree of predictability. An ethics approach that relies not on a decision to refrain from transgressing, but instead on prudently pursuing a sense of purpose informed by one's identity, might provide a greater degree of insight into the behavior of the system.
Recommendations
Program autonomous systems to recognize user behavior for the purposes of predictability, traceability, and accountability, and to hold expectations as an operator and co-collaborator, whereby both user and system mutually recognize the decisions of the autonomous system as virtue-ethics-based.
Further Resources
M. A. Boden, Ed., The Philosophy of Artificial Life. Oxford, U.K.: Oxford University Press, 1996.
C. Castelfranchi, "Modelling Social Action for AI Agents," Artificial Intelligence, vol. 103, no. 1-2, pp. 157-182, 1998.
W. D. Christensen and C. A. Hooker, "Anticipation in Autonomous Systems: Foundations for a Theory of Embodied Agents," International Journal of Computing Anticipatory Systems, vol. 5, pp. 135-154, Dec. 2000.
K. G. Coleman, "Android Arete: Toward a Virtue Ethic for Computational Agents," Ethics and Information Technology, vol. 3, no. 4, pp. 247-265, 2001.
J. G. Lennox, "Aristotle on the Biological Roots of Virtue," in Biology and the Foundations of Ethics, J. Maienschein and M. Ruse, Eds. Cambridge, U.K.: Cambridge University Press, 1999, pp. 405-438.
L. Muehlhauser and L. Helm, "The Singularity and Machine Ethics," in Singularity Hypotheses, A. H. Eden, J. H. Moor, J. H. Soraker, and E. Steinhart, Eds. Berlin: Springer, 2012, pp. 101-126.
D. Vernon, G. Metta, and G. Sandini, "A Survey of Artificial Cognitive Systems: Implications for the Autonomous Development of Mental Capabilities in Computational Agents," IEEE Transactions on Evolutionary Computation, vol. 11, no. 2, pp. 151-180, April 2007.
Issue: A Requirement for Rule-Based Ethics in Practical Programming
Background
Research in machine ethics focuses on simple moral machines. Deontological and teleological ethics are best suited to the kind of practical programming needed for such machines, as these ethical systems are abstractable enough to encompass ideas of non-human agency, whereas most modern ethics approaches are far too human-centered to properly accommodate the task.
In the deontological model, duty is the point of departure. Duty can be translated into rules, which can be distinguished into rules and metarules. For example, a rule might take the form "Don't lie!", whereas a metarule would take the form of Kant's categorical imperative: "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law."
A machine can follow simple rules. Rule-based systems can be implemented as formal systems, also referred to as "axiomatic systems", and in the case of machine ethics, a set of rules is used to determine which actions are morally allowable and which are not. Since it is not possible to cover every situation by a rule, an inference engine is used to deduce new rules from a small set of simple rules, called axioms, by combining them. The morality of a machine comprises the set of rules that is deducible from the axioms.
Formal systems have an advantage in that properties such as decidability and consistency can be effectively examined. If a formal system is decidable, every rule is either morally allowable or not, and the "unknown" is eliminated. If the formal system is consistent, one can be sure that no two rules can be deduced that contradict each other. In other words, the machine never has moral doubt about an action and never encounters a deadlock.
The disadvantage of using formal systems is that many of them work only in closed worlds like computer games. In this case, what is not known is assumed to be false. This is in drastic conflict with real-world situations, where rules can conflict and it is impossible to take into account the totality of the environment. In other words, consistent and decidable formal systems that rely on a closed-world assumption can be used to implement an ideal moral framework for a machine, yet they are not viable for real-world tasks.
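The preceding ideas can be made concrete in a small sketch. The Python below implements a toy rule-based moral machine: a few axioms, a forward-chaining inference engine that deduces new facts by combining rules, and a closed-world assumption under which anything not derivable is treated as not allowed. The predicates and rules are illustrative assumptions, not a proposed moral code.

```python
# Toy axiomatic "moral machine" with forward chaining and a closed world.
AXIOMS = {"is_honest(answer)", "is_harmless(answer)"}

# Horn-style rules: if all premises are derivable, so is the conclusion.
RULES = [
    ({"is_honest(answer)", "is_harmless(answer)"}, "allowed(answer)"),
    ({"allowed(answer)"}, "allowed(polite_answer)"),
]

def derive(axioms, rules):
    """Forward-chain until no new facts can be deduced (the deductive closure)."""
    facts = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def morally_allowable(action, facts):
    """Closed-world assumption: not derivable means not allowed; no 'unknown'."""
    return f"allowed({action})" in facts

facts = derive(AXIOMS, RULES)
print(morally_allowable("answer", facts))     # True: deduced from the axioms
print(morally_allowable("deception", facts))  # False: not derivable, hence forbidden
```

The closed-world assumption is visible in the last line: "deception" is never mentioned anywhere, yet the system confidently forbids it, which is exactly the behavior that breaks down in open, real-world settings.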
One approach to avoiding a closed-world scenario is to utilize self-learning algorithms, such as case-based reasoning approaches. Here, the machine uses "experience" in the form of similar cases that it has encountered in the past, or draws on cases collected in databases.
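A minimal sketch of case-based reasoning in this spirit, with an invented feature encoding and case base, might look as follows: instead of deducing from axioms, the machine reuses the verdict of the most similar past case.

```python
# Toy case-based moral reasoner (illustrative features and verdicts only).
# Each case: (features, morally_acceptable). Features are simple 0/1 flags.
CASE_BASE = [
    ({"harms_person": 1, "consented": 0, "benefits_many": 0}, False),
    ({"harms_person": 0, "consented": 1, "benefits_many": 1}, True),
    ({"harms_person": 1, "consented": 1, "benefits_many": 1}, True),
]

def similarity(case_features, new_features):
    """Count matching feature values (a crude Hamming-style measure)."""
    return sum(case_features[k] == new_features[k] for k in case_features)

def judge(new_features):
    """Reuse the verdict of the nearest past case ('experience')."""
    best_features, verdict = max(CASE_BASE,
                                 key=lambda c: similarity(c[0], new_features))
    return verdict

# → True: the nearest stored case is the consensual, harmless one.
print(judge({"harms_person": 0, "consented": 1, "benefits_many": 0}))
```

The point of the sketch is the contrast with the formal-system approach: nothing here is deduced or guaranteed consistent; the judgment is only as good as the case base and the similarity measure.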
In the context of the teleological model, the consequences of an action are assessed. The machine must know the consequences of an action and what the action's consequences mean for humans, for animals, for things in the environment, and, finally, for the machine itself. It also must be able to assess whether these consequences are good or bad, or whether they are acceptable or not, and this assessment is not absolute. While a decision may be good for one person, it may be bad for another; while it may be good for a group of people or for all of humanity, it may be bad for a minority of people. An implementation approach that allows for the consideration of potentially contradictory subjective interests may be realized by decentralized reasoning approaches such as agent-based systems. In contrast to this, centralized approaches may be used to assess the overall consequences for all involved parties.
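A hedged sketch of the centralized variant of this teleological assessment follows: consequences are scored per affected party (the stakeholders and scores are invented assumptions) and aggregated, making visible the tension described above, where a net-positive decision can still harm a minority.

```python
# Toy centralized consequence assessment (illustrative stakeholders/scores).
def assess(consequences: dict[str, float]) -> dict:
    """Sum per-party outcome scores and report which parties are harmed."""
    total = sum(consequences.values())
    losers = [party for party, score in consequences.items() if score < 0]
    return {"net": total, "acceptable": total > 0, "harmed": losers}

# A decision that is good for a group but bad for a minority:
result = assess({"majority": 2.0, "operator": 1.0, "minority": -1.5})
print(result)  # net positive overall, yet 'minority' is harmed
```

A decentralized, agent-based variant would instead let each party's agent argue its own score, which is one way to represent the contradictory subjective interests the text mentions rather than collapsing them into a single sum.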
Recommendations
By applying the classical methodologies of deontological and teleological ethics to machine learning, rules-based programming in A/IS can be supplemented with established praxis, providing both theory and practicality toward consistent and decidable formal systems.
Classical Ethics in A/IS
Further Resources
C. Allen, I. Smit, and W. Wallach, “Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches,” Ethics and Information Technology, vol. 7, no. 3, pp. 149-155, 2005.
O. Bendel, Die Moral in der Maschine: Beiträge zu Roboter- und Maschinenethik. Heise Medien, 2016.
O. Bendel, Handbuch Maschinenethik. Wiesbaden, Germany: Springer VS, 2018.
M. Fisher, L. Dennis, and M. Webster, “Verifying Autonomous Systems,” Communications of the ACM, vol. 56, no. 9, pp. 84-93, Sept. 2013.
B. M. McLaren, “Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions,” IEEE Intelligent Systems, vol. 21, no. 4, pp. 29-37, July 2006.
M. A. Pérez Álvarez, “Tecnologías de la Mente y Exocerebro o las Mediaciones del Aprendizaje,” 2015.
E. L. Rissland and D. B. Skalak, “Combining Case-Based and Rule-Based Reasoning: A Heuristic Approach,” in Proceedings of the 11th International Joint Conference on Artificial Intelligence (IJCAI 1989), Detroit, MI, August 20-25, 1989. San Francisco, CA: Morgan Kaufmann Publishers Inc., 1989, pp. 524-530.
Thanks to the Contributors
We wish to acknowledge all of the people who contributed to this chapter.
The Classical Ethics in A/IS Committee
Jared Bielby (Chair) - President, Netizen Consulting Ltd; Chair, International Center for Information Ethics; editor, Information Cultures in the Digital Age
Soraj Hongladarom (Co-chair) - President, The Philosophy and Religion Society of Thailand
Miguel Á. Pérez Álvarez - Professor of Technology in Education, Colegio de Pedagogía, Facultad de Filosofía y Letras, Universidad Nacional Autónoma de México
Oliver Bendel - Professor of Information Systems, Information Ethics and Machine Ethics, University of Applied Sciences and Arts Northwestern Switzerland FHNW
Dr. John T. F. Burgess - Assistant Professor / Coordinator for Distance Education, School of Library and Information Studies, The University of Alabama
Rafael Capurro - Founder, International Center for Information Ethics
Corinne Cath-Speth - PhD student, Oxford Internet Institute, The University of Oxford; doctoral student, the Alan Turing Institute; Digital Consultant, ARTICLE 19
Dr. Paola Di Maio - Center for Technology Ethics, ISTCS.org UK and NCKU Taiwan
Robert Donaldson - Independent Computer Scientist, BMRILLC, Hershey, PA
Rachel Fischer - Research Officer, African Centre of Excellence for Information Ethics, Information Science Department, University of Pretoria, South Africa
Dr. D. Michael Franklin - Assistant Professor, Kennesaw State University, Marietta Campus, Marietta, GA
Wolfgang Hofkirchner - Associate Professor, Institute for Design and Technology Assessment, Vienna University of Technology
Dr. Tae Wan Kim - Associate Professor of Business Ethics, Tepper School of Business, Carnegie Mellon University
Kai Kimppa - University Research Fellow, Information Systems, Turku School of Economics, University of Turku
Sara R. Mattingly-Jordan - Assistant Professor, Center for Public Administration & Policy, Virginia Tech
Dr. Neil McBride - Reader in IT Management, School of Computer Science and Informatics, Centre for Computing and Social Responsibility, De Montfort University
Bruno Macedo Nathansohn - Perspectivas Filosóficas em Informação (Perfil-i); Brazilian Institute of Information in Science and Technology (IBICT)
Marie-Therese Png - PhD Student, Oxford Internet Institute; PhD Intern, DeepMind Ethics & Society
Samuel T. Segun - PhD Candidate, Department of Philosophy, University of Johannesburg; Fellow, Philosophy Node of the Centre for Artificial Intelligence Research (CAIR) at the University of Pretoria; Research Fellow, the Conversational School of Philosophy (CSP)
Dr. Ozlem Ulgen - Reader in International Law and Ethics, School of Law, Birmingham City University
Kristene Unsworth - Assistant Professor, The College of Computing & Informatics, Drexel University
Dr. Xiaowei Wang - Associate Professor of Philosophy, Renmin University of China
Dr. Sara Wilford - Senior Lecturer, Research Fellow, School of Computer Science and Informatics, Centre for Computing and Social Responsibility, De Montfort University
Pak-Hang Wong - Research Associate, Department of Informatics, University of Hamburg
Bendert Zevenbergen - Oxford Internet Institute, University of Oxford & Center for Information Technology Policy, Princeton University
For a full listing of all IEEE Global Initiative Members, visit standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf.
For information on disclaimers associated with EAD1e, see How the Document Was Prepared.
Endnotes
1 This edition of “Classical Ethics in A/IS” does not (and could not) aspire to universal coverage of all of the world’s traditions in the space available to us. Future editions will touch on several other traditions, including Judaism and Islam.
2 R. von Schomberg, “Prospects for Technology Assessment in a Framework of Responsible Research and Innovation,” in Technikfolgen Abschätzen Lehren: Bildungspotenziale Transdisziplinärer Methode. Wiesbaden, Germany: Springer VS, 2011, pp. 39-61.
Well-being
Prioritizing ethical and responsible artificial intelligence has become a widespread goal for society. Important issues of transparency, accountability, algorithmic bias, and value systems are being directly addressed in the design and implementation of autonomous and intelligent systems (A/IS). While this is an encouraging trend, a key question still facing technologists, manufacturers, and policymakers alike is how to assess, understand, measure, monitor, safeguard, and improve the well-being impacts of A/IS on humans. Answering this question is further complicated when A/IS are considered within a holistic and interconnected framework of well-being in which individual well-being is inseparable from societal, economic, and environmental systems.
For A/IS to demonstrably advance well-being, we need consistent and multidimensional indicators that are easily implementable by the developers, engineers, and designers who are building our future. This chapter is intended for such developers, engineers, and designers, referred to in this chapter as “A/IS creators”. Those affected by A/IS are referred to as “A/IS stakeholders”.
A/IS technologies affect human agency, identity, emotion, and ecological systems in new and profound ways. Traditional metrics of success are not equipped to ensure A/IS creators can avoid unintended consequences or benefit from unexpected innovation in the algorithmic age. A/IS creators need expanded ways to evaluate the impact of their products, services, or systems on human well-being. These evaluations must also be done with an understanding that human well-being is deeply linked to the well-being of societies, economies, and ecosystems.
Today, A/IS creators largely measure success using metrics including profit, gross domestic product (GDP), consumption levels, and occupational safety. While important, these metrics fail to encompass the full spectrum of well-being impacts on individuals and society, such as psychological, social, and environmental factors. Where these factors are not given priority equal to that given to fiscal metrics of success, A/IS creators risk causing or contributing to negative and irreversible harms to our people and our planet.
When A/IS creators are not aware that well-being indicators, in addition to traditional metrics, can provide guidance for their work, they are also missing out on innovation that can increase well-being and societal value. For instance, while it is commonly recognized that autonomous vehicles will save lives when safely deployed, a topic of less frequent discussion is how self-driving cars also have the potential to help the environment by reducing greenhouse gas emissions and increasing green space. Autonomous vehicles can also positively impact well-being by increasing work-life balance and enhancing the quality of time spent during commutes.
Unless A/IS creators are made aware of the existence of alternative measures of progress, the value they provide, and the way they can be incorporated into A/IS work, technology and society will continue to rely upon traditional metrics of success. In an era where innovation is defined by holistic prosperity, alternative measures are needed more now than ever before. The 2009 Report by the Commission on the Measurement of Economic Performance and Social Progress, which contributed substantially to the worldwide movement of governments using wider measures of well-being, states, “What we measure affects what we do; and if our measurements are flawed, decisions may be distorted.”
We believe that A/IS creators can profoundly increase human and environmental flourishing by prioritizing well-being metrics as an outcome in all A/IS system designs, now and for the future. The primary intended audience for this chapter is A/IS creators who are unfamiliar with the term “well-being” as it is used in the fields of positive psychology and well-being studies. Our initial goal is to provide a broad introduction to qualitative and quantitative metrics and applications of well-being, to educate and inspire A/IS creators. We do not prioritize or advocate for any specific indicator or methodology. For further elaboration on the definition of well-being, please see the first Issue listed in Section 1.
This chapter is divided into two main sections:
The Value of Well-being Metrics for A/IS Creators
Implementing Well-being Metrics for A/IS Creators
The following resources are available online to provide readers with an introduction to existing well-being metrics and tools currently in use:
The State of Well-being Metrics
The Happiness Screening Tool for Business Product Decisions
Additional Resources: Standards Development Models and Frameworks
Section 1: The Value of Well-being Metrics for A/IS Creators
Well-being metrics provide a broader perspective for A/IS creators than the one they might normally be familiar with when evaluating their products. This broader perspective unlocks greater opportunities to assure a positive impact of A/IS on human well-being, while minimizing the risk of unintended negative outcomes. This section defines well-being, discusses the value of well-being metrics to A/IS creators, and notes how similar frameworks, such as sustainability and human rights, can be complemented by incorporating well-being metrics.
Definition of Well-being
For the purposes of Ethically Aligned Design, the term “well-being” refers to an evaluation of the general quality of life of an individual and the state of external circumstances. The conception of well-being encompasses the full spectrum of personal, social, and environmental factors that enhance human life and on which human life depends. The concept of well-being shall be considered distinct from moral or legal evaluation.
Issue: There is ample and robust science behind well-being metrics and their use by international and national institutions. However, A/IS creators are often unaware that well-being metrics exist, or that they can be used to plan, develop, and evaluate technology.
Background
The concept of well-being refers to an evaluation of the general goodness of the state of an individual or community and is distinct from moral or legal evaluation. A well-being evaluation takes into account major aspects of a person’s life, such as their happiness, success in their goals, and their overall positive functioning in their environment. There is now a thriving area of scientific research into the psychological, social, behavioral, economic, and environmental determinants of human well-being.
The term “well-being” is defined and used in various ways across different contexts and fields: economists identify economic welfare with levels of consumption and economic vitality, psychologists highlight subjective experience, and sociologists emphasize living, labor, political, social, and environmental conditions. We do not take a stand on any specific measure of well-being. The metrics listed below are an incomplete list, provided as a starting point for further inquiry. Among them are subjective well-being indicators, measures of quality of life, social progress and capabilities, and many more.
There is now sufficient consensus among scientists that well-being can be reliably measured. Well-being measures differ in the number and intricacy of the indicators they employ. Short questionnaires on life satisfaction have emerged as particularly popular, although they do not reflect all aspects of well-being. While recognizing scope for differences across well-being indicators, we note that the richest conception of well-being encompasses the full spectrum of personal, social, and environmental goods that enhance human life.
We encourage A/IS creators to consider the wide range of available indicators and to select those most relevant and revealing for particular stages of the A/IS technology’s life cycle and for the particular context of the technology’s use and evaluation. That is, measures of well-being that are well-suited to wealthy, industrialized nations may be less applicable in low- and middle-income countries, and vice versa.
Among the most important and recognized aspects of well-being are (in alphabetical order):
Community: Belonging, Crime & Safety, Discrimination & Inclusion, Participation, Social Support
Culture: Identity, Values
Economy: Economic Policy, Equality & Environment, Innovation, Jobs, Sustainable Natural Resources & Consumption & Production, Standard of Living
Education: Formal Education, Lifelong Learning, Teacher Training
Environment: Air, Biodiversity, Climate Change, Soil, Water
Government: Confidence, Engagement, Human Rights, Institutions
Human Settlements: Energy, Food, Housing, Information & Communication Technology, Transportation
Physical Health: Health Status, Risk Factors, Service Coverage
Psychological Health: Affect (feelings), Flourishing, Mental Illness & Health, Satisfaction with Life
Work: Governance, Time Balance, Workplace Environment
In an effort to provide a basic orientation to well-being metrics, information about well-being indicators can be segmented into four categories:
1. Subjective or survey-based indicators
Survey-based well-being indicators, subjective well-being (SWB) indicators, and multidimensional measurements of aspects of well-being are being used by national institutions, international institutions, and governments to better understand levels of psychological well-being within countries and across segments of a country’s population. These indicators are also being used to understand people’s satisfaction in specific domains of life. Examples of surveys that include survey-based well-being indicators and SWB indicators are the European Social Survey, Bhutan’s Gross National Happiness Indicators, the well-being surveys created by the UK Office for National Statistics, and many more.
Survey-based metrics are also employed in the field of positive psychology and in the World Happiness Report. The data are used by researchers to understand the causes, consequences, and correlates of well-being. Data gathered from surveys tend to address concerns such as day-to-day experience, overall satisfaction with life, and perceived flourishing. The findings of these researchers provide crucial and necessary guidance because they often diverge from, and complement, the picture given by traditional indicators such as economic growth.
2. Objective indicators
Objective indicators of quality of life have typically incorporated areas such as income, consumption, health, education, crime, and housing. These indicators have been used to understand the conditions that support the well-being of countries and populations, and to measure the societal and environmental impact of companies. They are in use by organizations such as the OECD, with its Better Life Index (which also includes survey-based well-being indicators and SWB indicators), and the United Nations, with its Sustainable Development Goals Indicators (formerly the Millennium Development Goals). For business, the Global Reporting Initiative, SDG Compass, and B-Corp provide broad indicator sets.
3. Composite indicators (indices that aggregate multiple metrics)
Aggregate metrics combine subjective and/or objective metrics to produce one measure reflecting both objective aspects of quality of life and people’s subjective evaluation of these. Examples are the UN’s Human Development Index, the Social Progress Index, and the UK Office for National Statistics’ Measures of National Well-being. Some subjective and objective indicators are also composite indicators, such as Bhutan’s Gross National Happiness Index and the OECD’s Better Life Index.
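The mechanics of a composite index can be sketched briefly: each indicator is min-max normalized against reference bounds and the normalized scores are combined, here with the geometric mean that the UN's Human Development Index popularized. The indicator names, bounds, and observed values below are invented for illustration, not taken from any of the indices named above.

```python
# Hedged sketch of a composite well-being index. Indicator names,
# reference bounds, and observed values are hypothetical.

import math

BOUNDS = {  # (worst, best) reference values per indicator
    "life_satisfaction": (0.0, 10.0),
    "healthy_life_years": (40.0, 80.0),
    "air_quality": (0.0, 100.0),
}

def normalize(indicator, value):
    """Min-max normalize an observation to [0, 1] against its bounds."""
    lo, hi = BOUNDS[indicator]
    return (value - lo) / (hi - lo)

def composite_index(observations):
    """Geometric mean of the normalized indicator scores."""
    scores = [normalize(k, v) for k, v in observations.items()]
    return math.prod(scores) ** (1 / len(scores))

region = {"life_satisfaction": 7.0, "healthy_life_years": 65.0,
          "air_quality": 80.0}
print(round(composite_index(region), 3))
```

The geometric mean (rather than a simple average) penalizes imbalance: a region cannot fully offset a collapsed indicator with strength elsewhere, which is one reason the HDI adopted it.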
4. Social media sourced data
Social media can be used to measure the well-being of a geographic region or demographic group, based on sentiment analysis of publicly available data. Examples include the Hedonometer and the World Well-Being Project.
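The simplest form of such sentiment analysis can be sketched as averaging per-word "happiness" scores over a stream of public posts, in the style of the Hedonometer. The tiny valence lexicon and the posts below are invented; real instruments rely on lexicons of thousands of human-rated words and much larger corpora.

```python
# Hedonometer-style sketch: estimate aggregate mood from public text by
# averaging word-valence scores. The lexicon and posts are hypothetical.

VALENCE = {"happy": 8.3, "love": 8.4, "fine": 6.0, "sad": 2.4, "angry": 2.5}

def well_being_signal(posts):
    """Mean valence of all lexicon words found in the posts."""
    scores = [VALENCE[w] for post in posts for w in post.lower().split()
              if w in VALENCE]
    return sum(scores) / len(scores) if scores else None

posts = ["Happy to be here", "so sad about the news", "I love this park"]
print(round(well_being_signal(posts), 2))
```

Tracked over time and place, such a signal can suggest shifts in a population's mood, though it inherits every bias of the platform and the lexicon, which is why the chapter's later recommendations insist on consent and strict data standards.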
Recommendation
A/IS creators should prioritize learning about well-being concepts, scientific findings, research results, and well-being metrics as potential determinants of how they create, deploy, market, and monitor their technologies, and should ensure their stakeholders learn the same. This process can be expedited if Standards Development Organizations (SDOs), such as the IEEE Standards Association, or other institutions, such as the Global Reporting Initiative (GRI) or B-Corp, create certifications, guidelines, and standards for the use of holistic well-being metrics for A/IS in the public and private sectors.
Further Resources
The IEEE P7010™ Standards Project for a Well-being Metric for Autonomous and Intelligent Systems was formed with the aim of identifying well-being metrics applicable to A/IS today and in the future. All are welcome to join the working group.
On 11 April 2017, IEEE hosted a dinner debate at the European Parliament in Brussels to discuss how the world’s top metric of value, gross domestic product, must move Beyond GDP to holistically measure how intelligent and autonomous systems can hinder or improve human well-being.
Prioritizing Human Well-being in the Age of Artificial Intelligence (Report)
Prioritizing Human Well-being in the Age of Artificial Intelligence (Video)
Issue: Increased awareness and application of well-being metrics by A/IS creators can create greater value, safety, and relevance for corporate communities and other organizations in the algorithmic age.
Background
While many organizations in the private and public sectors are increasingly aware of the need to incorporate well-being measures into their efforts, the reality is that bottom-line, quarterly-driven shareholder growth remains a dominant goal and metric. Short-term growth is often the priority in both the private and the public sector. As long as organizations exist in a larger societal system that prioritizes financial success, these companies will remain under pressure to deliver financial results that do not fully incorporate societal and environmental impacts, measurements, or priorities.
Rather than focus solely on the negative aspects of how A/IS could harm humans and environments, we seek to explore how the implementation of well-being metrics can help A/IS have a measurable, positive impact on human well-being as well as on systems and organizations. Incorporating well-being goals and measures beyond what is strictly required can benefit both private sector organizations’ brands and public sector organizations’ stability and reputation, as well as help realize financial savings, innovation, trust, and many other benefits. For instance, a companion robot designed to support seniors in assisted living situations might traditionally be launched under the technology development model popularized by Silicon Valley and known as “move fast and break things”. An A/IS creator who rushed to bring the robot to market faster than the competition, and who was unaware of well-being metrics, may have overlooked critical needs of the seniors. The robot might actually hurt seniors instead of helping them, by exacerbating isolation or feelings of loneliness and helplessness. While this is a hypothetical scenario, it is intended to demonstrate the value of linking A/IS design to well-being indicators.
By prioritizing largely fiscal metrics of success, A/IS devices might fail in the market because of limited adoption and subpar reception. However, if during use of the A/IS product success were measured in terms of relevant aspects of well-being, developers and researchers could be in a better position to attain funding and public support. Depending on the intended use of the A/IS product, the well-being measures that could be used extend to emotional levels of calm or stress; psychological states of thriving or depression; behavioral patterns of engagement in community or isolation; eating, exercise, and consumption habits; and many other aspects of human well-being. The A/IS product could significantly improve quality of life guided by metrics from trusted sources, such as the World Health Organization, the European Social Survey, and the Sustainable Development Goal Indicators.
Thought leaders in the corporate arena have recognized the multifaceted need to utilize metrics beyond fiscal indicators.
PricewaterhouseCoopers defines “total impact” as a “holistic view of social, environmental, fiscal and economic dimensions - the big picture”. Other thought-leading organizations in the public sector, such as the OECD, demonstrate the desire for business leaders to incorporate metrics of success beyond fiscal indicators, exemplified in the OECD’s 2017 workshop, Measuring Business Impacts on People’s Well-being. The B-Corporation movement has created a new legal status for “a new type of company that uses the power of business to solve social and environmental problems”. Focusing on increasing stakeholder value rather than shareholder returns alone, B-Corps are defining their brands by provably aligning their efforts with wider measures of well-being.
Recommendations
A/IS creators should work to better understand and apply well-being metrics in the algorithmic age. Specifically:
A/IS creators should work directly with experts, researchers, and practitioners in well-being concepts and metrics to identify existing metrics, and combinations of indicators, that would support a “triple bottom line” approach to well-being, i.e., one accounting for economic, social, and environmental impacts. However, well-being metrics should only be used with consent, with respect for privacy, and under strict standards for the collection and use of these data.
For A/IS to promote human well-being, the well-being metrics should be chosen in collaboration with the populations most affected by those systems, the A/IS stakeholders, including both the intended end users or beneficiaries and those groups whose lives might be unintentionally transformed by them. This selection process should be iterative, following a process of learning and continual improvement. In addition, “metrics of well-being” should be treated as vehicles for learning and potential mid-course corrections. The effects of A/IS on human well-being should be monitored continuously throughout their life cycles by A/IS creators and stakeholders, and both should be prepared to significantly modify, or even roll back, technology that is shown to reduce well-being as defined by affected populations.
A/IS creators in the business, academic, engineering, or policy arenas are advised to review the additional resources on standards development models and frameworks at the end of this chapter to familiarize themselves with existing indicators relevant to their work.
Further Resources
PricewaterhouseCoopers (PwC). Managing and Measuring Total Impact: A New Language for Business Decisions, 2017.
World Economic Forum. The Inclusive Growth and Development Report 2017. Geneva, Switzerland: World Economic Forum, January 16, 2017.
OECD. OECD Guidelines on Measuring Subjective Well-being, 2013.
National Research Council. Subjective Well-Being: Measuring Happiness, Suffering, and Other Dimensions of Experience. Washington, DC: The National Academies Press, 2013.
Issue: A/IS creators have opportunities to safeguard human well-being by ensuring that A/IS do no harm to Earth's natural systems, or that A/IS contribute to realizing sustainable stewardship, preservation, and/or restoration of those systems. A/IS creators thereby have opportunities to prevent A/IS from contributing to the degradation of Earth's natural systems, and hence to losses in human well-being.
Background
It is unwise, and in truth impossible, to separate the well-being of the planet's natural environment from the well-being of humanity. A range of studies, from the historic to the more recent, demonstrates that ecological collapse endangers human existence. Hence, the concept of well-being should encompass planetary well-being. Moreover, biodiversity and ecological integrity have intrinsic merit beyond their instrumental value to humans.
Technology has a long history of contributing to ecological degradation through its role in expanding the scale of resource extraction and environmental pollution. The immense power needs of network computing, for example, contribute to climate change, water scarcity, soil degradation, species extinction, deforestation, biodiversity loss, and the destruction of ecosystems, which in turn threaten humankind in the long run. These and other costs are often considered externalities and often do not figure into decisions or plans. At the same time, there are many examples, such as photovoltaics and smart grid technology, that present potential ways to restore Earth's ecosystems if undertaken within a systems approach aimed at sustainable economic and environmental development.
Environmental justice research demonstrates that the negative environmental impacts of technology are commonly concentrated on the middle class and working poor, as well as those suffering from abject poverty, fleeing disaster zones, or otherwise lacking the resources to meet their needs. Ecological impact can thus exacerbate the economic and sociological effects of wealth disparities on human well-being by concentrating environmental injustice onto those who are less well off. Moreover, well-being research findings indicate that unfair economic and social inequality has a dampening effect on everyone's well-being, regardless of economic or social class.
In these respects, A/IS are no exception; they can be used in ways that either help or harm the ecological integrity of the planet. It may be fair to say that ecological health and human well-being will increasingly depend upon A/IS creators. It is imperative that A/IS creators and stakeholders find ways to use A/IS to do no harm and to reduce the environmental degradation associated with economic growth, while simultaneously identifying applications to restore the ecological health of the planet and thereby safeguarding the well-being of humans. For A/IS to reduce environmental degradation and promote well-being, it is required not only that A/IS creators act along such lines, but also that a systems approach be taken by all A/IS stakeholders to find solutions that safeguard human well-being, with the understanding that human well-being is inextricable from healthy social, economic, and environmental systems.
Recommendations
A/IS creators need to recognize and prioritize the stewardship of the Earth's natural systems to promote human and ecological well-being. Specifically:
Human well-being should be defined to encompass ecological health, access to nature, safe climate and natural environments, biosystem diversity, and other aspects of a healthy, sustainable natural environment.
A/IS systems should be designed to use, support, and strengthen existing ecological sustainability standards with a certification or similar system, e.g., LEED, Energy Star, or Forest Stewardship Council. This directs automation and machine intelligence to follow the principle of doing no harm and to safeguard environmental, social, and economic systems.
A/IS creators should prioritize doing no harm, whether intended or unintended, to the Earth's natural systems.
A committee should be convened to issue findings on ways in which A/IS can be used by business, NGOs, and governmental agencies to promote stewardship and restoration of natural systems while reducing the harmful impact of economic development on ecological sustainability and environmental justice.
Further Resources
D. Austin and M. Macauley. "Cutting Through Environmental Issues: Technology as a Double-Edged Sword." The Brookings Institution, Dec. 2001 [Online]. Available: https://www.brookings.edu/articles/cutting-through-environmental-issues-technology-as-a-double-edged-sword/. [Accessed Dec. 1, 2018].
J. Newton, Well-being and the Natural Environment: An Overview of the Evidence. August 20, 2007.
P. Dasgupta, Human Well-Being and the Natural Environment. Oxford, U.K.: Oxford University Press, 2001.
R. Haines-Young and M. Potschin. "The Links Between Biodiversity, Ecosystem Services and Human Well-Being," in Ecosystem Ecology: A New Synthesis, D. Raffaelli and C. Frid, Eds. Cambridge, U.K.: Cambridge University Press, 2010.
S. Hart, Capitalism at the Crossroads: Next Generation Business Strategies for a Post-Crisis World. Upper Saddle River, NJ: Pearson Education, 2010.
United Nations Department of Economic and Social Affairs. "Call for New Technologies to Avoid Ecological Destruction." Geneva, Switzerland, July 5, 2011.
Pope Francis. Encyclical Letter Laudato Si' of the Holy Father Francis on Care for Our Common Home. May 24, 2015.
Why Islam.org, Environment and Islam, 2018.
Issue: Human rights law is related to, but distinct from, the pursuit of well-being. Incorporating a human rights framework as an essential basis means that A/IS creators honor existing law as part of their well-being analysis and implementation.
Background
International human rights law has been firmly established for decades in order to protect guarantees and freedoms enshrined in charters such as the United Nations' Universal Declaration of Human Rights and the Council of Europe's Convention on Human Rights. In 2018, the Toronto Declaration on machine learning standards was released, calling on both governments and technology companies to ensure that algorithms respect basic principles of equality and non-discrimination. The Toronto Declaration sets forth an obligation to prevent machine learning systems from discriminating and, in some cases, from otherwise violating existing human rights law.
Well-being initiatives are typically undertaken for the sake of the public interest. However, any metric, including a well-being metric, can be misused to justify human rights violations. The encampment and mistreatment of refugees, or ethnic cleansing, undertaken to preserve a nation's culture (an aspect of well-being) is one example. The imprisonment or assassination of journalists or researchers to ensure the stability of a government is another. The use of well-being metrics to justify human rights violations is an unconscionable perversion of the nature of any well-being metric. It should be noted that these same practices happen today in relation to GDP. For instance, in 2012, according to the International Labour Organization (ILO), approximately 21 million people were victims of forced labor (slavery), representing 9% to 56% of GDP income for various countries. These clear human rights violations, from sex trafficking and the use of children in armies to indentured farming or manufacturing labor, can increase a country's GDP while obviously harming human well-being.
Well-being metrics are designed to measure the efficacy of efforts related to individual and societal flourishing. Well-being as a value complements justice, equality, and freedom. Well-designed application of well-being considerations by A/IS creators should not displace other issues of human rights or ethical methodologies, but rather complement them.
Recommendation
A human rights framework should represent the floor, and not the ceiling, for the standards to which A/IS creators must adhere. Developers and users of well-being metrics should be aware that these metrics will not always adequately address human rights.
Further Resources
United Nations Universal Declaration of Human Rights, 1948.
Council of Europe's Convention on Human Rights, 2018.
International Labour Organization (ILO) Declaration on Fundamental Principles and Rights at Work, 1998.
The regularly updated University of Minnesota Human Rights Library provides a wealth of material on human rights laws, their history, and the organizations engaged in promoting them.
The Oxford Human Rights Hub reports on how and why technologies surrounding artificial intelligence raise human rights issues.
Section 2: Implementing Well-being Metrics for A/IS Creators
A key challenge for A/IS creators in realizing the benefits of well-being metrics is how best to incorporate them into their work. This section explores current best thinking on how to make this happen.
Issue: How can A/IS creators incorporate well-being into their work?
Background
Without practical ways of incorporating well-being metrics to guide, measure, and monitor impact, A/IS will likely fall short of their potential to avoid harm and promote well-being. Incorporating well-being thinking into typical organizational processes of design, prototyping, marketing, etc., suggests a variety of adaptations.
Organizations and A/IS creators should consider clearly defining the type of A/IS product or service that they are developing, including articulating its intended stakeholders and uses. By defining typical uses, possible uses, and finally unacceptable uses of the technology, creators will help to spell out the context of well-being. This can help to identify possible harms and risks given the different possible uses and end users, as well as intended and unintended positive consequences.
Additionally, internal and external stakeholders should be extensively consulted to ensure that impacts are thoroughly considered through an iterative and learning stakeholder engagement process. After consultation, A/IS creators should select appropriate well-being indicators based on the possible scope and impact of their A/IS product or service. These well-being indicators can be drawn from mainstream sources and models and adapted as necessary. They can be used for pre-assessment of the intended user population, projection of possible impacts, and post-assessment. Development of a well-being indicator measurement plan and the relevant data infrastructure will support a robust integration of well-being. A/IS models can also be trained to explicitly include well-being indicators as subgoals.
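The pre-assessment/post-assessment cycle described above can be sketched in code. This is a minimal illustration only: the class shape, indicator names, and scales are invented for the example and are not prescribed by any standard or by this chapter.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a well-being indicator measurement plan:
# record a pre-assessment baseline and a post-assessment follow-up,
# then report the change per indicator.

@dataclass
class IndicatorPlan:
    indicators: list                              # chosen with stakeholders
    baseline: dict = field(default_factory=dict)  # pre-assessment scores
    followup: dict = field(default_factory=dict)  # post-assessment scores

    def record_baseline(self, scores: dict) -> None:
        self.baseline.update(scores)

    def record_followup(self, scores: dict) -> None:
        self.followup.update(scores)

    def deltas(self) -> dict:
        # Negative deltas flag indicators where well-being declined.
        return {
            name: self.followup[name] - self.baseline[name]
            for name in self.indicators
            if name in self.baseline and name in self.followup
        }

plan = IndicatorPlan(indicators=["life_satisfaction", "social_support"])
plan.record_baseline({"life_satisfaction": 6.8, "social_support": 0.82})
plan.record_followup({"life_satisfaction": 7.1, "social_support": 0.79})
changes = plan.deltas()  # a mixed result: one indicator up, one down
```

A mixed result like this one, where one indicator improves while another declines, is exactly the situation the smart-wheelchair example below illustrates: no single delta should automatically decide whether a change is adopted.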
Data and discussions on well-being impacts can be used to suggest improvements and modifications to existing A/IS products and services throughout their life cycles. For example, a team seeking to increase the well-being of people using wheelchairs found that when provided the opportunity to use a smart wheelchair, some users were delighted with the opportunity for more mobility, while others felt it would decrease their opportunities for social contact, increase their sense of isolation, and lead to an overall decrease in their well-being. Therefore, even though a product modification may increase well-being according to one indicator, or for one set of A/IS stakeholders, it does not mean that this modification should automatically be adopted.
Finally, organizational processes can be modified to incorporate the above strategies. Appointment of an organizational lead person for well-being impacts, e.g., a well-being lead, ombudsman, or officer, can help to facilitate this effort.
Recommendation
A/IS creators should adjust their existing development, marketing, and assessment cycles to incorporate well-being concerns throughout their processes. This includes identification of an A/IS lead ombudsperson or officer; identification of stakeholders and end users; determination of possible uses, harm and risk assessment; robust stakeholder engagement; selection of well-being indicators; development of a well-being indicator measurement plan; and ongoing improvement of A/IS products and services throughout the life cycle.
Further Resources
Peter Senge and the Learning Organization (synopsis), Purdue University.
Stakeholder Engagement: A Good Practice Handbook for Companies Doing Business in Emerging Markets. International Finance Corporation, May 2007.
Global Reporting Initiative.
GNH Certification, Centre for Bhutan and GNH Studies, 2018.
J. Helliwell, R. Layard, and J. Sachs, Eds., "The Objective Benefits of Subjective Well-Being," in World Happiness Report 2013. New York: UN Sustainable Development Solutions Network, pp. 54-79, 2013.
Global Happiness and Well-being Policy Report by the Global Happiness Council, 2018.
Issue: How can A/IS creators influence A/IS goals to ensure well-being, and what can A/IS creators learn or borrow from existing models in the well-being and other arenas?
Background
Another way to incorporate considerations of well-being is to include well-being measures in the development, goal setting, and training of the A/IS systems themselves.
Identified metrics of well-being could be formulated as auxiliary objectives of the A/IS. As these auxiliary well-being objectives will be only a subset of the intended goals of the system, the architecture will need to balance multiple objectives. Each of these auxiliary objectives may be expressed as a goal, set of rules, set of values, or as a set of preferences, which can be weighted and combined using established methodologies from intelligent systems engineering.
For example, an educational A/IS tool could not only optimize learning outcomes, but also incorporate measures of student social and emotional education, learning, and thriving.
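One established way to combine a primary objective with auxiliary well-being objectives is weighted scalarization, a standard technique in multiobjective optimization. The sketch below is illustrative only: the objective names and weight values are assumptions for the example, not values drawn from this chapter or any standard.

```python
# Weighted scalarization: collapse several normalized objectives
# (all scored in [0, 1]) into one overall score for the system.

def combined_score(scores: dict, weights: dict) -> float:
    """Weighted average of normalized objective scores."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same objectives")
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in scores) / total

# A hypothetical educational A/IS tool balancing its primary objective
# (learning outcomes) against auxiliary well-being objectives:
scores = {
    "learning_outcomes": 0.9,           # primary objective
    "social_emotional_learning": 0.6,   # auxiliary well-being objective
    "student_thriving": 0.7,            # auxiliary well-being objective
}
weights = {
    "learning_outcomes": 0.5,
    "social_emotional_learning": 0.25,
    "student_thriving": 0.25,
}
overall = combined_score(scores, weights)  # 0.775 with these inputs
```

The weights encode the trade-off between objectives; choosing them is itself a value judgment that, per the recommendations in this section, belongs with stakeholders rather than solely with engineers.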
A/IS-related data relates both to the individual, through personalized algorithms in conjunction with affective sensors measuring and influencing emotion and other aspects of individual well-being, and to society at large, as large data sets representing aggregate individual subjective and objective data. As the exchange of this data becomes more widely available via established tracking methodologies, the data can be aligned within A/IS products and services to increase human well-being. For example, robots like Pepper are equipped to share data regarding their usage and interaction with humans to the cloud. This allows almost instantaneous innovation: once an action is validated as useful for one Pepper robot, all other Pepper units (and ostensibly their owners) benefit as well. As long as this data exchange happens with the predetermined consent of the robots' owners, this real-time innovation model can be emulated for the large-scale aggregation of information relating to existing well-being metrics.
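The consent condition in the Pepper example can be made concrete with a small sketch of consent-gated aggregation: only devices whose owners opted in contribute to the pooled signal. The device and field names here are invented for illustration and do not describe Pepper's actual data pipeline.

```python
# Consent-gated aggregation sketch: pool a usage signal only from
# devices whose owners gave prior consent to data sharing.

def aggregate_opted_in(records: list):
    """Average a usage signal across consenting devices only."""
    consented = [r["signal"] for r in records if r.get("consent")]
    if not consented:
        return None  # nothing may be aggregated without consent
    return sum(consented) / len(consented)

fleet = [
    {"device": "unit-1", "consent": True,  "signal": 0.6},
    {"device": "unit-2", "consent": False, "signal": 0.9},  # excluded
    {"device": "unit-3", "consent": True,  "signal": 0.8},
]
average_signal = aggregate_opted_in(fleet)
```

Making consent a hard precondition of the aggregation function, rather than a post-hoc filter, keeps the "predetermined consent" requirement from being bypassed elsewhere in the pipeline.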
A/IS creators can also help to operationalize well-being metrics by providing stakeholders with reports on the expected or actual outcomes of the A/IS and the values and objectives embedded in the systems. This transparency will help creators, users, and third parties assess the state of well-being produced by A/IS and make improvements to A/IS. In addition, A/IS creators should consider allowing end users to layer on their own preferences, such as allowing users to limit their use of an A/IS product if it leads to increased sustained stress levels, sustained isolation, development of unhealthy habits, or other decreases in well-being.
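A user-layered preference of this kind can be sketched as a simple guard: the user sets thresholds on tracked well-being signals, and use of the product is limited whenever a signal crosses its threshold. The signal names and threshold values below are assumptions for the example.

```python
# User-configurable guard: block use of an A/IS product when any
# tracked well-being signal reaches or exceeds the user's own limit.

def usage_allowed(signals: dict, limits: dict):
    """Return (allowed, triggered), where triggered lists the signals
    that are at or above their user-set limits."""
    triggered = [
        name for name, value in signals.items()
        if name in limits and value >= limits[name]
    ]
    return (not triggered, triggered)

user_limits = {"sustained_stress": 0.8, "isolation_hours_per_day": 6}
allowed, reasons = usage_allowed(
    {"sustained_stress": 0.85, "isolation_hours_per_day": 3},
    user_limits,
)
# Here sustained stress exceeds the user's limit, so use is blocked.
```

Returning the triggered signals, not just a boolean, supports the transparency goal above: the user can see which well-being concern caused the limitation.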
Incorporating well-being goals and metrics into broader organizational values and processes would support the use of well-being metrics, as there would be institutional support. A key factor in industrial, corporate, and societal progress is the cross-dissemination of concepts and models from one industry or field to another. To date, a number of successful concepts and models exist in the fields of sustainability, economics, industrial design and manufacturing, architecture and urban development, and governmental policy. These concepts and models can provide a foundation for building a metrics standard and for the use of well-being metrics by A/IS creators, from conception and design to marketing, product updates, and improvements to the user experience.
Recommendation
Create technical standards for representing goals, metrics, and evaluation guidelines for well-being metrics and their precursors and components within A/IS that include:
Ontologies for representing technological requirements.
A testing framework for validating adherence to well-being metrics and ethical principles, such as the IEEE P7010™ Standards Project for Wellbeing Metric for Autonomous and Intelligent Systems.
The exploration of the models and concepts listed above, as well as others, as a basis for a well-being metrics standard for A/IS creators. (See page 191, Additional Resources: Standards Development Models and Frameworks.)
The development of a well-being metrics standard for A/IS that encompasses an understanding of well-being as holistic and interlinked to social, economic, and ecological systems.
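The testing framework recommended above can be illustrated by a minimal adherence check: verify that a system reports every metric a standard requires, with values inside the declared valid range. The metric names and ranges below are invented for the example; they are not the contents of IEEE P7010, only a sketch of the shape such a check could take.

```python
# Hypothetical metric requirements: metric name -> (min, max) valid value.
REQUIRED_METRICS = {
    "life_satisfaction": (0.0, 10.0),
    "perceived_autonomy": (0.0, 1.0),
}

def check_adherence(reported: dict) -> list:
    """Return a list of violations; an empty list means the reported
    metrics adhere to the (hypothetical) requirements."""
    violations = []
    for name, (low, high) in REQUIRED_METRICS.items():
        if name not in reported:
            violations.append(f"missing metric: {name}")
        elif not (low <= reported[name] <= high):
            violations.append(f"out of range: {name}")
    return violations

# A report that omits one required metric fails the check:
issues = check_adherence({"life_satisfaction": 7.2})
```

An ontology for representing such requirements, as the recommendation suggests, would replace the hard-coded dictionary here with a machine-readable, shareable specification.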
Further Resources
A. F. T. Winfield, C. Blum, and W. Liu. "Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection," in Advances in Autonomous Robotics Systems. Springer, 2014, pp. 85-96.
R. A. Calvo and D. Peters. Positive Computing: Technology for Well-Being and Human Potential. Cambridge, MA: MIT Press, 2014.
Y. Collette and P. Siarry. Multiobjective Optimization: Principles and Case Studies (Decision Engineering Series). Berlin, Germany: Springer, 2004. doi: 10.1007/978-3-662-08883-8.
J. Greene et al. "Embedding Ethical Principles in Collective Decision Support Systems," in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pp. 4147-4151. Palo Alto, CA: AAAI Press, 2016.
L. Li, I. Yevseyeva, V. Basto-Fernandes, H. Trautmann, N. Jing, and M. Emmerich, "Building and Using an Ontology of Preference-Based Multiobjective Evolutionary Algorithms," in 9th International Conference on Evolutionary Multi-Criterion Optimization, Volume 10173 (EMO 2017), H. Trautmann, G. Rudolph, K. Klamroth, O. Schütze, M. Wiecek, Y. Jin, and C. Grimme, Eds. Berlin, Heidelberg: Springer-Verlag, pp. 406-421, 2017.
PositiveSocialImpact: Empowering people, organizations and planet with information and knowledge to make a positive impact on sustainable development, 2017.
D. K. Ura, Bhutan's Gross National Happiness Policy Screening Tool.
Issue: Decision processes for determining relevant well-being indicators through stakeholder deliberations need to be established.
Background
A/IS stakeholder involvement is necessary to determine relevant well-being indicators, for a number of reasons:
"Well-being" will be defined differently by different groups affected by A/IS. The most relevant indicators of well-being may vary according to country, with concerns of wealthy nations being different than those of low- and middle-income countries. Indicators may vary based on geographical region or unique circumstances. The indicators may also be different across social groups, including gender, race, ethnicity, and disability status.
Common indicators of well-being include satisfaction with life, healthy life expectancy, economic standard of living, trust in government, social support, perceived freedom to make life decisions, income equality, access to education, and poverty rates. Applying them in particular settings necessarily requires judgment, to ensure that assessments of well-being are in fact meaningful in context and reflective of the life circumstances of the diverse groups in question.
Not all aspects of well-being are easily quantifiable. The importance of hard-to-quantify aspects of well-being is most likely to become apparent through interaction with those more directly affected by A/IS in specific settings.
Engineers and corporate employees frequently misunderstand stakeholders' needs and expectations, especially when the stakeholders are very different from them in terms of educational and cultural background, social location, and/or economic status.
The processes through which stakeholders become involved in determining relevant well-being indicators will affect the quality of the indicators selected and assessed. Stakeholders should be empowered to define well-being, assess the appropriateness of existing indicators and propose new ones, and highlight context-specific factors that bear on issues of well-being, whether or not the issues have been recognized previously or are amenable to measurement. Interactive, open-ended discussions or deliberations among a wide variety of stakeholders and system designers are more likely to yield robust, widely shared understandings of well-being and how to measure it in context. Closed-ended or over-determined methods for soliciting stakeholder input are likely to miss relevant information that system designers have not anticipated.
A process of stakeholder engagement and deliberation is one model for collective decision-making. Parties in such deliberation come together as equals. Their goal is to set aside their immediate, personal interests in order to think together about the common good. Participants in such engagement and deliberation learn from one another's perspectives and experiences.
In the real world, stakeholder engagement and deliberation may run into the following challenges:
Individuals with more education, power, or higher social status may, intentionally or unintentionally, dominate the discussion, undermining their ability to learn from less powerful participants.
Topics may be preemptively ruled "out of bounds", to the detriment of collective problem-solving. An example would be if, in a deliberation on well-being and A/IS, participants were told that worries about the costs of health insurance were unrelated to A/IS and thus could not be discussed.
Engineers and scientists may claim authority over technical issues and be willing to deliberate only on social issues, obscuring the ways that technical and social issues are intertwined.
Less powerful groups may be unable to keep more powerful ones "at the table" when discussions get contentious, and vice versa.
Participants may not agree on who can legitimately be involved in the conversation. For example, the consensual spirit of deliberation is often used as a justification for excluding activists and others who already hold a position on the issue.
Stakeholder engagement and deliberative processes can be effective when:
Their design is guided by experts or practitioners who are experienced in deliberation models.
Deliberations are facilitated by individuals who are sensitive to issues of power and skilled in mediating deliberation sessions.
Less powerful actors participate with the help of allies who can amplify their voices.
More powerful actors participate with an awareness of their own power and make a commitment to listen with humility, curiosity, and open-mindedness.
Deliberations are convened by institutions or individuals who are trusted and respected by all parties and who hold all actors accountable for participating constructively.
Ethically aligned design of A/IS would be furthered by thoughtfully constructed, context-specific deliberations on well-being and the best indicators for assessing it.
Recommendation
Appoint a lead team or person, "leads", to facilitate stakeholder engagement and to serve as a resource for A/IS creators who use stakeholder-based processes to establish well-being indicators. Specifically:
Leads should solicit and collect lessons learned from specific applications of stakeholder engagement and deliberation in order to continually refine their guidance.
When determining well-being indicators, the leads should enlist the help of experts in public participation and deliberation. With expert guidance, facilitators can provide guidance on how to: take steps to mitigate the effects of unequal power in deliberative processes; incorporate appropriately trained facilitators and coach participants in deliberations; recognize and curb disproportionate influence by more powerful groups; and use techniques to maximize the voices of less powerful groups.
Leads should use their convening power to bring together A/IS creators and stakeholders, including critics of A/IS, for deliberations on well-being indicators, impacts, and other considerations for specific contexts and settings. Leads’ involvement would help bring actors to the table with a balance of power and encourage all actors to remain in conversation until robust, mutually agreeable definitions are found. 领导者应运用其召集力,汇聚人工智能/智能系统(A/IS)开发者与利益相关方(包括 A/IS 批评者),就特定场景下的福祉指标、影响及其他考量因素展开研讨。领导者的参与将有助于平衡各方话语权,促使所有参与者持续对话,直至达成坚实且彼此认可的定义框架。
Further Resources
D. E. Booher and J. E. Innes, Planning with Complexity: An Introduction to Collaborative Rationality for Public Policy. London: Routledge, 2010.
J. A. Leydens and J. C. Lucena, Engineering Justice: Transforming Engineering Education and Practice. Wiley-IEEE Press, 2018.
G. Ottinger, Assessing Community Advisory Panels: A Case Study from Louisiana's Industrial Corridor. Center for Contemporary History and Policy, 2008.
Expert and Citizen Assessment of Science and Technology (ECAST) Network
Issue: There are insufficient mechanisms to foresee and measure negative impacts, and to promote and safeguard positive impacts, of A/IS.
Background
A/IS technologies present great opportunity for positive change in every aspect of society. However, they can, by design or unintentionally, cause harm as well. While it is important to consider and make sense of possible benefits, harms, and trade-offs, it is extremely challenging to foresee all of the relevant, direct, and secondary impacts.
Nevertheless, it is prudent to review case studies of similar products and the impacts they have had on well-being, as well as to consider possible types of impacts that could apply. Issues to consider include:
Economic and labor impacts, including labor displacement, unemployment, and inequality,
Accountability, transparency, and explainability,
Surveillance, privacy, and civil liberties,
Fairness, ethics, and human rights,
Political manipulation, deception, "nudging", and propaganda,
Human physical and psychological health,
Environmental impacts,
Human dignity, autonomy, and human vs. A/IS roles,
Security, cybersecurity, and autonomous weapons, and
Existential risk and superintelligence.
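The impact areas above lend themselves to a structured checklist that a review team could track over the life of a project. A minimal sketch in Python follows; the `ImpactReview` class, the encoding of the area list, and the example note are all illustrative inventions, not part of any standard or library.

```python
# Hypothetical sketch: tracking the well-being impact areas listed above
# during a design review. Area names abbreviate the list in the text.
from dataclasses import dataclass, field

IMPACT_AREAS = [
    "economic and labor impacts",
    "accountability, transparency, and explainability",
    "surveillance, privacy, and civil liberties",
    "fairness, ethics, and human rights",
    "political manipulation and propaganda",
    "human physical and psychological health",
    "environmental impacts",
    "human dignity, autonomy, and human vs. A/IS roles",
    "security, cybersecurity, and autonomous weapons",
    "existential risk and superintelligence",
]

@dataclass
class ImpactReview:
    """Records which impact areas a review has addressed, and how."""
    notes: dict = field(default_factory=dict)

    def record(self, area: str, note: str) -> None:
        if area not in IMPACT_AREAS:
            raise ValueError(f"unknown impact area: {area!r}")
        self.notes[area] = note

    def unaddressed(self) -> list:
        """Areas the review has not yet considered."""
        return [a for a in IMPACT_AREAS if a not in self.notes]

review = ImpactReview()
review.record("environmental impacts", "datacenter energy audit scheduled")
print(len(review.unaddressed()))  # prints 9
```

The value of such a structure is simply that unaddressed areas remain visible until someone explicitly records how they were considered.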
While this is a partial list, it is important to be aware of and reflect on possible and actual cases. For example:
A prominent concern related to A/IS is labor displacement and its economic and social impacts at an individual and a systems level. A/IS technologies designed to replicate human tasks, behavior, or emotion have the potential to increase or decrease human well-being. These systems could complement human work and increase productivity, wages, and leisure time; or they could be used to supplant and displace human workers, leading to unemployment, inequality, and social strife. It is important for A/IS creators to think about possible uses of their technology and whether they want to encourage, or design in restrictions on, particular uses in light of these impacts.
Another example relates to manipulation. Sophisticated manipulative technologies utilizing A/IS can restrict the fundamental freedom of human choice by manipulating humans who consume content without their recognizing the extent of the manipulation. Software platforms are moving from targeting and customizing content to much more powerful and potentially harmful "persuasive computing" that leverages psychological data and methods. While these approaches may be effective in encouraging use of a product, they may come at significant psychological and social costs.
A/IS may deceive and harm humans by posing as humans. As artificial systems become increasingly able to pass the Turing test, in which a human judge tries to distinguish a machine's responses from a human's, there is a significant risk that unscrupulous operators will abuse the technology for unethical commercial or outright criminal purposes. Without action to prevent it, it is highly conceivable that A/IS will be used to deceive humans by pretending to be another human being in a plethora of situations and via multiple mediums.
A potential entry point for exploring these unintended consequences is computational sustainability.
Computational-Sustainability.org defines the term as an "interdisciplinary field that aims to apply techniques from computer science, information science, operations research, applied mathematics, and statistics for balancing environmental, economic, and societal needs for sustainable development". The Institute of Computational Sustainability states that the intent of computational sustainability is to provide "computational models for a sustainable environment, economy, and society". Examples of applied computational sustainability can be seen in the Stanford University Engineering Department's course presentations on computational sustainability. Computational sustainability technologies designed to increase social good could also be tied to existing well-being metrics.
Recommendation
To avoid potential negative, unintended consequences, and to secure and safeguard positive impacts, A/IS creators, end users, and stakeholders should be aware of possible well-being impacts when designing, using, and monitoring A/IS. This includes being aware of existing cases and possible areas of impact, measuring impacts on well-being outcomes, and developing regulations to promote beneficent uses of A/IS. Specifically:
A/IS creators should protect the human dignity, autonomy, rights, and well-being of those directly and indirectly affected by the technology. As part of this effort, it is important to include multiple stakeholders, minorities, marginalized groups, and those often without power or a voice in consultation.
Policymakers, regulators, monitors, and researchers should consider issuing guidance on areas such as A/IS labor and the proper role of humans vs. A/IS in work; transparency, trust, and explainability; manipulation and deception; and other areas that emerge.
Ongoing literature review and analysis should be performed by research and other communities to curate and aggregate information on positive and negative A/IS impacts, along with demonstrated approaches to realize positive ones and ameliorate negative ones.
A/IS creators working toward computational sustainability should integrate well-being concepts, scientific findings, and indicators into current computational sustainability models. They should work with well-being experts, researchers, and practitioners to conduct research and to develop and apply models in A/IS development that prioritize and increase human well-being.
Cross-pollination should be developed between computational sustainability and well-being professionals to ensure integration of well-being into computational sustainability frameworks, and vice versa. Where feasible and reasonable, do the same for conceptual models such as doughnut economics and systems thinking.
Further Resources
AI Safety Research by The Future of Life Institute
D. Helbing, et al., "Will Democracy Survive Big Data and Artificial Intelligence?" Scientific American, February 25, 2017.
J. L. Schenker, "Can We Balance Human Ethics with Artificial Intelligence?" Techonomy, January 23, 2017.
M. Bulman, "EU to Vote on Declaring Robots To Be 'Electronic Persons'." Independent, January 14, 2017.
N. Nevejans, for the European Parliament, "European Civil Law Rules in Robotics." October 2016.
University of Oxford, "Social media manipulation rising globally, new report warns," https://phys.org/news/2018-07-social-media-globally.html, July 20, 2018.
"The AI That Pretends To Be Human," LessWrong blog post, February 2, 2016.
C. Chan, "Monkeys Grieve When Their Robot Friend Dies." Gizmodo, January 11, 2017.
Partnership on AI, "'AI, Labor, and the Economy' Working Group Launches in New York City," https://www.partnershiponai.org/aile-wg-launch/, April 25, 2018.
C. Y. Johnson, "Children can be swayed by robot peer pressure, study says," The Washington Post, August 15, 2018. [Online]. Available: www.WashingtonPost.com. [Accessed 2018].
Further Resources for Computational Sustainability
Computational Sustainability, "Computational Sustainability: Computational Methods for a Sustainable Environment, Economy, and Society," Project Summary.
C. P. Gomes, "Computational Sustainability: Computational Methods for a Sustainable Environment, Economy, and Society," in The Bridge: Linking Engineering and Society. Washington, DC: National Academy of Engineering of the National Academies, 2009.
S. J. Gershman, E. J. Horvitz, and J. B. Tenenbaum, "Computational rationality: A converging paradigm for intelligence in brains, minds, and machines," Science, vol. 349, no. 6245, pp. 273-278, July 2015.
ACM Fairness, Accountability and Transparency Conference
Thanks to the Contributors
We wish to acknowledge all of the people who contributed to this chapter.
The Well-being Committee
John C. Havens (Co-Chair) - Executive Director, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; Executive Director, The Council on Extended Intelligence; Author, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines
Laura Musikanski (Co-Chair) - Executive Director at The Happiness Alliance, home of The Happiness Initiative & Gross National Happiness Index
Liz Alexander - PhD, Futurist
Anna Alexandrova - Senior Lecturer in Philosophy of Science at Cambridge University and Fellow of King's College
Christina Berkley - Executive Coach to leaders in exponential technologies, cutting-edge science, and aerospace
Catalina Butnaru - UK AI Ambassador for global community City.AI, and Founder of HAI, the first methodology for applications of AI in cognitive businesses
Celina Beatriz - Project Director at the Institute for Technology & Society of Rio de Janeiro (ITS Rio)
Peet van Biljon - Founder and CEO at BMNP Strategies LLC, advisor on strategy, innovation, and business transformation; Adjunct faculty at Georgetown University; Business ethics author
Amy Blankson - Author of The Future of Happiness and Founder of TechWell, a research and consulting firm that aims to help organizations create more positive digital cultures
Marc Böhlen - Professor, University at Buffalo, Emerging Practices in Computational Media. www.realtechsupport.org
Rafael A. Calvo - Professor and ARC Future Fellow at The University of Sydney; Co-author of Positive Computing: Technology for Well-Being and Human Potential
Rumman Chowdhury - PhD, Senior Principal, Artificial Intelligence, and Strategic Growth Initiative Responsible AI Lead, Accenture
Dr. Aymee Coget - CEO and Founder of Happiness For HumanKind
Danny W. Devriendt - Managing Director of Mediabrands Dynamic (IPG) in Brussels, and CEO of the Eye of Horus, a global think tank for communication-technology-related topics
Eimear Farrell - Independent expert/consultant on technology and human rights (formerly at OHCHR)
Danit Gal - Project Assistant Professor, Keio University; Chair, IEEE Standard P7009 on the Fail-Safe Design of Autonomous and Semi-Autonomous Systems
Andra Keay - Managing Director of Silicon Valley Robotics, co-founder of Robohub
Dr. Peggy Kern - Senior Lecturer, Centre for Positive Psychology at the University of Melbourne's Graduate School of Education
Michael Lennon - Senior Fellow, Center for Excellence in Public Leadership, George Washington University; Co-Founder, Govpreneur.org; Principal, CAIPP.org (Consortium for Action Intelligence and Positive Performance); Member, Well-being Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems Committee
Alan Mackworth - Professor of Computer Science, University of British Columbia; Former President, AAAI; Co-author of Artificial Intelligence: Foundations of Computational Agents
Richard Mallah - Director of AI Projects, Future of Life Institute
Fabrice Murtin - Senior Economist, OECD Statistics and Data Directorate
Gwen Ottinger - Associate Professor, Center for Science, Technology, and Society and Department of Politics, Drexel University; Director, Fair Tech Collective
Eleonore Pauwels - Research Fellow on AI and Emerging Cybertechnologies, United Nations University (NY), and Director of the AI Lab, Woodrow Wilson International Center for Scholars (DC)
Venerable Tenzin Priyadarshi - Director, Ethics Initiative, MIT Media Lab
Gideon Rosenblatt - Writer at The Vital Edge, focused on work and the human experience in an era of machine intelligence
Daniel Schiff - PhD Student, Georgia Institute of Technology; Chair, Sub-Group for Autonomous and Intelligent Systems Implementation, IEEE P7010™ Standards Project for Well-being Metric for Autonomous and Intelligent Systems
Madalena Sula - Undergraduate student of Electrical and Computer Engineering, University of Thessaly, Greece; former PR Manager of the IEEE Student Branch of the University of Thessaly; Data Scientist & Business Analyst in a multinational company
Vincent Siegerink - Analyst, OECD Statistics and Data Directorate
Andy Townsend - Emerging and Disruptive Technology, PwC UK
Andre Uhl - Research Associate, Director's Office, MIT Media Lab
Ramón Villasante - Founder of PositiveSocialImpact; Software designer, engineer, CTO & CPO in EdTech for sustainable development, social impact, and innovation
Sarah Villeneuve - Policy Analyst; Member, IEEE P7010™ Standards Project for Well-being Metric for Autonomous and Intelligent Systems
For a full listing of all IEEE Global Initiative Members, visit standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf.
For information on disclaimers associated with EAD1e, see How the Document Was Prepared.
Affective Computing
Affect is a core aspect of intelligence. Drives and emotions, such as excitement and depression, are used to coordinate action throughout intelligent life, even in species that lack a nervous system. Emotions are one mechanism that humans evolved to accomplish what needs to be done in the time available with the information at hand, that is, to satisfice. Emotions are not an impediment to rationality; arguably they are integral to rationality in humans. Humans create and respond to both positive and negative emotional influence as they coordinate their actions with other individuals to create societies. Autonomous and intelligent systems (A/IS) are being designed to simulate emotions in their interactions with humans in ways that will alter our societies.
A/IS should be used to help humanity to the greatest extent possible in as many contexts as are appropriate. While A/IS have tremendous potential to effect positive change, there is also potential that artifacts used in society could cause harm, either by amplifying, altering, or even dampening human emotional experience. Even rudimentary versions of synthetic emotions, such as those already in use within nudging systems, have already altered the perception of A/IS by the general public and public policy makers.
This chapter of Ethically Aligned Design addresses issues related to emotions and emotion-like control in interactions between humans and A/IS, and their implications for the design of A/IS. We have put forward recommendations on a variety of topics: considering how affect varies across human cultures; the particular problems of artifacts designed for caring and private relationships; considerations of how intelligent artifacts may be used for "nudging"; how systems can support human flourishing; and appropriate policy interventions for artifacts designed with inbuilt affective systems.
Section 4-Systems Supporting Human Potential
Section 5-Systems with Synthetic Emotions
Section 1-Systems Across Cultures
Issue: Should affective systems interact using the norms for verbal and nonverbal communication consistent with the norms of the society in which they are embedded?
Background
Individuals around the world express intentions differently, including the ways that they make eye contact, use gestures, or interpret silence. These particularities are part of an individual's and a society's culture and are incorporated into their affective systems in order to convey the intended message. To ensure that the emotional systems of autonomous and intelligent systems foster effective communication within a specific culture, an understanding of the norms and values of the community where the affective system will be deployed is essential.
Recommendations
A well-designed affective system will have a set of essential norms, specific to its intended cultural context of use, in its knowledge base. Research has shown that A/IS technologies can use at least five types of cues to simulate social interactions.
These include: physical cues such as simulated facial expressions; psychological cues such as simulated humor or other emotions; use of language; use of social dynamics such as taking turns; and social roles such as acting as a tutor or medical advisor. Further examples are listed below:
a. Well-designed affective systems will use language with affective content carefully and within the contemporaneous expectations of the culture. An example is small talk. Although small talk is useful for establishing a friendly rapport in many communities, some communities see people who use small talk as insincere and hypocritical. Other cultures may consider people who do not use small talk as unfriendly, uncooperative, rude, arrogant, or ignorant. Additionally, speaking with proper vocabulary, grammar, and sentence structure may contrast with the typical informal interactions between individuals. For example, the latest trend, TV show, or other media may significantly influence what is viewed as appropriate vocabulary and interaction style.
b. Well-designed affective systems will recognize that the amount of personal space (proxemics) given by individuals is an important part of culturally specific human interaction. People from varying cultures maintain, often unknowingly, different spatial distances between themselves to establish smooth communication. Crossing these limits may require explicit or implicit consent, which A/IS must learn to negotiate to avoid transmitting unintended messages.
c. Eye contact is an essential component of culturally sensitive social interaction. For some interactions, direct eye contact is needed, but for others it is not essential and may even generate misunderstandings. It is important that A/IS be equipped to recognize the role of eye contact in the development of emotional interaction.
d. Hand gestures and other non-verbal communication are very important for social interaction. Communicative gestures are culturally specific and thus should be used with caution in cross-cultural situations. The specificity of physical communication techniques must be acknowledged in the design of functional affective systems. For instance, although a "thumbs-up" sign is commonly used to indicate approval, in some countries this gesture can be considered an insult.
e. Humans use facial expressions to detect emotions and facilitate communication. Facial expressions may not be universal across cultures, however, and A/IS trained with a dataset from one culture may not be readily usable in another culture. Well-developed A/IS will be able to recognize, analyze, and even display facial expressions essential for culturally specific social interaction.
Engineers should consider the need for cross-cultural use of affective systems. Well-designed systems will have innate options to facilitate flexibility in cultural programming. Mechanisms to enable and disable culturally specific "add-ons" should be considered an essential part of A/IS development.
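One way the enable/disable mechanism for culturally specific "add-ons" could be structured is as overlay dictionaries applied on top of a default norm set. The following Python sketch is purely illustrative; the culture names, norm keys, and values are invented, and a real system would draw them from validated cultural research.

```python
# Hypothetical sketch of culturally specific "add-ons" that can be
# enabled and disabled at deployment time. All norms are invented.

DEFAULT_NORMS = {"eye_contact": "moderate", "small_talk": False, "personal_space_m": 1.0}

CULTURE_ADDONS = {
    "culture_a": {"small_talk": True, "personal_space_m": 0.8},
    "culture_b": {"eye_contact": "minimal", "personal_space_m": 1.2},
}

class AffectiveSystem:
    def __init__(self):
        self.norms = dict(DEFAULT_NORMS)
        self.enabled = set()

    def enable(self, addon: str) -> None:
        """Overlay a culture-specific norm pack onto the current norms."""
        self.norms.update(CULTURE_ADDONS[addon])
        self.enabled.add(addon)

    def disable(self, addon: str) -> None:
        """Remove an add-on by rebuilding norms from defaults plus the rest."""
        self.enabled.discard(addon)
        self.norms = dict(DEFAULT_NORMS)
        for a in self.enabled:
            self.norms.update(CULTURE_ADDONS[a])

system = AffectiveSystem()
system.enable("culture_a")
print(system.norms["small_talk"])   # prints True
system.disable("culture_a")
print(system.norms["small_talk"])   # prints False
```

Rebuilding from defaults on `disable` keeps the behavior predictable when several add-ons overlap on the same norm key.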
Further Resources
G. Cotton, "Gestures to Avoid in Cross-Cultural Business: In Other Words, 'Keep Your Fingers to Yourself!'" Huffington Post, June 13, 2013.
"Paralanguage Across Cultures," Sydney, Australia: Culture Plus Consulting, 2016.
G. Cotton, Say Anything to Anyone, Anywhere: 5 Keys to Successful Cross-Cultural Communication. Hoboken, NJ: Wiley, 2013.
D. Elmer, Cross-Cultural Connections: Stepping Out and Fitting In Around the World. Westmont, IL: InterVarsity Press, 2002.
B. J. Fogg, "Persuasive Technology." Ubiquity, December 2, 2002.
A. McStay, Emotional AI: The Rise of Empathic Media. London: Sage, 2018.
M. Price, "Facial Expressions, Including Fear, May Not Be as Universal as We Thought." Science, October 17, 2016.
Issue: It is presently unknown whether long-term interaction with affective artifacts that lack cultural sensitivity could alter human social interaction.
Background
Systems that do not have cultural knowledge incorporated into their knowledge base may or may not interact effectively with humans for whom emotion and culture are significant. Given that interaction with A/IS may affect individuals and societies, it is imperative that we carefully evaluate mechanisms to promote beneficial affective interaction between humans and A/IS. Humans often use mirroring in order to understand and develop their norms for behavior. Certain machine learning approaches also address improving A/IS interaction with humans through mirroring human behavior. Thus, we must remember that learning via mirroring can go in both directions and that interacting with machines has the potential to impact individuals' norms, as well as societal and cultural norms. If affective artifacts with enhanced, different, or absent cultural sensitivity interact with impressionable humans, this could alter their responses to social and cultural cues and values. The potential for A/IS to exert cultural influence in powerful ways, at scale, is an area of substantial concern.
Recommendations
Collaborative research teams must research the effects of long-term interaction of people with affective systems. This should be done using multiple protocols, disciplinary approaches, and metrics to measure the modifications of habits, norms, and principles, as well as careful evaluation of the downstream cultural and societal impacts.
Parties responsible for deploying affective systems into the lives of individuals or communities should be trained to detect the influence of A/IS, and to utilize mitigation techniques if A/IS effects appear to be harmful. It should always be possible to shut down harmful A/IS.
Further Resources
T. Nishida and C. Faucher, Eds., Modelling Machine Emotions for Realizing Intelligence: Foundations and Applications. Berlin, Germany: Springer-Verlag, 2010.
D. J. Pauleen, et al., "Cultural Bias in Information Systems Research and Practice: Are You Coming from the Same Place I Am?" Communications of the Association for Information Systems, vol. 17, pp. 1-36, 2006.
J. Bielby, "Comparative Philosophies in Intercultural Information Ethics." Confluence: Online Journal of World Philosophies 2, no. 1, pp. 233-253, 2015.
J. Bryson, "Why Robot Nannies Probably Won't Do Much Psychological Damage." A commentary on an article by N. Sharkey and A. Sharkey, "The Crying Shame of Robot Nannies." Interaction Studies, vol. 11, no. 2, pp. 161-190, July 2010.
A. Sharkey and N. Sharkey, "Children, the Elderly, and Interactive Robots." IEEE Robotics & Automation Magazine, vol. 18, no. 1, pp. 32-38, March 2011.
Issue: When affective systems are deployed across cultures, they could adversely affect the cultural, social, or religious values of the community in which they interact.
Background
Some philosophers argue that there are no universal ethical principles and that ethical norms vary from society to society. Regardless of whether universalism or some form of ethical relativism is true, affective systems need to respect the values of the cultures within which they are embedded. How systems should effectively reflect the values of the designers or the users of affective systems is not a settled discussion. There is general agreement that developers of affective systems should acknowledge that the systems should reflect the values of those with whom the systems are interacting. There is a high likelihood that, when spanning different groups, the values imbued by the developer will differ from those of the operator or customer of that affective system, and that end-user values should be actively considered. Differences between affective systems and societal values may generate conflict situations producing undesirable results, e.g., gestures or eye contact being misunderstood as rude or threatening. Thus, affective systems should adapt to reflect the values of the community and individuals where they will operate in order to avoid misunderstanding.
Recommendations 建议
Assuming that well-designed affective systems have a minimum subset of configurable norms incorporated in their knowledge base: 基于设计完善的情感系统在其知识库中已内置最低限度的可配置规范这一前提:
Affective systems should have capabilities to identify differences between the values they are designed with and the differing values of those with whom the systems are interacting. 情感系统应具备识别能力,能够察觉系统预设价值观与交互对象所持价值观之间的差异。
Where appropriate, affective systems will adapt accordingly over time to better fit the norms of their users. As societal values change, there needs to be a means to detect and accommodate such cultural change in affective systems. 在适当情况下,情感系统应随时间推移进行适应性调整,以更好地适应用户规范。随着社会价值观的变化,情感系统需要具备检测并适应这种文化变迁的机制。
Those actions undertaken by an affective system that are most likely to generate an emotional response should be designed to be easily changed in appropriate ways by the user without being easily hacked by actors with malicious intentions. Similar to how software today externalizes the language and vocabulary to be easily changeable based on location, affective systems should externalize some of the core aspects of their actions. 情感系统中最可能引发情绪反应的行为,其设计应确保用户能够以适当方式轻松调整,同时防止被恶意行为者轻易篡改。正如当今软件将语言和词汇外部化以便根据地区轻松修改,情感系统也应将其行为的部分核心要素外部化。
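The externalization pattern recommended above can be sketched briefly. This is an illustrative assumption, not an implementation from the document: the names (`NormProfile`, `profile_digest`) are hypothetical, and a content digest stands in for whatever tamper-evidence mechanism a real system would use, so that user-initiated edits are easy while out-of-band modification is detectable.

```python
import hashlib
import json

# Illustrative default norms, externalized like locale strings so they can
# be adjusted per community without rebuilding the system.
DEFAULT_NORMS = {
    "eye_contact_seconds": 2.0,   # sustained gaze may read as threatening in some cultures
    "greeting_gesture": "wave",
    "personal_space_m": 1.2,
}

def profile_digest(norms: dict) -> str:
    """Tamper-evidence: digest of the canonical JSON encoding of the norms."""
    canonical = json.dumps(norms, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

class NormProfile:
    """A user-adjustable norm set sealed with a digest, so edits that bypass
    the legitimate update path can be detected before norms drive behavior."""
    def __init__(self, norms: dict):
        self.norms = dict(norms)
        self.digest = profile_digest(self.norms)

    def update(self, key: str, value) -> None:
        # Legitimate, user-initiated change: re-seal after editing.
        self.norms[key] = value
        self.digest = profile_digest(self.norms)

    def verify(self) -> bool:
        # False if the norms were modified without going through update().
        return profile_digest(self.norms) == self.digest

profile = NormProfile(DEFAULT_NORMS)
profile.update("eye_contact_seconds", 0.8)   # adapt to a community norm
assert profile.verify()

# Tampering that bypasses update() is detectable:
profile.norms["greeting_gesture"] = "none"
print(profile.verify())  # False
```

A production system would pair this with signed profiles and access control; the sketch only shows the separation between externalized, editable norms and the behavior engine that consumes them.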
Further Resources 延伸阅读
J. Bielby, “Comparative Philosophies in Intercultural Information Ethics.” Confluence: Online Journal of World Philosophies 2, no. 1, pp. 233-253, 2015. J. 比尔比,《跨文化信息伦理中的比较哲学》,《思想汇流:世界哲学在线期刊》第 2 卷第 1 期,第 233-253 页,2015 年。
M. Velasquez, C. Andre, T. Shanks, and M. J. Meyer. “Ethical Relativism.” Markkula Center for Applied Ethics, Santa Clara, CA: Santa Clara University, August 1, 1992. M. 维拉斯奎兹、C. 安德烈、T. 尚克斯与 M.J. 迈耶合著。《伦理相对主义》。应用伦理学马克库拉中心,加州圣克拉拉:圣克拉拉大学,1992 年 8 月 1 日。
Culture reflects the moral values and ethical norms governing how people should behave and interact with others. “Ethics, an Overview.” Boundless Management. 文化反映了规范人们行为及人际互动的道德价值观与伦理准则。《伦理学概述》。无界管理。
T. Donaldson, “Values in Tension: Ethics Away from Home.” Harvard Business Review, September-October 1996. T. 唐纳森,《价值冲突:异乡伦理》。哈佛商业评论。1996 年 9-10 月刊。
Section 2-When Systems Care 第二章 当系统具备关怀能力时
Issue: Are moral and ethical boundaries crossed when the design of affective systems allows them to develop intimate relationships with their users? 问题:当情感系统的设计允许其与用户发展亲密关系时,是否跨越了道德与伦理的边界?
Background 背景
There are many robots in development or production designed to focus on intimate care of children, adults, and the elderly². While robots capable of participating fully in intimate relationships are not currently available, the potential use of such robots routinely captures the attention of the media. It is important that professional communities, policy makers, and the general public participate in the development of guidelines for appropriate use of A/IS in this area. Those guidelines should acknowledge 目前正在开发或生产许多专注于为儿童、成人和老年人提供亲密照护的机器人²。虽然能够完全参与亲密关系的机器人目前尚未问世,但此类机器人的潜在应用经常引起媒体关注。专业团体、政策制定者和公众必须共同参与制定 A/IS 在该领域的应用准则,这些准则应当明确承认
fundamental human rights to highlight potential ethical benefits and risks that may emerge, if and when affective systems interact intimately with users. 基本人权,以凸显情感系统若与用户发生亲密互动时可能产生的伦理效益与风险。
Among the many areas of concern are the representation of care, the embodiment of caring A/IS, and the sensitivity of data generated through intimate and caring relationships with A/IS. The literature suggests that there are some potential benefits to individuals and to society from the incorporation of caring A/IS, along with duly cautionary notes concerning the possibility that these systems could negatively impact human-to-human intimate relations³. 在众多关注领域中,包括护理行为的表征、具身化的关怀型人工智能/自主系统(A/IS),以及通过与 A/IS 建立亲密关怀关系所产生的数据敏感性。文献表明,引入关怀型 A/IS 可能为个人和社会带来某些潜在益处,同时也恰当地警示了这些系统可能对人类间亲密关系产生的负面影响³。
Recommendations 建议
As this technology develops, it is important to monitor research into the development of intimate relationships between A/IS and humans. Research should emphasize any technical and 随着该技术的发展,必须密切监测关于 A/IS 与人类建立亲密关系的研究进展。研究应重点关注那些体现 A/IS 积极治疗性应用的技术与
normative developments that reflect use of A/IS in positive and therapeutic ways while also creating appropriate safeguards to mitigate against uses that contribute to problematic individual or social relationships: 规范性发展,同时建立适当保障措施,以防范可能导致个体或社会关系问题的使用方式:
Intimate systems must not be designed or deployed in ways that contribute to stereotypes, gender or racial inequality, or the exacerbation of human misery. 亲密系统的设计与部署不得助长刻板印象、性别或种族不平等,或加剧人类苦难。
Intimate systems must not be designed to explicitly engage in the psychological manipulation of the users of these systems unless the user is made aware they are being manipulated and consents to this behavior. Any manipulation should be governed through an opt-in system. 除非用户明确知晓并同意接受心理干预,否则亲密系统的设计不得包含对使用者的心理操控行为。任何操控行为都应通过选择加入机制进行管控。
Caring A/IS should be designed to avoid contributing to user isolation from society. 关怀型人工智能系统应避免设计成导致用户与社会隔离的产品。
Designers of affective robotics must publicly acknowledge, for example, within a notice associated with the product, that these systems can have side effects, such as interfering with the relationship dynamics between human partners, causing attachments between the user and the A/IS that are distinct from human partnership. 情感机器人设计者必须公开声明(例如在产品说明中注明),这类系统可能产生副作用,包括干扰人类伴侣间的互动关系,导致用户与人工智能系统形成不同于人类伴侣关系的特殊依恋。
Commercially marketed A/IS for caring applications should not be presented as persons in a legal sense, nor marketed as persons. Rather, their artifactual nature, that is, authored, designed, and deliberately built, should always be made as transparent as possible, at least at the point of sale and in available documentation, as noted in Section 4, Systems Supporting Human Potential. 商业销售的关怀型人工智能与智能系统(A/IS)在法律意义上不得被呈现为具有人格,也不应作为人格主体进行营销。正如第 4 章《支撑人类潜能的系统》所述,其作为人工制品的本质——即人为设计、刻意构建的特性——应始终保持最大限度的透明度,至少在销售环节和随附文档中需明确说明。
Existing laws regarding personal imagery need to be reconsidered in light of caring A/IS. In addition to other ethical considerations, it will also be necessary to establish conformance with local laws and mores in the context of caring A/IS systems. 针对关怀型 A/IS 系统,需要重新审视现行关于个人肖像的法律法规。除其他伦理考量外,还需确保此类系统符合当地法律与道德规范。
Further Resources 延伸阅读
M. Boden, J. Bryson, D. Caldwell, K. Dautenhahn, L. Edwards, S. Kember, P. Newman, V. Parry, G. Pegman, T. Rodden and T. Sorrell, Principles of robotics: regulating robots in the real world. Connection Science, vol. 29, no. 2, pp. 124-129, April 2017. M.博登、J.布莱森、D.考德威尔、K.多滕哈恩、L.爱德华兹、S.肯伯、P.纽曼、V.帕里、G.佩格曼、T.罗登与 T.索雷尔,《机器人原则:现实世界中的机器人监管》,《连接科学》第 29 卷第 2 期,第 124-129 页,2017 年 4 月。
J. J. Bryson, M. E. Diamantis, and T. D. Grant, “Of, For, and By the People: The Legal Lacuna of Synthetic Persons.” Artificial Intelligence & Law, vol. 25, no. 3, pp. 273-291, Sept. 2017. J·J·布莱森、M·E·迪亚曼蒂斯与 T·D·格兰特,《属于人民、为了人民、源于人民:合成人格的法律真空》,《人工智能与法律》第 25 卷第 3 期,第 273-291 页,2017 年 9 月。
M. Scheutz, “The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots,” in Robot Ethics: The Ethical and Social Implications of Robotics, P. Lin, K. Abney, and G. Bekey, Eds., pp. 205. Cambridge, MA: MIT Press, 2011. M·舒茨,《人类与社交机器人单向情感联结的固有风险》,载《机器人伦理:机器人技术的伦理与社会影响》,P·林、K·阿布尼与 G·贝基编,第 205 页,马萨诸塞州剑桥市:麻省理工学院出版社,2011 年。
Section 3-System Manipulation/Nudging/Deception 第三节 系统操控/助推/欺骗
Issue: Should affective systems be designed to nudge people for the user's personal benefit and/or for the benefit of others? 议题:情感系统是否应被设计为基于用户个人利益和/或他人利益而实施行为助推?
Background 背景
Manipulation can be defined as an exercise of influence by one person or group, with the intention to control or modify the actions of another person or group. Thaler and Sunstein (2008) call the tactic of subtly modifying behavior a “nudge”⁴. Nudging mainly operates through the affective elements of a human rational system. Making use of a nudge might be considered appropriate in situations like teaching children, treating drug dependency, and in some healthcare settings. While nudges can be deployed to encourage individuals to express behaviors that have community benefits, a nudge could have unanticipated consequences for people whose backgrounds were not well considered in the development of the nudging system⁵. Likewise, nudges may encourage behaviors with unanticipated long-term effects, whether positive or negative, for the individual and/or society. The effect of A/IS nudging a person, such as potentially eroding or encouraging individual liberty, or expressing behaviors that are for the benefit of others, should be well characterized in the design of A/IS. 操纵可定义为个人或群体为试图控制或改变他人行为而施加的影响。塞勒与桑斯坦(2008)将这种微妙改变行为的技术称为“助推术”⁴。助推主要通过人类理性系统中的情感要素发挥作用。在教育儿童、治疗药物依赖及某些医疗场景中,运用助推技术可能被视为恰当。虽然助推可用于鼓励个体展现有益社群的行为,但若助推系统设计时未充分考虑特定人群背景,则可能产生意外后果⁵。同理,助推可能引发对个人或社会具有未知长期影响的行为(无论积极或消极)。在 A/IS 系统设计中,必须明确界定其助推行为的影响——包括可能削弱或增强个人自由,以及促使利他行为等潜在效应。
Recommendations 建议
Systematic analyses are needed that examine the ethics and behavioral consequences of designing affective systems to nudge human beings prior to deployment. 需要进行系统分析,在情感化系统部署前研究其伦理设计及对人类行为的影响。
The user should be empowered, through an explicit opt-in system and readily available, comprehensible information, to recognize different types of A/IS nudges, regardless of whether they seek to promote beneficial social manipulation or to enhance consumer acceptance of commercial goals. The user should be able to access and check facts behind the nudges and then make a conscious decision to accept or reject a nudge. Nudging systems must be transparent, with a clear chain of accountability that includes human agents: data logging is required so users can know how, why, and by whom they were nudged. 用户应通过明确的主动选择系统和易于获取、易于理解的信息,获得识别各类人工智能/智能系统(A/IS)助推行为的能力——无论这些助推旨在促进有益的社会引导,还是增强消费者对商业目标的接受度。用户应当能够查证助推背后的事实依据,进而做出接受或拒绝助推的自主决策。助推系统必须保持透明,建立包含人类责任主体的明确问责链条:系统需记录数据日志,使用户能知悉自己被助推的方式、原因及实施主体。
A/IS nudging must not become coercive and should always have an opt-in system policy with explicit consent. 人工智能/智能系统助推绝不可具有强制性,必须始终采用明确知情同意原则下的主动选择政策。
Additional protections against unwanted nudging must be put in place for vulnerable populations, such as children, or when informed consent cannot be obtained. Protections against unwanted nudging should be encouraged when nudges alter long-term behavior or when consent alone may not be a sufficient safeguard against coercion or exploitation. 必须为儿童等弱势群体或在无法获得知情同意的情况下,建立额外的保护措施以防止不受欢迎的助推行为。当助推会改变长期行为,或仅靠同意不足以防范胁迫或剥削时,应鼓励采取防止不受欢迎助推的保护措施。
Data gathered which could reveal an individual or groups’ susceptibility to a nudge or their emotional reaction to a nudge should not be collected or distributed without opt-in consent, and should only be retained transparently, with access restrictions in compliance with the highest requirements of data privacy and law. 除非获得明确选择加入的同意,否则不得收集或分发可能揭示个人或群体对助推的易感性或其情绪反应的数据,且此类数据的保留应保持透明,访问权限需符合数据隐私和法律的最高要求。
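The opt-in and accountability requirements above can be sketched as a small consent gate. This is a hypothetical illustration, not the document's design: the names (`NudgeGate`, `audit_log`) are assumptions, but the sketch shows the two properties the recommendations call for, namely that no nudge is delivered without prior opt-in, and that every delivered nudge is logged with how, why, and by whom it was issued.

```python
import datetime

class NudgeGate:
    """Delivers a nudge only with prior opt-in consent, and keeps a
    user-inspectable record of how, why, and by whom the user was nudged."""
    def __init__(self):
        self.consented = set()   # nudge categories the user has opted into
        self.audit_log = []      # transparency: the accountability chain

    def opt_in(self, category: str) -> None:
        self.consented.add(category)

    def opt_out(self, category: str) -> None:
        self.consented.discard(category)

    def nudge(self, category: str, message: str, rationale: str, origin: str) -> bool:
        if category not in self.consented:
            return False         # never coerce: no consent, no nudge
        self.audit_log.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "how": message,
            "why": rationale,
            "by_whom": origin,   # the accountable human agent or organization
        })
        return True

gate = NudgeGate()
gate.opt_in("health")
assert gate.nudge("health", "Time to stretch.", "inactivity > 2h", "wellness-team")
assert not gate.nudge("charity", "Donate today!", "campaign", "marketing")  # not opted in
print(len(gate.audit_log))  # 1
```

In practice the log would be retained under the access restrictions described above; the point of the sketch is only that consent checking and logging sit in front of delivery, not after it.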
Further Resources 延伸阅读
R. Thaler, and C. R. Sunstein, Nudge: Improving Decision about Health, Wealth and Happiness, New Haven, CT: Yale University Press, 2008. R. 塞勒与 C. R. 桑斯坦,《助推:如何做出有关健康、财富与幸福的最佳决策》,纽黑文:耶鲁大学出版社,2008 年。
L. Bovens, “The Ethics of Nudge,” in Preference change: Approaches from Philosophy, Economics and Psychology, T. Grüne-Yanoff and S. O. Hansson, Eds., Berlin, Germany: Springer, 2008 pp. 207-219. L. 博文斯,《助推的伦理》,载于《偏好改变:哲学、经济学与心理学视角》,T. 格吕内-亚诺夫与 S. O. 汉松编,德国柏林:施普林格出版社,2008 年,第 207-219 页。
S. D. Hunt and S. Vitell. “A General Theory of Marketing Ethics.” Journal of Macromarketing, vol.6, no. 1, pp. 5-16, June 1986. S. D. 亨特与 S. 维特尔,《营销伦理的一般理论》,《宏观营销学刊》第 6 卷第 1 期,第 5-16 页,1986 年 6 月。
A. McStay, “Empathic Media and Advertising: Industry, Policy, Legal and Citizen Perspectives (the Case for Intimacy),” Big Data & Society, pp. 1-11, December 2016. A. 麦克斯特伊,《共情媒体与广告:产业、政策、法律与公民视角(亲密性案例)》,《大数据与社会》,第 1-11 页,2016 年 12 月。
J. de Quintana Medina and P. Hermida Justo, “Not All Nudges Are Automatic: Freedom of Choice and Informative Nudges.” Working paper presented to the European Consortium for Political Research, Joint Session of Workshops, 2016 Behavioral Change and Public Policy, Pisa, Italy, 2016. J. 德金塔纳·梅迪纳与 P. 埃尔米达·胡斯托,《并非所有助推都是自动的:选择自由与信息型助推》,提交至欧洲政治研究联盟"行为改变与公共政策"联合研讨会的 working paper,意大利比萨,2016 年。
M. D. White, The Manipulation of Choice. Ethics and Libertarian Paternalism. New York: Palgrave Macmillan, 2013 M.D.怀特,《选择的操纵:伦理与自由家长主义》,纽约:帕尔格雷夫·麦克米伦出版社,2013 年
C.R. Sunstein, The Ethics of Influence: Government in the Age of Behavioral Science. New York: Cambridge, 2016 C.R.桑斯坦,《影响的伦理:行为科学时代的政府治理》,纽约:剑桥大学出版社,2016 年
M. Scheutz, “The Affect Dilemma for Artificial Agents: Should We Develop Affective Artificial Agents?” IEEE Transactions on Affective Computing, vol. 3, no. 4, pp. 424-433, Sept. 2012. M.舒茨,《人工情感体的两难困境:我们是否应该开发具有情感的人工智能体?》,《IEEE 情感计算汇刊》第 3 卷第 4 期,第 424-433 页,2012 年 9 月
A. Grinbaum, R. Chatila, L. Devillers, J. G. Ganascia, C. Tessier and M. Dauchet, “Ethics in Robotics Research: CERNA Recommendations,” IEEE Robotics and Automation Magazine, vol. 24, no. 3, pp. 139-145, Sept. 2017. A.格林鲍姆、R.查蒂拉、L.德维莱尔、J.G.加纳西亚、C.泰西耶与 M.多谢合著,《机器人研究伦理:CERNA 建议书》,《IEEE 机器人与自动化杂志》第 24 卷第 3 期,第 139-145 页,2017 年 9 月
“Designing Moral Technologies: Theoretical, Practical, and Ethical Issues” Conference July 10-15, 2016, Monte Verità, Switzerland. “设计道德技术:理论、实践与伦理问题”会议 2016 年 7 月 10-15 日,瑞士蒙特维里塔
Issue: Governmental entities may potentially use nudging strategies, for example to promote the performance of charitable acts. Does the practice of nudging for the benefit of society, including nudges by affective systems, raise ethical concerns? 议题:政府实体可能采用助推策略,例如促进慈善行为的表现。为社会利益而实施的助推行为(包括情感系统的助推)是否会引起伦理担忧?
Background 背景
A few scholars have noted a potentially controversial practice of the future: allowing a robot or another affective system to nudge a user for the good of society⁶. For instance, if it is possible that a well-designed robot could effectively encourage humans to perform charitable acts, would it be ethically appropriate for the robot to do so? This design possibility illustrates just one behavioral outcome that a robot could potentially elicit from a user. 部分学者注意到未来可能出现的一项争议性实践:允许机器人或其他情感系统为促进社会利益而对用户实施行为干预⁶。例如,如果一个设计精良的机器人能有效鼓励人类实施慈善行为,那么机器人这样做在伦理上是否恰当?这种设计可能性展示了机器人可能从用户身上引发的行为结果之一。
Given the persuasive power that an affective system may have over a user, ethical concerns related to nudging must be examined. This includes the significant potential for misuse. 鉴于情感系统对用户可能具有的说服力,必须审视与行为干预相关的伦理问题。这包括被滥用的重大可能性。
Recommendations 建议
As more and more computing devices subtly and overtly influence human behavior, it is important to draw attention to whether it is ethically appropriate to pursue this type of design pathway in the context of governmental actions. 随着越来越多的计算设备以微妙或明显的方式影响人类行为,必须关注在政府行为背景下追求此类设计路径是否符合伦理道德。
There needs to be transparency regarding who the intended beneficiaries are, and whether any form of deception or manipulation is going to be used to accomplish the intended goal. 需要明确说明目标受益者是谁,以及是否会使用任何形式的欺骗或操纵手段来实现预期目标。
Further Resources 更多资源
J. Borenstein and R. Arkin, “Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being.” Science and Engineering Ethics, vol. 22, no. 1, pp. 31-46, Feb. 2016. J·博伦斯坦与 R·阿金,《机器人助推:构建更具社会正义性人类的伦理工程》,《科学与工程伦理》第 22 卷第 1 期,第 31-46 页,2016 年 2 月。
J. Borenstein and R. Arkin, “Nudging for Good: Robots and the Ethical Appropriateness of Nurturing Empathy and Charitable Behavior.” AI and Society, vol. 32, no. 4, pp. 499-507, Nov. 2016. J·博伦斯坦与 R·阿金,《向善助推:机器人培育同理心与慈善行为的伦理适切性》,《人工智能与社会》第 32 卷第 4 期,第 499-507 页,2016 年 11 月。
Issue: Will A/IS nudging systems that are not fully relevant to the sociotechnical context in which they are operating cause behaviors with adverse unintended consequences? 核心问题:若人工智能/智能系统(A/IS)助推系统与其运行的社会技术情境不完全适配,是否会导致产生不良意外后果的行为?
Background 背景
A well-designed nudging or suggestion system will have sophisticated enough technical capabilities for recognizing the context in which it is applying nudging actions. Assessment of the context requires perception of the scope or impact of the actions to be taken, the consequences of incorrectly or incompletely 设计完善的助推或建议系统应具备足够精密的技术能力,以识别其施加助推行为的具体情境。情境评估需涵盖对拟采取行动的影响范围或程度的感知,以及对错误或不完全行动的后果预判。
applied nudges, and acknowledgement of the uncertainties that may stem from the long-term consequences of a nudge⁷. 应用助推措施时需承认其可能带来的长期后果不确定性⁷。
Recommendations 建议
Consideration should be given to the development of a system of technical licensing (“permits”) or other certification from governments or non-governmental organizations (NGOs) that can aid users in understanding the nudges from A/IS in their lives. 应考虑建立技术许可(“许可证”)制度或由政府及非政府组织(NGOs)提供的其他认证体系,以帮助用户理解生活中来自 A/IS 的助推措施。
User autonomy is a key and essential consideration that must be taken into account when addressing whether affective systems should be permitted to nudge human beings. 在讨论是否应允许情感系统对人类进行助推时,用户自主权是必须考量的关键要素。
Design features of an affective system that nudges human beings should include the ability to accurately distinguish between users, including detecting characteristics such as whether the user is an adult or a child. 能够引导人类行为的情感系统,其设计特征应包括准确区分用户的能力,包括检测用户是否为成人或儿童等特征。
Affective systems with nudging strategies should incorporate a design system of evaluation, monitoring, and control for unintended consequences. 采用行为引导策略的情感系统应建立评估、监测及控制意外后果的设计体系。
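Two of the design features above, distinguishing user classes before nudging and monitoring for unintended consequences, can be sketched together. This is a minimal illustration under stated assumptions: the vulnerability classes, thresholds, and names (`may_nudge`, `OutcomeMonitor`) are hypothetical, and a real classifier for adult-versus-child detection is out of scope here.

```python
# User classes for whom extra protections apply (see the recommendations
# on vulnerable populations and informed consent).
VULNERABLE = {"child", "no_informed_consent"}

def may_nudge(user_class: str, has_guardian_consent: bool = False) -> bool:
    """Gate nudging on user class: vulnerable users require guardian consent."""
    if user_class in VULNERABLE:
        return has_guardian_consent
    return True

class OutcomeMonitor:
    """Flags a nudge category for human review once adverse outcomes
    reach a configured threshold (evaluation/monitoring/control loop)."""
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.adverse_counts = {}

    def record(self, category: str, adverse: bool) -> None:
        if adverse:
            self.adverse_counts[category] = self.adverse_counts.get(category, 0) + 1

    def needs_review(self, category: str) -> bool:
        return self.adverse_counts.get(category, 0) >= self.threshold

assert may_nudge("adult")
assert not may_nudge("child")
assert may_nudge("child", has_guardian_consent=True)

monitor = OutcomeMonitor(threshold=2)
monitor.record("reminders", adverse=True)
monitor.record("reminders", adverse=True)
print(monitor.needs_review("reminders"))  # True
```

The sketch separates the before-the-nudge safeguard (gating) from the after-the-nudge safeguard (monitoring), which is the structure the recommendations imply.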
Further Resources 延伸阅读
J. Borenstein and R. Arkin, “Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being.” Science and Engineering Ethics, vol. 22, no. 1, pp. 31-46, 2016. J. Borenstein 与 R. Arkin,《机器人行为引导:塑造更公平社会人类的伦理探讨》,载《科学与工程伦理》第 22 卷第 1 期,第 31-46 页,2016 年。
R. C. Arkin, M. Fujita, T. Takagi, and R. Hasegawa, “An Ethological and Emotional Basis for Human-Robot Interaction.” Robotics and Autonomous Systems, vol. 42, no. 3-4, pp. 191-201, March 2003. R. C. Arkin、M. Fujita、T. Takagi 和 R. Hasegawa 合著的《人类-机器人交互的动物行为学与情感基础》,发表于《机器人与自主系统》第 42 卷第 3-4 期,第 191-201 页,2003 年 3 月。
S. Omohundro “Autonomous Technology and the Greater Human Good.” Journal of Experimental and Theoretical Artificial Intelligence, vol. 26, no. 3, pp. 303-315, 2014. S. Omohundro 所著《自主技术与人类更大福祉》,发表于《实验与理论人工智能杂志》第 26 卷第 3 期,第 303-315 页,2014 年。
Issue: When, if ever, and under which circumstances, is deception performed by affective systems acceptable? 议题:情感系统实施的欺骗行为在何时(如有)、何种情况下是可接受的?
Background 背景
Deception is commonplace in everyday human-human interaction. According to Kantian ethics, it is never ethically appropriate to lie, while utilitarian frameworks indicate that it can be acceptable when deception increases overall happiness. Given the diversity of views on ethics and the appropriateness of deception, should affective systems be designed to deceive? Does the non-consensual nature of deception restrict the use of A/IS in contexts in which deception may be required? 欺骗在人类日常交往中司空见惯。康德伦理学认为说谎在道德上绝不可取,而功利主义框架则表明当欺骗能提升整体幸福感时是可接受的。鉴于道德观念与欺骗适当性认知的多样性,情感系统是否应被设计为具有欺骗功能?欺骗的非自愿性特质是否会限制自主/智能系统在可能需要欺骗的场景中的应用?
Recommendations 建议
It is necessary to develop recommendations regarding the acceptability of deception performed by A/IS, specifically with respect to when and under which circumstances, if any, it is appropriate. 有必要制定关于人工智能/智能系统(A/IS)实施欺骗行为的可接受性准则,具体明确在何种情况下(如存在适用情形)此类行为是恰当的。
In general, deception may be acceptable in an affective agent when it is used for the benefit of the person being deceived, not for the agent itself. For example, deception might be necessary in search and rescue operations or for elder- or child-care. 总体而言,当情感智能体实施的欺骗行为是为了被欺骗者的利益而非智能体自身时,这种欺骗可能是可接受的。例如,在搜救行动或老人/儿童护理场景中,欺骗行为可能是必要的。
For deception to be used under any circumstance, a logical and reasonable justification must be provided by the designer, and this rationale should be certified by an external authority, such as a licensing body or regulatory agency. 在任何情况下使用欺骗手段时,设计者都必须提供合乎逻辑且合理的正当理由,且该论证应当通过外部权威机构(如许可机构或监管部门)的认证。
Further Resources 延伸阅读资料
R. C. Arkin, “Robots That Need to Mislead: Biologically-inspired Machine Deception.” IEEE Intelligent Systems 27, no. 6, pp. 60-75, 2012. R. C. Arkin,《需要欺骗的机器人:仿生机器欺骗技术》,载《IEEE 智能系统》第 27 卷第 6 期,第 60-75 页,2012 年。
J. Shim and R. C. Arkin, “Other-Oriented Robot Deception: How Can a Robot’s Deceptive Feedback Help Humans in HRI?” Eighth International Conference on Social Robotics (ICSR 2016), Kansas City, MO, November 2016. J. Shim 与 R. C. Arkin,《他者导向的机器人欺骗:机器人欺骗性反馈如何助力人机交互?》,第八届国际社交机器人会议(ICSR 2016),美国密苏里州堪萨斯城,2016 年 11 月。
J. Shim and R. C. Arkin, “The Benefits of Robot Deception in Search and Rescue: Computational Approach for Deceptive Action Selection via Case-based Reasoning.” 2015 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR 2015), West Lafayette, IN, October 2015. J. Shim 与 R. C. Arkin,《搜救行动中机器人欺骗的效益:基于案例推理的欺骗行为选择计算方法》,2015 年 IEEE 安全、安保及救援机器人国际研讨会(SSRR 2015),美国印第安纳州西拉法叶,2015 年 10 月。
J. Shim and R. C. Arkin, “A Taxonomy of Robot Deception and its Benefits in HRI.” Proceedings of IEEE Systems, Man and Cybernetics Conference, Manchester England, October 2013. J. Shim 和 R. C. Arkin,《机器人欺骗分类学及其在人机交互中的益处》。2013 年 10 月于英国曼彻斯特举行的 IEEE 系统、人与控制论会议论文集。
Section 4-Systems Supporting Human Potential 第四章 支持人类潜能的系统
Issue: Will extensive use of A/IS in society make our organizations more brittle by reducing human autonomy within organizations, and by replacing creative, affective, empathetic components of management chains? 核心问题:人工智能/智能系统在社会中的广泛应用,是否会通过削弱组织内的人类自主权、取代管理链条中具有创造性、情感性和共情性的组成部分,从而使我们的组织变得更加脆弱?
Background 背景
If human workers are replaced by A/IS, the possibility of corporations, governments, employees, and customers discovering new equilibria outside the scope of what the organizations’ past leadership originally foresaw may be unduly limited. A lack of empathy based on shared needs, abilities, and disadvantages between organizations and customers causes disequilibria between individuals and the corporations and governments that exist to serve them. Opportunities for useful innovation may therefore be lost through automation. Collaboration requires enough commonality of collaborating intelligences to create empathy, the capacity to model the other’s goals based on one’s own. 若人工智能/智能系统(A/IS)取代人类劳动者,企业、政府、雇员和消费者在组织过往领导层原有预见范围之外发现新平衡点的可能性可能会受到不当限制。由于组织与消费者之间缺乏基于共同需求、能力和劣势的共情,导致个体与为其服务的公司及政府之间出现失衡。因此,自动化可能导致有益创新机会的流失。有效协作需要合作智能体之间具备足够的共性以建立共情——即基于自身目标模拟对方目标的能力。
According to scientists within several fields, autonomy is a psychological need. Without it, humans fail to thrive, create, and innovate. 多个领域的科学家研究表明,自主性是人类的基本心理需求。缺乏自主性将导致人类难以实现成长、创造与创新。
Ethically aligned design should support, not hinder, human autonomy or its expression. 符合伦理的设计应当支持而非阻碍人类自主性及其表达。
Recommendations 建议方案
It is important that human workers’ interaction with other workers not always be intermediated by affective systems (or other technology) which may filter out autonomy, innovation, and communication. 人类工作者之间的互动不应总是由情感系统(或其他技术)作为中介,因为这些系统可能会过滤掉自主性、创新和沟通。
Human points of contact should remain available to customers and other organizations when using A/IS. 在使用 A/IS 时,应确保客户和其他组织能够获得人工服务触点。
Affective systems should be designed to support human autonomy, sense of competence, and meaningful relationships as these are necessary to support a flourishing life. 情感系统的设计应支持人类的自主性、胜任感和有意义的关系,因为这些是支撑繁荣生活的必要要素。
Even where A/IS are less expensive, more predictable, and easier to control than human employees, a core network of human employees should be maintained at every level of decision-making in order to ensure preservation of human autonomy, communication, and innovation. 即使人工智能/信息系统比人类员工成本更低、更可预测且更易控制,也应在每个决策层级保留核心的人类员工网络,以确保人类自主性、沟通和创新能力得以延续。
Management and organizational theorists should consider appropriate use of affective and autonomous systems to enhance their business models and the efficacy of their workforce within the limits of the preservation of human autonomy. 管理与组织理论学者应当考虑在保障人类自主权的前提下,合理运用情感化与自主化系统来优化商业模式并提升员工效能。
Further Resources 延伸阅读
J. J. Bryson, “Artificial Intelligence and Pro-Social Behavior,” in Collective Agency and Cooperation in Natural and Artificial Systems, C. Misselhorn, Ed., pp. 281-306, Springer, 2015. J. J. Bryson,《人工智能与亲社会行为》,收录于《自然与人工系统中的集体代理与协作》,C. Misselhorn 编,第 281-306 页,Springer 出版社,2015 年。
D. Peters, R. A. Calvo, and R. M. Ryan, “Designing for Motivation, Engagement and Wellbeing in Digital Experience,” Frontiers in Psychology, vol. 9, p. 797, 2018. D. Peters, R.A. Calvo 与 R.M. Ryan,《数字体验中的动机、参与及幸福感设计》,《心理学前沿》,第 9 卷,第 797 页,2018 年。
Issue: Does the increased access to personal information about other members of our society, facilitated by A/IS, alter the human affective experience? Does this access potentially lead to a change in human autonomy? 问题:人工智能/信息系统(A/IS)促进了社会成员间个人信息的获取,这是否会改变人类的情感体验?这种获取是否可能导致人类自主性的变化?
Background 背景
Theoretical biology tells us that we should expect increased communication, which A/IS facilitate, to increase group-level investment⁸. Extensive use of A/IS could change the expression of individual autonomy and in its place increase group-based identities. Examples of this sort of social alteration may include: 理论生物学表明,我们应当预期由 A/IS 推动的交流增强会提升群体层面的投入⁸。A/IS 的广泛使用可能改变个体自主性的表达,并在此过程中增强基于群体的身份认同。此类社会变革的案例可能包括:
Changes in the scope of monitoring and control of children’s lives by parents. 父母对子女生活监控与管理范围的改变。
Decreased willingness to express opinions for fear of surveillance or long-term consequences of past expressions being used in changed temporal contexts. 因担心受到监视或过去言论在时过境迁后被利用,而降低表达意见的意愿。
Utilization of customers or other end users to perform basic corporate business processes such as data entry as a barter for lower prices or access, resulting potentially in reduced tax revenues. 利用客户或其他终端用户执行数据录入等基础企业业务流程,以换取更低价格或访问权限,这种做法可能导致税收收入减少。
Changes to the expression of individual autonomy could alter the diversity, creativity, and cohesiveness of a society. It may also alter perceptions of privacy and security, and social and legal liability for autonomous expressions. 个体自主表达方式的改变可能影响社会的多样性、创造力和凝聚力,同时可能改变人们对隐私与安全的认知,以及对自主表达所承担的社会与法律责任的理解。
Recommendations 建议
Organizations, including governments, must put a high value on individuals’ privacy and autonomy, including restricting the amount and age of data held about individuals specifically. 包括政府在内的各类组织必须高度重视个人隐私与自主权,特别是限制所持有的个人数据数量及数据年限。
Education in all forms should encourage individuation, the preservation of autonomy, and knowledge of the appropriate uses of and limits to A/IS⁹. 各种形式的教育都应促进个体化、保持自主性,并让学习者了解 A/IS 的合理使用范围与限制边界⁹。
Further Resources 延伸阅读资源
J. J. Bryson, “Artificial Intelligence and Pro-Social Behavior,” in Collective Agency and Cooperation in Natural and Artificial Systems, C. Misselhorn, Ed., pp. 281-306, Springer, 2015. J·J·布莱森,《人工智能与亲社会行为》,收录于《自然与人工系统中的集体能动性与合作》,C·米塞尔霍恩编,第 281-306 页,斯普林格出版社,2015 年。
M. Cooke, “A Space of One’s Own: Autonomy, Privacy, Liberty,” Philosophy & Social Criticism, Vol. 25, no. 1, pp. 22-53, 1999. M·库克,《属于自己的空间:自主性、隐私与自由》,《哲学与社会批判》第 25 卷第 1 期,第 22-53 页,1999 年。
D. Peters, R. A. Calvo, and R. M. Ryan, “Designing for Motivation, Engagement and Wellbeing in Digital Experience,” Frontiers in Psychology, vol. 9, p. 797, 2018. D. Peters, R.A. Calvo, R.M. Ryan,《数字体验中的动机、参与与幸福感设计》,《心理学前沿》,第 9 卷,第 797 页,2018 年。
J. Roughgarden, M. Oishi, and E. Akçay, “Reproductive Social Behavior: Cooperative Games to Replace Sexual Selection,” Science, vol. 311, no. 5763, pp. 965-969, 2006. J. Roughgarden, M. Oishi 和 E. Akçay,《生殖社会行为:替代性选择的合作博弈》,《科学》,第 311 卷,第 5763 期,第 965-969 页,2006 年。
Issue: Will use of A/IS adversely affect human psychological and emotional well-being in ways not otherwise foreseen? 问题:人工智能/智能系统的使用是否会对人类心理和情感健康产生其他不可预见的不良影响?
Background 背景
A/IS may be given unprecedented access to human culture and human spaces-both physical and intellectual. A/IS may communicate via natural language, may move with humanlike form, and may express humanlike identity, but they are not, and should not be regarded as, human. Incorporation of A/IS into daily life may affect human well-being in ways not yet anticipated. Incorporation of A/IS may alter patterns of trust and capability assessment between humans, and between humans and A/IS. 人工智能与智能系统(A/IS)可能获得前所未有的途径接触人类文化及人类空间——包括物理与智力层面。这类系统能够通过自然语言交流,以类人形态移动,并展现类人身份特征,但其本质并非人类,也不应被视为人类。A/IS 融入日常生活可能以尚未预见的方式影响人类福祉,并可能改变人与人之间、以及人与 A/IS 之间的信任模式与能力评估机制。
Recommendations 建议
Vigilance and robust, interdisciplinary, on-going research on identifying situations where A/IS affect human well-being, both positively and negatively, is necessary. Evidence of correlations between the increased use of A/IS and positive or negative individual or social outcomes must be explored. 必须保持警惕,并通过持续、跨学科的深入研究来识别 A/IS 对人类福祉产生积极或消极影响的各类情境。需要系统探究 A/IS 使用率提升与个人或社会积极/消极结果之间的相关性证据。
Design restrictions should be placed on the systems themselves to avoid machine decisions that may alter a person’s life in unknown ways. Explanations should be available on demand in systems that may affect human well-being. 应对系统本身施加设计限制,以避免机器决策可能以未知方式改变个人生活。在可能影响人类福祉的系统中,应确保按需提供决策解释。
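The "explanations on demand" recommendation above can be sketched as a decision record. This is a hypothetical illustration, not a prescribed mechanism: the names (`DecisionRecorder`, `explain`) are assumptions, and the point is only that a system affecting well-being stores each decision's inputs and rationale at decision time, so an explanation can be produced later without reconstruction.

```python
class DecisionRecorder:
    """Stores every decision with the inputs and rationale it was based on,
    so an explanation is available on demand."""
    def __init__(self):
        self._records = {}
        self._next_id = 1

    def decide(self, action: str, inputs: dict, rationale: str) -> int:
        """Record the decision alongside the evidence behind it; return its id."""
        decision_id = self._next_id
        self._next_id += 1
        self._records[decision_id] = {
            "action": action,
            "inputs": dict(inputs),   # snapshot, so later state changes don't alter the record
            "rationale": rationale,
        }
        return decision_id

    def explain(self, decision_id: int) -> str:
        """On-demand, human-readable explanation of a past decision."""
        r = self._records[decision_id]
        return f"Action '{r['action']}' was taken because {r['rationale']} (inputs: {r['inputs']})."

rec = DecisionRecorder()
did = rec.decide("suggest_break", {"hours_active": 5}, "sustained activity exceeded 4 hours")
print(rec.explain(did))
```

Logging the rationale at decision time, rather than generating a post-hoc justification, is what makes the explanation auditable by the responsible parties mentioned elsewhere in this chapter.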
Further Resources 扩展阅读
K. Kamewari, M. Kato, T. Kanda, H. Ishiguro and K. Hiraki. “Six-and-a-Half-Month-Old Children Positively Attribute Goals to Human Action and to Humanoid-Robot Motion,” Cognitive Development, vol. 20, no. 2, pp. 303-320, 2005. 龟割和宏、加藤美穗子、神田崇行、石黑浩和平木幸子。《六月龄婴儿对人类行为与人形机器人动作的目标正向归因》,《认知发展》第 20 卷第 2 期,第 303-320 页,2005 年。
R.A. Calvo and D. Peters, Positive Computing: Technology for Wellbeing and Human Potential. Cambridge, MA: MIT Press, 2014. 拉斐尔·A·卡尔沃与多里安·彼得斯,《积极计算:促进福祉与人类潜能的技术》,马萨诸塞州剑桥市:麻省理工学院出版社,2014 年。
Section 5-Systems with Synthetic Emotions 第 5 节 具有合成情感的系统
Issue: Will deployment of synthetic emotions into affective systems increase the accessibility of A/IS? Will increased accessibility prompt unforeseen patterns of identification with A/IS? 问题:在情感系统中部署合成情感是否会增加人工智能/智能系统(A/IS)的可及性?这种可及性的提升是否会引发人类对 A/IS 产生不可预见的认同模式?
Background 背景
Deliberately constructed emotions are designed to create empathy between humans and artifacts, which may be useful or even essential for human-A/IS collaboration. Synthetic emotions are essential for humans to collaborate with the A/IS but can also lead to failure to recognize that synthetic emotions can be compartmentalized and even entirely removed. Potential consequences for humans include different patterns of bonding, guilt, and trust, whether between the human and A/IS or between other humans. There is no coherent sense in which A/IS can be made to suffer emotional loss, because any such affect, even if possible, could be avoided at the stage of engineering, or reengineered. As such, it is not possible to allocate moral agency or responsibility in the senses that have been developed for human emotional bonding and thus sociality. 人工构建的情感旨在建立人类与人工制品之间的共情,这对于人机协作可能具有实用价值甚至至关重要。合成情感是人类与 A/IS 协作的必要条件,但也可能导致人们无法认识到合成情感可以被区隔甚至完全移除。对人类可能产生的影响包括形成不同的情感联结模式、内疚感和信任关系——无论是人机之间还是人际之间。从任何连贯的意义上说,A/IS 都不可能遭受情感损失,因为任何此类情感影响(即便可能存在)都可以在工程阶段规避或重新设计。因此,我们无法按照人类情感联结及社会性发展出的道德框架,来赋予 A/IS 道德主体性或责任。
Recommendations 建议
Commercially marketed A/IS should not be persons in a legal sense, nor marketed as persons. Rather their artifactual (authored, designed, and built deliberately) nature should always be made as transparent as possible, at least at point of sale and in available documentation. 商业销售的自主智能系统(A/IS)不应在法律意义上被视作人或作为人进行营销。相反,其人工制品属性(经过刻意创作、设计和建造的本质)应始终保持最大限度的透明度,至少在销售时点和随附文档中需明确体现。
Some systems will, due to their application, require opaqueness in some contexts, e.g., emotional therapy. Transparency in such systems should be available to inspection by responsible parties but may be withdrawn for operational needs. 部分系统因其应用场景(如情感治疗)需要在特定情境下保持不透明性。此类系统的透明度机制应接受责任方的审查监督,但可基于实际运行需求进行必要调整。
Further Resources 延伸阅读资源
R. C. Arkin, P. Ulam and A. R. Wagner, “Moral Decision-making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust and Deception,” Proceedings of the IEEE, vol. 100, no. 3, pp. 571-589, 2012. R. C. Arkin, P. Ulam 和 A. R. Wagner,《自主系统中的道德决策:执行、道德情感、尊严、信任与欺骗》,《IEEE 会刊》,第 100 卷,第 3 期,第 571-589 页,2012 年。
R. Arkin, M. Fujita, T. Takagi and R. Hasegawa. “An Ethological and Emotional Basis for Human-Robot Interaction,” Robotics and Autonomous Systems, vol.42, no. 3-4, pp.191-201, 2003. R. Arkin, M. Fujita, T. Takagi 和 R. Hasegawa,《人机交互的生态学与情感基础》,《机器人与自主系统》,第 42 卷,第 3-4 期,第 191-201 页,2003 年。
R. C. Arkin, “Moving up the Food Chain: Motivation and Emotion in Behavior-based Robots,” in Who Needs Emotions: The Brain Meets the Robot, J. Fellous and M. Arbib., Eds., New York: Oxford University Press, 2005. R. C. Arkin,《向食物链上游移动:基于行为机器人的动机与情感》,载《谁需要情感:大脑遇见机器人》,J. Fellous 和 M. Arbib 编,纽约:牛津大学出版社,2005 年。
Affective Computing 情感计算
M. Boden, J. Bryson, D. Caldwell, et al. “Principles of Robotics: Regulating Robots in the Real World.” Connection Science, vol. 29, no. 2, pp. 124-129, 2017. M. 博登、J. 布莱森、D. 考德威尔等。《机器人原则:现实世界中的机器人监管》,《连接科学》第 29 卷第 2 期,第 124-129 页,2017 年。
J. J Bryson, M. E. Diamantis and T. D. Grant. “Of, For, and By the People: The Legal Lacuna of Synthetic Persons,” Artificial Intelligence & Law, vol. 25, no. 3, pp. 273-291, Sept. 2017. J. J. 布莱森、M. E. 迪亚曼蒂斯与 T. D. 格兰特。《属于人民、为了人民、由人民主宰:合成人的法律真空》,《人工智能与法律》第 25 卷第 3 期,第 273-291 页,2017 年 9 月。
J. Novikova, and L. Watts, “Towards Artificial Emotions to Assist Social Coordination in HRI,” International Journal of Social Robotics, vol. 7, no. 1, pp. 77-88, 2015. J. 诺维科娃与 L. 瓦茨。《面向人机交互中辅助社会协调的人工情感》,《国际社交机器人杂志》第 7 卷第 1 期,第 77-88 页,2015 年。
M. Scheutz, “The Affect Dilemma for Artificial Agents: Should We Develop Affective Artificial Agents?” IEEE Transactions on Affective Computing, vol. 3, no. 4, pp. 424-433, 2012. M. 舒茨。《人工情感体的情感困境:我们是否应该开发具有情感的人工智能体?》,《IEEE 情感计算汇刊》第 3 卷第 4 期,第 424-433 页,2012 年。
A. Sharkey and N. Sharkey. “Children, the Elderly, and Interactive Robots.” IEEE Robotics & Automation Magazine, vol. 18, no. 1, pp. 32-38, 2011. A. Sharkey 和 N. Sharkey. 《儿童、老人与交互式机器人》. IEEE 机器人与自动化杂志, 第 18 卷第 1 期, 第 32-38 页, 2011 年.
Thanks to the Contributors 致谢贡献者
We wish to acknowledge all of the people who contributed to this chapter. 我们谨向所有为本章节做出贡献的人士致以谢意。
The Affective Computing Committee 情感计算委员会
Ronald C. Arkin (Founding Co-Chair) Regents’ Professor & Director of the Mobile Robot Laboratory; College of Computing Georgia Institute of Technology 罗纳德·C·阿金(创始联合主席)佐治亚理工学院计算机学院移动机器人实验室主任、校董教授
Joanna J. Bryson (Co-Chair) - Reader (Associate Professor), University of Bath, Intelligent Systems Research Group, Department of Computer Science 乔安娜·J·布莱森(联合主席)英国巴斯大学计算机科学系智能系统研究组准教授
John P. Sullins (Co-Chair) - Professor of Philosophy, Chair of the Center for Ethics Law and Society (CELS), Sonoma State University 约翰·P·苏林斯(联合主席)索诺马州立大学哲学教授、伦理法律与社会研究中心主任
Genevieve Bell - Intel Senior Fellow Vice President, Corporate Strategy Office Corporate Sensing and Insights 吉纳维芙·贝尔 英特尔高级研究员副总裁、企业战略办公室企业感知与洞察部门负责人
Jason Borenstein - Director of Graduate Research Ethics Programs, School of Public Policy and Office of Graduate Studies, Georgia Institute of Technology 贾森·博伦斯坦 - 佐治亚理工学院公共政策学院与研究生院办公室研究生科研伦理项目主任
Cynthia Breazeal - Associate Professor of Media Arts and Sciences, MIT Media Lab; Founder & Chief Scientist of Jibo, Inc. 辛西娅·布雷泽尔 - 麻省理工学院媒体实验室媒体艺术与科学副教授;Jibo 公司创始人兼首席科学家
Joost Broekens - Assistant Professor Affective Computing, Interactive Intelligence group; Department of Intelligent Systems, Delft University of Technology 约斯特·布鲁肯斯 - 代尔夫特理工大学智能系统系交互智能研究组情感计算助理教授
Rafael Calvo - Professor & ARC Future Fellow, School of Electrical and Information Engineering, The University of Sydney 拉斐尔·卡尔沃 - 悉尼大学电气与信息工程学院教授、澳大利亚研究理事会未来学者
Laurence Devillers - Professor of Computer Sciences, University Paris Sorbonne, LIMSICNRS ‘Affective and social dimensions in spoken interactions’ - member of the French Commission on the Ethics of Research in Digital Sciences and Technologies (CERNA) 劳伦斯·德维勒斯 - 巴黎索邦大学计算机科学教授,法国国家科学研究院 LIMSI 实验室"语音交互中的情感与社会维度"研究组负责人,法国数字科学与技术研究伦理委员会(CERNA)成员
Jonathan Gratch - Research Professor of Computer Science and Psychology, Director for Virtual Human Research, USC Institute for Creative Technologie 乔纳森·格拉奇 - 南加州大学计算机科学与心理学研究教授,创意科技研究所虚拟人类研究主任
Mark Halverson - Founder and CEO at Human Ecology Holdings and Precision Autonomy 马克·哈尔弗森 - 人类生态控股公司及精密自主公司创始人兼首席执行官
John C. Havens - Executive Director, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; Executive Director, The Council on Extended Intelligence; Author, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines 约翰·C·哈文斯 - IEEE 自主与智能系统全球伦理倡议执行董事;扩展智能委员会执行董事;《人工情感:在机器最大化时代拥抱人性》作者
Noreen Herzfeld - Reuter Professor of Science and Religion, St. John’s University 诺琳·赫茨菲尔德 - 圣约翰大学科学宗教学鲁特讲席教授
Chihyung Jeon - Assistant Professor, Graduate School of Science and Technology Policy, Korea Advanced Institute of Science and Technology (KAIST) 全志亨 - 韩国科学技术院(KAIST)科学技术政策研究生院助理教授
Preeti Mohan - Software Engineer at Microsoft and Computational Linguistics Master’s Student at the University of Washington 普雷蒂·莫汉 - 微软软件工程师,华盛顿大学计算语言学硕士在读
Bjoern Niehaves - Professor, Chair of Information Systems, University of Siegen 比约恩·尼哈维斯 - 教授,锡根大学信息系统学系主任
Rosalind Picard - Rosalind Picard, (Sc.D, FIEEE) Professor, MIT Media Laboratory, Director of Affective Computing Research; Faculty Chair, MIT Mind+Hand+Heart; Cofounder & Chief Scientist, Empatica Inc.; Cofounder, Affectiva Inc. 罗莎琳德·皮卡德 - (理学博士,IEEE 会士)麻省理工学院媒体实验室教授,情感计算研究主任;MIT Mind+Hand+Heart 教师主席;Empatica 公司联合创始人兼首席科学家;Affectiva 公司联合创始人
Edson Prestes - Professor, Institute of Informatics, Federal University of Rio Grande do Sul (UFRGS), Brazil; Head, Phi Robotics Research Group, UFRGS; CNPq Fellow 埃德森·普雷斯特斯 - 教授,巴西南里奥格兰德联邦大学信息学院;Phi 机器人研究组负责人;巴西国家科学技术发展委员会研究员
Matthias Scheutz - Professor, Bernard M. Gordon Senior Faculty Fellow, Tufts University School of Engineering 马蒂亚斯·朔伊茨 - 教授,伯纳德·M·戈登高级研究员,塔夫茨大学工程学院
Robert Sparrow - Professor, Monash University, Australian Research Council “Future Fellow”, 2010-15. 罗伯特·斯帕罗 - 莫纳什大学教授,澳大利亚研究理事会"未来研究员",2010-15 年
Cherry Tom - Emerging Technologies Intelligence Manager, IEEE Standards Association 切丽·汤姆 - IEEE 标准协会新兴技术情报经理
For a full listing of all IEEE Global Initiative Members, visit standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf. 完整 IEEE 全球倡议成员名单请访问:standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf
For information on disclaimers associated with EAD1e, see How the Document Was Prepared. 有关 EAD1e 免责声明信息,请参阅"文档编制说明"章节
Endnotes 尾注
1 See B. J. Fogg, “Persuasive Technology,” Ubiquity, December 2002. 1 参见 B·J·福格,《说服性技术》,载《泛在》2002 年 12 月刊。
2 See S. Turkle, W. Taggart, C. D. Kidd, and O. Daste, “Relational Artifacts with Children and Elders: The Complexities of Cybercompanionship,” Connection Science, vol. 18, no. 4, 2006. 2 参见 S·特克尔、W·塔格特、C·D·基德与 O·达斯特,《儿童与长者关系型人工物:电子陪伴的复杂性》,载《连接科学》2006 年第 18 卷第 4 期。
3 A discussion of intimate robots for therapeutic and personal use is outside of the scope of Ethically Aligned Design, First Edition. For further treatment, among others, see J. P. Sullins, “Robots, Love, and Sex: The Ethics of Building a Love Machine,” IEEE Transactions on Affective Computing, vol. 3, no. 4, pp. 398-409, 2012. 3 关于治疗及个人用途的亲密关系机器人讨论不在《伦理对齐设计(第一版)》范畴内。延伸阅读可参阅 J·P·萨林斯《机器人、爱与性:建造爱情机器的伦理问题》,载《IEEE 情感计算汇刊》2012 年第 3 卷第 4 期,第 398-409 页。
4 See R. Thaler and C. R. Sunstein, Nudge: Improving Decisions about Health, Wealth and Happiness. New Haven, CT: Yale University Press, 2008. 4 参见 R·塞勒与 C·R·桑斯坦,《助推:改善关于健康、财富和幸福的决策》,纽黑文:耶鲁大学出版社,2008 年。
5 See J. de Quintana Medina and P. Hermida Justo, “Not All Nudges Are Automatic: Freedom of Choice and Informative Nudges,” working paper presented to the European Consortium for Political Research, Joint Session of Workshops, 2016 Behavioral Change and Public Policy, Pisa, Italy, 2016; and M. D. White, The Manipulation of Choice: Ethics and Libertarian Paternalism. New York: Palgrave Macmillan, 2013. 5 参见 J. de Quintana Medina 与 P. Hermida Justo 合著《并非所有助推都是自动的:选择自由与信息型助推》,提交至欧洲政治研究联盟"行为变革与公共政策"联合研讨会的未发表论文,2016 年于意大利比萨;以及 M. D. White 所著《选择的操纵:伦理与自由家长主义》,纽约:帕尔格雷夫·麦克米伦出版社,2013 年。
6 See, for example, J. Borenstein and R. Arkin, “Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being,” Science and Engineering Ethics, vol. 22, no. 1, pp. 31-46, 2016. 6 例如参见 J. Borenstein 与 R. Arkin,《机器人助推:构建更具社会正义性人类的伦理考量》,《科学与工程伦理》第 22 卷第 1 期(2016 年),第 31-46 页。
7 See S. Omohundro, “Autonomous Technology and the Greater Human Good,” Journal of Experimental and Theoretical Artificial Intelligence, vol. 26, no. 3, pp. 303-315, 2014. 7 参见 S. Omohundro《自主技术与人类更大福祉》,《实验与理论人工智能杂志》第 26 卷第 3 期(2014 年),第 303-315 页。
8 See J. Roughgarden, M. Oishi, and E. Akçay, “Reproductive Social Behavior: Cooperative Games to Replace Sexual Selection,” Science, vol. 311, no. 5763, pp. 965-969, 2006.
9 See the Well-being chapter of this Ethically Aligned Design, First Edition.
Personal Data and Individual Agency 个人数据与个体能动性
Regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) of 2018 are helping to improve personal data protection. But legal compliance is not enough to mitigate the ethical implications and core challenges to human agency embodied by algorithmically driven behavioral tracking or persuasive computing. The core of the issue is one of parity. 《通用数据保护条例》(GDPR)和 2018 年《加州消费者隐私法案》(CCPA)等法规正在助力提升个人数据保护水平。但仅靠法律合规并不足以缓解算法驱动的行为追踪或说服式计算所体现的伦理影响及对人类能动性的核心挑战。问题的核心在于对等性。
Humans cannot respond on an individual basis to every algorithm tracking their behavior without technological tools, supported by policy, that allow them to do so. Individuals may provide consent without fully understanding specific terms and conditions agreements. Nor are they equipped to fully recognize how the nuanced use of their data to inform personalized algorithms affects their choices, at the risk of eroding their agency. 若缺乏政策支持的技术工具,人类无法以个体身份对每个追踪其行为的算法作出响应。个人可能在未完全理解具体条款协议的情况下表示同意。但他们同样不具备充分认知能力,无法完全意识到其数据的细微使用如何影响个性化算法,进而在侵蚀其能动性的风险下左右其选择。
Here we understand agency as an individual’s ability to influence and shape their life trajectory as determined by their cultural and social contexts. Agency in the digital arena enables an individual to make informed decisions where their own terms and conditions can be recognized and honored at an algorithmic level. 在此,我们将能动性理解为个体在文化和社会背景所决定的框架内影响并塑造自身生命轨迹的能力。数字领域的能动性使个人能够做出知情决策,确保其个人条款和条件能在算法层面得到识别与尊重。
To strengthen individual agency, governments and organizations must test and implement technologies and policies that let individuals create, curate, and control their online agency as associated with their identity. Data transactions should be moderated, with case-by-case authorization decisions made by the individual as to who can process what personal data for what purpose. 为增强个体能动性,政府与组织必须测试并实施相关技术与政策,使个人能够创建、管理并控制与其身份相关联的在线能动性。数据交易应受到规范,且关于何人可基于何种目的处理哪些个人数据,须由个体逐案作出授权决定。
Specifically, we recommend governments and organizations: 我们特别建议政府与组织采取以下措施:
Create: Provide every individual with the means to create and project their own terms and conditions regarding their personal data that can be read and agreed to at a machine-readable level. 创建机制:为每个个体提供创建并投射其个人数据相关条款与条件的技术手段,这些条款需具备机器可读性且能被协议方读取确认。
Curate: Provide every individual with a personal data or algorithmic agent which they curate to represent their terms and conditions in any real, digital, or virtual environment. 策管:为每位个体配备一个可由其自主管理的数据或算法代理,用以在任何现实、数字或虚拟环境中代表其个人条款与条件。
Control: Provide every individual access to services allowing them to create a trusted identity to control the safe, specific, and finite exchange of their data. 管控:为每位个体提供可信身份创建服务,使其能够安全、精准且有限度地掌控自身数据的交换。
Three sections of this chapter reflect these core ideals regarding human agency. 本章的三个部分体现了这些关于人类能动性的核心理念。
A fourth section addresses issues surrounding personal data and individual agency relating to children. 第四节则探讨了涉及儿童个人数据与个体能动性的相关问题。
Section 1-Create 第一节 创建
To retain agency in the algorithmic era, each individual must have the means to create and project their own terms and conditions regarding their personal data. These must be readable and usable by both humans and machines. 在算法时代保持自主权,每个人都必须有能力创建并表达关于其个人数据的个性化条款与条件。这些条款必须同时具备人类可读性和机器可读性。
Issue: What would it mean for a person to have individually controlled terms and conditions for their personal data? 议题:个人对其数据拥有个性化可控条款意味着什么?
Background 背景
Part of providing individually controlled terms and conditions for personal data is to help each person consider what their preferences are regarding their data versus dictating how they need to share it. While questions along these lines are framed in light of a person’s privacy, their preferences also reveal larger values for individuals. The ethical issue is whether A/IS act in accordance with these values. 为个人数据提供个体可控的条款和条件,其部分目的是帮助每个人思考自己对数据的偏好,而非强制规定他们必须如何分享数据。虽然这类问题的提出是基于个人隐私考量,但他们的偏好也揭示出更深层次的个人价值观。伦理问题在于人工智能/智能系统(A/IS)是否遵循这些价值观行事。
This process of investigating one’s values to identify these preferences is a powerful step towards regaining data agency. The point is not only that a person’s data are protected, but also that by curating these answers they become educated about how important their information is in the context of how it is shared. 这种通过审视自身价值观来确定偏好的过程,是重获数据自主权的重要一步。关键不仅在于保护个人数据,更在于通过梳理这些答案,让人们认识到在数据共享的背景下,自身信息的重要性。
Most individuals also believe controlling their personal data only happens on the sites or social networks to which they belong and have no idea of the consequences of how that data may be used by others in the future. Agreeing to most standard terms and conditions on these sites largely means users consent to give up control of their personal data rather than play a meaningful role in defining and curating its downstream use. 大多数人还认为,控制个人数据仅限于他们所属的网站或社交网络,并不了解这些数据未来可能被他人使用的后果。同意这些网站上的大多数标准条款和条件,很大程度上意味着用户放弃了个人数据的控制权,而非在定义和管理其后续使用中发挥实质性作用。
The scope of how long one should or could control the downstream use of their data can be difficult to calculate as consent-based models of personal data have trained users to release rights on any claims for use of their data which are entirely provided to the service, manufacturer, and their partners. However, models like YouTube’s Content ID provide a form of precedent for thinking about how an individual’s data could be technically protected where it is considered as an asset they could control and copyright. Here is language from YouTube’s site about the service: “Copyright owners can use a system called Content ID to easily identify and manage their content on YouTube. Videos uploaded to YouTube are scanned against a database of files that have been submitted to us by content owners.” In this sense, the question of how long or how far downstream one’s personal data should be protected takes on the same logic of how long a corporation’s intellectual property or copyrights could be protected based on initial legal terms set. 个人应或能对其数据下游使用控制多久的范围往往难以界定,因为基于同意的个人数据模式已使用户习惯于放弃对完全提供给服务商、制造商及其合作伙伴的数据使用主张权利。然而,像 YouTube 内容识别系统这样的模式为思考个人数据的技术保护提供了先例——当数据被视为可控制且具有版权属性的资产时。YouTube 官网对此服务的描述如下:"版权所有者可通过内容识别系统轻松识别并管理其在 YouTube 上的内容。上传至 YouTube 的视频会与内容所有者提交给我们的文件数据库进行比对扫描。"从这个意义上说,个人数据应受保护的时间跨度和下游范围问题,与企业知识产权或版权基于初始法律条款受保护的期限逻辑具有同构性。
One challenge is how to define use of data that can affect the individual directly, versus use of aggregated data. For example, an individual subway user’s travel card, tracking their individual movements, should be protected from uses that identify or profile that individual to make inferences about his/her likes or location generally. But data provided by a user could be included in an overall travel system’s management database, aggregated into patterns for scheduling and maintenance as long as the individual-level data are deleted. Where users have predetermined via their terms and conditions that they are willing to share their data for these travel systems, they can meaningfully articulate how to share their information. 一个挑战在于如何区分直接影响个人的数据使用与聚合数据的使用。例如,单个地铁用户的交通卡记录了其个人出行轨迹,这类数据应受到保护,避免被用于识别或分析该用户以推测其喜好或常去地点。但用户提供的数据在删除个人层级信息后,可纳入整体交通系统的管理数据库,聚合为用于调度和维护的出行模式。若用户已通过条款协议预先同意为交通系统共享数据,他们便能有效表达如何分享自身信息。
Under current business models, it is common for people to consent to the sharing of discrete data like credit card transaction data, answers to test questions, or how many steps they walk. However, once aggregated these data and the associated insights may lead to complex and sensitive conclusions being drawn about individuals. This end use of the individual’s data may not have been part of the initial sharing agreement. This is why models for terms and conditions created for user control typically alert people via onscreen or other warning methods when their predetermined preferences are not being honored. 在现行商业模式下,人们通常同意分享诸如信用卡交易数据、测试问题答案或步行步数等离散数据。然而,一旦这些数据被汇总分析,便可能推导出关于个人的复杂而敏感的结论。这种对个人数据的最终使用方式,往往超出了最初数据共享协议的范畴。正因如此,为用户控制设计的条款与条件模型,通常会在系统检测到预设偏好未被遵循时,通过屏幕提示或其他警示方式向用户发出提醒。
Recommendation 建议
Individuals should be provided tools that produce machine-readable terms and conditions that are dynamic in nature and serve to protect their data and honor their preferences for its use. 应为个人提供能生成机器可读条款与条件的工具,这些条款应具备动态特性,既能保护用户数据,又能确保其使用偏好得到尊重。
Specifically: 具体而言:
Personal data access and consent should be managed by the individual using their curated terms and conditions that provide notification and an opportunity for consent at the time data are exchanged, versus outside actors being able to access personal data without an individual’s awareness or control. 个人数据访问与授权应当由个体通过其定制的条款进行管理,这些条款需在数据交换时提供通知并给予授权机会,而非让外部行为者能够在个人不知情或无法控制的情况下获取其数据。
Terms should be presented in a way that allows a user to easily read, interpret, understand, and choose to engage with any A/IS. Consent should be both conditional and dynamic, where “dynamic” means downstream uses of a person’s data must be explicitly called out, allowing them to cancel a service and potentially rescind or “kill” any data they have shared with a service to date via the use of a “Smart Contract” or specific conditions as described in mutual terms and conditions between two parties at the time of exchange. 条款的呈现方式应使用户能够轻松阅读、解释、理解并选择与任何自主/智能系统(A/IS)进行交互。授权应当既是条件性的也是动态的,其中"动态"意味着对个人数据的后续使用必须明确说明,允许他们终止服务,并可能通过"智能合约"或双方在交换时约定的特定条款,撤销或"销毁"迄今为止与服务共享的任何数据。
For further information on these issues, please see the following section in regard to algorithmic agents and their application. 关于这些问题的更多信息,请参阅有关算法代理及其应用的后续章节。
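To make the recommendation concrete, here is a minimal sketch of what a machine-readable, dynamic terms-and-conditions record might look like. The `PersonalTerms` structure, its field names, and the request format are illustrative assumptions, not the schema of IEEE P7012 or of any existing service:

```python
from dataclasses import dataclass, field

@dataclass
class PersonalTerms:
    """A hypothetical machine-readable terms-and-conditions record."""
    owner: str
    allowed_purposes: set = field(default_factory=set)  # e.g., {"transit_scheduling"}
    allow_aggregation_only: bool = True   # individual-level profiling refused
    downstream_sharing: bool = False      # no transfer to partners by default
    revoked: bool = False                 # the "kill switch" for past consent

    def permits(self, request: dict) -> bool:
        """Evaluate a data request against the owner's standing preferences."""
        if self.revoked:
            return False
        if request.get("purpose") not in self.allowed_purposes:
            return False
        if request.get("individual_level") and self.allow_aggregation_only:
            return False
        if request.get("shared_downstream") and not self.downstream_sharing:
            return False
        return True

terms = PersonalTerms(owner="alice", allowed_purposes={"transit_scheduling"})

# Aggregated use for the stated purpose is permitted...
ok = terms.permits({"purpose": "transit_scheduling", "individual_level": False})

# ...while individual-level profiling for an unapproved purpose is refused,
# which is the moment an on-screen warning could notify the user.
denied = terms.permits({"purpose": "advertising", "individual_level": True})
print(ok, denied)  # True False
```

Because the record is evaluated at the time data are exchanged, setting `revoked` corresponds to the dynamic-consent idea of rescinding data already shared with a service.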
Further Resources 更多资源
IEEE P7012™ - IEEE Standards Project for Machine Readable Personal Privacy Terms. This approved standardization project (currently in development) directly honors the goals laid out in Section One of this document. IEEE P7012™ - 机器可读个人隐私条款的 IEEE 标准项目。这一获批的标准化项目(目前正在开发中)直接践行了本文件第一章所述目标。
M. Orcutt, “Personal AI Privacy Watchdog Could Help You Regain Control of Your Data” MIT Technology Review, May 11, 2017. M·奥克特,《个人 AI 隐私守护者或助你重掌数据控制权》,麻省理工科技评论,2017 年 5 月 11 日。
M. Hintze, Privacy Statements: Purposes, Requirements, and Best Practices. Cambridge, U.K.: Cambridge University Press, 2017. M·辛策,《隐私声明:目的、要求与最佳实践》,英国剑桥:剑桥大学出版社,2017 年。
D. J. Solove, “Privacy Self-Management and the Consent Dilemma,” Harvard Law Review, vol. 126, no. 7, pp. 1880-1903, May 2013. D·J·索洛夫,《隐私自我管理与同意困境》,《哈佛法律评论》第 126 卷第 7 期,第 1880-1903 页,2013 年 5 月。
N. Sadeh, M. Degeling, A. Das, A. S. Zhang, A. Acquisti, L. Bauer, L. Cranor, A. Datta, and D. Smullen, A Privacy Assistant for the Internet of Things: https://www.usenix.org/sites/default/files/soups17_poster_sadeh.pdf N·萨德赫、M·德格林、A·达斯、A·S·张、A·阿奎斯蒂、L·鲍尔、L·克兰诺、A·达塔与 D·斯马伦,《物联网隐私助手》:https://www.usenix.org/sites/default/files/soups17_poster_sadeh.pdf
H. Lee, R. Chow, M. R. Haghighat, H. M. Patterson and A. Kobsa, “IoT Service Store: A Web-based System for Privacy-aware IoT Service Discovery and Interaction,” 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Athens, pp. 107-112, 2018. H·李、R·周、M·R·哈格海特、H·M·帕特森与 A·科布萨,《物联网服务商店:基于 Web 的隐私感知物联网服务发现与交互系统》,2018 年 IEEE 普适计算与通信国际会议研讨会(PerCom Workshops),雅典,第 107-112 页,2018 年。
L. Cranor, M. Langheinrich, M. Marchiori, M. Presler-Marshall, and J. Reagle, “The Platform for Privacy Preferences 1.0 (P3P1.0) Specification,” W3C Recommendation, [Online]. Available: www.w3.org/TR/P3P/, Apr. 2002. L. Cranor, M. Langheinrich, M. Marchiori, M. Presler-Marshall 及 J. Reagle,《隐私偏好平台 1.0(P3P1.0)规范》,W3C 推荐标准,[在线]。访问地址:www.w3.org/TR/P3P/,2002 年 4 月。
L. F. Cranor, “Personal Privacy Assistants in the Age of the Internet of Things,” in World Economic Forum Annual Meeting, 2016. L. F. 克兰诺,《物联网时代的个人隐私助手》,载于世界经济论坛年会,2016 年。
Section 2-Curate 第二节 策展
To retain agency in the algorithmic era, we must provide every individual with a personal data or algorithmic agent they curate to represent their terms and conditions in any real, digital, or virtual environment. This “agent” would be empowered to act as an individual’s legal proxy in the digital and virtual arena. Oftentimes, the functionality of this agent will be automated, operating along the lines of current ad blockers which do not permit prespecified algorithms to access a user’s data. For other situations that might be unique or new to this agent, a user could specify that notices or updates be sent on a case-by-case basis to determine where there could be a concern. 要在算法时代保持自主权,我们必须为每个个体配备可自主管理的个人数据或算法代理,使其能够在现实、数字或虚拟环境中代表个体的条款与条件。这种"代理"将被授权作为个体在数字和虚拟领域的法定代表。多数情况下,该代理功能将实现自动化运作,其原理类似于当前广告拦截器——禁止预设算法访问用户数据。对于该代理可能遇到的特殊或新型情况,用户可设定按个案接收通知或更新,以便识别潜在风险。
Issue: What would it mean for a person to have an algorithmic agent helping them actively represent and curate their terms and conditions at all times? 议题:若一个人拥有算法代理助手持续协助其主动表达和管理个人条款与条件,这意味着什么?
Background 背景
While it’s essential to create your own terms and conditions to broadcast your preferences, it’s also important to recognize that humans do not operate at an algorithmic speed or level. A significant part of retaining your agency in this way involves identifying trusted services that can essentially act on your behalf when making decisions about your data. 尽管制定并传播个人偏好条款至关重要,但同样需要认识到人类无法以算法速度或精度运作。以这种方式保持自主权的关键部分在于识别可信的服务,这些服务在关于您数据的决策过程中能够实质性地代表您行事。
Part of this logic entails putting you “at the center of your data”. One of the greatest challenges to user agency is that once you give your data away, you do not know how it is being used or by whom. But when all transactions about your data go through your A/IS agent honoring your preferences, you have better opportunities to control how your information is shared. 该逻辑的部分核心是将您"置于个人数据的中心位置"。用户自主权面临的最大挑战在于,一旦数据被交出,您便无从知晓其使用方式和使用者。但当所有涉及您数据的交易都经由遵循您偏好的 A/IS 代理处理时,您就能更好地控制信息的共享方式。
As an example, with medical data-while it is assumed most would share all their medical data with their spouse-most would also not wish to share that same amount of data with their local gym. This is an issue that extends beyond privacy, meaning one’s cultural or individual preferences about what personal information to share, to utility and clarity. This type of sharing also benefits users or organizations on the receiving end of data from these exchanges. For instance, the local gym in the previous example may only need basic heart or general health information and would actually not wish to handle or store sensitive cancer or other personal health data for reasons of liability. 以医疗数据为例——虽然大多数人会与配偶共享全部医疗数据,但多数人也不愿向当地健身房披露同等程度的信息。这个问题超越了隐私范畴,涉及个人对共享哪些信息的文化或个体偏好,更关乎实用性与清晰度。此类数据共享也使接收数据的用户或组织受益。例如前文中的健身房可能仅需基础心脏或总体健康信息,出于责任考量,他们实际上并不愿处理或存储敏感的癌症等个人健康数据。
A precedent for this type of patient- or user-centric model comes from Gliimpse, a service described by Jordan Crook from TechCrunch in his article, “Apple acquired Gliimpse, a personal health data startup”: “Gliimpse works by letting users pull their own medical info into a single virtual space, with the ability to add documents and pictures to fill out the profile. From there, users can share that data (as a comprehensive picture) to whomever they wish.” The fact that Apple acquired the startup points to the potential for the successful business model of user-centric data exchange and putting individuals at the center of their data. 这种以患者或用户为中心的模型已有先例,即科技媒体 TechCrunch 记者 Jordan Crook 在《苹果收购个人健康数据初创公司 Gliimpse》一文中描述的服务:"Gliimpse 通过让用户将个人医疗信息整合至统一虚拟空间来运作,用户可添加文档和图片完善健康档案,并能自主选择将完整健康数据共享给指定对象。"苹果收购该初创公司的事实,印证了以用户为核心的数据交换商业模式具有成功潜力,彰显了将个体置于数据主权中心的可能性。
A person’s A/IS agent is a proactive algorithmic tool honoring their terms and conditions in the digital, virtual, and physical worlds. Any public space where a user may not be aware they are under surveillance by facial recognition, biometric, or other tools that could track, store, and utilize their data can now provide overt opportunity for consent via an A/IS agent platform. Even where an individual is not sure they are being tracked, by broadcasting their terms and conditions via digital means, they can demonstrate their preferences in the public arena. Via Bluetooth or similar technologies, individuals could offer their terms and conditions in a ubiquitous and always-on manner. This means that even when an individual’s terms and conditions are not honored, people would have the ability to demonstrate their desire not to be tracked, which could provide a methodology for the democratic right to protest in a peaceful manner. And where those terms and conditions are recognized, meaning technically recognized even if they are not honored, one’s opinions could be formally logged via GPS and timestamp data. 个人的 A/IS 代理是一种主动算法工具,旨在数字、虚拟和物理世界中维护其条款与条件。任何公共场所若用户可能未察觉自身正受到面部识别、生物特征或其他可追踪、存储并利用其数据的工具监控时,现在均可通过 A/IS 代理平台提供明确的同意机会。即便个体不确定是否被追踪,通过数字方式广播其条款与条件,他们仍能在公共领域表明自身偏好。借助蓝牙或类似技术,个人能以无处不在且持续在线的方式提供其条款与条件。这意味着即使个体的条款与条件未被遵守,人们仍能通过技术手段表明拒绝被追踪的意愿——这为和平行使民主抗议权提供了方法论基础。当这些条款与条件获得技术层面的识别(即便未被实际遵守),个体的主张仍可通过 GPS 和时间戳数据被正式记录。
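As a purely illustrative sketch, the always-on broadcast and the GPS/timestamp logging described above might look like the following; the record layout and field names are assumptions, not any deployed protocol:

```python
import json
import time

def broadcast_payload(persona: str) -> bytes:
    """A compact terms-and-conditions record a personal device could advertise."""
    terms = {"persona": persona, "tracking": "deny", "facial_recognition": "deny"}
    return json.dumps(terms).encode()

def log_recognition(persona: str, lat: float, lon: float) -> dict:
    """A GPS- and timestamp-stamped record that the broadcast terms were
    technically recognized, usable later as a formally logged objection."""
    return {"persona": persona, "lat": lat, "lon": lon, "ts": time.time()}

payload = broadcast_payload("walker-19")
entry = log_recognition("walker-19", 40.7128, -74.0060)
print(b"deny" in payload, "ts" in entry)  # True True
```

Note that the broadcast carries a persona identifier rather than a legal identity, so stating a preference in public does not itself reveal who is stating it.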
The A/IS agent could serve as an educator and negotiator on behalf of its user by suggesting how requested data could be combined with other data that has already been provided, inform the user if data are being used in a way that was not authorized, or make recommendations to the user based on a personal profile. As a negotiator, the agent could broker conditions for sharing data and could include payment to the user as a term, or even retract consent for the use of data previously authorized, for instance, if a breach of conditions was detected. 该人工智能系统(A/IS)代理可充当用户的教育者与协商代表,通过建议如何将请求数据与已提供的其他数据相结合、告知用户数据是否正以未经授权的方式被使用,或基于个人档案向用户提出建议。作为协商者,该代理可制定数据共享条件,并将用户报酬作为条款纳入其中;例如,若检测到条款违约情况,甚至可撤回先前对数据使用的授权。
Recommendations 建议
Algorithmic agents should be developed for individuals to curate and share their personal data. Specifically: 应开发算法代理工具,帮助个人管理和共享其个人数据。具体而言:
For purposes of privacy, a person must be able to set up complex permissions that reflect a variety of wishes. 出于隐私保护目的,个人必须能够设置反映多种意愿的复杂权限。
The agent should help a person foresee and mitigate potential ethical implications of specific machine learning data exchanges. 该代理应帮助用户预见并缓解特定机器学习数据交换可能引发的伦理问题。
A user should be able to override his/her personal agents should he/she decide that the service offered is worth the conditions imposed. 当用户认为所提供服务值得接受相关条件时,应能随时覆盖其个人代理的决策。
An agent should enable machine-to-machine processing of information to compare, recommend, and assess offers and services. 智能体应支持机器间的信息处理,以比较、推荐和评估各类服务与报价。
Institutional systems should ensure support for and respect the ability of individuals to bring their own agent to the relationship without constraints that would make some guardians inherently incompatible or subject to censorship. 机构系统应确保支持并尊重个体在关系中自带智能体的权利,不得设置会导致某些监护程序本质上不兼容或遭受审查的限制。
Vulnerable parts of the population will need protection in the process of granting access. 在授权访问的过程中,需对弱势群体提供特殊保护。
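The agent behavior recommended above, automatic handling of prespecified request types, case-by-case escalation for novel ones, and a user override, might be sketched as follows; `KNOWN_RULES` and `agent_decide` are hypothetical names, not an existing API:

```python
from typing import Optional

# Prespecified rules the agent applies automatically, in the spirit of an
# ad blocker; everything else is escalated to the user case by case.
KNOWN_RULES = {
    "behavioral_advertising": "deny",
    "service_delivery": "allow",
}

def agent_decide(request_type: str, user_override: Optional[str] = None) -> str:
    """Return the agent's decision for one incoming data request."""
    if user_override is not None:
        # The user can always overrule the agent, e.g., when a service
        # is judged worth the conditions imposed.
        return user_override
    if request_type in KNOWN_RULES:
        return KNOWN_RULES[request_type]
    return "ask_user"  # novel request: notify the owner before any exchange

print(agent_decide("behavioral_advertising"))           # automatic deny
print(agent_decide("genetic_research"))                 # escalated to the user
print(agent_decide("behavioral_advertising", "allow"))  # explicit override wins
```

The same decision function could be run machine-to-machine, letting two agents compare and assess offers before any personal data changes hands.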
Further Resources 延伸阅读
IEEE P7006™ - IEEE Standards Project on Personal Data AI Agent Working Group. Designed as a tool to allow any individual to create their own personal “terms and conditions” for their data, the AI Agent will also provide a technological tool for individuals to manage and control their identity in the digital and virtual world. IEEE P7006™ - 个人数据人工智能代理工作组的 IEEE 标准项目。该项目旨在开发一种工具,使个人能够为自己的数据创建专属"使用条款",该人工智能代理还将提供技术手段,帮助用户在数字和虚拟世界中管理并控制自身身份信息。
Tools allowing an individual to create a form of an algorithmic guardian are often labeled as PIMS, or Personal Information Management Services. Nesta in the United Kingdom was one of the funders of early research about PIMS conducted by CtrlShift. 允许个人创建算法守护者的工具通常被称为 PIMS(个人信息管理服务)。英国国家科技艺术基金会是早期 PIMS 研究的主要资助方之一,该研究由 CtrlShift 公司具体实施。
Section 3-Control 第三节 控制
To retain agency in the algorithmic era, we must provide every individual access to services allowing them to create a trusted identity to control the safe, specific, and finite exchange of their data. 要在算法时代保持自主权,我们必须为每个人提供可创建可信身份的服务,使其能够安全、精准且有限度地控制自身数据的交换。
Issue: How can we increase agency by providing individuals access to services allowing them to create a trusted identity to control the safe, specific, and finite exchange of their data? 问题:如何通过提供创建可信身份的服务来增强个人自主权,使其能够安全、精准且有限度地控制自身数据的交换?
Background 背景
Pervasive behavior-tracking adversely affects human agency by recognizing our identity in every action we take on and offline. This is why identity as it relates to individual data is emerging at the forefront of the risks and opportunities related to use of personal information for A/IS. Across the identity landscape there is increasing tension between the requirement for federated identities versus a range of identities. In federated identities, all data are linked to a natural and identified person. When one has a range of identities, or personas, these can be context specific and determined by the use case. New movements, such as “Self-Sovereign Identity”, defined as the right of a person to determine his or her own identity, are emerging alongside legal identities, e.g., those issued by governments, banks, and regulatory authorities, to help put individuals at the center of their data in the algorithmic age. 无处不在的行为追踪通过识别我们在线上线下的每一个行为来确认身份,这对人的自主性产生了负面影响。正因如此,与个人数据相关的身份问题正成为个人信息使用风险与机遇的核心议题。在身份识别领域,联邦身份与多重身份之间的矛盾日益凸显——联邦身份要求将所有数据关联到确定的自然人,而多重身份则允许根据使用场景创建特定情境下的身份角色。新兴的"自我主权身份"运动(即个人自主决定其身份的权利)正与政府、银行及监管机构颁发的法定身份并存发展,旨在算法时代让个体重获数据掌控权。
Personas, identities that act as proxies, and pseudonymity are also critical requirements for privacy management and agency. These help individuals select an identity that is appropriate for the context they are in or wish to join. In these settings, trust transactions can still be enabled without giving up the “root” identity of the user. For example, it is possible to validate that a user is over eighteen or is eligible for a service. 角色身份、作为代理的虚拟身份以及化名机制同样是隐私管理和自主权保障的关键需求。这些机制帮助个体根据所处或希望加入的特定情境选择合适的身份。在此类设定下,即便不透露用户的"根身份",仍可完成信任交易。例如,能够验证用户是否年满十八岁或符合某项服务的使用资格。
Attribute verification will play a significant role in enabling individuals to select an identity that provides access without compromising agency. This type of access is especially important in dealing with the myriad algorithms that interact with narrow segments of our identity data. In these situations, individuals typically are not aware of the context in which their data will be used.
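The attribute check described above, validating that a user is over eighteen without exposing their root identity, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not a production protocol: a trusted issuer signs a claim about a single attribute, and a service verifies the signature without ever seeing the underlying birthdate or identity. The names and the shared-key HMAC scheme are assumptions for the sketch; real deployments use public-key credentials (e.g., W3C Verifiable Credentials) rather than a key shared between issuer and verifier.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # assumption: issuer and verifier share this key

def issue_claim(attribute: str, value: bool) -> dict:
    """Issuer attests to one attribute; the root identity is never included."""
    payload = json.dumps({"attr": attribute, "value": value}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_claim(claim: dict, attribute: str) -> bool:
    """Service checks the signature and the attribute, learning nothing else."""
    expected = hmac.new(ISSUER_KEY, claim["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, claim["sig"]):
        return False
    data = json.loads(claim["payload"])
    return data["attr"] == attribute and data["value"] is True

# The issuer knows the birthdate; the claim carries only the derived attribute.
claim = issue_claim("over_18", True)
print(verify_claim(claim, "over_18"))  # True: eligible, identity undisclosed
```

The design point is that the verifier's decision depends only on the signed attribute, so the same claim can be presented in any context without linking back to a federated identity.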
Recommendation
Individuals should have access to trusted identity verification services to validate, prove, and support the context-specific use of their identity.
Further Resources
Sovrin Foundation, The Inevitable Rise of Self-Sovereign Identity, Sept. 29, 2016.
T. Ruff, “Three Models of Digital Identity Relationships,” Evernym, Apr. 24, 2018.
C. Pettey, The Beginner’s Guide to Decentralized Identity. Gartner, 2018.
C. Allen, The Path to Self-Sovereign Identity. GitHub, 2017.
Section 4-Children’s Data Issues
While the focus of this chapter is to provide all individuals with agency regarding their personal data, some sectors of society have little or no control. For some elderly individuals or the mentally ill, it is because they have been found not to have “mental capacity”; for prisoners in the criminal justice system, society has taken control away as punishment. In the case of children, it is because they are considered human beings in development with evolving capacities.
We examine the issues of children as an example case and recommend either regulation or a technical architecture that provides a veil and buffer from harm until a child reaches an age at which they can take personal responsibility for their decisions.
In many parts of the world, children are viewed by the law as being primarily charges of their parents, who make choices on their behalf. In Europe, however, the state has a role in ensuring the “best interests of the child”¹. In schools, the two interests operate side by side, with parents being given some control over their child’s education but with many decisions being made by the schools.
Many of the issues described above concern choices around personal data and the future impacts of how the data are gathered and shared. Children are at the forefront of technological developments, with educational and recreational technology gathering data from them all day at school and intelligent toys doing the same throughout their time at home.
As children post, click, search, and share information, their data are linked to various profiles, grouped into segmented audiences, and fed into machine learning algorithms. Some of these may be designed to target campaigns that increase sales, influence sentiment, encourage online games, impact social networks, or influence religious and political views. Data fed into algorithmic advertising are gathered not only from children’s online actions but also from their devices. An example of device data is browser fingerprinting³, a set of data about a child’s browser or operating system. Fingerprinting vastly increases privacy risks because it can be used to link data to an individual.
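To make the fingerprinting mechanism concrete: a browser fingerprint is, at its simplest, a stable hash over attributes the browser reveals anyway, so two visits from the same device yield the same identifier without any cookie or login. The attribute names below are illustrative assumptions; real fingerprinting libraries combine dozens of signals (canvas rendering, installed fonts, audio stack).

```python
import hashlib
import json

def browser_fingerprint(attributes: dict) -> str:
    """Derive a stable identifier from passively observable browser traits."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Illustrative attributes; the same device returning later hashes identically.
visit_1 = {"user_agent": "Mozilla/5.0 ...", "screen": "1920x1080",
           "timezone": "Europe/Copenhagen", "language": "da-DK"}
visit_2 = dict(visit_1)

print(browser_fingerprint(visit_1) == browser_fingerprint(visit_2))  # True
```

This is why fingerprints are so hard to erase: clearing cookies changes nothing, because the identifier is recomputed from the device itself on every visit.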
Increasingly, children’s beliefs and social norms are established by what they see and experience online. Their actions reflect what they believe is possible and expected. The report “Digital Deceit: The Technologies Behind Precision Propaganda on the Internet”⁴ explains how companies collect, process, and then monetize personal preferences, socioeconomic status, fears, political and religious beliefs, location, and patterns of internet use.
Companies, governments, political parties, and philosophical and religious organizations use data available about students and children to influence how they spend their time and money, the people and institutions they trust, and with whom they spend time and build relationships.
Many aspects of a child’s life can be digitized. Their behavioral, device, and network data are combined and used by machine learning algorithms to determine the information and content that best achieve the educational goals of the schools and the economic goals of the advertisers and platform companies.
Issue: Mass personalization of instruction
Background
The mass personalization of education promises better education for all at very low cost through A/IS-enabled computer-based instruction, freeing up teachers to work with children individually to pursue their passions. These applications will rely on the continuous gathering of personal data regarding mood, thought processes, private stories, physiological data, and more. The data will be used to construct a computational model of each child’s interests, understanding, strengths, and weaknesses. The model provides an intimate understanding of how they think, what they understand, how they process information, and how they react to new information, all of which can be used to drive instructional content and feedback.
Sharing of this data between classes, enabling it to follow students through their schooling, will make the models more effective and beneficial to children, but it also exposes children and their families to social control. If performance data are correlated with social data on a family, they could be used by social authorities in decision-making about the family. For example, from 2015 to 2018, digital well-being tests were administered in schools in Denmark. Children were asked about everything from bullying and loneliness to stomachaches. It was recently disclosed that although the collected data were presented as anonymous, they were not. Data were stored with social security numbers, correlated with other test data, and even used in case management by some Danish municipalities.⁵ Commercial profiling and correlation of different sets of personal data may further affect these children in future job or educational situations.
Recommendation
Educational data offer a unique opportunity to model individuals’ thought processes and could be used to predict or change individuals’ behavior in many situations. Governments and organizations should classify educational data as sensitive and implement special protective standards.
Children’s data should be held in “escrow” and not used for any commercial purposes until a child reaches the age of majority and is able to authorize use as they choose.
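In engineering terms, the escrow recommendation amounts to a release policy that the data store itself enforces: commercial use is denied until the data subject both reaches majority and grants consent. The sketch below is a hypothetical illustration of that policy check, not a reference to any existing system; the field names and the age threshold are assumptions (the age of majority varies by jurisdiction).

```python
from datetime import date

AGE_OF_MAJORITY = 18  # assumption; varies by jurisdiction

def years_between(born: date, today: date) -> int:
    """Whole years elapsed, accounting for whether the birthday has passed."""
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

def may_release_for_commercial_use(record: dict, today: date) -> bool:
    """Escrow policy: release only after majority AND explicit consent."""
    of_age = years_between(record["date_of_birth"], today) >= AGE_OF_MAJORITY
    return of_age and record.get("consent_granted", False)

record = {"date_of_birth": date(2010, 6, 1), "consent_granted": False}
print(may_release_for_commercial_use(record, date(2024, 1, 1)))  # False: still a minor

record["consent_granted"] = True
print(may_release_for_commercial_use(record, date(2029, 6, 1)))  # True: adult who opted in
```

The key property is that neither condition alone suffices: data collected in childhood stay locked even after the subject turns eighteen, until they actively authorize use.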
Further Resources
The journal of the International Artificial Intelligence in Education Society: http://iaied.org/journal/
Deeper discussion and bibliography of future trends of AI-based education, with utopian and dystopian case scenarios: N. Pinkwart, “Another 25 Years of AIED? Challenges and Opportunities for Intelligent Educational Technologies of the Future,” International Journal of Artificial Intelligence in Education, vol. 26, no. 2, pp. 771-783, 2016. [Online].
K. Firth-Butterfield, “What happens when your child’s friend is an AI toy that talks back?” in World Economic Forum: Generation AI, https://www.weforum.org/agenda/2018/05/generation-ai-what-happens-when-your-childs-invisible-friend-is-an-ai-toy-that-talks-back/, May 22, 2018.
Issue: Technology choice-making in schools
Background
Children, as minors, have no standing to give or deny consent, or to control the use of their personal data. Parents have only limited choices in what are often school-wide implementations of educational technology. Examples include the use of Google applications, face recognition in security systems, and computer-driven instruction as described above. In many cases, parents’ only choice would be to send their children to a different school, but that choice is seldom available.
How should schools make these choices? How much input should parents have? Should parents be able to demand technology-free teaching?
There are many gaps in current student data regulation. In June 2018, CLIP, the Center on Law and Information Policy at Fordham Law School, published “Transparency and the Marketplace for Student Data”.⁶ This study concluded that “student lists are commercially available for purchase on the basis of ethnicity, affluence, religion, lifestyle, awkwardness, and even a perceived or predicted need for family planning services”. Fordham found that the data market is becoming one of the largest and most profitable marketplaces in the United States. Data brokers have databases that store billions of data elements on nearly every United States consumer. Information from students in the pursuit of an education, however, should not be exploited and commercialized without restraint.
Fordham researchers found at least 14 data brokers who advertise the sale of student information. One sold lists of students as young as two years old. Another sold lists of student profiles on the basis of ethnicity, religion, economic factors, and even gawkiness.
Recommendation
Local and national educational authorities must work to develop policies surrounding students’ personal data with all stakeholders: administrators, teachers, technology providers, students, and parents. The goal is to balance the best educational interests of each child with best practices that ensure the safety of their personal data. Such efforts will raise awareness among all stakeholders of the promise of, and the compromises inherent in, new educational technologies.
Further Resources
Common Sense Media privacy evaluation project: https://www.commonsense.org/education/privacy
D. T. Ritvo, L. Plunkett, and P. Haduong, “Privacy and Student Data: Companion Learning Tools,” Berkman Klein Center for Internet and Society at Harvard University, 2017. [Online]. Available: http://blogs.harvard.edu/youthandmediaalpha/files/2017/03/PrivacyStudentDataCompanionLearningTools.pdf [Accessed Dec. 2018].
F. Alim, N. Cardozo, G. Gebhart, K. Gullo, and A. Kalia, “Spying on Students: School-Issued Devices and Student Privacy,” Electronic Frontier Foundation, https://www.eff.org/wp/school-issued-devices-and-student-privacy, April 13, 2017.
N. C. Russell, J. R. Reidenberg, E. Martin, and T. Norton, “Transparency and the Marketplace for Student Data,” Virginia Journal of Law and Technology, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3191436, June 6, 2018.
Issue: Intelligent toys
Background
Children will be exposed to A/IS not only at school but also at home, while they play and while they sleep. Toys are already being sold that offer interactive, intelligent opportunities for play. Many of them collect video and audio data which are stored on company servers and either are or could be mined for profiling or marketing data.
There is currently little regulatory oversight. In the United States, COPPA⁷ offers some protection for the data of children under 13. Germany has outlawed such toys under legislation banning spying equipment enacted in 1981. Corporate A/IS are being embodied in toys and given to children to play with, to talk to, to tell stories to, and to explore all the personal development issues that we learn about in private play as children.
Recommendations
Children’s data should be held in “escrow” and not used for any commercial purposes until a child reaches the age of majority and is able to authorize use as they choose.
Governments and organizations need to educate and inform parents about the mechanisms of A/IS and data collection in toys and the possible impact on children in the future.
Further Resources
K. Firth-Butterfield, “What happens when your child’s friend is an AI toy that talks back?” in World Economic Forum: Generation AI, https://www.weforum.org/agenda/2018/05/generation-ai-what-happens-when-your-childs-invisible-friend-is-an-ai-toy-that-talks-back/, May 22, 2018.
D. Basulto, “How artificial intelligence is moving from the lab to your kid’s playroom,” Washington Post, Oct. 15, 2015. [Online]. Available: https://www.washingtonpost.com/news/innovations/wp/2015/10/15/how-artificial-intelligence-is-moving-from-the-lab-to-your-kids-playroom/?utm_term=.89a1431a05a7 [Accessed Dec. 1, 2018].
S. Chaudron, R. Di Gioia, M. Gemo, D. Holloway, J. Marsh, G. Mascheroni, J. Peter, and D. Yamada-Rice, Kaleidoscope on the Internet of Toys - Safety, Security, Privacy and Societal Insights, EUR 28397 EN, doi:10.2788/05383, Luxembourg: Publications Office of the European Union, 2017. [Online]. Available: http://publications.jrc.ec.europa.eu/repository/handle/JRC105061.
Z. Kleinman, “Alexa, are you friends with our kids?” BBC News, July 16, 2018. [Online]. Available: https://www.bbc.com/news/technology-44847184. [Accessed Dec. 1, 2018].
We wish to acknowledge all of the people who contributed to this chapter.
The Personal Data and Individual Agency Committee
Katryna Dow (Co-Chair) - CEO & Founder at Meeco
John C. Havens (Co-Chair) - Executive Director, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; Executive Director, The Council on Extended Intelligence; Author, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines
Mads Schaarup Andersen - Senior Usable Security Expert in the Alexandra Institute’s Security Lab
Ariel H. Brio - Privacy and Data Counsel at Sony Interactive Entertainment
Walter Burrough - Co-Founder, Augmented Choice; PhD Candidate (Computer Science), Serious Games Institute
Danny W. Devriendt - Managing Director of Mediabrands Dynamic (IPG) in Brussels, and CEO of the Eye of Horus, a global think tank for communication-technology-related topics
Dr. D. Michael Franklin - Assistant Professor, Kennesaw State University, Marietta Campus, Marietta, GA
Jean-Gabriel Ganascia - Professor, University Pierre et Marie Curie; LIP6 Laboratory ACASA Group Leader
Bryant Joseph Gilot, MD CM DPhil MSc - Center for Personalised Medicine, University of Tuebingen Medical Center, Germany; Chief Medical Officer, Blockchain Health Co., San Francisco
David Goldstein - Seton Hall University
Adrian Gropper, M.D. - CTO, Patient Privacy Rights Foundation; HIE of One Project
Marsali S. Hancock - Chair, IEEE Standards for Child and Student Data Governance; CEO and Co-Founder, EP3 Foundation
Gry Hasselbalch - Founder, DataEthics; Author, Data Ethics - The New Competitive Advantage
Yanqing Hong - Graduate, University of Utrecht; Researcher at Tsinghua University
Professor Meg Leta Jones - Assistant Professor in the Communication, Culture & Technology program at Georgetown University
Mahsa Kiani - Chair of Student Activities, IEEE Canada; Vice Editor, IEEE Canada Newsletter (ICN); PhD Candidate, Faculty of Computer Science, University of New Brunswick
Brenda Leong - Senior Counsel, Director of Operations, The Future of Privacy Forum
Ewa Luger - Chancellor’s Fellow at the University of Edinburgh, within the Design Informatics Group
Sean Martin McDonald - CEO of FrontlineSMS; Fellow at Stanford’s Digital Civil Society Lab; Principal at Digital Public
Hiroshi Nakagawa - Professor, The University of Tokyo, and AI in Society Research Group Director at RIKEN Center for Advanced Intelligence Project (AIP)
Sofia C. Olhede - Professor of Statistics and an Honorary Professor of Computer Science at University College London, London, UK; Member of the Programme Committee of the International Centre for Mathematical Sciences
Ugo Pagallo - University of Turin Law School; Center for Transnational Legal Studies, London; NEXA Center for Internet & Society, Politecnico of Turin
Dr. Juuso Parkkinen - Senior Data Scientist, Nightingale Health; Programme Team Member, MyData 2017 conference
Eleonore Pauwels - Research Fellow on AI and Emerging Cybertechnologies, United Nations University (NY), and Director of the AI Lab, Woodrow Wilson International Center for Scholars (DC)
Dr. Deborah C. Peel - Founder, Patient Privacy Rights, and Creator, the International Summits on the Future of Health Privacy
Walter Pienciak - Principal Architect, Advanced Cognitive Architectures, Ltd.
Professor Serena Quattrocolo - University of Turin Law School
Carolyn Robson - Group Data Privacy Manager at Etihad Aviation Group
Gilad Rosner - Internet of Things Privacy Forum; Horizon Digital Economy Research Institute, UK; UC Berkeley Information School
Prof. Dr.-Ing. Ahmad-Reza Sadeghi - Director, System Security Lab, Technische Universität Darmstadt; Director, Intel Collaborative Research Institute for Secure Computing
Rose Shuman - Partner at BrightFront Group & Founder, Question Box
Dr. Zoltán Szlávik - Lead/Researcher, IBM Center for Advanced Studies Benelux
Udbhav Tiwari - Centre for Internet and Society, India
Endnotes
¹ Europäische Union, Europäischer Gerichtshof für Menschenrechte, & Europarat (Eds.). (2015). Handbook on European Law Relating to the Rights of the Child. Luxembourg: Publications Office of the European Union. https://www.echr.coe.int/Documents/Handbook_rights_child_ENG.PDF
² Children Act (1989). Retrieved from https://www.legislation.gov.uk/ukpga/1989/41/section/1
³ “Browser fingerprints, and why they are so hard to erase,” Network World, Feb. 17, 2015, https://www.networkworld.com/article/2884026/security0/browser-fingerprints-and-why-they-are-so-hard-to-erase.html. Accessed July 25, 2018.
⁴ D. Ghosh and B. Scott, “Digital Deceit: The Technologies Behind Precision Propaganda on the Internet,” Jan. 23, 2018, https://www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit/. Accessed Nov. 10, 2018.
⁵ Case described in Danish at https://dataethics.eu/trivsel-enhver-pris/
⁶ Russell, N. Cameron, Reidenberg, Joel R., Martin, Elizabeth, and Norton, Thomas, “Transparency and the Marketplace for Student Data” (June 6, 2018). Virginia Journal of Law and Technology, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3191436
⁷ Children’s Online Privacy Protection Act (COPPA) - https://www.ftc.gov/tips-advice/business-center/privacy-and-security/children%27s-privacy
For a full listing of all IEEE Global Initiative Members, visit standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf.
For information on disclaimers associated with EAD1e, see How the Document Was Prepared.
Methods to Guide Ethical Research and Design
Autonomous and intelligent systems (A/IS) research and design must be developed against the backdrop that technology is not neutral. A/IS embody values and biases that can influence important social processes like voting, policing, and banking. To ensure that A/IS benefit humanity, A/IS research and design must be underpinned by ethical and legal norms, instantiated through values-based research and design methods. Such methods put human well-being at the core of A/IS development.
To help achieve these goals, researchers, product developers, and technologists across all sectors need to embrace research and development methods that evaluate their processes, products, values, and design practices in light of the concerns and considerations raised in this chapter. This chapter is split into three sections:
Section 1-Interdisciplinary Education and Research
Section 2-Corporate Practices on A/IS
Section 3-Responsibility and Assessment
Each of the sections highlights various areas of concern (issues) as well as recommendations and further resources.
Overall, we address both structural and individual approaches. We discuss how to improve the ethical research and business practices surrounding the development of A/IS and attend to the responsibility of the technology sector vis-à-vis the public interest. We also look at what can be done at the level of educational institutions, among other things informing engineering students about ethics, social justice, and human rights. The values-based research and design method will require a change in organizations’ current system development approaches. This includes a commitment by research institutions to strong ethical guidelines for research and by businesses to values that transcend narrow economic incentives.
Section 1-Interdisciplinary Education and Research
Abstract
Integrating applied ethics into education and research to address the issues of A/IS requires an interdisciplinary approach, bringing together humanities, social sciences, physical sciences, engineering, and other disciplines.
Issue: Integration of ethics in A/IS-related degree programs
Background
A/IS engineers and design teams do not always thoroughly explore the ethical considerations implicit in their technical work and design choices. Moreover, the overall science, technology, engineering, and mathematics (STEM) field struggles with the complexity of ethical considerations, which cannot be readily articulated and translated into the formal languages of mathematics and computer programming associated with algorithms and machine learning.
Ethical issues can easily be rendered invisible, or inappropriately reduced and simplified, in the context of technical practice. For the dangers of this approach see, for instance, Lipton and Steinhardt (2018), listed under “Further Resources”. This problem is further compounded by the fact that many STEM programs do not sufficiently integrate applied ethics throughout their curricula. When they do, ethics is often relegated to a stand-alone course or module that gives students little or no direct experience in ethical decision-making. Ethics education should be meaningful, applicable, and incorporate best practices from the broader field.
The aim of these recommendations is to prepare students for technical training and engineering development methods that incorporate ethics as essential, so that ethics, and relevant principles like human rights, become a natural part of the design process.
Recommendations
Ethics training needs to be a core subject for all those in the STEM field, beginning at the earliest appropriate level and continuing through all advanced degrees.
Effective STEM ethics curricula should be informed by experts outside the STEM community from a variety of cultural and educational backgrounds to ensure that students acquire sensitivity to a diversity of robust perspectives on ethics and design.
Such curricula should teach aspiring engineers, computer scientists, and statisticians about the relevance and impact of their decisions in designing A/IS technologies. Effective ethics education in STEM contexts and beyond should span primary, secondary, and postsecondary education, and include both universities and vocational training schools.
Relevant accreditation bodies should reinforce this integrated approach as outlined above. 相关认证机构应强化上述这种综合性的教育方法。
Further Resources 延伸阅读
IEEE P7000 ^("TM "){ }^{\text {TM }} Standards Project for a Model Process for Addressing Ethical Concerns During System Design. IEEE P7000 aims to enhance corporate IT innovation practices by providing processes for embedding a values- and virtue-based thinking, culture, and practice into them. IEEE P7000 标准项目:系统设计中伦理问题的模型处理流程。该标准旨在通过将基于价值观和美德的思维、文化及实践融入企业 IT 创新流程,从而提升相关实践水平。
Z. Lipton and J. Steinhardt, Troubling Trends in Machine Learning Scholarship. ICML conference paper, July 2018. Z. Lipton 与 J. Steinhardt 合著《机器学习学术研究中的问题趋势》,ICML 会议论文,2018 年 7 月。
J. Holdren, and M. Smith. “Preparing for the Future of Artificial Intelligence.” Washington, DC: Executive Office of the President, National Science and Technology Council, 2016. J. Holdren 与 M. Smith 合著《为人工智能的未来做准备》,华盛顿特区:美国总统行政办公室、国家科学技术委员会,2016 年。
Comparing the UK, EU, and US approaches to AI and ethics: C. Cath, S. Wachter, B. Mittelstadt, et al., “Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach.” Science and Engineering Ethics, vol. 24, pp. 505-528, 2017. 比较英国、欧盟和美国在人工智能与伦理方面的不同路径:C. Cath, S. Wachter, B. Mittelstadt 等,《人工智能与"美好社会":美国、欧盟及英国的发展模式》,《科学与工程伦理》第 24 卷,第 505-528 页,2017 年。
Issue: Interdisciplinary collaborations 议题:跨学科协作
Background 背景
More institutional resources and incentive structures are necessary to bring A/IS engineers and designers into sustained and constructive contact with ethicists, legal scholars, and social scientists, both in academia and industry. This contact is necessary as it can enable meaningful interdisciplinary collaboration and shape the future of technological innovation. More could be done to develop methods, shared knowledge, and lexicons that would facilitate such collaboration. 需要投入更多机构资源和建立激励机制,促使人工智能/智能系统(A/IS)工程师和设计师与伦理学家、法学家及社会科学家在学术界和产业界保持持续而富有建设性的互动。这种互动至关重要,它能促成实质性的跨学科合作并塑造技术创新的未来走向。当前亟需开发更多促进此类协作的方法论、共享知识体系和专业术语体系。
This issue relates, among other things, to funding models as well as the lack of diversity of backgrounds and perspectives in A/IS-related institutions and companies, which limit cross-pollination between disciplines. To help bridge this gap, additional translation work and resource sharing, including websites and Massive Open Online Courses (MOOCs), need to happen among technologists and other relevant experts, e.g., in medicine, architecture, law, philosophy, psychology, and cognitive science. Furthermore, there is a need for more cross-disciplinary conversation and multi-disciplinary research, as is being done, for instance, at the annual ACM Fairness, Accountability, and Transparency (FAT*) conference or the work done by the Canadian Institute For Advanced Research (CIFAR), which is developing Canada’s AI strategy. 该问题尤其涉及资金模式以及人工智能/智能系统(A/IS)相关机构与企业中背景和观点多样性的缺失,这限制了学科间的交叉融合。为弥合这一鸿沟,技术人员与其他相关领域专家(如医学、建筑学、法学、哲学、心理学及认知科学)之间需要加强翻译工作与资源共享,包括网站和慕课(MOOCs)资源。此外,有必要开展更多跨学科对话与多学科研究,例如年度 ACM 公平性、问责性与透明度(FAT*)会议所倡导的实践,或加拿大高等研究院(CIFAR)正在推进的加拿大人工智能战略相关工作。
Recommendations 建议
Funding models and institutional incentive structures should be reviewed and revised to prioritize projects with interdisciplinary ethics components to encourage integration of ethics into projects at all levels. 应审查并修订资金模式与机构激励机制,优先支持具有跨学科伦理要素的项目,以促进伦理考量在各级项目中的整合。
Further Resources 延伸资源
S. Barocas, Course Material for Ethics and Policy in Data Science, Cornell University, 2017. S. Barocas,《数据科学伦理与政策》课程资料,康奈尔大学,2017 年。
L. Floridi and M. Taddeo, “What Is Data Ethics?” Philosophical Transactions of the Royal Society A, vol. 374, no. 2083, pp. 1-4, 2016. DOI: 10.1098/rsta.2016.0360. L. Floridi 与 M. Taddeo,《何为数据伦理?》,《皇家学会哲学汇刊 A》第 374 卷第 2083 期,第 1-4 页,2016 年。DOI: 10.1098/rsta.2016.0360。
S. Spiekermann, Ethical IT Innovation: A ValueBased System Design Approach. Boca Raton, FL: Auerbach Publications, 2015. S. Spiekermann,《伦理化信息技术创新:基于价值的系统设计方法》,佛罗里达州博卡拉顿:奥尔巴赫出版社,2015 年。
K. Crawford, “Artificial Intelligence’s White Guy Problem,” New York Times, June 26, 2016. [Online]. Available: http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?_r=1. [Accessed October 28, 2018]. K. Crawford,《人工智能的白人男性问题》,《纽约时报》,2016 年 6 月 26 日。[在线]。访问地址:http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?_r=1 。[访问于 2018 年 10 月 28 日]。
Issue: A/IS culture and context 议题:人工智能/智能系统(A/IS)的文化与语境
Background 背景
A responsible approach to embedding values into A/IS requires that algorithms and systems are created in a way that is sensitive to the variation of ethical practices and beliefs across cultures. The designers of A/IS need to be mindful of cross-cultural ethical variations while also respecting widely held international legal norms. 将价值观融入人工智能/智能系统的负责任做法,要求算法和系统的设计能够敏感地适应不同文化间伦理实践与信仰的差异。A/IS 设计者需要在尊重国际公认法律规范的同时,充分关注跨文化伦理差异。
Recommendation 建议
Establish a leading role for intercultural information ethics (IIE) practitioners in ethics committees informing technologists, policy makers, and engineers. Clearly demonstrate through examples how cultural variation informs not only information flows and information systems, but also algorithmic decision-making and value by design. 在指导技术人员、政策制定者和工程师的伦理委员会中确立跨文化信息伦理(IIE)从业者的主导地位。通过实例清晰展示文化差异如何不仅影响信息流动和信息系统,还作用于算法决策与设计价值观。
Further Resources 延伸阅读
D. J. Pauleen, et al., “Cultural Bias in Information Systems Research and Practice: Are You Coming from the Same Place I Am?” Communications of the Association for Information Systems, vol. 17, no. 17, 2006. D. J. 保林等,《信息系统研究与实践中的文化偏见:我们是否同源同流?》,《信息系统学会通讯》第 17 卷第 17 期,2006 年。
J. Bielby, “Comparative Philosophies in Intercultural Information Ethics,” Confluence: Online Journal of World Philosophies 2, no. 1, pp. 233-253, 2016. J. 比尔比,《跨文化信息伦理中的比较哲学》,《思想汇流:世界哲学在线期刊》第 2 卷第 1 期,第 233-253 页,2016 年。
Issue: Institutional ethics committees in the A/IS fields 议题:A/IS 领域的机构伦理委员会
Background 背景
It is unclear how research at the interface of humans and A/IS, of animals and A/IS, and of biological hazards will impact research ethics review boards. Norms, institutional controls, and risk metrics appropriate to the technology are not well established in the relevant literature and research governance infrastructure. Additionally, national and international regulations governing review of human-subjects research may explicitly or implicitly exclude A/IS research from their purview on the basis of legal technicalities or medical ethical concerns, regardless of the potential harms posed by the research. 目前尚不明确关于人类与 A/IS、动物与 A/IS 以及生物危害界面的研究将如何影响研究伦理审查委员会。适用于该技术的规范、机构管控措施和风险指标在相关文献和研究治理体系中尚未完善确立。此外,无论研究可能造成的潜在危害如何,基于法律技术细节或医学伦理考量,国内外关于人类受试者研究审查的法规可能会明确或隐含地将 A/IS 研究排除在其管辖范围之外。
Research on A/IS human-machine interaction, when it involves intervention or interaction with identifiable human participants or their data, typically falls to the governance of research ethics boards, e.g., institutional review boards. The national-level and institutional resources, e.g., at hospitals and universities, necessary to govern the ethical conduct of Human-Computer Interaction (HCI) research, particularly within the disciplines pertinent to A/IS research, are underdeveloped. 关于人工智能/智能系统(A/IS)人机交互的研究,当涉及可识别人类参与者或其数据的干预或交互时,通常属于研究伦理委员会(如机构审查委员会)的管辖范畴。目前,在国家层面和机构资源(如医院和大学)方面,对于管理人机交互(HCI)伦理行为所需的支持,特别是在与 A/IS 研究相关的学科领域内,仍处于发展不足的状态。
First, there is limited international or national guidance to govern this form of research. Sections of IEEE standards governing research on A/IS in medical devices address some of the issues related to the security of A/IS-enabled devices. However, the ethics of testing those devices for the purpose of bringing them to market are not developed into policies or guidance documents from recognized national and international bodies, e.g., the U.S. Food and Drug Administration (FDA) and the EU European Medicines Agency (EMA). Second, the bodies that typically train individuals to be gatekeepers for the research ethics bodies are under-resourced in terms of expertise in A/IS development, e.g., Public Responsibility in Medicine and Research (PRIM&R) and the Society of Clinical Research Associates (SoCRA). Third, it is not clear whether there is sufficient attention paid to A/IS ethics by research ethics board members or by researchers whose projects involve the use of human participants or their identifiable data. 首先,国际上或国内对于这类研究的治理指导十分有限。IEEE 标准中关于医疗设备中 A/IS 研究的部分条款虽然涉及了与 A/IS 设备安全性相关的一些问题,但针对这些设备为投放市场而进行测试的伦理规范,尚未形成美国食品药品监督管理局(FDA)和欧盟欧洲药品管理局(EMA)等国际公认机构颁布的政策或指导文件。其次,为研究伦理委员会培养把关人才的机构(如"医学与研究公共责任组织"PRIM&R 和"临床研究协会"SoCRA)在人工智能/智能系统(A/IS)开发领域的专业资源配备不足。第三,目前尚不清楚研究伦理委员会成员或涉及人类参与者及其可识别数据的研究人员是否对 A/IS 伦理问题给予了足够重视。
For example, the research ethics governing research at the interface of animals and A/IS is underdeveloped with respect to systematization for implementation by the Institutional Animal Care and Use Committee (IACUC) or other relevant committees. In institutions without a veterinary school, it is unclear whether the organization would have the relevant resources necessary to conduct an ethical review of such research. 以动物与 A/IS 交叉领域的研究伦理为例,目前尚未形成可供机构动物护理与使用委员会(IACUC)或其他相关委员会系统化实施的规范体系。在没有兽医学院的机构中,这类组织是否具备开展相关研究伦理审查所需的专业资源仍不明确。
Similarly, research pertinent to the intersection of radiological, biological, and toxicological research -ordinarily governed under institutional biosafety committees-and A/IS research is not often found in the literature pertinent to research ethics or research governance. 同样地,关于放射学、生物学和毒理学研究(通常由机构生物安全委员会监管)与人工智能/自主系统研究交叉领域的研究,在涉及科研伦理或研究治理的文献中并不常见。
Recommendation 建议
The IEEE and other standards-setting bodies should draw upon existing standards, empirical research, and expertise to identify priorities and develop standards for the governance of A/IS research and partner with relevant national agencies, and international organizations, when possible. IEEE 及其他标准制定机构应借鉴现有标准、实证研究和专业知识,确定优先事项并制定人工智能/自主系统研究的治理标准,在可能情况下与相关国家机构及国际组织建立合作伙伴关系。
Further Resources 延伸阅读
S. R. Jordan, “The Innovation Imperative.” Public Management Review 16, no. 1, pp. 67-89, 2014. S. R. 乔丹,《创新势在必行》,《公共管理评论》第 16 卷第 1 期,第 67-89 页,2014 年。
B. Shneiderman, “The Dangers of Faulty, Biased, or Malicious Algorithms Requires Independent Oversight,” Proceedings of the National Academy of Sciences of the United States of America, vol. 113, no. 48, pp. 13538-13540, 2016. B. 施奈德曼,《有缺陷、偏见或恶意算法的危害需要独立监督》,《美国国家科学院院刊》第 113 卷第 48 期,第 13538-13540 页,2016 年。
J. Metcalf and K. Crawford, “Where are Human Subjects in Big Data Research? The Emerging Ethics Divide.” Big Data & Society, May 14, 2016. [Online]. Available: SSRN: https://ssrn. com/abstract=2779647. [Accessed Nov. 1, 2018]. J. 梅特卡夫与 K. 克劳福德,《大数据研究中的人类主体何在?新兴的伦理鸿沟》,《大数据与社会》,2016 年 5 月 14 日。[在线]。参见:SSRN:https://ssrn.com/abstract=2779647。[2018 年 11 月 1 日访问]。
R. Calo, “Consumer Subject Review Boards: A Thought Experiment,” Stanford Law Review Online, vol. 66, p. 97, Sept. 2013. R. Calo,《消费者主体审查委员会:一项思想实验》,《斯坦福法律评论在线》第 66 卷,第 97 页,2013 年 9 月。
Section 2-Corporate Practices on A/IS 第二节 企业在人工智能/信息系统领域的实践
Corporations are eager to develop, deploy, and monetize A/IS, but there are insufficient structures in place for creating and supporting ethical systems and practices around A/IS funding, development, and use. 企业热衷于开发、部署人工智能/信息系统并实现其货币化,但在围绕该技术的资助、开发和使用方面,尚缺乏建立和维护伦理体系与实践的充分机制。
Issue: Values-based ethical culture and practices for industry 议题:基于价值观的行业伦理文化与实践
Background 背景
Corporations are built to create profit while competing for market share. This can lead corporations to focus on growth at the expense of avoiding negative ethical consequences. Given the deep ethical implications of widespread deployment of A/IS, in addition to laws and regulations, there is a need to create a values-based ethical culture and practices for the development and deployment of those systems. To do so, we need to further identify and refine corporate processes that facilitate values-based design. 企业的建立旨在创造利润并争夺市场份额。这可能导致企业以牺牲规避负面伦理后果为代价来追求增长。鉴于人工智能/智能系统(A/IS)广泛部署所带来的深刻伦理影响,除法律法规外,有必要为这些系统的开发与部署建立基于价值观的伦理文化和实践。为此,我们需要进一步识别并完善那些促进基于价值观设计的公司流程。
Recommendations 建议
The building blocks of such practices include top-down leadership, bottom-up empowerment, ownership, and responsibility, along with the need to consider system deployment contexts and/or ecosystems. Corporations should identify stages in their processes in which ethical considerations, “ethics filters”, are in place before products are further developed and deployed. 此类实践的构建要素包括自上而下的领导力、自下而上的赋权、所有权与责任意识,以及考虑系统部署环境及/或生态系统的必要性。企业应在其流程中识别出需要设置伦理考量的阶段——即"伦理过滤器"——以确保产品在进一步开发与部署前符合伦理标准。
For instance, if an ethics review board comes in at the right time during the A/IS creation process, it would help mitigate the likelihood of creating ethically problematic designs. The institution of an ethical A/IS corporate culture would accelerate the adoption of the other recommendations within this section focused on business practices. 例如,如果在人工智能/智能系统(A/IS)的创建过程中,伦理审查委员会能在恰当的时机介入,将有助于减少产生伦理问题设计的可能性。建立符合伦理规范的 A/IS 企业文化,将加速本节中其他关于商业实践建议的采纳。
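The "ethics filter" idea above can be made concrete as a stage gate in a product pipeline. The following is a minimal sketch under stated assumptions: the stage names, filter checks, and `Product` fields are hypothetical illustrations, not structures prescribed by this document.

```python
# Hypothetical sketch of "ethics filters" as stage gates in a product
# lifecycle; stage names and checks are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Product:
    name: str
    uses_personal_data: bool = False
    decisions_explainable: bool = True
    concerns: list = field(default_factory=list)

def privacy_filter(p: Product) -> list:
    # Flag products that process personal data without a documented basis.
    return ["personal data use needs a documented legal basis"] if p.uses_personal_data else []

def transparency_filter(p: Product) -> list:
    # Flag products whose decision rationale is not discoverable by end users.
    return [] if p.decisions_explainable else ["decision rationale is not discoverable"]

# Each lifecycle stage gates on its filters; a product advances only
# once every concern raised at that stage has been resolved.
STAGES = {
    "design": [privacy_filter],
    "deployment": [privacy_filter, transparency_filter],
}

def gate(product: Product, stage: str) -> bool:
    """Run the stage's ethics filters; return True if the product may proceed."""
    product.concerns = [c for f in STAGES[stage] for c in f(product)]
    return not product.concerns
```

In this sketch an unresolved concern blocks the stage transition, which is one pragmatic way a review board's input could be wired into existing development workflows rather than bolted on afterwards.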
Further Resources 延伸阅读
ACM Code of Ethics and Professional Ethics, which includes various references to human well-being and human rights, 2018. 《ACM 伦理与职业操守准则》(2018 年版),其中包含多项关于人类福祉与人权的参考条款。
Report of the UN Special Rapporteur on Freedom of Expression, AI and Freedom of Expression, 2018. 联合国言论自由特别报告员报告《人工智能与言论自由》(2018 年)
The website of the Benefit corporations (B-corporations) provides a good overview of a range of companies that personify this type of culture. B 型企业(B-corporations)的网站提供了体现此类文化的企业案例概览。
R. Sisodia, J. N. Sheth, and D. Wolfe, Firms of Endearment, 2nd edition. Upper Saddle River, NJ: FT Press, 2014. This book showcases how companies embracing values and a stakeholder approach outperform their competitors in the long run. R. 西索迪亚、J.N. 谢斯与 D. 沃尔夫,《令人倾心的企业》,第 2 版。新泽西州上鞍河市:FT 出版社,2014 年。本书展示了秉持价值观和利益相关者理念的企业如何实现长期竞争优势。
Issue: Values-based leadership 议题:基于价值观的领导力
Background 背景
Technology leadership should give innovation teams and engineers direction regarding which human values and legal norms should be promoted in the design of A/IS. Cultivating an ethical corporate culture is an essential component of successful leadership in the A/IS domain. 技术领导层应为创新团队和工程师提供指导,明确在人工智能/智能系统(A/IS)设计中应弘扬哪些人类价值观和法律规范。培育道德企业文化是 A/IS 领域成功领导力的核心要素。
Recommendations 建议
Companies should create roles for senior-level marketers, engineers, and lawyers who can collectively and pragmatically implement ethically aligned design. There is also a need for more in-house ethicists, or positions that fulfill similar roles. One potential way to ensure values are on the agenda in A/IS development is to have a Chief Values Officer (CVO), a role first suggested by Kay Firth-Butterfield, see “Further Resources”. However, ethical responsibility should not be delegated solely to CVOs. They can support the creation of ethical knowledge in companies, but in the end, all members of an organization will need to act responsibly throughout the design process. 企业应设立由高级市场人员、工程师和法律顾问组成的复合型职位,以务实方式共同实施符合伦理的设计方案。同时需要配备更多内部伦理专家或类似职能岗位。确保价值观融入 A/IS 开发议程的可行方案之一是设立首席价值观官(CVO)——该职位由凯·弗斯-巴特菲尔德首次提出(参见"扩展资源")。但伦理责任不应完全委派给 CVO,他们可以协助构建企业伦理知识体系,而最终需要组织全体成员在整个设计流程中践行责任意识。
Companies need to ensure that their understanding of values-based system innovation is based on de jure and de facto international human rights standards. 企业必须确保其价值观导向的系统创新理念符合国际人权标准的法律条文与实际规范。
Further Resources 延伸阅读
K. Firth-Butterfield, “How IEEE Aims to Instill Ethics in Artificial Intelligence Design,” The Institute. Jan. 19, 2017. [Online]. Available: http://theinstitute.ieee.org/ieee-roundup/ blogs/blog/how-ieee-aims-to-instill-ethics-in-artificial-intelligence-design. [Accessed October 28, 2018]. K. Firth-Butterfield,《IEEE 如何将伦理融入人工智能设计》,《The Institute》,2017 年 1 月 19 日。[在线]。获取地址:http://theinstitute.ieee.org/ieee-roundup/blogs/blog/how-ieee-aims-to-instill-ethics-in-artificial-intelligence-design。[访问日期:2018 年 10 月 28 日]。
United Nations, Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework, New York and Geneva: UN, 2011. 联合国,《工商企业与人权指导原则:实施联合国"保护、尊重和补救"框架》,纽约与日内瓦:联合国,2011 年。
Institute for Human Rights and Business (IHRB), and Shift, ICT Sector Guide on Implementing the UN Guiding Principles on Business and Human Rights, 2013. 人权与商业研究所(IHRB)与 Shift,《联合国工商企业与人权指导原则实施指南:ICT 行业篇》,2013 年。
C. Cath, and L. Floridi, “The Design of the Internet’s Architecture by the Internet Engineering Task Force (IETF) and Human Rights.” Science and Engineering Ethics, vol. 23, no. 2, pp. 449-468, Apr. 2017. C. Cath 和 L. Floridi,《互联网工程任务组(IETF)的互联网架构设计与人权》,《科学与工程伦理学》,第 23 卷第 2 期,第 449-468 页,2017 年 4 月。
Issue: Empowerment to raise ethical concerns 议题:提出伦理关切的赋权
Background 背景
Engineers and design teams may encounter obstacles to raising ethical concerns regarding their designs or design specifications within their organizations. Corporate culture should incentivize technical staff to voice the full range of ethical questions to relevant corporate actors throughout the full product lifecycle, including the design, development, and deployment phases. Because raising ethical concerns can be perceived as slowing or halting a design project, organizations need to consider how they can recognize and incentivize values-based design as an integral component of product development. 工程师和设计团队在组织内部针对其设计或设计规范提出伦理关切时可能遇到阻碍。企业文化应激励技术人员在整个产品生命周期(包括设计、开发和部署阶段)向相关企业参与者提出全方位的伦理问题。由于提出伦理问题可能被视为会减缓或阻碍设计项目进展,各组织需要考虑如何将基于价值观的设计视为产品开发的重要组成部分,并予以认可和激励。
Recommendations 建议
Employees should be empowered and encouraged to raise ethical concerns in day-to-day professional practice. 应赋予员工权力并鼓励他们在日常专业实践中提出伦理关切。
To be effective in ensuring adoption of ethical considerations during product development or internal implementation of A/IS, organizations should create a company culture and set of norms that encourage incorporating ethical considerations in the design and implementation processes. 为确保在产品开发或人工智能/智能系统(A/IS)内部实施过程中有效采纳伦理考量,组织应建立鼓励将伦理因素融入设计和实施流程的企业文化及规范体系。
New categories of considerations around these issues need to be accommodated, along with updated Codes of Conduct, company value statements, and other management principles, so individuals are empowered to share their insights and concerns in an atmosphere of trust. Additionally, bottom-up approaches like company “town hall meetings” should be explored that reward, rather than punish, those who bring up ethical concerns. 围绕这些议题的新考量范畴需要被纳入,同时更新行为准则、企业价值观声明及其他管理原则,从而在信任氛围中赋予员工分享见解与顾虑的权限。此外,应探索采用"市政厅会议"等自下而上的方式,对提出伦理质疑者予以奖励而非惩戒。
Further Resources 扩展资源
The British Computer Society (BCS), Code of Conduct, 2019. 英国计算机学会(BCS),《行为准则》,2019 年。
C. Cath, and L. Floridi, “The Design of the Internet’s Architecture by the Internet Engineering Task Force (IETF) and Human Rights,” Science and Engineering Ethics, vol. 23, no. 2, pp. 449-468, Apr. 2017. C. Cath 和 L. Floridi,《互联网工程任务组(IETF)的互联网架构设计与人权》,《科学与工程伦理》,第 23 卷第 2 期,第 449-468 页,2017 年 4 月。
Issue: Ownership and responsibility 议题:所有权与责任
Background 背景
There is variance within the technology community in how it sees its responsibility regarding A/IS. These differences in values and behaviors are not necessarily aligned with the broader set of social concerns raised by public, legal, and professional communities. The current makeup of most organizations has clear delineations among engineering, legal, and marketing functions. Thus, technologists will often be incentivized in terms of meeting functional requirements, deadlines, and financial constraints, but for larger social issues may say, “Legal will handle that.” In addition, in technology employment and management contexts, “ethics” typically refers to a code of conduct regarding professional behavior rather than a values-driven design process mentality. 技术界对于自身在自主/智能系统(A/IS)方面的责任认知存在差异。这种价值观和行为方式的差异未必与公众、法律和专业团体提出的更广泛社会关切相一致。当前大多数组织的架构中,工程、法律和营销职能之间存在明确界限。因此技术人员往往以满足功能需求、工期要求和财务限制为激励目标,而对于更宏观的社会议题可能会表示"法务部门会处理"。此外,在技术就业与管理场景中,"伦理"通常指涉专业行为守则,而非基于价值观的设计流程思维。
As such, ethics regarding professional conduct often implies moral issues such as integrity or the lack thereof, in the case of whistleblowing, for instance. However, ethics in A/IS design include broader considerations about the consequences of technologies. 因此,关于职业行为的伦理通常涉及诚信等道德问题,例如在举报行为中体现的诚信或诚信缺失。然而,人工智能/信息系统(A/IS)设计中的伦理则包含对技术后果的更广泛考量。
Recommendations 建议
Organizations should clarify the relationship between professional ethics and applied A/IS ethics by helping or enabling designers, engineers, and other company representatives to discern the differences between these kinds of ethics and where they complement each other. 各组织应通过帮助或促使设计师、工程师及其他公司代表辨别这些伦理类型之间的差异及其互补关系,来阐明职业伦理与应用型 A/IS 伦理之间的关联。
Corporate ethical review boards, or comparable mechanisms, should be formed to address ethical and behavioral concerns in relation to A/IS design, development and deployment. Such boards should seek an appropriately diverse composition and use relevant criteria, including both research ethics and product ethics, at the appropriate levels of advancement of research and development. These boards should examine justifications of research or industrial projects. 应成立企业伦理审查委员会或类似机制,以解决与自主/智能系统(A/IS)设计、开发和部署相关的伦理及行为问题。此类委员会应确保成员构成具有适当多样性,并在研发进程的相应阶段,运用包括研究伦理和产品伦理在内的相关标准。这些委员会应对研究或产业项目的合理性进行审查。
Further Resources 延伸阅读
H. H. van der Kloot Meijberg and R. H. J. ter Meulen, “Developing Standards for Institutional Ethics Committees: Lessons from the Netherlands,” Journal of Medical Ethics, vol. 27, suppl. 1, pp. i36-i40, 2001. H. H. van der Kloot Meijberg 与 R. H. J. ter Meulen 合著《制定机构伦理委员会标准:荷兰经验启示》,载《医学伦理学杂志》第 27 卷增刊 1,第 i36-i40 页,2001 年。
Issue: Stakeholder inclusion 议题:利益相关方参与
Background 背景
The interface between A/IS and practitioners, as well as other stakeholders, is gaining broader attention in domains such as healthcare diagnostics, and there are many other contexts where there may be different levels of involvement with the technology. We should recognize that, for example, occupational therapists and their assistants may have on-the-ground expertise in working with a patient, who might be the “end user” of a robot or social A/IS technology. In order to develop a product that is ethically aligned, stakeholders’ feedback is crucial to design a system that takes ethical and social issues into account. There are successful user experience (UX) design concepts, such as accessibility, that consider human physical disabilities, which should be incorporated into A/IS as they are more widely deployed. It is important to continuously consider the impact of A/IS through unanticipated use and on unforeseen interests. 人工智能/智能系统(A/IS)与从业者及其他利益相关者之间的交互界面在医疗诊断等领域正获得更广泛的关注,同时在其他应用场景中也可能存在不同层次的技术参与。我们应当认识到,例如职业治疗师及其助理在与患者(可能是机器人或社交型 A/IS 技术的"终端用户")的实际工作中具备现场专业知识。为了开发符合伦理规范的产品,利益相关者的反馈对于设计一个兼顾伦理和社会问题的系统至关重要。已有一些成功的用户体验(UX)设计理念,例如考虑人类身体残障的无障碍设计原则,随着 A/IS 技术更广泛部署,这些理念应当被纳入系统设计。持续评估 A/IS 技术通过非预期使用方式对未预见利益群体产生的影响至关重要。
Recommendations 建议
To ensure representation of stakeholders, organizations should enact a planned and controlled set of activities to account for the interests of the full range of stakeholders or practitioners who will be working alongside A/IS and incorporating their insights to build upon, rather than circumvent or ignore, the social and practical wisdom of involved practitioners and other stakeholders. 为确保利益相关方的代表性,组织应实施一套有计划且受控的活动流程,充分考虑将与自主/智能系统(A/IS)协同工作的各领域利益相关方或从业者的权益,通过吸纳他们的专业见解来构建而非规避或忽视相关从业者及其他利益相关方的社会经验与实践智慧。
Further Resources 延伸阅读资源
C. Schroeter, et al., “Realization and User Evaluation of a Companion Robot for People with Mild Cognitive Impairments,” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2013), Karlsruhe, Germany, 2013, pp. 1145-1151. C. Schroeter 等,《轻度认知障碍患者陪伴机器人的实现与用户评估》,载于《2013 年 IEEE 机器人与自动化国际会议论文集》(ICRA 2013),德国卡尔斯鲁厄,2013 年,第 1145-1151 页。
T. L. Chen, et al. “Robots for Humanity: Using Assistive Robotics to Empower People with Disabilities,” IEEE Robotics and Automation Magazine, vol. 20, no. 1, pp. 30-39, 2013. T. L. Chen 等,《人性化机器人:通过辅助机器人技术赋能残障人士》,载于《IEEE 机器人与自动化杂志》第 20 卷第 1 期,2013 年,第 30-39 页。
R. Hartson, and P. S. Pyla. The UX Book: Process and Guidelines for Ensuring a Quality User Experience. Waltham, MA: Elsevier, 2012. R. Hartson 与 P. S. Pyla 合著。《用户体验设计指南:确保优质用户体验的流程与规范》。马萨诸塞州沃尔瑟姆:爱思唯尔出版社,2012 年。
Issue: Values-based design 议题:基于价值观的设计
Background 背景
Ethics are often treated as an impediment to innovation, even among those who ostensibly support ethical design practices. In industries that reward rapid innovation in particular, it is necessary to develop ethical design practices that integrate effectively with existing engineering workflows. Those who advocate for ethical design within a company should be seen as innovators seeking the best outcomes for the company, end users, and society. Leaders can facilitate that mindset by promoting an organizational structure that supports the integration of dialogue about ethics throughout product life cycles. 伦理常被视为创新的阻碍,即使在那些表面上支持伦理设计实践的人群中亦是如此。在尤其推崇快速创新的行业里,有必要开发能与现有工程流程有效融合的伦理设计实践。企业内部倡导伦理设计的人士,应被视为追求企业、终端用户和社会最佳效益的创新者。领导者可通过建立支持产品全生命周期伦理对话的组织架构,来促成这种思维模式的转变。
A/IS design processes often present moments where ethical consequences can be highlighted. There are no universally prescribed models for this because organizations vary significantly in structure and culture. In some organizations, design team meetings may be brief and informal. In others, the meetings may be lengthy and structured. The transition points between discovery, prototyping, release, and revisions are natural contexts for conducting such reviews. Iterative review processes are also advisable, in part because changes to risk profiles over time can illustrate needs or opportunities for improving the final product. 人工智能/智能系统(A/IS)的设计流程中常存在可凸显伦理影响的环节。由于组织结构和文化差异显著,目前尚无普适性的规范模型。某些组织的设计团队会议可能简短而随意,另一些则可能冗长且结构化。在需求发现、原型开发、产品发布和迭代修订等阶段过渡节点,自然构成了开展伦理评审的适当时机。采用迭代式评审流程同样可取,部分原因在于风险特征的动态变化能揭示优化最终产品的需求或机遇。
Recommendations 建议
Companies should study design processes to identify situations where engineers and researchers can be encouraged to raise and resolve questions of ethics and foster a proactive environment to realize ethically aligned design. Achieving a distributed responsibility for ethics requires that all people involved in product design are encouraged to notice and respond to ethical concerns. Organizations should consider how they can best encourage and facilitate deliberations among peers. 企业应系统研究设计流程,识别可激励工程师和研究人员提出并解决伦理问题的情境,培育实现伦理对齐设计的主动文化。要实现伦理责任的分布式承担,必须鼓励所有产品设计参与者关注并回应伦理问题。各组织需考量如何最优地促进同行间的伦理审议机制。
Organizations should identify points for formal review during product development. These reviews can focus on “red flags” that have been identified in advance as indicators of risk. For example, if the datasets involve minors or focus on users from protected classes, then it may require additional justification or alterations to the research or development protocols. 企业应在产品开发过程中设置正式审查节点。这些审查可重点关注预先确定的"危险信号"——即风险指标。例如,若数据集涉及未成年人或聚焦受保护群体用户,则可能需要提供额外论证或调整研发方案。
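The "red flag" review described above lends itself to a simple automated pre-check. The sketch below assumes hypothetical dataset metadata fields; the flag names and messages are illustrative, not a prescribed taxonomy.

```python
# Hypothetical "red flag" pre-review check for a formal design review.
# Metadata keys and trigger messages are illustrative assumptions.
RED_FLAGS = {
    "contains_minors": "data on minors requires additional justification",
    "protected_class_focus": "targeting a protected class requires protocol review",
    "identifiable_subjects": "identifiable human data requires ethics board sign-off",
}

def review_dataset(metadata: dict) -> list:
    """Return the review actions triggered by a dataset's metadata.

    An empty list means no pre-identified red flag applies and the
    project can proceed to the normal review stage.
    """
    return [msg for key, msg in RED_FLAGS.items() if metadata.get(key)]
```

A check like this does not replace human deliberation; it only guarantees that the pre-agreed indicators of risk are surfaced at each formal review point rather than depending on an individual reviewer remembering them.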
Further Resources 延伸阅读
A. Sinclair, “Approaches to Organizational Culture and Ethics,” Journal of Business Ethics, vol. 12, no. 1, pp. 63-73, 1993. A. 辛克莱,《组织文化与伦理的探讨方法》,《商业伦理期刊》第 12 卷第 1 期,第 63-73 页,1993 年。
Al Y. S. Chen, R. B. Sawyers, and P. F. Williams, “Reinforcing Ethical Decision Making Through Corporate Culture,” Journal of Business Ethics, vol. 16, no. 8, pp. 855-865, 1997. Al Y. S. Chen、R. B. 索耶斯、P. F. 威廉姆斯,《通过企业文化强化伦理决策》,《商业伦理期刊》第 16 卷第 8 期,第 855-865 页,1997 年。
K. Crawford and R. Calo, “There Is a Blind Spot in AI Research,” Nature, vol. 538, pp. 311-313, 2016. K. Crawford 和 R. Calo,《人工智能研究存在盲点》,《自然》杂志第 538 卷,第 311-313 页,2016 年。
Section 3-Responsibility and Assessment 第三节 责任与评估
Lack of accountability of the A/IS design and development process presents a challenge to ethical implementation and oversight. This section presents four issues, moving from macro oversight to micro documentation practices. 人工智能/智能系统(A/IS)设计与开发过程缺乏问责机制,对伦理实施与监管构成挑战。本节从宏观监管到微观记录实践,提出四个关键问题。
Issue: Oversight for algorithms 问题:算法监管机制
The algorithms behind A/IS are not subject to consistent oversight. This lack of assessment causes concern because end users have no account of how a certain algorithm or system came to its conclusions. These recommendations are similar to those made in the “General Principles” and “Embedding Values into Autonomous and Intelligent Systems” chapters of Ethically Aligned Design, but here they are applied to the narrow scope of this chapter. A/IS 背后的算法缺乏持续监管。这种评估机制的缺失引发担忧,因为终端用户无从了解特定算法或系统如何得出其结论。这些建议与《伦理对齐设计》中"通用原则"和"将价值观嵌入自主智能系统"章节提出的观点相似,但此处建议仅适用于本章的特定范畴。
Recommendations 建议
Accountability: As touched on in the General Principles chapter of Ethically Aligned Design, algorithmic transparency is an issue of concern. It is understood that specifics relating to algorithms or systems contain intellectual property that cannot, or will not, be released to the general public. Nonetheless, standards providing oversight of the manufacturing process of A/IS technologies need to be created to avoid harm and negative consequences. We can look to other technical domains, such as biomedical, civil, and aerospace engineering, where commercial protections for proprietary technology are routinely and effectively balanced with the need for appropriate oversight standards and mechanisms to safeguard the public. 问责机制:如《伦理对齐设计》通用原则章节所述,算法透明度是核心关切问题。我们理解涉及算法或系统的具体细节可能包含无法或不愿向公众公开的知识产权。尽管如此,仍需建立监督 A/IS 技术制造过程的标准,以避免伤害和负面后果。可借鉴生物医学、土木和航空航天工程等其他技术领域的经验,这些领域对专有技术的保护措施,能够与保障公众利益的适当监督标准和机制进行常规且有效的平衡。
Human rights and algorithmic impact assessments should be explored as a meaningful way to improve the accountability of A/IS. These need to be paired with public consultations, and the final impact assessments must be made public. 应探索将人权与算法影响评估作为提升自主/智能系统(A/IS)问责制的有效途径。这些措施需配合公众咨询共同实施,最终的影响评估结果必须向社会公开。
Further Resources 延伸阅读
F. Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press, 2016. F. 帕斯奎尔,《黑箱社会:控制金钱与信息的秘密算法》,马萨诸塞州剑桥市:哈佛大学出版社,2016 年。
R. Calo, “Artificial Intelligence Policy: A Primer and Roadmap,” UC Davis Law Review, vol. 51, pp. 399-435, 2017. R. 卡洛,《人工智能政策:入门与路线图》,《加州大学戴维斯分校法律评论》,第 51 卷,第 399-435 页,2017 年。
We need independent, expert opinions that provide guidance to the general public regarding A/IS. Currently, there is a gap between how A/IS are marketed and their actual performance or application. We need to ensure that A/IS technology is accompanied by best-use recommendations and associated warnings. Additionally, we need to develop a certification scheme for A/IS which ensures that the technologies have been independently assessed as being safe and ethically sound. 我们需要独立的专家意见,为公众提供关于自主/智能系统(A/IS)的指导。目前,A/IS 的市场宣传与其实际性能或应用之间存在差距。我们必须确保 A/IS 技术配备最佳使用建议和相关警示。此外,还需建立 A/IS 认证体系,确保这些技术经过独立评估,达到安全与伦理标准。
For example, today it is possible for systems to download new self-parking functionality to cars, yet no independent reviewer establishes or characterizes boundaries of use. Or, when a companion robot promises to watch your children, there is no organization that can issue an independent seal of approval for, or limitation on, these devices. We need a ratings and approval system, ready as soon as possible, to serve the social and automation technologies coming online. We also need further government funding for research into how A/IS technologies can best be subjected to review, and how review organizations can consider both traditional health and safety issues and ethical considerations. 例如,当前系统可为汽车下载自动泊车新功能,却没有任何独立评审机构界定或说明其使用边界。又如陪伴机器人承诺照看儿童时,没有组织能对这些设备颁发独立认证或使用限制标识。我们需要尽快建立评级与认证体系,以应对即将涌现的社会化/自动化技术。同时需要政府加大资金投入,研究如何对 A/IS 技术实施有效审查,以及审查机构如何兼顾传统健康安全问题和伦理考量。
Recommendations 建议方案
An independent, internationally coordinated body, akin to ISO, should be formed to oversee whether A/IS products actually meet ethical criteria when designed, developed, and deployed, as well as during their post-deployment evolution and interaction with other products. It should also include a certification process. 应成立一个独立的、国际协调的机构——类似于国际标准化组织(ISO)——负责监督人工智能/自主系统(A/IS)产品在设计、开发、部署阶段以及部署后演变及与其他产品交互过程中是否真正符合伦理标准。该机构还应包含认证流程。
Further Resources 延伸阅读
A. Tutt, “An FDA for Algorithms,” Administrative Law Review, vol. 69, pp. 83-123, 2016. A. Tutt,《算法领域的 FDA》,《行政法评论》第 69 卷,第 83-123 页,2016 年。
M. U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Harvard Journal of Law and Technology, vol. 29, no. 2, pp. 354-400, 2016. M. U. Scherer,《人工智能系统监管:风险、挑战、能力与策略》,《哈佛科技法期刊》第 29 卷第 2 期,第 354-400 页,2016 年。
D. R. Desai and J. A. Kroll, “Trust But Verify: A Guide to Algorithms and the Law.” Harvard Journal of Law and Technology, Forthcoming; Georgia Tech Scheller College of Business Research Paper No. 17-19, 2017. D. R. 德赛与 J. A. 克罗尔合著,《信任但需验证:算法与法律指南》。《哈佛科技法学期刊》(即将出版);佐治亚理工学院谢勒商学院研究论文第 17-19 号,2017 年。
Issue: Use of black-box components
Background
Software developers regularly use “black box” components in their software, the functioning of which they often do not fully understand. “Deep” machine learning processes, which are driving many advancements in autonomous and intelligent systems, are a growing source of black box software. At least for the foreseeable future, A/IS developers will likely be unable to build systems that are guaranteed to operate as intended.
Recommendations
When systems are built that could impact the safety or well-being of humans, it is not enough simply to presume that a system works. Engineers must acknowledge and assess the ethical risks involved with black box software and implement mitigation strategies.
Technologists should be able to characterize what their algorithms or systems are going to do via documentation, audits, and transparent and traceable standards. To the degree possible, these characterizations should be predictive, but given the nature of A/IS, they might need to be more retrospective and mitigation-oriented. As such, it is also important to ensure access to remedies for adverse impacts.
Technologists and corporations must do their ethical due diligence before deploying A/IS technology. Standards for what constitutes ethical due diligence would ideally be generated by an international body such as the IEEE or ISO; barring that, each corporation should work to generate a set of ethical standards by which its processes are evaluated and modified. Similar to a flight data recorder in the field of aviation, algorithmic traceability can provide insights into what computations led to questionable or dangerous behaviors. Even where such processes remain somewhat opaque, technologists should seek indirect means of validating results and detecting harms.
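The flight-data-recorder analogy can be made concrete with a minimal sketch. The code below is illustrative only: `DecisionRecorder` and `score_applicant` are hypothetical names, the scoring rule is a stand-in for an opaque model, and a real deployment would write to tamper-evident, append-only storage rather than an in-memory list.

```python
import json
import time

class DecisionRecorder:
    """Append-only log of system decisions, loosely analogous to a
    flight data recorder: each entry captures what the system saw,
    what it decided, and which model version decided it."""

    def __init__(self, model_version):
        self.model_version = model_version
        self.entries = []  # in practice: tamper-evident external storage

    def record(self, inputs, output):
        # Capture one decision event with enough context for an
        # after-the-fact audit.
        entry = {
            "timestamp": time.time(),
            "model_version": self.model_version,
            "inputs": inputs,
            "output": output,
        }
        self.entries.append(entry)
        return entry

    def export(self):
        # Serialized trail an auditor could replay after an incident.
        return json.dumps(self.entries, indent=2)

def score_applicant(features, recorder):
    # Stand-in for a black-box model: the recorder does not need to
    # understand the model, only to capture its inputs and outputs.
    decision = "approve" if features.get("income", 0) > 30000 else "review"
    recorder.record(features, decision)
    return decision
```

The point of the design is that traceability is added around the opaque component, not inside it, which is why it remains feasible even when the model itself resists inspection.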
Further Resources
M. Ananny and K. Crawford, “Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability,” New Media & Society, vol. 20, no. 3, pp. 973-989, Dec. 13, 2016.
D. Reisman, J. Schultz, K. Crawford, and M. Whittaker, “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/aiareport2018.pdf. [Accessed October 28, 2018].
J. A. Kroll, “The Fallacy of Inscrutability,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, C. Cath, S. Wachter, B. Mittelstadt, and L. Floridi, Eds., October 15, 2018. DOI: 10.1098/rsta.2018.0084.
Issue: Need for better technical documentation
Background
A/IS are often construed as fundamentally opaque and inscrutable. However, lack of transparency is often the result of human decision. The problem can be traced to a variety of sources, including poor documentation that excludes vital information about the limitations and assumptions of a system. Better documentation, combined with internal and external auditing, is crucial to understanding a system’s ethical impact.
Recommendation
Engineers should be required to thoroughly document the end product and related data flows, performance, limitations, and risks of A/IS. Behaviors and practices that have been prominent in the engineering processes should also be explicitly presented, as well as empirical evidence of compliance and methodology used, such as training data used in predictive systems, algorithms and components used, and results of behavior monitoring. Criteria for such documentation could be: auditability, accessibility, meaningfulness, and readability.
Companies should make their systems auditable and should explore novel methods for external and internal auditing.
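One way to operationalize these criteria is to treat documentation as a structured, machine-checkable artifact rather than free text, so that an audit pipeline can verify completeness before release. The sketch below is a minimal illustration, not a standard: `SystemDatasheet` and its fields are hypothetical names, loosely modeled on the criteria listed above.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class SystemDatasheet:
    """Structured documentation record: intended use, training data,
    known limitations and risks, and monitoring results, serializable
    so internal and external auditors read the same artifact."""
    name: str
    intended_use: str
    training_data: str
    limitations: list = field(default_factory=list)
    known_risks: list = field(default_factory=list)
    monitoring_results: dict = field(default_factory=dict)

    def missing_fields(self):
        # A completeness check an audit pipeline could enforce
        # before a release is approved: any empty field is flagged.
        return [k for k, v in asdict(self).items() if not v]
```

Because the record is plain data, it can be versioned alongside the system it describes, satisfying the auditability and accessibility criteria in a mechanical way; meaningfulness and readability still require human review.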
Further Resources
S. Wachter, B. Mittelstadt, and L. Floridi, “Transparent, Explainable, and Accountable AI for Robotics,” Science Robotics, vol. 2, no. 6, May 31, 2017. [Online]. Available: DOI: 10.1126/scirobotics.aan6080. [Accessed Nov.
S. Barocas and A. D. Selbst, “Big Data’s Disparate Impact,” California Law Review, vol. 104, pp. 671-732, 2016.
J. A. Kroll, J. Huey, S. Barocas, E. W. Felten, J. R. Reidenberg, D. G. Robinson, and H. Yu, “Accountable Algorithms,” University of Pennsylvania Law Review, vol. 165, no. 1, pp. 633-705, 2017.
J. M. Balkin, “Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation,” UC Davis Law Review, 2017.
Thanks to the Contributors
We wish to acknowledge all of the people who contributed to this chapter.
The Methods to Guide Ethical Research and Design Committee
Corinne Cath-Speth (Co-Chair) - PhD student at the Oxford Internet Institute, University of Oxford; doctoral student at the Alan Turing Institute; Digital Consultant at ARTICLE 19
Raja Chatila (Co-Chair) - CNRS-Sorbonne Institute of Intelligent Systems and Robotics, Paris, France; Member of the French Commission on the Ethics of Digital Sciences and Technologies (CERNA); Past President of the IEEE Robotics and Automation Society
Thomas Arnold - Research Associate at the Tufts University Human-Robot Interaction Laboratory
Jared Bielby - President, Netizen Consulting Ltd; Chair, International Center for Information Ethics; editor, Information Cultures in the Digital Age
Reid Blackman, PhD - Founder & CEO, Virtue Consultants; Assistant Professor of Philosophy, Colgate University
Tom Guarriello, PhD - Founding faculty member in the Master’s in Branding program at New York City’s School of Visual Arts; host of the RoboPsych Podcast and author of the RoboPsych Newsletter
John C. Havens - Executive Director, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; Executive Director, The Council on Extended Intelligence; author, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines
Sara Jordan - Assistant Professor of Public Administration in the Center for Public Administration & Policy at Virginia Tech
Jason Millar - Professor of robot ethics at Carleton University
Sarah Spiekermann - Chair of the Institute for Information Systems & Society at Vienna University of Economics and Business; author of the textbook “Ethical IT-Innovation”, the popular book “Digitale Ethik - Ein Wertesystem für das 21. Jahrhundert”, and blogger on “The Ethical Machine”
Shannon Vallor - William J. Rewak Professor in the Department of Philosophy at Santa Clara University in Silicon Valley; Executive Board member of the Foundation for Responsible Robotics
Wilhelm E. J. Klein, PhD - Senior Research Associate & Lecturer in Technology Ethics, City University of Hong Kong
For a full listing of all IEEE Global Initiative Members, visit standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf.
For information on disclaimers associated with EAD1e, see How the Document Was Prepared.
A/IS for Sustainable Development
Autonomous and intelligent systems (A/IS) offer unique and impactful opportunities, as well as risks, both to people living in high-income countries (HIC) and to those in low- and middle-income countries (LMIC). The scaling and use of A/IS represent a genuine opportunity across the globe to provide individuals and communities, be they rural, semi-urban, or urban, with the means to satisfy their needs and develop their full potential, with greater autonomy and choice. A/IS will potentially disrupt economic, social, and political relationships and interactions at many levels. Those disruptions could provide a historic opportunity to reset those relationships in order to distribute power and wealth more equitably and thus promote social justice.¹ They could also raise the quality and standard of life, protect people’s dignity, maintain cultural diversity, and protect the environment.
One possible vehicle that can be used to agree on priorities and to prioritize resources and actions is the United Nations Agenda for Sustainable Development, adopted by the UN General Assembly in 2015; 193 nations voted in favor of the Agenda, which includes 17 Sustainable Development Goals (SDGs) for the world to achieve by 2030. The Agenda challenges all member states to make concerted efforts toward these goals, and thus toward a sustainable, prosperous, and resilient future for people and the planet. These universally applicable goals should be reached by 2030.²
The value of A/IS is significantly associated with the generation of various types of superior and unique insights, many of which could help achieve positive socioeconomic outcomes for both HIC and LMIC societies, in keeping with the SDGs. The ethical imperative driving this chapter is that A/IS must be harnessed to benefit humanity, promote equality, and realize the world community’s vision of a sustainable future and the SDGs:
… of universal respect for human rights and human dignity, the rule of law, justice, equality and nondiscrimination; of respect for race, ethnicity and cultural diversity; and of equal opportunity permitting the full realization of human potential and contributing to shared prosperity. A world which invests in its children and in which every child grows up free from violence and exploitation. A world in which every woman and girl enjoys full gender equality and all legal, social and economic barriers to their empowerment have been removed. A just, equitable, tolerant, open and socially inclusive world in which the needs of the most vulnerable are met.³
We recognize that how A/IS are deployed globally will be a determining factor in whether, in fact, “no one gets left behind”, whether the human rights and dignity of all people are respected, whether children are protected, and whether the gap between rich and poor, within and between nations, narrows or widens. A/IS can advance the Sustainable Development Agenda’s transformative vision, but at the same time, A/IS can undermine it if the risks reviewed in this chapter are not managed properly.
For example, A/IS create the risk of accelerating inequality within and among nations if their development and marketing are controlled by a few select companies, primarily in HIC. The benefits would largely accrue to the highly educated and wealthier segments of the population, while displacing the less educated workforce, both through automation and through the absence of educational or retraining systems capable of imparting the skills and knowledge needed to work productively alongside A/IS. These risks, although differentiated by IT infrastructure, educational attainment, and economic and cultural contexts, exist in HIC and LMIC alike. The inequality in accessing and using the internet, both within and among countries, raises questions about how to spread A/IS benefits across humanity. Ensuring A/IS “for the common good” is an ethical imperative and is at the core of Ethically Aligned Design, First Edition; the key elements of this “common good” are that it is human-centered, accountable, and ensures outcomes that are fair and inclusive.
This chapter explores the imperative for A/IS to serve humanity by improving the quality and standard of life for all people everywhere. It makes recommendations for advancing equal access to this transformative technology, so that it drives the well-being of all people, rather than further concentrating wealth, resources, and decision-making power in the hands of a few countries, companies, or citizens. The recommendations further reflect policies and collaborative public, private, and people programs which, if implemented, will respect the ethical imperative embedded in the Sustainable Development Agenda’s transformative vision. The respect of human rights and dignity, and the advancement of the “common good” with equal benefit to both HIC and LMIC, are central to every recommendation within this chapter.
Section 1: A/IS in Service to Sustainable Development for All
A/IS have the potential to contribute to the resolution of some of the world’s most pressing problems, including violations of fundamental rights, poverty, exploitation, climate change, the lack of high-quality services for excluded populations, and increased violence, and to the achievement of the SDGs.
Issue: Current roadmaps for the development and deployment of A/IS are not aligned with or guided by their impact on the most important challenges facing humanity, defined in the seventeen United Nations Sustainable Development Goals (SDGs), which collectively aspire to create a more equal world of prosperity, peace, planet protection, and human dignity for all people.⁴
Background
The SDGs, promoting prosperity, peace, planet protection, human dignity, and respect for the human rights of all, apply to HIC and LMIC alike. Yet ensuring that the benefits of A/IS will accrue to humanity as a whole, leaving “no one behind”, requires an ethical commitment to global citizenship and well-being, and a conscious effort to counter the nature of the tech economy, with its tendency to concentrate wealth within high-income populations. Implementation of the SDGs should benefit excluded sectors of society in every country, regardless of A/IS infrastructure.
“The Road to Dignity by 2030”, the report of the UN Secretary-General on resources and methods for implementing the 2030 Agenda for Sustainable Development, emphasizes the importance of science, technology, and innovation for a sustainable future.⁵ The UN Secretary-General posits that:
“A sustainable future will require that we act now to phase out unsustainable technologies and to invest in innovation and in the development of clean and sound technologies for sustainable development. We must ensure that they are fairly priced, broadly disseminated and fairly absorbed, including to and by developing countries.” (para. 120)
A/IS are among the technologies that can play an important role in the solution of the deep social problems plaguing our global civilization, contributing to the transformation of society away from an unsustainable, unequal socioeconomic system, towards one that realizes the vision of universal human dignity, peace, and prosperity.
However, with all the potential benefits of A/IS, there are also risks. For example, given A/IS technology’s immense power needs, without new sources of sustainable energy harnessed to power A/IS in the future, there is a risk that it will increase fossil fuel use and have a negative impact on the environment and the climate.
While 45% of the world’s population is not connected to the internet, they are not necessarily excluded from A/IS’ potential benefits: in LMIC, mobile networks can provide data for A/IS applications. However, only those connected are likely to benefit from the income-producing potential of internet technologies. In 2017, internet penetration in HIC left behind certain portions of the population, often in rural or remote areas; 12% of U.S. residents and 20% of residents across Europe were unable to access the internet. In Asia, with its concentration of LMIC, 52% of the population, on average, had no access, a statistic skewed by the large population of China, where internet penetration reached 45% of the population. In numerous other countries in the region, 99% of residents had no access. This nearly total exclusion also exists in several countries in Africa, where overall internet penetration is only 35%: 2 of every 3 residents in Africa have no access.⁶ Those with no internet access also do not generate the data needed to “train” A/IS, and are thereby excluded from the benefits of the technology, the development of which risks systematic discriminatory bias, particularly against people from minority populations, those living in rural areas, and those in low-income countries.
As a comparison, one study estimated that “in the US, just one home automation product can generate a data point every six seconds.”⁷ In Mozambique, where about 90% of the population lack internet access, “the average household generates zero digital data points.”⁸ With mobile phones generating much of the data needed for developing A/IS applications in LMIC, unequal phone ownership may build in bias. For example, there is a risk of discrimination against women, who across LMIC are 14% less likely than men to own a mobile phone, and in South Asia are 38% less likely to own a mobile phone.⁹
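To put the six-second figure in perspective, a back-of-the-envelope calculation, using only the rate cited above, shows how quickly the data gap compounds:

```python
# One home automation product generating a data point every six
# seconds (the rate cited in the study) versus a household that
# generates none.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds
SECONDS_PER_POINT = 6

points_per_day = SECONDS_PER_DAY // SECONDS_PER_POINT
points_per_year = points_per_day * 365

print(f"{points_per_day} data points per day")    # 14400
print(f"{points_per_year} data points per year")  # 5256000
```

Over a year, a single such device yields more than five million data points, while an unconnected household contributes none; this asymmetry is the mechanism behind the representation bias described above.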
Recommendations
The current range of A/IS applications in sectors crucial to the SDGs, and to excluded populations everywhere, should be studied, with the strengths, weaknesses, and potential of the most significant recent applications analyzed, and the best ones developed at scale. Specific objectives to consider include:
Identifying and experimenting with A/IS technologies relevant to the SDGs, such as: big data for development relevant to, for example, agriculture and medical tele-diagnosis; geographic information systems needed in public service planning, disaster prevention, emergency planning, and disease monitoring; control systems used in, for example, naturalizing intelligent cities through energy and traffic control and management of urban agriculture; applications that promote human empathy focused on diminishing violence and exclusion and increasing well-being.
Promoting the potential role of A/IS in sustainable development by collaboration between national and international government agencies and nongovernmental organizations (NGOs) in technology sectors.
Analyzing the cost of and proposing strategies for publicly providing internet access for all, as a means of diminishing the gap in A/IS’ potential benefit to humanity, particularly between urban and rural populations in HIC and LMIC alike.
Investing in the documentation and dissemination of innovative applications of A/IS that advance the resolution of identified societal issues and the SDGs.
Researching sustainable energy to power A/IS computational capacity.
Investing in the development of transparent monitoring frameworks to track the concrete results of donations by international organizations, corporations, independent agencies, and the State, to ensure efficiency and accountability in applied A/IS.
Developing national legal, policy, and fiscal measures to encourage competition in A/IS domestic markets and the flourishing of scalable A/IS applications.
Integrating the SDGs into the core of private sector business strategies and adding SDG indicators to companies’ key performance indicators, going beyond corporate social responsibility (CSR).
Applying the well-being indicators¹⁰ to evaluate A/IS’ impact from multiple perspectives in HIC and LMIC alike.
Further Resources
R. Van Est and J. B. A. Gerritsen, with the assistance of L. Kool, Human Rights in the Robot Age: Challenges Arising from the Use of Robots, Artificial Intelligence and Augmented Reality. Expert report written for the Committee on Culture, Science, Education and Media of the Parliamentary Assembly of the Council of Europe (PACE). The Hague: Rathenau Instituut, 2017.
World Economic Forum Global Future Council on Human Rights 2016-18, “White Paper: How to Prevent Discriminatory Outcomes in Machine Learning,” World Economic Forum, March 2018.
United Nations General Assembly, Transforming Our World: The 2030 Agenda for Sustainable Development (A/RES/70/1, 21 October 2015), Preamble. http://www.un.org/en/development/desa/population/migration/generalassembly/docs/globalcompact/A_RES_70_1_E.pdf.
United Nations Global Pulse, Big Data for Development: Challenges and Opportunities, 2012.
Issue: A/IS are often viewed only as having impact in market contexts, yet these technologies also have an impact on social relations and culture.
Background
A/IS are expected to have an impact beyond market domains and business models, diffusing throughout global society. For instance, A/IS have changed, and will continue to change, social relationships, much as mobile phones changed our daily lives, with direct effects on our culture, customs, and language. The extent and direction of this impact is not yet clear, but documented experience in HIC and high internet-penetration environments of trolls, “fake news,” and cyberbullying on social media offers a cautionary tale.¹¹ Depression, social isolation, aggression, and the dissemination of violent behavior that damages human relations, in some cases so extreme that it has resulted in suicide, are all correlated with internet use.¹² As an example, the technology for “smart homes” has been used to inflict domestic violence by remotely locking doors, turning off heat or air conditioning, and otherwise harassing a partner. This problem could easily extend to elder and child abuse.¹³ Measures need to be developed to prevent A/IS from contributing to the emergence or amplification of social disorders.
Recommendations
To understand the impact of A/IS on society, it is necessary to consider product and process innovation, as well as wider sociocultural and ethical implications, from a global perspective, including the following:
Exploring the development of algorithms capable of detecting and reporting discrimination, cyberbullying, deceptive content and identities, etc., and of notifying competent authorities; recognizing that the use of such algorithms must address ethical concerns related to algorithm explainability as well as take into account the risk to certain aspects of human rights, notably to privacy and freedom from oppression.
Developing a globally recognized professional Code of Ethics with and for technology companies.
Identifying social disorders, such as depression, anxiety, psychological violence, political manipulation, etc., correlated with the use of A/IS-based technologies as a world health problem; monitoring and measuring their impact.
Elaborating metrics that measure how, where, and on whom new A/IS-based technologies have a cultural impact.
Further Resources
T. Luong, “Thermostats, Locks and Lights: Digital Tools of Domestic Abuse,” The New York Times, June 23, 2018. https://www.nytimes.com/2018/06/23/technology/smart-home-devices-domestic-abuse.html.
J. Naughton, “The internet of things has opened up a new frontier of domestic abuse,” The Guardian, July 2018.
M. Pianta, “Innovation and Employment,” in Handbook of Innovation. Oxford, U.K.: Oxford University Press, 2003.
M. J. Salganik, Bit by Bit. Princeton, NJ: Princeton University Press, 2018.
J. Torresen, “A Review of Future and Ethical Perspectives of Robotics and AI,” Frontiers in Robotics and AI, Jan. 15, 2018. [Online]. Available: https://doi.org/10.3389/frobt.2017.00075. [Accessed Nov. 1, 2018].
Issue: The right to truthful information is key to a democratic society and to achieving sustainable development and a more equal world, but A/IS pose risks to this right that must be managed.
Background
Social media have become the dominant technological infrastructure for the dissemination of information such as news, opinion, and advertising, and are currently in the vanguard of the movement toward customized/targeted information based on user profiling, which involves significant use of A/IS techniques. Analysis of opinion polls, of trends in social networks, blogs, etc., and of the emotional response to news items can be used for purposes of manipulation, facilitating both the selection of news that guides public opinion in the desired direction and the practice of sensationalism.
The “personalization of the consumer experience”, that is, the adaptation of articles to the interests, political vision, cultural level, education, and geographic location of the reader, is a new challenge for the journalism profession that expands the possibilities of manipulation.
The information infrastructure is currently lacking in transparency, such that it is difficult or impossible to know (except perhaps for the infrastructure operator):
what private information is being collected for user profiling and by whom,
which groups are targeted and by whom,
what information has been received by any given targeted group,
who financed the creation and dissemination of this information,
the percentage of the information being disseminated by bots, and
who is financing these bots.
Many actors have found this opaque infrastructure ideal for spreading politically motivated disinformation, which has a negative effect on the creation of a more equal world, on democracy, and on respect for fundamental rights. This disinformation can have tragic consequences. For instance, human rights groups have unearthed evidence that the military authorities of Myanmar used Facebook to incite hatred against the Rohingya Muslim minority, hatred which facilitated an ethnic cleansing campaign and the murder of up to 50,000 people.¹⁴ The UN determined that these actions constituted genocide, crimes against humanity, and war crimes.¹⁵
Recommendations
To protect democracy, respect fundamental rights, and promote sustainable development, governments should implement a legislative agenda which prevents the spread of misinformation and hate speech, by:
Ensuring more control and transparency in the use of A/IS techniques for user profiling in order to protect privacy and prevent user manipulation.
Using A/IS techniques to detect untruthful information circulating in the infrastructures, overseen by a democratic body to prevent potential censorship.
Obliging companies owning A/IS infrastructures to provide more transparency regarding their algorithms, sources of funding, services, and clients.
Defining a new legal status somewhere between “platforms” and “content providers” for A/IS infrastructures.
Reformulating the deontological codes of the journalistic profession to take into account the intensive use of A/IS techniques foreseen in the future.
Promoting the right to information in official documents, and developing A/IS techniques to automate journalistic tasks such as verification of sources and checking the accuracy of the information in official documents, or in the selection, hierarchy, assessment, and development of news, thereby contributing to objectivity and reliability.
Further Resources
M. Broussard, “Artificial Intelligence for Investigative Reporting: Using an expert system to enhance journalists’ ability to discover original public affairs stories.” Digital Journalism, vol. 3, no. 6, pp. 814-831, 2015.
M. Carlson, “The robotic reporter: Automated journalism and the redefinition of labor, compositional forms, and journalistic authority.” Digital Journalism, vol. 3, no. 3, pp. 416-431, 2015.
A. López Barriuso, F. de la Prieta Pintado, Á. Lozano Murciego, D. Hernández de la Iglesia, and J. Revuelta Herrero, JOUR-MAS: A Multiagent System Approach to Help Journalism Management, vol. 4, no. 4, 2015.
P. Mozur, “A Genocide Incited on Facebook, with Posts from Myanmar’s Military,” The New York Times, Oct. 15, 2018. https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
UK Parliament, House of Commons, Digital, Culture, Media and Sport Committee, Disinformation and ‘fake news’: Interim Report, Fifth Report of Session 2017-19. Published July 29, 2018.
Section 2-Equal Availability
Issue: Vastly different power structures among and within countries create the risk that A/IS deployment accelerates, rather than reduces, inequality in the pursuit of a sustainable future. It is unclear how LMIC can best implement A/IS via existing resources and take full advantage of the technology’s potential to achieve a sustainable future.
Background
The potential use of A/IS to create sustainable economic growth for LMIC is uniquely powerful. Yet, many of the debates surrounding A/IS take place within HIC, among highly educated and financially secure individuals. It is imperative that all humans, in any condition around the world, are considered in the general development and application of these systems to avoid the risk of bias, excessive inequality, classism, and general rejection of these technologies. With much of the financial and technical resources for A/IS development and deployment residing in HIC, not only are A/IS benefits more difficult to access for LMIC populations, but those A/IS applications that are deployed outside of HIC realities may not be appropriate. This is for reasons of cultural/ethnic bias, language difficulties, or simply an inability to adapt to local internet infrastructure constraints.
Furthermore, technological innovation in LMIC comes up against many potential obstacles, which should be considered when undertaking initiatives aimed at enhancing LMIC access:
Reluctance to provide open source licensing of technological development innovations,
Lack of the human capital and knowledge required to adapt HIC-developed technologies to resolving problems in the LMIC context, or to develop local technological solutions to these problems,
Difficulty retaining A/IS talent in LMIC due to globally uncompetitive salaries,
Lack of infrastructure for deployment, and difficulties in taking technological solutions to where they are needed,
Lack of organizational and business models for adapting technologies to the specific needs of different regions,
Lack of active participation of the target population,
Lack of political will to allow people to have access to technological resources,
Existence of oligopolies that hinder new technological development,
Lack of inclusive and high-quality education at all levels, and
Bureaucratic policies ill-adapted to highly dynamic scenarios.
For A/IS capacities and benefits to become equally available worldwide, training, education, and opportunities should be provided particularly for LMIC. Currently, access to products that facilitate A/IS research on timely topics is quite limited for researchers in LMIC, due to cost considerations.
If A/IS capacity and governance problems, such as relevant laws, policies, regulations, and anticorruption safeguards, are addressed, LMIC could have the ability to use A/IS to transform their economies and leapfrog into a new era of inclusive growth. Indeed, A/IS itself can contribute to good governance when applied to the detection of corruption in state and banking institutions, one of the most serious recognized constraints to investment in LMIC. Particular attention, however, must be paid to ensure that the use of A/IS is for the common good, especially in the context of LMIC, and does not reinforce existing socioeconomic inequities through systematic discriminatory bias in both design and application, or undermine fundamental rights through, among other issues, lax data privacy laws and practice.
Recommendations
A/IS benefits should be equally available to populations in HIC and LMIC, in the interest of universal human dignity, peace, prosperity, and planet protection. Specific measures for LMIC should include:
Deploying A/IS to detect fraud and corruption, to increase the transparency of power structures, and to contribute to a favorable investment, governance, and innovation environment.
Supporting LMIC in the development of their own A/IS strategies, and in the retention or return of their A/IS talent to prevent “brain drain”.
Encouraging global standardization/harmonization and open source A/IS software.
Promoting distribution of knowledge and wealth generated by the latest A/IS, including through formal public policy and financial mechanisms to advance equity worldwide.
Developing public datasets to facilitate the access of people from LMIC to data resources for their applied research, while ensuring the protection of personal data.
Creating A/IS international research centers on every continent that promote culturally appropriate research and allow remote access by LMIC communities to high-end technology.¹⁶
Facilitating A/IS access in LMIC through online courses in local languages.
Ensuring that, along with the use of A/IS, discussions related to identity, platforms, and blockchain are conducted, such that core enabling technologies are designed to meet the economic, social, and cultural needs of LMIC.
Diminishing barriers and increasing LMIC access to technological products, including through the formation of collaborative networks between developers in HIC and LMIC and support for the latter in attending global A/IS conferences.¹⁷
Promoting research into A/IS-based technologies that are readily available in LMIC, for example, mobile lightweight A/IS applications.
Facilitating A/IS research and development in LMIC through investment incentives, public-private partnerships, and/or joint grants, and collaboration between international organizations, government bodies, universities, and research institutes.
Prioritizing A/IS infrastructure in international development assistance, as necessary to improve the quality and standard of living and advance progress towards the SDGs in LMIC.
Recognizing data issues that may be particular to LMIC contexts, e.g., insufficient sample size for machine learning, which sometimes results in de facto discrimination, and inadequate laws for, and practice of, data protection.
Supporting research on the adaptation of A/IS methods to scarce-data environments and other remedies that facilitate an optimal A/IS enabling environment in LMIC.
Further Resources
A. Akubue, “Appropriate Technology for Socioeconomic Development in Third World Countries.” The Journal of Technology Studies 26, no. 1, pp. 33-43, 2000.
O. Ajakaiye and M. S. Kimenyi, “Higher Education and Economic Development in Africa: Introduction and Overview.” Journal of African Economies 20, no. 3, iii3-iii13, 2011.
D. Allison-Hope and M. Hodge, “Artificial Intelligence: A Rights-Based Blueprint for Business,” San Francisco: BSR, Aug. 28, 2018.
D. E. Bloom, D. Canning, and K. Chan, Higher Education and Economic Development in Africa (Vol. 102). Washington, DC: World Bank, 2006.
N. Bloom, “Corporations in the Age of Inequality.” Harvard Business Review, April 21, 2017.
C. Dahlman, Technology, Globalization, and Competitiveness: Challenges for Developing Countries. Industrialization in the 21st Century. New York: United Nations, 2006.
M. Fong, Technology Leapfrogging for Developing Countries. Encyclopedia of Information Science and Technology, 2nd ed. Hershey, PA: IGI Global, 2009 (pp. 3707-3713).
C. B. Frey and M. A. Osborne, “The Future of Employment: How Susceptible Are Jobs to Computerisation?” (working paper). Oxford, U.K.: Oxford University, 2013.
B. Hazeltine and C. Bull, Appropriate Technology: Tools, Choices, and Implications. New York: Academic Press, 1999.
McKinsey Global Institute, “Disruptive Technologies: Advances That Will Transform Life, Business, and the Global Economy” (report), May 2013.
D. Rotman, “How Technology Is Destroying Jobs.” MIT Technology Review, June 12, 2013.
R. Sauter and J. Watson, “Technology Leapfrogging: A Review of the Evidence, A Report for DFID.” Brighton, England: University of Sussex, October 3, 2008.
“The Rich and the Rest.” The Economist, October 13, 2012.
“Wealth without Workers, Workers without Wealth.” The Economist, October 4, 2014.
World Bank, “Global Economic Prospects 2008: Technology Diffusion in the Developing World.” Washington, DC: World Bank, 2008.
World Bank, World Development Report 2016: Digital Dividends. Washington, DC: World Bank. doi:10.1596/978-1-4648-0671-1.
World Wide Web Foundation, “Artificial Intelligence: The Road Ahead in Low and Middle-income Countries,” webfoundation.org, June 2017.
Section 3-A/IS and Employment
Issue: A/IS are changing the nature of work, disrupting employment, while technological change is happening too fast for existing methods of (re)training the workforce.
Background
The current pace of technological development will heavily influence changes in employment structure. In order to properly prepare the workforce for such evolution, actions should be proactive and not only reactive. The wave of automation caused by the A/IS revolution will displace a very large share of jobs across domains and value chains. The U.S. “automated vehicle” case study analyzed in the White House 2016 report Artificial Intelligence, Automation, and the Economy is emblematic of what is at stake: “2.2 to 3.1 million existing part- and full-time U.S. jobs are exposed over the next two decades, although the timeline remains uncertain.”¹⁸
The risk of unemployment for LMIC is more serious than for developed countries. The industry of most LMIC is labor intensive. While labor may be cheaper in LMIC economies, the ripple effects of A/IS and automation will be felt much more than in HIC economies. The 2016 World Bank Development Report stated that the share of occupations susceptible to automation and A/IS is higher in LMIC than in HIC, where such jobs have already disappeared. In addition, the qualities which made certain jobs easy to outsource to LMIC, where wages are lower, are those that may make them easy to automate.¹⁹ An offsetting factor is the reality that many LMIC lack the communication, energy, and IT infrastructure required to support highly automated industries.²⁰ Notwithstanding this reality, the World Bank estimated that the automatable share of employment, unadjusted for adoption time lag, ranges for LMIC from 85% in Ethiopia to 62% in Argentina, compared to the OECD average of 57%.²¹
In the coming decades, the automation wave calls for higher investment in, and the transformation of, labor market capacity development programs. Innovative and fair ways of funding such investment are required; the solutions should be designed in cooperation with the companies benefiting from the increase in profitability thanks to automation. This should be done in a responsible way so that the innovation cycle is not broken, and yet workforce capacity does not fall behind the needs of 21st century employment. At the same time, A/IS and other digital technologies offer real potential to innovate new approaches to job-search assistance, placement, and hiring processes in the age of personalized services. The efficiency of matching labor supply and demand can be tremendously enhanced by the rise of multisided platforms and predictive analytics, provided they do not entrench discrimination.²² The evolution of platforms such as LinkedIn, with its 470 million registered users, and of online job consolidators such as indeed.com and Simply Hired, is an interesting development in hiring practices, at least for those able to access the internet.
Tailored counseling and integrated retraining programs also represent promising grounds for innovation. In addition, much will have to be done to create fair and effective lifelong skill development and training infrastructures and mechanisms capable of empowering millions of people to viably transition jobs, sectors, and potentially locations, and to address differential geographic impacts that exacerbate income and wealth disparities. Effectively enabling the workforce to be more mobile, physically, legally, and virtually, will be crucial. This implies systemic policy approaches which encompass housing, transportation, licensing, tax incentives, and, crucially in the age of A/IS, universal broadband access, especially in rural areas of both HIC and LMIC.
Recommendations
To thrive in the A/IS age, workers must be provided training in skills that improve their adaptability to rapid technological change; programs should be available to any worker, with special attention to the low-skilled workforce. These programs can be private, that is, sponsored by the employer, or publicly and freely offered through specific public channels and government policies, and should be available regardless of whether the worker is between jobs or still employed. Specific measures include:
Offering new technical programs, possibly earlier than high school, to increase workforce capacity to close the skills gap and thrive in employment alongside A/IS.
Creating opportunities for apprenticeships, pilot programs, and scaling up data-driven, evidence-based solutions that increase employment and earnings.
Supporting new forms of public-private partnerships involving civil society, as well as new outcome-oriented financial mechanisms, e.g., social impact bonds, that help scale up successful innovations.
Supporting partnerships between universities, innovation labs in corporations, and governments to research and incubate startups for A/IS graduates.²³
Developing regulations to hold corporations responsible for the employee retraining made necessary by increased automation and other technological applications affecting the workforce.
Facilitating private sector initiatives by public policy for co-investment in training and retraining programs through tax incentives.
Establishing and resourcing public policies that assure the survival and well-being of workers displaced by A/IS and automation who cannot be retrained.
Researching complementary areas, to lay solid foundations for the transformation outlined above.
Requiring more policy research on the dynamics of professional transitions in different labor market conditions.
Researching the fairest and most efficient public-private options for financing labor force transformation due to A/IS.
Developing national and regional future-of-work strategies based on sound research and strategic foresight.
Further Resources
V. Cerf and D. Nordfors, The People-Centered Economy: The New Ecosystem for Work. California: IIIJ Foundation, 2018.
Executive Office of the President, Artificial Intelligence, Automation, and the Economy. December 20, 2016.
S. Kilcarr, “Defining the American Dream for Trucking … and the Nation, Too,” FleetOwner, April 26, 2016.
M. Mason, “Millions of Californians’ Jobs could be Affected by Automation, a Scenario the next Governor has to Address,” Los Angeles Times, October 14, 2018.
OECD, “Labor Market Programs: Expenditure and Participants,” OECD Employment and Labor Market Statistics (database), 2016.
M. Vivarelli, “Innovation and Employment: A Survey,” Institute for the Study of Labor (IZA) Discussion Paper No. 2621, February 2007.
Issue: Analysis of the A/IS impact on employment is too focused on the number and category of jobs affected; more attention should be paid to the complexities of changing the task content of jobs.
Background
Current attention on automation and employment tends to focus on the sheer number of jobs lost or gained. It is important to focus the analysis on how employment structures will be changed by A/IS, rather than solely dwelling on the number of jobs that might be impacted. For example, rather than carrying out a task themselves, workers will need to shift to supervision of robots performing that task. Other concerns include changes in traditional employment structures, with an increase in flexible, contract-based temporary jobs, without employee protection, and a shift in task composition away from routine/repetitive and toward complex decision-making. This is in addition to the enormous need for the aforementioned retraining. Given the extent of disruption, workforce trends will need to measure time spent unemployed or underemployed, labor force participation rates, and other factors beyond simple unemployment numbers.
The Future of Jobs 2018 report of the World Economic Forum highlights:
“…the potential of new technologies to create as well as disrupt jobs and to improve the quality and productivity of the existing work of human employees. Our findings indicate that, by 2022, augmentation of existing jobs through technology may free up workers from the majority of data processing and information search tasks—and may also increasingly support them in high-value tasks such as reasoning and decision-making as augmentation becomes increasingly common over the coming years as a way to supplement and complement human labour.”²⁴
The report predicts the shift in skill demand between today and 2022 will be significant and that “proactive, strategic and targeted efforts will be needed to map and incentivize workforce redeployment… [and therefore]… investment decisions [on] whether to prioritize automation or augmentation and the question of whether or not to invest in workforce reskilling.”²⁵
Comparing Skills Demand, 2018 Versus 2022, Top Ten

| TODAY, 2018 | TRENDING, 2022 | DECLINING, 2022 |
| :--- | :--- | :--- |
| 1. Analytical thinking and innovation | 1. Analytical thinking and innovation | 1. Manual dexterity, endurance, and precision |
| 2. Complex problem-solving | 2. Active learning and learning strategies | 2. Memory, verbal, auditory, and spatial abilities |
| 3. Critical thinking and analysis | 3. Creativity, originality, and initiative | 3. Management of financial and material resources |
| 4. Active learning and learning strategies | 4. Technology design and programming | 4. Technology installation and maintenance |
| 5. Creativity, originality, and initiative | 5. Critical thinking and analysis | 5. Reading, writing, math, and active listening |
| 6. Attention to detail, trustworthiness | 6. Complex problem-solving | 6. Management of personnel |
| 7. Emotional intelligence | 7. Leadership and social influence | 7. Quality control and safety awareness |
| 8. Reasoning, problem-solving, and ideation | 8. Emotional intelligence | 8. Coordination and time management |
| 9. Leadership and social influence | 9. Reasoning, problem-solving, and ideation | 9. Visual, auditory, and speech abilities |
| 10. Coordination and time management | 10. Systems analysis and evaluation | 10. Technology use, monitoring, and control |
Source: Future of Jobs Survey 2018, World Economic Forum, Table 4
Recommendations
While there is evidence that robots and automation are taking jobs away in various sectors, a more balanced, granular, analytical, and objective treatment of the A/IS impact on the workforce is needed to effectively inform policy making and essential workforce reskilling. Specific measures include:
Creating an international and independent agency able to properly disseminate objective statistics and inform the media, as well as the general public, about the impact of robotics and A/IS on jobs, tax revenue, growth,²⁶ and well-being.
Analyzing and disseminating data on how the current task content of jobs has changed, based on a clear assessment of the automatability of the occupational description of such jobs.
Promoting automation with augmentation, as recommended in the Future of Jobs Report 2018 (see the table above), to maximize the benefit of A/IS to employment and meaningful work.
Integrating more granular dynamic mapping of the future jobs, tasks, activities, workplace structures, associated work habits, and skills base spurred by the A/IS revolution, in order to innovate, align, and synchronize skill development and training programs with future requirements. This workforce mapping is needed at the macro level, but also crucially at the micro level, where labor market programs are deployed.
Considering both product and process innovation, and looking at them from a global perspective in order to understand properly the global impact of A/IS on employment. ...
Proposing mechanisms for redistribution of productivity increases and developing an adaptation plan for the evolving labor market. ...
Further Resources 延伸阅读
E. Brynjolfsson and A. McAfee. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York, NY: W. W. Norton & Company, 2014. ...
P. R. Daugherty and H. J. Wilson, Human + Machine: Reimagining Work in the Age of AI. Watertown, MA: Harvard Business Review Press, 2018. ...
International Federation of Robotics. “The Impact of Robots on Productivity, Employment and Jobs,” A positioning paper by the International Federation of Robotics, April 2017. ...
RockEU. “Robotics Coordination Action for Europe Report on Robotics and Employment,” Deliverable D3.4.1, June 30, 2016. ...
World Economic Forum, Centre for the New Economy and Society, The Future of Jobs 2018, Geneva: WEF 2018. ...
Section 4-Education for the A/IS Age ...
Issue: Education to prepare the future workforce, in both HIC and LMIC, to design ethical A/IS applications or to have a comparative advantage in working alongside A/IS, is either lacking or unevenly available. This risks inequality perpetuated across generations, within and between countries, constraining equitable growth, a sustainable future, and achievement of the SDGs. ...
Background ...
Multiple international institutions, in particular educational engineering organizations,[27] have called on universities to play an active role, both locally and globally, in the resolution of the enormous problems that the world faces in securing peace, prosperity, planet protection, and universal human dignity: armed conflict, social injustice, rapid climate change, abuse of human rights, etc. Addressing global social problems is one of the central objectives of many universities, transversal to their other functions, including research in A/IS. UNESCO points out that universities’ preparation of future scientists and engineers for social responsibility is presently very limited, in view of the enormous ethical and social problems associated with technology.[28] Enhancing the global dimension of engineering in undergraduate and postgraduate A/IS education is necessary, so that students can be prepared as technical professionals, aware of the opportunities and risks that A/IS present, and ready for work anywhere in the world in any sector. ...
Engineering studies at the university and postgraduate levels is just one dimension of the A/IS education challenge. For instance, business, law, public policy, and medical students will also need to be prepared for professions where A/IS are a partner, and to have internalized ethical principles to guide the deployment of such technologies. LMIC need financial and academic support to incorporate global A/IS professional curricula in their own universities, and all countries need to develop the pipeline by preparing elementary and secondary school students to access such professional programs. While the need for curriculum reform is recognized, the impact of A/IS on various professions and socioeconomic contexts is, at this time, both evolving and largely undocumented. Thus, the overhaul of education systems at all levels should be preceded by A/IS research. ...
Much of LMIC education is not globally competitive today, so there is a risk that the global advent of A/IS could negatively affect the chances of young people in LMIC finding ...
productive employment, further fueling global inequality. Education systems worldwide have to be reformed and transformed to fit the new demands of the information age, in view of the changing mix of skills demanded from the workforce.[29] In 21st century education, it has been observed that children need less rote knowledge, given so much is instantly accessible on the web and more tools to network and innovate are available; that less memory and more imagination should be developed; and that fewer physical books and more internet access are required. Young people everywhere need to develop their capacities for creativity, human empathy, ethics, and systems thinking in order to work productively alongside robots and A/IS technologies. Science, Technology, Engineering, Art/design, and Math (STEAM) subjects need to be more extensive and more creatively taught.[30] In addition, research is needed to establish ways that a new subject, empathy, can be added to these crucial 21st century subjects in order to educate the future A/IS workforce in social skills. Instead, in rich and poor countries alike, children continue to be educated for an industrial age which has disappeared or never even arrived. LMIC education systems, being less entrenched in many countries, may have the potential to be more flexible than those in HIC. Perhaps A/IS can be harnessed to help educational systems leapfrog into the 21st century, just as mobile phone technology enabled LMIC to leapfrog over the phase of wired communication infrastructure. ...
Recommendations 建议
Education with respect to A/IS must be targeted to three sets of students: the general public, present and future professionals in A/IS, and present and future policy makers. To prepare the future workforce to develop culturally appropriate A/IS, to work productively and ethically alongside such technologies, and to advance the UN SDGs, the curricula in HIC and LMIC universities and professional schools require innovation. Equally importantly, preuniversity education systems, starting with early childhood education, need to be reformed to prepare society for the risks and opportunities of the A/IS age, rather than continuing the current approach, which prepares society for work in an industrial age that ended with the 20th century. Specific recommendations include: ...
Preparing future managers, lawyers, engineers, civil servants, and entrepreneurs to work productively and ethically as global citizens alongside A/IS, through reform of undergraduate and graduate curricula as well as of preschool, primary, and secondary school curricula. This will require: ...
Fomenting interaction between universities and other actors such as companies, governments, NGOs, etc., with respect to A/IS research through definition of research priorities and joint projects, subcontracts to universities, participation in observatories, and co-creation of curricula, cooperative teaching, internships/service learning, and conferences/seminars/courses. ...
Establishing and supporting more multidisciplinary degrees that include ...
A/IS, and adapting university curricula to provide a broad, integrated perspective which allows students to understand the impact of A/IS in the global, economic, environmental, and sociocultural domains and trains them as future policy makers in A/IS fields. ...
Integrating the teaching of ethics and A/IS across the education spectrum, from preschool to postgraduate curricula, instead of relegating ethics to a standalone module with little direct practical application. ...
Promoting service learning opportunities that allow A/IS undergraduate and graduate students to apply their knowledge to meet the needs of a community. ...
Creating international exchange programs, through both private and public institutions, which expose students to different cultural contexts for A/IS applications in both HIC and LMIC. ...
Creating experimental curricula to prepare people for information-based work in the 21st century, from preschool through postgraduate education. ...
Taking into account, in the planning and assessment of A/IS curricula, the transversal competencies students need to acquire to become ethical global citizens, i.e., critical thinking, empathy, sociocultural awareness, flexibility, and deontological reasoning. ...
Training teachers in teaching methodologies suited to addressing challenges imposed in the age of A/IS. ...
Stimulating STEAM courses in preuniversity education. ...
Encouraging high-quality HIC-LMIC collaborative A/IS research in both private and public universities. ...
Conducting research to support innovation in education and business for the A/IS world, which could include: ...
Researching the impact of A/IS on the governance and macro/micro strategies of companies and organizations, together with those companies, in an interdisciplinary manner which harnesses expertise of both social scientists and technology experts. ...
Researching the impact of A/IS on the business model for the development of new products and services through the collaborative efforts of management, operations, and the technical research and development function. ...
Researching how empathy can be taught and integrated into curricula, starting at the preschool level. ...
Researching how schools and education systems in low-income settings of both HIC and LMIC can leverage their less-entrenched interests to leapfrog into a 21st century-ready education system. ...
Establishing ethics observatories in universities with the purpose of fostering an informed public opinion capable of participating in policy decisions regarding the ethics and social impact of A/IS applications. ...
Creating professional continuing education and employment opportunities in A/IS for current professionals, including through online and executive education courses. ...
Creating educative mass media campaigns to elevate society’s ongoing baseline level of understanding of A/IS, including what these systems are, whether and how they can be trusted in various contexts, and what their limitations are. ...
Further Resources 延伸阅读
ABET Computing and Engineering Accreditation Criteria 2018. Available at: http://www.abet.org/accreditation/ accreditation-criteria/ ...
ABET, 2017 ABET Impact Report, Working Together for a Sustainable Future, 2017. ...
emlyon business school, Artificial Intelligence in Management (AIM) Institute http://aim. em-lyon.com ...
UNESCO, The UN Decade of Education for Sustainable Development, Shaping the Education of Tomorrow. UNESCO 2012. ...
Section 5-A/IS and Humanitarian Action ...
Issue: A/IS are contributing to humanitarian action to save lives, alleviate suffering, and maintain human dignity both during and in the aftermath of man-made crises and natural disasters, and are also helping to prevent such situations and to strengthen preparedness for them. However, there are ethical concerns with both the collection and use of data during humanitarian emergencies. ...
Background ...
There have been a number of promising A/IS applications that relieve suffering in humanitarian crises, such as extending the reach of the health system by using drones to deliver blood to remote parts of Rwanda,[31] locating and removing landmines,[32] efforts to use A/IS to track movements and population survival needs following a natural disaster, and to meet the multiple management requirements of refugee camps.[33] There are also promising developments using A/IS and robotics to assist people with disabilities to recover mobility, and robots to rescue people trapped in collapsed buildings.[34] A/IS are also being used to monitor conflict zones and to enable early warning systems.[35] For example, Microsoft has partnered with the UN Human Rights Office of the High Commissioner (OHCHR) to use big data in order to track and analyze human rights violations in conflict zones.[36] Machine learning is being used for improved decision-making regarding asylum adjudication and refugee resettlement, with a view to increasing successful integration between refugees and host communities.[37] In addition, there is evidence that a recent growth in human empathy has increased well-being while diminishing psychological and physical violence,[38] inspiring some researchers to look for ways of harnessing the power of A/IS to introduce more empathy and less violence into society. ...
The design and ethical deployment of these technologies in crisis settings are both essential and challenging. Large volumes of both personally identifiable and demographically identifiable data are collected in fragile environments, where tracking of individuals or groups may compromise their security if data privacy cannot be assured. Consent to data use is also impractical in such environments, yet crucial for the respect of human rights. ...
Recommendations 建议
The potential for A/IS to contribute to humanitarian action to save and improve lives should be prioritized for research and development, including by organizing global research challenges, while also building in safeguards to protect the creation, collection, processing, sharing, use, and disposal of information, including data from and about individuals and populations. Specific recommendations include: ...
Promoting awareness of the vulnerable condition of certain communities around the globe and the need to develop and use A/IS applications for humanitarian purposes. ...
Organizing competitions and challenges at high-impact conferences and university hackathons to engage both technical and nontechnical communities in the development of A/IS for humanitarian purposes and to address social issues. ...
Supporting civil society groups that organize themselves for the purpose of A/IS research and advocacy to develop applications that benefit humanitarian causes.[39] ...
Developing and applying ethical standards for the collection, use, sharing, and disposal of data in fragile settings. ...
Following privacy protection frameworks for pressing humanitarian situations that ensure the most vulnerable are protected.[40] ...
Setting up clear ethical frameworks for exceptional use of A/IS technologies in lifesaving humanitarian situations, compared to “normal” situations.[41] ...
Stimulating the development of low-cost and open source solutions based on A/IS to address specific humanitarian problems. ...
Training A/IS experts in humanitarian action and norms, and humanitarian practitioners to catalyze collaboration in designing, piloting, developing, and implementing A/IS technologies for humanitarian purposes. Forging public-private A/IS participant alliances that develop crisis scenarios in advance. ...
Working on cultural and contextual acceptance of any A/IS introduced during emergencies. ...
Documenting and developing quantifiable metrics for evaluating the outcomes of humanitarian digital projects, and educating the humanitarian ecosystem on the same. ...
Further Resources 延伸阅读
E. Prestes et al., “The 2016 Humanitarian Robotics and Automation Technology Challenge [Competitions],” in IEEE Robotics & Automation Magazine, vol. 23, no. 3, pp. 23-24, Sept. 2016. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7565695&isnumber=7565655 ...
L. Marques et al., “Automation of humanitarian demining: The 2016 Humanitarian Robotics and Automation Technology Challenge,” 2016 International Conference on Robotics and Automation for Humanitarian Applications (RAHA), Kollam, 2016, pp. 1-7. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7931893&isnumber=7931858 ...
J. A. Quinn et al., “Humanitarian applications of machine learning with remote-sensing data: review and case study in refugee settlement mapping,” Philosophical Transactions of the Royal Society A, vol. 376, 20170363, Aug. 6, 2018. DOI: 10.1098/rsta.2017.0363. ...
P. Meier, Digital Humanitarians: How Big Data is Changing the Face of Humanitarian Response. Florida: CRC Press, 2015. ...
“Technology for human rights: UN Human Rights Office announces landmark partnership with Microsoft” https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=21620&LangID=E ...
Optic Technologies, Press Release, Vatican Hack 2018 - Results, 18 March 2018, which announced winning AI applications to benefit migrants and refugees as well as social inclusion and interfaith dialogue, http://optictechnology.org/index.php/en/news-en/151-vhack-2018winners-en ...
Thanks to the Contributors 致谢贡献者
We wish to acknowledge all of the people who contributed to this chapter. 我们谨向所有为本章节做出贡献的人士致以谢意。
The A/IS for Sustainable Development Committee ...
Elizabeth D. Gibbons (Chair) - Senior Fellow and Director of the Child Protection Certificate Program, FXB Center for Health and Human Rights, Harvard T.H. Chan School of Public Health ...
Kay Firth-Butterfield (Founding Co-Chair) - Project Head, AI and Machine Learning at the World Economic Forum. Founding Advocate of AI-Global; Senior Fellow and Distinguished Scholar, Robert S. Strauss Center for International Security and Law, University of Texas, Austin; Co-Founder, Consortium for Law and Ethics of Artificial Intelligence and Robotics, University of Texas, Austin; Partner, Cognitive Finance Group, London, U.K. ...
Raj Madhavan (Founding Co-Chair) - Founder & CEO of Humanitarian Robotics Technologies, LLC, Maryland, U.S.A. ...
Ronald C. Arkin - Regents’ Professor & Director of the Mobile Robot Laboratory; Associate Dean for Research & Space Planning, College of Computing, Georgia Institute of Technology ...
Joanna J. Bryson - Reader (Associate Professor), University of Bath, Intelligent Systems Research Group, Department of Computer Science ...
Renaud Champion - Director of Emerging Intelligences, emlyon business school; Founder of Robolution Capital & CEO of PRIMNEXT ...
Chandramauli Chaudhuri - Senior Data Scientist; Fractal Analytics ...
Rozita Dara - Assistant Professor, Principal Investigator of Data Management and Data Governance program, School of Computer Science, University of Guelph, Canada ...
Scott L. David - Director of Policy at University of Washington Center for Information Assurance and Cybersecurity - Data Management and Privacy Governance Lab ...
Jia He - Executive Director of Toutiao Research (Think Tank), Bytedance Inc. ...
William Hoffman - Associate director and head of Data-Driven Development, The World Economic Forum ...
Michael Lennon - Senior Fellow, Center for Excellence in Public Leadership, George Washington University; Co-Founder, Govpreneur.org; Principal, CAIPP.org (Consortium for Action Intelligence and Positive Performance); Member, Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems Committee ...
Miguel Luengo-Oroz - Chief Data Scientist, United Nations Global Pulse. ...
Angeles Manjarrés - Professor of the Department of Artificial Intelligence of the Spanish National Distance-Learning University ...
Nicolas Miailhe - Co-Founder & President, The Future Society; Member, AI Expert Group at the OECD; Member, Global Council on Extended Intelligence; Senior Visiting Research Fellow, Program on Science Technology and Society at Harvard Kennedy School. Lecturer, Paris School of International Affairs (Sciences Po). Visiting Professor, IE School of Global and Public Affairs ...
Roya Pakzad - Research Associate and Project Leader in Technology and Human Rights, Global Digital Policy Incubator (GDPi), Stanford University ...
Edson Prestes - Professor, Institute of Informatics, Federal University of Rio Grande do Sul (UFRGS), Brazil; Head, Phi Robotics Research Group, UFRGS; CNPq Fellow ...
Simon Pickin - Professor, Dpto. de Sistemas Informáticos y Computación, Facultad de Informática, Universidad Complutense de Madrid, Spain ...
Rose Shuman - Partner at BrightFront Group & Founder, Question Box 罗斯·舒曼 - BrightFront 集团合伙人,Question Box 创始人
Hruy Tsegaye - One of the founders of iCog Labs, Ethiopia, a pioneering company in East Africa working on research and development of artificial general intelligence ...
For a full listing of all IEEE Global Initiative Members, visit standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf. 如需查看 IEEE 全球倡议组织全体成员名单,请访问 standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf。
For information on disclaimers associated with EAD 1e, see How the Document Was Prepared. ...
Endnotes 尾注
1 See, for example, the writing of T. Piketty, Capital in the Twenty-First Century (Cambridge: Belknap Press, 2014).
2 See preamble of the United Nations General Assembly, Transforming our world: the 2030 Agenda for Sustainable Development (A/RES/70/1: 21 October 2015): “This Agenda is a plan of action for people, planet and prosperity. It also seeks to strengthen universal peace in larger freedom. We recognize that eradicating poverty in all its forms and dimensions, including extreme poverty, is the greatest global challenge and an indispensable requirement for sustainable development. All countries and all stakeholders, acting in collaborative partnership, will implement this plan. We are resolved to free the human race from the tyranny of poverty and want and to heal and secure our planet. We are determined to take the bold and transformative steps which are urgently needed to shift the world on to a sustainable and resilient path. As we embark on this collective journey, we pledge that no one will be left behind. The 17 Sustainable Development Goals and 169 targets which we are announcing today demonstrate the scale and ambition of this new universal Agenda.”
3 Ibid, paragraph 8. ...
4 A/IS has the potential to advance positive change toward all seventeen 2030 Sustainable Development Goals, which are: ...
Goal 1. End poverty in all its forms everywhere ...
Goal 2. End hunger, achieve food security and improved nutrition and promote sustainable agriculture ...
Goal 3. Ensure healthy lives and promote well-being for all at all ages ...
Goal 4. Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all ...
Goal 5. Achieve gender equality and empower all women and girls ...
Goal 6. Ensure availability and sustainable management of water and sanitation for all ...
Goal 7. Ensure access to affordable, reliable, sustainable and modern energy for all ...
Goal 8. Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all ...
Goal 9. Build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation ...
Goal 10. Reduce inequality within and among countries ...
Goal 11. Make cities and human settlements inclusive, safe, resilient and sustainable ...
Goal 12. Ensure sustainable consumption and production patterns ...
Goal 13. Take urgent action to combat climate change and its impacts ...
Goal 14. Conserve and sustainably use the oceans, seas and marine resources for sustainable development ...
Goal 15. Protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss ...
Goal 16. Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels ...
Goal 17. Strengthen the means of implementation and revitalize the global partnership for sustainable development ...
Source: United Nations General Assembly, Transforming our world: the 2030 Agenda for Sustainable Development ...
(A/RES/70/1: 21 October 2015) p. 14 ...
5 United Nations Secretary General, “The road to dignity by 2030: ending poverty, transforming all lives and protecting the planet,” United Nations, A/69/700, 4 December 2014, pp. 25-27. http://www.un.org/ga/search/view_doc.asp?symbol=A/69/700&Lang=E ...
7 “Internet of Things, Privacy and Security in a Connected World,” FTC, https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf ...
8 World Economic Forum Global Future Council on Human Rights 2016-18 “White Paper: How to Prevent Discriminatory Outcomes in Machine Learning” (WEF: March 2018). ...
9 World Wide Web Foundation Artificial Intelligence: the Road ahead in Low and Middle-income Countries (June 2017: webfoundation.org) p. 13 ... ...
10 See the Well-being chapter of Ethically Aligned Design, First Edition.
11 See, for example, S. Vosoughi, D. Roy, and S. Aral, “The spread of true and false news online,” Science, 09 Mar 2018: Vol. 359, Issue 6380, pp. 1146-1151, and M. Fox, “Fake News: Lies spread faster on social media than Truth does,” NBC Health News, 8 March 2018, https://www.nbcnews.com/health/health-news/fake-news-lies-spread-faster-social-media-truth-does-n854896; Cyberbullying Research Center: Summary of Cyberbullying Research 2004-2016, https://cyberbullying.org/summary-of-our-cyberbullying-research, and TeenSafe, “Cyberbullying Facts and Statistics,” TeenSafe, October 4, 2016, https://www.teensafe.com/blog/cyber-bullying-facts-and-statistics/ ...
A. Hutchison, “Social Media Still Has a Fake News Problem and Digital Literacy is Largely to Blame,” Social Media Today, October 5, 2018, https://www.socialmediatoday.com/news/social-media-still-has-a-fake-news-problem-and-digital-literacy-is-largel/538930/; D. D. Luxton, J. D. June, and J. M. Fairall, “Social Media and Suicide: A Public Health Perspective,” Am J Public Health, May 2012; 102(Suppl 2): S195-S200; J. Twenge, T. E. Joiner, and M. L. Rogers, “Increases in Depressive Symptoms, Suicide-Related Outcomes, and Suicide Rates Among U.S. Adolescents After 2010 and Links to Increased New Media Screen Time,” Clinical Psychological Science, November 14, 2017, https://doi.org/10.1177/2167702617723376
12 D. D. Luxton, J. D. June, and J. M. Fairall, “Social Media and Suicide: A Public Health Perspective,” Am J Public Health, May 2012; 102(Suppl 2): S195-S200; J. Twenge, T. E. Joiner, and M. L. Rogers, “Increases in Depressive Symptoms, Suicide-Related Outcomes, and Suicide Rates Among U.S. Adolescents After 2010 and Links to Increased New Media Screen Time,” Clinical Psychological Science, November 14, 2017, https://doi.org/10.1177/2167702617723376
13 T. Luong, “Thermostats, Locks and Lights: Digital Tools of Domestic Abuse,” The New York Times, June 23, 2018, https://www.nytimes.com/2018/06/23/technology/smart-home-devices-domestic-abuse.html
14 P. Mozur, “A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” The New York Times, October 15, 2018. https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
15 United Nations Human Rights Council, “Human rights situations that require the Council’s attention: Report of the independent international fact-finding mission on Myanmar” (A/HRC/39/64, 12 September 2018) ...
16 See for example Google AI in Ghana, https://www.blog.google/around-the-globe/google-africa/google-ai-ghana/ ...
18 Executive Office of the President of the United States. Artificial Intelligence, Automation, and the Economy. December 20, 2016, page 21. ...
19 From World Wide Web Foundation, Artificial Intelligence: The Road Ahead in Low and Middle-income Countries (June 2017: webfoundation.org), page 8. ...
20 Ibid. ...
21 World Bank, 2016. World Development Report 2016: Digital Dividends. Washington, DC: World Bank. doi:10.1596/978-1-4648-0671-1, page 129. ...
23 For example, The Vector Institute, CIFAR, and the Legal Innovation Zone at Ryerson University. See https://vectorinstitute.ai and http://www.legalinnovationzone.ca. ...
24 World Economic Forum, Centre for the New Economy and Society, The Future of Jobs 2018 (Geneva: WEF 2018), p. 3.
25 Ibid, page 9. ...
26 It must be noted that the OECD, as well as some government bodies, is already engaged in this work. See http://www.oecd.org/employment/future-of-work/ ...
27 UNESCO, WHO, ABET, Bologna Follow-Up Group Secretariat for the European Higher Education Area ...
28 UNESCO, The UN Decade of Education for Sustainable Development, Shaping the Education of Tomorrow (UNESCO: Paris 2012). ...
29 See Future of Jobs Report 2018 Survey table, p. 154. ...
30 National Math and Science Initiative, STEM Education and Workforce, 2014, https://www.nms.org/Portals/0/Docs/STEM%20Crisis%20Page%20Stats%20and%20References.pdf ...
36 United Nations Human Rights Office of the High Commissioner, press release, “Technology for human rights: UN Human Rights Office announces landmark partnership with Microsoft,” 16 May 2017. https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=21620&LangID=E ...
37 For example, researchers at Stanford University are running a pilot project to develop machine learning algorithms for a better resettlement program. To train their algorithm, the Immigration Policy Lab (IPL) at Stanford University and ETH Zurich gathered data from refugee resettlement agencies in the US and Switzerland. The model is optimized based on refugees’ background and skill sets to match them to a host city in which the individual has a higher chance of finding employment. ...
38 See for example S. Pinker, The Better Angels of Our Nature: Why Violence has Declined (Penguin 2012) and R. Krznaric, Empathy: How it matters and how to get it. (Perigee 2015). ...
Embedding Values into Autonomous and Intelligent Systems ...
Society has not yet established universal standards or guiding principles for embedding human values and norms into autonomous and intelligent systems (A/IS). But as these systems are given increasing autonomy in making decisions and manipulating their environment, it is essential that they be designed to adopt, learn, and follow the norms and values of the community they serve. Moreover, their actions should be transparent in signaling their norm compliance and, if needed, they must be able to explain their actions. This is essential if humans are to develop appropriate levels of trust in A/IS in the specific contexts and roles in which A/IS function. ...
At the present time, the conceptual complexities surrounding what “values” are (Hitlin and Piliavin 2004¹; Malle and Dickert 2007²; Rohan 2000³; Sommer 2016⁴) make it difficult to envision A/IS that have computational structures directly corresponding to social or cultural values such as “security,” “autonomy,” or “fairness”. It may be a more realistic goal to embed explicit norms into such systems. Since norms are observable in human behavior, they can be represented as instructions to act in defined ways in defined contexts, for a specific community, from family to town to country and beyond. A community’s network of social and moral norms is likely to reflect the community’s values, and A/IS equipped with such a network would, therefore, also reflect the community’s values. For discussion of specific values that are critical for ethical considerations of A/IS, see the chapters of Ethically Aligned Design, “Personal Data and Individual Agency” and “Well-being”. ...
Norms are typically expressed in terms of obligations and prohibitions, and these can be expressed computationally (Malle, Scheutz, and Austerweil 2017⁵; Vázquez-Salceda, Aldewereld, and Dignum 2004⁶). They are typically qualitative in nature, e.g., do not stand too close to people. However, the implementation of norms also has a quantitative component: the measurement of the physical distance we mean by “too close”. The possible instantiations of this quantitative component technically enable the qualitative norm. ...
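The pairing described above, of a qualitative prohibition with the community-specific quantitative parameter that instantiates it, can be illustrated with a minimal sketch. This is not a prescribed representation; the `Norm` class, field names, and the distance thresholds are all hypothetical, chosen only to show how the same qualitative norm can be parameterized differently for different communities.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    """Hypothetical encoding of one community norm: a qualitative
    rule ('do not stand too close to people') paired with the
    quantitative parameter that instantiates it for one community."""
    description: str   # qualitative statement of the norm
    deontic_type: str  # "obligation" or "prohibition"
    context: str       # where the norm applies
    threshold_m: float # community-specific quantitative component

    def violated_by(self, distance_m: float) -> bool:
        # A proximity prohibition is violated when the measured
        # distance falls below this community's threshold.
        return distance_m < self.threshold_m

# The same qualitative norm instantiated for two illustrative
# communities whose comfortable interpersonal distances differ.
norm_a = Norm("do not stand too close to people", "prohibition",
              "public space", threshold_m=1.2)
norm_b = Norm("do not stand too close to people", "prohibition",
              "public space", threshold_m=0.5)

print(norm_a.violated_by(0.8))  # True: too close for community A
print(norm_b.violated_by(0.8))  # False: acceptable in community B
```

The point of the sketch is that the qualitative norm is shared while only the numeric threshold changes, which is what makes the norm implementable for a specific community.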
To address the broad objective of embedding norms and, by implication, values into A/IS, this chapter addresses three more concrete goals:

Identifying the norms of the specific community in which the A/IS operate,

Computationally implementing the norms of that community within the A/IS, and

Evaluating whether the norms implemented in the A/IS indeed conform to the norms of that community.

Pursuing these three goals is an iterative process that is sensitive to the purpose of the A/IS and to its users within a specific community. There may be conflicts of values and norms when identifying, implementing, and evaluating these systems. Such conflicts are a natural part of the dynamically changing and continually renegotiated norm systems of any community. As a result, we advocate an approach in which systems are designed to provide transparent signals describing the specific nature of their behavior to the individuals in the community they serve. Such signals may include explanations or offers for inspection and must be in a language or form that is meaningful to the community.
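The iterative character of the three goals can be sketched as a simple loop in which evaluation feedback drives re-identification. This is a toy illustration under invented assumptions: the `norm_lifecycle` function, the stub `identify`/`implement`/`evaluate` callables, and the sample norms are all hypothetical.

```python
def norm_lifecycle(identify, implement, evaluate, community, max_rounds=3):
    """Iterate the three goals until the implemented norms conform."""
    system, report = {}, {}
    for _ in range(max_rounds):
        norms = identify(community)           # Goal 1: elicit community norms
        system = implement(system, norms)     # Goal 2: embed them in the A/IS
        report = evaluate(system, community)  # Goal 3: check conformity
        if report["conforms"]:
            break
        community = report["revised_view"]    # feedback drives the next round
    return system, report

# Toy run: the first evaluation flags a missed norm, the second passes.
def identify(view): return sorted(view["norms"])
def implement(system, norms): return {"norms": norms}
def evaluate(system, view):
    missing = set(view.get("actual", view["norms"])) - set(system["norms"])
    if missing:
        return {"conforms": False,
                "revised_view": {"norms": view["norms"] + sorted(missing)}}
    return {"conforms": True}

system, report = norm_lifecycle(
    identify, implement, evaluate,
    community={"norms": ["queue-politely"],
               "actual": ["queue-politely", "respect-privacy"]},
)
print(report["conforms"], system["norms"])
```

In practice, of course, each of the three callables stands for an extended empirical and engineering effort, not a single function.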
Further Resources
S. Hitlin and J. A. Piliavin, “Values: Reviving a Dormant Concept,” Annual Review of Sociology, vol. 30, pp. 359-393, 2004.

B. F. Malle and S. Dickert, “Values,” in Encyclopedia of Social Psychology, R. F. Baumeister and K. D. Vohs, Eds. Thousand Oaks, CA: Sage, 2007.

B. F. Malle, M. Scheutz, and J. L. Austerweil, “Networks of Social and Moral Norms in Human and Robot Agents,” in A World with Robots: International Conference on Robot Ethics: ICRE 2015, M. I. Aldinhas Ferreira, J. Silva Sequeira, M. O. Tokhi, E. E. Kadar, and G. S. Virk, Eds., pp. 3-17. Cham, Switzerland: Springer International Publishing, 2017.

M. J. Rohan, “A Rose by Any Name? The Values Construct,” Personality and Social Psychology Review, vol. 4, pp. 255-277, 2000.

U. Sommer, Werte: Warum Man Sie Braucht, Obwohl es Sie Nicht Gibt [Values: Why We Need Them Even Though They Don’t Exist]. Stuttgart, Germany: J. B. Metzler, 2016.

J. Vázquez-Salceda, H. Aldewereld, and F. Dignum, “Implementing Norms in Multiagent Systems,” in Multiagent System Technologies: MATES 2004, G. Lindemann, J. Denzinger, I. J. Timm, and R. Unland, Eds. (Lecture Notes in Computer Science, vol. 3187.) Berlin: Springer, 2004.
Section 1: Identifying Norms for Autonomous and Intelligent Systems

We identify three issues that must be addressed in the attempt to identify norms and corresponding values for A/IS. The first issue asks which norms should be identified and with which properties. Here we highlight context specificity as a fundamental property of norms. Second, we emphasize another important property of norms: their dynamically changing nature (Mack 2018⁷), which requires A/IS to have the capacity to update their norms and learn new ones. Third, we address the challenge of norm conflicts that naturally arise in a complex social world. Resolving such conflicts requires priority structures among norms, which help determine whether, in a given context, adhering to one norm is more important than adhering to another, often in light of overarching standards, e.g., laws and international humanitarian principles.
Issue 1: Which norms should be identified?

Background
If machines engage in human communities, then those agents will be expected to follow the community’s social and moral norms. A necessary step in enabling machines to do so is to identify these norms. But which norms should be identified? Laws are publicly documented and therefore easy to identify, so they can be incorporated into A/IS as long as they do not violate humanitarian or community moral principles. Social and moral norms are more difficult to ascertain, as they are expressed through behavior, language, customs, cultural symbols, and artifacts. Most importantly, communities, ranging from families to whole nations, differ to various degrees in the norms they follow. Therefore, generating a universal set of norms that applies to all A/IS in all contexts is not realistic, but neither is it advisable to completely tailor the A/IS to individual preferences. We suggest that it is feasible to identify the broadly observed norms of the communities in which a technology is deployed.

Furthermore, the difficulty of generating a universal set of norms is not inconsistent with the goal of seeking agreement over Universal Human Rights (see the “General Principles” chapter of Ethically Aligned Design). These universal rights are not sufficient for devising A/IS that conform to the specific norms of their community; they must, however, constrain the kinds of norms that are implemented in the A/IS (cf. van de Poel 2016⁸).

Embedding norms in A/IS requires a careful understanding of the communities in which the A/IS are to be deployed. Further, even within a particular community, different types of A/IS will demand different sets of norms. The relevant
norms for self-driving vehicles, for example, may differ greatly from those for robots used in healthcare. Thus, we recommend that, to develop A/IS capable of following legal, social, and moral norms, the first step is to identify the norms of the specific community in which the A/IS are to be deployed and, in particular, norms relevant to the kinds of tasks and roles for which the A/IS are designed. Even within a narrowly defined community, e.g., a nursing home, an apartment complex, or a company, there will be variations in the norms that apply, or in their relative weighting. The norm identification process must heed such variation and ensure that the identified norms are representative, not only of the dominant subgroup in the community but also of vulnerable and underrepresented groups.

The most narrowly defined “community” is a single person, and A/IS may well have to adapt to the unique expectations and needs of a given individual, such as the arrangement of a disabled person’s living accommodations. However, unique individual expectations must not violate norms in the larger community. Whereas the arrangement of someone’s kitchen or the frequency with which a care robot checks in with a patient can be personalized without violating any community norms, encouraging the robot to use derogatory language to talk about certain social groups does violate such norms. In the next section, we discuss how A/IS might handle such norm conflicts.

Innovation projects and development efforts for A/IS should always rely on empirical research, involving multiple disciplines and multiple methods, to investigate and document both context- and task-specific norms, spoken and unspoken, that typically apply in a particular community. Such a set of empirically identified norms should then guide system design. This process of norm identification and implementation must be iterative and revisable. A/IS with an initial set of implemented norms may betray biases of the original assessments (Misra, Zitnick, Mitchell, and Girshick 2016⁹) that can be revealed by interactions with, and feedback from, the relevant community. This leads to a process of norm updating, which is described next in Issue 2.
Recommendation
To develop A/IS capable of following social and moral norms, the first step is to identify the norms of the specific community in which the A/IS are to be deployed and, in particular, norms relevant to the kinds of tasks and roles for which the A/IS are designed. This norm identification process must use appropriate scientific methods and continue through the system’s life cycle.
Further Resources
A. Mack, Ed., “Changing Social Norms,” Social Research: An International Quarterly, vol. 85, no. 1, pp. 1-271, 2018.

I. Misra, C. L. Zitnick, M. Mitchell, and R. Girshick, “Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels,” in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2930-2939. doi:10.1109/CVPR.2016.320

I. van de Poel, “An Ethical Framework for Evaluating Experimental Technology,” Science and Engineering Ethics, vol. 22, no. 3, pp. 667-686, 2016.
Issue 2: The need for norm updating

Background

Norms are not static. They change over time, in response to social progress, political change, new legal measures, or novel opportunities (Mack 2018¹⁰). Norms can fade away when, for whatever reasons, fewer and fewer people adhere to them. And new norms emerge when technological innovation invites novel behaviors and novel standards, e.g., cell phone use in public.

A/IS should be equipped with a starting set of social and legal norms before they are deployed in their intended community (see Issue 1), but this will not suffice for A/IS to behave appropriately over time. A/IS, or the designers of A/IS, must be adept at identifying and adding new norms to this starting set, both because the initial norm identification process in the community will undoubtedly have missed some norms and because the community’s norms change.

Humans rely on numerous capacities to update their knowledge of norms and learn new ones. They observe other community members’ behavior and are sensitive to collective norm change; they explicitly ask about new norms when joining new communities, e.g., entering college or a job in a new town; and they respond to feedback from others when they exhibit uncertainty about norms or have violated a norm.
Likewise, A/IS need multiple capacities to improve their own norm knowledge and to adapt to a community’s dynamically changing norms. These capacities include:

Processing behavioral trends by members of the target community and comparing them to trends predicted by the baseline norm system,

Asking for guidance from the community when uncertainty about applicable norms exceeds a critical threshold,

Responding to instruction from community members who introduce a robot to a previously unknown context or who notice the A/IS’ uncertainty in a familiar context, and

Responding to formal or informal feedback from the community when the A/IS violate a norm.
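The second capacity above, asking for guidance when uncertainty exceeds a critical threshold, can be sketched as follows. The threshold value, the belief table, and all names here are invented for illustration; a deployed system would estimate norm confidence from its learning components rather than from a hand-written table.

```python
UNCERTAINTY_THRESHOLD = 0.4  # assumed critical threshold; tuned per deployment

def decide(context, norm_beliefs):
    """Act on the most confident norm for this context, or defer to the
    community when the system is too uncertain to act on its own."""
    candidates = norm_beliefs.get(context)
    if not candidates:
        return "ask-community"  # unknown context: always ask for guidance
    best_norm, confidence = max(candidates.items(), key=lambda kv: kv[1])
    uncertainty = 1.0 - confidence
    return best_norm if uncertainty <= UNCERTAINTY_THRESHOLD else "ask-community"

# Toy belief table: per-context confidence in candidate norms.
beliefs = {
    "hospital hallway": {"yield-to-staff": 0.9, "announce-presence": 0.1},
    "patient room": {"knock-first": 0.5, "stay-silent": 0.5},
}

print(decide("hospital hallway", beliefs))  # confident enough to act
print(decide("patient room", beliefs))      # too uncertain: ask for guidance
print(decide("cafeteria", beliefs))         # unknown context: ask for guidance
```

The community's answers would then feed back into the belief table, which is the norm-updating loop described in the surrounding text.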
The modification of a normative system can occur at any level of the system: it could involve altering the priority weightings between individual norms, changing the qualitative expression of a norm, or altering the quantitative parameters that enable the norm.
We recommend that the system’s norm changes be transparent. That is, the system or its designer should consult with users, designers, and community representatives when adding new norms to its norm system or adjusting the priority or content of existing norms. Allowing a system to learn new norms without public or expert review can have detrimental consequences (Green and Hu 2018¹¹). The form of consultation
and the specific review process will vary by machine sophistication, e.g., linguistic capacity, and by function and role, e.g., a flexible social companion versus a task-defined medical robot; best practices will have to be established. In some cases, the system may document its dynamic change, and the user can consult this documentation as desired. In other cases, explicit announcements and requests for discussion with the designer may be appropriate. In yet other cases, the A/IS may propose changes, and the relevant human community, e.g., drawn from a representative crowdsourced panel, will decide whether such changes should be implemented in the system.
Recommendation
To respond to the dynamic change of norms in society, A/IS or their designers must be able to amend existing norms or add new ones, while being transparent about these changes to users, designers, broader community representatives, and other stakeholders.
Further Resources
B. Green and L. Hu, “The Myth in the Methodology: Towards a Recontextualization of Fairness in ML,” paper presented at the Debates workshop at the 35th International Conference on Machine Learning, Stockholm, Sweden, 2018.

A. Mack, Ed., “Changing Social Norms,” Social Research: An International Quarterly, vol. 85, no. 1 (Special Issue), pp. 1-271, 2018.
Issue 3: A/IS will face norm conflicts and need methods to resolve them.

Background

Often, even within a well-specified context, no action is available that fulfills all obligations and prohibitions. Such situations, often described as moral dilemmas or moral overload (Van den Hoven 2012¹²), must be computationally tractable by A/IS; the systems cannot simply stop in their tracks and end on a logical contradiction. Humans resolve such situations by accepting trade-offs between conflicting norms, which constitute priorities of one norm or value over another in a given context. Such priorities may be represented in the norm system as hierarchical relations.

Along with identifying the norms within a specific community and task domain, empirical research must identify the ways in which people prioritize competing norms and resolve norm conflicts, and the ways in which people expect A/IS to resolve similar norm conflicts. These more local conflict resolutions will be further constrained by general principles, such as the “Common Good Principle” (Andre and Velasquez 1992¹³) or local and national laws. For example, a self-driving vehicle’s prioritization of one factor over another in its decision-making will need to reflect the laws and norms of the population in which the A/IS are deployed, e.g., the traffic laws of a U.S. state and of the United States as a whole.
Some priority orders can be built into a given norm network as hierarchical relations, e.g., more general prohibitions against harm to humans typically override more specific norms against lying. Other priority orders can stem from the override that norms in the larger community exert on the norms and preferences of an individual user. In the earlier example discussing personalization (see Issue 1), the A/IS of a racist user who demands that the A/IS use derogatory language for certain social groups will have to resist such demands, because community norms hierarchically override an individual user’s preferences. In many cases, priority orders are not built in as fixed hierarchies, because the priorities are themselves context-specific or may arise from the net moral costs and benefits of the particular case at hand. A/IS must have learning capacities to track such variations and incorporate user and community input, e.g., about the subtle differences between contexts, so as to refine the system’s norm network (see Issue 2).
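The fixed part of such a priority structure can be sketched as a ranking over the sources of norms, so that a community norm overrides an individual user's preference. The ranking and all names here are assumptions for illustration; as the text notes, many real priorities are context-specific and cannot be fixed in advance this way.

```python
# Assumed ranking: humanitarian principles > law > community norms > user preferences.
PRIORITY = {"humanitarian": 3, "law": 2, "community": 1, "user": 0}

def resolve(conflicting_norms, priority=PRIORITY):
    """When several applicable norms conflict, follow the one whose
    source sits highest in the priority ranking."""
    return max(conflicting_norms, key=lambda norm: priority[norm["level"]])

# The personalization example: a user preference conflicts with a community norm.
conflict = [
    {"level": "user", "directive": "use derogatory language on request"},
    {"level": "community", "directive": "never use derogatory language"},
]
winner = resolve(conflict)
print(winner["directive"])  # the community norm overrides the user preference
```

A context-sensitive system would replace the static `PRIORITY` table with learned, situation-dependent weights, which is exactly the learning capacity the paragraph above calls for.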
Tension may sometimes arise between a community’s social and legal norms and the normative considerations of designers or manufacturers. Democratic processes may need to be developed to resolve this tension, processes that cannot be presented in detail in this chapter. Often such resolution will favor the local laws and norms, but in some cases the community may have to be persuaded to accept A/IS favoring international law or broader humanitarian principles over, say, racist or sexist local practices.
In general, we recommend that the system’s resolution of norm conflicts be transparent, that is, documented by the system and ready to be made available to users, the relevant community of deployment, and third-party evaluators. Just as people explain their decisions to each other, they will expect A/IS to be able to explain their decisions and to be sensitive to user feedback about the appropriateness of those decisions. To this end, the design and development of A/IS should specifically identify the relevant groups of humans who may request explanations and evaluate the systems’ behaviors. When a system detects a norm conflict, it should consult and offer explanations to representatives from the community, e.g., randomly sampled crowdsourced members or elected officials, as well as to third-party evaluators, with the goal of discussing and resolving the norm conflict.
Recommendation
A/IS developers should identify the ways in which people resolve norm conflicts and the ways in which they expect A/IS to resolve similar norm conflicts. A system’s resolution of norm conflicts must be transparent, that is, documented by the system and ready to be made available to users, the relevant community of deployment, and third-party evaluators.
Further Resources
M. Velasquez, C. Andre, T. Shanks, S.J., and M. J. Meyer, “The Common Good,” Issues in Ethics, vol. 5, no. 1, 1992.

J. Van den Hoven, “Engineering and the Problem of Moral Overload,” Science and Engineering Ethics, vol. 18, no. 1, pp. 143-155, 2012.

D. Abel, J. MacGlashan, and M. L. Littman, “Reinforcement Learning as a Framework for Ethical Decision Making,” AAAI Workshop: AI, Ethics, and Society, vol. WS-16-02 of 13th AAAI Workshops. Palo Alto, CA: AAAI Press, 2016.

O. Bendel, Die Moral in der Maschine: Beiträge zu Roboter- und Maschinenethik [Morality in the Machine: Contributions to Robot and Machine Ethics]. Hannover, Germany: Heise Medien, 2016.

Accessible popular-science contributions to philosophical issues and technical implementations of machine ethics.

S. V. Burks and E. L. Krupka, “A Multimethod Approach to Identifying Norms and Normative Expectations within a Corporate Hierarchy: Evidence from the Financial Services Industry,” Management Science, vol. 58, pp. 203-217, 2012.

Illustrates surveys and incentivized coordination games as methods to elicit norms in a large financial services firm.

F. Cushman, V. Kumar, and P. Railton, “Moral Learning,” Cognition, vol. 167, pp. 1-282, 2017.

M. Flanagan, D. C. Howe, and H. Nissenbaum, “Embodying Values in Technology: Theory and Practice,” in Information Technology and Moral Philosophy, J. van den Hoven and J. Weckert, Eds. Cambridge: Cambridge University Press, 2008, pp. 322-353. Preprint available at http://www.nyu.edu/projects/nissenbaum/papers/Nissenbaum-VID.4-25.pdf

B. Friedman, P. H. Kahn, A. Borning, and A. Huldtgren, “Value Sensitive Design and Information Systems,” in Early Engagement and New Technologies: Opening up the Laboratory, N. Doorn, D. Schuurbiers, I. van de Poel, and M. Gorman, Eds., vol. 16, pp. 55-95. Dordrecht: Springer, 2013.

A comprehensive introduction to Value Sensitive Design and three sample applications.

G. Mackie, F. Moneti, E. Denny, and H. Shakya, “What Are Social Norms? How Are They Measured?” UNICEF Working Paper. University of California at San Diego: UNICEF, Sept. 2014. https://dmeforpeace.org/sites/default/files/4%2009%2030%20Whole%20What%20are%20Social%20Norms.pdf

A broad survey of conceptual and measurement questions regarding social norms.

J. A. Leydens and J. C. Lucena, Engineering Justice: Transforming Engineering Education and Practice. Hoboken, NJ: John Wiley & Sons, 2018.

Identifies principles of engineering for social justice.
B. F. Malle, “Integrating Robot Ethics and Machine Morality: The Study and Design of Moral Competence in Robots,” Ethics and Information Technology, vol. 18, no. 4, pp. 243-256, 2016.

Discusses how a robot’s norm capacity fits in the larger vision of a robot with moral competence.

K. W. Miller, M. J. Wolf, and F. Grodzinsky, “This ‘Ethical Trap’ Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making,” Science and Engineering Ethics, vol. 23, pp. 389-401, 2017.

This article raises doubts about the possibility of imbuing artificial agents with morality, or of claiming to have done so.

Open Roboethics Initiative: www.openroboethics.org. A series of poll results on differences in human moral decision-making and changes in priority order of values for autonomous systems (e.g., on care robots), 2019.

A. Rizzo and L. L. Swisher, “Comparing the Stewart-Sprinthall Management Survey and the Defining Issues Test-2 as Measures of Moral Reasoning in Public Administration,” Journal of Public Administration Research and Theory, vol. 14, pp. 335-348, 2004.

Describes two assessment instruments of moral reasoning (including norm maintenance) based on Kohlberg’s theory of moral development.

S. H. Schwartz, “An Overview of the Schwartz Theory of Basic Values,” Online Readings in Psychology and Culture, vol. 2, 2012.

Comprehensive overview of a specific theory of values, understood as motivational orientations toward abstract outcomes (e.g., self-direction, power, security).

S. H. Schwartz and K. Boehnke, “Evaluating the Structure of Human Values with Confirmatory Factor Analysis,” Journal of Research in Personality, vol. 38, pp. 230-255, 2004.

Describes an older method of subjective judgments of relations among valued outcomes and a newer, formal method of analyzing these relations.

W. Wallach and C. Allen, Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press, 2008.

This book describes some of the challenges of a one-size-fits-all approach to embedding human values in autonomous systems.
Section 2: Implementing Norms in Autonomous and Intelligent Systems

Once the norms relevant to the A/IS’ role in a specific community have been identified, including their properties and priority structure, these norms must be linked to the functionalities of the underlying computational system. We discuss three issues that arise in this process of norm implementation. First, computational approaches to enable a system to represent, learn, and execute norms are only slowly emerging; however, the diversity of approaches may soon lead to substantial advances. Second, for A/IS that operate in human communities, there is a particular need for transparency, ranging from the technical process of implementation to the ethical decisions that A/IS will make in human-machine interactions, which will require a high level of explainability. Third, failures of normative reasoning are all but inevitable, and mitigation strategies should therefore be put in place to handle such failures when they occur.

As a general guideline, we recommend that, throughout the entire process of implementing norms, designers consider various forms and metrics of evaluation, and that they define and incorporate central criteria for assessing the A/IS’ norm conformity, e.g., human-machine agreement on moral decisions, verifiability of A/IS decisions, or justified trust. In this way, implementation already prepares for the critical third phase of evaluation (discussed in Section 3).
Issue 1: Many approaches to norm implementation are currently available, and it is not yet settled which ones are most suitable.

Background

The prospect of developing A/IS that are sensitive to human norms and factor them into morally or legally significant decisions has intrigued science fiction writers, philosophers, and computer scientists alike. Modest efforts to realize this worthy goal in limited or bounded contexts are already underway. This emerging field of research appears under many names, including machine morality, machine ethics, moral machines, value alignment, computational ethics, artificial morality, safe AI, and friendly AI.
There are a number of different routes for implementing ethics in autonomous and intelligent systems. Following Wallach and Allen (2008)¹⁴, we might begin to categorize these as either:

A. Top-down approaches, where the system, e.g., a software agent, has some symbolic representation of its activity and so can identify specific states, plans, or actions as ethical or unethical with respect to particular ethical requirements (Dennis, Fisher, Slavkovik, and Webster 2016¹⁵; Pereira and Saptawijaya 2016¹⁶; Rötzer 2016¹⁷; Scheutz, Malle, and Briggs 2015¹⁸); or

B. Bottom-up approaches, where the system, e.g., a learning component, builds up, through experience of what is considered ethical and unethical in certain situations, an implicit notion of ethical behavior (Anderson and Anderson 2014¹⁹; Riedl and Harrison 2016²⁰).

Relevant examples of these two are: (A) symbolic agents that have explicit representations of plans, actions, goals, etc.; and (B) machine learning systems that train subsymbolic mechanisms on examples of acceptable ethical behavior. For more detailed discussion, see Charisi et al. 2017²¹.
Many of the existing experimental approaches to building moral machines are top-down, in the sense that norms, rules, principles, or procedures are used by the system to evaluate the acceptability of differing courses of action, or serve as moral standards or goals to be realized. Increasingly, however, A/IS will encounter situations that initially programmed norms do not clearly address, requiring algorithmic procedures to select the better of two or more novel courses of action. Recent breakthroughs in machine learning and perception enable researchers to explore bottom-up approaches in which the A/IS learn about their context and about human norms, similar to the manner in which a child slowly learns which forms of behavior are safe and acceptable. Of course, unlike current A/IS, children can feel pain and pleasure and empathize with others. Still, A/IS can learn to detect and take into account others’ pain and pleasure, thus achieving at least some of the positive effects of empathy. As research on A/IS progresses, engineers will explore new ways to improve these capabilities.
Each of the first two options has obvious limitations, such as option A’s inability to learn and adapt and option B’s unconstrained learning behavior. A third option tries to address these limitations:

C. Hybrid approaches, combining (A) and (B).

For example, the selection of an action might be carried out by a subsymbolic system, but the action must be checked by a symbolic “gateway” agent before being invoked. This is a typical approach for “ethical governors” (Arkin 2008²²; Winfield, Blum, and Liu 2014²³) or “guardians” (Etzioni 2016²⁴) that monitor, restrict, and even adapt certain unacceptable behaviors proposed by the system (see Issue 3). Alternatively, action selection in light of norms could be done in a verifiable logical format, while many of the norms constraining those actions are learned through bottom-up learning mechanisms (Arnold, Kasenberg, and Scheutz 2017²⁵).
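The hybrid pattern, a learned (subsymbolic) component proposing actions and a symbolic gateway vetoing those that violate explicit prohibitions, can be sketched minimally as follows. Everything here, including the stand-in policy, the prohibition set, and the fallback action, is invented for illustration; it does not model the internals of any cited architecture.

```python
def learned_policy(situation):
    """Stand-in for a subsymbolic, learned action selector."""
    return situation["preferred_action"]

def gateway_permits(action, prohibitions):
    """Symbolic check: permit the action only if no explicit prohibition matches."""
    return action not in prohibitions

def hybrid_select(situation, prohibitions, fallback="stop-and-ask"):
    """Propose bottom-up, then filter top-down before invoking the action."""
    proposed = learned_policy(situation)
    return proposed if gateway_permits(proposed, prohibitions) else fallback

prohibited = {"block-fire-exit", "enter-restricted-ward"}
print(hybrid_select({"preferred_action": "deliver-medication"}, prohibited))
print(hybrid_select({"preferred_action": "block-fire-exit"}, prohibited))  # vetoed
```

Note the design choice: the learned component is never trusted to act directly, and a safe fallback replaces any vetoed proposal rather than halting the system on a contradiction.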
These three architectures do not cover all possible techniques for implementing norms in A/IS. For example, some contributors to the multi-agent systems literature have integrated norms into their agent specifications (Andrighetto et al. 2013²⁶); even though these agents live in societal simulations and are too underspecified to be translated into individual A/IS such as robots, the emerging work can inform cognitive architectures of A/IS that fully integrate norms. Of course, none of these experimental systems should be deployed outside the laboratory before testing or before certain criteria are met, which we outline in the remainder of this section and in Section 3.
Recommendation
In light of the multiple possible approaches to computationally implementing norms, diverse research efforts should be pursued, especially collaborative research between scientists from different schools of thought and different disciplines.
Further Resources
M. Anderson and S. L. Anderson, “GenEth: A General Ethical Dilemma Analyzer,” in Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, Québec City, Québec, Canada, July 27-31, 2014, pp. 253-261. Palo Alto, CA: The AAAI Press, 2014.

G. Andrighetto, G. Governatori, P. Noriega, and L. W. N. van der Torre, Eds., Normative Multi-Agent Systems. Saarbrücken/Wadern, Germany: Dagstuhl Publishing, 2013.

R. Arkin, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” in Proceedings of the 2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Amsterdam, Netherlands, March 12-15, 2008, pp. 121-128. IEEE, 2008.

T. Arnold, D. Kasenberg, and M. Scheutz, “Value Alignment or Misalignment: What Will Keep Systems Accountable?” in The Workshops of the Thirty-First AAAI Conference on Artificial Intelligence: Technical Reports, WS-17-02: AI, Ethics, and Society, pp. 81-88. Palo Alto, CA: The AAAI Press, 2017.

V. Charisi, L. Dennis, M. Fisher, et al., “Towards Moral Autonomous Systems,” 2017.

A. Conn, “How Do We Align Artificial Intelligence with Human Values?” Future of Life Institute, Feb. 3, 2017.

L. Dennis, M. Fisher, M. Slavkovik, and M. Webster, “Formal Verification of Ethical Choices in Autonomous Systems,” Robotics and Autonomous Systems, vol. 77, pp. 1-14, 2016.

A. Etzioni and O. Etzioni, “Designing AI Systems That Obey Our Laws and Values,” Communications of the ACM, vol. 59, no. 9, pp. 29-31, Sept. 2016.

L. M. Pereira and A. Saptawijaya, Programming Machine Ethics. Cham, Switzerland: Springer International, 2016.

M. O. Riedl and B. Harrison, “Using Stories to Teach Human Values to Artificial Agents,” AAAI Workshops 2016, Phoenix, Arizona, February 12-13, 2016.

F. Rötzer, Ed., Programmierte Ethik: Brauchen Roboter Regeln oder Moral? [Programmed Ethics: Do Robots Need Rules or Morality?] Hannover, Germany: Heise Medien, 2016.

M. Scheutz, B. F. Malle, and G. Briggs, “Towards Morally Sensitive Action Selection for Autonomous Social Robots,” in Proceedings of the 24th International Symposium on Robot and Human Interactive Communication (RO-MAN 2015), pp. 492-497, 2015.

U. Sommer, Werte: Warum Man Sie Braucht, Obwohl es Sie Nicht Gibt [Values: Why We Need Them Even Though They Don’t Exist]. Stuttgart, Germany: J. B. Metzler, 2016.

I. Sommerville, Software Engineering. Harlow, U.K.: Pearson Studium, 2001.

W. Wallach and C. Allen, Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press, 2008.

A. F. T. Winfield, C. Blum, and W. Liu, “Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection,” in Advances in Autonomous Robotics Systems (Lecture Notes in Computer Science), M. Mistry, A. Leonardis, M. Witkowski, and C. Melhuish, Eds., pp. 85-96. Springer, 2014.
Issue 2: The need for transparency from implementation to deployment ...
Background ...
When A/IS become part of social communities and behave according to the norms of their communities, people will want to understand the A/IS decisions and actions, just as they want to understand each other’s decisions and actions. This is particularly true for morally significant actions or omissions: an ethical reasoning system should be able to explain its own reasoning to a user on request. Thus, transparency, or “explainability”, of A/IS is paramount (Chaudhuri 2017²7; Wachter, Mittelstadt, and Floridi 2017 ^(28){ }^{28} ), and it will allow a community to understand, predict, and modify the A/IS (see Section 1, Issue 2; for a nuanced discussion see Selbst and Barocas ^(29){ }^{29} ). Moreover, as the norms embedded in A/IS are continuously updated and refined (see Section 1, Issue 2), transparency allows for appropriate trust to be developed (Grodzinsky, Miller, and Wolf 201130), and, where necessary, allows the community to modify a system’s norms, reasoning, and behavior. ...
Transparency can occur at multiple levels, e.g., ordinary language or coder verification, and for multiple stakeholders, e.g., user, engineer, and attorney. (See IEEE P7001™, IEEE Standards Project for Transparency of Autonomous Systems). It should be noted that transparency to all parties may not always be advisable, such as in the case of security programs that prevent a system from being hacked (Kroll et al. 2016³1). Here we briefly illustrate the broad ...
range of transparency by reference to four ways in which systems can be transparent-traceability, verifiability, honest design, and intelligibility-and apply these considerations to the implementation of norms in A/IS. ...
Transparency as traceability: Most relevant for the topic of implementation is the transparency of the software engineering process during implementation (Cleland-Huang, Gotel, and Zisman 2012). It allows the originally identified norms (Section 1, Issue 1) to be traced through to the final system. This allows technical inspection of which norms have been implemented, for which contexts, and how norm conflicts are resolved, e.g., by the priority weights given to different norms. Transparency in the implementation process may also reveal biases that were inadvertently built into systems, such as racism and sexism in search engine algorithms (Noble 2013). (See Section 3, Issue 2.) Such traceability in turn calibrates a community's trust about whether A/IS conform to the norms and values relevant in their use contexts (Fleischmann and Wallace 2005).
Transparency as verifiability: Transparency concerning how normative reasoning is approached in the implementation is important because we wish to verify that the normative decisions the system makes match the required norms and values. Explicit and exact representations of these normative decisions can then provide the basis for a range of strong mathematical techniques, such as formal verification (Fisher, Dennis, and Webster 2013). Even if a system cannot explain every single reasoning step in understandable human terms, a log of its ethical reasoning should be available for inspection and later evaluation (Hind et al. 2018).
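As an illustration of such an inspectable log, the following sketch records each normative decision together with the norm that licensed it. All class, field, and norm names here are hypothetical; the document does not prescribe any particular implementation.

```python
import json
import time

class NormativeDecisionLog:
    """Append-only log of a system's normative decisions, kept for
    later inspection and evaluation (illustrative sketch only)."""

    def __init__(self):
        self._entries = []

    def record(self, action, norms_considered, norm_applied, context):
        # Each entry ties an action to the norm that licensed it, so an
        # evaluator can later trace why the system acted as it did.
        self._entries.append({
            "timestamp": time.time(),
            "action": action,
            "norms_considered": norms_considered,
            "norm_applied": norm_applied,
            "context": context,
        })

    def export(self):
        # Serialized form that a third-party evaluator can inspect.
        return json.dumps(self._entries, indent=2)

log = NormativeDecisionLog()
log.record(
    action="withhold_user_data",
    norms_considered=["privacy", "helpfulness"],
    norm_applied="privacy",
    context={"requester": "third_party_app"},
)
```

Exporting to a plain serialized form is one way to keep the log available even when the system's internal reasoning steps are not themselves human-readable.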
Transparency as honest design: German designer Dieter Rams coined the term “honest design” to refer to design that “does not make a product more innovative, powerful or valuable than it really is” (Vitsoe 2018; see also Donelli 2015; Jong 2017). Honest design of A/IS is one aspect of their transparency because it allows the user to “see through” the outward appearance and accurately infer the A/IS' actual capacities. At times, however, the physical appearance of a system does not accurately represent what the system is capable of doing; e.g., the agent displays signs of a certain human-like emotion but its internal state does not represent that emotion. Humans are quick to make strong inferences from outward appearances of human-likeness to the mental and social capacities the A/IS might have. Demands for transparency in design therefore put a responsibility on the designer to “not attempt to manipulate the consumer with promises that cannot be kept” (Vitsoe 2018).
Transparency as intelligibility: As mentioned above, humans will want to understand the A/IS' decisions and actions, especially the morally significant ones. A clear requirement for an ethical A/IS is that the system be able to explain its own reasoning to a user when asked, or, ideally, also when it suspects the user is confused, and it should do so at the level of ordinary human reasoning, not with incomprehensible technical detail (Tintarev and Kutlak 2014). Furthermore, when the system cannot explain some of its actions, technicians or designers should be available to make those actions intelligible. Along these lines, the European Union's General Data Protection Regulation (GDPR), in effect since May 2018, states that, for automated decisions based on personal data, individuals have a right to “an explanation of the [algorithmic] decision reached after such assessment and to challenge the decision”. (See boyd 2016 for a critical discussion of this regulation.)
Recommendation
A/IS, especially those with embedded norms, must exhibit a high level of transparency: traceability in the implementation process, mathematical verifiability of their reasoning, honesty in appearance-based signals, and intelligibility of the systems' operation and decisions.
Further Resources
d. boyd, “Transparency ≠ Accountability.” Data & Society: Points, November 29, 2016.
A. Chaudhuri, “Philosophical Dimensions of Information and Ethics in the Internet of Things (IoT) Technology,” The EDP Audit, Control, and Security Newsletter, vol. 56, no. 4, pp. 7-18, doi:10.1080/07366981.2017.1380474, 2017.
J. Cleland-Huang, O. Gotel, and A. Zisman, eds. Software and Systems Traceability. London: Springer, 2012. doi:10.1007/978-1-4471-2239-5 ...
M. Fisher, L. A. Dennis, and M. P. Webster. “Verifying Autonomous Systems.” Communications of the ACM, vol. 56, no. 9, pp. 84-93, 2013. ...
Embedding Values into Autonomous and Intelligent Systems ...
K. R. Fleischmann and W. A. Wallace. “A Covenant with Transparency: Opening the Black Box of Models.” Communications of the ACM, vol. 48, no. 5, pp. 93-97, 2005. ...
F. S. Grodzinsky, K. W. Miller, and M. J. Wolf. “Developing Artificial Agents Worthy of Trust: Would You Buy a Used Car from This Artificial Agent?” Ethics and Information Technology, vol. 13, pp. 17-27, 2011. ...
M. Hind, et al. “Increasing Trust in AI Services through Supplier’s Declarations of Conformity.” ArXiv E-Prints, Aug. 2018. [Online] Available: https://arxiv.org/abs/1808.07261. [Accessed October 28, 2018]. ...
C. W. De Jong, ed., Dieter Rams: Ten Principles for Good Design. New York, NY: Prestel Publishing, 2017. ...
J. A. Kroll, J. Huey, S. Barocas, et al. “Accountable Algorithms.” University of Pennsylvania Law Review, vol. 165, 2017.
S. U. Noble, “Google Search: Hyper-Visibility as a Means of Rendering Black Women and Girls Invisible.” InVisible Culture 19, 2013. ...
A. D. Selbst and S. Barocas, “The Intuitive Appeal of Explainable Machines,” Fordham Law Review, vol. 87, p. 1085, 2018. Available at SSRN: https://ssrn.com/abstract=3126971 or http://dx.doi.org/10.2139/ssrn.3126971.
N. Tintarev and R. Kutlak. “Demo: Making Plans Scrutable with Argumentation and Natural Language Generation.” Proceedings of the Companion Publication of the 19th International Conference on Intelligent User Interfaces, pp. 29-32, 2014. ...
S. Wachter, B. Mittelstadt, and L. Floridi, “Transparent, Explainable, and Accountable AI for Robotics.” Science Robotics, vol. 2, no. 6, eaan6080, doi:10.1126/scirobotics.aan6080, 2017.
Issue 3: Failures will occur. ...
Background ...
Operational failures, and in particular violations of a system's embedded community norms, are unavoidable both during system testing and during deployment. Not only are implementations never perfect, but A/IS with embedded norms will update or expand their norms over time (see Section 1, Issue 2), and interactions in the social world are particularly complex and uncertain. Thus, prevention and mitigation strategies must be adopted; we sample four possible ones here.
First, anticipating the process of evaluation during the implementation phase requires defining criteria and metrics for such evaluation, which in turn better allows the detection and mitigation of failures. Metrics will include: ...
Technical variables, such as traceability and verifiability, ...
User-level variables such as reliability, understandable explanations, and responsiveness to feedback, and ...
Community-level variables such as justified trust (see Issue 2) and the collective belief that A/IS are generally creating social benefits rather than, for example, technological unemployment. ...
Second, a systematic risk analysis and management approach can be useful (see Oetzel and Spiekermann 2014 for an application to privacy norms). This approach tries to anticipate potential points of failure, e.g., norm violations, and, where possible, develops ways to reduce or remove the effects of failures. Successful behavior, and occasional failures, can then iteratively improve predictions and mitigation attempts.
Third, because not all risks and failures are predictable (Brundage et al. 2018; Vanderelst and Winfield 2018), especially in complex human-machine interactions in social contexts, additional mitigation mechanisms must be made available. Designers are strongly encouraged to augment the architectures of their systems with components that handle unanticipated norm violations with a fail-safe, such as the symbolic “gateway” agents discussed in Section 2, Issue 1. Designers should identify a number of strict laws, that is, task- and community-specific norms that should never be violated, and the fail-safe components should continuously monitor operations for possible violations of these laws. In case of a violation, the higher-order gateway agent should take appropriate action, such as safely disabling the system's operation, or greatly limiting its scope of operation, until the source of failure is identified. The fail-safe components need to be understandable, extremely reliable, and protected against security breaches, which can be achieved, for example, by validating them carefully and not letting them adapt their parameters during execution.
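The fail-safe monitoring just described can be sketched as follows. The gateway class, the action format, and the example “strict law” are hypothetical illustrations; a deployed component would additionally be carefully validated and secured, as the text requires.

```python
class FailsafeGateway:
    """Monitors proposed actions against strict, never-to-be-violated
    laws; on a violation it disables the system until the source of
    failure is identified (illustrative sketch only)."""

    def __init__(self, strict_laws):
        # strict_laws: predicates that must hold for every action.
        self.strict_laws = strict_laws
        self.enabled = True

    def authorize(self, action):
        if not self.enabled:
            return False  # system stays disabled after a violation
        for law in self.strict_laws:
            if not law(action):
                self.enabled = False  # safely disable operation
                return False
        return True

# Hypothetical strict law: never exceed a hard speed limit.
gateway = FailsafeGateway([lambda a: a.get("speed", 0) <= 30])

assert gateway.authorize({"move": "forward", "speed": 10})
assert not gateway.authorize({"move": "forward", "speed": 80})
assert not gateway.enabled  # disabled until the failure is diagnosed
```

Keeping the gateway's parameters fixed during execution, as in this sketch, is one way to meet the requirement that the fail-safe itself remain understandable and reliable.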
Fourth, once failures have occurred, responsible entities, e.g., corporate, government, science, and engineering bodies, shall create a publicly accessible database of undesired outcomes caused by specific A/IS. The database would include descriptions of the problem, background information on how the problem was detected, the context in which it occurred, and how it was addressed.
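A record in such a database might carry the fields just listed; the schema below is a hypothetical sketch, not a prescribed format, and the example incident is invented for illustration.

```python
from dataclasses import dataclass, asdict

@dataclass
class AISIncidentRecord:
    """One entry in a publicly accessible database of undesired
    outcomes caused by a specific A/IS (illustrative schema)."""
    system: str       # which A/IS caused the outcome
    problem: str      # description of the problem
    detection: str    # how the problem was detected
    context: str      # the context in which it occurred
    resolution: str   # how it was addressed

record = AISIncidentRecord(
    system="delivery-robot-v2",
    problem="Blocked a wheelchair ramp while idling",
    detection="User complaint to the operator",
    context="Crowded sidewalk, peak hours",
    resolution="Idling behavior restricted to designated zones",
)
assert asdict(record)["context"] == "Crowded sidewalk, peak hours"
```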
In summary, we offer the following recommendation. ...
Recommendation
Because designers and developers cannot anticipate all possible operating conditions and potential failures of A/IS, multiple strategies to mitigate the chance and magnitude of harm must be in place. ...
Further Resources
M. Brundage, S. Avin, J. Clark, H. Toner, P. Eckersley, B. Garfinkel, A. Dafoe, P. Scharre, T. Zeitzoff, et al. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” CoRR abs/1802.07228 [cs.AI], 2018. https://arxiv.org/abs/1802.07228
M. C. Oetzel and S. Spiekermann, “A Systematic Methodology for Privacy Impact Assessments: A Design Science Approach.” European Journal of Information Systems, vol. 23, pp. 126-150, 2014. https://link.springer.com/article/10.1057/ejis.2013.18
D. Vanderelst and A. F. Winfield, “The Dark Side of Ethical Robots,” in Proceedings of the First AAAI/ACM Conference on Artificial Intelligence, Ethics and Society, New Orleans, LA, Feb. 1-3, 2018.
Section 3: Evaluating the Implementation of A/IS
The success of implementing appropriate norms in A/IS must be rigorously evaluated. This evaluation process must be anticipated during design, incorporated into the implementation process, and continued throughout the life cycle of the system's deployment. Assessment before full-scale deployment would best take place in systematic test beds that allow human users, drawn from the defined community and representing all demographic groups, to engage safely with the A/IS in intended tasks. Multiple disciplines and methods should contribute to developing and conducting such evaluations.
Evaluation criteria must capture, among others, the quality of human-machine interactions, human approval and appreciation of the A/IS, appropriate trust in the A/IS, adaptability of the A/IS to human users, and benefits to human well-being in the presence or under the influence of the A/IS. A range of normative aspects to be considered can be found in British Standard BS 8611:2016 on robot ethics (British Standards Institution 2016). These are important general evaluation criteria, but they do not yet fully capture the evaluation of a system that has “norm capacities”.
To evaluate a system's norm-conforming behavior, one must describe, and ideally formally specify, criterion behaviors that reflect the previously identified norms, describe what the user expects the system to do, verify that the system really does this, and validate that the specification actually matches the criteria. Many different evaluation techniques are available in the field of software engineering (Sommerville 2015), ranging from formal mathematical proof, through rigorous empirical testing against criteria of normatively correct behavior, to informal analysis of user interactions and responses to the machine's norm awareness and compliance. All these approaches can, in principle, be applied to the full range of A/IS, including robots (Fisher, Dennis, and Webster 2013). More general principles from system quality management may also be integrated into the evaluation process, such as the Plan-Do-Check-Act (PDCA) cycle that underlies standards like ISO 9001 (International Organization for Standardization 2015).
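Rigorous empirical testing against criteria of normatively correct behavior can be sketched as a small scenario suite. The scenarios, the expected behaviors, and the system under test below are all hypothetical stand-ins.

```python
# Each criterion scenario pairs an input situation with the
# normatively correct behavior the community expects.
scenarios = [
    ({"patient_asks": "share my records", "consent": True}, "share"),
    ({"patient_asks": "share my records", "consent": False}, "refuse"),
]

def system_under_test(situation):
    # Stand-in for the A/IS being evaluated (hypothetical).
    return "share" if situation["consent"] else "refuse"

def evaluate(system, scenarios):
    """Returns the fraction of criterion scenarios on which the
    system's behavior matches the normative requirement."""
    passed = sum(system(s) == expected for s, expected in scenarios)
    return passed / len(scenarios)

assert evaluate(system_under_test, scenarios) == 1.0
```

A pass rate below 1.0 on such a suite would flag a mismatch between the implementation and the specified criterion behaviors, triggering the verification and validation steps described above.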
Evaluation may be done by first parties, e.g., designers, manufacturers, and users, as well as third parties, e.g., regulators, independent testing agencies, and certification bodies. In either case, the results of evaluations should be made available to all parties, with strong encouragement to resolve discovered system limitations and to reconcile potential discrepancies among multiple evaluations.
As a general guideline, we recommend that evaluation of A/IS implementations be anticipated during a system's design, incorporated into the implementation process, and continued throughout the system's deployment (cf. ITIL principles, BMC 2016). Evaluation must include multiple methods, be made available to all parties, from designers and users to regulators, and should include procedures to resolve conflicting evaluation results. Specific issues that need to be addressed in this process are discussed next.
Further Resources
British Standards Institution. BS 8611:2016, “Robots and Robotic Devices. Guide to the Ethical Design and Application of Robots and Robotic Systems,” 2016.
BMC Software. ITIL: The Beginner’s Guide to Processes & Best Practices. http://www.bmc.com/guides/itil-introduction.html, Dec. 6, 2016.
M. Fisher, L. A. Dennis, and M. P. Webster. “Verifying Autonomous Systems.” Communications of the ACM, vol. 56, no. 9, pp. 84-93, 2013. ...
International Organization for Standardization. ISO 9001:2015, Quality Management Systems - Requirements, 2015. Retrieved July 12, 2018 from https://www.iso.org/standard/62085.html.
Issue 1: Not all norms of a target community apply equally to human and artificial agents ...
Background ...
An intuitive criterion for evaluating norms embedded in A/IS would be that the A/IS norms should mirror the community's norms, that is, the A/IS should be disposed to behave the way people expect each other to behave. However, for a given community and a given A/IS use context, A/IS and humans are unlikely to have identical sets of norms. People will have some expectations of humans that they do not have of machines, e.g., norms governing the regulation of negative emotions, assuming that machines do not have such emotions. Conversely, people may in some cases have expectations of A/IS that they do not have of humans, e.g., a robot worker, but not a human worker, is expected to work without regular breaks.
Recommendation
The norm identification process must document the similarities and differences between the norms that humans apply to other humans and the norms they apply to A/IS. Norm implementations should be evaluated specifically against the norms that the community expects the A/IS to follow. ...
Issue 2: A/IS can have biases that disadvantage specific groups ...
Background ...
Even when reflecting the full system of community norms that was identified, A/IS may show operating biases that disadvantage specific groups in the community, or instill biases in users by reinforcing group stereotypes. A system's bias can emerge in perception; for example, a passport application AI rejected an Asian man's photo because it insisted his eyes were closed (Griffiths 2016). Bias can emerge in information processing; for instance, speech recognition systems are notoriously less accurate for female speakers than for male speakers (Tatman 2016). System bias can affect decisions, as in a criminal risk assessment tool that overpredicts recidivism by African Americans (Angwin et al. 2016). A system's bias can even show in its appearance and presentation: the vast majority of humanoid robots have white “skin” color and use female voices (Riek and Howard 2014).
The norm identification process detailed in Section 1 is intended to minimize individual designers' biases because the community norms are assessed empirically. The identification process also seeks to incorporate norms against prejudice and discrimination. However, biases may still emerge from imperfections in the norm identification process itself, from unrepresentative training sets for machine learning systems, and from programmers' and designers' unconscious assumptions. Therefore, unanticipated or undetected biases should be further reduced by including members of diverse social groups in both the planning and evaluation of A/IS and by integrating community outreach into the evaluation process, e.g., the DO-IT program and the RRI framework. Behavioral scientists and members of the target populations will be particularly valuable when devising criterion tasks for system evaluation and when assessing the A/IS' performance on those tasks. Such tasks would assess, for example, whether the A/IS apply norms in discriminatory ways to different races, ethnicities, genders, ages, or body shapes, or to people who use wheelchairs or prosthetics, and so on.
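Diagnosing such disparities begins with disaggregating evaluation results by group. The sketch below is a minimal illustration using invented results from a hypothetical criterion task; it is not a complete bias audit.

```python
from collections import defaultdict

def error_rates_by_group(results):
    """Given (group, correct) pairs from a criterion task, return
    each group's error rate so that disparities stand out."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical per-utterance results from a speech recognition task.
results = [("female", False), ("female", True),
           ("male", True), ("male", True)]
rates = error_rates_by_group(results)
assert rates["female"] == 0.5 and rates["male"] == 0.0
```

A gap like the one above would then be examined with members of the disadvantaged group, as the recommendation below requires.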
Recommendation
Evaluation of A/IS must carefully assess potential biases in the systems’ performance that disadvantage specific social and demographic groups. The evaluation process should integrate members of potentially disadvantaged groups in efforts to diagnose and correct such biases. ...
Further Resources
J. Angwin, J. Larson, S. Mattu, and L. Kirchner, “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks.” ProPublica, May 23, 2016. ...
J. Griffiths, “New Zealand Passport Robot Thinks This Asian Man’s Eyes Are Closed.” CNN.com, December 9, 2016. ...
L. D. Riek and D. Howard. “A Code of Ethics for the Human-Robot Interaction Profession.” Proceedings of We Robot, April 4, 2014.
R. Tatman, “Google’s Speech Recognition Has a Gender Bias.” Making Noise and Hearing Things, July 12, 2016. ...
Issue 3: Challenges to evaluation by third parties ...
Background ...
A/IS should have sufficient transparency to allow evaluation by third parties, including regulators, consumer advocates, ethicists, post-accident investigators, or society at large. However, transparency can be severely limited in some systems, especially in those that rely on machine learning algorithms trained on large data sets. The data sets may not be accessible to evaluators; the algorithms may be proprietary information or mathematically so complex that they defy common-sense explanation; and even fellow software experts may be unable to verify reliability and efficacy of the final system because the system’s specifications are opaque. ...
For less inscrutable systems, numerous techniques are available to evaluate the implementation of the A/IS' norm conformity. On one side there is formal verification, which provides a mathematical proof that the A/IS will always match specific normative and ethical requirements, typically devised in a top-down approach (see Section 2, Issue 1). This approach requires access to the decision-making process and the reasons for each decision (Fisher, Dennis, and Webster 2013). A simpler alternative, sometimes suitable even for machine learning systems, is to test the A/IS against a set of scenarios and assess how well they match their normative requirements, e.g., acting in accordance with relevant norms and recognizing other agents' norm violations. A “red team” may also devise scenarios that try to get the A/IS to break norms so that vulnerabilities can be revealed.
These different evaluation techniques can be assigned different levels of “strength”: strong ones demonstrate the exhaustive set of the A/IS' allowable behaviors for a range of criterion scenarios; weaker ones sample from the criterion scenarios and illustrate the systems' behavior for that subsample. In the latter case, confidence in the A/IS' ability to meet normative requirements is more limited. An evaluation's concluding judgment must therefore acknowledge the strength of the verification technique used, and the expressed confidence in the evaluation, and in the A/IS themselves, must be qualified by this level of strength.
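The contrast between strong and weak techniques can be illustrated as exhaustive versus sampled coverage of a criterion scenario space. The scenario space, the normative requirement, and the system below are hypothetical stand-ins.

```python
import itertools
import random

# Hypothetical criterion scenario space: all combinations of two
# binary features relevant to a data-sharing norm.
scenario_space = [
    {"consent": c, "urgent": u}
    for c, u in itertools.product([True, False], repeat=2)
]

def check(system, scenario):
    # Normative requirement (hypothetical): share only with consent.
    return system(scenario) == ("share" if scenario["consent"] else "refuse")

def strong_evaluation(system, space):
    """Exhaustively checks every scenario in the criterion space."""
    return all(check(system, s) for s in space)

def weak_evaluation(system, space, k, seed=0):
    """Checks only a sample of k scenarios; a pass therefore
    supports only limited confidence in the system."""
    sample = random.Random(seed).sample(space, k)
    return all(check(system, s) for s in sample)

system = lambda s: "share" if s["consent"] else "refuse"
assert strong_evaluation(system, scenario_space)
assert weak_evaluation(system, scenario_space, k=2)
```

A system that passes only the sampled check warrants correspondingly weaker claims in the evaluation's concluding judgment.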
Transparency is only a necessary condition for a more important long-term goal: having systems be accountable to their users and community members. However, this goal raises many questions, such as to whom the A/IS are accountable, who has the right to correct the systems, and which kinds of A/IS should be subject to accountability requirements.
Recommendation
To maximize effective evaluation by third parties, e.g., regulators and accident investigators, A/IS should be designed, specified, and documented so as to permit the use of strong verification and validation techniques for assessing the system’s safety and norm compliance, in order to achieve accountability to the relevant communities. ...
Further Resources
M. Fisher, L. A. Dennis, and M. P. Webster. “Verifying Autonomous Systems.” Communications of the ACM, vol. 56, no. 9, pp. 84-93, 2013.
K. Abney, G. A. Bekey, and P. Lin. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: The MIT Press, 2011. ...
M. Anderson and S. L. Anderson, eds. Machine Ethics. New York: Cambridge University Press, 2011. ...
M. Boden, J. Bryson, et al. “Principles of Robotics: Regulating Robots in the Real World.” Connection Science 29, no. 2, pp. 124-129, 2017. ...
M. Coeckelbergh, “Can We Trust Robots?” Ethics and Information Technology, vol. 14, pp. 53-60, 2012.
L. A. Dennis, M. Fisher, N. Lincoln, A. Lisitsa, and S. M. Veres, “Practical Verification of Decision-Making in Agent-Based Autonomous Systems.” Automated Software Engineering, vol. 23, no. 3, pp. 305-359, 2016. ...
M. Fisher, C. List, M. Slavkovik, and A. F. T. Winfield. “Engineering Moral Agents: From Human Morality to Artificial Morality” (Dagstuhl Seminar 16222). Dagstuhl Reports, vol. 6, no. 5, pp. 114-137, 2016.
K. R. Fleischmann, Information and Human Values. San Rafael, CA: Morgan and Claypool, 2014. ...
G. Governatori and A. Rotolo. “How Do Agents Comply with Norms?” in Normative Multi-Agent Systems, G. Boella, P. Noriega, G. Pigozzi, and H. Verhagen, eds., Dagstuhl Seminar Proceedings. Dagstuhl, Germany: Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2009.
S. L. Jarvenpaa, N. Tractinsky, and L. Saarinen. “Consumer Trust in an Internet Store: A Cross-Cultural Validation.” Journal of Computer-Mediated Communication, vol. 5, no. 2, pp. 1-37, 1999.
E. H. Leet and W. A. Wallace. “Society’s Role and the Ethics of Modeling,” in Ethics in Modeling, W. A. Wallace, ed., Tarrytown, NY: Elsevier, 1994, pp. 242-245. ...
M. A. Mahmoud, M. S. Ahmad, M. Z. M. Yusoff, and A. Mustapha. “A Review of Norms and Normative Multiagent Systems,” The Scientific World Journal, vol. 2014, Article ID 684587, 2014. ...
Thanks to the Contributors
We wish to acknowledge all of the people who contributed to this chapter.
The Embedding Values into Autonomous Intelligent Systems Committee ...
AJung Moon (Founding Chair) - Director of Open Roboethics Institute ...
Bertram F. Malle (Co-Chair) - Professor, Department of Cognitive, Linguistic, and Psychological Sciences, Co-Director of the Humanity-Centered Robotics Initiative, Brown University ...
Francesca Rossi (Co-Chair) - Full Professor of Computer Science at the University of Padova, Italy; currently at the IBM Research Center at Yorktown Heights, NY
Stefano Albrecht - Postdoctoral Fellow in the Department of Computer Science at The University of Texas at Austin ...
Bijilash Babu - Senior Manager, Ernst and Young, EY Global Delivery Services India LLP ...
Jan Carlo Barca - Senior Lecturer in Software Engineering and Internet of Things (IoT), School of Info Technology, Deakin University, Australia ...
Catherine Berger - IEEE Standards Senior Program Manager, IEEE ...
Malo Bourgon - COO, Machine Intelligence Research Institute ...
Richard S. Bowyer - Adjunct Senior Lecturer and Research Fellow, College of Science and Engineering, Centre for Maritime Engineering, Control and Imaging (cmeci), Flinders University, South Australia ...
Stephen Cave - Executive Director of the Leverhulme Centre for the Future of Intelligence, University of Cambridge ...
Raja Chatila - CNRS-Sorbonne Institute of Intelligent Systems and Robotics, Paris, France; Member of the French Commission on the Ethics of Digital Sciences and Technologies CERNA; Past President of IEEE Robotics and Automation Society ...
Mark Coeckelbergh - Professor, Philosophy of Media and Technology, the University of Vienna ...
Louise Dennis - Lecturer, Autonomy and Verification Laboratory, University of Liverpool ...
Laurence Devillers - Professor of Computer Sciences, University Paris Sorbonne, LIMSICNRS ‘Affective and social dimensions in spoken interactions’; member of the French Commission on the Ethics of Research in Digital Sciences and Technologies (CERNA) ...
Virginia Dignum - Associate Professor, Faculty of Technology Policy and Management, TU Delft ...
Ebru Dogan - Research Engineer, VEDECOM ...
Takashi Egawa - Cloud Infrastructure Laboratory, NEC Corporation, Tokyo ...
Vanessa Evers - Professor, Human-Machine Interaction, and Science Director, DesignLab, University of Twente ...
Michael Fisher - Professor of Computer Science, University of Liverpool, and Director of the UK Network on the Verification and Validation of Autonomous Systems, vavas.org ...
Ken Fleischmann - Associate Professor in the School of Information at The University of Texas at Austin ...
Ryan Integlia - Assistant Professor, Electrical and Computer Engineering, Florida Polytechnic University; Co-Founder of the em[POWER] Energy Group
Catholijn Jonker - Full professor of Interactive Intelligence at the Faculty of Electrical Engineering, Mathematics and Computer Science of the Delft University of Technology. Part-time full professor at Leiden Institute of Advanced Computer Science of the Leiden University ...
Sara Jordan - Assistant Professor of Public Administration in the Center for Public Administration & Policy at Virginia Tech ...
Jong-Wook Kim - Professor, AI Robotics Lab, Department of Electronic Engineering, Dong-A University, Busan, Korea
Sven Koenig - Professor, Computer Science Department, University of Southern California ...
Brenda Leong - Senior Counsel, Director of Operations, The Future of Privacy Forum ...
Alan Mackworth - Professor of Computer Science, University of British Columbia; Former President, AAAI; Co-author of “Artificial Intelligence: Foundations of Computational Agents”. ...
Pablo Noriega - Scientist, Artificial Intelligence Research Institute of the Spanish National Research Council (IIIA-CSIC), Barcelona. ...
Rajendran Parthiban - Professor, School of Engineering, Monash University, Bandar Sunway, Malaysia ...
Heather M. Patterson - Senior Research Scientist, Anticipatory Computing Lab, Intel Corp. ...
Edson Prestes - Professor, Institute of Informatics, Federal University of Rio Grande do Sul (UFRGS), Brazil; Head, Phi Robotics Research Group, UFRGS; CNPq Fellow. ...
Laurel Riek - Associate Professor, Computer Science and Engineering, University of California San Diego ...
Leanne Seeto - Co-Founder, Strategy and Operations, Precision Autonomy
Sarah Spiekermann - Chair of the Institute for Information Systems & Society at Vienna University of Economics and Business; author of the textbook “Ethical IT-Innovation” and the popular book “Digitale Ethik: Ein Wertesystem für das 21. Jahrhundert”; blogger at “The Ethical Machine”
John P. Sullins - Professor of Philosophy, Chair of the Center for Ethics Law and Society (CELS), Sonoma State University ...
Jaan Tallinn - Founding engineer of Skype and Kazaa; co-founder of the Future of Life Institute ...
Mike Van der Loos - Associate Professor, Dept. of Mechanical Engineering, Director of Robotics for Rehabilitation, Exercise and Assessment in Collaborative Healthcare (RREACH) Lab, and Associate Director of CARIS Lab, University of British Columbia
Wendell Wallach - Consultant, ethicist, and scholar, Yale University’s Interdisciplinary Center for Bioethics ...
Karolina Zawieska - Postdoctoral Research Fellow in Ethics and Cultural Learning of Robotics at De Montfort University, UK, and Researcher at the Industrial Research Institute for Automation and Measurements PIAP, Poland
For a full listing of all IEEE Global Initiative Members, visit standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf.
For information on disclaimers associated with EAD 1e, see How the Document Was Prepared. ...
25 T. Arnold, D. Kasenberg, and M. Scheutz. “Value Alignment or Misalignment-What Will Keep Systems Accountable?” The Workshops of the Thir-ty-First AAAI Conference on Artificial Intelligence: Technical Reports, WS-17-02: AI, Ethics, and Society, 81-88. Palo Alto, CA: The AAAI Press, 2017. ...
26 G. Andrighetto, G. Governatori, P. Noriega, and L. W. N. van der Torre, eds. Normative Multi-Agent Systems. Saarbrücken/Wadern, Germany: Dagstuhl Publishing, 2013. ...
27 A. Chaudhuri, (2017) Philosophical Dimensions of Information and Ethics in the Internet of Things (IoT) Technology. The EDP Audit, Control, and Security Newsletter, 56:4, 7-18, DOI: 10.1080/07366981.2017.1380474 ...
28 S.Wachter, B. Mittelstadt, and L. Floridi, “Transparent, Explainable, and Accountable AI for Robotics.” Science Robotics 2, no. 6 (2017): eaan6080. doi:10.1126/scirobotics. aan6080 ...
29 A. D. Selbst and S. Barocas, The Intuitive Appeal of Explainable Machines (February 19, 2018). Fordham Law Review. Available at SSRN: https:// ssrn.com/abstract=3126971 or http://dx.doi. org/10.2139/ssrn. 3126971 ...
30 F. S. Grodzinsky, K. W. Miller, and M. J. Wolf. “Developing Artificial Agents Worthy of Trust: Would You Buy a Used Car from This Artificial Agent?” Ethics and Information Technology 13, (2011): 17-27. ...
Embedding Values into Autonomous and Intelligent Systems ...
31 J. A. Kroll, J. Huey, S. Barocas et al. “Accountable Algorithms.” University of Pennsylvania Law Review 165 (2017). ...
32 J. Cleland-Huang, O. Gotel, and A. Zisman, eds. Software and Systems Traceability. London: Springer, 2012. doi:10.1007/978-1-4471-2239-5 ...
33 S. U. Noble, “Google Search: Hyper-Visibility as a Means of Rendering Black Women and Girls Invisible.” InVisible Culture 19 (2013). ... ^(34){ }^{34} K. R. Fleischmann and W. A. Wallace. “A Covenant with Transparency: Opening the Black Box of Models.” Communications of the ACM 48, no. 5 (2005): 93-97. ...
35 M. Fisher, L. A. Dennis, and M. P. Webster. “Verifying Autonomous Systems.” Communications of the ACM 56, no. 9 (2013): 84-93. ...
36 M. Hind, et al. “Increasing Trust in AI Services through Supplier’s Declarations of Conformity.” ArXiv E-Prints, Aug. 2018. Retrieved October 28, 2018 from https://arxiv.org/abs/1808.07261. ...
39 C. de Jong Ed., “Ten principles for good design: Dieter Rams.” New York, NY: Prestel Publishing, 2017. ...
40 lbid. ... ^(41){ }^{41} N. Tintarev and R. Kutlak. “Demo: Making Plans Scrutable with Argumentation and Natural Language Generation.” Proceedings of the Companion Publication of the 19th International Conference on Intelligent User Interfaces (2014): 29-32. ...
42 d. boyd, “Transparency !=\neq Accountability.” Data & Society: Points, November 29, 2016. ...
43 C. Oetzel and S. Spiekermann, “A Systematic Methodology for Privacy Impact Assessments: A Design Science Approach.” European Journal of Information Systems 23, (2014): 126-150. https:// link.springer.com/article/10.1057/ejis.2013.18 ... ^(44){ }^{44} M. Brundage, S. Avin, J. Clark, H. Toner, P. Eckersley, B. Garfunkel, A. Dafoe, P. Scharre, T. Zeitzo, et al. 2018. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. CoRR abs/1802.07228 (2018). https://arxiv.org/ abs/1802.07228M. ...
45 D. Vanderelst and A.F. Winfield, 2018 The Dark Side of Ethical Robots. In Proc. AAAI/ACM Conf. on Artificial Intelligence, Ethics and Society, New Orleans. ...
46 British Standards Institution. BS8611:2016, “Robots and Robotic Devices. Guide to the Ethical Design and Application of Robots and Robotic Systems,” 2016. ...
47 I. Sommerville, Software Engineering (10th edition). Harlow, U.K.: Pearson Studium, 2015. ... ^(48){ }^{48} M. Fisher, L. A. Dennis, and M. P. Webster. “Verifying Autonomous Systems.” Communications of the ACM 56, no. 9 (2013): 84-93. ...
Embedding Values into Autonomous and Intelligent Systems ...
49 International Organization for Standardization (2015). ISO 9001:2015, Quality management systems-Requirements. Retrieved July 12, 2018 from https://www.iso.org/standard/62085.html. ... ... ^(50){ }^{50} BMC Software. ITIL: The Beginner’s Guide to Processes & Best Practices. 6 Dec. 2016, http:// www.bmc.com/guides/itil-introduction.html.
51 J. Griffiths, “New Zealand Passport Robot Thinks This Asian Man’s Eyes Are -Closed.” CNN.com, December 9, 2016. ... ... ^(52){ }^{52} R. Tatman, “Google’s Speech Recognition Has a Gender Bias.” Making Noise and Hearing Things, July 12, 2016.
53 J. Angwin, J. Larson, S. Mattu, L. Kirchner. “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks.” ProPublica, May 23, 2016. ... ^(54){ }^{54} L. D. Riek and D. Howard. “A Code of Ethics for the Human-Robot Interaction Profession.” Proceedings of We Robot, April 4, 2014. ...
55 M. Fisher, L. A. Dennis, and M. P. Webster. “Verifying Autonomous Systems.” Communications of the ACM 56 (2013): 84-93. ...
Introduction
Autonomous and intelligent systems (A/IS) are a part of our society. The use of these powerful technologies promotes a range of social benefits. They may spur development across economies and society through numerous applications, including in commerce, finance, employment, health care, agriculture, education, transportation, politics, privacy, public safety, national security, civil liberties, and human rights. To encourage the development of socially beneficial applications of A/IS, and to protect the public from adverse consequences of A/IS, intended or otherwise, effective policies and government regulations are needed.

Effective A/IS policies serve the public interest in several important respects. A/IS policies and regulations, at both the national level and as developed by professional organizations and governing institutions, protect and promote safety, privacy, human rights, and cybersecurity, as well as enhance the public's understanding of the potential impacts of A/IS on society. Without policies designed with these considerations in mind, there may be critical technology failures, loss of life, and high-profile social controversies. Such events could engender policies that unnecessarily hinder innovation, or regulations that do not effectively advance the public interest and protect human rights.

We believe that effective A/IS policies should embody a rights-based approach¹ that addresses five issues:

1. Ensure that A/IS support, promote, and enable internationally recognized legal norms.

Establish policies for A/IS using the internationally recognized legal framework for human rights standards that is directed at accounting for the impact of technology on individuals.
Policy
2. Develop government expertise in A/IS.

Facilitate skill development, technical and otherwise, to further boost the ability of policy makers, regulators, and elected officials to make informed proposals and decisions about the various facets of these new technologies.

3. Ensure governance and ethics are core components in A/IS research, development, acquisition, and use.

Require support for A/IS research and development (R&D) efforts with a focus on the ethical impact of A/IS. To benefit from these new technologies while also ensuring they meet societal needs and values, governments should be actively involved in supporting relevant R&D efforts.

4. Create policies for A/IS to ensure public safety and responsible A/IS design.

Governments must ensure consistent and locally adaptable policies and regulations for A/IS. Effective regulation should address transparency, explainability, predictability, bias, and accountability for A/IS algorithms, as well as risk management, privacy, data protection measures, safety, and security considerations. Certification of systems involving A/IS is a key technical, societal, and industrial issue.

5. Educate the public on the ethical and societal impacts of A/IS.

Industry, academia, the media, and governments must establish strategies for informing and engaging the public on benefits and challenges posed by A/IS. Communicating accurately both the positive potential of A/IS and the areas that require caution and further development is critical to effective decision-making environments.

As A/IS comprise a greater part of our daily lives, managing the associated risks and rewards becomes increasingly important. Technology leaders and policy makers have much to contribute to the debate on how to build trust, promote safety and reliability, and integrate ethical and legal considerations into the design of A/IS technologies. This chapter provides a principled foundation for these discussions.
Issue 1: Ensure that A/IS support, promote, and enable internationally recognized legal norms

Background

A/IS technologies have the potential to impact internationally recognized economic, social, cultural, and political rights through unintended outcomes and outright design decisions. Important examples of this issue have occurred with certain unmanned aircraft systems (Bowcott 2013), use of A/IS in predictive policing (Shapiro 2017), banking (Garcia 2017), judicial sentencing (Osoba and Welser 2017), and job hunting and hiring practices (Datta, Tschantz, and Datta 2014). Even service delivery of goods (Ingold and Soper 2016) can impact human rights by automating discrimination (Eubanks 2018) and inhibiting the right of assembly, freedom of expression, and access to information. To ensure A/IS are used as a force for social benefit, nations must develop policies that safeguard human rights.

A/IS regulation, development, and deployment should, therefore, be based on international human rights standards and standards of international humanitarian law. When put into practice, both states and private actors will consider their responsibilities to protect and respect internationally recognized political, social, economic, and cultural rights. Similarly, business actors will consider their obligations to respect international human rights, as described in the United Nations Guiding Principles on Business and Human Rights (OHCHR 2011), also known as the Ruggie principles.

The Ruggie principles have been widely referenced and endorsed by corporations and have led to the adoption of several corporate social responsibility (CSR) policies in various companies. With broadened support, the Ruggie principles will strengthen the role of businesses in protecting and promoting human rights and ensuring that the most crucial human values and legal standards of human rights are respected by A/IS technologists.
Recommendations

National policies and business regulations for A/IS should be founded on a rights-based approach. The Ruggie principles provide the internationally recognized legal framework for human rights standards that accounts for the impact of technology on individuals while also addressing inequalities, discriminatory practices, and the unjust distribution of resources.

These six considerations for a rights-based approach to A/IS flow from the recommendation above:

Responsibility: Identify the right holders and the duty bearers, and ensure that duty bearers have an obligation to fulfill all human rights.

Accountability: Oblige states, as duty bearers, to behave responsibly, to seek to represent the greater public interest, and to be open to public scrutiny of their A/IS policies.

Participation: Encourage and support a high degree of participation of duty bearers, right holders, and other interested parties.
Nondiscrimination: Underlie the practice of A/IS with principles of nondiscrimination, equality, and inclusiveness. Particular attention must be given to vulnerable groups, to be determined locally, such as minorities, indigenous peoples, or persons with disabilities.

Empowerment: Empower right holders to claim and exercise their rights.

Corporate responsibility: Ensure that companies' development of A/IS complies with the rights-based approach. Companies must not willingly provide A/IS to actors that will use them in ways that lead to human rights violations.

Further Resources

Human rights-based approaches have been applied to development, education, and reproductive health. See the UN Practitioners' Portal on Human Rights Based Programming.
O. Bowcott, "Drone Strikes by US May Violate International Law, Says UN," The Guardian, October 18, 2013.

A. Shapiro, "Reform Predictive Policing," Nature News, vol. 541, no. 7638, pp. 458-460, Jan. 25, 2017.

M. Garcia, "How to Keep Your AI from Turning Into a Racist Monster," Wired, April 21, 2017.

O. A. Osoba and W. Welser IV, "An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence" (Research Report 1744). Santa Monica, CA: RAND Corporation, 2017.

A. Datta, M. C. Tschantz, and A. Datta, "Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination," arXiv:1408.6491 [cs], 2014.

D. Ingold and S. Soper, "Amazon Doesn't Consider the Race of Its Customers. Should It?" Bloomberg, April 21, 2016.

United Nations, Office of the High Commissioner of Human Rights. Guiding Principles on Business and Human Rights: Implementing the United Nations "Protect, Respect and Remedy" Framework. New York and Geneva: UN, 2011.

"Mapping Regulatory Proposals for Artificial Intelligence in Europe." Access Now, November 2018.

V. Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press, January 2018.
Issue 2: Develop government expertise in A/IS

Background

There is a consensus among private sector and academic stakeholders that effectively governing A/IS and related technologies requires a level of technical expertise that governments currently do not possess. Effective governance requires experts who understand and can analyze the interactions between A/IS technologies, policy objectives, and overall societal values. Sufficient depth and breadth of technical expertise will help ensure policies and regulations successfully support innovation, adhere to national principles, and protect public safety.

Effective governance also requires an A/IS workforce that has adequate training in ethics and access to other resources on human rights standards and obligations, along with guidance on how to apply them in practice.
Recommendations

Policy makers should support the development of the expertise required to create a public policy, legal, and regulatory environment that allows innovation to flourish while protecting the public and gaining public trust.² Example strategies include the following:

Expertise can be furthered through technical fellowships or rotation schemes, where technologists spend an extended time in political offices, or policy makers work with organizations³ that operate at the intersection of technology policy, technical engineering, and advocacy. This will enhance the technical knowledge of policy makers, strengthen ties between political and technical communities, and contribute to the formulation of effective A/IS policy.

Expertise can also be developed through cross-border sharing of best practices around A/IS legislation, consumer protection, workforce transformation, and economic displacement stemming from A/IS-based automation. This can be done through governmental cooperation, knowledge exchanges, and by building A/IS components into venues and efforts surrounding existing regulation, e.g., the General Data Protection Regulation (GDPR).

Because A/IS involve rapidly evolving technologies, both workforce training in A/IS areas and long-term science, technology, engineering, and math (STEM) educational strategies, along with ethics courses, are needed, beginning in primary school and extending into university or vocational courses. These strategies will foster A/IS expertise in the next generation of many groups, e.g., supervisors of critical systems, scientists, and policy makers.
Further Resources
J. Holdren and M. Smith, "Preparing for the Future of Artificial Intelligence." Washington, DC: Executive Office of the President, National Science and Technology Council, 2016.

P. Stone, R. Brooks, E. Brynjolfsson, R. Calo, O. Etzioni, G. Hager, J. Hirschberg, S. Kalyanakrishnan, E. Kamar, S. Kraus, K. Leyton-Brown, D. Parkes, W. Press, A. Saxenian, J. Shah, M. Tambe, and A. Teller. "'Artificial Intelligence and Life in 2030': One Hundred Year Study on Artificial Intelligence." (Report of the 2015-2016 Study Panel). Stanford, CA: Stanford University, 2016.

"Japan Industrial Policy Spotlights AI, Foreign Labor." Nikkei Asian Review, May 20, 2016.

Y. H. Weng, "A European Perspective on Robot Law: Interview with Mady Delvaux-Stehres." Robohub, July 15, 2016.
Issue 3: Ensure governance and ethics are core components in A/IS research, development, acquisition, and use

Background

Greater national investment in ethical A/IS research and development would stimulate the economy, create high-value jobs, improve governmental services to society, and encourage international innovation and collaboration (U.S. OSTP report on the Future of AI 2016). A/IS have the potential to improve our societies through technologies such as intelligent robots and self-driving cars that will revolutionize automobile transportation and logistics systems and reduce traffic fatalities. A/IS can improve quality of life through smart cities and decision support in health care, social services, criminal justice, and the environment. To ensure such a positive effect on individuals, societies, and businesses, nations must increase A/IS R&D investments, with particular focus on the ethical development and deployment of A/IS.

International collaboration involving governments, private industry, and non-governmental organizations (NGOs) would promote the development of standards, data sharing, and norms that guide ethically aligned A/IS R&D.
Recommendations
Develop national and international standards for A/IS to enable efficient and effective public and private sector investments. Important aspects for international standards include measures of societal benefits derived from A/IS, the use of ethical considerations in A/IS investments, and risks increased or decreased by A/IS. Nations should consider their own ethical principles and develop a framework for ethics that each country could use to reflect local systems of values and laws. This will encourage actors to think both locally and globally regarding ethics. Therefore, we recommend that governments:

Establish priorities for funding A/IS research that identify approaches and challenges for A/IS governance. This research will identify models for national and global A/IS governance and assess their benefits and adequacy to address A/IS societal needs.
Encourage the participation of a diverse set of stakeholders in the standards development process. Standards should address A/IS issues such as fairness, security, transparency, understandability, privacy, and societal impacts of A/IS. A global framework for identifying and sharing these and other issues should be developed. Standards should incorporate independent mechanisms to properly vet, certify, audit, and assign accountability for A/IS applications.

Encourage and establish national and international research groups that provide incentives for A/IS research that is publicly beneficial but may not be commercially viable.
Further Resources
E. T. Kim, "How an Old Hacking Law Hampers the Fight Against Online Discrimination." The New Yorker, October 1, 2016.

National Research Council. "Developments in Artificial Intelligence, Funding a Revolution: Government Support for Computing Research." Washington, DC: The National Academies Press, 1999.

N. Chen, L. Christensen, K. Gallagher, R. Mate, and G. Rafert, "Global Economic Impacts Associated with Artificial Intelligence." Analysis Group, February 25, 2016.

The Networking and Information Technology Research and Development Program, "Supplement to the President's Budget, FY2017." NITRD National Coordination Office, April 2016.

S. B. Furber, F. Galluppi, S. Temple, and L. A. Plana, "The SpiNNaker Project." Proceedings of the IEEE, vol. 102, no. 5, pp. 652-665, 2014.

H. Markram, "The Human Brain Project," Scientific American, vol. 306, no. 2, pp. 50-55, June 2012.

L. Yuan, "China Gears Up in Artificial-Intelligence Race." Wall Street Journal, August 24, 2016.
Issue 4: Create policies for A/IS to ensure public safety and responsible A/IS design

Background

Effective governance encourages innovation and cooperation, helps synchronize policies globally, and reduces barriers to trade. Governments must ensure consistent and appropriate policies and regulations for A/IS that address transparency, explainability, predictability, and accountability of A/IS algorithms, risk management,⁴ data protection, safety, and certification of A/IS.

Appropriate regulatory responses are context-dependent and should be developed through an approach that is based on human rights⁵ and has human well-being as a key goal.
Recommendations
Nations should develop and harmonize their policies and regulations for A/IS using a process that is based on informed input from a range of expert stakeholders, including academia, industry, NGOs, and government officials, and that addresses questions related to the governance and safe deployment of A/IS. We recommend:

Policy makers should consider similar work from around the world. Due to the transnational nature of A/IS, globally synchronized policies can benefit public safety, technological innovation, and access to A/IS.

Policies should foster the development of economies able to absorb A/IS. Additional focus is needed to address the effect of A/IS on employment and income and how to ameliorate certain societal conditions. New models of public-private partnerships should be studied.

Policies for A/IS should remain founded on a rights-based approach.

Policy makers should be prepared to address issues that will arise when innovative and new practices enabled by A/IS are not consistent with current law. In A/IS, where there is often a different system developer, integrator, user, and ultimate customer, application of traditional legal concepts of agency, strict liability, and parental liability will require legal research and deliberation. Challenges from A/IS that must be considered include the increasing complexity of and interactions between systems, and the potential for reduced predictability due to the nature of machine learning systems.
Further Resources
P. Stone, R. Brooks, E. Brynjolfsson, R. Calo, O. Etzioni, G. Hager, J. Hirschberg, S. Kalyanakrishnan, E. Kamar, S. Kraus, K. Leyton-Brown, D. Parkes, W. Press, A. Saxenian, J. Shah, M. Tambe, and A. Teller. "'Artificial Intelligence and Life in 2030': One Hundred Year Study on Artificial Intelligence." (Report of the 2015-2016 Study Panel). Stanford, CA: Stanford University, 2016.

R. Calo, "The Case for a Federal Robotics Commission." The Brookings Institution, 2014.

O. Groth and M. Nitzberg, Solomon's Code: Humanity in a World of Thinking Machines (chapter 8 on governance). New York: Pegasus Books, 2018.

A. Mannes, "Institutional Options for Robot Governance," 1-40, in We Robot 2016, Miami, FL, April 1-2, 2016.

G. E. Marchant, K. W. Abbott, and B. Allenby, Innovative Governance Models for Emerging Technologies. Cheltenham, U.K.: Edward Elgar Publishing, 2014.

Y. H. Weng, Y. Sugahara, K. Hashimoto, and A. Takanishi. "Intersection of 'Tokku' Special Zone, Robots, and the Law: A Case Study on Legal Impacts to Humanoid Robots," International Journal of Social Robotics 7, no. 5, pp. 841-857, 2015.
Issue 5: Educate the public on the ethical and societal impacts of A/IS

Background

It is imperative for industry, academia, and government to communicate accurately to the public both the positive and negative potential of A/IS and the areas that require caution.⁶ Strategies for informing and engaging the public on A/IS benefits and challenges are critical to creating an environment conducive to effective decision-making.

Educating users of A/IS will help influence the nature of A/IS development. Educating policy makers and regulators on the technical and legal aspects of A/IS will help enable the creation of well-defined policies that promote human rights, safety, and economic benefits. Educating corporations, researchers, and developers of A/IS on the benefits and risks to individuals and societies will enhance the creation of A/IS that better serve human well-being.⁷

Another key requirement is that A/IS be sufficiently transparent regarding implicit and explicit values and algorithmic processes. This is necessary for the public understanding of A/IS accountability, predictions, decisions, biases, and mistakes.
Recommendations
Establish an international multi-stakeholder forum, to include commercial, governmental, and other civil society groups, to determine the best practices for using and developing A/IS. Codify the deliberations into international norms and standards. Many industries, in particular system industries (automotive, air and space, defense, energy, medical systems, manufacturing), will be changed by the growing use of A/IS. Therefore, we recommend that governments:

Increase funding for interdisciplinary research and communication on topics ranging from basic research on intelligence to principles of ethics, safety, privacy, fairness, liability, and trustworthiness of A/IS. Societal aspects should be addressed both at an academic level and through the engagement of business, civil society, public authorities, and policy makers.

Empower and enable independent journalists and media outlets to report on A/IS by providing access to technical expertise.

Conduct educational outreach to inform the public on A/IS research, development, applications, risks, and rewards, along with the policies, regulations, and testing that are designed to safeguard human rights and public safety.
Develop a broad range of A/IS educational programs. Undergraduate, professional degree, advanced degree, and executive education programs should offer instruction that ensures lawyers, legislators, and A/IS workers are well informed about issues arising from A/IS, including the need for measurable standards of A/IS performance, effects, and ethics, and the need to mature the still-nascent capabilities to measure these elements of A/IS.
Further Resources
Networking and Information Technology Research and Development (NITRD) Program, "The National Artificial Intelligence Research and Development Strategic Plan." Washington, DC: Office of Science and Technology Policy, 2016.

J. Saunders, P. Hunt, and J. S. Hollywood, "Predictions Put into Practice: A Quasi-Experimental Evaluation of Chicago's Predictive Policing Pilot," Journal of Experimental Criminology, vol. 12, pp. 347-371, 2016. doi:10.1007/s11292-016-9272-0. [Accessed Nov. 10, 2018].

B. Edelman and M. Luca, "Digital Discrimination: The Case of Airbnb.com." Harvard Business School Working Paper 14-054, Jan. 28, 2014.

C. Garvie, A. Bedoya, and J. Frankle. "The Perpetual Line-Up: Unregulated Police Face Recognition in America." Washington, DC: Georgetown Law, Center on Privacy & Technology, 2016.

M. Chui and J. Manyika, "Automation, Jobs, and the Future of Work." Seattle, WA: McKinsey Global Institute, 2014.

R. C. Arkin, "Ethics and Autonomous Systems: Perils and Promises [Point of View]." Proceedings of the IEEE 104, no. 10, pp. 1779-1781, Sept. 19, 2016.

European Commission, Eurobarometer Survey on Autonomous Systems (DG Connect, June 2015), looks at Europeans' attitudes toward robots, driverless vehicles, and autonomous drones. The survey shows that those who have more experience with robots (at home, at work, or elsewhere) are more positive toward their use.
Thanks to the Contributors
We wish to acknowledge all of the people who contributed to this chapter.
The Policy Committee
Kay Firth-Butterfield (Founding Co-Chair) - Project Head, AI and Machine Learning, World Economic Forum; Founding Advocate of AI-Global; Senior Fellow and Distinguished Scholar, Robert S. Strauss Center for International Security and Law, University of Texas, Austin; Co-Founder, Consortium for Law and Ethics of Artificial Intelligence and Robotics, University of Texas, Austin; Partner, Cognitive Finance Group, London, U.K.
Dr. Peter S. Brooks (Co-Chair) - Institute for Defense Analyses

Mina Hanna (Co-Chair) - Chair, IEEE-USA Artificial Intelligence and Autonomous Systems Policy Committee; Vice Chair, IEEE-USA Research and Development Policy Committee; Member of the Editorial Board of IEEE Computer Magazine

Chloe Autio - Government & Policy Group, Intel Corporation

Stan Byers - Frontier Markets Specialist
Corinne Cath-Speth - PhD student at the Oxford Internet Institute, University of Oxford; Doctoral student at the Alan Turing Institute; Digital Consultant at ARTICLE 19
Eileen Donahoe - Executive Director of Stanford Global Digital Policy Incubator ...
Danit Gal - Project Assistant Professor, Keio University; Chair, IEEE Standard P7009 on the Fail-Safe Design of Autonomous and SemiAutonomous Systems 达尼特·加尔 - 庆应义塾大学项目助理教授;IEEE P7009 标准《自主与半自主系统故障安全设计》主席
Olaf J. Groth - Professor of Strategy, Innovation, Economics & Program Director for Disruption Futures, HULT International Business School; Visiting Scholar, UC Berkeley BRIE/CITRIS; CEO, Cambrian.ai ...
Philip Hall - (Founding Co-Chair) Co-Founder & CEO, RelmaTech; Member (and Immediate Past Chair), IEEE-USA Committee on Transportation & Aerospace Policy (CTAP); and Member, IEEE Society on Social Implications of Technology
John C. Havens - Executive Director, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; Executive Director, The Council on Extended Intelligence; Author, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines
Cyrus Hodes - Senior Advisor, AI Office, UAE Prime Minister's Office; Co-Founded the AI Initiative at Harvard Kennedy School; Member, AI Expert Group at the OECD; Member, Global Council on Extended Intelligence.
Chihyung Jeon - Assistant Professor, Graduate School of Science and Technology Policy (STP), Korea Advanced Institute of Science and Technology (KAIST) ...
Anja Kaspersen - Former Head of International Security, World Economic Forum, and Head of Strategic Engagement and New Technologies at the International Committee of the Red Cross (ICRC)
Nicolas Miailhe - Co-Founder & President, The Future Society; Member, AI Expert Group at the OECD; Member, Global Council on Extended Intelligence; Senior Visiting Research Fellow, Program on Science Technology and Society at Harvard Kennedy School; Lecturer, Paris School of International Affairs (Sciences Po); Visiting Professor, IE School of Global and Public Affairs.
Simon Mueller - Executive Director, The AI Initiative; Vice President, The Future Society ...
Carolyn Nguyen - Director, Microsoft’s Technology Policy Group, responsible for policy initiatives related to data governance and personal data ...
Mark J. Nitzberg - Executive Director, Center for Human-Compatible Artificial Intelligence at UC Berkeley; co-author, Solomon’s Code: Humanity in a World of Thinking Machines ...
Daniel Schiff - PhD Student, Georgia Institute of Technology; Chair, Sub-Group for Autonomous and Intelligent Systems Implementation, IEEE P7010: Well-being Metric for Autonomous and Intelligent Systems ...
Evangelos Simoudis - Co-Founder and Managing Director, Synapse Partners. Author, The Big Data Opportunity in our Driverless Future ...
Brian W. Tang - Founder and Managing Director, Asia Capital Markets Institute (ACMI); Founding executive director, LITE Lab@HKU at Hong Kong University Faculty of Law ...
Martin Tisné - Managing Director, Luminate ...
Sarah Villeneuve - Policy Analyst; Member, IEEE P7010: Well-being Metric for Autonomous and Intelligent Systems ...
Adrian Weller - Senior Research Fellow, University of Cambridge; Programme Director for AI, The Alan Turing Institute ...
Yueh-Hsuan Weng - Assistant Professor, Frontier Research Institute for Interdisciplinary Sciences (FRIS), Tohoku University; Fellow, Transatlantic Technology Law Forum (TTLF), Stanford Law School ...
Darrell M. West - Vice President and Director, Governance Studies | Founding Director, Center for Technology Innovation | The Douglas Dillon Chair, Brookings Institution ...
Andreas Wolkenstein - Researcher on neurotechnologies, AI, and political philosophy at LMU Munich (Germany) ...
For a full listing of all IEEE Global Initiative Members, visit standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf.
For information on disclaimers associated with EAD1e, see How the Document Was Prepared.
Endnotes
1 This approach is rooted in internationally recognized economic, social, cultural, and political rights.
2 This recommendation concurs with the multiple recommendations of the United States National Science and Technology Council, the One Hundred Year Study on Artificial Intelligence, Japan's Cabinet Office Council, the European Parliament's Committee on Legal Affairs, and others.
3 For example, American Civil Liberties Union, Article 19, the Center for Democracy & Technology, Canada.AI, or Privacy International. United Nations committees may also be useful in fostering knowledge exchanges. ...
4 This includes consideration regarding application of the precautionary principle, as used in environmental and health policy-making, where the possibility of widespread harm is high and extensive scientific knowledge or understanding of the matter is lacking.
5 Human rights-based approaches have been applied to development, education, and reproductive health. See the UN Practitioners' Portal on Human Rights Based Programming.
6 "One Hundred Year Study on Artificial Intelligence (AI100)," Stanford University, August 2016.
7 Private sector initiatives are already emerging, such as the Partnership on AI; the AI for Good Foundation; and the Ethics and Governance of Artificial Intelligence Initiative, launched by Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab. ...
Law ...
The law affects and is affected by the development and deployment of autonomous and intelligent systems (A/IS) in contemporary life. Science, technological development, law, public policy, and ethics are not independent fields of activity that occasionally overlap. Instead, they are disciplines that are fundamentally tied to each other and collectively interact in the creation of a social order. ...
Accordingly, in studying A/IS and the law, we focus not only on how the law responds to the technological innovation represented by A/IS, but also on how the law guides and sets the conditions for that innovation. This interactive process is complex, and its desired outcomes can rest on particular legal and cultural traditions. While acknowledging this complexity and uncertainty, as well as the acute risk that A/IS may intentionally or unintentionally be misused or abused, we seek to identify principles that will steer this interactive process in a manner that leads to the improvement, prosperity, and well-being of everyone. ...
The fact that the law has a unique role to play in achieving this outcome is observed by Sheila Jasanoff, a preeminent scholar of science and technology studies: ...
Part of the answer is to recognize that science and technology, for all their power to create, preserve, and destroy, are not the only engines of innovation in the world. Other social institutions also innovate, and they may play an invaluable part in realigning the aims of science and technology with those of culturally disparate human societies. Foremost among these is the law.¹
The law can play its part in ensuring that A/IS, in both design and operation, are aligned with principles of ethics and human well-being.²
Comprehensive coverage of all issues within our scope of study is not feasible in a single chapter of Ethically Aligned Design (EAD). Accordingly, aggregate coverage will expand as issues not yet studied are selected for treatment in future versions of EAD. ...
EAD, First Edition includes commentary about how the law should respond to a number of specific ethical and legal challenges raised by the development and deployment of A/IS in contemporary life. It also focuses on the impact of A/IS on the practice of law itself. More specifically, we study both the potential benefits and the potential risks resulting from the incorporation of A/IS into a society's legal system: in lawmaking, civil justice, criminal justice, and law enforcement. Considering the results of those inquiries, we endeavor to identify norms for the adoption of A/IS in a legal system that will enable the realization of the benefits while mitigating the risks.³
In this chapter of EAD, we include the following: ...
Section 1: Norms for the Trustworthy Adoption of A/IS in Legal Systems. ...
This section addresses issues raised by the potential adoption of A/IS in legal systems for the purpose of performing, or assisting in performing, tasks traditionally carried out by humans with specialized legal training or expertise. The section begins with the question of how A/IS, if properly incorporated into a legal system, can improve the functions of that legal system and thus enhance its ability to contribute to human well-being. The section then discusses challenges to the safe and effective incorporation of A/IS into a legal system and identifies the chief challenge as an absence of informed trust. The remainder of the section examines how societies can fill the trust gap by enacting policies and promoting practices that advance publicly accessible standards of effectiveness, competence, accountability, and transparency. ...
Section 2: Legal Status of A/IS. ...
This section addresses issues raised by the legal status of A/IS, including the potential assignment of certain legal rights and obligations to such systems. The section provides background on the issue and outlines some of the potential advantages and disadvantages of assigning some form of legal personhood to A/IS. Based on these considerations, the section concludes that extending legal personhood to A/IS is not appropriate at this time. It then considers alternatives and outlines certain future conditions that might warrant reconsideration of the section's central recommendation.
Section 1: Norms for the Trustworthy Adoption of A/IS in Legal Systems* ...
“It’s a day that is here.” ...
John G. Roberts, Chief Justice of the Supreme Court of the United States, when asked in 2017 whether he could foresee a day when intelligent machines would assist with courtroom fact-finding or judicial decision-making.⁵
A/IS hold the potential to improve the functioning of a legal system and, thereby, to contribute to human well-being. That potential will be realized, however, only if both the use of A/IS and the avoidance of their use are grounded in solid information about the capabilities and limitations of A/IS, the competencies and conditions required for their safe and effective operation (including data requirements), and the lines along which responsibility for the outcomes generated by A/IS can be assigned. Absent that information, society risks both uninformed adoption of A/IS and uninformed avoidance of adoption of A/IS, risks that are particularly acute when A/IS are applied in an integral component of the social order, such as the law. ...
Uninformed adoption poses the risk that A/IS will be applied to inform or replace the judgments of legal actors (legislators, judges, lawyers, law enforcement officers, and jurors) without controls to ensure their safe and effective operation. They may even be used ...
for purposes other than those for which the systems have been validated and vetted for legal use. In addition to actual harm to individuals, the result will be distrust, not only of the effectiveness of A/IS, but also of the fairness and effectiveness of the legal system itself. ...
Uninformed avoidance of adoption poses the risk that a lack of understanding of what is required for the safe and effective operation of A/IS will result in blanket distrust of all forms and applications of A/IS, even those that are, when properly applied, safe and effective. The result will be a failure to realize the significant improvements in the legal system that A/IS can offer and a continuation of systems that are, even with the best of safeguards, still subject to human bias, inconsistency, and error.⁶
In this section, we consider how society can address these risks by developing norms for the adoption of A/IS in legal systems. The specific issues discussed follow. The first and second issues reflect the potential benefits of, and challenges to, trustworthy adoption of A/IS in the world's legal systems. The remaining issues discuss four principles,⁷ which, if adhered to, will enable trustworthy adoption.⁸,⁹
Issue 1: Well-Being, Legal Systems, and A/IS – How can A/IS improve the functioning of a legal system and, thereby, enhance human well-being?
Issue 2: Impediments to Informed Trust – What are the challenges to adopting A/IS in legal systems, and how can those impediments be overcome?
Issue 3: Effectiveness – How can the collection and disclosure of evidence of effectiveness of A/IS foster informed trust in the suitability of A/IS for adoption in legal systems?
Issue 4: Competence – How can specification of the knowledge and skills required of the human operator(s) of A/IS foster informed trust in the suitability of A/IS for adoption in legal systems?
Issue 5: Accountability – How can the ability to apportion responsibility for the outcome of the application of A/IS foster informed trust in the suitability of A/IS for adoption in legal systems?
Issue 6: Transparency – How can sharing information that explains how A/IS reach given decisions or outcomes foster informed trust in the suitability of A/IS for adoption in legal systems?
Issue 1: Well-Being, Legal Systems, and A/IS ...
How can A/IS improve the functioning of a legal system and, thereby, enhance human well-being? ...
Background ...
An effective legal system contributes to human well-being. The law is an integral component of social order; the nature of a legal system informs, in fundamental ways, the nature of a society, its potential for economic growth and technological innovation, and its capacity for advancing the well-being of its members. ...
If the law is a constitutive element of social order, it is not surprising that it also plays a key role in setting the conditions for well-being and economic growth. In part, this flows from the fact that a well-functioning legal system is an element of good governance. Good governance and a well-functioning legal system can help society and its members flourish, as measured by indicators of both economic prosperity¹⁰ and human well-being.¹¹ The attributes of good governance can be defined in several ways. Good governance can mean democracy; the observance of norms of human rights enshrined in conventions such as the Universal Declaration of Human Rights¹² and the Convention on the Rights of the Child;¹³ and constitutional constraints on government power. It can also
mean bureaucratic competence, law and order, property rights, and contract enforcement. ...
The United Nations (UN) defines the rule of law as: ...
a principle of governance in which all persons, institutions and entities, public and private, including the State itself, are accountable to laws that are publicly promulgated, equally enforced and independently adjudicated. . . It requires, as well, measures to ensure adherence to the principles of supremacy of law, equality before the law, accountability to the law, fairness in the application of the law, separation of powers, participation in decision-making, legal certainty, avoidance of arbitrariness and procedural and legal transparency.¹⁴
Orderly systems of legal rules and institutions generally correlate positively with economic prosperity, social stability, and human well-being, including the protection of childhood.¹⁵ Studies from the World Bank suggest that legal reforms can lead to increased foreign investment, higher incomes, and greater wealth.¹⁶ Wealth, in turn, can enable policies that support improved education, health, environmental protection, equal opportunity, and, in democratic societies, greater individual freedom.
Law, moreover, can contribute to prosperity not only through its functional attributes, but also through its substantive content. Patent laws, for example, if well-designed, can encourage technological innovation, leading to increases in productivity and the economic growth that follows. Poorly designed patent laws, on the ...
other hand, may foster monopolistic markets and decrease competition, resulting in a decreased pace of technological innovation, fewer gains in productivity, and slower economic growth.¹⁷
While economic growth is a valuable benefit of a well-designed and well-functioning legal system, it is not the only benefit. Such a system can bring benefits to society and its members that, beyond economic prosperity, extend to mental and physical well-being. Specific benefits include the protection and advancement of an individual's dignity,¹⁸ human rights,¹⁹ liberty, stability, security, equality of treatment under the law, and ability to provide for the future.²⁰
In fact, recent thinking on the relationship between law and economic development has come to hold that a well-functioning legal system is not simply a means to development but is development, insofar as such a system is a constitutive element of a social order that protects and advances human dignity, rights, and well-being. As this position has been characterized by David Kennedy: ...
… the focal point for development policy was increasingly provided less by economics than from ideas about the nature of the good state themselves provided by literatures of political science, political economy, ethics, social theory, and law. In particular, "human rights" and the "rule of law"²¹ became substantive definitions of development. One should promote human rights not to facilitate development, but as development. The rule of law was not a development tool; it was itself a development
objective. Increasingly, law, understood as a combination of human rights, courts, property rights, formalization of entitlements, prosecution of corruption, and public order, came to define development.²²
While this shift from considering law as a means to an end to considering law as an end in itself has been criticized on the grounds that it takes the focus off the difficult political choices that are inherent in any development policy,²³ it remains true that a well-functioning legal system is essential to the realization of a social order that protects and advances human dignity, rights, and well-being.
A/IS can contribute to the proper functioning of a legal system. A properly functioning legal system should be:
Speedy: enable quick resolution of civil and criminal cases; ...
Fair: produce results that are just and proportionate to circumstance;²⁴
Free from undesirable bias: operate without prejudice; ...
Consistent: arrive at outcomes in a principled, consistent, and nonarbitrary manner; ...
Transparent: be open to appropriate public examination and oversight;²⁵
Accessible: be equally open to all citizens and residents in resolving disputes; ...
Effective: achieve the ends intended by its laws and rules without negative collateral consequences;²⁶
Accurate: achieve accurate results, minimizing both false positives (persons unjustly or incorrectly targeted, investigated, or sentenced for crimes) and false negatives (persons incorrectly not targeted, investigated, or sentenced for crimes); ...
Adaptable: have the flexibility to adapt to changes in societal circumstances. ...
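The "Accurate" attribute above can be made concrete with a small sketch. The data, function name, and numbers below are hypothetical and purely illustrative, not drawn from the document or any real system; they simply show how false positive and false negative rates might be tallied when auditing a set of decisions.

```python
# Illustrative sketch (hypothetical data): tallying false positives (persons
# incorrectly flagged) and false negatives (persons incorrectly cleared) when
# auditing a batch of decisions from a hypothetical risk-assessment tool.

def error_rates(predictions, outcomes):
    """Return (false_positive_rate, false_negative_rate).

    predictions: list of bools -- True means the tool flagged the person.
    outcomes:    list of bools -- True means the adverse outcome occurred.
    """
    fp = sum(1 for p, o in zip(predictions, outcomes) if p and not o)
    fn = sum(1 for p, o in zip(predictions, outcomes) if not p and o)
    negatives = sum(1 for o in outcomes if not o)  # no adverse outcome
    positives = sum(1 for o in outcomes if o)      # adverse outcome occurred
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Hypothetical audit of ten decisions:
preds   = [True, True, False, False, True, False, True, False, False, True]
actuals = [True, False, False, True, True, False, False, False, False, True]
fpr, fnr = error_rates(preds, actuals)
print(f"false positive rate: {fpr:.2f}")  # 2 of 6 non-occurrences flagged
print(f"false negative rate: {fnr:.2f}")  # 1 of 4 occurrences missed
```

In practice, minimizing one rate tends to raise the other, which is why the text treats the two kinds of error as a balance rather than a single accuracy figure.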
A/IS have the potential to alter the overall functioning of a legal system. A/IS, applied responsibly and appropriately, could improve the legislative process, enhance access to justice, accelerate judicial decision-making, provide transparent and readily accessible information on why and how decisions were reached, reduce bias, support uniformity in judicial outcomes, help society identify (and potentially correct) judicial errors, and improve public confidence in the legal system. By way of example: ...
A/IS can make legislation and regulation more effective and adaptable. For lawmaking, A/IS could help legislators analyze data to craft more finely tuned, responsive, evidence-based laws and regulations. This could, potentially, offer self-correcting suggestions to legislators (and to the general public) to help inform dialogue on how to meet defined public policy objectives.
A/IS can make the practice of law more effective and efficient. For example, A/IS can enhance the speed, accuracy, and accessibility of the process of fact-finding in legal proceedings. When used appropriately in legal fact-finding, particularly in jurisdictions that allow extensive discovery or disclosure, A/IS already make litigation and investigations more accessible by analyzing vast data ...
collections faster, more efficiently, and potentially more effectively²⁷ than document analysis conducted solely by human attorneys. By making fact-finding in an era of big data progressively easier, faster, and cheaper, A/IS may facilitate access to justice for parties that otherwise may find using the legal system to resolve disputes cost-prohibitive. A/IS can also help ensure that justice is rendered based on better accounting of the facts, thus serving the central purpose of any legal system.
In both civil and criminal proceedings, A/IS can be used to improve the accuracy, fairness, and consistency of decisions rendered during proceedings. A/IS could serve as an auditing function for both the civil and criminal justice systems, helping to identify and correct judicial and law enforcement errors.²⁸
A/IS can increase the speed, accuracy, fairness, freedom from bias, and general effectiveness with which law enforcement resources are deployed to combat crime. A/IS could be used to reduce or prevent crime, respond more quickly to crimes in progress, and improve collaboration among different law enforcement agencies.²⁹
A/IS can help ensure that determinations about the arrest, detention, and incarceration of individuals suspected of, or convicted of, violations of the law are fair, free from bias, consistent, and accurate. Automated risk assessment tools have the potential to address issues of systemic racial bias in sentencing, parole, and bail determination while also safely reducing incarceration and recidivism ...
rates by identifying individuals who are less likely to commit crimes if released. ...
A/IS can help to ensure that the tools, procedures, and resources of the legal system are more transparent and accessible to citizens. For the ordinary citizen, A/IS can democratize access to legal expertise, especially in smaller matters, where they may provide effective, prompt, and low-cost initial guidance to an aggrieved party; for example, in landlord-tenant, product purchase, employment, or other contractual contexts where the individual often tends to find access to legal information and legal advice prohibitive, or where asymmetry of resources between the parties renders recourse to the legal system inequitable.³⁰
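One way the bias concerns raised in the risk-assessment bullet above might be checked in practice is a disparity audit that compares error rates across demographic groups. The sketch below uses hypothetical data and group labels and is illustrative only; real audits involve contested definitions of fairness and far more statistical care.

```python
# Illustrative sketch (hypothetical data): comparing false positive rates
# across groups as a first-pass audit of a hypothetical risk-assessment tool.

def group_false_positive_rates(records):
    """records: list of (group, flagged, adverse_outcome) tuples.

    Returns {group: false positive rate among that group's members
    for whom no adverse outcome actually occurred}.
    """
    fp, neg = {}, {}
    for group, flagged, adverse in records:
        if not adverse:                      # only non-occurrences can be FPs
            neg[group] = neg.get(group, 0) + 1
            if flagged:
                fp[group] = fp.get(group, 0) + 1
    return {g: fp.get(g, 0) / n for g, n in neg.items()}

# Hypothetical audit records: (group, tool flagged?, adverse outcome occurred?)
audit = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, True),
]
rates = group_false_positive_rates(audit)
# A large disparity between groups is a signal for further human review,
# not a verdict; it must be interpreted alongside base rates and context.
```

Such an audit speaks directly to the "free from bias" and "accountability" themes of this section: it is only possible when the tool's outputs and outcome data are accessible to an overseer.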
A/IS have the potential to improve how a legal system functions in fundamental ways. As is the case with all powerful tools, there are some risks. A/IS should not be adopted in a legal system without due care and scrutiny; they should be adopted after a society’s careful reflection and proper examination of evidence that their deployment and operation can be trusted to advance human dignity, rights, and well-being (see Issues 2-6). ...
Recommendations³¹
1. Policymakers should, in the interest of improving the function of their legal systems and bringing about improvements to human well-being, explore, through a broad consultative dialogue with all stakeholders, how A/IS can be adopted for use in their legal systems. They should do
so, however, only in accordance with norms for adoption that mitigate the risks attendant on such adoption (see Issues 2-6 in this section). ...
2. Governments, non-governmental organizations, and professional associations should support educational initiatives designed to create greater awareness among all stakeholders of the potential benefits and risks of adopting A/IS in the legal system, and of the ways of mitigating such risks. A particular focus of these initiatives should be the ordinary citizen who interacts with the legal system as a victim or criminal defendant. ...
Further Resources
A. Brunetti, G. Kisunko, and B. Weder, “Credibility of Rules and Economic Growth: Evidence from a Worldwide Survey of the Private Sector,” The World Bank Economic Review, vol. 12, no. 3, pp. 353-384, Sep. 1998. ...
S. Jasanoff, “Governing Innovation: The Social Contract and the Democratic Imagination,” Seminar, vol. 597, pp. 16-25, May 2009. ...
D. Kennedy, “The ‘Rule of Law,’ Political Choices and Development Common Sense,” ...
in The New Law and Economic Development: A Critical Appraisal, D. M. Trubek and A. Santos, eds., Cambridge: Cambridge University Press, 2006, pp. 95-173. ...
“Artificial Intelligence,” National Institute of Standards and Technology. ...
K. Schwab, “The Global Competitiveness Report: 2018,” The World Economic Forum, 2018. ...
A. Sen, Development as Freedom. New York, NY: Alfred A. Knopf, 1999. ...
United Nations General Assembly, Universal Declaration of Human Rights, Dec. 10, 1948. ...
UNICEF, Convention on the Rights of the Child, Nov. 4, 2014. ...
United Nations Office of the High Commissioner: Human Rights, The Vienna Declaration and Programme of Action, June 25, 1993. ...
World Bank, World Development Report 2017: Governance and the Law, Jan. 2017. ...
World Justice Project, Rule of Law Index, June 2018. ...
Issue 2: Impediments to Informed Trust ...
What are the challenges to adopting A/IS in legal systems and how can those impediments be overcome? ...
Background
Although the benefits to be gained by adopting A/IS in legal systems are potentially numerous (see the discussion of Issue 1), there are also significant risks that must be addressed for A/IS to be adopted in a manner that will realize those benefits. The risks sometimes mirror the expected benefits:
the potential for opaque decision-making; ...
the intentional or unintentional biases and abuses of power; ...
the emergence of nontraditional bad actors; ...
the perpetuation of inequality; ...
the depletion of public trust in a legal system; ...
the lack of human capital active in judicial systems to manage and operate A/IS; ...
the sacrifice of the spirit of the law in order to achieve the expediency that the letter of the law allows; ...
the unanticipated consequences of the surrender of human agency to nonethical agents; ...
the loss of privacy and dignity; ...
and the erosion of democratic institutions.³²
By way of example: ...
Currently, A/IS used in justice systems are not subject to uniform rules and norms and are often adopted piecemeal at the local or regional level, creating a highly variable landscape of tools and adoption practices. Critics argue that, far from improving fact-finding in civil and criminal matters or eliminating bias in law enforcement, these tools have unproven accuracy, are error-prone, and may serve to entrench existing social inequalities. The tools' potential must be weighed against their pitfalls, which include unclear efficacy; incompetent operation; and potential impairment of a legal system's ability to adhere to principles of socioeconomic, racial, and religious equality, government transparency, and individual due process, and to render justice in an informed, consistent, and fair manner.
In the case of State v. Loomis, an important but not widely known case, the Wisconsin Supreme Court held that a trial court's use of an algorithmic risk assessment tool in sentencing did not violate the defendant's due process rights, despite the fact that the methodology used to obtain the automated assessment was not disclosed to either the court or the defendant.³³ A man received a lengthy sentence based in part on what an opaque algorithm thought of him. While the court considered many factors, and sought to balance competing societal values, this
is just one case in a growing set of cases illustrating how criminal justice systems are being impacted by proprietary claims of trade secrets, opaque operation of A/IS, a lack of evidence of the effectiveness of A/IS, and a lack of norms for the adoption of A/IS in the extended legal system. ...
More generally, humans tend to be subject to the cognitive bias known as “anchoring”, which can be described as the excessive reliance on an initial piece of information. This may lead to the progressive, unwitting, and detrimental reliance of judges and legal practitioners on assessments produced by A/IS. This risk is compounded by the fact that A/IS are (and shall remain in the foreseeable future) nonethical agents, incapable of empathy, and thus at risk of being unable to produce decisions aligned with not just the letter of the law, but also the spirit of the law and reasonable regard for the circumstances of each defendant. ...
The technical and scientific knowledge required to procure, deploy, and effectively operate A/IS, as well as that required to measure the ability of A/IS to achieve a given purpose without adverse collateral consequences, represents a significant hurdle to the beneficial long-term adoption of A/IS in a legal system. This is especially the case when, as at present, actors in the civil and criminal justice systems and in law enforcement may lack the requisite specialized technological or scientific expertise.³⁴
Such risks must be addressed in order to ensure sustainable management and public oversight of what will foreseeably become an increasingly automated justice system.³⁵ The view expressed by the Organisation for Economic Co-operation and Development (OECD) in the domain of digital security, that "robust strategies to [manage risk] are essential to establish the trust needed for economic and social activities to fully benefit from digital innovation,"³⁶ applies equally to the adoption of A/IS in the world's legal systems.
Informed trust. If we are to realize the benefits of A/IS, we must trust that they are safe and effective. People board airplanes, take medicine, and allow their children on amusement park rides because they trust that the tools, methods, and people powering those technologies meet certain safety and effectiveness standards that reduce the risks to an acceptable level given the objectives and benefits to be achieved. This need for trust is especially important in the case of A/IS used in a legal system. The "black box" nature of A/IS deployed in the service of a legal system, and the resulting lack of trust in those systems, could quickly translate into a lack of trust in the legal system itself. This, in turn, may undermine the social order. Therefore, if we are to improve the functioning of our legal systems through the adoption of A/IS, we must enact policies and promote practices that allow those technologies to be adopted on the basis of informed trust. Informed trust rests on a reasoned evaluation of clear and accurate information about the effectiveness of A/IS and the competence of their operators.³⁷
The following four principles, we believe, meet the design criteria just described:
Effectiveness: Adoption of A/IS in a legal system should be based on sound empirical evidence that they are fit for their intended purpose. ...
Competence: A/IS should be adopted in a legal system only if their creators specify ...
the skills and knowledge required for their effective operation and if their operators adhere to those competency requirements. ...
Accountability: A/IS should be adopted in a legal system only if all those engaged in their design, development, procurement, deployment, operation, and validation of effectiveness maintain clear and transparent lines of responsibility for their outcomes and are open to inquiries as may be appropriate. ...
Transparency: A/IS should be adopted in a legal system only if the stakeholders in the results of A/IS have access to pertinent and appropriate information about their design, development, procurement, deployment, operation, and validation of effectiveness. ...
In the remainder of Section 1, we elaborate on each of these principles. Before turning to a specific discussion of each, we add two further considerations that should be kept in mind when applying them collectively. ...
Differences in emphasis. While all four of the aforementioned principles will contribute to the fostering of trust, each principle will not contribute equally in every circumstance. For example, in many applications of A/IS, a well-established measure of effectiveness, obtained by proven and accepted methods, may go a considerable way to creating conditions for trust in the given application. In such a case, the other principles may add to trust, but they may not be necessary to establish trust. Or, to take another example, in some applications the role of the human operator may be minimal, while in other applications there will be extensive scope for
human agency where competence has a greater role to play. In finding the right emphasis and balance among the four principles, policymakers and practitioners will have to consider the specific circumstances of A/IS. ...
Governments should set procurement and contracting requirements that encourage parties seeking to use A/IS in the conduct of business with or for the government, particularly with or for the court system and law enforcement agencies, to adhere to the principles of effectiveness, competence, accountability, and transparency as described in this chapter. This can be achieved through legislation or administrative regulation. All government efforts in this regard should be transparent and open to public scrutiny. ...
Professionals engaged in the practice, interpretation, and enforcement of the ...
law (such as lawyers, judges, and law enforcement officers), when engaging with or relying on providers of A/IS technology or services, should require, at a minimum, that those providers adhere to, and be able to demonstrate adherence to, the principles of effectiveness, competence, accountability, and transparency as described in this chapter. Likewise, those professionals, when operating A/IS themselves, should adhere to, and be able to demonstrate adherence to, the principles of effectiveness, competence, accountability, and transparency. Demonstrations of adherence to the requirements should be publicly accessible. ...
Regulators should permit insurers to issue professional liability and other insurance policies that consider whether the insured (either a provider or operator of A/IS in a legal system) adheres to the principles of effectiveness, competence, accountability, and transparency (as they are articulated in this chapter). ...
Further Resources
“Criminal Law - Sentencing Guidelines - Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing - State v. Loomis, 881 N.W.2d 749 (Wis. 2016),” Harvard Law Review, vol. 130, no. 5, pp. 1530-1537, 2017.
K. Freeman, “Algorithmic Injustice: How the Wisconsin Supreme Court Failed to Protect Due Process Rights in State v. Loomis,” North Carolina Journal of Law and Technology, vol. 18, no. 5, pp. 75-76, 2016. ...
“Managing Digital Security and Privacy Risk: Background Report for Ministerial Panel 3.2,” Organisation for Economic Co-operation and Development (OECD) Directorate for Science, Technology, and Innovation: Committee on Digital Economy Policy, June 1, 2016. ...
State v. Loomis, 881 N.W.2d 749 (Wis. 2016), cert. denied (2017).
“Global Governance of AI Roundtable: Summary Report 2018,” World Government Summit, 2018.
Issue 3: Effectiveness ...
How can the collection and disclosure of evidence of effectiveness of A/IS foster informed trust in their suitability for adoption in legal systems?
Background ...
An essential component of trust in a technology is trust that it works and meets the purpose for which it is intended. We now turn to a discussion of the role that evidence of effectiveness, chiefly in the form of the results of a measurement exercise, can play in fostering informed trust in A/IS as applied in legal systems.^38 We begin with a general characterization of what we mean by evidence of effectiveness: what we are measuring, how we are measuring, what form our results take, and who the intended
consumers of the evidence are. We then identify the specific features of the practice of measuring effectiveness that will enable it to contribute to informed trust in A/IS as applied in a legal system. ...
What constitutes evidence of effectiveness? ...
What we are measuring. In gathering evidence of effectiveness, we are seeking to gather empirical data that will tell us whether a given technology or its application will serve as an effective solution to the problem it is intended to address. Serving as an effective solution means more than meeting narrow specifications or requirements; it means that the A/IS are capable of addressing their target problems in the real world, which, in the case of A/IS applied in a legal system, are problems in the making, administration, adjudication, or enforcement of the law. It also means remaining practically feasible once collateral concerns and potential unintended consequences are taken into account.^39 To take a non-A/IS example, under the definition of effectiveness we are considering, for an herbicide to be considered effective, it must be shown not only to kill the target weeds, but also to do so without causing harm to nontarget plants, to the person applying the agent, and to the environment in general.
Under the definition above, assessing the effectiveness of A/IS in accomplishing the target task (narrowly defined) is not sufficient; it may also be necessary to assess the extent to which the A/IS are aligned with applicable ...
laws, regulations, and standards,^40 and whether (and to what extent) they impinge on values such as privacy, fairness, or freedom from bias.^41 Whether such collateral concerns are salient will depend on the nature of the A/IS and on the particular circumstances in which they are to be applied.^42 However, it is only from such a complete view of the impact of A/IS that a balanced judgment can be made of the appropriateness of their adoption.^43
Although the scope of an evaluation of effectiveness is broader than a narrowly focused verification that a specific requirement is met, it has its limits. There are measures of aspects of A/IS that one might find useful but that are outside the scope of effectiveness. For example, given frequently expressed concerns that A/IS will one day cross the limits of their intended purpose and overwhelm their creators and users, one might seek to define and obtain general measures of the autonomy of a system or of a system’s capacity for artificial general intelligence (AGI). Although such measures could be useful, assuming they could be defined, they are beyond the scope of evaluations of effectiveness. Effectiveness is always tied to a target purpose, even if it includes consideration of the collateral effects of the manner of meeting that purpose.
What we are measuring is therefore a general “fitness for purpose”. ...
How we measure. Evidence of effectiveness is typically gathered in one of two types of exercises:^44
A single-system validation exercise measures and reports on the effectiveness of a single system on a given task. In such an exercise, the system to be validated will typically have already carried out the target task on a given data set. The purpose of the validation is to provide empirical evidence of how successful the system has been in carrying out the task on that data set. Measurements are obtained by independent sampling and review of the data to which the system was applied. Once obtained, those metrics serve to corroborate or refute the hypothesis that the system operated as intended in the instance under consideration. An example of validation as applied to legal fact-finding would be a test of the effectiveness of A/IS that had been used to retrieve material relevant (as defined by the humans deploying the system) to a given legal inquiry from a collection of emails. ...
A multi-system (or benchmarking) evaluation involves conducting a comparative study of the effectiveness of several systems designed to meet the same objective. Typically, in such a study, a test data set is identified, a task to be performed is defined (ideally, a task that models the real-world objectives and conditions for which the systems under evaluation have been designed^45), the systems to be evaluated are used to carry out the task, and the success of each system in carrying out the task is measured and reported. An example of this sort of evaluation applied to a specific
real-world challenge in the justice system is the series of evaluations of the effectiveness of information retrieval systems in civil discovery, including A/IS, conducted as part of the US National Institute of Standards and Technology (NIST) Text REtrieval Conference (TREC) Legal Track initiative.^46
The measurements obtained by both types of evaluation exercises are valuable. The results of a single-system validation exercise are typically more specific, answering the question of whether a system was effective in a specific instance. The results of a multi-system evaluation are typically more generic, answering the question of whether a system can be effective in real-world circumstances. Both questions are important, hence both types of evaluations are valuable.^47
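The sample-based validation described above, in which independent review of a random sample corroborates or refutes the hypothesis that a system performed as intended, can be sketched in code. The numbers and review labels below are hypothetical, and the normal-approximation confidence interval is one common choice of statistical procedure, not one prescribed here:

```python
import math
import random

def estimate_proportion(sample_labels, z=1.96):
    """Estimate a proportion (e.g., the fraction of sampled documents on which
    independent review agrees with the system's call) with a 95% normal-
    approximation confidence interval."""
    n = len(sample_labels)
    p = sum(sample_labels) / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical validation exercise: draw 400 documents at random from the set
# the system processed; record 1 where review confirms the system's decision.
random.seed(42)
sample = [1 if random.random() < 0.85 else 0 for _ in range(400)]
p, lo, hi = estimate_proportion(sample)
print(f"estimated agreement: {p:.1%} (95% CI: {lo:.1%} to {hi:.1%})")
```

The point of the sketch is that the claim "the system operated as intended" is backed by a quantified estimate with stated uncertainty, rather than by assertion.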
The form of results. The results of an evaluation typically take the form of a number, a quantitative gauge of effectiveness. This can be, for example, the decreased likelihood of developing a given medical condition; safety ratings for automobiles; recall measures for retrieving responsive documents; and so on. Certainly, qualitative considerations are not (and should not be) ignored; they often provide context crucial to interpreting the quantitative results.^48 Nevertheless, at the heart of the results of an evaluation exercise is a number, a metric that serves as a telling indicator of effectiveness.^49
In some cases, the research community engaged in developing a new system will have reached consensus on salient effectiveness metrics. In other cases, the research community may not
have reached a consensus, requiring further study. In the case of A/IS, given both their accelerating development and the fact that they are often applied to tasks for which the effectiveness of their human counterparts is seldom precisely gauged, we are often still at the stage of defining metrics. An example of an application of A/IS for which there is a general consensus around measures of effectiveness is legal electronic discovery,^50 where there is a working consensus around the use of the evaluation metrics referred to as “recall” and “precision”.^51 Conversely, in the case of A/IS applied in support of sentencing decisions, a consensus on the operative effectiveness metrics does not yet exist.^52
The consumers of the results. In defining metrics, it is important to keep in mind the consumers of the results of an evaluation of effectiveness. Broadly speaking, it is helpful to distinguish between two categories of stakeholders who will be interested in measurements of effectiveness: ...
Experts are the researchers, designers, operators, and advanced users with appropriate scientific or professional credentials who have a technical understanding of the way in which a system works and are well-versed in evaluation methods and the results they generate. ...
Nonexperts are the legislators, judges, lawyers, prosecutors, litigants, communities, victims, defendants, and system advocates whose work or legal outcomes may, even if only indirectly, be affected by the results ...
of a given system. These individuals, however, may not have a technical understanding of the way in which a system operates. Furthermore, they may have little experience in conducting scientific evaluations and interpreting their results. ...
Effectiveness metrics must meet the needs of both expert and nonexpert consumers. ...
With respect to experts, the purpose of an effectiveness metric is to advance both long-term research and more immediate product development, maintenance, and oversight. To achieve that purpose, it is appropriate to define a fine-grained metric that may not be within the grasp of the nonexpert. Researchers and developers will be acting on the information provided by such a metric, so it should be tailored to their needs.
With respect to nonexperts, including the general public, the purpose of an effectiveness metric is to advance informed trust, meaning trust that is based on sound evidence that the A/IS have met, or will meet, their intended objectives, taking into account both the immediate purpose and the contextual purpose of preserving and fostering important values such as human rights, dignity, and well-being. For this purpose, it will be necessary to define a metric that can serve as a readily understood summary measure of effectiveness. This metric must provide a simple, direct answer to the question of how effective a given system is. Automobile safety ratings are an example of this sort of metric. For automobile designers and engineers, the summary ...
metrics are not sufficiently fine-grained to give immediately actionable information; for consumers, however, the metrics, insofar as they are accurate, empower them to make better-informed buying decisions. ...
For the purpose of fostering informed trust in A/IS adopted in the legal system, the most important goal is to establish a clear measure of effectiveness that can be understood by nonexperts. However, significant obstacles to achieving this goal include (a) developer incentives that prioritize research and development, along with the metrics that support such efforts, and (b) market forces that inhibit, or do not encourage, consumer-facing metrics. For those reasons, it is important that the selection and definition of the operative metrics draw on input not only from the A/IS creators but from other stakeholders as well; only under these conditions will a consensus form around the meaningfulness of the metrics. ...
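One widely used way to collapse two component measures, such as the recall and precision metrics used in legal discovery, into a single summary number of the kind a nonexpert can compare across systems is the harmonic mean, known as the F1 score. This chapter does not prescribe that particular metric; the sketch below is only an illustration of what a "readily understood summary measure" can look like, with invented figures:

```python
def f1_score(recall, precision):
    """Harmonic mean of recall and precision: a single summary number
    that is high only when both component measures are high."""
    if recall + precision == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)

# Hypothetical example: a system with 70% recall and 67% precision earns a
# single summary score a nonexpert can compare against other systems.
print(f"{f1_score(0.70, 0.67):.0%}")
```

The design choice the harmonic mean embodies, that a system cannot buy a high summary score by excelling on one component while failing the other, is itself the kind of property stakeholders would need to understand and agree on before such a metric could anchor informed trust.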
What measurement practices foster informed trust? ...
By equipping both experts and nonexperts with accurate information regarding the capabilities and limitations of a given system, measurements of effectiveness can provide society with information needed to adopt and apply A/IS in a thoughtful, carefully considered, beneficial manner.^53
In order for the practice of measuring effectiveness to realize its full potential for fostering trust and mitigating the risks of uninformed adoption and uninformed avoidance of adoption, it must have certain features: ...
Meaningful metrics: As noted above, an essential element of a measurement practice is a metric that provides an accurate and readily understood gauge of effectiveness. The metric should provide clear and actionable information as to the extent to which a given application has, or has not, met its objective so that potential users of the results of the application can respond accordingly. For example, in legal discovery, both recall and precision have done this well and have contributed to the acceptance of the use of A/IS for this purpose.^54
Sound methods: Measures of effectiveness must be obtained by scientifically sound methods. If, for example, measures are obtained by sampling, those sample-based estimates must be the result of sound statistical procedures that hold up to objective scrutiny. ...
Valid data: Data on which evaluations of effectiveness are conducted should accurately represent the actual data to which the given A/IS would be applied and should be vetted for potential bias. Any data sets used for benchmarking or testing should be collected, maintained, and used in accordance with principles for the protection of individual privacy and agency.^55
Awareness and consensus: Measurement practices must not only be technically sound in terms of metrics, methods, and data, but they must also be widely understood and accepted as evidence of effectiveness. ...
Implementation: Measurement practices must be both practically feasible and actually implemented, i.e., widely adopted by practitioners.^56
Transparency: Measurement methods and results must be open to scrutiny by experts and the general public.^57 Without such scrutiny, the measurements will not be trusted and will be incapable of fulfilling their intended purpose.^58
In seeking to advance informed trust in A/IS, policymakers should formulate policies and promote standards that encourage sound measurement practices, especially those that incorporate the key features. ...
Additional note. While in all circumstances all four principles discussed in this chapter (Effectiveness, Competence, Accountability, Transparency) will have something to contribute to the fostering of informed trust, it is not the case that in every circumstance all four principles will contribute equally to the fostering of trust. In some circumstances, a well-established measure of effectiveness, obtained by proven and accepted methods, may go a considerable way, on its own, in fostering trust in a given application (or distrust, if that is what the measurements indicate). In such circumstances, the challenges presented by the other principles, e.g., the challenge of adhering to the principle of transparency while respecting intellectual property considerations, may become of secondary importance.
Illustration-Effectiveness ...
The search for factual evidence in large document collections in US civil or criminal proceedings has traditionally involved page-by-page manual review by attorneys. Starting in the 1990s, the proliferation of electronic data, such as email, rendered manual review prohibitively costly and time-consuming. By 2008, A/IS designed to substantially automate review of electronic data (a task known as “e-discovery”) were available. Yet, adoption remained limited. Chief among the obstacles to adoption was a concern about the effectiveness, and hence defensibility in court, of A/IS in e-discovery. Simply put, practitioners and courts needed a sound answer to a simple question: “Does it work?” ...
Starting in 2006, the US NIST^59 conducted studies to address that question.^60 The studies focused on, among others, two sound statistical metrics, both expressed as easy-to-understand percentages:^61,62
Recall, which is a gauge of the extent to which all the relevant documents were retrieved. For example, if there were 1,000 relevant documents to be found in the collection, and the review process identified 700 of them, then it achieved 70% recall.
Precision, which is a gauge of the extent to which the documents identified as relevant by a process were actually relevant. For example, if for every two relevant documents the system captured, it also captured a nonrelevant one (i.e., a false positive), then it achieved 67% precision. ...
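The two worked examples above can be reproduced with a short sketch. The document identifiers are invented for illustration; the definitions of recall and precision are the standard ones described in the text:

```python
def recall_precision(retrieved, relevant):
    """Compute recall and precision for a document review process.

    retrieved: set of document IDs the process identified as relevant
    relevant:  set of document IDs that are actually relevant
    """
    true_positives = len(retrieved & relevant)
    recall = true_positives / len(relevant)
    precision = true_positives / len(retrieved)
    return recall, precision

# The worked example from the text: 1,000 relevant documents, of which the
# process finds 700, plus one false positive for every two relevant documents
# captured (350 nonrelevant documents retrieved in total).
relevant = {f"doc{i}" for i in range(1000)}
retrieved = {f"doc{i}" for i in range(700)} | {f"junk{i}" for i in range(350)}
r, p = recall_precision(retrieved, relevant)
print(f"recall: {r:.0%}, precision: {p:.0%}")  # recall: 70%, precision: 67%
```

Note that the two metrics pull against each other: retrieving more documents tends to raise recall and lower precision, which is why the TREC studies reported both.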
The studies provided empirical evidence that some systems could achieve high scores (80%) according to both metrics.^63 In a seminal follow-up study, Maura R. Grossman and Gordon V. Cormack found that two automated systems did, in fact, “conclusively” outperform human reviewers.^64 Drawing on the results of that study, Magistrate Judge Andrew Peck, in an opinion with far-reaching consequences, gave court approval for the use of A/IS to conduct legal discovery.^65
The story of the TREC Legal Track’s role in facilitating the adoption of A/IS for legal fact-finding contains a few lessons:
Metrics: By focusing on recall and precision, the TREC studies quantified the effectiveness of the systems evaluated in a way that legal practitioners could readily understand. ...
Benchmarks: The TREC studies filled an important gap: independent, scientifically sound evaluations of the effectiveness of A/IS applied to the real-world challenge of legal e-discovery. ...
Collaboration: The founders of the TREC studies and the most successful participants came from both scientific and legal backgrounds, demonstrating the importance of multidisciplinary collaboration. ...
The TREC studies are a shining example of how the truth-seeking protocols of science can be used to advance the truth-seeking protocols of the law. They can serve as a conceptual basis for future benchmarking efforts, as well as the development of standards and certification programs to support informed trust when it comes to effectiveness of A/IS deployed in legal systems.^66
Recommendations ...
Governments should fund and support the establishment of ongoing benchmarking exercises designed to provide valid, publicly accessible measurements of the effectiveness of A/IS deployed, or potentially deployed, in the legal system. That support could take a number of forms, ranging from direct sponsorship and oversight (for example, by nonregulatory measurement laboratories such as the US NIST) to indirect support through the recognition of the results of a credible third-party benchmarking exercise for the purposes of meeting procurement and contracting requirements. All government efforts in this regard should be transparent and open to public scrutiny.
Governments should facilitate the creation of data sets that can be used for purposes of evaluating the effectiveness of A/IS as applied in the legal system. In assisting in the creation of such data sets, governments and administrative agencies will have to take into consideration potentially competing societal values, such as the protection of personal data, and arrive at solutions that maintain those values while enabling the creation of usable, real-world data sets. All government efforts in this regard should be transparent and open to public scrutiny. ...
Creators of A/IS to be applied to legal matters should pursue valid measures of the effectiveness of their systems, whether through participation in benchmarking exercises or through conducting single-system validation exercises. Creators should describe ...
the procedures and results of the testing in clear language that is understandable to both experts and nonexperts, and should do so without disclosing intellectual property. Further, the descriptions should be open to examination by all stakeholders, including, when appropriate, the general public. ...
Researchers engaged in the study and development of A/IS for use in the legal system should seek to define meaningful metrics that gauge the effectiveness of the systems they study. In selecting and defining metrics, researchers should seek input from all stakeholders in the outcome of the given application of A/IS in the legal system. The metrics should be readily understandable by experts and nonexperts alike. ...
Governments and industry associations should undertake educational efforts to inform both those engaged in the operation of A/IS deployed in the legal system and those affected by the results of their operation about the salient measures of effectiveness and what those measures can indicate about the capabilities and limitations of the A/IS in question.
Creators of A/IS for use in the legal system should ensure that the effectiveness metrics defined by the research community are readily obtainable and accessible to all stakeholders, including, when appropriate, the general public. Creators should provide guidance on how to interpret and respond to the metrics generated by the system. ...
Operators of A/IS applied to a legal task should follow the guidance on the measurement of effectiveness provided for ...
the A/IS being used. This includes guidance about which metrics to obtain, how and when to obtain them, how to respond to given results, when it may be appropriate to follow alternative methods of gauging effectiveness, and so on. ...
In interpreting and responding to measurements of the effectiveness of A/IS applied to legal problems or questions, allowance should be made for variation in the specific objectives and circumstances of a given deployment of A/IS. Quantitative results should be supplemented by qualitative evaluation of the practical significance of a given outcome and whether it indicates a need for remediation. This evaluation should be done by an individual with the technical expertise and pragmatic experience needed to make a sound judgment.
Industry associations or other organizations should collaborate on developing standards for measuring and reporting on the effectiveness of A/IS. These standards should be developed with input from both the scientific and legal communities. ...
Recommendation 1 under Issue 2, with respect to effectiveness.
Recommendation 2 under Issue 2, with respect to effectiveness. ...
Further Resources ...
Da Silva Moore v. Publicis Groupe, 2012 WL 607412 (S.D.N.Y. Feb. 24, 2012). ...
C. Garvie, A. M. Bedoya, and J. Frankle, “The Perpetual Line-Up: Unregulated Police Face Recognition in America,” Georgetown Law, Center on Privacy & Technology, Oct. 2016. ...
M. R. Grossman and G. V. Cormack, “Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review,” Richmond Journal of Law and Technology, vol. 17, no. 3, 2011. ...
B. Hedin, D. Brassil, and A. Jones, “On the Place of Measurement in E-Discovery,” in Perspectives on Predictive Coding and Other Advanced Search Methods for the Legal Practitioner, J. R. Baron, R. C. Losey, and M. D. Berman, Eds. Chicago: American Bar Association, 2016. ...
J. A. Kroll, “The fallacy of inscrutability,” Philosophical Transactions of the Royal Society A: Mathematical, Physical, and Engineering Sciences, vol. 376, no. 2133, Oct. 2018. ...
D. W. Oard, J. R. Baron, B. Hedin, D. Lewis, and S. Tomlinson, “Evaluation of Information Retrieval for E-Discovery,” Artificial Intelligence and Law, vol. 18, no. 4, pp. 347-386, Aug. 2010. ...
The Sedona Conference, “The Sedona Conference Commentary on Achieving Quality in the E-Discovery Process,” The Sedona Conference Journal, vol. 15, pp. 265-304, 2014. ...
M. T. Stevenson, “Assessing Risk Assessment in Action,” Minnesota Law Review, vol. 103, June 2018. ...
“Global Governance of AI Roundtable: Summary Report 2018,” World Government Summit, 2018. ...
High-Level Expert Group on Artificial Intelligence, “DRAFT Ethics Guidelines for Trustworthy AI: Working Document for Stakeholders’ Consultation,” The European Commission. Brussels, Belgium: Dec. 18, 2018. ...
Issue 4: Competence ...
How can specification of the knowledge and skills required of the human operator(s) of A/IS foster informed trust in the suitability of A/IS for adoption in legal systems?
Background ...
An essential component of informed trust in a technological system, especially one that may affect us in profound ways, is confidence in the competence of the operator(s) of the technology. We trust surgeons or pilots with our lives because we have confidence that they have the knowledge, skills, and experience to apply the tools and methods needed to carry out their tasks effectively. We have that confidence because we know that these operators have met rigorous professional and scientific accreditation standards before being allowed to step into the ...
operating room or cockpit. This informed trust in operator competence is what gives us confidence that surgery or air travel will result in the desired outcome. No such standards of operator competence currently exist with respect to A/IS applied in legal systems, where the life, liberty, and rights of citizens can be at stake. That absence of standards hinders the trustworthy adoption of A/IS in the legal domain. ...
The human operator is an integral component of A/IS ...
Despite this, there are few standards that specify how humans should mediate applications of A/IS in legal systems, or what knowledge qualifies a person to apply A/IS and interpret their results.^68 This reality is especially troubling for the instances in which the life, rights, or liberty of humans are at stake. Today, while professional codes of ethics for lawyers are beginning to include among their
requirements an awareness and understanding of technologies with legal application,^69 the operators of A/IS in legal systems are essentially deemed to be capable of determining their own competence: lawyers or IT professionals operating in civil discovery, correctional officers using risk assessment algorithms, and law enforcement agencies engaging in predictive policing or using automated surveillance technologies. All are mostly able to use A/IS without demonstrating that they understand the operation of the system they are using or that they have any particular set of consensus competencies.^70
The lack of competency requirements or standards undermines the establishment of informed trust in the use of A/IS in legal systems. If courts, legal practitioners, law enforcement agencies, and the general public are to rely on the results of A/IS when applied to tasks traditionally carried out by legal professionals, they must have grounds for believing that those operating A/IS will possess the requisite knowledge and skill to understand the conditions and methods for operating the systems effectively, including evaluating the data on which the A/IS trained, the data to which they are applied, the results they produce, and the methods and results of measuring the effectiveness of the systems. Applied incompetently, A/IS could produce the opposite of the intended effect. Instead of improving a legal system, and bringing about the gains in well-being that follow from such improvements, they may undermine both the fairness and effectiveness of a legal system and trust in its fairness and effectiveness, creating conditions for social disorder and the deterioration of human
well-being that would follow from that disorder. By way of illustration: ...
A city council might misallocate funds for policing across city neighborhoods because it relies on the output of an algorithm that directs attention to neighborhoods based on arrest rates rather than actual crime rates.^71
In civil justice, A/IS applied in a search of documents to uncover relevant facts may fail to do so because an operator without sufficient competence in statistics may materially overestimate the accuracy of the system, thus ceasing vital fact-finding activities.^72
In the money bail system, reliance on A/IS to reduce bias may instead perpetuate it. For example, if a judge does not understand whether an algorithm makes sufficient contextual distinctions between gradations of offenses,^73 that judge would not be able to probe the output of the A/IS and make a well-informed use of it.
In the criminal justice system, an operator using A/IS in sentencing decision-support may fail to identify bias, or to assess the risk of bias, in the results generated by the A/IS,^74 unfairly depriving a citizen of his or her liberty or prematurely granting an offender’s release, increasing the risk of recidivism.
More generally, without the confidence that A/IS operators will apply the technology as intended and supervise it appropriately, the general public will harbor fear, uncertainty, and doubt about the use of A/IS in legal systems, and potentially about the legal systems themselves.

Fostering informed trust in the competence of human operators

If negative outcomes such as those just described are to be avoided, norms for the adoption of A/IS in a legal system must include a provision for building informed trust in the operators of A/IS. Building trust will require articulating standards and best practices for two groups of agents involved in the deployment of A/IS: creators and operators.

On the one hand, those engaged in the design, development, and marketing of A/IS must commit to specifying the knowledge, skills, and conditions required for the safe, ethical, and effective deployment and operation of the systems.[75] On the other hand, those engaged in actually operating the systems, including both legal professionals and experts acting in the service of legal professionals, must commit to adhering to these requirements in a manner consistent with other operative legal, ethical, and professional requirements. The precise nature of the competency requirements will vary with the nature and purpose of the A/IS and what is at stake in their effective operation. The requirements for the operation of A/IS designed to assist in the creation of contracts, for example, might be less stringent than those for the operation of A/IS designed to assess flight risk, which could affect the liberty of individual citizens.
A corollary of these provisions is that education and training in the requisite skills should be available and accessible to those who would operate A/IS, whether that training is provided through professional schools, such as law schools; through institutions providing ongoing professional training, such as, for federal judges in the United States, the Federal Judicial Center; through professional and industry associations, such as the American Bar Association; or through resources accessible by the general public.[76] Making sure such training is available and accessible will be essential to ensuring that the resources needed for the competent operation of A/IS are widely and equitably distributed.[77]

It will take a combined effort of both creators and operators to ensure both that A/IS designed for use in legal systems are properly applied and that those with a stake in the effective functioning of legal systems (including legal professionals, of course, but also decision subjects, victims of crime, communities, and the general public) will have informed trust, or, for that matter, informed distrust (if that is what a competence assessment finds) in the competence of the operators of A/IS as applied to legal problems and questions.[78]
Illustration: Competence

Included among the offerings of Amazon Web Services is an image and video analysis service known as Amazon Rekognition.[79] The service is designed to enable the recognition of text, objects, people, and actions in images and videos. The technology also enables the search and comparison of faces, a feature with potential law enforcement and national security applications, such as comparing faces identified in video taken by a security camera with those in a database of jail booking photos. Attracted by the latter feature, police departments in Oregon and Florida have undertaken pilots of Rekognition as a tool in their law enforcement efforts.[80]

In 2018, the American Civil Liberties Union (ACLU), a frequent critic of the use of facial recognition technologies by law enforcement agencies,[81] conducted a test of Rekognition. The test consisted of first constructing a database of 25,000 booking photos ("mugshots"), then comparing publicly available photos of all then-current members of the US Congress against the images in the database. The test found that Rekognition incorrectly matched the faces of 28 members of Congress with faces of individuals who had been arrested for a crime.[82] The ACLU argues that the high number of false positives generated by the technology shows that police use of facial recognition technologies generally (and of Rekognition in particular) poses a risk to the privacy and liberty of law-abiding citizens. The ACLU has used the results of its test of Rekognition to support its proposal that Congress enact a moratorium on the use of facial recognition technologies by law enforcement agencies until stronger safeguards against their misuse, and potential abuse, can be put in place.[83]
In response to the ACLU report, Amazon noted that the ACLU researchers, in conducting their study, had applied the technology utilizing a similarity threshold (a gauge of the likelihood of a true match) of 80%, a threshold that casts a fairly wide net for potential matches and hence generates a high number of false positives. For applications in which there are greater costs associated with false positives (e.g., policing), Amazon recommends utilizing a similarity threshold value of 99% or above to reduce accidental misidentification.[84] Amazon also noted that, in all law enforcement use cases, it would be expected that the results of the technology would be reviewed by a human before any actual police action would be undertaken.
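The effect of the operator's threshold choice can be illustrated with a toy simulation: the same matching system produces very different numbers of false matches at 80% versus 99% similarity. The score distribution below is invented for illustration and does not model Rekognition's actual behavior.

```python
import random

random.seed(0)

# Hypothetical similarity scores for 25,000 pairs of *different* people.
# Real systems produce their own distribution; Beta(5, 5) is used here only
# so that some non-matching pairs occasionally score above 0.80.
nonmatch_scores = [random.betavariate(5, 5) for _ in range(25_000)]

def false_positives(scores, threshold):
    """Count non-matching pairs whose similarity clears the threshold."""
    return sum(s >= threshold for s in scores)

for t in (0.80, 0.99):
    print(f"threshold {t:.2f}: {false_positives(nonmatch_scores, t)} false matches")
```

The point is not the specific counts but the shape of the trade-off: lowering the threshold widens the net and multiplies false matches, which is why competent use requires knowing what threshold a given application demands.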
The story of the ACLU's testing of Rekognition and Amazon's response highlights the importance of specifying and adhering to guidelines for competent use.[85] Had a law enforcement agency used the technology in the way it was used in the ACLU test, it would, in most legitimate use cases, be guilty of incompetent use. At the same time, Amazon is not free of blame insofar as it did not specify prominently and clearly the competency guidelines for effective use of the technology in support of law enforcement efforts, as well as the risks that might be incurred if those guidelines are not followed. Competent use[86] follows both from the A/IS creator's specification of well-grounded[87] competency guidelines and from the A/IS operator's adherence to those guidelines.[88]
Recommendations

1. Creators of A/IS for application in legal systems should provide clear and accessible guidance for the knowledge, skills, and experience required of the human operators of the A/IS if the systems are to achieve expected levels of effectiveness. Included in that guidance should be a delineation of the risks involved if those requirements are not met. Such guidance should be documented in a form that is accessible and understandable by both experts and the general public.
2. Creators and developers of A/IS for application in legal systems should create written policies that govern how the A/IS should be operated. In creating these policies, creators and developers should draw on input from the legal professionals who will be using the A/IS they are creating. The policies should include:

the specification of the real-world applications for the A/IS;
the preconditions for their effective use;
the training and skills that are required for operators of the systems;
the procedures for gauging the effectiveness of the A/IS;
the considerations to take into account in interpreting the results of the A/IS;
the outcomes that can be expected by both operators and other affected parties when the A/IS are operated properly; and
the specific risks that follow from improper use.

The policies should also specify circumstances in which it might be necessary for the operator to override the A/IS. All such policies should be publicly accessible.
3. Creators and developers of A/IS to be applied in legal systems should integrate safeguards against the incompetent operation of their systems. Safeguards could include issuing notifications and warnings to operators in certain conditions, requiring, as appropriate, acknowledgment of receipt; limiting access to A/IS functionality based on the operator's level of expertise; enabling system shut-down in potentially high-risk conditions; and more. These safeguards should be flexible and governed by context-sensitive policies set by competent personnel of the entity (e.g., the judiciary) utilizing the A/IS to address a legal problem.

4. Governments should provide that any individual whose legal outcome is affected by the application of A/IS should be notified of the role played by A/IS in that outcome. Further, the affected party should have recourse to appeal to the judgment of a competent human being.
5. Professionals engaged in the creation, practice, interpretation, and enforcement of the law, such as lawyers, judges, and law enforcement officers, should recognize the specialized scientific and professional expertise required for the ethical and effective application of A/IS to their professional duties. The professional associations to which such legal practitioners belong, such as the American Bar Association, should, through both educational programs and professional codes of ethics, seek to ensure that their members are well informed about the scientific and technical competency requirements for the effective and trustworthy application of A/IS to the law.[89]

6. The operators of A/IS applied in legal systems, whether the operator is a specialist in A/IS or a legal professional, should understand the competencies required for the effective performance of their roles and should either acquire those competencies or identify individuals with those competencies who can support them in the performance of their roles. The operator does not need to be an expert in all the pertinent domains but should have access to individuals with the requisite expertise.

7. Recommendation 1 under Issue 2, with respect to competence.

8. Recommendation 2 under Issue 2, with respect to competence.
Further Resources

C. Garvie, A. M. Bedoya, and J. Frankle, "The Perpetual Line-Up: Unregulated Police Face Recognition in America," Georgetown Law, Center on Privacy & Technology, Oct. 2016.

International Organization for Standardization, ISO/IEC 27050-3: Information technology - Security techniques - Electronic discovery - Part 3: Code of practice for electronic discovery, Geneva, 2017.

J. A. Kroll, "The fallacy of inscrutability," Philosophical Transactions of the Royal Society A: Mathematical, Physical, and Engineering Sciences, vol. 376, no. 2133, Oct. 2018.

A. G. Ferguson, "Policing Predictive Policing," Washington University Law Review, vol. 94, no. 5, 2017.

"Global Governance of AI Roundtable: Summary Report 2018," World Government Summit, 2018.
Issue 5: Accountability

How can the ability to apportion responsibility for the outcome of the application of A/IS foster informed trust in the suitability of A/IS for adoption in legal systems?

Background

Apportioning responsibility. An essential component of informed trust in a technological system is confidence that it is possible, if the need arises, to apportion responsibility among the human agents engaged along the path of its creation and application: from design through to development, procurement, deployment,[90] operation, and, finally, validation of effectiveness. Unless there are mechanisms to hold the agents engaged in these steps accountable, it will be difficult or impossible to assess responsibility for the outcome of the system under any framework, whether a formal legal framework or a less formal normative framework. A model of A/IS creation and use that does not have such mechanisms will also lack important forms of deterrence against poorly thought-out design, casual adoption, and inappropriate use of A/IS.

Simply put, a system that produces outcomes for which no one is responsible cannot be trusted. Those engaged in creating, procuring, deploying, and operating such a system will lack the discipline engendered by the clear assignment of responsibility. Meanwhile, those affected by the results of the system's operation will find their questions around a given result inadequately answered, and errors generated by the system will go uncorrected. In the case of A/IS applied in a legal system, where an individual's basic human rights may be at issue, these questions and errors are of fundamental importance. In such circumstances, the only options are either blind trust or blind distrust. Neither of those options is satisfactory, especially in the case of a technological system applied in a domain as fundamental to the social order as the law.
Challenges to accountability

In the case of A/IS, whether applied in a legal system or another domain, maintaining accountability can be a particularly steep challenge, stemming from both the perceived "black box" nature of A/IS and the diffusion of responsibility they bring.

The perception of A/IS as a black box stems from the opacity that is an inevitable characteristic of a system that is a complex nexus of algorithms, computer code, and input data. As observed by Joshua New and Daniel Castro of the Information Technology and Innovation Foundation:

The most common criticism of algorithmic decision-making is that it is a "black box" of extraordinarily complex underlying decision models involving millions of data points and thousands of lines of code. Moreover, the model can change over time, particularly when using machine learning algorithms that adjust the model as the algorithm encounters new data.[91]

This opacity of the systems makes it challenging to trace cause to effect,[92] which, in turn, makes it difficult, or even impossible, to draw lines of responsibility.

The diffuseness challenge stems from the fact that even the most seemingly straightforward A/IS can be complex, with a wide range of agents (systems designers, engineers, data analysts, quality control specialists, operators, and others) involved in design, development, and deployment. Moreover, some of these agents may not even have been engaged in the development of the A/IS in question; they may have, for example, developed open-source components that were intended for an entirely different purpose but that were subsequently incorporated into the A/IS. This diffuseness of responsibility poses a challenge to the maintenance of accountability.[93] As Matthew Scherer, a frequent writer and speaker on topics at the intersection of law and A/IS, observes:

The sheer number of individuals and firms that may participate in the design, modification, and incorporation of an AI system's components will make it difficult to identify the most responsible party or parties. Some components may have been designed years before the AI project had even been conceived, and the components' designers may never have envisioned, much less intended, that their designs would be incorporated into any AI system, still less the specific AI system that caused harm. In such circumstances, it may seem unfair to assign blame to the designer of a component whose work was far-removed in both time and geographic location from the completion and operation of the AI system.[94]
Examples include the following:

When a judge's ruling includes a long prison sentence, based in part on a flawed A/IS-enabled process that erroneously deemed a particular person to be at high risk of recidivism, who is responsible for the error? Is it the A/IS designer, the person who chose the data or weighed the inputs, the prosecution team who developed and delivered the risk profile to the court, or the judge who did not have the competence to ask the appropriate questions that would have enabled a clearer understanding of the limitations of the system? Or is responsibility somehow distributed among these various agents?[95]

When a lawyer engaged in civil or criminal discovery believes, erroneously, that all the relevant information was found when using A/IS in a data-intensive matter, who is responsible for the failure to gather important facts? The A/IS designer who typically would have had no ability to foretell the specific circumstances of a given matter, the legal or IT professional who operated the A/IS or erroneously measured its effectiveness, or the lawyer who made a representation to his or her client, to the court, or to investigatory agencies?

When a law enforcement officer, relying on A/IS, erroneously identifies an individual as being more likely to commit a crime than another, who is responsible for the resulting encroachment on the civil rights of the person erroneously targeted? Is it the A/IS designer, the individual who selected the data on which to train the algorithm, the individual who chose how the effectiveness of the A/IS would be measured,[96] the experts who provided training to the officer, or the officer himself or herself?

As a result of the challenges presented by the opacity and diffuseness of responsibility in A/IS, the present-day answer to the question, "Who is accountable?" is, in far too many instances, "It's hard to say." This is a response that, in practice, means "no one" or, equally unhelpful, "everyone". Such failure to maintain accountability will undermine efforts to bring A/IS (and all their potential benefits) into legal systems based on informed trust.
Maintaining accountability and trust in A/IS

Although maintaining accountability in complex systems can be a challenge, it is one that must be met in order to engender informed trust in the use of A/IS in the legal domain. "Blaming the algorithm" is not a substitute for taking on the challenge of maintaining transparent lines of responsibility and establishing norms of accountability.[97] This is true even if we allow that, given the complexity of the systems in question, some number of "systems accidents" is inevitable.[98] Informed trust in a system does not require a belief that zero errors will occur; however, it does require a belief that there are mechanisms in place for addressing errors when they do occur. Accountability is an essential component of those mechanisms.

In meeting the challenge, it should be recognized that there are existing norms and controls that have a role to play in ensuring that accountability is maintained. For example, contractual arrangements between the A/IS provider and a party acquiring and applying a system may help to specify who is (and is not) to be held liable in the event the system produces undesirable results. Professional codes of ethics may also go some way toward specifying the extent to which lawyers, for example, are responsible for the results generated by the technologies they use, whether they operate them directly or retain someone else to do so. Judicial systems may have procedures for assessing responsibility when a citizen's rights are improperly infringed. As illustrated by the cases described above, however, existing norms and controls, while helpful, are insufficient in themselves to meet the specific challenge represented by the opacity and diffuseness of A/IS. To meet the challenge, further steps must be taken.[99]

The first step is ensuring that all those engaged in the creation, procurement, deployment, operation, and testing of A/IS recognize that, if accountability is not maintained, these systems will not be trusted. In the interest of maintaining accountability, these stakeholders should take steps to clarify lines of responsibility throughout this continuum, and make those lines of responsibility, when appropriate, accessible to meaningful inquiry and audit.

The goal of clarifying lines of responsibility in the operation of A/IS is to implement a governing model that specifies who is responsible for what, and who has recourse to which corrective actions, i.e., a trustworthy model that will admit actionable answers should questions of accountability arise. Arriving at an effective model will require the participation of those engaged in the creation and operation of A/IS, those affected by the results of their use, and those with the expertise to understand how such a model would be used in a given legal system. For example:
Individuals responsible for the design of A/IS will have to maintain a transparent record of the sources of the various components of their systems, including identification of which components were developed in-house and which were acquired from outside sources, whether open source or acquired from another firm.

Individuals responsible for the design of A/IS will have to specify the roles, responsibilities, and potential subsequent liabilities of those who will be engaged in the operation of the systems they create.

Individuals responsible for the operation of a system will have to understand their roles, responsibilities, and potential liabilities, and will have to maintain documentation of their adherence to requirements.

Individuals affected by the results of the operation of A/IS, e.g., a defendant in a criminal proceeding, will have to be given access to information about the roles and responsibilities of those involved in relevant aspects of the creation, operation, and validation of the effectiveness of the A/IS affecting them.[100]

Individuals with legal and political training (e.g., jurists, regulators, as well as legal and political scholars) will have to ensure that any model that is created will provide information that is in fact actionable within the operative legal system.
A governing model of accountability that reflects the interests of all these stakeholders will be more effective both at deterring irresponsible design or use of A/IS before it happens and at apportioning responsibility for an undesirable outcome when it does happen.[101]

Pulling together the input from the various stakeholders will likely not take place without some amount of institutional initiative. Organizations that employ A/IS for accomplishing legal tasks (private firms, regulatory agencies, law enforcement agencies, judicial institutions) should therefore develop and implement policies that will advance the goal of clarifying lines of responsibility. Such policies could take the form of, for example, designating an official specifically charged with oversight of the organization's procurement, deployment, and evaluation of A/IS, as well as the organization's efforts to educate people both inside and outside the organization on its use of A/IS. Such policies might also include the establishment of a review board to assess the organization's use of A/IS and to ensure that lines of responsibility for the outcomes of its use are maintained. In the case of agencies, such as police departments, whose use of A/IS could impact the general public, such review boards would, in the interest of legitimacy, have to include participation from various citizens' groups, such as those representing defendants in the criminal system as well as those representing victims of crime.[102]

The goal of opening lines of responsibility to meaningful inquiry is to ensure that an investigation into the use of A/IS will be able to isolate responsibility for errors (or potential errors) generated by the systems and their operation.[103] This means that all those engaged in the design, development, procurement, deployment, operation, and validation of the effectiveness of A/IS, as well as the organizations that employ them, must in good faith be willing to participate in an audit, whether the audit is a formal legal investigation or a less formal inquiry. They must also be willing to create and preserve documentation of key procedures, decisions, certifications,[104] and tests made in the course of developing and deploying the A/IS.[105]
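One minimal form such documentation could take is a structured audit log that records, for each stage of the A/IS life cycle, who was responsible, what was done, and what evidence supports it. The schema below is a hypothetical sketch; the field names, parties, and policy references are invented for illustration.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One entry in an A/IS accountability log (hypothetical schema)."""
    stage: str               # e.g., "design", "procurement", "deployment", "operation"
    responsible_party: str   # the human agent or organizational unit accountable
    action: str              # what was done at this stage
    evidence: str            # pointer to test results, certification, sign-off, etc.
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

log: list[AuditRecord] = []
log.append(AuditRecord("deployment", "Agency X, CTO office",
                       "approved risk-assessment tool for pilot use",
                       "validation-report-2019-03"))
log.append(AuditRecord("operation", "Officer J. Doe",
                       "reviewed system output before action, per policy 4.2",
                       "case-file-1138"))

# An auditor can then reconstruct who did what, and when, at each stage.
for entry in log:
    print(asdict(entry))
```

A record of this shape, preserved across design, procurement, deployment, operation, and validation, is what makes a later inquiry able to isolate responsibility rather than conclude "it's hard to say".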
The combination of a governing model of accountability and an openness to meaningful audit will allow the maintenance of accountability, even in complex deployments of A/IS in the service of a legal system.

Additional note 1. The principle of accountability is closely linked with each of the other principles intended to foster informed trust in A/IS: effectiveness, competence, and transparency. With respect to effectiveness, evidence of attaining key metrics and benchmarks to confirm that A/IS are functioning as intended may put questions of where, among creators, owners, and operators, responsibility for the outcome of a system lies on a sound empirical footing. With respect to competence, operator credentialing and specified system handoffs enable a clear chain of responsibility in the deployment of A/IS.[106] With respect to transparency, providing a view into the general design and methods of A/IS, or even a specific explanation for a given outcome, can help to advance accountability.

Additional note 2. Closely related to accountability is the trust that follows from knowing that a human expert is guiding the A/IS and is capable of overriding them, if necessary. Subjecting humans to automated decisions not only raises legal and ethical concerns, both from a data protection[107] and fundamental rights perspective,[108] but also will likely be viewed with distrust if the human component, which can introduce circumstantial flexibility in the interest of realizing an ethically superior outcome, is missing. In addition to ensuring the technical safety and reliability of A/IS used in the course of decision-making processes, the legal system should also, where appropriate, provide for the possibility of an appeal for review by a human judge. Careful attention must be paid to the design of corresponding appeal procedures.[109]
Illustration: Accountability

Over the last two decades, criminal justice agencies have increasingly embraced predictive tools to assist in determinations for bail, sentencing, and parole. A mix of companies, government agencies, nonprofits, and universities have built and promoted tools that estimate the likelihood that someone may fail to appear or may commit a new crime or a new violent act. While math has played a role in these determinations since at least the 1920s,[110] a new interest in accountability and transparency has brought novel legal challenges to these tools.

In 2013, Eric Loomis was arrested for a drive-by shooting in La Crosse, Wisconsin. No one was hit, but Loomis faced prison time. Loomis denied involvement in the shooting, but waived his right to trial and entered a guilty plea to two of the less severe offenses with which he was charged: attempting to flee a traffic officer and operating a motor vehicle without the owner's consent. The judge sentenced him to six years in prison, saying he was "high risk". The judge based this conclusion, in part, on the risk assessment score given by COMPAS, a secret and privately held algorithmic tool used routinely by the Wisconsin Department of Corrections.

On appeal, Loomis made three major arguments, two focused on accountability.[111] First, the tool's proprietary nature (the underlying code was not made available to the defense) made it impossible to test its scientific validity. Second, the tool inappropriately considered gender in making its determination.

A unanimous Wisconsin Supreme Court ruled against Loomis on both arguments. The court reasoned that knowing the inputs and output of the tool, and having access to validating studies of the tool's accuracy, were sufficient to prevent infringement of Loomis' due process.[112] Regarding the use of gender, a protected class in the United States, the court said Loomis did not show that there was a reliance on gender in making the output or sentencing decision. Yet without the ability to interrogate the tool and know how gender is used, the court created a paradox with its opinion.

The Loomis decision represents the challenges judges face in balancing accountability of "black boxed" A/IS against trade secret protections.[113] Other decisions have sided against accountability of other risk assessments,[114] probabilistic DNA analysis tools,[115] and government remote hacking investigation software.[116] Siding with accountability, a federal judge found that the underlying code of a probability software used in DNA comparisons was admissible and relevant to a pretrial hearing where the admissibility of expert testimony is challenged.[117]

These issues will continue to be litigated as A/IS tools continue to proliferate in judicial systems. To that end, as the Loomis court notes, "The justice system must keep up with the research and continuously assess the use of these tools."
Recommendations

1. Creators of A/IS to be applied in a legal system should articulate and document well-defined lines of responsibility, among all those who would be engaged in the development and operation of the A/IS, for the outcome of the A/IS.

2. Those engaged in the adoption and operation of A/IS to be applied in a legal system should understand their specific responsibilities for the outcome of the A/IS as well as their potential liability should the A/IS produce an outcome other than that intended. In the case of A/IS, many questions of legal liability remain unsettled. Adopters and operators of A/IS should nevertheless understand to what extent they could, potentially, be held liable for an undesirable outcome.

3. When negotiating contracts for the provision of A/IS products and services for use in the legal system, providers and buyers of A/IS should include contractual terms specifying clear lines of responsibility for the outcomes of the systems being acquired.

4. Creators and operators of A/IS applied in a legal system, and the organizations that employ them, should be amenable to internal oversight mechanisms and inquiries (or audits) that have the objective of allocating responsibility for the outcomes generated by the A/IS. In the case of A/IS adopted and deployed by organizations that have direct public interaction (e.g., a law enforcement agency), oversight and inquiry could also be conducted by external review boards. Being prepared for such inquiries means maintaining clear documentation of all salient procedures followed, decisions made, and tests conducted in the course of developing and applying the A/IS.

5. Organizations engaged in the development and operation of A/IS for legal tasks should consider mechanisms that will create individual and collective incentives for ensuring both that the outcomes of the A/IS adhere to ethical standards and that accountability for those outcomes is maintained, e.g., mechanisms to ensure that speed and efficiency are not rewarded at the expense of a loss of accountability.

6. Those conducting inquiries to determine responsibility for the outcomes of A/IS applied in a legal system should take into consideration all human agents involved in the design, development, procurement, deployment, operation, and validation of effectiveness of the A/IS and should assign responsibility accordingly.

7. Recommendation 1 under Issue 2, with respect to accountability.

8. Recommendation 2 under Issue 2, with respect to accountability.
Further Resources 延伸阅读资源
N. Diakopoulos, S. Friedler, M. Arenas, S. Barocas, M. Hay, B. Howe, H. V. Jagadish, K. Unsworth, A. Sahuguet, S. Venkatasubramanian, C. Wilson, C. Yu, and B. Zevenbergen, “Principles for Accountable Algorithms and a Social Impact Statement for Algorithms,” FAT/ML. ...
F. Doshi-Velez, M. Kortz, R. Budish, C. Bavitz, S. J. Gershman, D. O’Brien, S. Shieber, J. Waldo, D. Weinberger, and A. Wood, “Accountability of AI Under the Law: The Role of Explanation,” Berkman Center Research Publication Forthcoming; Harvard Public Law Working Paper, no. 18-07, Nov. 3, 2017. ...
European Commission for the Efficiency of Justice. European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment. Strasbourg, 2018. ...
J. A. Kroll, J. Huey, S. Barocas, E. W. Felten, J. R. Reidenberg, D. G. Robinson, and H. Yu, “Accountable Algorithms,” University of Pennsylvania Law Review, vol. 165, pp. 633-705, Feb. 2017.
J. New and D. Castro, “How Policymakers Can Foster Algorithmic Accountability,” Information Technology and Innovation Foundation, May 21, 2018. ...
M. U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Harvard Journal of Law & Technology, vol. 29, no. 2, pp. 369-373, 2016.
J. Tashea, “Calculating Crime: Attorneys are Challenging the Use of Algorithms to Help Determine Bail, Sentencing and Parole,” ABA Journal, March 2017. ...
Issue 6: Transparency ...
How can sharing information that explains how A/IS reached given decisions or outcomes foster informed trust in the suitability of A/IS for adoption in legal systems? ...
Background
Access to meaningful information. ...
An essential component of informed trust in a technological system is confidence that the information required for a human to understand why the system behaves a certain way in a specific circumstance (or would behave in a hypothetical circumstance) will be accessible. Without transparency, there is no basis for trusting that a given decision or outcome of the system can be explained, replicated, or, if necessary, corrected.[118] Without transparency, there is no basis for informed trust that the system can be operated in a way that achieves its ends reliably and consistently or that the system will not be used in a way that impinges on human rights. In the case of A/IS applied in a legal system, such a lack of trust could undermine the credibility of the legal system itself.
Transparency and trust ...
Transparency, by prioritizing access to information about the operation and effectiveness of A/IS, serves the purpose of fostering informed trust in the systems. More specifically, transparency fosters trust that:

- the operation of A/IS and the results they produce are explainable;
- the operation and results of A/IS are fair;[119]
- the operation and results of A/IS are unbiased;
- the A/IS meet normative standards for operation and results;
- the A/IS are effective;
- the results of A/IS are replicable;[120] and
- those engaged in the design, development, procurement, deployment, operation, and validation of the effectiveness of A/IS can be held accountable, where appropriate, for negative outcomes, and that corrective or punitive action can be taken when warranted.

For A/IS used in a legal system to achieve their intended purposes, all those with a stake in the effective functioning of the legal system must have a well-grounded trust that the A/IS can meet these requirements. This trust can be fostered by transparency.
The elements of transparency ...
Transparency of A/IS in legal matters requires disclosing information about the design and operation of the A/IS to various stakeholders. In implementing the principle, however, we must, in the interest of both feasibility and effectiveness, be more precise both about the categories of stakeholders to whom the information will be disclosed, and about the categories of information that will be disclosed to those stakeholders. ...
Relevant stakeholders in a legal system include those who:

- operate A/IS for the purpose of carrying out tasks in civil justice, criminal justice, and law enforcement, such as a law enforcement officer who uses facial recognition tools to identify potential suspects;
- rely on the results of A/IS to make important decisions, such as a judge who draws on the results of an algorithmic assessment of recidivism risk in deciding on a sentence;
- are directly affected by the use of A/IS (a “decision subject”), such as a defendant in a criminal proceeding whose bail terms are influenced by an algorithmic assessment of flight risk;
- are indirectly affected by the results of A/IS, such as the members of a community that receives more or less police attention because of the results of predictive policing technology; and
- have an interest in the effective functioning of the legal system, such as judges, lawyers, and the general public.
Different types of relevant information can be grouped into high-level categories. As illustrated below, a taxonomy of such high-level categories may, for example, distinguish between:

- nontechnical procedural information regarding the employment and development of a given application of A/IS;
- information regarding data involved in the development, training, and operation of the system;
- information concerning a system’s effectiveness/performance;
- information about the formal models that the system relies on; and
- information that serves to explain a system’s general logic or specific outputs.
These more granular distinctions matter because different sorts of inquiries will require different sorts of information, and it is important to match the information provided to the actual needs of the inquiry. For example, an inquiry into a predictive policing system that misdirected police resources may not be much advanced by information about the formal models on which the system relied, but it may well be advanced by an explanation for the specific outcome. ...
On the other hand, an inquiry, undertaken by a designer or operator, into ways to improve system performance may benefit from access to information about the formal models on which the system relies.[121]
These distinctions also matter because there may be circumstances in which it would be desirable to limit access to a given type of information to certain stakeholders. For example, there may be circumstances in which one would want to identify an agent to serve as a public interest steward. For auditing purposes, this individual would have access to certain types of sensitive information unavailable to others. Such restrictions on information access are necessary if the transparency principle is not to impinge on other societal values and goals, such as security, privacy, and appropriate protection of intellectual property.[122]
The salience of the question, “Who is given access to what information?” is illustrated by Sentiment Meter, a technology developed by Elucd, a GovTech company that provides cities with near real-time understanding of how citizens feel about their government, in conjunction with the New York Police Department, to assist the NYPD in gauging citizens’ views regarding police activity in their communities.[123] One of the stated goals of the program is to build public trust in the police department. In the interest of trust, should “the public” have access to all potentially relevant information, including how the system was designed and developed, what the input data are, who operates the system and what their qualifications are, how the system’s effectiveness was tested, and why the public was not brought into the process of construction? If the answer is that the general public should not have access to all this information, then who should? How do we define “the public”? Is it the whole community represented in its elected officials? Or should certain communities have greater access, for example, those most affected by controversial police practices such as stop, question, and frisk? Such questions must be answered if the program is to achieve its stated goals.
Transparency in practice ...
As just noted, although transparency can foster informed trust in A/IS applied in a legal system, its practical implementation requires careful thought. Requiring public access to all information pertaining to the operation and results of A/IS is neither necessary nor feasible. What is required is a careful consideration of who needs access to what information for the specific purpose of building informed trust. The following table is an example of a tool that might be used to match type of information to type of information consumer for the purpose of fostering trust.[124]
The table matches the types of information that should be considered in determining transparency demands in relation to a given A/IS (rows) with the stakeholders whose interest in access to each type of information should be considered (columns).

| High-level category | Specific type of information (examples): disclosure of... | Operators | Decision subjects | Public interest steward | General public |
| :--- | :--- | :--- | :--- | :--- | :--- |
| Procedural aspects regarding A/IS employment and development | the fact that a given context involves the employment of A/IS | N/A | ? | ? | ? |
| | how the employment of the system was authorized | ? | ? | ? | ? |
| | who developed the system | ? | ? | ? | ? |
| | ... | | | | |
| Data involved in A/IS development and operation | the origins of training data and data involved in the operation of the system | ? | ? | ? | ? |
| | the kinds of quality checks that data was subject to and their results | ? | ? | ? | ? |
| | how data labels are defined and to what extent data involves proxy variables | ? | ? | ? | ? |
| | relevant data sets themselves | ? | ? | ? | ? |
| | ... | | | | |
| Effectiveness/performance | the kinds of effectiveness/performance measurement that have occurred | ? | ? | ? | ? |
| | measurement results | ? | ? | ? | ? |
| | any independent auditing or certification | ? | ? | ? | ? |
| | ... | | | | |
| Model specification | the input variables involved | ? | ? | ? | ? |
| | the variable(s) that the model optimizes for | ? | ? | ? | ? |
| | the complete model (complete formal representation, source code, etc.) | ? | ? | ? | ? |
| | ... | | | | |
| Explanation | information concerning the system's general logic or functioning | ? | ? | ? | ? |
| | information concerning the determinants of a particular output[125] | ? | ? | ? | ? |
| | ... | | | | |
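A matrix of this kind can be made operational as an explicit, auditable access policy. The sketch below is a hypothetical illustration only: the stakeholder roles follow the table, but the information-type names and the specific access grants are invented assumptions, not prescriptions from this document.

```python
# Hypothetical sketch: encoding a transparency matrix as an explicit,
# default-deny access policy. All grants below are illustrative assumptions;
# a real policy would be filled in per system and per jurisdiction.
from enum import Enum

class Stakeholder(Enum):
    OPERATOR = "operator"
    DECISION_SUBJECT = "decision_subject"
    PUBLIC_STEWARD = "public_interest_steward"
    GENERAL_PUBLIC = "general_public"

# Maps each information type to the set of stakeholders granted access.
ACCESS_MATRIX = {
    "system_in_use": {Stakeholder.OPERATOR, Stakeholder.DECISION_SUBJECT,
                      Stakeholder.PUBLIC_STEWARD, Stakeholder.GENERAL_PUBLIC},
    "training_data_origin": {Stakeholder.OPERATOR, Stakeholder.PUBLIC_STEWARD},
    "measurement_results": {Stakeholder.OPERATOR, Stakeholder.PUBLIC_STEWARD,
                            Stakeholder.GENERAL_PUBLIC},
    # Sensitive: only the public interest steward sees the complete model.
    "complete_model": {Stakeholder.PUBLIC_STEWARD},
}

def may_access(stakeholder: Stakeholder, info_type: str) -> bool:
    """Default-deny: unknown information types are disclosed to no one."""
    return stakeholder in ACCESS_MATRIX.get(info_type, set())
```

One design benefit of writing the matrix down as data is that it can itself be disclosed and reviewed, making the "who sees what" decisions part of the system's documented record.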
Law 法律
When it comes to deciding whether a specific type of information should be made available and, if so, which types of stakeholders should have access to it, there are various considerations, for example: ...
The release of certain types of information may conflict with data privacy concerns; with commercial or public policy interests, such as the promotion of innovation through appropriate intellectual property protections; and with security interests, e.g., concerns about gaming and adversarial attacks. At the same time, such competing interests should not be permitted to be used, without specific justification, as a blanket cover for not adhering to due process, transparency, or accountability standards. The tension between these interests is particularly acute in the case of A/IS applied in a legal system, where the dignity, security, and liberty of individuals are at stake.[126]
There is tension between the specific goal of explainability, which may argue for limits on system complexity, and system performance, which may be served by greater complexity, to the detriment of explainability.[127]
One must carefully consider the question that is being asked in an inquiry into A/IS and what information transparency can actually produce to answer that question. Disclosure of A/IS algorithms or training data is, itself, insufficient to enable an auditor to determine whether the system was effective in a specific circumstance.[128] By analogy, transparency into drug manufacturing processes does not, itself, provide information about the actual effectiveness of a drug. Clinical trials provide that insight. In a legal system, an excessive focus on transparency-related information-gathering and assessment may overwhelm courts, legal practitioners, and law enforcement agencies. Meanwhile, other factors, such as measurement of effectiveness or operator competence, coupled with information on training data, may often suffice to ensure that there is a well-informed basis for trusting A/IS in a given circumstance.[129]
Given these competing considerations, arriving at a balance that is optimal for the functioning of a legal system and that has legitimacy in the eyes of the public will require an inclusive dialogue, bringing together the perspectives of those with an immediate stake in the proper functioning of a given technology, including those engaged in the design, development, procurement, deployment, operation, and validation of effectiveness of the technology, as well as those directly affected by the results of the technology; the perspectives of communities that may be indirectly impacted by the technology; and the perspectives of those with specialized expertise in ethics, government, and the law, such as jurists, regulators, and scholars. How the competing considerations should be balanced will also vary from one circumstance to another. Rather than aiming for universal transparency standards that would be applicable to all uses of A/IS within a legal system, transparency standards should allow for circumstance-dependent flexibility, in the context of the four constitutive components of trust discussed in this section.
Additional note 1. The goals of transparency, e.g., answering a question as to why A/IS reached a given decision, may, in some cases, be better served by modes of explanation that do not involve examining an algorithm’s terms or opening the “black box”. A counterfactual explanation taking the form of, for example, “You were denied a loan because your annual income was £30,000; if your income had been £45,000, you would have been offered a loan,” may provide more insight sooner than the disclosure of an algorithm.[130]
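The counterfactual style of explanation described above can be sketched against a toy decision rule. Everything in this sketch is an invented assumption for illustration (the rule, the threshold, and the search granularity); it is not an account of any real lending system.

```python
# Hypothetical sketch of a counterfactual explanation: find the smallest
# change to one input that flips a (toy) black-box decision.
from typing import Optional

def loan_decision(income: float, debt: float) -> bool:
    """Invented toy rule: approve when income minus debt exceeds 40,000."""
    return income - debt > 40_000

def counterfactual_income(income: float, debt: float,
                          step: float = 1_000,
                          limit: float = 1_000_000) -> Optional[float]:
    """Smallest income (at the given step granularity) that would flip a
    denial into an approval, holding debt fixed; None if already approved
    or if no income below `limit` flips the decision."""
    if loan_decision(income, debt):
        return None  # already approved; no counterfactual needed
    candidate = income
    while candidate <= limit:
        candidate += step
        if loan_decision(candidate, debt):
            return candidate
    return None

# "You were denied because your income was 30,000; with an income of
# 46,000 (debt fixed at 5,000) you would have been approved."
needed = counterfactual_income(income=30_000, debt=5_000)
```

The point of the sketch is that such an explanation can be generated by querying the decision procedure alone, without disclosing its internal terms, which is the insight behind counterfactual approaches of the kind cited in the further resources.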
Additional note 2. The transparency principle intersects with other principles focused on fostering trust. More specifically, we note the following: ...
- Transparency and effectiveness. ...
Information about the measurement of effectiveness can foster trust only if it is disclosed, i.e., only if there is transparency pertaining to the procedures and results of a measurement exercise. ...
Transparency and competence. ...
Transparency is essential in ensuring that the competencies required by the human operators of A/IS are known and met. At the same time, questions addressed by transparency extend beyond competence, while the questions addressed by competence extend beyond those answered by transparency. ...
Transparency and accountability. ...
Transparency is essential in determining accountability, but transparency serves purposes beyond accountability, while accountability seeks to answer questions not addressed directly by transparency. ...
Illustration: Transparency
In 2004, the city of Memphis, Tennessee, was experiencing an increase in crime rates that exceeded the national average. In response, in 2005, the city piloted a predictive policing program known as Blue CRUSH (Crime Reduction Utilizing Statistical History).[131] Blue CRUSH, developed in conjunction with the University of Memphis,[132] utilizes IBM’s SPSS predictive analytics software to identify “hot spots”: locations and times in which a given type of crime has a greater than average likelihood of occurring. The system generates its results through the analysis of a range of both historical data (type of crime, location, time of day, day of week, characteristics of victim, etc.) and live data provided by units on patrol. Equipped with the predictive crime map generated by the system, the Memphis Police Department can allocate resources dynamically to preempt or interrupt the target criminal activity. The precise response the department takes will vary with circumstance: deployment of a visible patrol car, deployment of an unmarked observer car, increasing vehicle stops in the area, undercover infiltration of the location, and so on.
The pilot program of Blue CRUSH focused on gang-related gun violence, which had been on the rise in Memphis prior to the pilot. The program showed an improvement, relative to incumbent methods, in the interdiction of such violence. Based on the success of the pilot, the scope of the program was expanded, in 2007, for use throughout the city. By 2013, the policing efforts enabled by Blue CRUSH had helped to reduce overall crime in the city by over 30% and violent crime by 20%.[133] The program also enabled a dramatic increase in the rate at which crimes were solved: for cases handled by the department’s Felony Assault Unit, the percentage of cases solved increased from 16% to nearly 70%.[134] And the program was cost effective: an analysis by Nucleus Research found that the program, when compared to the resources required to achieve the same results by traditional means, realized an annual benefit of approximately $7.2 million at a cost of just under $400,000.[135]
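The hot-spot idea behind a system like Blue CRUSH can be illustrated in miniature. Blue CRUSH itself is built on IBM's SPSS predictive analytics; the grid-count baseline below is only a hypothetical sketch of the general approach, with invented data and an invented flagging threshold, not the deployed model.

```python
# Hypothetical sketch of hot-spot identification from historical incidents.
# An incident is a (grid_cell, hour_of_day) pair; we flag cell/time-band
# combinations whose counts stand well above the average. Illustrative only.
from collections import Counter

def hot_spots(incidents, threshold_factor=2.0):
    """Return (grid_cell, six_hour_band) keys whose incident count exceeds
    threshold_factor times the mean count over all observed keys."""
    counts = Counter((cell, hour // 6) for cell, hour in incidents)
    mean = sum(counts.values()) / len(counts)
    return {key for key, n in counts.items() if n > threshold_factor * mean}

# Invented history: cell "A1" accumulates many incidents around 20:00,
# i.e., in the 18:00-24:00 band (band index 3).
history = [("A1", 20)] * 9 + [("B2", 3), ("C7", 10), ("D4", 15)]
spots = hot_spots(history)
```

A real system adds richer features (crime type, day of week, victim characteristics, live patrol data), but the output has the same shape: a map of where and when extra attention is predicted to pay off.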
The story of the deployment of Blue CRUSH in the metropolitan Memphis area is not just about the technology; it is equally about the police personnel utilizing the technology and about the communities in which the technology was deployed. As noted by former Memphis Police Department Director Larry Godwin: “You can have all the technology in the world but you’ve got to have leadership, you’ve got to have accountability, you’ve got to have boots on the streets for it to succeed.”[136] Crucial to the program’s success was public support. Blue CRUSH represents a variety of predictive policing technology that limits itself to identifying the “where”, the “when”, and the “what” of criminal activity; it does not attempt to identify the “who”, and it therefore avoids a number of the privacy questions raised by technologies that do attempt to identify individual perpetrators. The technology will still, however, prompt responses by the police that could include more intrusive police activity in identified hot spots. The public must be willing to accept that activity, and that acceptance is won by transparency. To that end, Godwin and Janikowski held more than 200 community and neighborhood watch meetings to inform the public about the technology and how it would be used in policing their communities.[137] Without that level of transparency, it is doubtful that Blue CRUSH would have had the public support needed for its successful deployment.
Holding community meetings is an important step in building trust in a predictive policing program. As such programs become more widely implemented, however, and become more widely studied, trust may require more than town-hall meetings. Research into the programs has raised serious concerns about the ways in which they are implemented and their potential for perpetuating or even exacerbating historical bias.[138] Addressing these concerns will require more sophisticated and intrusive oversight than can be realized through community meetings.
Included among the questions that must be addressed are the following:

- In identifying hot spots, does the program rely primarily on arrest rates, which reflect (potentially biased) police activity, or does it rely on actual crime rates?
- What are the specific criteria for identifying a hot spot, and are those criteria free of bias?[139]
- How accessible are the input data used to identify hot spots? Are they open to analysis by an independent expert?
- What mechanisms for oversight, review, and remediation of the program have been put in place? Such oversight should have access to the data used to train the system, the models used to identify hot spots, tests of the effectiveness of the system, and steps taken to remediate errors (such as bias) when they are uncovered.
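The first of these questions, arrest rates versus actual crime rates, suggests a simple audit that an independent reviewer could run if both data sets were accessible. The sketch below uses invented figures and a deliberately crude disparity measure; it is a hypothetical illustration of the kind of check transparency makes possible, not an established auditing standard.

```python
# Hypothetical audit sketch: compare each area's share of arrests to its
# share of reported crimes. Ratios far from 1.0 flag areas where enforcement
# activity is out of proportion to underlying reports. Data are invented.

def enforcement_ratio(arrests, reports):
    """Per-area ratio of arrest share to reported-crime share.
    A ratio of 1.0 means arrests in an area track reports exactly."""
    total_arrests = sum(arrests.values())
    total_reports = sum(reports.values())
    return {
        area: (arrests.get(area, 0) / total_arrests)
              / (reports[area] / total_reports)
        for area in reports
    }

# Invented example: both areas generate equal reported crime, but "north"
# receives four times the arrests of "south".
arrests = {"north": 80, "south": 20}
reports = {"north": 50, "south": 50}
ratios = enforcement_ratio(arrests, reports)
```

A disparity surfaced this way is not proof of bias on its own, but it tells the auditor where to look next, which is precisely the role the document assigns to data-level transparency.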
As the public becomes more aware of the potential negative impact[140] of predictive policing programs, law enforcement agencies hoping to build trust in such programs will have to put in place transparency mechanisms that go beyond town-hall meetings and that enable a sophisticated response to such questions.
Recommendations
Governments and professional associations should facilitate dialogue among stakeholders (those engaged in the design, development, procurement, deployment, operation, and validation of effectiveness of the technology; those who may be immediately affected by the results of the technology; those who may be indirectly affected by the results of the technology, including the general public; and those with specialized expertise in ethics, politics, and the law) on the question of achieving a balance between transparency and other priorities, e.g., security, privacy, appropriate property rights, and efficient and uniform response by the legal system. In developing frameworks for achieving such balance, policymakers and professional associations should make allowance for circumstantial variation in how competing interests may be reconciled.
Policymakers developing frameworks for realizing transparency in A/IS applied to legal tasks should require that any frameworks they develop are sensitive both to the distinctions among the types of information that might be disclosed and to the distinctions among categories of individuals who may seek information about the design, operation, and results of a given system.
Policymakers developing frameworks for realizing transparency in A/IS to be adopted in a legal system should consider the role of appropriate protection for intellectual property, but should not allow those concerns to be used as a shield to prevent duly limited disclosure of information needed to ascertain whether A/IS meet acceptable standards of effectiveness, fairness, and safety. In developing such frameworks, policymakers should make allowance that the level of disclosure warranted will be, to some extent, dependent on what is at stake in a given circumstance. ...
Policymakers developing frameworks for realizing transparency in A/IS to be adopted in a legal system should consider the option of creating a role for a specially designated “public interest steward”, or “trusted third party”, who would be given access to sensitive information not accessible to others. Such a public interest steward would be charged with assessing the information to answer the public interest questions at hand but would be under obligation not to disclose the specifics of the information accessed in arriving at those answers. ...
Designers of A/IS should design their systems with a view to meeting transparency requirements, i.e., so as to enable some categories of information about the system and its performance to be disclosed while enabling other categories, such as intellectual property, to be protected.
When negotiating contracts for the provision of A/IS products and services for use in the legal system, providers and buyers of A/IS should include contractual terms specifying what categories of information will be accessible to what categories of individuals who may seek information about the design, operation, and results of the A/IS. ...
In developing frameworks for realizing transparency in A/IS to be adopted in a legal system, policymakers should recognize that the information provided by other types of inquiries, e.g., examination of evidence of effectiveness or of operator competence, may in certain circumstances provide a more efficient means to informed trust in the effectiveness, fairness, and safety of the A/IS in question. ...
Governments should, where appropriate, work together with A/IS developers, as well as other stakeholders in the effective functioning of the legal system, to facilitate the creation of error-sharing mechanisms to enable the more effective identification, isolation, and correction of flaws in broadly deployed A/IS in their legal systems, such as a systematic facial recognition error in policing applications or in risk assessment algorithms. In developing such mechanisms, the question of precisely what information gets shared with precisely which groups may vary from application to application. All government efforts in this regard should be transparent and open to public scrutiny. ...
Governments should provide whistleblower protections to individuals who volunteer to offer information in situations where A/IS are not designed as claimed or operated as intended, or when their results are not interpreted correctly. For example, if a law enforcement agency is using facial recognition technology for a purpose that is illegal or unethical, or in a manner other than that in which it is intended to be used, an individual reporting that misuse should be given protection against reprisal. All government efforts in this regard should be transparent and open to public scrutiny. ...
Recommendation 1 under Issue 2, with respect to transparency.
Recommendation 2 under Issue 2, with respect to transparency. ...
Further Resources
J. A. Kroll, J. Huey, S. Barocas, E. W. Felten, J. R. Reidenberg, D. G. Robinson, and H. Yu, “Accountable Algorithms,” University of Pennsylvania Law Review, vol. 165, Feb. 2017. ...
J. A. Kroll, “The fallacy of inscrutability,” Philosophical Transactions of the Royal Society A: Mathematical, Physical, and Engineering Sciences, vol. 376, no. 2133, Oct. 2018. ...
W. L. Perry, B. McInnis, C. C. Price, S. C. Smith, and J. S. Hollywood, “Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations,” The RAND Corporation, 2013. ...
A. D. Selbst and S. Barocas, “The Intuitive Appeal of Explainable Machines,” Fordham Law Review, vol. 87, no. 3, 2018. ...
S. Wachter, B. Mittelstadt, and L. Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation,” International Data Privacy Law, vol. 7, no. 2, pp. 76-99, June 2017. ...
S. Wachter, B. Mittelstadt, and C. Russell, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR,” Harvard Journal of Law & Technology, vol. 31, no. 2, 2018. ...
R. Wexler, “Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System,” Stanford Law Review, vol. 70, no. 5, pp. 1342-1429, 2017. ...
Section 2: Legal Status of A/IS ...
There has been much discussion about how to legally regulate A/IS-related technologies and the appropriate legal treatment of systems that deploy these technologies. Already, some lawmakers are wrestling with the issue of what status to apply to A/IS. Legal “personhood”, applied to humans and certain types of human organizations, is one possible option for framing such legal treatment, but granting that status to A/IS applications raises issues in multiple domains of human interaction.
Issue ...
What type of legal status (or other legal analytical framework) is appropriate for A/IS given (i) the legal issues raised by deployment of such technologies, and (ii) the desire to maximize the benefits of A/IS and minimize negative externalities? ...
Background
The convergence of A/IS and robotics technologies has led to the development of systems and devices that resemble human beings in terms of their autonomy, ability to perform intellectual tasks, and, in the case of some robots, their physical appearance. As some types of A/IS begin to display characteristics resembling those of human actors, some governmental entities and private commentators have concluded that it is time to examine how legal regimes should categorize and treat various types of A/IS, often with an eye toward according A/IS a legal status beyond that of mere property. These entities have posited questions such as whether the law should treat such systems as legal persons.[141]
While legal personhood is a multifaceted concept, the essential feature of “full” legal personhood is the ability to participate autonomously within the legal system by having the right to sue and the capacity to be sued in court.[142] This allows legal “persons” to enter legally binding agreements, take independent action to enforce their own rights, and be held responsible for violations of the rights of others.
Conferring such status on A/IS seems initially remarkable until consideration is given to the long-standing legal personhood status granted to corporations, governmental entities, and the like, none of which is itself human. Unlike these familiar legal entities, however, A/IS are not composed of, or necessarily controlled by, human beings. Recognizing A/IS as independent legal entities could therefore lead to abuses of that status, possibly by A/IS and certainly by the humans and legal entities who create or operate them, just as human shareholders and agents have abused the corporate form.[143] A/IS personhood is a significant departure from the legal traditions of both common law and civil law.[144]
Current legal frameworks provide a number of categories of legal status, other than full legal personhood, that could be used as analogues for the legal treatment of A/IS and for allocating legal responsibility for harm caused by A/IS. At one extreme, legal systems could treat A/IS as mere products, tools, or other forms of personal or intellectual property, and therefore subject to the applicable regimes of property law. Such treatment would have the benefit of simplifying allocation of responsibility for harm. It would, however, not account for the fact that A/IS, unlike other forms of property, may be capable of making legally significant decisions autonomously. In addition, if A/IS are to be treated as a form of property, governments and courts would have to establish rules regarding ownership, possession, and use by third parties. Other legal analogues may include the treatment of pets, livestock, wild animals, children, and prisoners, and the legal principles of agency, guardianship, and powers of attorney.[145] Or perhaps A/IS are something entirely without precedent, raising the question of whether one or more types of A/IS might be assigned a hybrid, intermediate, or novel type of legal status.
Clarifying the legal status of A/IS in one or more jurisdictions is essential to removing the uncertainty associated with the obligations and expectations for the organization and operation of these systems. Clarification along these lines will encourage more certain development and deployment of A/IS and will help clarify lines of legal responsibility and liability when A/IS cause harm. One of the problems of exploiting the existing status of legal personhood is that international treaties may bind multiple countries to follow the lead of a single legislature, as in the EU, making it impossible for a single country to experiment with the legal and economic consequences of such a strategy.
Recognizing A/IS as independent legal persons would limit or eliminate some human responsibility for subsequent decisions made by such A/IS. For example, under a theory of intervening causation, a hammer manufacturer is not held responsible when a burglar uses a hammer to break the window of a house. However, if similar "relief" from responsibility were available to the designers, developers, and users of A/IS, it would potentially reduce their incentives to ensure the safety of the A/IS they design and use. In this example, legal issues that are applied in similar chain-of-causation settings, such as foreseeability, complicity, reasonable care, strict liability for unreasonably dangerous goods, and other precedential notions, will factor into the design process. Different jurisdictions may reach different conclusions about the nature of such causation chains, inviting future creative legal planners to consider how and where to pursue design, development, and deployment of future A/IS in order to receive the most beneficial legal treatment.
The legal status of A/IS thus intertwines with broader legal questions regarding how to ensure accountability and how to assign and allocate liability when A/IS cause harm. The question of legal personhood for A/IS, in particular, also interacts with broader ethical and practical questions on the extent to which A/IS should be treated as moral agents independent from their human designers and operators, whether recognition of A/IS personhood would enhance or detract from the purposes for which humans created the A/IS in the first place, and whether A/IS personhood facilitates or debilitates the widespread benefits of A/IS.
Some assert that because A/IS are at a very early stage of development, it is premature to choose a particular legal status or presumption for the many forms and settings in which those systems are and will be deployed. However, thoughtfully establishing a legal status early in the development process could also provide crucial guidance to researchers, programmers, and developers. This uncertainty about legal status, coupled with the fact that multiple legal jurisdictions are already deploying A/IS, and each of them, as a sovereign entity, can regulate A/IS as it sees fit, suggests that there are multiple general frameworks that can and should be considered when assessing the legal status of A/IS.
Recommendations
1. While conferring full legal personhood on A/IS might bring some economic benefits, the technology has not yet developed to the point where it would be legally or morally appropriate to generally accord A/IS the rights and responsibilities inherent in the legal definition of personhood as it is now defined. Therefore, even absent the consideration of any negative ramifications from personhood status, it would be unwise to accord such status to A/IS at this time.
2. In determining what legal status to accord to A/IS, including whether to grant A/IS legal rights short of full legal personhood, government and industry stakeholders alike should: (1) identify the types of decisions and operations that should never be delegated to A/IS; and (2) determine what rules and standards will most effectively ensure human control over those decisions.
3. Governments and courts should review various potential legal models, including agency, animal law, and the other analogues discussed in this section, and assess whether they could serve as a proper basis for assigning and apportioning legal rights and responsibilities with respect to the deployment and use of A/IS.
4. In addition, governments should scrutinize existing laws, especially those governing business organizations, for mechanisms that could allow A/IS to have legal autonomy. If ambiguities or loopholes create a legal method for recognizing A/IS personhood, the government should review and, if appropriate, amend the pertinent laws.
5. Manufacturers and operators should learn how each jurisdiction would categorize a given autonomous and/or intelligent system and how each jurisdiction would treat harm caused by the system. Manufacturers and operators should be required to comply with the applicable laws of all jurisdictions in which that system could operate. In addition, manufacturers and operators should be aware of standards of performance and measurement promulgated by standards development organizations and agencies.
6. Stakeholders should be attentive to future developments that could warrant reconsideration of the legal status of A/IS. For example, if A/IS were developed that displayed self-awareness and consciousness, it might be appropriate to revisit the issue of whether they deserve a legal status on par with humans. Likewise, if legal systems underwent radical changes such that human rights and dignity no longer represented the primary guiding principle, the concept of full personhood for artificial entities might not represent the radical departure it does today. If the development of A/IS were to go in the opposite direction, and mechanisms were introduced allowing humans to control and predict the actions of A/IS easily and reliably, then the dangers of A/IS personhood would be no greater than those posed by well-established legal entities, such as corporations.
7. In considering whether to accord or expand legal protections, rights, and responsibilities to A/IS, governments should exercise the utmost caution. Before according full legal personhood or a comparable legal status to A/IS, governments and courts should carefully consider whether doing so might limit how widely the benefits of A/IS are or could be spread, as well as whether doing so would harm human dignity and the uniqueness of human identity. Governments and decision-makers at every level must work closely with regulators, representatives of civil society, industry actors, and other stakeholders to ensure that the interest of humanity, and not the interests of the autonomous systems themselves, remains the guiding principle.
Further Resources
S. Bayern, "The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems." Stanford Technology Law Review 19, no. 1, pp. 93-112, 2015.
S. Bayern, et al., "Company Law and Autonomous Systems: A Blueprint for Lawyers, Entrepreneurs, and Regulators." Hastings Science and Technology Law Journal 9, no. 2, pp. 135-162, 2017.
D. Bhattacharyya, "Being, River: The Law, the Person and the Unthinkable." Humanities and Social Sciences Online, April 26, 2017.
B. A. Garner, Black's Law Dictionary, 10th Edition, Thomas West, 2014.
J. Bryson, et al., "Of, for, and by the people: the legal lacuna of synthetic persons," Artificial Intelligence and Law 25, pp. 273-291, 2017.
D. J. Calverley, "Android Science and Animal Rights: Does an Analogy Exist?" Connection Science 18, no. 4, pp. 403-417, 2006.
D. J. Calverley, "Imagining a Non-Biological Machine as a Legal Person." AI & Society 22, pp. 403-417, 2008.
R. Chatila, "Inclusion of Humanoid Robots in Human Society: Ethical Issues," in Humanoid Robotics: A Reference, A. Goswami and P. Vadakkepat, Eds., Springer, 2018.
European Parliament Resolution of 16 February 2017 (2015/2103(INL)) with recommendations to the Commission on Civil Law Rules on Robotics, 2017.
L. M. LoPucki, "Algorithmic Entities." Washington University Law Review 95, p. 887, 2018.
J. S. Nelson, "Paper Dragon Thieves." Georgetown Law Journal 105, pp. 871-941, 2017.
M. U. Scherer, "Of Wild Beasts and Digital Analogues: The Legal Status of Autonomous Systems." Nevada Law Journal 19, forthcoming 2018.
M. U. Scherer, "Is Legal Personhood for AI Already Possible Under Current United States Laws?" Law and AI, May 14, 2017.
L. B. Solum, "Legal Personhood for Artificial Intelligences." North Carolina Law Review 70, no. 4, pp. 1231-1287, 1992.
J. F. Weaver, Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws. Santa Barbara, CA: Praeger, 2013.
L. Zyga, "Incident of drunk man kicking humanoid robot raises legal questions," Techxplore, October 2, 2015.
Thanks to the Contributors
We wish to acknowledge all of the people who contributed to this chapter.
The Law Committee
John Casey (Co-Chair) - Attorney-at-Law, Corporate, Wilson Sonsini Goodrich & Rosati, P.C.
Nicolas Economou (Co-Chair) - Chief Executive Officer, H5; Chair, Science, Law and Society Initiative at The Future Society; Chair, Law Committee, Global Governance of AI Roundtable; Member, Council on Extended Intelligence
Aden Allen - Senior Associate, Patent Litigation, Wilson Sonsini Goodrich & Rosati, P.C.
Miles Brundage - Research Scientist (Policy), OpenAI; Research Associate, Future of Humanity Institute, University of Oxford; PhD candidate, Human and Social Dimensions of Science and Technology, Arizona State University
Thomas Burri - Assistant Professor of International Law and European Law, University of St. Gallen (HSG), Switzerland
Ryan Calo - Assistant Professor of Law, the School of Law at the University of Washington
Clemens Canel - Referendar (Trainee Lawyer) at Hanseatisches Oberlandesgericht, graduate of the University of Texas School of Law and Bucerius Law School
Chandramauli Chaudhuri - Senior Data Scientist, Fractal Analytics
Danielle Keats Citron - Lois K. Macht Research Professor & Professor of Law, University of Maryland Carey School of Law
Fernando Delgado - PhD Student, Information Science, Cornell University
Deven Desai - Associate Professor of Law and Ethics, Georgia Institute of Technology, Scheller College of Business
Julien Durand - International Technology Lawyer; Executive Director Compliance & Ethics, Amgen Biotechnology
Todd Elmer, JD - Member of the Board of Directors, National Science and Technology Medals Foundation
Kay Firth-Butterfield - Project Head, AI and Machine Learning at the World Economic Forum; Founding Advocate of AI-Global; Senior Fellow and Distinguished Scholar, Robert S. Strauss Center for International Security and Law, University of Texas, Austin; Co-Founder, Consortium for Law and Ethics of Artificial Intelligence and Robotics, University of Texas, Austin; Partner, Cognitive Finance Group, London, U.K.
Tom D. Grant - Fellow, Wolfson College; Senior Associate of the Lauterpacht Centre for International Law, University of Cambridge, U.K.
Cordel Green - Attorney-at-Law; Executive Director, Broadcasting Commission-Jamaica
Maura R. Grossman - Research Professor, David R. Cheriton School of Computer Science, University of Waterloo; Adjunct Professor, Osgoode Hall Law School, York University
Bruce Hedin - Principal Scientist, H5
Daniel Hinkle - Senior State Affairs Counsel for the American Association for Justice
Derek Jinks - Marrs McLean Professor in Law, University of Texas Law School; Director, Consortium on Law and Ethics of Artificial Intelligence and Robotics (CLEAR), Robert S. Strauss Center for International Security and Law, University of Texas
Nicolas Jupillat - Adjunct Professor, University of Detroit Mercy School of Law
Marwan Kawadri - Analyst, Founders Intelligence; Research Associate, The Future Society
Mauricio K. Kimura - Lawyer; PhD student, Faculty of Law, University of Waikato, New Zealand; LLM from George Washington University, Washington DC, USA; Bachelor of Laws from Sao Bernardo do Campo School of Law, Brazil
Irene Kitsara - Lawyer; IP Information Officer, Access to Information and Knowledge Division, World Intellectual Property Organization, Switzerland
Timothy Lau, J.D., Sc.D. - Research Associate, Federal Judicial Center
Mark Lyon - Attorney-at-Law; Chair, Artificial Intelligence and Autonomous Systems Practice Group at Gibson, Dunn & Crutcher LLP
Gary Marchant - Regents' Professor of Law, Lincoln Professor of Emerging Technologies, Law and Ethics, Arizona State University
Nicolas Miailhe - Co-Founder & President, The Future Society; Member, AI Expert Group at the OECD; Member, Global Council on Extended Intelligence; Senior Visiting Research Fellow, Program on Science, Technology and Society at Harvard Kennedy School; Lecturer, Paris School of International Affairs (Sciences Po); Visiting Professor, IE School of Global and Public Affairs
Paul Moseley - Master's student, Electrical Engineering, Southern Methodist University; graduate of the University of Texas School of Law
Florian Ostmann - Policy Fellow, The Alan Turing Institute
Pedro Pavón - Assistant General Counsel, Global Data Protection, Honeywell
Josephine Png - AI Policy Researcher and Deputy Project Manager, The Future Society; budding barrister; BA Chinese and Law, School of Oriental and African Studies
Matthew Scherer - Attorney at Littler Mendelson, P.C., and legal scholar based in Portland, Oregon, USA; Editor, LawAndAI.com
Bardo Schettini Gherardini - Independent Legal Advisor on standardization, AI and robotics
Jason Tashea - Founder, Justice Codes; adjunct law professor at Georgetown Law Center
Yan Tougas - Global Ethics & Compliance Officer, United Technologies Corporation; Adjunct Professor, Law & Ethics, University of Connecticut School of Business; Fellow, Ethics & Compliance Initiative; Kallman Executive Fellow, Bentley University Hoffman Center for Business Ethics
Sandra Wachter - Lawyer and Research Fellow in Data Ethics, AI and Robotics, Oxford Internet Institute, University of Oxford
Axel Walz - Lawyer; Senior Research Fellow at the Max Planck Institute for Innovation and Competition, Germany (Member until October 31, 2018)
John Frank Weaver - Lawyer, McLane Middleton, P.A.; Columnist for and Member of Board of Editors of Journal of Robotics, Artificial Intelligence & Law; Contributing Writer for Slate; Author, Robots Are People Too
Julius Weitzdörfer - Affiliated Lecturer, Faculty of Law, University of Cambridge; Research Associate, Centre for the Study of Existential Risk, University of Cambridge
Yueh-Hsuan Weng - Assistant Professor, Frontier Research Institute for Interdisciplinary Sciences (FRIS), Tohoku University; Fellow, Transatlantic Technology Law Forum (TTLF), Stanford Law School
Andrew Woods - Associate Professor of Law, University of Arizona
For a full listing of all IEEE Global Initiative Members, visit standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf.
For information on disclaimers associated with EAD1e, see How the Document Was Prepared.
The Law Committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems would like to thank the following individuals for taking the time to offer valuable feedback and suggestions on Section 1 of the Law Chapter, "Norms for the Trustworthy Adoption of A/IS in Legal Systems". Each of these contributors offered comments in an individual capacity, not in the name of the organization for which they work. The final version of the Section does not necessarily incorporate all comments or reflect the views of each contributor.
Rediet Abebe, PhD Candidate, Department of Computer Science, Cornell University; cofounder, Mechanism Design for Social Good; cofounder, Black in AI.
Ifeoma Ajunwa, Assistant Professor, Labor & Employment Law, Cornell Industrial and Labor Relations School; Faculty Associate at Harvard Law, Berkman Klein Center.
Jason R. Baron, of counsel, Drinker Biddle; co-chair, Information Governance Initiative; former Director of Litigation, United States National Archives and Records Administration.
Irakli Beridze, Head, Centre for Artificial Intelligence and Robotics, United Nations (UNICRI).
Juan Carlos Botero, Law Professor, Pontificia Universidad Javeriana, Bogota; former Executive Director, World Justice Project.
Anne Carblanc, Principal Administrator, Information, Communications and Consumer Policy (ICCP) Division, Directorate for Science, Technology and Industry, OECD; former criminal investigations judge (juge d'instruction), Tribunal of Paris.
Gallia Daor, Policy Analyst, OECD.
Lydia de la Torre, Privacy Law Fellow, Santa Clara University.
Isabela Ferrari, Federal Judge, Federal Court, Rio de Janeiro, Brazil.
Albert Fox Cahn, Founder and Executive Director, Surveillance Technology Oversight Project; former Legal Director, CAIR-NY.
Paul W. Grimm, United States District Judge, United States District Court for the District of Maryland.
Gillian Hadfield, Professor of Law and Professor of Strategic Management, University of Toronto; Member, World Economic Forum Future Council for Agile Governance.
Sheila Jasanoff, Pforzheimer Professor of Science and Technology Studies, Harvard Kennedy School of Government.
Baroness Beeban Kidron, OBE, Member, United Kingdom House of Lords.
Eva Kaili, Member, European Parliament; Chair, European Parliament Science and Technology Options Assessment body (STOA).
Mantalena Kaili, cofounder, European Law Observatory on New Technologies.
Jon Kleinberg, Tisch University Professor, Departments of Computer Science and Information Science, Cornell University; member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.
Shuang Lu Frost, Teaching Fellow, PhD candidate, Department of Anthropology, Harvard University.
Arthur R. Miller, CBE, University Professor, New York University; former Bruce Bromley Professor of Law, Harvard Law School.
Manuel Muñiz, Dean and Rafael del Pino Professor of Practice of Global Leadership, IE School of Global and Public Affairs, Madrid; Senior Associate, Belfer Center, Harvard University.
Erik Navarro Wolkart, Federal Judge, Federal Court, Rio de Janeiro, Brazil.
Aileen Nielsen, chair, Science and Law Committee, New York City Bar Association.
Michael Philips, Assistant General Counsel, Microsoft.
Dinah PoKempner, General Counsel, Human Rights Watch.
Irina Raicu, Director, Internet Ethics Program, Markkula Center for Applied Ethics, Santa Clara University.
David Robinson, Visiting Scientist, AI Policy and Practice Initiative, Cornell University; Adjunct Professor of Law, Georgetown University Law Center; Managing Director (on leave), Upturn.
Alanna Rutherford, Vice President, Global Litigation & Competition, Visa.
George Socha, Esq., Consulting Managing Director, BDO USA; co-founder, Electronic Discovery Reference Model (EDRM) and Information Governance Reference Model (IGRM).
Lee Tiedrich, Partner, IP/Technology Transactions, and Co-Chair, Artificial Intelligence Initiative, Covington & Burling LLP.
Darrell M. West, VP, Governance Studies, Director, Center for Technology Innovation, Douglas Dillon Chair in Governance Studies, Brookings Institution.
Bendert Zevenbergen, Research Fellow, Center for Information Technology Policy, Princeton University; Researcher, Oxford Internet Institute.
Jiyu Zhang, Associate Professor and Executive Director of the Law and Technology Institute, Renmin University of China School of Law.
Peter Zimroth, Director, New York University Center on Civil Justice; retired partner, Arnold & Porter; former Assistant US Attorney, Southern District of New York.
Endnotes
1 See S. Jasanoff, “Governing Innovation: The Social Contract and the Democratic Imagination,” Seminar, vol. 597, pp. 16-25, May 2009. ...
2 As articulated in EAD General Principles 1 (Human Rights), 2 (Well-Being), and 3 (Data Agency). See also the EAD chapter "Classical Ethics in A/IS". In applying A/IS in pursuit of these goals, tradeoffs are inevitable. Some applications of predictive policing, for example, may reduce crime, and so enhance well-being, but may do so at the cost of impinging on a right to privacy or weakening protections against unwarranted search and seizure. How these tradeoffs are negotiated may vary with cultural and legal traditions.
3 Risks and benefits, and their perception, are neither always well-defined at the outset nor static over time. Social expectations and even ideas of lawfulness constantly evolve. For example, if younger generations, accustomed to the use of social networking technologies, have lower expectations of privacy than older generations, should this be deemed to be a benefit to society, a risk, or neither? ...
4 Regarding the nature of the guidance provided in this section: Artificial intelligence, like many other domains relied on by the legal realm (e.g., medical and accounting forensics, ballistics, or economic analysis), is a scientific discipline distinct from the law. Its effective and safe design and operation have underpinnings in academic and professional competencies in computer science, linguistics, data science, statistics, and related technical fields. Lawyers, judges, and law enforcement officers increasingly draw on these fields, directly or indirectly, as A/IS are progressively adopted in the legal system. This document does not seek to offer legal advice to lawyers, courts, or law enforcement agencies on how to practice their professions or enforce the law in their jurisdictions around the globe. Instead, it seeks to help ensure that A/IS and their operators in a given legal system can be trusted by lawyers, courts, law enforcement agencies, and civil society at large to perform effectively and safely. Such effective and safe operation of A/IS holds the potential of producing substantial benefits for the legal system, while protecting all of its participants from the ethical, professional, and business risks, or personal jeopardy, that may result from the intentional, unintentional, uninformed, or incompetent procurement and operation of artificial intelligence.
5 See Rensselaer Polytechnic Institute, "A Conversation with Chief Justice John G. Roberts, Jr.," April 11, 2017. YouTube video, 40:12. April 12, 2017. [Online]. Available: https://www.youtube.com/watch?v=TuZEKIRgDEg.
6 "Uninformed avoidance of adoption" can be one of two types: (a) avoidance of adoption when the information needed to enable sound decisions is available but is not taken into consideration, and (b) avoidance of adoption when the information needed to enable sound decisions is simply not available. Unlike the former type of avoidance, the latter type is a prudent and well-reasoned avoidance of adoption and, pending better information, is the course recommended by a number of experts and nonexperts.
7 For purposes of this chapter, we have made the deliberate choice to focus on these four principles without taking a prior position on where the deployment of A/IS may or may not be acceptable in legal systems. Where these principles cannot be adequately operationalized, it would follow that the deployment of A/IS in a legal system cannot be trusted. Where A/IS can be evidenced to meet desired thresholds for each duly operationalized principle, it would follow that their deployment can be trusted. Such information is intended to facilitate, not preempt, the indispensable public policy dialogue on the extent to which A/IS should be relied upon to meet the specific needs of the legal systems of societies around the world. ...
8 It is beyond the scope of this chapter to discuss the process through which such adherence may become institutionalized in the complex legal, technological, political, and cultural dynamics in which sociotechnical innovation occurs. It is worth noting, however, that this process typically involves four steps. First, a wide range of market- and culture-driven practices emerge. Second, a set of best practices arises, reflecting a group's willingness to adopt certain rules. Third, some of these best practices are formulated into standards, which enable enforcement (through private contracts, professional codes of practice, or legislation). Finally, those enforceable standards render the performance of some activities sufficiently reliable and predictable to enable trustworthy operation at the scale of society. Where these elements (rulemaking, enforcement, scalable operation) are present, new institutions are born.
9 For a discussion of the definition of A/IS, see the Terminology Update in the Executive Summary of EAD. The principles outlined in this section as constitutive of "informed trust" do not depend on a precise, consensus definition of A/IS and are, in fact, designed to enable successful operationalization under a broad range of definitions.
10 Such as Gross Domestic Product (GDP), Gross National Income (GNI) per capita, the WEF Global Competitiveness Index, and others.
11 Such as life expectancy, infant mortality rate, and literacy rate, as well as composite indices such as the Human Development Index, the Inequality-Adjusted Human Development Index, the OECD Framework for Measuring Well-being and Progress, and others. For more on measures of well-being, see the EAD chapter on "Well-being".
12 See United Nations General Assembly, Universal Declaration of Human Rights, Dec. 10, 1948, available: http://www.un.org/en/universal-declaration-human-rights/index.html; see also United Nations Office of the High Commissioner: Human Rights, The Vienna Declaration and Programme of Action, June 25, 1993, available: https://www.ohchr.org/en/professionalinterest/pages/vienna.aspx.
14 See United Nations Security Council, “The Rule of Law and Transitional Justice in Conflict and Post-conflict Societies: Report of the Secretary General,” Report S/2004/616 (2004). ...
15 See The World Economic Forum, The Global Competitiveness Report: 2018, ed. K. Schwab (2018), pp. 12ff. ...
16 See A. Brunetti, G. Kisunko, and B. Weder, “Credibility of Rules and Economic Growth: Evidence from a Worldwide Survey of the Private Sector,” The World Bank Economic Review, vol. 12, no. 3, pp. 353-384, 1998. Available: https://doi.org/10.1093/wber/12.3.353; see also World Bank, World Development Report 2017: Governance and the Law, Jan. 2017. Available: doi.org/10.1596/978-1-4648-0950-7. ...
17 The question of intellectual property law in an era of rapidly advancing technology (both A/IS and other technologies) is a complex and often contentious one involving legal, economic, and ethical considerations. We have not yet studied the question in sufficient depth to reach a consensus on the issues raised. We may examine the issues in depth in a future version of EAD. For a forum in which such issues are discussed, see the Berkeley-Stanford Advanced Patent Law Institute. See also The World Economic Forum, "Artificial Intelligence Collides with Patent Law," April 2018. Available: http://www3.weforum.org/docs/WEF_48540_WP_End_of_Innovation_Protecting_Patent_Law.pdf.
18 A component of human dignity is privacy, and a component of privacy is protection and control of one’s data; in this regard, frameworks such as the EU’s General Data Protection Regulation (GDPR) and the Council of Europe’s “Guidelines on the protection of individuals with regard to the processing of personal data in a world of Big Data” have a role to play in setting standards for how legal systems can protect data privacy. See also EAD General Principle 3 (Data Agency). ...
19 Frameworks such as the Universal Declaration of Human Rights and the Vienna Declaration and Programme of Action (VDPA) have a role to play in articulating human-rights standards to which legal systems should adhere. See also EAD General Principle 1 (Human Rights). ...
20 For more on the importance of measures of well-being beyond GDP, see EAD General Principle 2 (Well-being). ...
21 For a conceptual framework enabling the country-by-country assessment of the Rule of Law, see World Justice Project, Rule of Law Index, 2018. Available: https://worldjusticeproject.org/sites/default/files/documents/WJP-ROLI-2018-June-Online-Edition_O.pdf.
22 See D. Kennedy, “The ‘Rule of Law,’ Political Choices and Development Common Sense,” in The New Law and Economic Development: A Critical Appraisal, D. M. Trubek and A. Santos, Ed. Cambridge: Cambridge University Press, 2006, pp. 156-157; see also A. Sen, Development as Freedom. New York: Alfred A. Knopf, 1999. ...
23 See Kennedy (2006): pp. 168-169. "The idea that building 'the rule of law' might itself be a development strategy encourages the hope that choosing law in general could substitute for all the perplexing political and economic choices that have been at the center of development policy making for half a century. The politics of allocation is submerged. Although a legal regime offers an arena to contest those choices, it cannot substitute for them."
24 Fairness (as well as bias) can be defined in more than one way. For purposes of this chapter, a commitment is not made to any one definition, and indeed, it may not be either desirable or feasible to arrive at a single definition that would be applied in all circumstances. The trust principles proposed in the chapter (Effectiveness, Competence, Accountability, and Transparency) are defined such that they will provide information that will allow the testing of an application of A/IS against any fairness criteria.
25 The confidentiality of jury deliberations, certain sensitive cases, and personal data are some of the considerations that influence the extent of appropriate public examination and oversight mechanisms. ...
26 The avoidance of negative consequences is important to note in relation to effectiveness. The law can be used for malevolent or intensely disputed purposes (for example, the quashing of dissent or mass incarceration). The instruments of the law, including A/IS, can render the advancement of such purposes more effective to the detriment of democratic values, human rights, and human well-being. ...
27 Studies conducted by the US National Institute of Standards and Technology (NIST) between 2006 and 2011, known as the US NIST Text REtrieval Conference (TREC) Legal Track, suggest that some A/IS-enabled processes, if operated by trained experts in the relevant scientific fields, can be more effective (or accurate) than human attorneys in correctly identifying case-relevant information in large data sets. NIST has a long-standing reputation for cultivating trust in technology by participating in the development of standards and metrics that strengthen measurement science and make technology more secure, usable, interoperable, and reliable. This work is critical in the A/IS space to ensure public trust of rapidly evolving technologies so that we can benefit from all that this field has to promise. ...
28 In describing the potential A/IS have for aiding in the auditing of decisions made in the civil and criminal justice systems, we are envisioning them acting as aids to a competent human auditor (see Issue 4) in the context of internal or judicial review. ...
29 Of course, the use of A/IS in improving the effectiveness of law enforcement may raise concerns about other aspects of well-being, such as privacy and the rise of the surveillance state, cf. Minority Report (2002). If A/IS are to be used for law enforcement, steps must be taken to ensure that they are used, and that citizens trust that they will be used, in ways that are conducive to ethical law enforcement and individual well-being (see Issue 2). ...
30 A/IS may also provide assistance in carrying out legal tasks associated with larger transactions, such as evaluating contracts for risk in connection with an M&A transaction or reporting exposure to regulators. ...
31 The recommendations provided in this chapter (both under this issue and under the other issues discussed in the chapter) are intended to give general guidance as to how those with a stake in the just and effective operation of a legal system can develop norms for the trustworthy adoption of A/IS in the legal system. The specific ways in which the recommendations are operationalized will vary from society to society and from jurisdiction to jurisdiction. ...
32 See “Global Governance of AI Roundtable: Summary Report 2018,” World Government Summit, 2018: p. 32. Available: https://www.worldgovernmentsummit.org/api/publications/document?id=ff6c88c5-e97c-6578-b2f8ff0000a7ddb6. (The February 2018 Dubai Global Governance of AI Roundtable brought together ninety leading thinkers on AI governance.) ...
33 See State v. Loomis, 881 N.W.2d 749 (Wis. 2016), cert. denied (2017); see also “Criminal Law-Sentencing Guidelines-Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing-State v. Loomis, 881 N.W.2d 749 (Wis. 2016),” Harvard Law Review, vol. 130, no. 5, pp. 1535-1536, 2017. Available: http://harvardlawreview.org/wp-content/uploads/2017/03/1530-1537_online.pdf; see also K. Freeman, “Algorithmic Injustice: How the Wisconsin Supreme Court Failed to Protect Due Process Rights in State v. Loomis,” North Carolina Journal of Law and Technology, vol. 18, no. 5, pp. 75-76, 2016. Available: https://scholarship.law.unc.edu/ncjolt/vol18/iss5/3/. ...
34 An example of an initiative that seeks to bridge the gap between technical and legal expertise is the Artificial Intelligence Legal Challenge, held at Ryerson University and sponsored by Canada’s Ministry of the Attorney General: http://www.legalinnovationzone.ca/press_release/ryersons-legal-innovation-zone-announces-winners-of-ai-legal-challenge/. ...
35 And, in addressing the challenges, consideration must be given to existing modes of proposing and approving innovation in the legal system. Trust in A/IS will be undermined if they are viewed as not having been vetted via established processes. ...
36 For an overview of risk and risk management, see Working Party on Security and Privacy in the Digital Economy, Background Report for Ministerial Panel 3.2, Directorate for Science, Technology and Innovation, Committee on Digital Economy Policy, Managing Digital Security and Privacy Risk, OECD, June 1, 2016; see p. 5. ...
37 It is worth emphasizing the “informed” qualifier we attach to trust here. Far from advocating for a “blind trust” in A/IS, we argue that A/IS should be adopted only when we have sound evidence of their effectiveness, when we can be confident of the competence of their operators, when we have assurances that these systems allow for the attribution of responsibility for outcomes (both positive and negative), and when we have clear views into their operation. Without those conditions, we would argue that A/IS should not be adopted in the legal system. ...
38 The importance of testing the effectiveness of advanced technologies, including A/IS, in the legal system (and beyond) is not new: it was highlighted by Judge Paul W. Grimm in an important early ruling on legal fact-finding, Victor Stanley v. Creative Pipe, Inc., 250 F.R.D. 251, 257 (D. Md. 2008), followed, among others, by the influential research and educational institute The Sedona Conference as well as the International Organization for Standardization (ISO). See An Open Letter to Law Firms and Companies in the Legal Tech Sector, The Sedona Conference (2009), and Commentary on Achieving Quality in the E-Discovery Process (2013): 7; ISO standard on electronic discovery (ISO/IEC 27050-3:2017): 19. Most recently, in the summary report of the Global Governance of AI Roundtable at the 2018 World Government Summit, Omar bin Sultan Al Olama, Minister of State for Artificial Intelligence of the UAE, highlighted the importance of “empirical information” in assessing the suitability of A/IS. ...
39 In the terminology of software development, verification is a demonstration that a given application meets a narrowly defined requirement; validation is a demonstration that the application answers its real-world use case. When we speak of gathering evidence of the effectiveness of A/IS, we are speaking of validation. ...
40 Standards may include compliance with defined professional competence or other ethical requirements, but also other types of standards, such as data standards. Data standards may serve as “a digital lingua franca” with the potential of both supporting broad-based technological innovation (including A/IS innovation) in a legal ...
system and facilitating access to justice. As part of interactive technology solutions, appropriate data standards may help connect the ordinary citizen to the appropriate resources and information for his or her legal needs. For a discussion of open data standards in the context of the US court system, see D. Colarusso and E. J. Rickard, “Speaking the Same Language: Data Standards and Disruptive Technologies in the Administration of Justice,” Suffolk University Law Review, vol. 50, p. 387, 2017. ...
41 For measurement of bias in facial recognition software, see C. Garvie, A. M. Bedoya, and J. Frankle, “The Perpetual Line-Up: Unregulated Police Face Recognition in America,” Georgetown Law, Center on Privacy & Technology, Oct. 2016. Available: https://www.perpetuallineup.org/. ...
42 The inclusion of such collateral effects in assessing effectiveness is an important element in overcoming the apparent “black box” or inscrutable nature of A/IS. See, for example, J. A. Kroll, “The fallacy of inscrutability,” Philosophical Transactions of the Royal Society A: Mathematical, Physical, and Engineering Sciences, vol. 376, no. 2133, Oct. 2018. Available: doi.org/10.1098/rsta.2018.0084. The study addresses, among other questions, “how measurement of a system beyond understanding of its internals and its design can help to defeat inscrutability.”
43 The question of the salience of collateral impact will vary with the specific application of A/IS. For example, false positives in document review related to fact-finding will generally not raise acute ethical issues, but false positives ...
in predictive policing or sentencing will. In these latter domains, complex and sometimes unsettled issues of fairness arise, particularly when social norms of fairness change regionally and over time (sometimes rapidly). Any A/IS that was designed to replicate some notion of fairness would need to demonstrate its effectiveness, first, at replicating prevailing notions of fairness that have legitimacy in society, and second, at responding to evolutions in such notions of fairness. In the current state of A/IS, in which no system has been able to demonstrate consistent effectiveness in either of the above regards, it is essential that great discretion be exercised in considering any reliance on A/IS in domains such as sentencing and predictive policing. ...
44 These exercises go by various names in the literature: effectiveness evaluations, benchmarking exercises, validation studies, and so on. See, for example, the definition of validation study in AI Now’s 2018 Algorithmic Accountability Toolkit (https://ainowinstitute.org/aap-toolkit.pdf), p. 29. For our purposes, what matters is that the exercise be one that collects, in a scientifically sound manner, evidence of how “fit for purpose” any given A/IS are. ...
45 This feature of evaluation design is important, as only tasks that accurately reflect real-world conditions and objectives (which may include the avoidance of unintended consequences, such as racial bias) will provide compelling guidance as to the suitability of an application for adoption in the real world. ...
46 For TREC generally, see: https://trec.nist.gov/. For the TREC Legal Track specifically, see: https://trec-legal.umiacs.umd.edu/. ...
47 When a complex system can be broken down into separate component systems, it may be appropriate to assess either the effectiveness of each component, or that of the end-to-end application as a whole (including human operators), depending on the specific question to be answered. ...
48 Qualitative considerations may also help counter attempts to “game the system” (i.e., attempts to use bad-faith methods to meet a specific numerical target); see B. Hedin, D. Brassil, and A. Jones, “On the Place of Measurement in E-Discovery,” in Perspectives on Predictive Coding and Other Advanced Search Methods for the Legal Practitioner, ed. J. R. Baron, R. C. Losey, and M. D. Berman. Chicago: American Bar Association, 2016, p. 415 f. ...
49 Even in fact-finding, accurate extraction of facts does not eliminate the need for reasoned judgment as to the significance of the facts in the context of specific circumstances and cultural considerations. Used properly, A/IS will advance the spirit of the law, not just the letter of the law.
50 Electronic discovery is the task of searching through large collections of electronically stored information (ESI) for material relevant to civil and criminal litigation and investigations. Among applications of A/IS to legal tasks and questions, the application to legal discovery is probably the most “mature,” as measured against the criteria of having been tested, assessed and approved by courts, and adopted fairly widely across various jurisdictions. ...
51 While there is general consensus about the importance of these metrics in gauging effectiveness in legal discovery, there is not a consensus around the precise values for those metrics that must be met for a discovery effort to be acceptable. That is a good thing, as the precise value that should be attained, and demonstrated to have been attained, in any given matter will be dependent on, and proportional to, the specific facts and circumstances of that matter. ...
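The consensus metrics referred to here are, in the e-discovery literature, typically recall (the share of truly responsive documents a process retrieves) and precision (the share of retrieved documents that are in fact responsive). A minimal illustrative sketch, with hypothetical counts that are not drawn from any actual matter:

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Standard retrieval-effectiveness metrics.

    recall    = TP / (TP + FN): share of responsive documents found
    precision = TP / (TP + FP): share of retrieved documents that are responsive
    """
    recall = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    return precision, recall

# Hypothetical review: 8,000 responsive documents retrieved,
# 2,000 non-responsive documents retrieved, 2,000 responsive missed.
precision, recall = precision_recall(8000, 2000, 2000)
print(f"precision={precision:.2f}, recall={recall:.2f}")  # precision=0.80, recall=0.80
```

As the footnote notes, what values of these metrics are "acceptable" is a matter for the circumstances of the case, not for the formula.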
52 Different domains of application of A/IS to legal matters will vary not only with regard to the availability of consensus metrics of effectiveness, but also with regard to conditions that affect the challenge of measuring effectiveness: availability of data, impact of social bias, and sensitivity to privacy concerns all affect how difficult it may be to arrive at consensus protocols for gauging effectiveness. In the case of defining an effectiveness metric for A/IS used in support of sentencing decisions, one challenge is that, while it is easy to find when an individual who has been released commits a crime (or is convicted of committing a crime), it is difficult to assess when an individual who was not released would have committed a crime. For a discussion of the challenges in measuring the effectiveness of tools designed to assess flight risk, see M. T. Stevenson, “Assessing Risk Assessment in Action,” Minnesota Law Review, vol. 103, 2018. Available: doi.org/10.2139/ssrn.3016088. ...
53 Sound measurement may also serve as an effective antidote to the unsubstantiated claims sometimes made regarding the effectiveness of certain applications of A/IS to legal matters ...
(e.g., flight risk assessment technologies); see Stevenson, “Assessing Risk Assessment”. Unsubstantiated claims are an appropriate source of an informed distrust in A/IS. Such well-founded distrust can be addressed only with truly meaningful and sound measures that provide accurate information regarding the capabilities and limitations of a given system. ...
54 See the discussion under “Illustration-Effectiveness” in this chapter. ...
55 For more on principles for data protection, see the EAD chapter “Personal Data and Individual Agency”. ...
56 The importance of validation by practitioners is reflected in the European Commission’s High-Level Expert Group on Artificial Intelligence Draft Ethics Guidelines for Trustworthy AI: “Testing and validation of the system should thus occur as early as possible and be iterative, ensuring the system behaves as intended throughout its entire life cycle and especially after deployment.” (Emphasis added.) See High-Level Expert Group on Artificial Intelligence, “DRAFT Ethics Guidelines for Trustworthy AI: Working Document for Stakeholders’ Consultation,” The European Commission. Brussels, Belgium: Dec. 18, 2018. ...
57 That scrutiny need not extend to IP or other protected information (e.g., attorney work product). Validation methods and results are a matter of numbers and procedures for obtaining the numbers, and their disclosure would not impinge on safeguards against the disclosure of legitimately protected information. ...
58 A recent matter from the US legal system illustrates how a failure to disclose the results of a validation exercise can limit the exercise’s ability to achieve its intended purpose. In Winfield v. City of New York (Opinion & Order. 15-CV-05236 [LTS] [KHP]. SDNY 2017), a party had utilized the A/IS-enabled system to conduct a review of documents for relevance to the matter being litigated. When the accuracy and completeness of the results of that review were challenged by the requesting party, the producing party disclosed that it had, in fact, conducted validation of its results. Rather than requiring that the producing party simply disclose the results of the validation to the requesting party, the judge overseeing the dispute chose to review the results herself in camera, without providing access to the requesting party. Although the judge then said that the evidence she was provided supported the accuracy and completeness of the review, the requesting party could not itself examine either the evidence or the methods whereby it was obtained, and so could not gain confidence in the results. That confidence comes only from examining the metrics and the procedures followed in obtaining them. Moreover, the results of a validation exercise, which are usually simple numbers that reflect sampling procedures, can be disclosed without revealing the content of any documents, any proprietary tools or methods, or any attorney work product. If the purpose of conducting a validation exercise is to gather evidence of the effectiveness of a process, in the event that the process is challenged, keeping that evidence hidden from those who would challenge the process limits the ability of the validation exercise to achieve its intended purpose. ...
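The "simple numbers that reflect sampling procedures" mentioned above can be made concrete: a common validation protocol estimates an effectiveness rate by manually re-reviewing a random sample and reporting the point estimate with a margin of error. A minimal sketch under invented counts (normal-approximation interval; not any court-mandated protocol):

```python
import math

def proportion_estimate(successes, sample_size, z=1.96):
    """Point estimate and 95% normal-approximation margin of error
    for a rate measured on a simple random sample."""
    p = successes / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Hypothetical validation: 400 documents drawn at random from the
# production; 320 confirmed responsive on manual re-review.
p, margin = proportion_estimate(320, 400)
print(f"estimate={p:.2f} +/- {margin:.3f}")
```

Numbers of this kind, together with the sampling procedure that produced them, are what could be disclosed without revealing document content, proprietary tools, or attorney work product.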
61 The statistical evidence in question here is statistical evidence of the effectiveness of A/IS applied to the task of discovery; it is not statistical evidence of facts actually at issue in litigation. Courts may have different rules for the admissibility of the two kinds of statistical evidence (and there will be jurisdictional differences on these questions). ...
62 It is important to underscore that, whereas developers and operators of A/IS should be able to derive sound measurements of effectiveness, the courts should determine what level of effectiveness (what score) should be demonstrated to have been achieved, based on the facts and circumstances of a given matter. In some instances, the cost (in terms of sample sizes, resources required to review the samples, and so on) of demonstrating the achievement of a high score will be disproportionate to the stakes of a given matter. In others, for example, a major securities fraud claim that potentially affects thousands of citizens, a court might justifiably demand a demonstration of the achievement of a very high score, irrespective of cost. Demonstrations of the effectiveness of A/IS (and of their operators) are instruments in support of, not in substitution of, judicial decision-making. ...
63 See, for example, B. Hedin, S. Tomlinson, J. R. Baron, and D. W. Oard, “Overview of the TREC 2009 Legal Track,” in NIST Special Publication: SP 500-278, The Eighteenth Text REtrieval Conference (TREC 2009) Proceedings (2009). ...
64 See M. R. Grossman and G. V. Cormack, “Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review,” Richmond Journal of Law and Technology, vol. 17, no. 3, 2011. Available: http://jolt.richmond.edu/jolt-archive/v17i3/article11.pdf. Note that the two systems that conclusively demonstrated “better than human” performance took methodologically distinct approaches, but they shared the characteristic of having been designed, operated, and measured for accuracy by scientifically trained experts. ...
65 Da Silva Moore v. Publicis Groupe, 2012 WL 607412 (S.D.N.Y. Feb. 24, 2012). See also A. Peck, “Search, Forward,” Legaltech News, Oct. 1, 2011. Available: https://www.law.com/legaltechnews/almID/1202516530534SearchForward/. ...
66 The fact that NIST has an important role to play in developing standards for the measurement of the safety and security of A/IS was recognized in a recent (September 2018) report from the U.S. House of Representatives: “At minimum, a widely agreed upon standard for measuring the safety and security of AI products and applications should precede any new regulations. … The National Institute of Standards and Technology (NIST) is situated to be a key player in developing standards.” (Will Hurd and Robin Kelly, “Rise of the Machines: Artificial Intelligence and its Growing Impact on U.S. Policy,” U.S. House of Representatives, Committee on Oversight and Government Reform, Subcommittee on Information Technology, September 2018.) ...
67 The competence principle is intended to apply to the post-design operation of A/IS. Of course, that does not mean that designers and developers of A/IS are free of responsibility for their systems’ outcomes. As discussed in the background to this issue, it is incumbent on designers and developers to assess the risks associated with the operation of their systems and to specify the operator competencies needed to mitigate those risks. For more on the question of designer incompetence or negligence, see the discussion of “software malpractice” in Kroll (2018). ...
68 The ISO standard on e-discovery, ISO/IEC 27050-3, does recognize the importance of expertise in applying advanced technologies in a search for documents responsive to a legal inquiry; see ISO/IEC 27050-3: Information technology - Security techniques - Electronic discovery - Part 3: Code of practice for electronic discovery, Geneva (2017), pp. 19-20. ...
69 See, for example, ABA Model Rule 1.1, Comment 8: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.” Available: https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/comment_on_rule_1_1/. See also The State Bar of California Standing Committee on Professional Responsibility and Conduct, Formal Opinion No. 2015-193. Available: https://www.calbar.ca.gov/Portals/0/documents/ethics/Opinions/CAL%202015-193%20%5B11-0004%5D%20(06-30-15)%20-%20FINAL.pdf. ...
70 In the deliberations of the Law Committee of the 2018 Global Governance of AI Roundtable, the question of the competencies needed “in order to effectively operate and measure the efficacy of AI systems in legal functions that affect the rights and liberty of citizens” was cited as one of the considerations that “appear to be most overlooked in the current public dialogue.” See “Global Governance of AI Roundtable: Summary Report 2018,” World Government Summit, 2018: p. 7. Available: https://www.worldgovernmentsummit.org/api/publications/document?id=ff6c88c5-e97c-6578-b2f8ff0000a7ddb6. ...
71 See A. G. Ferguson, “Policing Predictive Policing,” Washington University Law Review, vol. 94, no. 5, 2017: 1109, 1172. Available: https://openscholarship.wustl.edu/law_lawreview/vol94/iss5/5/. ...
72 In addition, a lack of competence in interpreting the results of a statistical exercise can (and often does) result in an incorrect conclusion (on the part of a party to a dispute or of a judge seeking to resolve a dispute). For example, in In re: Biomet, a judge addressing a discovery dispute interpreted the statistical data provided by the producing party as indicating that the producing party’s retrieval process had left behind “a comparatively modest number” of responsive documents, when the statistical evidence showed, in fact, that a substantial number of responsive documents had been left behind. ...
See In re: Biomet M2a Magnum Hip Implant Prods. Liab. Litig., No. 3:12-MD-2391 (N.D. Ind. April 18, 2013). ...
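The kind of misreading at issue in Biomet often turns on the difference between a rate and an absolute count: a retrieval process can report a respectable recall rate and still leave behind a large number of responsive documents when the collection is large. A hypothetical illustration (the figures are invented, not taken from the case):

```python
def documents_left_behind(estimated_responsive_total, recall):
    """Responsive documents NOT retrieved, given an estimated total
    number of responsive documents and the process's measured recall."""
    return round(estimated_responsive_total * (1 - recall))

# Hypothetical: 100,000 responsive documents estimated in the collection,
# retrieval process measured at 75% recall.
missed = documents_left_behind(100_000, 0.75)
print(missed)  # 25000: hardly a "comparatively modest number"
```

Competent interpretation requires translating the reported rate back into the absolute scale of the matter at hand.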
73 For example, a prior violent conviction may be weighted equally, whether the violent act was a shove or a knife attack. See Human Rights Watch, “Q & A: Profile Based Risk Assessment for US Pretrial Incarceration, Release Decisions,” June 1, 2018. Available: https://www.hrw.org/news/2018/06/01/q-profile-based-risk-assessment-us-pretrial-incarceration-release-decisions. ...
74 Bias can be introduced in a number of ways: via the features taken into consideration by the algorithm, via the nature and composition of the training data, via the design of the validation protocol, and so on. A competent operator will be alert to and assess such potential sources of bias.
75 Among the conditions may be, for example, that the results of the system are to be used only to provide guidance to the human decision maker (e.g., judge) and should not be taken as, in themselves, dispositive. ...
76 Given that the effective functioning of a legal system is a matter of interest to the whole of society, it is important that all members of a society be provided with access to the resources needed to understand when and how A/IS are applied in support of the functioning of a legal system. ...
77 Among the topics covered by such training should be the potential for “automation bias” and ways to mitigate it. See L. J. Skitka, K. Mosier, and M. D. Burdick, “Does automation ...
bias decision-making?” International Journal of Human-Computer Studies, vol. 51, no. 5, pp. 991-1006, 1999. Available: https://doi.org/10.1006/ijhc.1999.0252; L. J. Skitka, K. Mosier, and M. D. Burdick, “Accountability and automation bias,” International Journal of Human-Computer Studies, vol. 52, no. 4, pp. 701-717, 2000. Available: https://doi.org/10.1006/ijhc.1999.0349. ...
78 Some government agencies are working toward creating a more effective partnership between the skills found in technology startups and the skills required of legal practitioners. See Legal Innovation Zone, “Ryerson’s Legal Innovation Zone Announces Winners of AI Legal Challenge,” March 26, 2018. Available: http://www.legalinnovationzone.ca/press_release/ryersons-legal-innovation-zone-announces-winners-of-ai-legal-challenge/. ...
80 See E. Dwoskin, “Amazon is selling facial recognition to law enforcement - for a fistful of dollars,” Washington Post, May 22, 2018. Available: https://www.washingtonpost.com/news/the-switch/wp/2018/05/22/amazon-is-selling-facial-recognition-to-law-enforcement-for-a-fistful-of-dollars/?noredirect=on&utm_term=.07d9ca13ab77. ...
81 See, for example, J. Stanley, “FBI and Industry Failing to Provide Needed Protections for Face Recognition,” ACLU Free Future blog, June 15, 2016. Available: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/fbi-and-industry-failing-provide-needed. ...
82 It is also the case that, among the false positives, nonwhite members of Congress were overrepresented relative to their proportion in Congress as a whole, perhaps indicating that the accuracy of the technology is, to some degree, race-dependent. Without knowing more about the composition of the mugshot database, however, we cannot assess the significance of this result.
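Whether false positives are "overrepresented" for a group is usually assessed by computing the false positive rate separately for each group, rather than by comparing raw counts. A minimal sketch with invented per-group counts (not the actual data from the test discussed above):

```python
def false_positive_rate(false_positives, true_negatives):
    """FPR = FP / (FP + TN): share of non-matching individuals
    incorrectly flagged as matches."""
    return false_positives / (false_positives + true_negatives)

# Invented per-group counts, for illustration only.
groups = {
    "group_a": {"fp": 4, "tn": 196},   # 4 of 200 non-matches flagged
    "group_b": {"fp": 12, "tn": 188},  # 12 of 200 non-matches flagged
}
for name, g in groups.items():
    print(name, false_positive_rate(g["fp"], g["tn"]))
```

A marked gap between group-level rates, if it persists under sound sampling, is one signal that accuracy is group-dependent; as the footnote notes, the composition of the underlying database must also be known before drawing conclusions.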
85 The story also highlights the question of accountability, illustrating how the principles discussed in this report intersect with and complement each other. ...
86 Of course, competent use does not preclude use for bad ends (e.g., government surveillance that impinges on human rights). The principle of competence is one principle in a set that, collectively, is designed to ensure the ethical application of A/IS. See the EAD chapter “General Principles”. ...
87 Developing “well grounded” guidelines will typically require that the creators of A/IS gather input from both those operating the technology and those affected by the technology’s operation. ...
88 The use of facial recognition technologies by security and law enforcement agencies raises issues that extend beyond the question of operator competence. For further discussion of such issues, see C. Garvie, A. M. Bedoya, and J. Frankle, “The Perpetual Line-Up: Unregulated Police Face Recognition in America,” Georgetown Law, Center on Privacy & Technology, Oct. 18, 2016. Available: https://www.perpetuallineup.org/. ...
89 As noted above, some professional organizations, such as the ABA, have begun to recognize in their codes of ethics the importance of technological competence, although the guidance does not yet address A/IS specifically. ...
90 Including those engaged in the procurement and deployment of a system means that those acquiring and authorizing the use of a system can share in the responsibility for its results. For example, in the case of A/IS deployed in the service of the courts, this could be the judiciary; in the case of A/IS deployed in the service of law enforcement, this could be the agency responsible for the enforcement of the law and ...
the administration of justice; in the case of A/IS used by a party to legal proceedings, this could be the party’s counsel. ...
91 J. New and D. Castro, “How Policymakers Can Foster Algorithmic Accountability,” Information Technology & Innovation Foundation, p. 5, 2018. Available: https://www.itif.org/publications/2018/05/21/how-policymakers-can-foster-algorithmic-accountability. ...
92 Included among possible “causes” for an effect are not only the decision-making pathways of algorithms but also, importantly, the decisions made by humans involved in the design, development, procurement, deployment, operation, and validation of effectiveness of A/IS. ...
93 The challenge, moreover, is one not only of assigning responsibility, but of assigning levels of responsibility (a task that could benefit from a neutral model that could consider how much interaction and influence each stakeholder has in every decision). ...
94 Scherer (2016): 372. In addition to diffuseness, Scherer identifies discreetness, discreteness, and opacity as features of the design and development of A/IS that make apportioning responsibility for their outcomes a challenge for regulators and courts. ...
95 In answering these questions, it will be important to keep in mind the distinction between responsibility (a factual question) and ultimate accountability (a normative question). In the case of the example under discussion, there may be multiple individuals who have ...
some practical responsibility for the sentence given, but the normative framework may place ultimate accountability on the judge. Before normative accountability can be assigned, however, pragmatic responsibilities must be clarified and understood. Hence the focus, in this section, on clarifying lines of responsibility so that ultimate accountability can be determined. ...
96 If effectiveness is measured against statistics that themselves may represent human bias (e.g., arrest rates), then the effectiveness measures may just reflect and reinforce that bias. ...
97 “‘The algorithm did it’ is not an acceptable excuse if algorithmic systems make mistakes or have undesired consequences, including from machine-learning processes.” See “Principles for Accountable Algorithms and a Social Impact Statement for Algorithms.” FAT/ML Resources. www.fatml.org/resources/principles-for-accountable-algorithms. ...
98 See W. Langewiesche, “The Lessons of ValuJet 592,” Atlantic Monthly, vol. 281, pp. 81-97, 1998; S. D. Sagan, The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. Princeton, NJ: Princeton University Press, 1995. ...
99 For a discussion of the role of explanation in maintaining accountability for the results of A/IS and of the question of whether the standards for explanation should be different for A/IS than they are for humans, see F. Doshi-Velez, M. Kortz, R. Budish, C. Bavitz, S. J. Gershman, D. O’Brien, S. Shieber, J. Waldo, D. Weinberger, and A. Wood, Accountability of AI Under the Law: The Role of Explanation (November 3, 2017). Berkman Center Research Publication Forthcoming; Harvard Public Law Working ...
100 Also, gaining access to that information should not be unduly burdensome. ...
101 Those developing a model for accountability for A/IS may find helpful guidance in considering models of accountability used in other domains (e.g., data protection). ...
102 For a discussion of how such policies might be implemented in accordance with protocols for information governance, see J. R. Baron and K. E. Armstrong, “The Algorithm in the C-Suite: Applying Lessons Learned and Information Governance Best Practices to Achieve Greater Post-GDPR Algorithmic Accountability,” in The GDPR Challenge: Privacy, Technology, and Compliance In An Age of Accelerating Change, A. Taal, Ed. Boca Raton, FL: CRC Press, forthcoming. ...
103 These inquiries can be supported by technological tools that may provide information essential to answering questions of accountability but that do not require full transparency into underlying computer code and may avoid the necessity of an intrusive audit; see Kroll et al. (2017). Among the tools identified by Kroll and his colleagues are: software verification, cryptographic commitments, zero-knowledge proofs, and fair random choices. While the use of such tools may avoid the limitations of solutions such as transparency and audit, they do require that creators of A/IS design their systems so that they will be compatible with the application of such tests. ...
Law
104 Certifications may include, for example, professional certifications of competence, but also certifications of compliance of processes with standards. An example of a certification program specifically addressing A/IS is The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), https://standards.ieee.org/industry-connections/ecpais.html.
105 This means that A/IS used in legal systems will have to be defensible in courts. The margin of error will have to be low or the use of A/IS will not be permitted.
106 It is also the case that evidence produced by A/IS will be subject to chain-of-custody rules, as are other types of forensic evidence, to ensure integrity, confidentiality, and authenticity.
107 See for instance Art. 22(1) Regulation (EU) 2016/679. ...
108 Human dignity, as a core value protected by the United Nations Universal Declaration of Human Rights, requires us to fully respect the personality of each human being and prohibits their objectification.
109 This concern is reflected in Principle 5 of the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, recently published by the Council of Europe’s European Commission for the Efficiency of Justice (CEPEJ). Principle 5 (“Principle ‘Under User Control’: preclude a prescriptive approach and ensure that users are informed actors and in control of the choices made”) states, with regard to professionals in the justice system, that they should “at any moment, be able to review judicial decisions and the data used to produce a result and continue not to be necessarily bound by it in the light of the specific features of that particular case,” and, with regard to decision subjects, that he or she must “be clearly informed of any prior processing of a case by artificial intelligence before or during a judicial process and have the right to object, so that his/her case can be heard directly by a court.” See CEPEJ, European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment (Strasbourg, 2018), p. 10.
110 J. Tashea, “Calculating Crime: Attorneys Are Challenging the Use of Algorithms to Help Determine Bail, Sentencing and Parole,” ABA Journal (March 2017).
111 Loomis v. Wisconsin, 68 WI. (2016).
112 Id. at pp. 46-66.
113 R. Wexler, Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, Stanford Law Review, 2018. ...
115 People v. Chubbs CA2/4, B258569 (Cal. Ct. App. 2015). ...
116 U.S. v. Ocasio, No. 3:11-cr-02728-KC, slip op. at 1-2, 11-12 (W.D. Tex. May 28, 2013). ...
117 U.S. v. Johnson, No. 1:15-cr-00565-VEC, order (S.D.N.Y., June 7, 2016). ...
118 Indeed, without transparency, there may, in some circumstances, be no means for even knowing whether an error that needs to be corrected was committed. In the case of A/IS applied in a legal system, an “error” can mean real harm to the dignity, liberty, and life of an individual.
119 Fairness (as well as bias) can be defined in more than one way. For purposes of this discussion, a commitment is not made to any one definition; indeed, it may not be either desirable or feasible to arrive at a single definition that would be applied in all circumstances. The key point is that transparency will be essential in building informed trust in the fairness of a system, regardless of the specific definition of fairness that is operative.
120 To the extent permitted by the normal operation of the A/IS: allowing for, for example, variation in the human inputs to a system that may not be eliminated in any attempt at replication.
121 With regard to information explaining how a system arrived at a given output, GDPR makes provision for a decision subject’s right to an explanation of algorithmic decisions affecting him or her: automated processing of personal data “should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision.” GDPR, Recital 71.
122 Even among sensitive data, some data may be more sensitive than others. See I. Ajunwa, “Genetic Testing Meets Big Data: Tort and Contract Law Issues,” 75 Ohio St. L. J. 1225 (2014). Available: https://ssrn.com/abstract=2460891.
123 See A. Baker, “Updated N.Y.P.D. Anti-Crime System to Ask: ‘How We Doing?’” New York Times, May 8, 2017, https://www.nytimes.com/2017/05/08/nyregion/nypd-compstat-crime-mapping.html; S. Weichselbaum, “How a ‘Sentiment Meter’ Helps Cops Understand Their Precincts,” Wired, July 16, 2018. Available: https://www.wired.com/story/elucd-sentiment-meter-helps-cops-understand-precincts/.
124 This table is a preliminary draft and is meant only to illustrate a useful tool for facilitating reasoning about who should have access to what information. Other categories of stakeholder and other categories of information (e.g., the identity and nature of the designer/manufacturer of the A/IS, the identity and nature of the investors backing a particular system or company) could be added as needed.
125 For discussions of these two dimensions of explanation, see S. Wachter et al., “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation” (2017); A. Selbst and S. Barocas, The Intuitive Appeal of Explainable Machines.
126 R. Wexler, “Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System,” Stanford Law Review 70, no. 5 (2018): 1342-1429; J. Tashea, “Federal judge releases DNA software source code that was used by New York City’s crime lab,” ABA Journal (2017). http://www.abajournal.com/news/article/federal_judge_releases_dna_software_source_code.
127 Or, if two approaches are found to be, for practical purposes, equally effective, the simpler, more easily explained approach may be preferred.
128 For a discussion of the limits of transparency and of alternative modes of gaining actionable answers to questions of verification and accountability, see J. A. Kroll, J. Huey, S. Barocas, E. W. Felten, J. R. Reidenberg, D. G. Robinson, and H. Yu, “Accountable Algorithms” (March 2, 2016), University of Pennsylvania Law Review, Vol. 165, 2017 Forthcoming; Fordham Law Legal Studies Research Paper No. 2765268. Available at SSRN: https://ssrn.com/abstract=2765268. See also J. A. Kroll, “The fallacy of inscrutability,” Phil. Trans. R. Soc. A 376: 20180084. http://dx.doi.org/10.1098/rsta.2018.0084. (Note p. 9: “While transparency is often taken to mean the disclosure of source code or data, possibly to a trusted entity such as a regulator, this is neither necessary nor sufficient for improving understanding of a system, and it does not capture the full meaning of transparency.”)
129 In particular with respect to due process, the current dialogue on the use of A/IS centers on the tension between the need for transparency and the need for the protection of intellectual property rights. Adhering to the principle of Effectiveness as articulated in this work can substantially help in defusing this tension. Reliable empirical evidence of the effectiveness of A/IS in meeting specific real-world objectives may foster informed trust in such A/IS, without disclosure of proprietary or trade secret information.
130 S. Wachter, B. Mittelstadt, and C. Russell, “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR,” SSRN Electronic Journal, p. 5, 2017, for the example cited.
131 W. L. Perry, B. McInnis, C. C. Price, S. C. Smith, and J. S. Hollywood, “Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations,” The RAND Corporation, pp. 67-69, 2013.
132 Support from the University of Memphis was led by Richard Janikowski, founding Director of the Center for Community Criminology and Research (School of Urban Affairs and Public Policy, the University of Memphis) and the Shared Urban Data System (The University of Memphis). ...
133 E. Figg, “The Legacy of Blue CRUSH,” High Ground, March 19, 2014. ...
134 Figg, “Legacy.”
135 Nucleus Research, ROI Case Study: IBM SPSS-Memphis Police Department, Boston, Mass., Document K31, June 2010. Perry et al., Predictive Policing, 69.
136 Figg, “Legacy.”
137 Figg, “Legacy.”
138 See: AI Now, Algorithmic Accountability Policy Toolkit, p. 12, Oct. 2018. Available: https://ainowinstitute.org/aap-toolkit.pdf; D. Robinson and L. Koepke, Stuck in a Pattern: Early Evidence on “Predictive Policing” and Civil Rights, Upturn, Aug. 2016. Available: https://www.upturn.org/reports/2016/stuck-in-a-pattern/; S. Brayne, “Big Data Surveillance: The Case of Policing,” American Sociological Review, 2016. Available: https://journals.sagepub.com/doi/10.1177/0003122417725865; A. G. Ferguson, “Policing Predictive Policing,” ...
139 For a discussion of the criteria that may define a “high-crime area,” and so potentially license more intrusive policing, see A. G. Ferguson and D. Bernache, “The ‘High-Crime Area’ Question: Requiring Verifiable and Quantifiable Evidence for Fourth Amendment Reasonable Suspicion Analysis,” American University Law Review, vol. 57, pp. 1587-1644.
140 While A/IS, if misapplied, may perpetuate bias, it holds at least the potential, if applied with appropriate controls, to reduce bias. For a study of how an impersonal technology such as a red light camera may reduce bias, see R. J. Eger, C. K. Fortner, and C. P. Slade, “The Policy of Enforcement: Red Light Cameras and Racial Profiling,” Police Quarterly, pp. 1-17, 2015. Available: http://hdl.handle.net/10945/46909.
141 See, for example: J. Tashea, “Estonia considering new legal status for artificial intelligence,” ABA Journal, Oct. 20, 2017, and European Parliament Resolution of Feb. 16, 2017.
142 See Legal Entity, Person, in Bryan A. Garner, Black’s Law Dictionary, 10th Edition. Thomson West, 2014.
143 J. S. Nelson, “Paper Dragon Thieves,” Georgetown Law Journal 105 (2017): 871-941.
144 M. U. Scherer, “Of Wild Beasts and Digital Analogues: The Legal Status of Autonomous Systems,” Nevada Law Journal 19, forthcoming 2018.
145 See M. U. Scherer, “Of Wild Beasts and Digital Analogues: The Legal Status of Autonomous Systems,” Nevada Law Journal 19, forthcoming 2018; J. F. Weaver, Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws. Santa Barbara, CA: Praeger, 2013; L. B. Solum, “Legal Personhood for Artificial Intelligences,” North Carolina Law Review 70, no. 4 (1992): 1231-1287.
The Mission and Results of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems ...
To ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity. ...
To advance toward this goal, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems brought together more than a thousand participants from six continents who are thought leaders from academia, industry, civil society, policy, and government in the related technical and humanistic disciplines to identify and find consensus on timely issues surrounding autonomous and intelligent systems. ...
By “stakeholder” we mean anyone involved in the research, design, manufacture, or messaging around intelligent and autonomous systems, including universities, organizations, governments, and corporations, all of which are making these technologies a reality for society.
From Principles to Practice: Results from Our Work to Date
In addition to the creation of Ethically Aligned Design, The IEEE Global Initiative, independently or through the IEEE Standards Association, has directly inspired the following works: ...
The launch of the IEEE P7000™ series of approved standardization projects
This is the first series of standards in the history of the IEEE Standards Association that explicitly focuses on societal and ethical issues associated with a certain field of technology.
- Artificial Intelligence and Ethics in Design ...
These ten courses are designed for global professionals, as well as their managers, working in engineering, IT, computer science, big data, artificial intelligence, and related fields across all industries who require up-to-date information on the latest technologies. The courses explicitly mirror content from Ethically Aligned Design, and feature as instructors numerous experts who helped create Ethically Aligned Design.
The creation of an A/IS Ethics Glossary ...
The Glossary features more than two hundred pages of terms that help to define the context of A/IS ethics for multiple stakeholder groups, specifically: engineers, policy makers, philosophers, standards developers, and computational disciplines experts. It is currently in its second iteration and has also been informed by the IEEE P7000™ standards working groups.
The IEEE Standards Association, inspired by the work of The IEEE Global Initiative, has contributed significantly to the establishment of The Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS). It is a global forum for discussion, debate, and collaboration for organizations interested in the development and use of standards to further the creation of autonomous and intelligent systems. OCEANIS members are working together to enhance the understanding of the role of standards in facilitating innovation, while addressing problems that expand beyond technical solutions to addressing ethics and values. ...
The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) has the goal to create specifications for certification and marking processes that advance transparency, accountability, and reduction in algorithmic bias in autonomous and intelligent systems. ECPAIS intends to offer a process and define a series of marks by which organizations can seek certifications for their processes around the A/IS products, systems, and services they provide. ...
More information can be found at: standards.ieee.org/industry-connections/ecpais.html
- The launch of CXI ...
The Council on Extended Intelligence (CXI) was directly inspired by the work of The IEEE Global Initiative and the work of The MIT Media Lab around “Extended Intelligence”. CXI was launched jointly by the IEEE Standards Association and The MIT Media Lab. CXI’s mission is to proliferate the ideals of responsible participant design, data agency, and metrics of economic prosperity, prioritizing people and the planet over profit and productivity. Membership includes thought leaders from the EU Parliament and Commission, the UK House of Lords, the OECD, the United Nations, local and national administrations, and renowned experts in economics, data science, and multiple other disciplines from around the world. ...
The Ethically Aligned Design University Consortium (EADUC) is being established with the aim to reach every engineer at the beginning of their studies to help them prioritize values-driven, applied ethical principles at the core of their work. Working in conjunction with philosophers, designers, social scientists, academics, data scientists, and the corporate and policy communities, EADUC also has the goal that Ethically Aligned Design will be used in teaching at all levels of education globally as the new vision for design in the algorithmic age. ...
- The launch of "AI Commons" ...
The work of The IEEE Global Initiative has delivered key ideas and inspiration that are rapidly evolving toward establishing a global collaborative platform around A/IS. The mission of AI Commons is to gather a true ecosystem to democratize access to AI capabilities and thus to allow anyone, anywhere to benefit from the possibilities that AI can provide. In addition, the group will be working to connect problem owners with the community of solvers, to collectively create solutions with AI. The ultimate goal is to implement a framework for participation and cooperation to make using and benefiting from AI available to all.
The IEEE P7000™ series of standards projects under development represents a unique addition to the collection of over 1,900 global IEEE standards and projects. Whereas more traditional standards have a focus on technology interoperability, functionality, safety, and trade facilitation, the IEEE P7000 series addresses specific issues at the intersection of technological and ethical considerations. Like its technical standards counterparts, the IEEE P7000 series empowers innovation across borders and enables societal benefit.
For more information or to join any working group, please see the links below. Committees that authored Ethically Aligned Design, as well as other committees within IEEE that created specific working groups, are listed below each project.
IEEE P7000™ - IEEE Standards Project Model Process for Addressing Ethical Concerns During System Design
Inspired by Methodologies to Guide Ethical Research and Design Committee, and supported by IEEE Computer Society standards.ieee.org/project/7000.html ...
IEEE P7001™ - IEEE Standards Project for Transparency of Autonomous Systems
IEEE is the largest technical professional organization dedicated to advancing technology for the benefit of humanity, with over 420,000 members in more than 160 countries. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice in a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics. ...
To learn more, visit the IEEE website: www.ieee.org ...
About the IEEE Standards Association ...
The IEEE Standards Association (IEEE-SA), a globally recognized standards-setting body within IEEE, develops consensus standards through an open process that engages industry and brings together a broad stakeholder community. IEEE standards set specifications and best practices based on current scientific and technological knowledge. The IEEE-SA has a portfolio of over 1,900 active standards and over 650 standards under development. ...
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (The IEEE Global Initiative) is a program of the IEEE ...
Standards Association with the status of an Operating Unit of The Institute of Electrical and Electronics Engineers, Incorporated (IEEE), the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity with over 420,000 members in more than 160 countries. ...
The IEEE Global Initiative provides the opportunity to bring together multiple voices in the related technological and scientific communities to identify and find consensus on timely issues. ...
Names of experts involved in the various committees of The IEEE Global Initiative can be found at: standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf
IEEE makes all versions of Ethically Aligned Design available under the Creative Commons Attribution-Non-Commercial 4.0 United States License. Subject to the terms of that license, organizations or individuals can adopt aspects of this work at their discretion at any time. It is also expected that Ethically Aligned Design content and subject matter will be selected for submission into formal IEEE processes, including standards development and education purposes. ...
The IEEE Global Initiative and Ethically Aligned Design contribute, together with other efforts within IEEE, such as IEEE TechEthics™ (techethics.ieee.org), to a broader effort at IEEE to foster open, broad, and inclusive conversation about ethics in technology.
Our Process ...
To ensure the greatest cultural relevance and intellectual rigor possible in our work, The IEEE Global Initiative sought and received global feedback on versions 1 and 2 (after hundreds of experts created first drafts) to inform this Ethically Aligned Design, First Edition (EAD1e).
We released Ethically Aligned Design, Version 1 (EADv1) as a Request for Input in December of 2016 and received over two hundred pages of in-depth feedback about the draft. We subsequently released Ethically Aligned Design, Version 2 (EADv2) in December 2017 and received over three hundred pages of in-depth feedback about that draft. This feedback included further insights about the eight original sections from EADv1, along with new input for the five new sections included in EADv2.
Both versions included “candidate recommendations” rather than direct “recommendations”, because our communities were still engaged in debate and weighing various options.
This process was taken to the next level with Ethically Aligned Design, First Edition (EAD1e), using EADv1 and EADv2 as its initial foundation. Although we expect future editions of Ethically Aligned Design, a vetting process has taken place within the global community that gave rise to this seminal work. Therefore, we can now speak of “recommendations” without any further restriction, and EAD1e also includes a set of policy recommendations. ...
This process included matters of “internal consistency” across the various chapters of EAD1e, as well as more specific or broader criteria, such as the maturity of specific chapters and consistency with respect to policy statements of IEEE. The review also considered the need for IEEE to maintain a neutral, and thus credible, position in areas and processes where it is likely that IEEE may become active in the future.
Beyond these formal procedures, the Board of Governors of IEEE Standards Association has endorsed the work of the IEEE Global Initiative and offers it for consideration by governments, businesses, and the public at large with the following resolution: ...
Whereas the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is an authorized activity within the IEEE Standards Association Industry Connections program created with the stated mission: ...
To ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity; ...
Whereas versions 1 and 2 of Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Autonomous and Intelligent Systems (A/IS) were developed as calls for comment and candidate recommendations by several hundred professionals including engineers, scientists, ...
ethicists, sociologists, economists, and many others from six continents; ...
Whereas the recommendations contained in Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS), First Edition are the result of the consideration of hundreds of comments submitted by professionals and the public at large on versions 1 and 2; ...
Whereas through an extensive, global, and open collaborative process, more than a thousand experts of The IEEE Global Initiative have developed and are in the process of final editing and publishing Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS), First Edition; now, therefore, be it ...
Resolved, that the IEEE Standards Association Board of Governors: ...
expresses its appreciation to the leadership and members of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems for the creation of Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS), First Edition; and ...
supports and commends the collaborative process used by The IEEE Global Initiative to achieve extraordinary consensus in such complex and vast matters in less than three years; and ...
endorses and offers Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS), First Edition to businesses, governments and the public at large for consideration and guidance in the ethical development of autonomous and intelligent systems. ...
Terminology Update ...
For Ethically Aligned Design, we prefer to avoid, as far as possible, the vague term “AI” and use instead the term autonomous and intelligent systems (A/IS). Even so, it is inherently difficult to define “intelligence” and “autonomy”. One could, however, limit the scope for practical purposes to computational systems that use algorithms and data to address complex problems and situations, including the capability of improving their performance based on evaluating previous decisions, and say that such systems could be considered “intelligent”.
Such systems could also be regarded as “autonomous” in a given domain as long as they are capable of accomplishing their tasks despite environmental changes within that domain. This terminology is applied throughout Ethically Aligned Design, First Edition to ensure the broadest possible application of ethical considerations in the design of the addressed technologies and systems.
How the Document Was Prepared ...
This document was developed by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which is an authorized Industry Connections activity within the IEEE Standards Association, a Major Organizational Unit of IEEE. ...
It was prepared using an open, collaborative, and consensus-building approach, following the processes of the Industry Connections framework program of the IEEE Standards Association (standards.ieee.org/industry-connections). This process does not necessarily incorporate all comments or reflect the views of every contributor listed in the Acknowledgements above or after each chapter of this work.
The views and opinions expressed in this collaborative work are those of the authors and do not necessarily reflect the official policy or position of their respective institutions or of the Institute of Electrical and Electronics Engineers (IEEE). This work is published under the auspices of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems for the purposes of furthering public understanding of the importance of addressing ethical considerations in the design of autonomous and intelligent systems. ...
In no event shall IEEE or IEEE-SA Industry Connections Activity Members be liable for any errors, omissions or damage, direct or otherwise, however caused, arising in any way out of the use of or application of any recommendation contained in this publication. ...
The Board of Governors of the IEEE Standards Association, its highest governing body, commends the consensus-building process used in developing Ethically Aligned Design, First Edition, and offers the work for consideration and guidance in the ethical development of autonomous and intelligent systems. ...
How to Cite Ethically Aligned Design ...
Please cite Ethically Aligned Design, First Edition in the following manner: ...
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition. IEEE, 2019. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html
Key References ...
Key reference documents listed in Ethically Aligned Design, First Edition: ...
Appendix 1 - The State of Well-being Metrics (An Introduction): bit.ly/ead1e-appendix1 (Referenced in Well-being Section)
Appendix 2 - The Happiness Screening Tool for Business Product Decisions: bit.ly/ead1e-appendix2 (Referenced in Well-being Section)
Appendix 3 - Additional Resources: Standards Development Models and Frameworks: bit.ly/ead1e-appendix3 (Referenced in Well-being Section)
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (“The IEEE Global Initiative”) is a program of The Institute of Electrical and Electronics Engineers, Incorporated (“IEEE”), the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity, with over 420,000 members in more than 160 countries. The IEEE Global Initiative and Ethically Aligned Design contribute to broader efforts at IEEE about ethics in technology.
1 The symbols, values, institutions, and norms of a societal group through which people imagine their lives and constitute their societies.
17 See Artificial Intelligence: the Road ahead in Low and Middle-income Countries ...