Yao Ye
Abstract: Artificial intelligence algorithms violate human privacy rights and cause algorithmic discrimination, information bubbles, and other dilemmas. These problems are rooted in the uninterpretability of algorithmic technology and the concealment afforded by the trade secret protection model. The whistleblower immunity system, accepted by many countries as a reasonable disclosure mechanism for trade secrets, can encourage employees to disclose illegal, irregular, and unethical artificial intelligence algorithms; it has both an ethical basis and a legal foundation. China's whistleblower immunity provisions are scattered across different laws and require interpretation and supporting systems. China should draw on the advanced experience of foreign countries, combined with its actual situation, to construct a specific whistleblower immunity system.
Keywords: artificial intelligence algorithms; trade secrets; uninterpretability; whistleblower immunity system
CLC: D923.4 Document Code: A Article ID: 2096-9783(2022)03-0129-09
1 Introduction
Together with genetic engineering and nanotechnology, artificial intelligence is one of the three cutting-edge technologies of the 21st century. It is a technical science of theories, methods, technologies, and applied systems that simulate, extend, and expand human intelligence[1]. The rise of the third wave of artificial intelligence (AI) stems mainly from three developing technologies: algorithms, big data, and computing power. The core technology of AI lies in machine learning, which mainly uses various algorithms to enable machines to learn patterns from samples, data, and experience so as to recognize new patterns or make predictions[2]. Existing research has found that the difficulty of patenting AI algorithms irreconcilably conflicts with their high competitive value. Companies therefore tend to treat AI algorithms as trade secrets and protect them physically[3]. At the same time, however, the complexity of AI algorithms can turn them into a "black box", and the designers of AI can hide illegal tricks behind machine algorithms. The covert nature of algorithms creates multiple problems, such as "information bubbles"[4], judicial injustice[5], and violations of the Right to be Forgotten[6]. To address these problems, some scholars have proposed a spectrum of legal regulations for the uninterpretability of algorithms, such as regulation of design goals, rules on design defects, trust maintenance mechanisms, rights regulation, and algorithm supervision[7]. Other scholars believe that the regulatory exclusivity system protecting pharmaceutical data can be borrowed to encourage the publication of algorithms; exchanging transparency for complete protection is the logic of that system[8]. The regulation of algorithms has become an ethical and technical issue that must be addressed.
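The machine-learning idea described above — inferring a rule from labeled samples and applying it to unseen inputs — can be made concrete with a minimal, hypothetical sketch. The numbers and "risk" labels below are invented for illustration only; real systems scale this inference up to millions of parameters, which is where interpretability is lost.

```python
# A minimal illustration of machine learning's core idea: a 1-nearest-neighbour
# classifier in plain Python that "learns" from labeled samples and predicts
# labels for unseen inputs. Data and labels are hypothetical.

def nearest_neighbour(samples, query):
    """Predict the label of `query` from the closest labeled sample."""
    return min(samples, key=lambda s: abs(s[0] - query))[1]

# (feature, label) pairs the machine "learns" from
training = [(1.0, "low risk"), (2.0, "low risk"),
            (8.0, "high risk"), (9.0, "high risk")]

print(nearest_neighbour(training, 1.5))  # query near the "low risk" cluster
print(nearest_neighbour(training, 8.5))  # query near the "high risk" cluster
```

Even this toy model issues predictions without any human-written rule for what counts as high risk: the rule is implicit in the data, not stated in the code.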
Changes in technology and competition among interests often require a response from legal rules, and the proper disclosure of artificial intelligence algorithms (hereinafter "AI algorithms") has become unstoppable. The Anti-Unfair Competition Law provides a general path for the proper disclosure of trade secrets, namely the whistleblower immunity system. However, whether this path is feasible and how to implement it remain open questions. Given this, we should explore the reasons for the formation of the "black box" of AI algorithms and the trade secret protection path for AI algorithms, and analyze how to improve the disclosure immunity system for AI algorithms.
2 The Risk of Algorithm Uninterpretability and the Causes of Uninterpretable Algorithms
The hidden nature of algorithms exacerbates inequality, hinders free choice, and increases insecurity, raising concerns about the controllability of algorithmic power[9]. The opacity of AI algorithms is reflected both in the difficulty of accessing the technology and in all aspects of life, causing uncontrollable damage to human rights such as the Right to Privacy, the Right to Equality, and the Right to be Forgotten.
2.1 Risks Caused by "Algorithmic Black Boxes"
There is no doubt that AI technology has "two sides": it can bring positive effects and significant economic benefits to society, but it can also put society at moral and ethical risk[10]. Algorithmic discrimination is the most prominent problem caused by AI algorithms. In traditional human societies, discrimination is straightforward to recognize, as humans express their preferences through words and gestures. AI technologies, by contrast, learn from vast databases created by humans, and these datasets are often biased. For example, when an AI served as a beauty pageant judge in 2016, it eliminated most black candidates[11]. The combination of big data and algorithms makes algorithmic discrimination challenging to identify. In terms of the relationship between data and algorithms, algorithmic bias can be divided into three categories. The first is algorithmic discrimination by biased agents: although algorithmic decision-makers use objectively neutral underlying data, combining these legitimate, objectively neutral data produces discriminatory consequences[12]. The second is algorithmic discrimination by feature selection: when algorithms discriminate against specific communities, the bias is repeated and intensified. Algorithms have a value orientation, and there is a risk that this neutrality becomes unbalanced inside a "black box" package.
Such discrimination, in turn, has multiple negative consequences. The first is disparity in the distribution of social resources: in scenarios where banks use algorithms to extend credit, for example, black or low-income people have less access to bank loans. The second is price discrimination: for example, algorithmic pricing on travel platforms exploits regular customers by collecting data on users' travel choices and preferences. The third concerns judicial fairness: in State v. Loomis in the United States, Loomis argued that the state court's decision was flawed because it relied on the COMPAS model and appealed to the state Supreme Court; however, COMPAS's owner refused to disclose the model in order to preserve its trade secret.
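The mechanism behind such credit outcomes — historical bias in the training data being faithfully reproduced by a formally neutral model — can be sketched in a few lines of Python. The decisions and the `postcode` feature below are hypothetical; the postcode stands in for a proxy variable that correlates with a protected group:

```python
from collections import Counter, defaultdict

# Invented historical loan decisions. No protected attribute appears as an
# input feature, but "postcode" acts as a proxy for one, so the data are biased.
history = [
    ("postcode_A", "approve"), ("postcode_A", "approve"),
    ("postcode_A", "approve"), ("postcode_A", "deny"),
    ("postcode_B", "deny"),    ("postcode_B", "deny"),
    ("postcode_B", "deny"),    ("postcode_B", "approve"),
]

# "Training": record the majority historical decision per feature value.
votes = defaultdict(Counter)
for feature, decision in history:
    votes[feature][decision] += 1

def predict(feature):
    """A formally neutral rule that nevertheless reproduces past bias."""
    return votes[feature].most_common(1)[0][0]

print(predict("postcode_A"))  # approve — the historical bias is reproduced
print(predict("postcode_B"))  # deny
```

Nothing in `predict` mentions race or income, yet applicants from `postcode_B` are systematically denied: the discrimination hides in the data, not the code, which is one reason inspecting source code alone reveals little.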
Algorithms may also cause the leakage of personal privacy. Browsing histories, purchase records, identity information, and speech left on the internet have become unique symbols and representations of human beings. However, individual control of and access to data are ceded to the recorders and possessors of data, who are usually monopolistic technology companies, governments, and other institutions[13]. The Cybersecurity Law of the People's Republic of China provides for the right to delete personal information, the E.U. GDPR provides for the right to be forgotten, and the U.S. also mandates that companies delete users' private data. However, algorithms trained to analyze data may infer the source of such data without the consumer's consent and assemble a complete portrait of the individual. At this stage, the human right to privacy and the right to be forgotten cannot keep pace with the development of technology.
Cass Sunstein argues that internet users choose to access the information they are interested in based on their personal preferences, and excluding and ignoring other content may create an "information bubble" in the long run[14]. "Personalized recommendation" and "positive feedback" are the core elements in the formation of an "information bubble": user attention is the measure of advertisers' investment in advertising. The platform captures users' attention through intelligent algorithmic recommendations, and user stickiness is enhanced. However, users' preferences also receive positive feedback, leading them to mistakenly believe that their values are the mainstream values. This information bias is the information bubble: a bias in self-perception that is difficult to change through people's self-awareness.
In a society where technological rationality has overstepped its bounds, wrongful, unfair, and discriminatory treatment, institutionalized privacy concerns, and self-perception bias are linked to algorithmic opacity or the "black box". Investigating the causes of algorithmic opacity is an integral part of risk control and even risk mitigation.
2.2 Causes of "Algorithmic Black Box" Formation
The technical characteristics of AI algorithms and the profit-seeking instincts of enterprises have led to a preference for trade secret mechanisms in the choice of protection models for AI algorithms. However, the combination of the "black box" nature of AI algorithms and the secrecy of trade secret protection poses a threat to the judicial environment, public morality, individual privacy, social innovation, and many other areas.
The primary reason for the formation of algorithmic black boxes is the inherent uninterpretability of AI algorithms. "Deep learning" is an "end-to-end" black box, and humans cannot know the process, reasons, and causes of its decisions[15]. First, humans sometimes do not understand, let alone specifically describe, the algorithms they design. For example, Facebook engineers urgently shut down a program they themselves had invented[16]. Algorithms operating on large datasets may still behave unpredictably regardless of whether their language is readable, and this unpredictability does not depend on whether the algorithm itself is designed to be "predictive" or "descriptive". For example, the decision-making processes of some algorithms are based on randomization techniques (stochastic algorithms)[17], and randomization itself can severely limit the predictability of outcomes. Second, the aggregated use of algorithms exacerbates the barriers to understanding them. Algorithms are not necessarily used in isolation; instead, integrated algorithms are widely used to apply a given database to different industrial applications. Multiple algorithms may be used to analyze a database to determine the best solution or interpretation, as in credit scoring or Netflix's rankings[18]. Third, algorithms have unexplainable structural features. This is reflected not only in the complexity of the algorithm's internal structure but also in the algorithm's "structural" position in the big data environment. Algorithms do not operate independently but perform as parts of an extensive data system: the data or information generated at an earlier stage are used as input for a later stage. From this perspective, studying the content of an algorithm itself can only explain the logic it applies at its own stage, not the logic applied at earlier stages.
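The point about randomization limiting predictability can be illustrated with a short, hypothetical Python sketch: a ranking routine that breaks score ties at random may return different orderings for identical inputs on different runs, so even full access to the code does not fully determine the output:

```python
import random

def randomised_rank(items, scores, rng):
    """Order items by descending score, breaking ties at random."""
    return sorted(items, key=lambda i: (scores[i], rng.random()), reverse=True)

items = ["a", "b", "c"]
scores = {"a": 1.0, "b": 1.0, "c": 0.5}  # "a" and "b" are tied

run1 = randomised_rank(items, scores, random.Random(1))
run2 = randomised_rank(items, scores, random.Random(2))
# Both runs agree that "c" ranks last, but the tied items "a" and "b"
# may appear in a different order from run to run.
```

The stable parts of the outcome (here, that "c" ranks last) are predictable from the code; the tie-breaking is not, which is exactly the gap between auditing an algorithm's text and predicting its behavior.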
Thus, assessing algorithm interpretability requires examining not only the content of the algorithm or code itself but also the algorithm's place in the extensive data system and the tasks it performs[19]. The opacity of the operating principles of AI algorithms is the main technical source of algorithmic opacity.
The "man-made uninterpretability" of AI algorithms exacerbates their uninterpretability, because the nature of the trade secret mechanism is consistent with enterprises' demands to protect AI algorithms. China's former Anti-Unfair Competition Law defined trade secrets as technical and business information that is not publicly known, has value, and is subject to secrecy management[20]. The Regulations on the Protection of Trade Secrets (Draft for Public Comments) amend the elements of trade secret protection to "not being known to the public", "having commercial value", and "the right holder taking appropriate confidentiality measures"; the object shall be technical information, business information, and other commercial information. What can be protected in an AI algorithm is mainly technical information, such as the data used to train the algorithm, the design model (which can be regarded as technical drawings), the programming specification, the source code, and related technical information. In addition, Chinese companies often choose to protect their AI algorithm solutions as trade secrets rather than under patent law. The Opinions of the General Office of the CPC Central Committee and the General Office of the State Council on Strengthening the Protection of Intellectual Property Rights clearly state that we should "explore ways to strengthen the effective protection of trade secrets, confidential business information, and their source codes". According to a survey conducted by the European Union, companies whose central economic pillar is data are increasingly likely to choose trade secrets to protect AI algorithms when protecting creative information. It is clear that the choice to protect AI algorithms as trade secrets is justified in terms of both institutional design and practice.
The trade secret model of protection offers significant advantages and is the choice of companies in practice. Firstly, the scope of trade secret protection is enormous, covering customer lists, production methods, marketing strategies, pricing information, chemical formulas, unpatented creation processes, and other information able to contribute to the development of enterprises. The trade secret regime is related to the patent system in that both can provide specific protections for valuable information to exclude others from using it. Secondly, trade secrets are not time-limited and are easy to obtain: owners do not have to pay registration fees. Trade secrets are protection-oriented, and their primary function is to give the enterprise exclusive control over the information product without formalities such as providing information about the trade secret to government authorities. As a result, trade secret protection is often chosen for technologies that cannot be independently discovered or quickly replicated and whose acquisition requires significant effort.
Given the "black box" of AI algorithms, monitoring their operation occupies a critical position. Only modest disclosure of the datasets used to train the algorithms and the logic of their design will enable robust oversight of their design and application by the general public. The whistleblower immunity system developed in the E.U. and the U.S. preserves the secrecy of AI algorithms in the age of data and algorithms while encouraging employees to disclose illegal algorithms at the right time, providing risk control and appropriate oversight of AI algorithms.
3 Risk Control of Algorithms in the Context of the Whistleblower Immunity Regime
3.1 The Origins of Whistleblower Immunity System and Its Mechanism
Whistleblower immunity refers to the system under which a person bears no liability for disclosing an enterprise's trade secrets to a third party for legitimate reasons. At the level of the legal system, the whistleblower immunity system is an exception or defense to the infringement of trade secrets. Such exceptions typically concern the public interest, such as public health, environmental pollution, and personal safety; the rule is therefore usually referred to as the "public policy exception" or "public interest defense" to trade secret protection[21]. The "whistleblower" under this system is usually a person who legally learns the trade secret directly from the right holder and owes a duty of confidentiality, including not only the employees of the trade secret holder but also its business partners. The whistleblower is exempt from criminal and civil liability, which covers not only liability for breaching a confidentiality agreement but also other liability for breach of contract. At the mechanism level, whistleblower immunity consists not only of the remedy of "immunity from liability" but also of protections such as "whistleblower anonymity" and incentives such as monetary awards.
The Uniform Trade Secrets Act (UTSA) does not explicitly state a reasonable trade secret exception. Still, the Restatement (Third) of Unfair Competition holds that "disclosure of information relating to public health or safety, or criminal or tortious conduct, or other matters of significant public concern" may constitute a public policy exception. Corporate violations are highly concealed and difficult for the public or government regulators to detect directly; companies also use confidentiality agreements to silence employees and contractual counterparties and thereby avoid liability. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) provides a good framework for the trusted intermediary exception, and the SEC has created a liability exclusion rule for whistleblowers who disclose material trading information. Subsequently, building on the Theft of Trade Secrets Clarification Act and the Economic Espionage Act (EEA), the U.S. passed the Defend Trade Secrets Act (DTSA) in 2016, establishing the basic framework of a general "whistleblower immunity regime".
The E.U. enacted Directive 2016/943 on the Protection of Trade Secrets (hereinafter "EU-TSD"), which provides in Article 5 that claims concerning the disclosure of a company's trade secrets shall be dismissed where the disclosure was made in exercising the right to freedom of expression under the E.U. Charter. The EU-TSD provides that protected trade secrets may be disclosed where necessary for revealing misconduct, wrongdoing, or illegal activity that is detrimental to the public interest, or for protecting a legitimate interest recognized by E.U. law or the national law of an E.U. country. In other words, claims against disclosures made in the exercise of freedom of expression, information, and the press, in protection of the public interest, in protection of the fundamental rights of employees, or in relation to criminal conduct or other violations of the law must be dismissed. This new system must reconcile contradictory imperatives: the protection of information, on the one hand, and the safeguarding of fundamental freedoms and the values of transparency (the right to be informed and to alert), on the other. Under Article 19 of Directive (EU) 2016/943, Member States had to comply with the Directive by 9 June 2018. In France, the law on the protection of trade secrets entered into force on 24 August 2018, and the implementing decree was announced on 13 December.
The Proposal for a Directive of the European Parliament and of the Council on the protection of persons reporting on breaches of Union law (EU 2018/0106 (COD)) enumerates many fields in which whistleblower protection applies. By reporting breaches of Union law that are harmful to the public interest, many persons act as "whistleblowers" and thereby play a key role in exposing and preventing such breaches and safeguarding society's welfare. The proposal aims to prevent and detect procurement-related fraud and corruption in the implementation of the Union budget, and to tackle insufficient enforcement of public procurement rules by national contracting authorities and entities concerning the execution of works, the supply of products, or the provision of services. Whistleblower protection also plays an integral part in financial services, environmental protection, and animal health, and helps control the safety of products placed on the internal market. Preventing the diversion of firearms, their parts, components, and ammunition; preventing and deterring breaches of Union rules on transport safety; and respecting privacy and the protection of personal data likewise fall within its scope. The proposal also fights fraud, corruption, and other illegal activity.
The areas covered by the whistleblower immunity system concern international security and the public interest and are already addressed by specific laws. Whistleblower immunity has thus become essential to protecting the public interest.
3.2 Possible Application of the Whistleblower Immunity Regime
Firstly, those who have access to and can disclose AI algorithmic technology are the most appropriate whistleblowers. The role of the whistleblower immunity system is significant: evidence suggests that employee disclosures may be the best source of information about corporate violations. In the case of fraud, for example, nearly 40 percent of initial fraud detections came from employee disclosures, compared to 16.5 percent from internal audits and 13.4 percent from government regulation[22]. The whistleblower immunity system saves enforcement resources because it not only detects and corrects corporate violations far faster than external monitoring but also prompts companies, under the threat of disclosure, to regulate their own behavior. More importantly, in the case of internal organizational disclosure, whistleblowing can be a vital, effective, and valuable source of feedback, as it can correct misunderstandings and wrongdoing without the financial and reputational risks associated with public disclosure. Where an AI algorithm black box exists, whether created deliberately or through design flaws, it is the designers, trainers, and users of the algorithms who are most likely to have access to the relevant information. They can pinpoint deviations between the algorithms' operation and their predictions and identify problems; they are closest to the core secrets of companies or research institutions and can see any scheming attached to a trained algorithm. These are often indispensable researchers or employees of a company or research institution, and it is more practical to rely on the technical skills of such professionals.
Besides, AI algorithms are disclosed to protect the public interest, and here the whistleblower immunity system provides clear limits. Whistleblower immunity rests on the judgment that "the need to protect the public interest outweighs the need to uphold business ethics". In common law countries, "public interest law" is a general term for civil rights law, poverty relief law, environmental protection law, health care law, and other laws relating to the public interest. Japan introduced the concept of "public welfare", whose core is the equality of individuals and the fair distribution and protection of human rights[23]. The principle of "public order and good morals" in Chinese civil law is close to the "public interest". Chinese doctrine contains only scattered discussions, such as the administrative organ's right to obtain information in administrative investigations[24], food safety risks[25], and corruption[26]. The concept of public interest thus varies from country to country and is discussed case by case. At a minimum, however, the "public interest" can be understood as the inviolable economic, political, physical, and livelihood interests of an unspecified majority of people rather than of a specific person. As detailed above, AI algorithms may cause uncontrollable harm to human rights such as the right to privacy, the right to equality, and the right to be forgotten. In this sense, there is space for a whistleblower immunity regime to apply.
4 The Specific Construction of the Whistleblower Immunity System
The Regulations on the Protection of Trade Secrets (Draft for Public Comments) of China, published on 4 September 2020, simply provide for a "whistleblower immunity system", limiting the subjects of disclosure to employees, former employees, or partners of the right holder or holder of the trade secret. The permissible reasons for disclosure are restricted to environmental protection, public health, public security, illegal and criminal acts, and other public or national interests. However, these provisions stand alone and lack systematic supporting measures. Trade secret legislation in China is very fragmented, with legal content scattered across different sectoral laws, administrative regulations, and judicial interpretations. Specifically, the Anti-Unfair Competition Law is the central part, supplemented by sectoral laws such as the General Provisions of the Civil Law and the Criminal Law, and procedural laws such as the Civil Procedure Law and the Criminal Procedure Law. The Law on Promoting the Transformation of Scientific and Technological Achievements, the Regulations on the Administration of Import and Export of Technologies, the Interpretation of the Supreme People's Court on Some Issues Concerning the Application of Law in the Trial of Civil Cases Involving Unfair Competition, and many other regulations and judicial interpretations are also essential parts of the trade secret regulation system[27]. This lack of uniformity leaves the legal content highly imperfect, and some provisions remain merely formal. China's legal system provides the fundamental institutional basis for the whistleblower immunity system and for the supervision and transparency of AI algorithms, but it lacks clear guidelines on application. In light of its actual situation, China can construct the whistleblower immunity system systematically and specifically.
4.1 Following the Principle of Proportionality
The whistleblower immunity system works by using liability immunity to encourage those with close access to trade secrets to disclose illegal trade secret information. The keyword in managing the trade secrets of AI algorithms is "proportionate": the scale of disclosure of AI algorithms should be measured against the protection of enterprises' competitive advantage. The whistleblower immunity system provides guidance on the issues raised by protecting the trade secrets of AI algorithms. Trade secrets are rooted in commercial transactions, which should also be governed by principles such as honesty and fairness. Professor Deepa Varadarajan argues that a fair-use regime analogous to that in Copyright Law is also suitable for the disclosure of illegal secrets: the purpose of the use, the nature of the trade secret, the proportion of the trade secret used in the product or method, the market impact, and other factors should be considered when determining whether a disclosure is proper. Excessive trade secret protection will not only breed "trade secret hooligans", i.e., rights holders who abuse their rights by using trade secret infringement litigation as a threat, but will also impede the spread of commercial information and hinder competition.
4.2 Clarifying Reasons for Disclosure
First, illegal and improper trade secrets are not protected. Some AI algorithms may be used to steal data from users, implement vertical and horizontal monopolies, harm others, or commit theft. This is not permitted under criminal law, and such algorithms should be disclosed promptly. Second, trade secrets may be revealed in the public interest. Whether a whistleblower may disclose a trade secret depends on the understanding of the "public interest" element. The central question in the first trade secret disclosure case in China was whether the personal safety of car users constituted a public interest: in Hainan Province, Lin Youfeng and Chen Miao bought a car whose airbags were a selling point, but the airbags failed to protect them in a car accident, so they sued Haikou Fengzhenghua Company and Tianjin FAW Toyota Company, requesting that the manufacturer provide the car's "airbag" technical standards and "Product Certificate of Conformity". In practice, China has no detailed classification of public interest elements; the scope of disclosure has been adjusted according to the natural and social environment and the level of economic development, and the content of what violates the public interest is constantly updated. It is worth noting that where a breach of law overlaps with a breach of the public interest, the authority of the law should be respected, and the breach should be handled under the law's final provisions. Where the public interest is violated, the principle of proportionality should be followed in selecting the appropriate authorities for disclosure, such as the environmental protection department or the industry and commerce department.
4.3 Sorting out the Scope of Disclosure
In a broad sense, a whistleblower may only disclose the specific AI algorithms that are contrary to the law or the public interest. The data used to train an algorithm and the logic of its design are of more significant concern than the algorithm's code itself; disclosure of source code alone serves only "superficial transparency". For example, the E.U. focuses on "data", granting data subjects the right to be forgotten, the right to portability, etc., and discourages data over-mining and algorithmic over-prediction. Disclosing the data used for algorithm training could therefore be an effective way to uncover the algorithm. In addition, exposing the logic of the algorithm's design may also be effective: some algorithms are designed to work against people or even to break the law, and disclosing the design logic of such algorithms directly is more straightforward and effective.
4.4 Limiting the Subject of Disclosure
AI algorithms are often created by private companies, scientific institutions, or even government agencies, whose internal operations are subject to neither internal nor external oversight. Self-regulation is non-binding and usually lacks enforcement. Natural persons within legal entities bear external, independent liability only in limited circumstances, and companies face only financial compensation for infringement, which contributes to the risk that executives and managers within a company will abuse trade secrets. Research institutions and government agencies are even less likely to face effective external oversight. The general public, including individuals who have been discriminated against or whose privacy has been compromised by algorithms, is even more powerless to disclose them; worse, many customers are not even aware that algorithms are discriminating against them or that their privacy is being snooped on or used illegally by others. The only groups with access to the logic and data of algorithms are internal employees of companies, employees of research institutions, government agencies, or their collaborators.
4.5 Developing the Supporting Systems
First, keep the whistleblower's identity confidential, protecting the personal safety and peace of whistleblowers and their families. Whistleblowers are often employees, and all kinds of information about them are under the control of their companies; to prevent threats and the risk of dismissal, the identities of whistleblowers and their families must be kept confidential. Second, provide incentives for whistleblowers. Although many whistleblowers do not report illegal trade secrets for the sake of government incentives, rewards such as money and honorary titles are essential to encourage more whistleblowers. Third, hold false whistleblowers accountable. Many whistleblowers do not have pure motives, acting out of long-standing grievances against the company or conflicts with senior leaders. The law should be understanding and tolerant of such motives; however, false whistleblowers should be warned or disciplined to deter others and to prevent the whistleblower immunity system from becoming a cover for deliberate retaliation. Fourth, exempt whistleblowers who disclose information about illegal corporate practices to the government (federal, state, or local) or to intermediaries such as lawyers, and exempt intermediary lawyers who keep the trade secrets confidential. Fifth, require employers to inform employees of their disclosure rights and disclosure immunity procedures, such as non-public disclosure and trusted-intermediary safe harbors; otherwise, the confidentiality agreement is void.
5 Conclusion
AI algorithms are the foundation of AI-based industries and essential to economic growth. In recent years, enterprises' protection of AI algorithms as trade secrets, together with the unexplainable nature of the algorithms themselves, has created multiple social risks that the legal system must alleviate. The whistleblower immunity system, grounded in disclosure by internal employees and the protection of the public interest, has gained acceptance in the U.S. and the E.U. and can mitigate the risks caused by the uninterpretability of AI algorithms. In China, however, the system remains a legal text that cannot be concretely applied. Following the principle of proportionality, China should provide specific rules on the grounds for disclosure, the scope of disclosure, the subjects of disclosure, and the supporting systems of the whistleblower immunity regime, so as to protect the personal and property interests of the public while promoting the commercial development of AI algorithms.
References:
[1] WU H D. The question of patent law for artificial intelligence generated inventions[J]. Contemporary Law Review, 2019, 33(4): 24-38.
[2] YANG Z H, GUO L Y, LIU W. Introduction to artificial intelligence and big data technology[M]. Beijing: Tsinghua University Press, 2018: 193.
[3] GERVAIS D. Is intellectual property law ready for artificial intelligence?[J]. GRUR International, 2020, 69(2): 117-118.
[4] WANG R. The essence, causes and ways to break out of the "information cocoon"[J]. Youth Journalist, 2020(36): 23-24.
[5] CAO Y Y. Opportunities, challenges and responses of judicial adjudication in the era of artificial intelligence[J]. Nomocracy Forum, 2019(3): 278-290.
[6] JIANG Y. The regulation of algorithms and the regulated algorithms: the legal regulation of algorithms in the era of artificial intelligence[J]. Hebei Law Science, 2018, 36(12): 142-153.
[7] SU Y. A spectrum of algorithmic regulation[J]. China Legal Science, 2020(3): 165-184.
[8] LIANG Z W. On algorithmic exclusivity: a path choice to break algorithmic bias[J]. Political Science and Law, 2020(8): 94-106.
[9] GUO Z. Rethinking algorithmic power[J]. Law Review, 2020, 38(6): 33-41.
[10] ZRIMILA A, YAO Ye. The challenges and responses of artificial intelligence technology to the patent system[J]. Electronics Intellectual Property, 2020(4): 52-61.
[11] TUAL M. Au-delà des fantasmes: quels sont les problèmes concrets que pose l'intelligence artificielle?[EB/OL]. [2019-12-05]. http://www.lemonde.fr/pixels/article/2017/08/03/au-dela-des-fantasmes-quels-sont-les-problemes-concrets-que-pose-l-intelligence-artificielle_5168330_4408996.html.
[12] ZHENG Z H, XU Z X. Legal regulation and judicial review of algorithmic discrimination in the era of big data - an example of US legal practice[J]. Journal of Comparative Law, 2019(4): 111-122.
[13] KUANG K. The logical rationale, ethical issues and regulatory strategies of intelligent algorithmic recommendation technology[J]. Journal of Shenzhen University (Humanities & Social Sciences), 2021, 38(1): 144-151.
[14] TANG X Y. Principles of marxist philosophy[M]. Chengdu: Southwest University of Finance and Economics Press, 2010: 180-181.
[15] HONG L X. The dilemma, causes and improvement of the algorithmic problem of legal artificial intelligence[J]. Journal of Sichuan Normal University (Social Sciences Edition), 2020, 47(1): 58-70.
[16] HASHIGUCHI M. The global artificial intelligence revolution challenges patent eligibility laws[J]. J. Bus. & Tech. L., 2017(13): 1.
[17] KROLL J A. Accountable algorithms[D]. Princeton University, 2015.
[18] SENI G, ELDER J F. Ensemble methods in data mining: improving accuracy through combining predictions[J]. Synthesis Lectures on Data Mining and Knowledge Discovery, 2010, 2(1): 1-126.
[19] KITCHIN R. Thinking critically about and researching algorithms[J]. Information, Communication & Society, 2017, 20(1): 14-29.
[20] LI Y. Fundamental principles of intellectual property law[M]. Beijing: China Social Science Press, 2010: 644-653.
[21] RUAN K X. The informant immunity system in the U.S. trade secret law and inspiration[J]. Western Law Review, 2017(3): 106-118.
[22] THOMAS L. Federal law may prompt corporate whistle-blowers to act[EB/OL].[2022-05-21].https://perma.cc/AZ46-KWXX.
[23] HAN D Y. Normative analysis of "public interest" in constitutional texts[J]. Legal Forum, 2005(1): 5-9.
[24] TANG L J. The duty of administrative organs to make reasonable use of information obtained through administrative investigation[J]. Journal of Anhui Electrical Engineering Professional Technique College, 2011,16(4): 23-27.
[25] FANG H. On the improvement of China's food safety risk communication system[D]. Changchun University of Science and Technology, 2019.
[26] LI X J. A comparative study of the United Nations Convention Against Corruption and China's criminal procedure[D]. China University of Political Science and Law, 2006.
[27] MA Z F, LI Z C. Re-discussing the legislative model of trade secret protection in China[J]. Electronics Intellectual Property, 2019(12): 4-13.
The Uninterpretability of Artificial Intelligence Algorithms: Risks, Causes, and Mitigation
— With a Discussion of the Specific Construction of China's Whistleblower Immunity System
Yao Ye
(Intellectual Property Research Center, Zhongnan University of Economics and Law, Wuhan 430073, China)