On the Transparency of Artificial Intelligence System
Abstract
To manage artificial intelligence systems more effectively, there is growing demand across society for greater transparency in such systems. Improving the transparency of AI systems helps the relevant personnel better assume their responsibilities and protects the public's right to know. Accordingly, transparency is the principle that appears most frequently in ethical principles and guidelines for artificial intelligence, although different bodies define its connotation somewhat differently. The transparency of an AI system is reflected in many aspects, such as algorithm interpretation, data transparency, and function transparency. At the same time, we need to fully understand the limits of AI transparency, considering the characteristics of intelligence, the current state of AI technology, and the feasibility of technical governance. The transparency of AI systems can be constructed along several paths, including technical approaches, ethical and legal regulation, and cultural approaches.
References
1. Oliver R. What is transparency. New York: McGraw-Hill; 2004.
2. Hansen H, Christensen L, Flyverbom M. Introduction: Logics of transparency in late modernity: Paradoxes, mediation and governance. European Journal of Social Theory 2015; 18(2): 117–131. doi: 10.1177/1368431014555254.
3. Liu ZY, Lin L. Transparent government: The history and logic view on the transformation of a government pattern (in Chinese). Journal of Sichuan University (Philosophy and Social Sciences Edition) 2009; (1): 21–28.
4. Yu KP. Governance and good governance (in Chinese). Beijing: Social Sciences Academic Press; 2000. p. 9–10.
5. Kovach B, Rosenstiel T. The elements of journalism: What news people should know and the public should expect. Beijing: Peking University; 2014. p. 110–115.
6. Gan SP. The frontiers of applied ethics (in Chinese). Guiyang: Guizhou University; 2019. p. 86.
7. Zhang JS. The right to know and its guarantee (in Chinese). China Legal Science 2008; (4): 12.
8. Stiglitz J. On liberty, the right to know, and public discourse: The role of transparency in public life. Global Law Review 2002; (3): 263–273.
9. Wischmeyer T, Rademacher T. Regulating artificial intelligence. Switzerland: Springer; 2020. p. 76.
10. Dubber MD, Pasquale F, Das S. The Oxford handbook of ethics of AI. Oxford: Oxford University Press; 2020. p. 200.
11. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nature Machine Intelligence 2019; 1(9): 389–399. doi: 10.1038/s42256-019-0088-2.
12. Guo R. The ethics and governance of artificial intelligence (in Chinese). Beijing: Lawpress; 2020. p. 38.
13. Shum H, Smith B. The future computed (in Chinese). Beijing: Peking University; 2018. p. 39.
14. Theodorou A, Wortham R, Bryson J. Designing and implementing transparency for real time inspection of autonomous robots. Connection Science 2017; 29(3): 230–241. doi: 10.1080/09540091.2017.1310182.
15. Tsoukas H. The tyranny of light. Futures 1997; 29(9): 827–843.
16. Strathern M. The tyranny of transparency. British Educational Research Journal 2000; 26(3): 309–321.
17. Shen WW. The myth of the algorithmic transparency principle. Global Law Review 2019; 40(6): 20–39.
18. Vardi M. The moral imperative of artificial intelligence. Communications of the ACM 2016; 59(5): 5.
19. Knight W. The dark secret at the heart of AI. MIT Technology Review 2017; (3): 55–63.
20. Zhou ZH. Machine learning (in Chinese). Beijing: Tsinghua University; 2016. p. 113–115.
21. Calo R. Robotics and the lessons of cyberlaw. California Law Review 2015; 103(3): 513–563. doi: 10.2139/ssrn.2402972.
22. Balkin J. The path of robotics law. California Law Review Circuit 2015; (2): 45–60.
23. Dietvorst B, Simmons J, Massey C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 2015; 144(1): 114–126. doi: 10.1037/xge0000033.
24. Logg J, Minson J, Moore D. Algorithm appreciation: People prefer algorithmic to human judgement. Organizational Behavior and Human Decision Processes 2019; 151(1): 90–103. doi: 10.1016/j.obhdp.2018.12.005.
25. Lei N, An D, Guo Y, et al. A geometric understanding of deep learning. Engineering 2020; (3): 361–374. doi: 10.48550/arXiv.1805.10451.
26. Montavon G, Samek W, Muller K. Methods for interpreting and understanding deep neural networks. Digital Signal Processing 2018; 73(1): 1–15. doi: 10.1016/j.dsp.2017.10.011.
27. Edwards L, Veale M. Slave to the algorithm? Why a right to an explanation is probably not the remedy you are looking for. Duke Law & Technology Review 2017; (1): 18–84.
28. Wachter S, Mittelstadt B, Russell C. Counterfactual explanations without opening the black box. Harvard Journal of Law & Technology 2018; (2): 841–887.
29. Zhao Y, Hu Y, Gong JY, et al. Research on domestic standardization of software quality and software testing (in Chinese). Standard Science 2021; (4): 25–31.
30. Wu H, Du YY. Artificial intelligence ethical governance: Translating principles into practices. Studies in Dialectics of Nature 2021; 37(4): 49–54. doi: 10.19484/j.cnki.1000-8934.2021.04.009.
31. Xie ZS. Regulating algorithmic decision: Focusing on the right to explanation of algorithm. Modern Law Science (in Chinese) 2020; 42(1): 179–193.
32. Popper K. Conjectures and refutations. Hangzhou: China Academy of Art Press; 2003. p. 36.
DOI: https://doi.org/10.32629/jai.v5i1.486
Copyright (c) 2022 Yanyong Du
License URL: https://creativecommons.org/licenses/by-nc/4.0