The human cost of ethical artificial intelligence.
Artificial intelligence
Ethical modelling
Philosophy and ethics
Policy
Journal
Brain Structure & Function
ISSN: 1863-2661
Abbreviated title: Brain Struct Funct
Country: Germany
NLM ID: 101282001
Publication information
Publication date: Jul 2023
History:
received: 21 Apr 2023
accepted: 01 Jun 2023
medline: 13 Jul 2023
pubmed: 23 Jun 2023
entrez: 23 Jun 2023
Status: ppublish
Abstract
Foundational models such as ChatGPT critically depend on the vast data scales the internet uniquely enables. This implies exposure to material varying widely in logical sense, factual fidelity, moral value, and even legal status. Whereas data scaling is a technical challenge, soluble with greater computational resources, complex semantic filtering cannot be performed reliably without human intervention: the self-supervision that makes foundational models possible at least in part presupposes the abilities they seek to acquire. This unavoidably introduces the need for large-scale human supervision, not just of training input but also of model output, and imbues any model with subjectivity reflecting the beliefs of its creator. The pressure to minimise the cost of the former is in direct conflict with the pressure to maximise the quality of the latter. Moreover, it is unclear how complex semantics, especially in the realm of the moral, could ever be reduced to an objective function any machine could plausibly maximise. We suggest the development of foundational models necessitates urgent innovation in quantitative ethics, and we outline possible avenues for its realisation.
Identifiers
pubmed: 37351658
doi: 10.1007/s00429-023-02662-7
pii: 10.1007/s00429-023-02662-7
Publication types
Letter
Languages
eng
Citation subsets
IM
Pagination
1365-1369
Grants
Agency: Medical Research Council
ID: MR/X00046X/1
Country: United Kingdom
Agency: Wellcome Trust
ID: 213038/Z/18/Z
Country: United Kingdom
Copyright information
© 2023. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
References
Agostinelli A, Denk TI, Borsos Z, Engel J, Verzetti M, Caillon A, Huang Q, Jansen A, Roberts A, Tagliasacchi M, Sharifi M, Zeghidour N, Frank C (2023) MusicLM: Generating Music From Text. arXiv:2301.11325
Beauchamp TL (1991) Philosophical ethics: an introduction to moral philosophy. McGraw Hill, New York
Carruthers R, Straw I, Ruffle JK, Herron D, Nelson A, Bzdok D, Fernandez-Reyes D, Rees G, Nachev P (2022) Representational ethical model calibration. NPJ Digit Med 5(1):170. https://doi.org/10.1038/s41746-022-00716-4
pubmed: 36333390
pmcid: 9636204
Chowdhery A, Narang S, Devlin J, Bosma M, Mishra G, Roberts A, Barham P, Chung HW, Sutton C, Gehrmann S, Schuh P, Shi K, Tsvyashchenko S, Maynez J, Rao A, Barnes P, Tay Y, Shazeer N, Prabhakaran V, Reif E, Du N, Hutchinson B, Pope R, Bradbury J, Austin J, Isard M, Gur-Ari G, Yin P, Duke T, Levskaya A, Ghemawat S, Dev S, Michalewski H, Garcia X, Misra V, Robinson K, Fedus L, Zhou D, Ippolito D, Luan D, Lim H, Zoph B, Spiridonov A, Sepassi R, Dohan D, Agrawal S, Omernick M, Dai AM, Sankaranarayana Pillai T, Pellat M, Lewkowycz A, Moreira E, Child R, Polozov O, Lee K, Zhou Z, Wang X, Saeta B, Diaz M, Firat O, Catasta M, Wei J, Meier-Hellstern K, Eck D, Dean J, Petrov S, Fiedel N (2022) PaLM: Scaling Language Modeling with Pathways. arXiv:2204.02311. https://doi.org/10.48550/arXiv.2204.02311
Du N, Huang Y, Dai AM, Tong S, Lepikhin D, Xu Y, Krikun M, Zhou Y, Yu AW, Firat O, Zoph B, Fedus L, Bosma M, Zhou Z, Wang T, Wang YE, Webster K, Pellat M, Robinson K, Meier-Hellstern K, Duke T, Dixon L, Zhang K, Le QV, Wu Y, Chen Z, Cui C (2021) GLaM: Efficient Scaling of Language Models with Mixture-of-Experts. arXiv:2112.06905. https://doi.org/10.48550/arXiv.2112.06905
enlyft (2023) OpenSSL. https://enlyft.com/tech/products/openssl. Accessed 24 Jan 2023
Flew A (1979) Consequentialism. A dictionary of philosophy, 2nd edn. St Martin’s, New York
Gao CA, Howard FM, Markov NS, Dyer EC, Ramesh S, Luo Y, Pearson AT (2022) Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. bioRxiv. https://doi.org/10.1101/2022.12.23.521610
pubmed: 36561175
pmcid: 9709790
Heaven WD (2022) Why Meta’s latest large language model survived only three days online. MIT Technology Review. https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/. Accessed 24 Jan 2023
Hoang L-N, Faucon L, Jungo A, Volodin S, Papuc D, Liossatos O, Crulis B, Tighanimine M, Constantin I, Kucherenko A, Maurer A, Grimberg F, Nitu V, Vossen C, Rouault S, El-Mhamdi E-M (2021) Tournesol: A quest for a large, secure and trustworthy database of reliable human judgments. arXiv:2107.07334. https://doi.org/10.48550/arXiv.2107.07334
Hunt E (2016) Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter. The Guardian, London. https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter. Accessed 24 Jan 2023
Kong Z, Ping W, Huang J, Zhao K, Catanzaro B (2020) DiffWave: a versatile diffusion model for audio synthesis. arXiv:2009.09761. https://doi.org/10.48550/arXiv.2009.09761
Kraut R (2022) Aristotle’s ethics. The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, Stanford. https://plato.stanford.edu. Accessed 24 Jan 2023
Metz C (2021) Who Is Making Sure the A.I. Machines Aren’t Racist? NY Times, New York. https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html. Accessed 24 Jan 2023
Mill JS (1861) Utilitarianism. Fraser’s Magazine, London
Najafabadi MM, Villanustre F, Khoshgoftaar TM, Seliya N, Wald R, Muharemagic E (2015) Deep learning applications and challenges in big data analytics. J Big Data 2(1):1. https://doi.org/10.1186/s40537-014-0007-7
OpenAI (2018) OpenAI Charter. https://openai.com/charter/. Accessed 24 Jan 2023
OpenAI (2022) ChatGPT: Optimizing Language Models For Dialogue. https://openai.com/blog/chatgpt/. Accessed 24 Jan 2023
Perrigo B (2023a) DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution. TIME. https://time.com/6246119/demis-hassabis-deepmind-interview/. Accessed 24 Jan 2023
Perrigo B (2023b) Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. TIME. https://time.com/6247678/openai-chatgpt-kenya-workers/. Accessed 24 Jan 2023
Ramesh A, Dhariwal P, Nichol A, Chu C, Chen M (2022) Hierarchical text-conditional image generation with CLIP latents. arXiv:2204.06125
Rombach R, Blattmann A, Lorenz D, Esser P, Ommer B (2021) High-resolution image synthesis with latent diffusion models. arXiv:2112.10752
Saharia C, Chan W, Saxena S, Li L, Whang J, Denton E, Ghasemipour SKS, Ayan BK, Mahdavi ASS, Lopes RG, Salimans T, Ho J, Fleet DJ, Norouzi M (2022) Photorealistic text-to-image diffusion models with deep language understanding. arXiv:2205.11487
Sama (2023) The Ethical AI Supply Chain: Purpose-Built for Impact. https://www.sama.com/ethical-ai/. Accessed 24 Jan 2023
Sesack SR, Thiebaut de Schotten M (2023) Brain structure and function gets serious about ethical science writing. Brain Struct Funct. https://doi.org/10.1007/s00429-023-02645-8
pubmed: 37093303
Simonite T (2021) It Began as an AI-Fueled Dungeon Game. It Got Much Darker. Wired. https://www.wired.com/story/ai-fueled-dungeon-game-got-much-darker/. Accessed 24 Jan 2023
Taylor R, Kardas M, Cucurull G, Scialom T, Hartshorn A, Saravia E, Poulton A, Kerkez V, Stojnic R (2022) Galactica: a large language model for science. arXiv:2211.09085
Thoppilan R, De Freitas D, Hall J, Shazeer N, Kulshreshtha A, Cheng H-T, Jin A, Bos T, Baker L, Du Y, Li Y, Lee H, Zheng HS, Ghafouri A, Menegali M, Huang Y, Krikun M, Lepikhin D, Qin J, Chen D, Xu Y, Chen Z, Roberts A, Bosma M, Zhao V, Zhou Y, Chang C-C, Krivokon I, Rusch W, Pickett M, Srinivasan P, Man L, Meier-Hellstern K, Ringel Morris M, Doshi T, Delos Santos R, Duke T, Soraker J, Zevenbergen B, Prabhakaran V, Diaz M, Hutchinson B, Olson K, Molina A, Hoffman-John E, Lee J, Aroyo L, Rajakumar R, Butryna A, Lamm M, Kuzmina V, Fenton J, Cohen A, Bernstein R, Kurzweil R, Aguera-Arcas B, Cui C, Croak M, Chi E, Le Q (2022) LaMDA: Language Models for Dialog Applications. arXiv:2201.08239
Walden J (2020) The Impact of a Major Security Event on an Open Source Project: The Case of OpenSSL. MSR 2020: Proceedings of the 17th International Conference on Mining Software Repositories: 404–419