Human intelligence can safeguard against artificial intelligence: individual differences in the discernment of human from AI texts.


Journal

Scientific Reports
ISSN: 2045-2322
Abbreviated title: Sci Rep
Country: England
NLM ID: 101563288

Publication information

Publication date:
29 October 2024
History:
received: 16 April 2024
accepted: 11 October 2024
medline: 30 October 2024
pubmed: 30 October 2024
entrez: 30 October 2024
Status: epublish

Abstract

Artificial intelligence (AI) models can produce output that closely mimics human-generated content. We examined individual differences in the human ability to differentiate human- from AI-generated texts, exploring relationships with fluid intelligence, executive functioning, empathy, and digital habits. Overall, participants exhibited better-than-chance text discrimination, with substantial variation across individuals. Fluid intelligence strongly predicted differences in the ability to distinguish human from AI, but executive functioning and empathy did not. Meanwhile, heavier smartphone and social media use predicted misattribution of AI content (mistaking it for human). Determinations about the origin of encountered content also affected sharing preferences, with those who were better able to distinguish human from AI indicating a lower likelihood of sharing AI content online. Word-level differences in linguistic composition of the texts did not meaningfully influence participants' judgements. These findings inform our understanding of how individual difference factors may shape the course of human interactions with AI-generated information.
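
The discrimination ability summarized above is the kind of judgement standardly quantified with signal detection theory: each trial is a binary "human vs. AI" classification of a text whose true origin is known. As a minimal illustrative sketch only, not the authors' analysis pipeline, the Python snippet below computes sensitivity (d') and response bias (criterion) from hypothetical response counts, treating "human" responses to human-written texts as hits and "human" responses to AI-generated texts as false alarms. The counts and the log-linear correction are assumptions for illustration, not values from the study.

```python
from statistics import NormalDist

def dprime(hits: int, misses: int, false_alarms: int, correct_rejections: int):
    """Compute signal detection sensitivity (d') and criterion (c).

    Hits here are "human" responses to human-written texts;
    false alarms are "human" responses to AI-generated texts.
    A log-linear correction (add 0.5 to each cell, 1 to each total)
    keeps perfect rates from producing infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias

    return d_prime, criterion

# Hypothetical participant seeing 30 texts of each origin.
# d' > 0 indicates better-than-chance discrimination; a negative
# criterion indicates a bias toward responding "human".
d, c = dprime(hits=22, misses=8, false_alarms=12, correct_rejections=18)
print(f"d' = {d:.2f}, criterion = {c:.2f}")
```
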

Identifiers

pubmed: 39472489
doi: 10.1038/s41598-024-76218-y
pii: 10.1038/s41598-024-76218-y

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

25989

Grants

Agency: NICHD NIH HHS
ID: R01 HD098097
Country: United States
Agency: National Institute of Child Health and Human Development
ID: R01 HD098097-S

Copyright information

© 2024. The Author(s).


Authors

J M Chein (JM)

Department of Psychology and Neuroscience, Temple University, Weiss Hall, 1701 N. 13th St, Philadelphia, PA, 19122, USA. jchein@temple.edu.

S A Martinez (SA)

Department of Psychology and Neuroscience, Temple University, Weiss Hall, 1701 N. 13th St, Philadelphia, PA, 19122, USA.

A R Barone (AR)

Department of Psychology and Neuroscience, Temple University, Weiss Hall, 1701 N. 13th St, Philadelphia, PA, 19122, USA.

Similar articles

[Redispensing of expensive oral anticancer medicines: a practical application].

Lisanne N van Merendonk, Kübra Akgöl, Bastiaan Nuijen

Smoking Cessation and Incident Cardiovascular Disease.

Jun Hwan Cho, Seung Yong Shin, Hoseob Kim et al.

MeSH terms