Improving Social Bot Detection Through Aid and Training.
decision support
signal detection theory
social bots
social media
training
Journal
Human Factors
ISSN: 1547-8181
Abbreviated title: Hum Factors
Country: United States
NLM ID: 0374660
Publication information
Publication date:
14 Nov 2023
History:
pubmed: 14 Nov 2023
medline: 14 Nov 2023
entrez: 14 Nov 2023
Status: aheadofprint
Abstract
OBJECTIVE
We test the effects of three aids on individuals' ability to detect social bots among Twitter personas: a bot indicator score, a training video, and a warning.
BACKGROUND
Detecting social bots can prevent online deception. We use a simulated social media task to evaluate three aids.
METHOD
Lay participants judged whether each of 60 Twitter personas in a simulated online environment was a human or a social bot. The probability of each persona being a bot was estimated from agreement among three machine learning algorithms. Experiment 1 compared a control group and two intervention groups: one was provided a bot indicator score for each tweet; the other received a warning about social bots. Experiment 2 compared a control group and two intervention groups: one received the bot indicator scores and the other a training video focused on heuristics for identifying social bots.
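The record says bot probability was estimated from agreement among three machine learning algorithms, but does not specify how agreement was aggregated. A minimal sketch, assuming simple vote averaging; the function name and inputs are illustrative, not the authors' implementation:

```python
def bot_probability(votes):
    """Estimate a persona's bot probability as the fraction of
    classifiers labeling it a bot (1 = bot, 0 = human).
    Illustrative aggregation; the paper does not detail its method."""
    return sum(votes) / len(votes)

# Two of three hypothetical classifiers vote "bot":
print(round(bot_probability([1, 1, 0]), 2))  # → 0.67
```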
RESULTS
The bot indicator score intervention improved predictive performance and reduced overconfidence in both experiments. The training video was also effective, although somewhat less so. The warning had no effect. Participants rarely reported willingness to share content from a persona that they labeled as a bot, even when they agreed with it.
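The keywords list signal detection theory, and the results separate predictive performance from overconfidence. Assuming a standard SDT sensitivity measure (the record does not detail the analysis), detection performance could be summarized with d′; the counts below are hypothetical:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal detection sensitivity: z(hit rate) - z(false-alarm rate).
    Here a 'hit' would be correctly labeling a bot persona as a bot."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts from 50 bot and 50 human personas:
print(round(d_prime(40, 10, 15, 35), 2))  # → 1.37
```

Higher d′ means better discrimination between bots and humans; d′ = 0 means performance at chance.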
CONCLUSIONS
Informative interventions improved social bot detection; a warning alone did not.
APPLICATION
We offer an experimental testbed and methodology that can be used to evaluate and refine interventions designed to reduce vulnerability to social bots. We show the value of two interventions that could be applied in many settings.
Identifiers
pubmed: 37963198
doi: 10.1177/00187208231210145
Publication type
Journal Article
Language
eng
Citation subset
IM