Simulation Tests of Methods in Evolution, Ecology, and Systematics: Pitfalls, Progress, and Principles.

Keywords: area under the curve; benchmark data sets; domain of applicability; equifinality; evaluation; validation

Journal

Annual Review of Ecology, Evolution, and Systematics
ISSN: 1543-592X
Abbreviated title: Annu Rev Ecol Evol Syst
Country: United States
NLM ID: 101171971

Publication information

Publication date:
Nov 2022
History:
medline: 1 Nov 2022
pubmed: 1 Nov 2022
entrez: 18 Dec 2023
Status: ppublish

Abstract

Complex statistical methods are continuously developed across the fields of ecology, evolution, and systematics (EES). These fields, however, lack standardized principles for evaluating methods, which has led to high variability in the rigor with which methods are tested, a lack of clarity regarding their limitations, and the potential for misapplication. In this review, we illustrate the common pitfalls of method evaluations in EES, the advantages of testing methods with simulated data, and best practices for method evaluations. We highlight the difference between method evaluation and validation and review how simulations, when appropriately designed, can refine the domain in which a method can be reliably applied. We also discuss the strengths and limitations of different evaluation metrics. The potential for misapplication of methods would be greatly reduced if funding agencies, reviewers, and journals required principled method evaluation.

Identifiers

PubMed: 38107485
DOI: 10.1146/annurev-ecolsys-102320-093722
PMC: PMC10723108

Publication types

Journal Article

Languages

English (eng)

Pagination

113-136

Authors

Katie E Lotterhos (KE)

Department of Marine and Environmental Sciences, Northeastern University, Nahant, Massachusetts, USA.

Matthew C Fitzpatrick (MC)

Appalachian Lab, University of Maryland Center for Environmental Science, Frostburg, Maryland, USA.

Heath Blackmon (H)

Department of Biology, Texas A&M University, College Station, Texas, USA.

MeSH classifications