Spectrotemporal cues and attention jointly modulate fMRI network topology for sentence and melody perception.


Journal

Scientific reports
ISSN: 2045-2322
Abbreviated title: Sci Rep
Country: England
NLM ID: 101563288

Publication information

Publication date:
6 Mar 2024
History:
received: 12 Sep 2023
accepted: 1 Mar 2024
medline: 7 Mar 2024
pubmed: 7 Mar 2024
entrez: 6 Mar 2024
Status: epublish

Abstract

Speech and music are two fundamental modes of human communication. Lateralisation of key processes underlying their perception has been related both to distinct sensitivity to low-level spectrotemporal acoustic features and to top-down attention. However, the interplay between these bottom-up and top-down processes remains to be clarified. In the present study, we investigated how acoustic cues and attention to melodies or sentences contribute to lateralisation in fMRI functional network topology. We used sung speech stimuli selectively filtered in the temporal or spectral modulation domain, with crossed and balanced verbal and melodic content. Perception of speech decreased with degradation of temporal information, whereas perception of melodies decreased with spectral degradation. Applying graph-theoretical metrics to fMRI connectivity matrices, we found that local clustering, reflecting functional specialisation, increased linearly as spectral or temporal cues crucial for the task goal were incrementally degraded. These effects occurred in a bilateral fronto-temporo-parietal network for processing temporally degraded sentences and in right auditory regions for processing spectrally degraded melodies. In contrast, global topology remained stable across conditions. These findings suggest that lateralisation for speech and music partially depends on an interplay of acoustic cues and task goals under increased attentional demands.
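
To make the network measure named above concrete, here is a minimal Python sketch of how a local clustering coefficient can be derived per brain region from a functional connectivity matrix. It is illustrative only, not the authors' pipeline: the simulated signals, the region count, the 0.1 threshold and the use of networkx are all assumptions introduced for this example.

# Illustrative sketch: local clustering from a functional connectivity matrix.
# The simulated data, region count and threshold are arbitrary example choices,
# not values from the study.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Stand-in for real fMRI data: 200 time points for 10 regions.
n_regions = 10
signals = rng.standard_normal((200, n_regions))
conn = np.corrcoef(signals, rowvar=False)  # region-by-region correlations
np.fill_diagonal(conn, 0.0)

# Keep only positive correlations above an (arbitrary) threshold.
adjacency = np.where(conn > 0.1, conn, 0.0)

# Weighted graph; clustering() returns one coefficient per region (node).
graph = nx.from_numpy_array(adjacency)
local_clustering = nx.clustering(graph, weight="weight")
print({node: round(c, 3) for node, c in local_clustering.items()})

A region's local clustering is high when the regions it is functionally connected to are also strongly interconnected among themselves, which is why the abstract reads it as a marker of local functional specialisation.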

Identifiers

pubmed: 38448636
doi: 10.1038/s41598-024-56139-6
pii: 10.1038/s41598-024-56139-6

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

5501

Grants

Agency: Austrian Science Fund
ID: W1262-B29
Agency: VDS CoBeNe
ID: final fellowship
Agency: Università degli Studi di Padova
ID: STARS Starting Grant
Agency: ILCB
ID: ANR-16-CONV-0002
Agency: BLRI
ID: ANR-11-LABX-0036
Agency: Excellence Initiative of Aix-Marseille University
ID: A*MIDEX

Copyright information

© 2024. The Author(s).

Authors

Felix Haiduk (F)

Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria. felix.haiduk@univie.ac.at.
Department of General Psychology, University of Padua, Padua, Italy. felix.haiduk@univie.ac.at.

Robert J Zatorre (RJ)

Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada.
International Laboratory for Brain, Music and Sound Research (BRAMS) - CRBLM, Montreal, QC, Canada.

Lucas Benjamin (L)

Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada.
Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, 91191, Gif/Yvette, France.

Benjamin Morillon (B)

Aix Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France.

Philippe Albouy (P)

Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada.
International Laboratory for Brain, Music and Sound Research (BRAMS) - CRBLM, Montreal, QC, Canada.
CERVO Brain Research Centre, School of Psychology, Laval University, Quebec, QC, Canada.

MeSH classifications