Exploration server on opera

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

Internal identifier: 000027 (Pmc/Curation); previous: 000026; next: 000028


Authors: Arianna N. Lacroix; Alvaro F. Diaz; Corianne Rogalsky

Source:

RBID : PMC:4531212

Abstract

The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.


URL:
DOI: 10.3389/fpsyg.2015.01138

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:4531212

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study</title>
<author>
<name sortKey="Lacroix, Arianna N" sort="Lacroix, Arianna N" uniqKey="Lacroix A" first="Arianna N." last="Lacroix">Arianna N. Lacroix</name>
</author>
<author>
<name sortKey="Diaz, Alvaro F" sort="Diaz, Alvaro F" uniqKey="Diaz A" first="Alvaro F." last="Diaz">Alvaro F. Diaz</name>
</author>
<author>
<name sortKey="Rogalsky, Corianne" sort="Rogalsky, Corianne" uniqKey="Rogalsky C" first="Corianne" last="Rogalsky">Corianne Rogalsky</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26321976</idno>
<idno type="pmc">4531212</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4531212</idno>
<idno type="RBID">PMC:4531212</idno>
<idno type="doi">10.3389/fpsyg.2015.01138</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000027</idno>
<idno type="wicri:Area/Pmc/Curation">000027</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study</title>
<author>
<name sortKey="Lacroix, Arianna N" sort="Lacroix, Arianna N" uniqKey="Lacroix A" first="Arianna N." last="Lacroix">Arianna N. Lacroix</name>
</author>
<author>
<name sortKey="Diaz, Alvaro F" sort="Diaz, Alvaro F" uniqKey="Diaz A" first="Alvaro F." last="Diaz">Alvaro F. Diaz</name>
</author>
<author>
<name sortKey="Rogalsky, Corianne" sort="Rogalsky, Corianne" uniqKey="Rogalsky C" first="Corianne" last="Rogalsky">Corianne Rogalsky</name>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="e-ISSN">1664-1078</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Abrams, D A" uniqKey="Abrams D">D. A. Abrams</name>
</author>
<author>
<name sortKey="Bhatara, A" uniqKey="Bhatara A">A. Bhatara</name>
</author>
<author>
<name sortKey="Ryali, S" uniqKey="Ryali S">S. Ryali</name>
</author>
<author>
<name sortKey="Balaban, E" uniqKey="Balaban E">E. Balaban</name>
</author>
<author>
<name sortKey="Levitin, D J" uniqKey="Levitin D">D. J. Levitin</name>
</author>
<author>
<name sortKey="Menon, V" uniqKey="Menon V">V. Menon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Adank, P" uniqKey="Adank P">P. Adank</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amunts, K" uniqKey="Amunts K">K. Amunts</name>
</author>
<author>
<name sortKey="Schleicher, A" uniqKey="Schleicher A">A. Schleicher</name>
</author>
<author>
<name sortKey="Burgel, U" uniqKey="Burgel U">U. Bürgel</name>
</author>
<author>
<name sortKey="Mohlberg, H" uniqKey="Mohlberg H">H. Mohlberg</name>
</author>
<author>
<name sortKey="Uylings, H B M" uniqKey="Uylings H">H. B. M. Uylings</name>
</author>
<author>
<name sortKey="Zilles, K" uniqKey="Zilles K">K. Zilles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Anwander, A" uniqKey="Anwander A">A. Anwander</name>
</author>
<author>
<name sortKey="Tittgemeyer, M" uniqKey="Tittgemeyer M">M. Tittgemeyer</name>
</author>
<author>
<name sortKey="Von Cramon, D Y" uniqKey="Von Cramon D">D. Y. von Cramon</name>
</author>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
<author>
<name sortKey="Knosche, T R" uniqKey="Knosche T">T. R. Knösche</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baker, E" uniqKey="Baker E">E. Baker</name>
</author>
<author>
<name sortKey="Blumstein, S E" uniqKey="Blumstein S">S. E. Blumstein</name>
</author>
<author>
<name sortKey="Goodglass, H" uniqKey="Goodglass H">H. Goodglass</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Basso, A" uniqKey="Basso A">A. Basso</name>
</author>
<author>
<name sortKey="Capitani, E" uniqKey="Capitani E">E. Capitani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
<author>
<name sortKey="Chobert, J" uniqKey="Chobert J">J. Chobert</name>
</author>
<author>
<name sortKey="Marie, C" uniqKey="Marie C">C. Marie</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
<author>
<name sortKey="Faita, F" uniqKey="Faita F">F. Faita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D. Schön</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E. Brattico</name>
</author>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
<author>
<name sortKey="N T Nen, R" uniqKey="N T Nen R">R. Näätänen</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buchsbaum, B R" uniqKey="Buchsbaum B">B. R. Buchsbaum</name>
</author>
<author>
<name sortKey="Baldo, J" uniqKey="Baldo J">J. Baldo</name>
</author>
<author>
<name sortKey="Okada, K" uniqKey="Okada K">K. Okada</name>
</author>
<author>
<name sortKey="Berman, K F" uniqKey="Berman K">K. F. Berman</name>
</author>
<author>
<name sortKey="Dronkers, N" uniqKey="Dronkers N">N. Dronkers</name>
</author>
<author>
<name sortKey="D Esposito, M" uniqKey="D Esposito M">M. D'Esposito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buchsbaum, B R" uniqKey="Buchsbaum B">B. R. Buchsbaum</name>
</author>
<author>
<name sortKey="Olsen, R K" uniqKey="Olsen R">R. K. Olsen</name>
</author>
<author>
<name sortKey="Koch, P" uniqKey="Koch P">P. Koch</name>
</author>
<author>
<name sortKey="Berman, K F" uniqKey="Berman K">K. F. Berman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cant, J S" uniqKey="Cant J">J. S. Cant</name>
</author>
<author>
<name sortKey="Goodale, M A" uniqKey="Goodale M">M. A. Goodale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cariani, P A" uniqKey="Cariani P">P. A. Cariani</name>
</author>
<author>
<name sortKey="Delgutte, B" uniqKey="Delgutte B">B. Delgutte</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carrus, E" uniqKey="Carrus E">E. Carrus</name>
</author>
<author>
<name sortKey="Pearce, M T" uniqKey="Pearce M">M. T. Pearce</name>
</author>
<author>
<name sortKey="Bhattacharya, J" uniqKey="Bhattacharya J">J. Bhattacharya</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chawla, D" uniqKey="Chawla D">D. Chawla</name>
</author>
<author>
<name sortKey="Rees, G" uniqKey="Rees G">G. Rees</name>
</author>
<author>
<name sortKey="Friston, K J" uniqKey="Friston K">K. J. Friston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Corbetta, M" uniqKey="Corbetta M">M. Corbetta</name>
</author>
<author>
<name sortKey="Miezin, F M" uniqKey="Miezin F">F. M. Miezin</name>
</author>
<author>
<name sortKey="Dobmeyer, S" uniqKey="Dobmeyer S">S. Dobmeyer</name>
</author>
<author>
<name sortKey="Shulman, G L" uniqKey="Shulman G">G. L. Shulman</name>
</author>
<author>
<name sortKey="Petersen, S E" uniqKey="Petersen S">S. E. Petersen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Damasio, H" uniqKey="Damasio H">H. Damasio</name>
</author>
<author>
<name sortKey="Tranel, D" uniqKey="Tranel D">D. Tranel</name>
</author>
<author>
<name sortKey="Grabowski, T" uniqKey="Grabowski T">T. Grabowski</name>
</author>
<author>
<name sortKey="Adolphs, R" uniqKey="Adolphs R">R. Adolphs</name>
</author>
<author>
<name sortKey="Damasio, A" uniqKey="Damasio A">A. Damasio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dronkers, N F" uniqKey="Dronkers N">N. F. Dronkers</name>
</author>
<author>
<name sortKey="Wilkins, D P" uniqKey="Wilkins D">D. P. Wilkins</name>
</author>
<author>
<name sortKey="Van Valin, R D" uniqKey="Van Valin R">R. D. Van Valin</name>
</author>
<author>
<name sortKey="Redfern, B B" uniqKey="Redfern B">B. B. Redfern</name>
</author>
<author>
<name sortKey="Jaeger, J J" uniqKey="Jaeger J">J. J. Jaeger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eickhoff, S B" uniqKey="Eickhoff S">S. B. Eickhoff</name>
</author>
<author>
<name sortKey="Bzdok, D" uniqKey="Bzdok D">D. Bzdok</name>
</author>
<author>
<name sortKey="Laird, A R" uniqKey="Laird A">A. R. Laird</name>
</author>
<author>
<name sortKey="Kurth, F" uniqKey="Kurth F">F. Kurth</name>
</author>
<author>
<name sortKey="Fox, P T" uniqKey="Fox P">P. T. Fox</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eickhoff, S B" uniqKey="Eickhoff S">S. B. Eickhoff</name>
</author>
<author>
<name sortKey="Laird, A R" uniqKey="Laird A">A. R. Laird</name>
</author>
<author>
<name sortKey="Grefkes, C" uniqKey="Grefkes C">C. Grefkes</name>
</author>
<author>
<name sortKey="Wang, L E" uniqKey="Wang L">L. E. Wang</name>
</author>
<author>
<name sortKey="Zilles, K" uniqKey="Zilles K">K. Zilles</name>
</author>
<author>
<name sortKey="Fox, P T" uniqKey="Fox P">P. T. Fox</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Elmer, S" uniqKey="Elmer S">S. Elmer</name>
</author>
<author>
<name sortKey="Meyer, S" uniqKey="Meyer S">S. Meyer</name>
</author>
<author>
<name sortKey="J Ncke, L" uniqKey="J Ncke L">L. Jäncke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fedorenko, E" uniqKey="Fedorenko E">E. Fedorenko</name>
</author>
<author>
<name sortKey="Behr, M K" uniqKey="Behr M">M. K. Behr</name>
</author>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fedorenko, E" uniqKey="Fedorenko E">E. Fedorenko</name>
</author>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fedorenko, E" uniqKey="Fedorenko E">E. Fedorenko</name>
</author>
<author>
<name sortKey="Patel, A" uniqKey="Patel A">A. Patel</name>
</author>
<author>
<name sortKey="Casasanto, D" uniqKey="Casasanto D">D. Casasanto</name>
</author>
<author>
<name sortKey="Winawer, J" uniqKey="Winawer J">J. Winawer</name>
</author>
<author>
<name sortKey="Gibson, E" uniqKey="Gibson E">E. Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frances, R" uniqKey="Frances R">R. Frances</name>
</author>
<author>
<name sortKey="Lhermitte, F" uniqKey="Lhermitte F">F. Lhermitte</name>
</author>
<author>
<name sortKey="Verdy, M F" uniqKey="Verdy M">M. F. Verdy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
<author>
<name sortKey="Kotz, S A" uniqKey="Kotz S">S. A. Kotz</name>
</author>
<author>
<name sortKey="Scott, S K" uniqKey="Scott S">S. K. Scott</name>
</author>
<author>
<name sortKey="Obleser, J" uniqKey="Obleser J">J. Obleser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
<author>
<name sortKey="Wang, Y" uniqKey="Wang Y">Y. Wang</name>
</author>
<author>
<name sortKey="Herrmann, C S" uniqKey="Herrmann C">C. S. Herrmann</name>
</author>
<author>
<name sortKey="Maess, B" uniqKey="Maess B">B. Maess</name>
</author>
<author>
<name sortKey="Oertel, U" uniqKey="Oertel U">U. Oertel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gazzaniga, M S" uniqKey="Gazzaniga M">M. S. Gazzaniga</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geiser, E" uniqKey="Geiser E">E. Geiser</name>
</author>
<author>
<name sortKey="Zaehle, T" uniqKey="Zaehle T">T. Zaehle</name>
</author>
<author>
<name sortKey="Jancke, L" uniqKey="Jancke L">L. Jancke</name>
</author>
<author>
<name sortKey="Meyer, M" uniqKey="Meyer M">M. Meyer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grahn, J A" uniqKey="Grahn J">J. A. Grahn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hagoort, P" uniqKey="Hagoort P">P. Hagoort</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Henschen, S E" uniqKey="Henschen S">S. E. Henschen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
<author>
<name sortKey="Buchsbaum, B" uniqKey="Buchsbaum B">B. Buchsbaum</name>
</author>
<author>
<name sortKey="Humphries, C" uniqKey="Humphries C">C. Humphries</name>
</author>
<author>
<name sortKey="Muftuler, T" uniqKey="Muftuler T">T. Muftuler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
<author>
<name sortKey="Rogalsky, C" uniqKey="Rogalsky C">C. Rogalsky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hoch, L" uniqKey="Hoch L">L. Hoch</name>
</author>
<author>
<name sortKey="Poulin Charronnat, B" uniqKey="Poulin Charronnat B">B. Poulin-Charronnat</name>
</author>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B. Tillmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Humphries, C" uniqKey="Humphries C">C. Humphries</name>
</author>
<author>
<name sortKey="Binder, J R" uniqKey="Binder J">J. R. Binder</name>
</author>
<author>
<name sortKey="Medler, D A" uniqKey="Medler D">D. A. Medler</name>
</author>
<author>
<name sortKey="Liebenthal, E" uniqKey="Liebenthal E">E. Liebenthal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Humphries, C" uniqKey="Humphries C">C. Humphries</name>
</author>
<author>
<name sortKey="Love, T" uniqKey="Love T">T. Love</name>
</author>
<author>
<name sortKey="Swinney, D" uniqKey="Swinney D">D. Swinney</name>
</author>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Humphries, C" uniqKey="Humphries C">C. Humphries</name>
</author>
<author>
<name sortKey="Willard, K" uniqKey="Willard K">K. Willard</name>
</author>
<author>
<name sortKey="Buchsbaum, B" uniqKey="Buchsbaum B">B. Buchsbaum</name>
</author>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hyde, K L" uniqKey="Hyde K">K. L. Hyde</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hyde, K L" uniqKey="Hyde K">K. L. Hyde</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
<author>
<name sortKey="Lerch, J P" uniqKey="Lerch J">J. P. Lerch</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ivry, R B" uniqKey="Ivry R">R. B. Ivry</name>
</author>
<author>
<name sortKey="Robertson, L C" uniqKey="Robertson L">L. C. Robertson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="J Ncke, L" uniqKey="J Ncke L">L. Jäncke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="J Ncke, L" uniqKey="J Ncke L">L. Jäncke</name>
</author>
<author>
<name sortKey="Wustenberg, T" uniqKey="Wustenberg T">T. Wüstenberg</name>
</author>
<author>
<name sortKey="Scheich, H" uniqKey="Scheich H">H. Scheich</name>
</author>
<author>
<name sortKey="Heinze, H J" uniqKey="Heinze H">H. J. Heinze</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="January, D" uniqKey="January D">D. January</name>
</author>
<author>
<name sortKey="Trueswell, J C" uniqKey="Trueswell J">J. C. Trueswell</name>
</author>
<author>
<name sortKey="Thompson Schill, S L" uniqKey="Thompson Schill S">S. L. Thompson-Schill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Gunter, T C" uniqKey="Gunter T">T. C. Gunter</name>
</author>
<author>
<name sortKey="Wittfoth, M" uniqKey="Wittfoth M">M. Wittfoth</name>
</author>
<author>
<name sortKey="Sammler, D" uniqKey="Sammler D">D. Sammler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Kasper, E" uniqKey="Kasper E">E. Kasper</name>
</author>
<author>
<name sortKey="Sammler, D" uniqKey="Sammler D">D. Sammler</name>
</author>
<author>
<name sortKey="Schulze, K" uniqKey="Schulze K">K. Schulze</name>
</author>
<author>
<name sortKey="Gunter, T" uniqKey="Gunter T">T. Gunter</name>
</author>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Siebel, W A" uniqKey="Siebel W">W. A. Siebel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Schulze, K" uniqKey="Schulze K">K. Schulze</name>
</author>
<author>
<name sortKey="Sammler, D" uniqKey="Sammler D">D. Sammler</name>
</author>
<author>
<name sortKey="Fritz, T" uniqKey="Fritz T">T. Fritz</name>
</author>
<author>
<name sortKey="Muller, K" uniqKey="Muller K">K. Müller</name>
</author>
<author>
<name sortKey="Gruber, O" uniqKey="Gruber O">O. Gruber</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Luria, A R" uniqKey="Luria A">A. R. Luria</name>
</author>
<author>
<name sortKey="Tsvetkova, L" uniqKey="Tsvetkova L">L. Tsvetkova</name>
</author>
<author>
<name sortKey="Futer, D S" uniqKey="Futer D">D. S. Futer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Macleod, C M" uniqKey="Macleod C">C. M. MacLeod</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maess, B" uniqKey="Maess B">B. Maess</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Gunter, T C" uniqKey="Gunter T">T. C. Gunter</name>
</author>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maillard, L" uniqKey="Maillard L">L. Maillard</name>
</author>
<author>
<name sortKey="Barbeau, E J" uniqKey="Barbeau E">E. J. Barbeau</name>
</author>
<author>
<name sortKey="Baumann, C" uniqKey="Baumann C">C. Baumann</name>
</author>
<author>
<name sortKey="Koessler, L" uniqKey="Koessler L">L. Koessler</name>
</author>
<author>
<name sortKey="Benar, C" uniqKey="Benar C">C. Bénar</name>
</author>
<author>
<name sortKey="Chauvel, P" uniqKey="Chauvel P">P. Chauvel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Masataka, N" uniqKey="Masataka N">N. Masataka</name>
</author>
<author>
<name sortKey="Perlovsky, L" uniqKey="Perlovsky L">L. Perlovsky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mazoyer, B M" uniqKey="Mazoyer B">B. M. Mazoyer</name>
</author>
<author>
<name sortKey="Tzourio, N" uniqKey="Tzourio N">N. Tzourio</name>
</author>
<author>
<name sortKey="Frak, V" uniqKey="Frak V">V. Frak</name>
</author>
<author>
<name sortKey="Syrota, A" uniqKey="Syrota A">A. Syrota</name>
</author>
<author>
<name sortKey="Murayama, N" uniqKey="Murayama N">N. Murayama</name>
</author>
<author>
<name sortKey="Levrier, O" uniqKey="Levrier O">O. Levrier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ni, W" uniqKey="Ni W">W. Ni</name>
</author>
<author>
<name sortKey="Constable, R T" uniqKey="Constable R">R. T. Constable</name>
</author>
<author>
<name sortKey="Mencl, W E" uniqKey="Mencl W">W. E. Mencl</name>
</author>
<author>
<name sortKey="Pugh, K R" uniqKey="Pugh K">K. R. Pugh</name>
</author>
<author>
<name sortKey="Fulbright, R K" uniqKey="Fulbright R">R. K. Fulbright</name>
</author>
<author>
<name sortKey="Shaywitz, S E" uniqKey="Shaywitz S">S. E. Shaywitz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Noesselt, T" uniqKey="Noesselt T">T. Noesselt</name>
</author>
<author>
<name sortKey="Shah, N J" uniqKey="Shah N">N. J. Shah</name>
</author>
<author>
<name sortKey="J Ncke, L" uniqKey="J Ncke L">L. Jäncke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Novick, J M" uniqKey="Novick J">J. M. Novick</name>
</author>
<author>
<name sortKey="Trueswell, J C" uniqKey="Trueswell J">J. C. Trueswell</name>
</author>
<author>
<name sortKey="Thompson Schill, S L" uniqKey="Thompson Schill S">S. L. Thompson-Schill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oechslin, M S" uniqKey="Oechslin M">M. S. Oechslin</name>
</author>
<author>
<name sortKey="Meyer, M" uniqKey="Meyer M">M. Meyer</name>
</author>
<author>
<name sortKey="J Ncke, L" uniqKey="J Ncke L">L. Jäncke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A" uniqKey="Patel A">A. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Gibson, E" uniqKey="Gibson E">E. Gibson</name>
</author>
<author>
<name sortKey="Ratner, J" uniqKey="Ratner J">J. Ratner</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
<author>
<name sortKey="Holcomb, P" uniqKey="Holcomb P">P. Holcomb</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Tramo, M" uniqKey="Tramo M">M. Tramo</name>
</author>
<author>
<name sortKey="Labreque, R" uniqKey="Labreque R">R. Labreque</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peelle, J E" uniqKey="Peelle J">J. E. Peelle</name>
</author>
<author>
<name sortKey="Eason, R J" uniqKey="Eason R">R. J. Eason</name>
</author>
<author>
<name sortKey="Schmitter, S" uniqKey="Schmitter S">S. Schmitter</name>
</author>
<author>
<name sortKey="Schwarzbauer, C" uniqKey="Schwarzbauer C">C. Schwarzbauer</name>
</author>
<author>
<name sortKey="Davis, M H" uniqKey="Davis M">M. H. Davis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Belleville, S" uniqKey="Belleville S">S. Belleville</name>
</author>
<author>
<name sortKey="Fontaine, S" uniqKey="Fontaine S">S. Fontaine</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Champod, A S" uniqKey="Champod A">A. S. Champod</name>
</author>
<author>
<name sortKey="Hyde, K" uniqKey="Hyde K">K. Hyde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I Hyde K L" uniqKey="Peretz I">I. Hyde, K. L. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Kolinsky, R" uniqKey="Kolinsky R">R. Kolinsky</name>
</author>
<author>
<name sortKey="Tramo, M" uniqKey="Tramo M">M. Tramo</name>
</author>
<author>
<name sortKey="Labrecque, R" uniqKey="Labrecque R">R. Labrecque</name>
</author>
<author>
<name sortKey="Hublet, C" uniqKey="Hublet C">C. Hublet</name>
</author>
<author>
<name sortKey="Demeurisse, G" uniqKey="Demeurisse G">G. Demeurisse</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perruchet, P" uniqKey="Perruchet P">P. Perruchet</name>
</author>
<author>
<name sortKey="Poulin Charronnat, B" uniqKey="Poulin Charronnat B">B. Poulin-Charronnat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Petersen, S E" uniqKey="Petersen S">S. E. Petersen</name>
</author>
<author>
<name sortKey="Posner, M I" uniqKey="Posner M">M. I. Posner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Platel, H" uniqKey="Platel H">H. Platel</name>
</author>
<author>
<name sortKey="Price, C" uniqKey="Price C">C. Price</name>
</author>
<author>
<name sortKey="Baron, J C" uniqKey="Baron J">J. C. Baron</name>
</author>
<author>
<name sortKey="Wise, R" uniqKey="Wise R">R. Wise</name>
</author>
<author>
<name sortKey="Lambert, J" uniqKey="Lambert J">J. Lambert</name>
</author>
<author>
<name sortKey="Frackowiak, R S" uniqKey="Frackowiak R">R. S. Frackowiak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Price, C J" uniqKey="Price C">C. J. Price</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Raichle, M E" uniqKey="Raichle M">M. E. Raichle</name>
</author>
<author>
<name sortKey="Mintun, M A" uniqKey="Mintun M">M. A. Mintun</name>
</author>
<author>
<name sortKey="Shertz, L D" uniqKey="Shertz L">L. D. Shertz</name>
</author>
<author>
<name sortKey="Fusselman, M J" uniqKey="Fusselman M">M. J. Fusselman</name>
</author>
<author>
<name sortKey="Miezin, F" uniqKey="Miezin F">F. Miezin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rogalsky, C" uniqKey="Rogalsky C">C. Rogalsky</name>
</author>
<author>
<name sortKey="Almeida, D" uniqKey="Almeida D">D. Almeida</name>
</author>
<author>
<name sortKey="Sprouse, J" uniqKey="Sprouse J">J. Sprouse</name>
</author>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rogalsky, C" uniqKey="Rogalsky C">C. Rogalsky</name>
</author>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rogalsky, C" uniqKey="Rogalsky C">C. Rogalsky</name>
</author>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rogalsky, C" uniqKey="Rogalsky C">C. Rogalsky</name>
</author>
<author>
<name sortKey="Poppa, N" uniqKey="Poppa N">N. Poppa</name>
</author>
<author>
<name sortKey="Chen, K H" uniqKey="Chen K">K. H. Chen</name>
</author>
<author>
<name sortKey="Anderson, S W" uniqKey="Anderson S">S. W. Anderson</name>
</author>
<author>
<name sortKey="Damasio, H" uniqKey="Damasio H">H. Damasio</name>
</author>
<author>
<name sortKey="Love, T" uniqKey="Love T">T. Love</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rogalsky, C" uniqKey="Rogalsky C">C. Rogalsky</name>
</author>
<author>
<name sortKey="Rong, F" uniqKey="Rong F">F. Rong</name>
</author>
<author>
<name sortKey="Saberi, K" uniqKey="Saberi K">K. Saberi</name>
</author>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rorden, C" uniqKey="Rorden C">C. Rorden</name>
</author>
<author>
<name sortKey="Brett, M" uniqKey="Brett M">M. Brett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sammler, D" uniqKey="Sammler D">D. Sammler</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sanders, L D" uniqKey="Sanders L">L. D. Sanders</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scheich, H" uniqKey="Scheich H">H. Scheich</name>
</author>
<author>
<name sortKey="Brechmann, A" uniqKey="Brechmann A">A. Brechmann</name>
</author>
<author>
<name sortKey="Brosch, M" uniqKey="Brosch M">M. Brosch</name>
</author>
<author>
<name sortKey="Budinger, E" uniqKey="Budinger E">E. Budinger</name>
</author>
<author>
<name sortKey="Ohl, F W" uniqKey="Ohl F">F. W. Ohl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schellenberg, E G" uniqKey="Schellenberg E">E. G. Schellenberg</name>
</author>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E. Bigand</name>
</author>
<author>
<name sortKey="Poulin Charronnat, B" uniqKey="Poulin Charronnat B">B. Poulin-Charronnat</name>
</author>
<author>
<name sortKey="Garnier, C" uniqKey="Garnier C">C. Garnier</name>
</author>
<author>
<name sortKey="Stevens, C" uniqKey="Stevens C">C. Stevens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schonwiesner, M" uniqKey="Schonwiesner M">M. Schönwiesner</name>
</author>
<author>
<name sortKey="Novitski, N" uniqKey="Novitski N">N. Novitski</name>
</author>
<author>
<name sortKey="Pakarinen, S" uniqKey="Pakarinen S">S. Pakarinen</name>
</author>
<author>
<name sortKey="Carlson, S" uniqKey="Carlson S">S. Carlson</name>
</author>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
<author>
<name sortKey="Naatanen, R" uniqKey="Naatanen R">R. Näätänen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schonwiesner, M" uniqKey="Schonwiesner M">M. Schönwiesner</name>
</author>
<author>
<name sortKey="Rubsamen, R" uniqKey="Rubsamen R">R. Rübsamen</name>
</author>
<author>
<name sortKey="Von Cramon, D Y" uniqKey="Von Cramon D">D. Y. von Cramon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schwartz, M F" uniqKey="Schwartz M">M. F. Schwartz</name>
</author>
<author>
<name sortKey="Faseyitan, O" uniqKey="Faseyitan O">O. Faseyitan</name>
</author>
<author>
<name sortKey="Kim, J" uniqKey="Kim J">J. Kim</name>
</author>
<author>
<name sortKey="Coslett, H B" uniqKey="Coslett H">H. B. Coslett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scott, S K" uniqKey="Scott S">S. K. Scott</name>
</author>
<author>
<name sortKey="Blank, C C" uniqKey="Blank C">C. C. Blank</name>
</author>
<author>
<name sortKey="Rosen, S" uniqKey="Rosen S">S. Rosen</name>
</author>
<author>
<name sortKey="Wise, R J S" uniqKey="Wise R">R. J. S. Wise</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sergent, J" uniqKey="Sergent J">J. Sergent</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shallice, T" uniqKey="Shallice T">T. Shallice</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Slevc, L R" uniqKey="Slevc L">L. R. Slevc</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Slevc, L R" uniqKey="Slevc L">L. R. Slevc</name>
</author>
<author>
<name sortKey="Rosenberg, J C" uniqKey="Rosenberg J">J. C. Rosenberg</name>
</author>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Slevc, L R" uniqKey="Slevc L">L. R. Slevc</name>
</author>
<author>
<name sortKey="Okada, B M" uniqKey="Okada B">B. M. Okada</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Specht, K" uniqKey="Specht K">K. Specht</name>
</author>
<author>
<name sortKey="Willmes, K" uniqKey="Willmes K">K. Willmes</name>
</author>
<author>
<name sortKey="Shah, N J" uniqKey="Shah N">N. J. Shah</name>
</author>
<author>
<name sortKey="Jancke, L" uniqKey="Jancke L">L. Jäncke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spitsyna, G" uniqKey="Spitsyna G">G. Spitsyna</name>
</author>
<author>
<name sortKey="Warren, J E" uniqKey="Warren J">J. E. Warren</name>
</author>
<author>
<name sortKey="Scott, S K" uniqKey="Scott S">S. K. Scott</name>
</author>
<author>
<name sortKey="Turkheimer, F E" uniqKey="Turkheimer F">F. E. Turkheimer</name>
</author>
<author>
<name sortKey="Wise, R J" uniqKey="Wise R">R. J. Wise</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Steinbeis, N" uniqKey="Steinbeis N">N. Steinbeis</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Steinbeis, N" uniqKey="Steinbeis N">N. Steinbeis</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Steinke, W R" uniqKey="Steinke W">W. R. Steinke</name>
</author>
<author>
<name sortKey="Cuddy, L L" uniqKey="Cuddy L">L. L. Cuddy</name>
</author>
<author>
<name sortKey="Holden, R R" uniqKey="Holden R">R. R. Holden</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stroop, J R" uniqKey="Stroop J">J. R. Stroop</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
<author>
<name sortKey="Hugdahl, K" uniqKey="Hugdahl K">K. Hugdahl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
<author>
<name sortKey="Kujala, A" uniqKey="Kujala A">A. Kujala</name>
</author>
<author>
<name sortKey="Alho, K" uniqKey="Alho K">K. Alho</name>
</author>
<author>
<name sortKey="Virtanen, J" uniqKey="Virtanen J">J. Virtanen</name>
</author>
<author>
<name sortKey="Ilmoniemi, R" uniqKey="Ilmoniemi R">R. Ilmoniemi</name>
</author>
<author>
<name sortKey="Naatanen, R" uniqKey="Naatanen R">R. Näätänen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
<author>
<name sortKey="Medvedev, S V" uniqKey="Medvedev S">S. V. Medvedev</name>
</author>
<author>
<name sortKey="Alho, K" uniqKey="Alho K">K. Alho</name>
</author>
<author>
<name sortKey="Pakhomov, S V" uniqKey="Pakhomov S">S. V. Pakhomov</name>
</author>
<author>
<name sortKey="Roudas, M S" uniqKey="Roudas M">M. S. Roudas</name>
</author>
<author>
<name sortKey="Van Zuijen, T L" uniqKey="Van Zuijen T">T. L. Van Zuijen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B. Tillmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B. Tillmann</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E. Bigand</name>
</author>
<author>
<name sortKey="Gosselin, N" uniqKey="Gosselin N">N. Gosselin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Turkeltaub, P E" uniqKey="Turkeltaub P">P. E. Turkeltaub</name>
</author>
<author>
<name sortKey="Coslett, H B" uniqKey="Coslett H">H. B. Coslett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Turkeltaub, P E" uniqKey="Turkeltaub P">P. E. Turkeltaub</name>
</author>
<author>
<name sortKey="Eickhoff, S B" uniqKey="Eickhoff S">S. B. Eickhoff</name>
</author>
<author>
<name sortKey="Laird, A R" uniqKey="Laird A">A. R. Laird</name>
</author>
<author>
<name sortKey="Fox, M" uniqKey="Fox M">M. Fox</name>
</author>
<author>
<name sortKey="Wiener, M" uniqKey="Wiener M">M. Wiener</name>
</author>
<author>
<name sortKey="Fox, P" uniqKey="Fox P">P. Fox</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tzortzis, C" uniqKey="Tzortzis C">C. Tzortzis</name>
</author>
<author>
<name sortKey="Goldblum, M C" uniqKey="Goldblum M">M. C. Goldblum</name>
</author>
<author>
<name sortKey="Dang, M" uniqKey="Dang M">M. Dang</name>
</author>
<author>
<name sortKey="Forette, F" uniqKey="Forette F">F. Forette</name>
</author>
<author>
<name sortKey="Boller, F" uniqKey="Boller F">F. Boller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vaden, K I" uniqKey="Vaden K">K. I. Vaden</name>
</author>
<author>
<name sortKey="Kuchinsky, S E" uniqKey="Kuchinsky S">S. E. Kuchinsky</name>
</author>
<author>
<name sortKey="Cute, S L" uniqKey="Cute S">S. L. Cute</name>
</author>
<author>
<name sortKey="Ahlstrom, J B" uniqKey="Ahlstrom J">J. B. Ahlstrom</name>
</author>
<author>
<name sortKey="Dubno, J R" uniqKey="Dubno J">J. R. Dubno</name>
</author>
<author>
<name sortKey="Eckert, M A" uniqKey="Eckert M">M. A. Eckert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Kriegstein, K" uniqKey="Von Kriegstein K">K. Von Kriegstein</name>
</author>
<author>
<name sortKey="Eger, E" uniqKey="Eger E">E. Eger</name>
</author>
<author>
<name sortKey="Kleinschmidt, A" uniqKey="Kleinschmidt A">A. Kleinschmidt</name>
</author>
<author>
<name sortKey="Giraud, A L" uniqKey="Giraud A">A. L. Giraud</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="White, T" uniqKey="White T">T. White</name>
</author>
<author>
<name sortKey="O Leary, D" uniqKey="O Leary D">D. O'Leary</name>
</author>
<author>
<name sortKey="Magnotta, V" uniqKey="Magnotta V">V. Magnotta</name>
</author>
<author>
<name sortKey="Arndt, S" uniqKey="Arndt S">S. Arndt</name>
</author>
<author>
<name sortKey="Flaum, M" uniqKey="Flaum M">M. Flaum</name>
</author>
<author>
<name sortKey="Andreasen, N C" uniqKey="Andreasen N">N. C. Andreasen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilson, S M" uniqKey="Wilson S">S. M. Wilson</name>
</author>
<author>
<name sortKey="Demarco, A T" uniqKey="Demarco A">A. T. DeMarco</name>
</author>
<author>
<name sortKey="Henry, M L" uniqKey="Henry M">M. L. Henry</name>
</author>
<author>
<name sortKey="Gesierich, B" uniqKey="Gesierich B">B. Gesierich</name>
</author>
<author>
<name sortKey="Babiak, M" uniqKey="Babiak M">M. Babiak</name>
</author>
<author>
<name sortKey="Mandelli, M L" uniqKey="Mandelli M">M. L. Mandelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wong, C" uniqKey="Wong C">C. Wong</name>
</author>
<author>
<name sortKey="Gallate, J" uniqKey="Gallate J">J. Gallate</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xu, J" uniqKey="Xu J">J. Xu</name>
</author>
<author>
<name sortKey="Kemeny, S" uniqKey="Kemeny S">S. Kemeny</name>
</author>
<author>
<name sortKey="Park, G" uniqKey="Park G">G. Park</name>
</author>
<author>
<name sortKey="Frattali, C" uniqKey="Frattali C">C. Frattali</name>
</author>
<author>
<name sortKey="Braun, A" uniqKey="Braun A">A. Braun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yamadori, A" uniqKey="Yamadori A">A. Yamadori</name>
</author>
<author>
<name sortKey="Osumi, Y" uniqKey="Osumi Y">Y. Osumi</name>
</author>
<author>
<name sortKey="Masuhara, S" uniqKey="Masuhara S">S. Masuhara</name>
</author>
<author>
<name sortKey="Okubo, M" uniqKey="Okubo M">M. Okubo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Belin, P" uniqKey="Belin P">P. Belin</name>
</author>
<author>
<name sortKey="Penhune, V B" uniqKey="Penhune V">V. B. Penhune</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Gandour, J T" uniqKey="Gandour J">J. T. Gandour</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zheng, Z Z" uniqKey="Zheng Z">Z. Z. Zheng</name>
</author>
<author>
<name sortKey="Munhall, K G" uniqKey="Munhall K">K. G. Munhall</name>
</author>
<author>
<name sortKey="Johnsrude, I S" uniqKey="Johnsrude I">I. S. Johnsrude</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26321976</article-id>
<article-id pub-id-type="pmc">4531212</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2015.01138</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>LaCroix</surname>
<given-names>Arianna N.</given-names>
</name>
<uri xlink:type="simple" xlink:href="http://loop.frontiersin.org/people/258216/overview"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Diaz</surname>
<given-names>Alvaro F.</given-names>
</name>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Rogalsky</surname>
<given-names>Corianne</given-names>
</name>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://loop.frontiersin.org/people/154972/overview"></uri>
</contrib>
</contrib-group>
<aff>
<institution>Communication Neuroimaging and Neuroscience Laboratory, Department of Speech and Hearing Science, Arizona State University</institution>
<country>Tempe, AZ, USA</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: McNeel Gordon Jantzen, Western Washington University, USA</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Lutz Jäncke, University of Zurich, Switzerland; Yi Du, McGill University, Canada</p>
</fn>
<corresp id="fn001">*Correspondence: Corianne Rogalsky, Department of Speech and Hearing Science, Arizona State University, PO Box 570102, Tempe, AZ 85287-0102, USA
<email xlink:type="simple">corianne.rogalsky@asu.edu</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Psychology</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>11</day>
<month>8</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>6</volume>
<elocation-id>1138</elocation-id>
<history>
<date date-type="received">
<day>08</day>
<month>4</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>22</day>
<month>7</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2015 LaCroix, Diaz and Rogalsky.</copyright-statement>
<copyright-year>2015</copyright-year>
<copyright-holder>LaCroix, Diaz and Rogalsky</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.</p>
</abstract>
<kwd-group>
<kwd>music perception</kwd>
<kwd>speech perception</kwd>
<kwd>fMRI</kwd>
<kwd>meta-analysis</kwd>
<kwd>Broca's area</kwd>
</kwd-group>
<counts>
<fig-count count="5"></fig-count>
<table-count count="2"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="129"></ref-count>
<page-count count="19"></page-count>
<word-count count="13903"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>The relationship between the neurobiology of speech and music has been investigated and debated for nearly a century (Henschen,
<xref rid="B33" ref-type="bibr">1924</xref>
; Luria et al.,
<xref rid="B55" ref-type="bibr">1965</xref>
; Frances et al.,
<xref rid="B26" ref-type="bibr">1973</xref>
; Peretz,
<xref rid="B74" ref-type="bibr">2006</xref>
; Besson et al.,
<xref rid="B7" ref-type="bibr">2011</xref>
). Early evidence from case studies of brain-damaged individuals suggested a dissociation of aphasia and amusia (Yamadori et al.,
<xref rid="B126" ref-type="bibr">1977</xref>
; Basso and Capitani,
<xref rid="B6" ref-type="bibr">1985</xref>
; Peretz et al.,
<xref rid="B78" ref-type="bibr">1994</xref>
,
<xref rid="B75" ref-type="bibr">1997</xref>
; Steinke et al.,
<xref rid="B109" ref-type="bibr">1997</xref>
; Patel et al.,
<xref rid="B72" ref-type="bibr">1998b</xref>
; Tzortzis et al.,
<xref rid="B119" ref-type="bibr">2000</xref>
; Peretz and Hyde,
<xref rid="B77" ref-type="bibr">2003</xref>
). However, more recent patient work examining specific aspects of speech and music processing indicates at least some overlap in deficits across the two domains. For example, patients with Broca's aphasia exhibit deficits in both linguistic and harmonic structure processing, and patients with amusia exhibit pitch deficits in both speech and music (Patel,
<xref rid="B65" ref-type="bibr">2003</xref>
,
<xref rid="B66" ref-type="bibr">2005</xref>
,
<xref rid="B70" ref-type="bibr">2013</xref>
). Electrophysiological (e.g., ERP) studies also suggest shared resources between speech and music; for example, syntactic and harmonic violations elicit indistinguishable ERP responses such as the P600 response, which is hypothesized to originate from anterior temporal or inferior frontal regions (Patel et al.,
<xref rid="B71" ref-type="bibr">1998a</xref>
; Maillard et al.,
<xref rid="B58" ref-type="bibr">2011</xref>
; Sammler et al.,
<xref rid="B92" ref-type="bibr">2011</xref>
). Music perception also interacts with morphosyntactic representations of speech: the early right anterior negativity (ERAN) ERP component sensitive to chord irregularities interacts with the left anterior negativity's (LAN's) response to morphosyntactic violations or irregularities (Koelsch et al.,
<xref rid="B51" ref-type="bibr">2005</xref>
; Steinbeis and Koelsch,
<xref rid="B108" ref-type="bibr">2008b</xref>
; Koelsch,
<xref rid="B49" ref-type="bibr">2011</xref>
).</p>
<p>Several studies of trained musicians and individuals with absolute pitch also suggest an overlap between speech and music as there are carry-over effects of musical training onto speech processing performance (e.g., Oechslin et al.,
<xref rid="B64" ref-type="bibr">2010</xref>
; Elmer et al.,
<xref rid="B22" ref-type="bibr">2012</xref>
; for a review see Besson et al.,
<xref rid="B7" ref-type="bibr">2011</xref>
).</p>
<p>There is a rich literature of electrophysiological and behavioral work regarding the relationship between music and language (for reviews see Besson et al.,
<xref rid="B7" ref-type="bibr">2011</xref>
; Koelsch,
<xref rid="B49" ref-type="bibr">2011</xref>
; Patel,
<xref rid="B69" ref-type="bibr">2012</xref>
,
<xref rid="B70" ref-type="bibr">2013</xref>
; Tillmann,
<xref rid="B115" ref-type="bibr">2012</xref>
; Slevc and Okada,
<xref rid="B104" ref-type="bibr">2015</xref>
). This work has provided numerous pieces of evidence of overlap between the neural resources of speech and music, including in the brainstem, auditory cortex and frontal cortical regions (Koelsch,
<xref rid="B49" ref-type="bibr">2011</xref>
). This high degree of interaction between speech and music coincides with Koelsch et al.'s view that speech and music, and therefore the brain networks supporting them, cannot be separated because of their numerous shared properties, i.e., there is a “music-speech continuum” (Koelsch and Friederici,
<xref rid="B50" ref-type="bibr">2003</xref>
; Koelsch and Siebel,
<xref rid="B53" ref-type="bibr">2005</xref>
; Koelsch,
<xref rid="B49" ref-type="bibr">2011</xref>
). However, evidence from brain-damaged patients suggests that music and speech abilities may dissociate, although there are also reports to the contrary (see above). Patel's (
<xref rid="B65" ref-type="bibr">2003</xref>
,
<xref rid="B67" ref-type="bibr">2008</xref>
,
<xref rid="B69" ref-type="bibr">2012</xref>
) Shared Syntactic Integration Resource Hypothesis (SSIRH) is in many ways a remedy to the shared-vs.-distinct debate in the realm of structural/syntactic processing. Stemming in part from the patient and electrophysiological findings, Patel proposes that language and music utilize overlapping cognitive resources but also have unique neural representations. Patel proposes that the shared resources reside in the inferior frontal lobe (i.e., Broca's area) and that distinct processes for speech and music reside in the temporal lobes (Patel,
<xref rid="B65" ref-type="bibr">2003</xref>
).</p>
<p>The emergence of functional neuroimaging techniques such as fMRI has continued to fuel the debate over the contributions of shared vs. distinct neural resources for speech and music. fMRI lacks the high temporal resolution of electrophysiological methods and can introduce high levels of ambient noise, potentially contaminating recorded responses to auditory stimuli. However, the greater spatial resolution of fMRI may provide additional information regarding the neural correlates of speech and music, and MRI scanner noise can be minimized using sparse sampling scanning protocols and reduced-noise continuous scanning techniques (Peelle et al.,
<xref rid="B73" ref-type="bibr">2010</xref>
). Hundreds of fMRI papers have investigated musical processes, and thousands have investigated the neural substrates of speech. However, to our knowledge and as Slevc and Okada (
<xref rid="B104" ref-type="bibr">2015</xref>
) noted, only a few studies have directly compared activations to hierarchical speech and music (i.e., sentences and melodies) using fMRI (Abrams et al.,
<xref rid="B1" ref-type="bibr">2011</xref>
; Fedorenko et al.,
<xref rid="B23" ref-type="bibr">2011</xref>
; Rogalsky et al.,
<xref rid="B90" ref-type="bibr">2011</xref>
). Findings from these studies conflict with the ERP literature (e.g., Koelsch,
<xref rid="B48" ref-type="bibr">2005</xref>
; Koelsch et al.,
<xref rid="B51" ref-type="bibr">2005</xref>
) in that the fMRI studies identify distinct neuroanatomy and/or activation response patterns for music and speech processing, although there are notable differences across these studies, particularly relating to the involvement of Broca's area in speech and music.</p>
<p>The differences found across neuroimaging studies regarding the overlap of the neural correlates of speech and music likely arise from the tasks used in each of these studies. For example, Rogalsky et al. used passive listening and found no activation of Broca's area to either speech or music compared to rest. Conversely, Fedorenko et al. used a reading/memory probe task for sentences and an emotional ranking for music and found Broca's area to be preferentially activated by speech but also activated by music compared to rest. There is also evidence that the P600, the ERP component that is sensitive to both speech and music violations, is only present when subjects are actively attending to the stimulus (Besson and Faita,
<xref rid="B8" ref-type="bibr">1995</xref>
; Brattico et al.,
<xref rid="B10" ref-type="bibr">2006</xref>
; Koelsch,
<xref rid="B49" ref-type="bibr">2011</xref>
). The inclusion of a task may affect not only the brain regions involved, but also reliability of results: an fMRI study of visual tasks reported that tasks with high attentional loads also had the highest reliability measures compared to passive conditions (Specht et al.,
<xref rid="B105" ref-type="bibr">2003</xref>
). This finding in the visual domain suggests the possibility that greater (within and between) subject variability in passive listening conditions may lead to null effects in group-averaged results.</p>
<p>Given the scarcity of within-subject neuroimaging studies of speech and music, it is particularly critical to examine across-study, between-subjects findings to build a better picture regarding the neurobiology of speech and music. A major barrier in interpreting between-subject neuroimaging results is the variety of paradigms and tasks used to investigate speech and music neural resources. Most scientists studying the neurobiology of speech and/or music would likely agree that they are interested in understanding the neural computations employed in naturalistic situations that are driven by the input of speech or music, and the differences between the two. However, explicit tasks such as discrimination or error detection are often used to drive brain responses in part by increasing the subject's attention to the stimuli and/or particular aspects of the stimuli. This may be problematic: the influence of task demands on the functional neuroanatomy recruited by speech is well documented (e.g., Baker et al.,
<xref rid="B5" ref-type="bibr">1981</xref>
; Noesselt et al.,
<xref rid="B62" ref-type="bibr">2003</xref>
; Scheich et al.,
<xref rid="B94" ref-type="bibr">2007</xref>
; Geiser et al.,
<xref rid="B30" ref-type="bibr">2008</xref>
; Rogalsky and Hickok,
<xref rid="B87" ref-type="bibr">2009</xref>
) and both speech and music processing engage domain-general cognitive, memory, and motor networks in likely distinct, but overlapping ways (Besson et al.,
<xref rid="B7" ref-type="bibr">2011</xref>
). Task effects are known to alter inter- and intra-hemispheric activations to speech (Noesselt et al.,
<xref rid="B62" ref-type="bibr">2003</xref>
; Tervaniemi and Hugdahl,
<xref rid="B112" ref-type="bibr">2003</xref>
; Scheich et al.,
<xref rid="B94" ref-type="bibr">2007</xref>
; Geiser et al.,
<xref rid="B30" ref-type="bibr">2008</xref>
; Rogalsky and Hickok,
<xref rid="B87" ref-type="bibr">2009</xref>
). For example, there is evidence that right hemisphere fronto-temporal-parietal networks are significantly activated during an explicit task (rhythm judgment) with speech stimuli but not during passive listening to the same stimuli (Geiser et al.,
<xref rid="B30" ref-type="bibr">2008</xref>
). The neurobiology of speech perception, and of auditory processing more generally, can also vary based on the type of explicit task, even when the same stimuli are used across tasks (Platel et al.,
<xref rid="B82" ref-type="bibr">1997</xref>
; Ni et al.,
<xref rid="B61" ref-type="bibr">2000</xref>
; Von Kriegstein et al.,
<xref rid="B121" ref-type="bibr">2003</xref>
; Geiser et al.,
<xref rid="B30" ref-type="bibr">2008</xref>
; Rogalsky and Hickok,
<xref rid="B87" ref-type="bibr">2009</xref>
). This phenomenon is also well documented in the visual domain (Corbetta et al.,
<xref rid="B17" ref-type="bibr">1990</xref>
; Chawla et al.,
<xref rid="B16" ref-type="bibr">1999</xref>
; Cant and Goodale,
<xref rid="B13" ref-type="bibr">2007</xref>
). For example, in the speech domain, syllable discrimination and single-word comprehension performance (as measured by a word-picture matching task) doubly dissociate in stroke patients with aphasia (Baker et al.,
<xref rid="B5" ref-type="bibr">1981</xref>
). Syllable discrimination implicates left-lateralized dorsal frontal-parietal networks, while speech comprehension and passive listening tasks engage mostly mid and posterior temporal regions (Dronkers et al.,
<xref rid="B19" ref-type="bibr">2004</xref>
; Schwartz et al.,
<xref rid="B98" ref-type="bibr">2012</xref>
; Rogalsky et al.,
<xref rid="B89" ref-type="bibr">2015</xref>
). Similarly, contextual effects have been reported regarding pitch: when pitch is needed for linguistic processing, such as in a tonal language, there is a left hemisphere auditory cortex bias, while pitch processing in a melody discrimination task yields a right hemisphere bias (Zatorre and Gandour,
<xref rid="B128" ref-type="bibr">2008</xref>
). Another example of the importance of context in pitch processing is in vowel perception: vowels and tones have similar acoustic features and when presented in isolation (i.e., just a vowel, not in a consonant-vowel (CV) pair as would typically be perceived in everyday life) no significant differences have been found in temporal lobe activations (Jäncke et al.,
<xref rid="B46" ref-type="bibr">2002</xref>
). However, there is greater superior temporal activation for CVs than for tones, suggesting that the context of the vowel modulates the temporal networks activated (Jäncke et al.,
<xref rid="B46" ref-type="bibr">2002</xref>
).</p>
<p>One way to reduce the influence of a particular paradigm or task is to use meta-analysis techniques to identify brain areas that are consistently activated by a particular stimulus (e.g., speech, music) across a range of tasks and paradigms. Besson and Schön (
<xref rid="B9" ref-type="bibr">2001</xref>
) noted that meta-analyses of neuroimaging data would provide critical insight into the relationship between the neurobiology of language and music. They also suggested that meta-analyses of music-related neuroimaging data were not feasible due to the sparse number of relevant studies. Now, almost 15 years later, there is a large enough corpus of neuroimaging work to conduct quantitative meta-analyses of music processing with sufficient power. In fact, such meta-analyses have begun to emerge, for specific aspects of musical processing, in relation to specific cognitive functions [e.g., Slevc and Okada's (
<xref rid="B104" ref-type="bibr">2015</xref>
) cognitive control meta-analysis in relation to pitch and harmonic ambiguity], in addition to extensive qualitative reviews (e.g., Tervaniemi,
<xref rid="B111" ref-type="bibr">2001</xref>
; Jäncke,
<xref rid="B45" ref-type="bibr">2008</xref>
; Besson et al.,
<xref rid="B7" ref-type="bibr">2011</xref>
; Grahn,
<xref rid="B31" ref-type="bibr">2012</xref>
; Slevc,
<xref rid="B102" ref-type="bibr">2012</xref>
; Tillmann,
<xref rid="B115" ref-type="bibr">2012</xref>
).</p>
<p>The present meta-analysis addresses the following outstanding questions: (1) has functional neuroimaging identified significant distinctions between the functional neuroanatomy of speech and music and (2) how do specific types of tasks affect how music recruits speech-processing networks? We then discuss the implications of our findings for future investigations of the neural computations of language and music.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and methods</title>
<p>An exhaustive literature search was conducted via Google Scholar to locate published fMRI and PET studies reporting activations to musical stimuli. The following search terms were used to locate papers about music: “fMRI music,” “fMRI and music,” “fMRI pitch,” and “fMRI rhythm.” To the best of our knowledge, all relevant journal research articles have been collected for the purposes of this meta-analysis.</p>
<p>All journal articles that became part of the meta-analysis reported peak coordinates for relevant contrasts. Peak coordinates reported in the papers identified by the searches were divided into four categories that encompassed the vast majority of paradigms used in the articles: music passive listening, music discrimination, music error detection, and music memory
<xref ref-type="fn" rid="fn0001">
<sup>1</sup>
</xref>
. Passive listening studies included papers in which participants listened to instrumental melodies or tone sequences with no explicit task as well as studies that asked participants to press a button when the stimulus concluded. Music discrimination studies included those that asked participants to compare two musical stimuli (e.g., related/unrelated, same/different). Music error detection studies included studies that instructed participants to identify a dissonant melody, unexpected note or deviant instrument. The music memory category included papers that asked participants to complete an n-back task, familiarity judgment, or rehearsal (covert or overt) of a melodic stimulus.</p>
<p>Only coordinates from healthy adult, non-musician control subjects were included. In studies that included both a patient group and a control group, only the control group's coordinates were included. Studies were excluded from the final activation likelihood estimate (ALE) analyses if their data did not meet the requirements for inclusion in ALE calculations, for any of the following reasons: coordinates not reported, only approximate anatomical locations reported, stereotaxic space not reported, inappropriate contrasts (e.g., speech > music only), activations corresponding to participants' emotional reactions to music, studies of professional/trained musicians, and studies of children.</p>
<p>In addition to collecting the music-related coordinates via an exhaustive search, we also gathered a representative sample of fMRI and PET studies that reported coordinates for passive listening to intelligible speech compared to some type of non-speech control (e.g., tones, noise, rest, visual stimuli). Coordinates corresponding to the following tasks were also extracted: speech discrimination, speech detection, and speech memory. These speech conditions serve as comparison groups for the music conditions. Coordinates for this purpose were extracted from six sources: five well-cited review papers, Price (
<xref rid="B84" ref-type="bibr">2010</xref>
), Zheng et al. (
<xref rid="B129" ref-type="bibr">2010</xref>
), Turkeltaub and Coslett (
<xref rid="B117" ref-type="bibr">2010</xref>
), Rogalsky et al. (
<xref rid="B90" ref-type="bibr">2011</xref>
), and Adank (
<xref rid="B2" ref-type="bibr">2012</xref>
) and the brain imaging meta-analysis database Neurosynth.org. The Price (
<xref rid="B84" ref-type="bibr">2010</xref>
), Zheng et al. (
<xref rid="B129" ref-type="bibr">2010</xref>
), Turkeltaub and Coslett (
<xref rid="B117" ref-type="bibr">2010</xref>
), Rogalsky et al. (
<xref rid="B90" ref-type="bibr">2011</xref>
), and Adank (
<xref rid="B2" ref-type="bibr">2012</xref>
) papers yielded a total of 42 studies that fit the aforementioned criteria. An additional 49 relevant papers were found using the Neurosynth.org database with the search criteria “speech perception,” “speech processing,” “speech,” and “auditory working memory.” These methods resulted in 91 studies in which control subjects passively listened to speech or completed an auditory verbal memory, speech discrimination, or speech detection task. The passive listening speech condition included studies in which participants listened to speech stimuli with no explicit task as well as studies that asked participants to press a button when the stimulus concluded. Papers were included in the speech discrimination category if they asked participants to compare two speech stimuli (e.g., a same/different task). The speech detection category contained papers that asked participants to detect semantic, intelligibility, or grammatical properties or detect phonological, semantic, or syntactic errors. Studies included in the speech memory category were papers that instructed participants to complete an n-back task or rehearsal (covert or overt) of a speech (auditory verbal) stimulus.</p>
<p>Analyses were conducted using the meta-analysis software GingerALE to calculate ALEs for each condition based on the coordinates collected (Eickhoff et al.,
<xref rid="B21" ref-type="bibr">2009</xref>
,
<xref rid="B20" ref-type="bibr">2012</xref>
; Turkeltaub et al.,
<xref rid="B118" ref-type="bibr">2012</xref>
). All results are reported in Talairach space. Coordinates originally reported in MNI space were transformed to Talairach space using GingerALE's stereotaxic coordinate converter. Once all coordinates were in Talairach space, each condition was analyzed individually using the following GingerALE parameters: less conservative (larger) mask size, Turkeltaub nonadditive ALE method (Turkeltaub et al.,
<xref rid="B118" ref-type="bibr">2012</xref>
), subject-based FWHM (Eickhoff et al.,
<xref rid="B21" ref-type="bibr">2009</xref>
), corrected threshold of
<italic>p</italic>
< 0.05 using false discovery rate (FDR), and a minimum cluster volume of 200 mm
<sup>3</sup>
. Subtraction contrasts between two conditions were obtained by directly comparing the two conditions' ALE maps. To correct for multiple comparisons, each contrast's threshold was set to
<italic>p</italic>
< 0.05, whole-brain corrected following the FDR algorithm with 10,000 p-value permutations, and a minimum cluster size of 200 mm
<sup>3</sup>
(Eickhoff et al.,
<xref rid="B21" ref-type="bibr">2009</xref>
). ALE statistical maps were rendered onto the Colin Talairach template brain using the software MRIcron (Rorden and Brett,
<xref rid="B91" ref-type="bibr">2000</xref>
).</p>
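<p>The core of the nonadditive ALE computation described above can be sketched as follows. This is an illustrative simplification, not GingerALE's implementation: the grid shape, voxel size, and fixed FWHM are assumptions made here for brevity (GingerALE derives each study's FWHM from its sample size), and the function name <monospace>ale_map</monospace> is ours.</p>

```python
import numpy as np

def ale_map(study_foci, shape=(91, 109, 91), fwhm_mm=10.0, voxel_mm=2.0):
    """Sketch of a nonadditive activation likelihood estimate (ALE) map.

    study_foci: list of per-study arrays of peak coordinates in voxel space.
    Each focus is modeled as a 3D Gaussian "modeled activation" (MA) map;
    within a study, overlapping foci are combined nonadditively (voxel-wise
    maximum, so nearby peaks from one study do not inflate the estimate),
    and the per-study MA maps are merged with the probabilistic union
    1 - prod(1 - MA_i), following the logic of Turkeltaub et al. (2012).
    """
    # Convert FWHM (mm) to the Gaussian sigma in voxel units.
    sigma = (fwhm_mm / voxel_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    grid = np.indices(shape).astype(float)   # 3 x X x Y x Z voxel index grid
    not_active = np.ones(shape)              # running product of (1 - MA)
    for foci in study_foci:
        ma = np.zeros(shape)
        for focus in foci:
            d2 = sum((grid[k] - focus[k]) ** 2 for k in range(3))
            g = np.exp(-d2 / (2.0 * sigma ** 2))
            ma = np.maximum(ma, g)           # nonadditive within a study
        not_active *= (1.0 - ma)
    return 1.0 - not_active                  # union across studies
```

<p>In practice, the resulting map is then compared against a null distribution of randomly placed foci and thresholded (here, FDR-corrected <italic>p</italic> < 0.05 with a 200 mm<sup>3</sup> minimum cluster size); that step is omitted from the sketch.</p>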
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Search results</title>
<p>The literature search yielded 80 music studies (76 fMRI studies, 4 PET studies) and 91 relevant speech papers (88 fMRI, 3 PET studies) meeting the inclusion criteria described above. Table
<xref ref-type="table" rid="T1">1</xref>
indicates the number of studies, subjects, and coordinates in each of the four music conditions, as well as for each of the four speech conditions.</p>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Activations included in the present meta-analysis</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left" rowspan="1" colspan="1">
<bold>Condition</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>Number of studies</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>Number of subjects</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>Number of coordinates</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music passive listening</td>
<td valign="top" align="center" rowspan="1" colspan="1">41</td>
<td valign="top" align="center" rowspan="1" colspan="1">540</td>
<td valign="top" align="center" rowspan="1" colspan="1">526</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music discrimination</td>
<td valign="top" align="center" rowspan="1" colspan="1">12</td>
<td valign="top" align="center" rowspan="1" colspan="1">211</td>
<td valign="top" align="center" rowspan="1" colspan="1">168</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music error detection</td>
<td valign="top" align="center" rowspan="1" colspan="1">25</td>
<td valign="top" align="center" rowspan="1" colspan="1">355</td>
<td valign="top" align="center" rowspan="1" colspan="1">489</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music memory</td>
<td valign="top" align="center" rowspan="1" colspan="1">14</td>
<td valign="top" align="center" rowspan="1" colspan="1">190</td>
<td valign="top" align="center" rowspan="1" colspan="1">207</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech passive listening</td>
<td valign="top" align="center" rowspan="1" colspan="1">31</td>
<td valign="top" align="center" rowspan="1" colspan="1">454</td>
<td valign="top" align="center" rowspan="1" colspan="1">337</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech discrimination</td>
<td valign="top" align="center" rowspan="1" colspan="1">31</td>
<td valign="top" align="center" rowspan="1" colspan="1">405</td>
<td valign="top" align="center" rowspan="1" colspan="1">318</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech detection</td>
<td valign="top" align="center" rowspan="1" colspan="1">17</td>
<td valign="top" align="center" rowspan="1" colspan="1">317</td>
<td valign="top" align="center" rowspan="1" colspan="1">248</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech memory</td>
<td valign="top" align="center" rowspan="1" colspan="1">19</td>
<td valign="top" align="center" rowspan="1" colspan="1">259</td>
<td valign="top" align="center" rowspan="1" colspan="1">324</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Passive listening to music vs. passive listening to speech</title>
<p>The music passive listening ALE identified large swaths of voxels bilaterally, spanning the length of the superior temporal gyri (STG), as well as additional smaller clusters in the bilateral inferior frontal gyrus (pars opercularis), bilateral postcentral gyrus, bilateral insula, left inferior parietal lobule, left medial frontal gyrus, right precentral gyrus, and right middle frontal gyrus (Figure
<xref ref-type="fig" rid="F1">1A</xref>
, Table
<xref ref-type="table" rid="T2">2</xref>
). The speech passive listening ALE also identified bilateral superior temporal regions as well as bilateral precentral and inferior frontal (pars opercularis) regions. Notably, the speech ALE identified bilateral anterior STG, bilateral superior temporal sulcus (i.e., both banks, the middle and superior temporal gyri) and left inferior frontal gyrus (pars triangularis) regions not identified by the music ALE (Figure
<xref ref-type="fig" rid="F1">1A</xref>
, Table
<xref ref-type="table" rid="T2">2</xref>
). ALEs used a threshold of
<italic>p</italic>
< 0.05, FDR corrected.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>(A)</bold>
Representative sagittal slices of the ALE for passive listening to speech,
<italic>p</italic>
< 0.05, corrected, overlaid on top of the passive music listening ALE.
<bold>(B)</bold>
Speech vs. music passive listening contrasts results,
<italic>p</italic>
< 0.05 corrected.</p>
</caption>
<graphic xlink:href="fpsyg-06-01138-g0001"></graphic>
</fig>
<table-wrap id="T2" position="float">
<label>Table 2</label>
<caption>
<p>
<bold>Locations, peaks and cluster size for significant voxel clusters for each condition's ALE and for each contrast of interest</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left" rowspan="1" colspan="1">
<bold>Condition</bold>
</th>
<th valign="top" align="left" rowspan="1" colspan="1">
<bold>Anatomical locations</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>Peak coordinates</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>Voxels</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music passive listening</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars opercularis)
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−46, 10, 26</td>
<td valign="top" align="center" rowspan="1" colspan="1">32</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left medial frontal gyrus
<sup>*</sup>
, left subcallosal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−2, 26,−14</td>
<td valign="top" align="center" rowspan="1" colspan="1">65</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left medial frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−2, 2, 62</td>
<td valign="top" align="center" rowspan="1" colspan="1">48</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left postcentral gyrus
<sup>*</sup>
, left inferior parietal lobule</td>
<td valign="top" align="center" rowspan="1" colspan="1">−34,−36, 54</td>
<td valign="top" align="center" rowspan="1" colspan="1">27</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left transverse temporal gyrus, left middle temporal gyrus, left insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">−52,−20, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">2073</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">48, 10, 28</td>
<td valign="top" align="center" rowspan="1" colspan="1">43</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right precentral gyrus
<sup>*</sup>
, right postcentral gyrus, right middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">52,−2, 44</td>
<td valign="top" align="center" rowspan="1" colspan="1">173</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right transverse temporal gyrus, right middle temporal gyrus, right insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">58,−20, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">2154</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right insula
<sup>*</sup>
, right inferior frontal gyrus, right precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">42, 14, 0</td>
<td valign="top" align="center" rowspan="1" colspan="1">206</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right lingual gyrus
<sup>*</sup>
, right culmen</td>
<td valign="top" align="center" rowspan="1" colspan="1">16,−54,−2</td>
<td valign="top" align="center" rowspan="1" colspan="1">27</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music discrimination</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left medial frontal gyrus
<sup>*</sup>
, left middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−8,−4, 58</td>
<td valign="top" align="center" rowspan="1" colspan="1">224</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precentral gyrus
<sup>*</sup>
, left postcentral gyrus, left inferior parietal lobule</td>
<td valign="top" align="center" rowspan="1" colspan="1">−48,−12, 48</td>
<td valign="top" align="center" rowspan="1" colspan="1">259</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precentral gyrus
<sup>*</sup>
, left inferior frontal gyrus (pars opercularis)</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50, 2, 26</td>
<td valign="top" align="center" rowspan="1" colspan="1">67</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left transverse temporal gyrus, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−54,−16, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">239</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−58,−34, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">92</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left insula
<sup>*</sup>
, left inferior frontal gyrus (pars triangularis)</td>
<td valign="top" align="center" rowspan="1" colspan="1">−34, 22, 2</td>
<td valign="top" align="center" rowspan="1" colspan="1">48</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left cerebellum
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−28,−62,−24</td>
<td valign="top" align="center" rowspan="1" colspan="1">127</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
, right middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">52, 12, 28</td>
<td valign="top" align="center" rowspan="1" colspan="1">58</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right precentral gyrus
<sup>*</sup>
, right middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">46,−6, 44</td>
<td valign="top" align="center" rowspan="1" colspan="1">170</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">62,−24, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">310</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right precentral gyrus, right insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">50, 6,−2</td>
<td valign="top" align="center" rowspan="1" colspan="1">91</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music error detection</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left medial frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−4,−4, 58</td>
<td valign="top" align="center" rowspan="1" colspan="1">49</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left transverse temporal gyrus, left postcentral gyrus, left insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50,−18, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">1448</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior parietal lobule
<sup>*</sup>
, left supramarginal gyrus, left angular gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−40,−48, 40</td>
<td valign="top" align="center" rowspan="1" colspan="1">41</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left lentiform nucleus
<sup>*</sup>
, left putamen</td>
<td valign="top" align="center" rowspan="1" colspan="1">−22, 6, 10</td>
<td valign="top" align="center" rowspan="1" colspan="1">263</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">36, 42, 18</td>
<td valign="top" align="center" rowspan="1" colspan="1">43</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle frontal gyrus
<sup>*</sup>
, right precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">32,−4, 56</td>
<td valign="top" align="center" rowspan="1" colspan="1">35</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior frontal gyrus
<sup>*</sup>
, right medial frontal gyrus, left superior frontal gyrus, left medial frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">2, 10, 52</td>
<td valign="top" align="center" rowspan="1" colspan="1">95</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right transverse temporal gyrus, right insula, right precentral gyrus, right middle temporal gyrus, right claustrum</td>
<td valign="top" align="center" rowspan="1" colspan="1">50,−18, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">1228</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right parahippocampal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">22,−14,−12</td>
<td valign="top" align="center" rowspan="1" colspan="1">36</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior parietal lobule
<sup>*</sup>
, right supramarginal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">36,−44, 40</td>
<td valign="top" align="center" rowspan="1" colspan="1">103</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right insula
<sup>*</sup>
, right inferior frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">32, 22, 12</td>
<td valign="top" align="center" rowspan="1" colspan="1">329</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right lentiform nucleus
<sup>*</sup>
, right putamen, right caudate</td>
<td valign="top" align="center" rowspan="1" colspan="1">18, 6, 12</td>
<td valign="top" align="center" rowspan="1" colspan="1">144</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right thalamus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">12,−16, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">33</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right cerebellum
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">26,−50,−26</td>
<td valign="top" align="center" rowspan="1" colspan="1">28</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music memory</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars opercularis)
<sup>*</sup>
, left precentral gyrus, left middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50, 4, 26</td>
<td valign="top" align="center" rowspan="1" colspan="1">206</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars triangularis
<sup>*</sup>
, pars orbitalis), left insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">−34, 24,−2</td>
<td valign="top" align="center" rowspan="1" colspan="1">57</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars triangularis)
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−44, 26, 10</td>
<td valign="top" align="center" rowspan="1" colspan="1">25</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left medial frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−4, 52, 12</td>
<td valign="top" align="center" rowspan="1" colspan="1">31</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−32, 4, 54</td>
<td valign="top" align="center" rowspan="1" colspan="1">29</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precentral gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−44,−10, 42</td>
<td valign="top" align="center" rowspan="1" colspan="1">33</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior frontal gyrus
<sup>*</sup>
, left medial frontal gyrus, right superior frontal gyrus, right medial frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">0, 12, 50</td>
<td valign="top" align="center" rowspan="1" colspan="1">373</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle temporal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50,−20,−10</td>
<td valign="top" align="center" rowspan="1" colspan="1">72</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle temporal gyrus
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−46, 4,−18</td>
<td valign="top" align="center" rowspan="1" colspan="1">35</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior parietal lobule
<sup>*</sup>
, left superior temporal gyrus, left middle temporal gyrus, left supramarginal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−48,−44, 22</td>
<td valign="top" align="center" rowspan="1" colspan="1">224</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left thalamus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−14,−14, 14</td>
<td valign="top" align="center" rowspan="1" colspan="1">37</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
, right insula, right claustrum</td>
<td valign="top" align="center" rowspan="1" colspan="1">32, 26, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">90</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">38, 44, 14</td>
<td valign="top" align="center" rowspan="1" colspan="1">27</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">54,−38, 10</td>
<td valign="top" align="center" rowspan="1" colspan="1">35</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right parahippocampal gyrus
<sup>*</sup>
, right hippocampus</td>
<td valign="top" align="center" rowspan="1" colspan="1">30,−10,−20</td>
<td valign="top" align="center" rowspan="1" colspan="1">35</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right cerebellum
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">30,−56,−18</td>
<td valign="top" align="center" rowspan="1" colspan="1">47</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech passive listening</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars triangularis, pars opercularis)
<sup>*</sup>
, left insula, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−44, 20, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">296</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars triangularis
<sup>*</sup>
, pars opercularis), left middle frontal gyrus, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−48, 10, 28</td>
<td valign="top" align="center" rowspan="1" colspan="1">162</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precentral gyrus
<sup>*</sup>
, left postcentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−52,−10, 40</td>
<td valign="top" align="center" rowspan="1" colspan="1">294</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left medial frontal gyrus
<sup>*</sup>
, left superior frontal gyrus, left medial frontal gyrus, left cingulate gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−8, 8, 50</td>
<td valign="top" align="center" rowspan="1" colspan="1">164</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left middle temporal gyrus, left postcentral gyrus, left transverse temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−58,−14,−2</td>
<td valign="top" align="center" rowspan="1" colspan="1">2101</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−46, 12,−14</td>
<td valign="top" align="center" rowspan="1" colspan="1">107</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left fusiform gyrus
<sup>*</sup>
, left inferior occipital gyrus, left middle occipital gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−38,−78,−12</td>
<td valign="top" align="center" rowspan="1" colspan="1">35</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
, right insula, right precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">44, 18, 10</td>
<td valign="top" align="center" rowspan="1" colspan="1">81</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle frontal gyrus
<sup>*</sup>
, right precentral gyrus, right inferior frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">46, 2, 38</td>
<td valign="top" align="center" rowspan="1" colspan="1">118</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right middle temporal gyrus, right insula, right precentral gyrus, right transverse temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">52,−20, 0</td>
<td valign="top" align="center" rowspan="1" colspan="1">1800</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech discrimination</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars orbitalis
<sup>*</sup>
, pars triangularis), left insula, left middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−38, 26,−4</td>
<td valign="top" align="center" rowspan="1" colspan="1">115</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars triangularis
<sup>*</sup>
, pars opercularis), left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−44, 20, 10</td>
<td valign="top" align="center" rowspan="1" colspan="1">44</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle frontal gyrus
<sup>*</sup>
, left inferior frontal gyrus (pars triangularis, pars opercularis)</td>
<td valign="top" align="center" rowspan="1" colspan="1">−46, 16, 30</td>
<td valign="top" align="center" rowspan="1" colspan="1">187</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle frontal gyrus
<sup>*</sup>
, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−46,−0, 42</td>
<td valign="top" align="center" rowspan="1" colspan="1">26</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left postcentral gyrus, left transverse temporal gyrus, left middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−58,−20, 4</td>
<td valign="top" align="center" rowspan="1" colspan="1">1737</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left thalamus
<sup>*</sup>
, left caudate</td>
<td valign="top" align="center" rowspan="1" colspan="1">−14,−16, 10</td>
<td valign="top" align="center" rowspan="1" colspan="1">147</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left cerebellum</td>
<td valign="top" align="center" rowspan="1" colspan="1">−38,−60,−16</td>
<td valign="top" align="center" rowspan="1" colspan="1">36</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
, right precentral gyrus, right insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">46, 20, 4</td>
<td valign="top" align="center" rowspan="1" colspan="1">38</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right middle temporal gyrus, right transverse temporal gyrus, right insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">58,−14, 0</td>
<td valign="top" align="center" rowspan="1" colspan="1">1223</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right precuneus
<sup>*</sup>
, right cuneus</td>
<td valign="top" align="center" rowspan="1" colspan="1">4,−78, 38</td>
<td valign="top" align="center" rowspan="1" colspan="1">34</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech detection</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars opercularis)
<sup>*</sup>
, left middle frontal gyrus, left insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">−48, 10, 22</td>
<td valign="top" align="center" rowspan="1" colspan="1">361</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars triangularis)
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−48, 28, 12</td>
<td valign="top" align="center" rowspan="1" colspan="1">101</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars triangularis
<sup>*</sup>
, pars orbitalis), left insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">−34, 24, 2</td>
<td valign="top" align="center" rowspan="1" colspan="1">61</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left postcentral gyrus
<sup>*</sup>
, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50,−12, 46</td>
<td valign="top" align="center" rowspan="1" colspan="1">92</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left medial frontal gyrus
<sup>*</sup>
, left superior frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−6,−6, 60</td>
<td valign="top" align="center" rowspan="1" colspan="1">54</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left middle temporal gyrus, left transverse temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−60,−22,−2</td>
<td valign="top" align="center" rowspan="1" colspan="1">1010</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left supramarginal gyrus, left inferior parietal lobule</td>
<td valign="top" align="center" rowspan="1" colspan="1">−60,−42, 20</td>
<td valign="top" align="center" rowspan="1" colspan="1">66</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50, 12,−14</td>
<td valign="top" align="center" rowspan="1" colspan="1">34</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−42, 18,−24</td>
<td valign="top" align="center" rowspan="1" colspan="1">28</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left transverse temporal gyrus
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−36,−30, 12</td>
<td valign="top" align="center" rowspan="1" colspan="1">38</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precuneus
<sup>*</sup>
, left superior parietal lobule, left inferior parietal lobule</td>
<td valign="top" align="center" rowspan="1" colspan="1">−30,−62, 40</td>
<td valign="top" align="center" rowspan="1" colspan="1">66</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
, right insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">34, 24, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">62</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">40, 24,−4</td>
<td valign="top" align="center" rowspan="1" colspan="1">31</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">52, 8, 22</td>
<td valign="top" align="center" rowspan="1" colspan="1">29</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right transverse temporal gyrus, right middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">58,−14, 4</td>
<td valign="top" align="center" rowspan="1" colspan="1">788</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">48, 14, 32</td>
<td valign="top" align="center" rowspan="1" colspan="1">36</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech memory</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle frontal gyrus
<sup>*</sup>
, left inferior frontal gyrus (pars triangularis, pars opercularis), left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50, 22, 22</td>
<td valign="top" align="center" rowspan="1" colspan="1">476</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior frontal gyrus
<sup>*</sup>
, left medial frontal gyrus, right medial frontal gyrus, right superior frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−2, 4, 56</td>
<td valign="top" align="center" rowspan="1" colspan="1">73</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precentral gyrus
<sup>*</sup>
, left postcentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50,−10, 44</td>
<td valign="top" align="center" rowspan="1" colspan="1">127</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left insula
<sup>*</sup>
, left inferior frontal gyrus (pars triangularis), left claustrum</td>
<td valign="top" align="center" rowspan="1" colspan="1">−30, 18, 4</td>
<td valign="top" align="center" rowspan="1" colspan="1">39</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left middle temporal gyrus, left insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">−62,−24, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">937</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50, 10,−10</td>
<td valign="top" align="center" rowspan="1" colspan="1">62</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior parietal lobule
<sup>*</sup>
, left precuneus, left inferior parietal lobule</td>
<td valign="top" align="center" rowspan="1" colspan="1">−30,−62, 48</td>
<td valign="top" align="center" rowspan="1" colspan="1">109</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior parietal lobule
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−40,−46, 50</td>
<td valign="top" align="center" rowspan="1" colspan="1">93</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left caudate
<sup>*</sup>
, left thalamus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−16,−2, 16</td>
<td valign="top" align="center" rowspan="1" colspan="1">36</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left cerebellum
<sup>*</sup>
, left fusiform gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−40,−44,−20</td>
<td valign="top" align="center" rowspan="1" colspan="1">67</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right middle temporal gyrus, right transverse temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">58,−14, 0</td>
<td valign="top" align="center" rowspan="1" colspan="1">773</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">48, 8,−14</td>
<td valign="top" align="center" rowspan="1" colspan="1">58</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right cerebellum</td>
<td valign="top" align="center" rowspan="1" colspan="1">24,−64,−16</td>
<td valign="top" align="center" rowspan="1" colspan="1">50</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music passive > speech passive</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left insula
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−44,−6, 2</td>
<td valign="top" align="center" rowspan="1" colspan="1">148</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left insula, left middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−42,−40, 14</td>
<td valign="top" align="center" rowspan="1" colspan="1">146</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left subcallosal gyrus
<sup>*</sup>
, left medial frontal gyrus, left anterior cingulate</td>
<td valign="top" align="center" rowspan="1" colspan="1">−4, 22,−14</td>
<td valign="top" align="center" rowspan="1" colspan="1">53</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
, right insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">44, 18,−2</td>
<td valign="top" align="center" rowspan="1" colspan="1">49</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right postcentral gyrus, right transverse temporal gyrus, right precentral gyrus, right insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">66,−20, 10</td>
<td valign="top" align="center" rowspan="1" colspan="1">457</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music passive < speech passive</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars triangularis)
<sup>*</sup>
, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−42, 30, 2</td>
<td valign="top" align="center" rowspan="1" colspan="1">177</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precentral gyrus
<sup>*</sup>
, left postcentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−56,−10, 40</td>
<td valign="top" align="center" rowspan="1" colspan="1">191</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle temporal gyrus
<sup>*</sup>
, left inferior temporal gyrus, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−56,−12,−12</td>
<td valign="top" align="center" rowspan="1" colspan="1">856</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle temporal gyrus
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50, 6,−18</td>
<td valign="top" align="center" rowspan="1" colspan="1">91</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left cingulate gyrus
<sup>*</sup>
, left medial frontal gyrus, left superior frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−10, 4, 46</td>
<td valign="top" align="center" rowspan="1" colspan="1">70</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle temporal gyrus
<sup>*</sup>
, right superior temporal gyrus, right insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">56,−22,−8</td>
<td valign="top" align="center" rowspan="1" colspan="1">277</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle temporal gyrus
<sup>*</sup>
, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">52, 2,−12</td>
<td valign="top" align="center" rowspan="1" colspan="1">167</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music discrimination > speech discrimination</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars opercularis)
<sup>*</sup>
, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−52, 4, 24</td>
<td valign="top" align="center" rowspan="1" colspan="1">56</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left postcentral gyrus
<sup>*</sup>
, left inferior parietal lobule, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−48,−18, 44</td>
<td valign="top" align="center" rowspan="1" colspan="1">253</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left medial frontal gyrus
<sup>*</sup>
, left superior frontal gyrus, right medial frontal gyrus, right superior frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−8,−6, 54</td>
<td valign="top" align="center" rowspan="1" colspan="1">224</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left transverse temporal gyrus, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−52,−10, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">122</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left cerebellum</td>
<td valign="top" align="center" rowspan="1" colspan="1">−28,−64,−28</td>
<td valign="top" align="center" rowspan="1" colspan="1">114</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
, right middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">50, 8, 26</td>
<td valign="top" align="center" rowspan="1" colspan="1">53</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right precentral gyrus
<sup>*</sup>
, right middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">36,−6, 42</td>
<td valign="top" align="center" rowspan="1" colspan="1">170</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right precentral gyrus
<sup>*</sup>
, right insula, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">48, 4, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">91</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right transverse temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">66,−26, 10</td>
<td valign="top" align="center" rowspan="1" colspan="1">93</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music discrimination < speech discrimination</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle temporal gyrus
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−62,−18,−8</td>
<td valign="top" align="center" rowspan="1" colspan="1">456</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle temporal gyrus
<sup>*</sup>
, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">66,−8,−4</td>
<td valign="top" align="center" rowspan="1" colspan="1">38</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music detection > speech detection</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left insula
<sup>*</sup>
, left superior temporal gyrus, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−40,−16, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">126</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left insula
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−42, 4,−6</td>
<td valign="top" align="center" rowspan="1" colspan="1">76</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left transverse temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−48,−34, 16</td>
<td valign="top" align="center" rowspan="1" colspan="1">131</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right insula
<sup>*</sup>
, right transverse temporal gyrus, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">44,−10,−4</td>
<td valign="top" align="center" rowspan="1" colspan="1">507</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle frontal gyrus
<sup>*</sup>
, right insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">38, 16, 24</td>
<td valign="top" align="center" rowspan="1" colspan="1">78</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle frontal gyrus
<sup>*</sup>
, right precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">32,−4, 54</td>
<td valign="top" align="center" rowspan="1" colspan="1">35</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music detection < speech detection</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars opercularis)
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−56, 16, 20</td>
<td valign="top" align="center" rowspan="1" colspan="1">240</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars triangularis)
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−52, 28, 12</td>
<td valign="top" align="center" rowspan="1" colspan="1">101</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle temporal gyrus
<sup>*</sup>
, left superior temporal gyrus, left transverse temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−60,−32,−2</td>
<td valign="top" align="center" rowspan="1" colspan="1">561</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−44, 18,−24</td>
<td valign="top" align="center" rowspan="1" colspan="1">28</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle temporal gyrus
<sup>*</sup>
, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">62,−12,−4</td>
<td valign="top" align="center" rowspan="1" colspan="1">361</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music memory > speech memory</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left cingulate gyrus
<sup>*</sup>
, left superior frontal gyrus, left medial frontal gyrus, right cingulate gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−6, 20, 32</td>
<td valign="top" align="center" rowspan="1" colspan="1">161</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left supramarginal gyrus, left inferior parietal lobule</td>
<td valign="top" align="center" rowspan="1" colspan="1">−46,−48, 14</td>
<td valign="top" align="center" rowspan="1" colspan="1">45</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music memory < speech memory</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars triangularis
<sup>*</sup>
, pars opercularis)</td>
<td valign="top" align="center" rowspan="1" colspan="1">−54, 24, 20</td>
<td valign="top" align="center" rowspan="1" colspan="1">80</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−60,−16, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">606</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right middle temporal gyrus, right transverse temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">52,−26, 2</td>
<td valign="top" align="center" rowspan="1" colspan="1">506</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music passive listening > music discrimination</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left insula
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−42,−12,−4</td>
<td valign="top" align="center" rowspan="1" colspan="1">116</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">−42,−42, 12</td>
<td valign="top" align="center" rowspan="1" colspan="1">261</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">52,−12, 4</td>
<td valign="top" align="center" rowspan="1" colspan="1">157</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music passive listening < music discrimination</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left medial frontal gyrus
<sup>*</sup>
, right medial frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−8,−6, 54</td>
<td valign="top" align="center" rowspan="1" colspan="1">165</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precentral gyrus
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−52, 2, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">80</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left postcentral gyrus
<sup>*</sup>
, left inferior parietal lobule, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−46,−18, 46</td>
<td valign="top" align="center" rowspan="1" colspan="1">228</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left cerebellum
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−24,−62,−24</td>
<td valign="top" align="center" rowspan="1" colspan="1">90</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right precentral gyrus
<sup>*</sup>
, right middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">44,−6, 42</td>
<td valign="top" align="center" rowspan="1" colspan="1">105</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right precentral gyrus
<sup>*</sup>
, right insula, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">50, 6, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">30</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music passive > music error detection</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle temporal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−58,−32, 0</td>
<td valign="top" align="center" rowspan="1" colspan="1">82</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−58,−10, 4</td>
<td valign="top" align="center" rowspan="1" colspan="1">81</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right precentral gyrus
<sup>*</sup>
, right middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">50, 2, 48</td>
<td valign="top" align="center" rowspan="1" colspan="1">64</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right postcentral gyrus
<sup>*</sup>
, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">62,−24, 16</td>
<td valign="top" align="center" rowspan="1" colspan="1">44</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right transverse temporal gyrus, right middle temporal gyrus, right precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">60,−16, 0</td>
<td valign="top" align="center" rowspan="1" colspan="1">336</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music passive < music error detection</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left medial frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−4,−8, 56</td>
<td valign="top" align="center" rowspan="1" colspan="1">30</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior frontal gyrus
<sup>*</sup>
, left medial frontal gyrus, right superior frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−0, 8, 48</td>
<td valign="top" align="center" rowspan="1" colspan="1">93</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left postcentral gyrus
<sup>*</sup>
, left transverse temporal gyrus, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−52,−22, 16</td>
<td valign="top" align="center" rowspan="1" colspan="1">79</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior parietal lobule
<sup>*</sup>
, left supramarginal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−40,−48, 38</td>
<td valign="top" align="center" rowspan="1" colspan="1">37</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−52, 2, 4</td>
<td valign="top" align="center" rowspan="1" colspan="1">92</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left insula
<sup>*</sup>
, left superior temporal gyrus, left transverse temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−40,−28, 14</td>
<td valign="top" align="center" rowspan="1" colspan="1">67</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left lentiform nucleus
<sup>*</sup>
, left caudate</td>
<td valign="top" align="center" rowspan="1" colspan="1">−18, 10, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">211</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior parietal lobule
<sup>*</sup>
, right supramarginal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">36,−50, 42</td>
<td valign="top" align="center" rowspan="1" colspan="1">101</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right insula
<sup>*</sup>
, right inferior frontal gyrus, right middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">38, 18, 16</td>
<td valign="top" align="center" rowspan="1" colspan="1">227</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right insula
<sup>*</sup>
, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">40,−20, 16</td>
<td valign="top" align="center" rowspan="1" colspan="1">139</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right insula
<sup>*</sup>
, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">42,−8, 0</td>
<td valign="top" align="center" rowspan="1" colspan="1">42</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right caudate
<sup>*</sup>
, right lentiform nucleus</td>
<td valign="top" align="center" rowspan="1" colspan="1">14, 6, 14</td>
<td valign="top" align="center" rowspan="1" colspan="1">143</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right thalamus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">14,−18, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">32</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right cerebellum
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">28,−54,−26</td>
<td valign="top" align="center" rowspan="1" colspan="1">28</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music passive listening > music memory</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left middle temporal gyrus, left insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">−54,−22, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">943</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right insula, right postcentral gyrus, right precentral gyrus right transverse temporal gyrus, right middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">52,−20, 4</td>
<td valign="top" align="center" rowspan="1" colspan="1">1350</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right insula
<sup>*</sup>
, right inferior frontal gyrus, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">46, 10, 2</td>
<td valign="top" align="center" rowspan="1" colspan="1">32</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Music passive listening < music memory</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars opercularis)
<sup>*</sup>
, left precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−44, 4, 30</td>
<td valign="top" align="center" rowspan="1" colspan="1">79</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars orbitalis)
<sup>*</sup>
, left insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">−32, 24,−6</td>
<td valign="top" align="center" rowspan="1" colspan="1">53</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle frontal gyrus
<sup>*</sup>
, left inferior frontal gyrus (pars triangularis)</td>
<td valign="top" align="center" rowspan="1" colspan="1">−42, 18, 28</td>
<td valign="top" align="center" rowspan="1" colspan="1">29</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−32, 6, 54</td>
<td valign="top" align="center" rowspan="1" colspan="1">29</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior frontal gyrus
<sup>*</sup>
, left medial frontal gyrus, right medial frontal gyrus, right superior frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−0, 8, 48</td>
<td valign="top" align="center" rowspan="1" colspan="1">329</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
, left middle temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−44,−20,−10</td>
<td valign="top" align="center" rowspan="1" colspan="1">69</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior parietal lobule
<sup>*</sup>
, left supramarginal gyrus, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−54,−44, 28</td>
<td valign="top" align="center" rowspan="1" colspan="1">89</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left thalamus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−10,−16, 14</td>
<td valign="top" align="center" rowspan="1" colspan="1">37</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
, right insula, right claustrum</td>
<td valign="top" align="center" rowspan="1" colspan="1">32, 26, 4</td>
<td valign="top" align="center" rowspan="1" colspan="1">83</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right parahippocampal gyrus
<sup>*</sup>
, right hippocampus</td>
<td valign="top" align="center" rowspan="1" colspan="1">32,−12,−24</td>
<td valign="top" align="center" rowspan="1" colspan="1">35</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech passive listening > music discrimination</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle temporal gyrus
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−56,−20,−8</td>
<td valign="top" align="center" rowspan="1" colspan="1">298</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle temporal gyrus
<sup>*</sup>
, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">56,−18,−4</td>
<td valign="top" align="center" rowspan="1" colspan="1">308</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech passive listening < music discrimination</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precentral gyrus
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−52, 2, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">105</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left postcentral gyrus
<sup>*</sup>
, left precentral gyrus, left inferior parietal lobule</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50,−14, 52</td>
<td valign="top" align="center" rowspan="1" colspan="1">199</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left cerebellum</td>
<td valign="top" align="center" rowspan="1" colspan="1">−28,−64,−28</td>
<td valign="top" align="center" rowspan="1" colspan="1">127</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior frontal gyrus
<sup>*</sup>
, right middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">50, 10, 30</td>
<td valign="top" align="center" rowspan="1" colspan="1">50</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right medial frontal gyrus
<sup>*</sup>
, right superior frontal gyrus, left medial frontal gyrus, left superior frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">2,−6, 62</td>
<td valign="top" align="center" rowspan="1" colspan="1">166</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right precentral gyrus
<sup>*</sup>
, right middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">38,−8, 42</td>
<td valign="top" align="center" rowspan="1" colspan="1">67</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">64,−26, 8</td>
<td valign="top" align="center" rowspan="1" colspan="1">76</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">50, 6, 4</td>
<td valign="top" align="center" rowspan="1" colspan="1">47</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech passive listening > music detection</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars triangularis
<sup>*</sup>
, pars opercularis)</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50, 22, 10</td>
<td valign="top" align="center" rowspan="1" colspan="1">107</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle frontal gyrus
<sup>*</sup>
, left precentral gyrus, left postcentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−54, 2, 40</td>
<td valign="top" align="center" rowspan="1" colspan="1">138</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle temporal gyrus
<sup>*</sup>
, left superior temporal gyrus, left inferior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−60,−8,−10</td>
<td valign="top" align="center" rowspan="1" colspan="1">1052</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left superior temporal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−48, 16,−16</td>
<td valign="top" align="center" rowspan="1" colspan="1">29</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle temporal gyrus
<sup>*</sup>
, right superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">60,−18,−8</td>
<td valign="top" align="center" rowspan="1" colspan="1">651</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech passive listening < music detection</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precentral gyrus
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−52, 2, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">54</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left insula
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−50,−20, 16</td>
<td valign="top" align="center" rowspan="1" colspan="1">430</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left insula
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−40, 6,−4</td>
<td valign="top" align="center" rowspan="1" colspan="1">31</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior parietal lobule
<sup>*</sup>
, left supramarginal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−42,−50, 38</td>
<td valign="top" align="center" rowspan="1" colspan="1">40</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left lentiform nucleus
<sup>*</sup>
, left claustrum, left insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">−20, 6, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">203</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle frontal gyrus
<sup>*</sup>
, right inferior frontal gyrus, right insula</td>
<td valign="top" align="center" rowspan="1" colspan="1">42, 16, 32</td>
<td valign="top" align="center" rowspan="1" colspan="1">220</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right middle frontal gyrus
<sup>*</sup>
, right precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">30,−6, 56</td>
<td valign="top" align="center" rowspan="1" colspan="1">35</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior frontal gyrus
<sup>*</sup>
, right middle frontal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">32, 44, 16</td>
<td valign="top" align="center" rowspan="1" colspan="1">40</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior frontal gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">4, 12, 54</td>
<td valign="top" align="center" rowspan="1" colspan="1">36</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right insula
<sup>*</sup>
, right transverse temporal gyrus, right superior temporal gyrus, right precentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">44,−12, 12</td>
<td valign="top" align="center" rowspan="1" colspan="1">519</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right inferior parietal lobule
<sup>*</sup>
, right supramarginal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">40,−48, 38</td>
<td valign="top" align="center" rowspan="1" colspan="1">103</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right thalamus
<sup>*</sup>
, right caudate</td>
<td valign="top" align="center" rowspan="1" colspan="1">8,−2, 10</td>
<td valign="top" align="center" rowspan="1" colspan="1">142</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right thalamus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">10,−14, 6</td>
<td valign="top" align="center" rowspan="1" colspan="1">33</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech passive listening > music memory</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle temporal gyrus
<sup>*</sup>
, left transverse temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−58,−38,−4</td>
<td valign="top" align="center" rowspan="1" colspan="1">1256</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">[-10pt]</td>
<td valign="top" align="left" rowspan="1" colspan="1">Right superior temporal gyrus
<sup>*</sup>
, right transverse temporal gyrus, right middle temporal gyrus, right postcentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">58,−2, 2</td>
<td valign="top" align="center" rowspan="1" colspan="1">1056</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Speech passive listening < music memory</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior frontal gyrus (pars orbitalis
<sup>*</sup>
, pars triangularis)</td>
<td valign="top" align="center" rowspan="1" colspan="1">−32, 24,−6</td>
<td valign="top" align="center" rowspan="1" colspan="1">31</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">[-10pt]</td>
<td valign="top" align="left" rowspan="1" colspan="1">Left medial frontal gyrus
<sup>*</sup>
, left superior frontal gyrus, left cingulate, right superior frontal gyrus, right medial frontal gyrus, right cingulate</td>
<td valign="top" align="center" rowspan="1" colspan="1">−0, 22, 46</td>
<td valign="top" align="center" rowspan="1" colspan="1">336</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precentral gyrus
<sup>*</sup>
, left inferior frontal gyrus (pars opercularis)</td>
<td valign="top" align="center" rowspan="1" colspan="1">−52, 2, 30</td>
<td valign="top" align="center" rowspan="1" colspan="1">34</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left precentral gyrus
<sup>*</sup>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">−38,−10, 38</td>
<td valign="top" align="center" rowspan="1" colspan="1">32</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left supramarginal gyrus
<sup>*</sup>
, left inferior parietal lobule, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−46,−46, 26</td>
<td valign="top" align="center" rowspan="1" colspan="1">113</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left inferior parietal lobule
<sup>*</sup>
, left postcentral gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−48,−36, 48</td>
<td valign="top" align="center" rowspan="1" colspan="1">71</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Left middle temporal gyrus
<sup>*</sup>
, left superior temporal gyrus</td>
<td valign="top" align="center" rowspan="1" colspan="1">−46,−24,−10</td>
<td valign="top" align="center" rowspan="1" colspan="1">44</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">Right insula
<sup>*</sup>
, right inferior frontal gyrus, right claustrum</td>
<td valign="top" align="center" rowspan="1" colspan="1">34, 22, 4</td>
<td valign="top" align="center" rowspan="1" colspan="1">48</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>The x, y, z coordinates are in Talairach space and refer to the peak voxel activated in each contrast. All contrasts are thresholded at p = 0.05. Asterisks indicate anatomical location of peak voxel.</p>
</table-wrap-foot>
</table-wrap>
<p>Pairwise contrasts of passive listening to music vs. passive listening to speech were calculated to identify any brain regions that were significantly activated more by speech or music, respectively. Results were as follows (
<italic>p</italic>
< 0.05, FDR corrected): the speech > music contrast identified significant regions on both banks of the bilateral superior temporal sulcus, extending the length of the left temporal lobe and the mid/anterior right temporal lobe, as well as the left inferior frontal gyrus (pars triangularis), left precentral gyrus, and left postcentral gyrus. Music > speech identified bilateral insula and bilateral superior temporal/parietal operculum clusters as well as a right inferior frontal gyrus region (Figure
<xref ref-type="fig" rid="F1">1B</xref>
, Table
<xref ref-type="table" rid="T2">2</xref>
). These results coincide with previous reports that listening to speech activates a lateral temporal network, particularly in the superior temporal sulcus, extending into the anterior temporal lobe, while listening to music activates a more dorsal medial temporal-parietal network (Jäncke et al.,
<xref rid="B46" ref-type="bibr">2002</xref>
; Rogalsky et al.,
<xref rid="B90" ref-type="bibr">2011</xref>
). These results also coincide with Fedorenko et al.'s (
<xref rid="B23" ref-type="bibr">2011</xref>
) finding that Broca's area, the pars triangularis in particular, is preferentially responsive to language stimuli.</p>
</sec>
<sec>
<title>Music tasks vs. speech tasks</title>
<p>The passive listening ALE results identify distinct and overlapping regions of speech and music processing. We now turn to the question of how these distinctions change as a function of the type of task employed. First, ALEs were computed for each music task condition,
<italic>p</italic>
< 0.05 FDR corrected (Figure
<xref ref-type="fig" rid="F1">1</xref>
, Table
<xref ref-type="table" rid="T2">2</xref>
). The music task conditions' ALEs all significantly identified bilateral STG, bilateral precentral gyri, and inferior parietal regions, overlapping with the passive listening music ALE (Figure
<xref ref-type="fig" rid="F2">2</xref>
). The tasks also activated additional inferior frontal and inferior parietal regions not identified by the passive listening music ALE; these differences are discussed in a subsequent section.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Representative sagittal slices of the ALEs for the (A) music discrimination, (B) music error detection and (C) music memory task conditions,
<italic>
<bold>p</bold>
</italic>
< 0.05, corrected, overlaid on top of the passive music listening ALE for comparison</bold>
.</p>
</caption>
<graphic xlink:href="fpsyg-06-01138-g0002"></graphic>
</fig>
<p>To compare the brain regions activated by each music task to those activated by speech in similar tasks, pairwise contrasts of the ALEs for each music task vs. its corresponding speech task group were calculated (Figure
<xref ref-type="fig" rid="F3">3</xref>
, Table
<xref ref-type="table" rid="T2">2</xref>
). Music discrimination > speech discrimination identified regions including bilateral inferior frontal gyri (pars opercularis), bilateral pre- and postcentral gyri, bilateral medial frontal gyri, left inferior parietal lobule, and left cerebellum, whereas speech discrimination > music discrimination identified bilateral regions in the anterior superior temporal sulci (including both superior and middle temporal gyri). Music error detection > speech error detection identified a bilateral group of clusters spanning the superior temporal gyri, bilateral precentral gyri, bilateral insula, and bilateral inferior parietal regions, as well as clusters in the right middle frontal gyrus. Speech error detection > music error detection identified bilateral superior temporal sulcus regions as well as left inferior frontal regions (pars triangularis and pars opercularis). Music memory > speech memory identified a left posterior superior temporal/inferior parietal region and bilateral medial frontal regions; speech memory > music memory identified the left inferior frontal gyrus (pars opercularis and pars triangularis) and bilateral superior and middle temporal gyri.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Representative slices of the contrast results for the comparison of (A) music discrimination, (B) music error detection, and (C) music memory task conditions, compared to the corresponding speech task,
<italic>
<bold>p</bold>
</italic>
< 0.05, corrected</bold>
.</p>
</caption>
<graphic xlink:href="fpsyg-06-01138-g0003"></graphic>
</fig>
<p>In sum, the task pairwise contrasts in many ways mirror the passive listening contrast: music tasks activated more dorsal/medial superior temporal and inferior parietal regions, while speech tasks activated superior temporal sulcus regions, particularly in the anterior temporal lobe. In addition, notable differences were found in Broca's area and its right hemisphere homolog: in discrimination tasks, music significantly activated Broca's area (specifically the pars opercularis) more than speech. However, in error detection and memory tasks, speech activated Broca's area (pars opercularis and pars triangularis) more than music. The right inferior frontal gyrus responded equally to speech and music in both error detection and memory tasks, but responded more to music than speech in discrimination tasks. Also notably, in the memory tasks, music activated a lateral superior temporal/inferior parietal cluster (in the vicinity of Hickok and Poeppel's “area Spt”) more than speech, while an inferior frontal cluster including the pars opercularis was activated more for speech than music. Both area Spt and the pars opercularis have previously been implicated in a variety of auditory working memory tasks (including speech and pitch working memory) in both lesion patients and control subjects (Koelsch and Siebel,
<xref rid="B53" ref-type="bibr">2005</xref>
; Koelsch et al.,
<xref rid="B54" ref-type="bibr">2009</xref>
; Buchsbaum et al.,
<xref rid="B11" ref-type="bibr">2011</xref>
) and are considered to be part of an auditory sensory-motor integration network (Hickok et al.,
<xref rid="B34" ref-type="bibr">2003</xref>
; Hickok and Poeppel,
<xref rid="B35" ref-type="bibr">2004</xref>
,
<xref rid="B36" ref-type="bibr">2007</xref>
).</p>
</sec>
<sec>
<title>Music tasks vs. passive listening to speech</title>
<p>Findings from various music paradigms and tasks are often reported as engaging language networks on the basis of anatomical location; a music paradigm activating Broca's area or superior temporal regions is frequently described as recruiting classic language areas. However, it is not clear whether these music paradigms are in fact engaging the networks recruited during the natural, everyday process of listening to speech. Thus, pairwise contrasts of the ALEs for listening to speech vs. the music tasks were calculated (Figure
<xref ref-type="fig" rid="F4">4</xref>
; Table
<xref ref-type="table" rid="T2">2</xref>
). Music discrimination > speech passive listening identified regions in bilateral precentral gyri, bilateral medial frontal gyri, left postcentral gyrus, left inferior parietal lobule, left cerebellum, right inferior and middle frontal gyri, and right superior temporal gyrus. Music error detection > speech identified bilateral precentral gyri, bilateral superior temporal gyri, bilateral insula, bilateral basal ganglia, left postcentral gyrus, left cerebellum, bilateral inferior parietal lobe, right middle frontal gyrus, right inferior frontal gyrus and the right thalamus. Music memory > speech identified portions of bilateral inferior frontal gyri, bilateral medial frontal gyri, left inferior parietal lobe, left pre and postcentral gyri, and right insula. Compared to all three music tasks, speech significantly activated bilateral superior temporal sulcus regions and only activated Broca's area (specifically the pars triangularis) more than music detection. The recruitment of Broca's area and adjacent regions for music was task dependent: compared to listening to speech, music detection and discrimination activated additional bilateral inferior precentral gyrus regions immediately adjacent to Broca's area and music memory activated the left inferior frontal gyrus more than speech (in all three subregions: pars opercularis, pars triangularis, and pars orbitalis). In the right hemisphere homolog of Broca's area, all three music tasks activated this region more than listening to speech as well as adjacent regions in the right middle frontal gyrus. All together these results suggest that the recruitment of neural resources used in speech for music processing depends on the experimental paradigm. 
The finding that music memory tasks elicit widespread activation in Broca's area compared to listening to speech is likely due to the inferior frontal gyrus, and the pars opercularis in particular, being consistently implicated in articulatory rehearsal and working memory (Hickok et al.,
<xref rid="B34" ref-type="bibr">2003</xref>
; Buchsbaum et al.,
<xref rid="B12" ref-type="bibr">2005</xref>
,
<xref rid="B11" ref-type="bibr">2011</xref>
), resources that are likely recruited by the music memory tasks.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Representative slices of the contrast results for the comparison of (A) music discrimination, (B) music error detection, (C) music memory task conditions, compared to passive listening to speech,
<italic>
<bold>p</bold>
</italic>
< 0.05, corrected</bold>
.</p>
</caption>
<graphic xlink:href="fpsyg-06-01138-g0004"></graphic>
</fig>
</sec>
<sec>
<title>Music tasks vs. passive listening to music</title>
<p>Lastly, we compared the music task ALEs to the music passive listening ALE using pairwise contrasts to better characterize task-specific activations to music. Results (
<italic>p</italic>
< 0.05, FDR corrected) include: (1) music discrimination > music listening identified bilateral inferior precentral gyri, bilateral medial frontal regions, left postcentral gyrus, left inferior parietal lobule, left cerebellum, right middle frontal gyrus and right insula; (2) music error detection > music listening identified bilateral medial frontal, bilateral insula, bilateral inferior parietal areas, bilateral superior temporal gyri, bilateral basal ganglia, left pre- and postcentral gyri, right inferior and middle frontal gyri and right cerebellum; (3) music memory > passive listening identified bilateral inferior frontal gyri (pars opercularis, triangularis and orbitalis in the left hemisphere, only the latter two in the right hemisphere), bilateral medial frontal gyri, bilateral insula, bilateral cerebellum, left middle frontal gyrus, left inferior parietal lobe, left superior and middle temporal gyri, right basal ganglia, right hippocampus and right parahippocampal gyrus (Figure
<xref ref-type="fig" rid="F5">5</xref>
, Table
<xref ref-type="table" rid="T2">2</xref>
). The medial frontal and inferior parietal activations identified for the tasks compared to listening likely reflect increased vigilance and attention due to the presence of a task, as activation in these regions is known to increase as a function of effort and performance across a variety of stimulus types and domains (Petersen and Posner,
<xref rid="B81" ref-type="bibr">2012</xref>
; Vaden et al.,
<xref rid="B120" ref-type="bibr">2013</xref>
). To summarize the findings in Broca's area and its right hemisphere homolog, music memory tasks activated Broca's area more than just listening to music, while music discrimination and detection tasks activated right inferior frontal gyrus regions more than listening to music. Also note that all three music tasks compared to listening to music implicate regions on the anterior bank of the inferior portion of the precentral gyrus immediately adjacent to Broca's area. Significant clusters more active for music passive listening than for each of the three task conditions are found in the bilateral superior temporal gyri (Table
<xref ref-type="table" rid="T2">2</xref>
).</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Representative slices of the contrast results for the comparison of (A) music discrimination, (B) music error detection, (C) music memory task conditions, compared to passive listening to music,
<italic>
<bold>p</bold>
</italic>
< 0.05, corrected</bold>
.</p>
</caption>
<graphic xlink:href="fpsyg-06-01138-g0005"></graphic>
</fig>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>The present meta-analysis examined data from 80 functional neuroimaging studies of music and 91 studies of speech to characterize the relationship between the brain networks activated by listening to speech vs. listening to music. We also compared the brain regions implicated in three frequently used music paradigms (error detection, discrimination, and memory) to the regions implicated in similar speech paradigms to determine how task effects may alter the relationship between the neurobiology of music processing and that of speech. We replicated, across a large collection of studies, previous within-subject findings that speech activates a predominantly lateral temporal network, while music preferentially activates a more dorsal medial temporal network extending into the inferior parietal lobe. In Broca's area, we found overlapping resources for passive listening to speech and music in the pars opercularis, but speech “specific” resources in the pars triangularis; the right hemisphere homolog of Broca's area was equally responsive to listening to speech and music. The use of a paradigm containing an explicit task (error detection, discrimination or memory) altered the relationship between the brain networks engaged by music and speech. For example, speech discrimination tasks do not activate the pars triangularis (i.e., the region identified as “speech specific” by the passive listening contrast) more than music discrimination tasks; both speech detection and memory tasks activate the pars opercularis (i.e., the region responding equally to music and speech passive listening) more than the corresponding music tasks, while music discrimination activates the pars opercularis more than speech discrimination. These findings suggest that inferior frontal contributions to music processing, and their overlap with speech resources, may be modulated by task. The following sections discuss these findings in relation to neuroanatomical models of speech and music.</p>
<sec>
<title>Hemispheric differences for speech and music</title>
<p>The lateralization of speech and music processing has been investigated for decades. While functional neuroimaging studies report bilateral activation for both speech and music (Jäncke et al.,
<xref rid="B46" ref-type="bibr">2002</xref>
; Abrams et al.,
<xref rid="B1" ref-type="bibr">2011</xref>
; Fedorenko et al.,
<xref rid="B23" ref-type="bibr">2011</xref>
; Rogalsky et al.,
<xref rid="B90" ref-type="bibr">2011</xref>
), evidence from amusia, aphasia, and other patient populations has traditionally identified the right hemisphere as critical for music and the left for basic language processes in most individuals (Gazzaniga,
<xref rid="B29" ref-type="bibr">1983</xref>
; Peretz et al.,
<xref rid="B76" ref-type="bibr">2003</xref>
; Damasio et al.,
<xref rid="B18" ref-type="bibr">2004</xref>
; Hyde et al.,
<xref rid="B43" ref-type="bibr">2006</xref>
). Further evidence for hemispheric differences comes from asymmetries in early auditory cortex: left hemisphere auditory cortex has better temporal resolution and is more sensitive to rapid temporal changes critical for speech processing, while the right hemisphere auditory cortex has higher spectral resolution and is more modulated by spectral changes, which optimize musical processing (Zatorre et al.,
<xref rid="B127" ref-type="bibr">2002</xref>
; Poeppel,
<xref rid="B83" ref-type="bibr">2003</xref>
; Schönwiesner et al.,
<xref rid="B97" ref-type="bibr">2005</xref>
; Hyde et al.,
<xref rid="B42" ref-type="bibr">2008</xref>
). Thus, left auditory cortex has been found to be more responsive to phonemes than chords, while right auditory cortex is more responsive to chords than phonemes (Tervaniemi et al.,
<xref rid="B113" ref-type="bibr">1999</xref>
,
<xref rid="B114" ref-type="bibr">2000</xref>
). This hemispheric specialization coincides with evidence from both auditory and visual domains, suggesting that the left hemisphere tends to be tuned to local features, while the right hemisphere is tuned to more global features (Sergent,
<xref rid="B100" ref-type="bibr">1982</xref>
; Ivry and Robertson,
<xref rid="B44" ref-type="bibr">1998</xref>
; Sanders and Poeppel,
<xref rid="B93" ref-type="bibr">2007</xref>
).</p>
<p>In the present study, hemispheric differences between speech and music vary by location. We did not find any qualitative hemispheric differences between speech and music in the temporal lobe. Speech bilaterally activated lateral superior and middle temporal regions, while music bilaterally activated more dorsal medial superior temporal regions extending into the inferior parietal lobe. However, these bilateral findings should not be interpreted as evidence against hemispheric asymmetries for speech vs. music. The hemispheric differences widely reported in auditory cortex are almost always a matter of degree, e.g., phonemes and tones both activate bilateral superior temporal regions, but a direct comparison indicates a left hemisphere preference for speech and a right hemisphere preference for tones (Jäncke et al.,
<xref rid="B46" ref-type="bibr">2002</xref>
; Zatorre et al.,
<xref rid="B127" ref-type="bibr">2002</xref>
). These differences would not be reflected in our ALE results because both conditions reliably activate the same regions, albeit to different degrees, and because the ALE method does not weight coordinates (i.e., all the significant coordinates reported for the contrasts of interest in the studies used) by their beta or statistical values.</p>
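This property of ALE, that every reported peak contributes an identical blob of activation likelihood regardless of its effect size, can be illustrated with a toy sketch. This is not the actual GingerALE implementation: the grid size, the fixed `sigma`, and the function name `ale_map` are illustrative assumptions, and real ALE scales the Gaussian width by study sample size.

```python
import numpy as np

def ale_map(study_foci, shape=(20, 20, 20), sigma=2.0):
    """Toy activation likelihood estimate over a small voxel grid.

    study_foci: one list of (x, y, z) peak coordinates per study.
    """
    grid = np.stack(
        np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1
    )
    modeled = []
    for foci in study_foci:
        ma = np.zeros(shape)  # modeled activation map for one study
        for focus in foci:
            d2 = np.sum((grid - np.asarray(focus)) ** 2, axis=-1)
            # every focus gets an identical, unweighted Gaussian blob:
            # reported beta or statistical values play no role
            ma = np.maximum(ma, np.exp(-d2 / (2 * sigma**2)))
        modeled.append(ma)
    # union across studies: probability that at least one study
    # activates each voxel
    return 1.0 - np.prod([1.0 - ma for ma in modeled], axis=0)

# two toy "studies" reporting nearby peaks plus one isolated peak
ale = ale_map([[(5, 5, 5)], [(6, 5, 5), (15, 15, 15)]])
print(ale.max())  # convergence is highest where the studies' foci coincide
```

Because each focus enters as the same unit-height Gaussian, a weak sub-threshold trend and a massive effect reported at the same coordinate yield identical ALE contributions, which is exactly why graded hemispheric preferences of the kind described above cannot surface in ALE contrasts.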
<p>The frontal lobe results, however, did include some laterality differences of interest: passive listening to speech activated portions of the left inferior frontal gyrus (i.e., Broca's area), namely in the pars triangularis, significantly more than listening to music. A right inferior frontal gyrus cluster, extending into the insula, was activated significantly more for listening to music than speech. These findings in Broca's area coincide with Koelsch's neurocognitive model of music perception, in that right frontal regions are more responsive to musical stimuli and that the pars opercularis, but not the pars triangularis, is engaged in structure building of auditory stimuli (Koelsch,
<xref rid="B49" ref-type="bibr">2011</xref>
). It is also noteworthy that the inclusion of a task altered hemispheric differences in the frontal lobes: the music discrimination tasks activated the left pars opercularis more than speech discrimination, while speech detection and memory tasks activated all of Broca's area (pars opercularis and pars triangularis) more than music detection and memory tasks; music detection and discrimination tasks, but not music memory tasks, activated the right inferior frontal gyrus more than corresponding speech tasks. These task-modulated asymmetries in Broca's area for music are particularly important when interpreting the rich electrophysiological literature of speech and music interactions. For example, both the early right anterior negativity (ERAN) and early left anterior negativity (ELAN) are modulated by speech and music, and are believed to have sources in both Broca's area and its right hemisphere homolog (Friederici et al.,
<xref rid="B28" ref-type="bibr">2000</xref>
; Maess et al.,
<xref rid="B57" ref-type="bibr">2001</xref>
; Koelsch and Friederici,
<xref rid="B50" ref-type="bibr">2003</xref>
). Thus, the lateralization patterns found in the present study emphasize the need to consider that similar ERP effects for speech and music may arise from different underlying lateralization patterns that may be task-dependent.</p>
</sec>
<sec>
<title>Speech vs. music in the anterior temporal lobe</title>
<p>Posterior superior and middle temporal regions on the banks of the superior temporal sulcus were preferentially activated by each speech condition compared to each corresponding music condition in the present meta-analysis. This is not surprising, as these posterior STS regions are widely implicated in lexical semantic processing (Price,
<xref rid="B84" ref-type="bibr">2010</xref>
) and STS regions have been found to be more responsive to syllables than tones (Jäncke et al.,
<xref rid="B46" ref-type="bibr">2002</xref>
). Perhaps more interestingly, the bilateral anterior temporal lobe (ATL) also was activated more for each speech condition than by each corresponding music condition. The role of the ATL in speech processing is debated (e.g., Scott et al.,
<xref rid="B99" ref-type="bibr">2000</xref>
cf. Hickok and Poeppel,
<xref rid="B35" ref-type="bibr">2004</xref>
,
<xref rid="B36" ref-type="bibr">2007</xref>
), but the ATL is reliably sensitive to syntactic structure in speech compared to several control conditions, including word lists, scrambled sentences, spectrally rotated speech, environmental sound sequences, and melodies (Mazoyer et al.,
<xref rid="B60" ref-type="bibr">1993</xref>
; Humphries et al.,
<xref rid="B41" ref-type="bibr">2001</xref>
,
<xref rid="B40" ref-type="bibr">2005</xref>
,
<xref rid="B39" ref-type="bibr">2006</xref>
; Xu et al.,
<xref rid="B125" ref-type="bibr">2005</xref>
; Spitsyna et al.,
<xref rid="B106" ref-type="bibr">2006</xref>
; Rogalsky and Hickok,
<xref rid="B87" ref-type="bibr">2009</xref>
; Friederici et al.,
<xref rid="B27" ref-type="bibr">2010</xref>
; Rogalsky et al.,
<xref rid="B90" ref-type="bibr">2011</xref>
). One hypothesis is that the ATL is implicated in combinatorial semantic processing (Wong and Gallate,
<xref rid="B124" ref-type="bibr">2012</xref>
; Wilson et al.,
<xref rid="B123" ref-type="bibr">2014</xref>
), although pseudoword sentences (i.e., sentences lacking meaningful content words) also activate the ATL (Humphries et al.,
<xref rid="B39" ref-type="bibr">2006</xref>
; Rogalsky et al.,
<xref rid="B90" ref-type="bibr">2011</xref>
). Several of the speech activation coordinates included in the present meta-analysis were from studies that used sentences and phrases as stimuli (with and without semantic content). It is likely that these coordinates are driving the ATL findings. Our finding that music did not activate the ATL supports the idea that the ATL is not responsive to hierarchical structure
<italic>per se</italic>
but rather requires linguistic and/or semantic information in order to be recruited.</p>
</sec>
<sec>
<title>Speech vs. music in Broca's area</title>
<p>There is no consensus regarding the role of Broca's area in receptive speech processes (e.g., Fedorenko and Kanwisher,
<xref rid="B24" ref-type="bibr">2011</xref>
; Hickok and Rogalsky,
<xref rid="B37" ref-type="bibr">2011</xref>
; Rogalsky and Hickok,
<xref rid="B88" ref-type="bibr">2011</xref>
). Results from the present meta-analysis indicate that listening to speech activated both the pars opercularis and pars triangularis portions of Broca's area, while listening to music only activated the pars opercularis. The pars triangularis has been proposed to be involved in semantic integration (Hagoort,
<xref rid="B32" ref-type="bibr">2005</xref>
) as well as in cognitive control processes such as conflict resolution (Novick et al.,
<xref rid="B63" ref-type="bibr">2005</xref>
; Rogalsky and Hickok,
<xref rid="B88" ref-type="bibr">2011</xref>
). It is likely that the speech stimuli contain more semantic content than the music stimuli, and thus semantic integration processes may account for the speech-only response in pars triangularis. However, there was no significant difference in activations in the pars triangularis for the music discrimination and music detection tasks vs. passive listening to speech, and the music memory tasks activated portions of the pars triangularis more than listening to speech. These music task-related activations in the pars triangularis may reflect the use of semantic resources for categorization or verbalization strategies to complete the music tasks, but may also reflect increased cognitive control processes to support reanalysis of the stimuli to complete the tasks. The activation of the left pars opercularis for both speech and music replicates numerous individual studies implicating the pars opercularis in both speech and musical syntactic processing (e.g., Koelsch and Siebel,
<xref rid="B53" ref-type="bibr">2005</xref>
; Rogalsky and Hickok,
<xref rid="B88" ref-type="bibr">2011</xref>
) as well as in a variety of auditory working memory paradigms (e.g., Koelsch and Siebel,
<xref rid="B53" ref-type="bibr">2005</xref>
; Buchsbaum et al.,
<xref rid="B11" ref-type="bibr">2011</xref>
).</p>
</sec>
<sec>
<title>Implications for neuroanatomical models of speech and music</title>
<p>It is particularly important to consider task-related effects when evaluating neuroanatomical models of the interactions between speech and music. It has been proposed that inferior frontal cortex (including Broca's area) is the substrate for shared speech-music executive function resources, such as working memory and/or cognitive control (Patel,
<xref rid="B65" ref-type="bibr">2003</xref>
; Slevc,
<xref rid="B102" ref-type="bibr">2012</xref>
; Slevc and Okada,
<xref rid="B104" ref-type="bibr">2015</xref>
) as well as auditory processes such as structure analysis, repair, working memory and motor encoding (Koelsch and Siebel,
<xref rid="B53" ref-type="bibr">2005</xref>
; Koelsch,
<xref rid="B49" ref-type="bibr">2011</xref>
). Of particular importance here is Slevc and Okada's (
<xref rid="B104" ref-type="bibr">2015</xref>
) proposal that cognitive control may be one of the shared cognitive resources for linguistic and musical processing when reanalysis and conflict resolution is necessary. Different tasks likely recruit cognitive control resources to different degrees, and thus may explain task-related differences for the frontal lobe's response to speech and music. There is ample evidence to support Slevc and Okada's hypothesis: classic cognitive control paradigms such as the Stroop task (Stroop,
<xref rid="B110" ref-type="bibr">1935</xref>
; MacLeod,
<xref rid="B56" ref-type="bibr">1991</xref>
) elicit activations in Broca's area that overlap with those evoked by processing noncanonical sentence structures (January et al.,
<xref rid="B47" ref-type="bibr">2009</xref>
). Unexpected harmonic and melodic information in music interfere with Stroop task performance (Masataka and Perlovsky,
<xref rid="B59" ref-type="bibr">2013</xref>
). The neural responses to syntactic and sentence-level semantic ambiguities in language also interact with responses to unexpected harmonics in music (Koelsch et al.,
<xref rid="B51" ref-type="bibr">2005</xref>
; Steinbeis and Koelsch,
<xref rid="B108" ref-type="bibr">2008b</xref>
; Slevc et al.,
<xref rid="B103" ref-type="bibr">2009</xref>
; Perruchet and Poulin-Charronnat,
<xref rid="B80" ref-type="bibr">2013</xref>
). The present results suggest that this interaction between language and music, possibly via cognitive control mechanisms localized to Broca's area, may be task-driven and not inherent to the stimuli themselves. In addition, many language/music interaction studies use a reading language task with simultaneous auditory music stimuli; it is possible that a word-by-word reading paradigm engages additional reanalysis mechanisms that may dissociate from the resources used in auditory speech processing (Tillmann,
<xref rid="B115" ref-type="bibr">2012</xref>
).</p>
<p>Slevc and Okada suggest that future studies should use tasks designed to drive activation of specific processes, presumably including reanalysis. However, the present findings suggest that these task-induced environments may actually drive overlap of neural resources for speech and music not because they are taxing shared sensory computations, but rather because they are introducing additional processes that are not elicited during typical, naturalistic music listening. For example, consider the present findings in the left pars triangularis: this region is not activated during listening to music, but is activated while listening to speech. However, when discrimination or memory tasks presumably increase the need for reanalysis mechanisms, music does recruit this region.</p>
<p>There may be shared inferior frontal mechanisms that are stimulus-driven while others are task-driven: Broca's area is a diverse region in terms of its cytoarchitecture, connectivity, and response properties (Amunts et al.,
<xref rid="B3" ref-type="bibr">1999</xref>
; Anwander et al.,
<xref rid="B4" ref-type="bibr">2007</xref>
; Rogalsky and Hickok,
<xref rid="B88" ref-type="bibr">2011</xref>
; Rogalsky et al.,
<xref rid="B86" ref-type="bibr">in press</xref>
). It is possible that some networks are task-driven and some are stimulus-driven. The hypotheses of Koelsch et al. are largely grounded in behavioral and electrophysiology studies that indicate an interaction between melodic and syntactic information (e.g., Koelsch et al.,
<xref rid="B51" ref-type="bibr">2005</xref>
; Fedorenko et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
; Hoch et al.,
<xref rid="B38" ref-type="bibr">2011</xref>
). It is not known if these interactions are stimulus-driven; a variety of tasks have been used in this literature, including discrimination, anomaly/error detection (Koelsch et al.,
<xref rid="B51" ref-type="bibr">2005</xref>
; Carrus et al.,
<xref rid="B15" ref-type="bibr">2013</xref>
), grammatical acceptability (Patel et al.,
<xref rid="B71" ref-type="bibr">1998a</xref>
; Patel,
<xref rid="B67" ref-type="bibr">2008</xref>
), final-word lexical decision (Hoch et al.,
<xref rid="B38" ref-type="bibr">2011</xref>
), and memory/comprehension tasks (Fedorenko et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
,
<xref rid="B23" ref-type="bibr">2011</xref>
). In addition, there is substantial variability across individual subjects, both functionally and anatomically, within Broca's area (Amunts et al.,
<xref rid="B3" ref-type="bibr">1999</xref>
; Schönwiesner et al.,
<xref rid="B96" ref-type="bibr">2007</xref>
; Rogalsky et al.,
<xref rid="B86" ref-type="bibr">in press</xref>
). Thus, future within-subject studies are needed to better understand the role of cognitive control and other domain-general resources in musical processing independent of task.</p>
<p>Different tasks, regardless of the nature of the stimuli, may require different attentional resources (Shallice,
<xref rid="B101" ref-type="bibr">2003</xref>
). Thus, it is possible that the inferior frontal differences between the music tasks and passive listening to music and speech are due to basic attentional differences, not the particular task
<italic>per se</italic>
. However, we find classic domain-general attention systems in the anterior cingulate and medial frontal cortex to be significantly activated across all conditions: music tasks, speech tasks, passive listening to music and passive listening to speech. These findings support Slevc and Okada's (
<xref rid="B104" ref-type="bibr">2015</xref>
) claim that domain-general attention mechanisms supported by the anterior cingulate and medial frontal cortex are consistently engaged for music as they are for speech. Each of our music task conditions does activate these regions significantly more than passive listening, suggesting that the midline domain-general attention mechanisms engaged by music can be further activated by explicit tasks.</p>
</sec>
<sec>
<title>Limitations and future directions</title>
<p>One issue in interpreting our results may be the proximity of distinct networks for speech and music (Peretz,
<xref rid="B74" ref-type="bibr">2006</xref>
; Koelsch,
<xref rid="B49" ref-type="bibr">2011</xref>
). Overlap in fMRI findings, particularly in a meta-analysis, does not necessarily mean that speech and music share resources in those locations. It is certainly possible that the spatial resolution of fMRI is not sufficient to visualize separation occurring at a smaller scale (Peretz and Zatorre,
<xref rid="B79" ref-type="bibr">2005</xref>
; Patel,
<xref rid="B69" ref-type="bibr">2012</xref>
). However, our findings of spatially distinct activation patterns nonetheless indicate that speech and music recruit at least partially distinct brain networks.</p>
<p>Another potential issue related to the limitations of fMRI is sensitivity. Continuous fMRI scanning protocols (i.e., stimuli are presented simultaneously with the noise of scanning) and sparse temporal sampling protocols (i.e., stimuli are presented during silent periods between volume acquisitions) are both included in the present meta-analyses. It has been suggested that loud scanner noise may reduce sensitivity for detecting hemodynamic responses to stimuli, particularly complex auditory stimuli such as speech and music (Peelle et al.,
<xref rid="B73" ref-type="bibr">2010</xref>
; Elmer et al.,
<xref rid="B22" ref-type="bibr">2012</xref>
). Thus, it is possible that effects only detected by a sparse or continuous paradigm are not represented in our ALE results. However, a comparison of continuous vs. sparse fMRI sequences found no significant differences in speech activations in the frontal lobe between the pulse sequences (Peelle et al.,
<xref rid="B73" ref-type="bibr">2010</xref>
).</p>
<p>Priming paradigms measuring neurophysiological responses (ERP, fMRI, etc.) are one way to circumvent task-related confounds in understanding the neurobiology of music in relation to that of speech. Tillmann (
<xref rid="B115" ref-type="bibr">2012</xref>
) suggests that priming paradigms may provide more insight into an individual's implicit musical knowledge than is demonstrated by performance on an explicit, overt task (e.g., Schellenberg et al.,
<xref rid="B95" ref-type="bibr">2005</xref>
; Tillmann et al.,
<xref rid="B116" ref-type="bibr">2007</xref>
). In fact, there are ERP studies that indicate that musical chords can prime processing of target words if the prime and target are semantically (i.e., emotionally) similar (Koelsch et al.,
<xref rid="B52" ref-type="bibr">2004</xref>
; Steinbeis and Koelsch,
<xref rid="B107" ref-type="bibr">2008a</xref>
). However, most ERP priming studies investigating music or music/speech interactions have included an explicit task (e.g., Schellenberg et al.,
<xref rid="B95" ref-type="bibr">2005</xref>
; Tillmann et al.,
<xref rid="B116" ref-type="bibr">2007</xref>
; Steinbeis and Koelsch,
<xref rid="B107" ref-type="bibr">2008a</xref>
). It is not known how the presence of an explicit task may affect priming mechanisms via top-down mechanisms. Priming is not explored in the present meta-analysis; to our knowledge there is only one fMRI priming study of music and speech, which focused on semantic (i.e., emotion) relatedness (Steinbeis and Koelsch,
<xref rid="B107" ref-type="bibr">2008a</xref>
).</p>
<p>The present meta-analysis examines networks primarily in the cerebrum. Even though almost all of the studies included in our analyses focused on cortical structures, we still identified some subcortical task-related activations: compared to passive listening to music, music error detection tasks activated the basal ganglia, and music memory tasks activated the thalamus, hippocampus, and basal ganglia. No significant differences between passive listening to speech and music were found in subcortical structures. These findings (and null results) in subcortical regions should be interpreted cautiously: given the relatively small size of these structures, activations in these areas are particularly vulnerable to spatial smoothing filters and group averaging (Raichle et al.,
<xref rid="B85" ref-type="bibr">1991</xref>
; White et al.,
<xref rid="B122" ref-type="bibr">2001</xref>
). There is also strong evidence that music and speech share subcortical resources in the brainstem (Patel,
<xref rid="B68" ref-type="bibr">2011</xref>
), which are not addressed by the present study. For example, periodicity is a critical aspect of both speech and music and is known to modulate networks between the cochlea and the inferior colliculus of the brainstem (Cariani and Delgutte,
<xref rid="B14" ref-type="bibr">1996</xref>
; Patel,
<xref rid="B68" ref-type="bibr">2011</xref>
). Further research is needed to better understand where speech and music processing networks diverge downstream from these shared early components.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="s5">
<title>Conclusion</title>
<p>Listening to music and listening to speech engage distinct temporo-parietal cortical networks but share some inferior and medial frontal resources (at least at the resolution of fMRI). However, the recruitment of inferior frontal speech-processing regions for music is modulated by task. The present findings highlight the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>This work was supported by a GRAMMY Foundation Scientific Research Grant (PI Rogalsky) and Arizona State University. We thank Nicole Blumenstein and Dr. Nancy Moore for their help in the preparation of this manuscript.</p>
</ack>
<fn-group>
<fn id="fn0001">
<p>
<sup>1</sup>
The music categories included studies with stimuli of the following types: instrumental unfamiliar and familiar melodies, tone sequences, and individual tones. In comparison, the speech categories described below included studies with stimuli such as individual phonemes, vowels, syllables, words, pseudowords, sentences, and pseudoword sentences. For the purposes of the present study, we have generated two distinct groups of stimuli to compare. However, music and speech are often conceptualized as two ends of a continuum with substantial gray area between the two extremes (Koelsch,
<xref rid="B49" ref-type="bibr">2011</xref>
). For example, naturally spoken sentences contain rhythmic and pitch-related prosodic features and a familiar melody likely automatically elicits a mental representation of the song's lyrics.</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abrams</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Bhatara</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ryali</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Balaban</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Levitin</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Menon</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Decoding temporal structure in music and speech relies on shared brain resources but elicits different fine-scale spatial patterns</article-title>
.
<source>Cereb. Cortex</source>
<volume>21</volume>
,
<fpage>1507</fpage>
<lpage>1518</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhq198</pub-id>
<pub-id pub-id-type="pmid">21071617</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Adank</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Design choices in imaging speech comprehension: an activation likelihood estimation (ALE) meta-analysis</article-title>
.
<source>Neuroimage</source>
<volume>63</volume>
,
<fpage>1601</fpage>
<lpage>1613</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2012.07.027</pub-id>
<pub-id pub-id-type="pmid">22836181</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amunts</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Schleicher</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bürgel</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Mohlberg</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Uylings</surname>
<given-names>H. B. M.</given-names>
</name>
<name>
<surname>Zilles</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Broca's region revisited: cytoarchitecture and intersubject variability</article-title>
.
<source>J. Comp. Neurol.</source>
<volume>412</volume>
,
<fpage>319</fpage>
<lpage>341</lpage>
.
<pub-id pub-id-type="pmid">10441759</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Anwander</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Tittgemeyer</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>von Cramon</surname>
<given-names>D. Y.</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Knösche</surname>
<given-names>T. R.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Connectivity-based parcellation of Broca's area</article-title>
.
<source>Cereb. Cortex</source>
<volume>17</volume>
,
<fpage>816</fpage>
<lpage>825</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhk034</pub-id>
<pub-id pub-id-type="pmid">16707738</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baker</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Blumstein</surname>
<given-names>S. E.</given-names>
</name>
<name>
<surname>Goodglass</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>1981</year>
).
<article-title>Interaction between phonological and semantic factors in auditory comprehension</article-title>
.
<source>Neuropsychologia</source>
<volume>19</volume>
,
<fpage>1</fpage>
<lpage>15</lpage>
.
<pub-id pub-id-type="doi">10.1016/0028-3932(81)90039-7</pub-id>
<pub-id pub-id-type="pmid">7231654</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Basso</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Capitani</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Spared musical abilities in a conductor with global aphasia and ideomotor apraxia</article-title>
.
<source>J. Neurol. Neurosurg. Psychiatry</source>
<volume>48</volume>
,
<fpage>407</fpage>
<lpage>412</lpage>
.
<pub-id pub-id-type="doi">10.1136/jnnp.48.5.407</pub-id>
<pub-id pub-id-type="pmid">2582094</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Chobert</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Marie</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Transfer of training between music and speech: common processing, attention, and memory</article-title>
.
<source>Front. Psychol.</source>
<volume>2</volume>
:
<issue>94</issue>
.
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00094</pub-id>
<pub-id pub-id-type="pmid">21738519</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Faita</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>An event-related potential (ERP) study of musical expectancy: comparison of musicians with nonmusicians</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform.</source>
<volume>21</volume>
,
<fpage>1278</fpage>
<lpage>1296</lpage>
.
<pub-id pub-id-type="doi">10.1037/0096-1523.21.6.1278</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Schön</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Comparison between language and music</article-title>
.
<source>Ann. N.Y. Acad. Sci.</source>
<volume>930</volume>
,
<fpage>232</fpage>
<lpage>258</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1749-6632.2001.tb05736.x</pub-id>
<pub-id pub-id-type="pmid">11458832</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brattico</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Näätänen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Musical scale properties are automatically processed in the human auditory cortex</article-title>
.
<source>Brain Res.</source>
<volume>1117</volume>
,
<fpage>162</fpage>
<lpage>174</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.brainres.2006.08.023</pub-id>
<pub-id pub-id-type="pmid">16963000</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Buchsbaum</surname>
<given-names>B. R.</given-names>
</name>
<name>
<surname>Baldo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Okada</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Berman</surname>
<given-names>K. F.</given-names>
</name>
<name>
<surname>Dronkers</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>D'Esposito</surname>
<given-names>M.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2011</year>
).
<article-title>Conduction aphasia, sensory-motor integration, and phonological short-term memory - an aggregate analysis of lesion and fMRI data</article-title>
.
<source>Brain Lang.</source>
<volume>119</volume>
,
<fpage>119</fpage>
<lpage>128</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.bandl.2010.12.001</pub-id>
<pub-id pub-id-type="pmid">21256582</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Buchsbaum</surname>
<given-names>B. R.</given-names>
</name>
<name>
<surname>Olsen</surname>
<given-names>R. K.</given-names>
</name>
<name>
<surname>Koch</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Berman</surname>
<given-names>K. F.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Human dorsal and ventral auditory streams subserve rehearsal-based and echoic processes during verbal working memory</article-title>
.
<source>Neuron</source>
<volume>48</volume>
,
<fpage>687</fpage>
<lpage>697</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuron.2005.09.029</pub-id>
<pub-id pub-id-type="pmid">16301183</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cant</surname>
<given-names>J. S.</given-names>
</name>
<name>
<surname>Goodale</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Attention to form or surface properties modulates different regions of human occipitotemporal cortex</article-title>
.
<source>Cereb. Cortex</source>
<volume>17</volume>
,
<fpage>713</fpage>
<lpage>731</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhk022</pub-id>
<pub-id pub-id-type="pmid">16648452</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cariani</surname>
<given-names>P. A.</given-names>
</name>
<name>
<surname>Delgutte</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Neural correlates of the pitch of complex tones. I. Pitch and pitch salience</article-title>
.
<source>J. Neurophysiol.</source>
<volume>76</volume>
,
<fpage>1698</fpage>
<lpage>1716</lpage>
.
<pub-id pub-id-type="pmid">8890286</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carrus</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Pearce</surname>
<given-names>M. T.</given-names>
</name>
<name>
<surname>Bhattacharya</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Melodic pitch expectation interacts with neural responses to syntactic but not semantic violations</article-title>
.
<source>Cortex</source>
<volume>49</volume>
,
<fpage>2186</fpage>
<lpage>2200</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cortex.2012.08.024</pub-id>
<pub-id pub-id-type="pmid">23141867</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chawla</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Rees</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Friston</surname>
<given-names>K. J.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>The physiological basis of attentional modulation in extrastriate visual areas</article-title>
.
<source>Nat. Neurosci.</source>
<volume>2</volume>
,
<fpage>671</fpage>
<lpage>676</lpage>
.
<pub-id pub-id-type="doi">10.1038/10230</pub-id>
<pub-id pub-id-type="pmid">10404202</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Corbetta</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Miezin</surname>
<given-names>F. M.</given-names>
</name>
<name>
<surname>Dobmeyer</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Shulman</surname>
<given-names>G. L.</given-names>
</name>
<name>
<surname>Petersen</surname>
<given-names>S. E.</given-names>
</name>
</person-group>
(
<year>1990</year>
).
<article-title>Attentional modulation of neural processing of shape, color, and velocity in humans</article-title>
.
<source>Science</source>
<volume>248</volume>
,
<fpage>1556</fpage>
<lpage>1559</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.2360050</pub-id>
<pub-id pub-id-type="pmid">2360050</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Damasio</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Tranel</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Grabowski</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Adolphs</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Damasio</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Neural systems behind word and concept retrieval</article-title>
.
<source>Cognition</source>
<volume>92</volume>
,
<fpage>179</fpage>
<lpage>229</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cognition.2002.07.001</pub-id>
<pub-id pub-id-type="pmid">15037130</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dronkers</surname>
<given-names>N. F.</given-names>
</name>
<name>
<surname>Wilkins</surname>
<given-names>D. P.</given-names>
</name>
<name>
<surname>Van Valin</surname>
<given-names>R. D.</given-names>
<suffix>Jr.</suffix>
</name>
<name>
<surname>Redfern</surname>
<given-names>B. B.</given-names>
</name>
<name>
<surname>Jaeger</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Lesion analysis of the brain areas involved in language comprehension</article-title>
.
<source>Cognition</source>
<volume>92</volume>
,
<fpage>145</fpage>
<lpage>177</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cognition.2003.11.002</pub-id>
<pub-id pub-id-type="pmid">15037129</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eickhoff</surname>
<given-names>S. B.</given-names>
</name>
<name>
<surname>Bzdok</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Laird</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Kurth</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>P. T.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Activation likelihood estimation revisited</article-title>
.
<source>Neuroimage</source>
<volume>59</volume>
,
<fpage>2349</fpage>
<lpage>2361</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.09.017</pub-id>
<pub-id pub-id-type="pmid">21963913</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eickhoff</surname>
<given-names>S. B.</given-names>
</name>
<name>
<surname>Laird</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Grefkes</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>L. E.</given-names>
</name>
<name>
<surname>Zilles</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>P. T.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Coordinate-based activation likelihood estimation meta-analysis of neuroimaging data: a random-effects approach based on empirical estimates of spatial uncertainty</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>30</volume>
,
<fpage>2907</fpage>
<lpage>2926</lpage>
.
<pub-id pub-id-type="doi">10.1002/hbm.20718</pub-id>
<pub-id pub-id-type="pmid">19172646</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Elmer</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Jäncke</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Neurofunctional and behavioral correlates of phonetic and temporal categorization in musically trained and untrained subjects</article-title>
.
<source>Cereb. Cortex</source>
<volume>22</volume>
,
<fpage>650</fpage>
<lpage>658</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhr142</pub-id>
<pub-id pub-id-type="pmid">21680844</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fedorenko</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Behr</surname>
<given-names>M. K.</given-names>
</name>
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Functional specificity for high-level linguistic processing in the human brain</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>108</volume>
,
<fpage>16428</fpage>
<lpage>16433</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.1112937108</pub-id>
<pub-id pub-id-type="pmid">21885736</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fedorenko</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Some regions within Broca's area do respond more strongly to sentences than to linguistically degraded stimuli: a comment on Rogalsky and Hickok (2010)</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>23</volume>
,
<fpage>2632</fpage>
<lpage>2635</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn_a_00043</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fedorenko</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Casasanto</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Winawer</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Gibson</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Structural integration in language and music: evidence for a shared system</article-title>
.
<source>Mem. Cogn.</source>
<volume>37</volume>
,
<fpage>1</fpage>
<lpage>9</lpage>
.
<pub-id pub-id-type="doi">10.3758/MC.37.1.1</pub-id>
<pub-id pub-id-type="pmid">19103970</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frances</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Lhermitte</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Verdy</surname>
<given-names>M. F.</given-names>
</name>
</person-group>
(
<year>1973</year>
).
<article-title>Le déficit musical des aphasiques</article-title>
.
<source>Appl. Psychol.</source>
<volume>22</volume>
,
<fpage>117</fpage>
<lpage>135</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1464-0597.1973.tb00391.x</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Kotz</surname>
<given-names>S. A.</given-names>
</name>
<name>
<surname>Scott</surname>
<given-names>S. K.</given-names>
</name>
<name>
<surname>Obleser</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Disentangling syntax and intelligibility in auditory language comprehension</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>31</volume>
,
<fpage>448</fpage>
<lpage>457</lpage>
.
<pub-id pub-id-type="doi">10.1002/hbm.20878</pub-id>
<pub-id pub-id-type="pmid">19718654</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Herrmann</surname>
<given-names>C. S.</given-names>
</name>
<name>
<surname>Maess</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Oertel</surname>
<given-names>U.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Localization of early syntactic processes in frontal and temporal cortical areas: a magnetoencephalographic study</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>11</volume>
,
<fpage>1</fpage>
<lpage>11</lpage>
.
<pub-id pub-id-type="doi">10.1002/1097-0193(200009)11:1<1::AID-HBM10>3.0.CO;2-B</pub-id>
<pub-id pub-id-type="pmid">10997849</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gazzaniga</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>Right hemisphere language following brain bisection: a 20-year perspective</article-title>
.
<source>Am. Psychol.</source>
<volume>38</volume>
,
<fpage>525</fpage>
<lpage>537</lpage>
.
<pub-id pub-id-type="doi">10.1037/0003-066X.38.5.525</pub-id>
<pub-id pub-id-type="pmid">6346975</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Geiser</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Zaehle</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Jancke</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>The neural correlate of speech rhythm as evidenced by metrical speech processing</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>20</volume>
,
<fpage>541</fpage>
<lpage>552</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn.2008.20029</pub-id>
<pub-id pub-id-type="pmid">18004944</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Grahn</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Advances in neuroimaging techniques: Implications for the shared syntactic integration resource hypothesis</article-title>
, in
<source>Language and Music as Cognitive Systems</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Rebuschat</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Rohrmeier</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hawkins</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cross</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>235</fpage>
<lpage>241</lpage>
.</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hagoort</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>On Broca, brain and binding: a new framework</article-title>
.
<source>Trends Cogn. Sci.</source>
<volume>9</volume>
,
<fpage>416</fpage>
<lpage>423</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.tics.2005.07.004</pub-id>
<pub-id pub-id-type="pmid">16054419</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Henschen</surname>
<given-names>S. E.</given-names>
</name>
</person-group>
(
<year>1924</year>
).
<article-title>On the function of the right hemisphere of the brain in relation to the left in speech, music and calculation</article-title>
.
<source>Brain</source>
<volume>44</volume>
,
<fpage>110</fpage>
<lpage>123</lpage>
.</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Buchsbaum</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Humphries</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Muftuler</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>15</volume>
,
<fpage>673</fpage>
<lpage>682</lpage>
.
<pub-id pub-id-type="doi">10.1162/089892903322307393</pub-id>
<pub-id pub-id-type="pmid">12965041</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language</article-title>
.
<source>Cognition</source>
<volume>92</volume>
,
<fpage>67</fpage>
<lpage>99</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cognition.2003.10.011</pub-id>
<pub-id pub-id-type="pmid">15037127</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>The cortical organization of speech processing</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>8</volume>
,
<fpage>393</fpage>
<lpage>402</lpage>
.
<pub-id pub-id-type="doi">10.1038/nrn2113</pub-id>
<pub-id pub-id-type="pmid">17431404</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Rogalsky</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>What does Broca's area activation to sentences reflect?</article-title>
<source>J. Cogn. Neurosci.</source>
<volume>23</volume>
,
<fpage>2629</fpage>
<lpage>2631</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn_a_00044</pub-id>
<pub-id pub-id-type="pmid">21563891</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hoch</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Poulin-Charronnat</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Tillmann</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>The influence of task-irrelevant music on language processing: syntactic and semantic structures</article-title>
.
<source>Front. Psychol.</source>
<volume>2</volume>
:
<issue>112</issue>
.
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00112</pub-id>
<pub-id pub-id-type="pmid">21713122</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Humphries</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Binder</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Medler</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Liebenthal</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Syntactic and semantic modulation of neural activity during auditory sentence comprehension</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>18</volume>
,
<fpage>665</fpage>
<lpage>679</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn.2006.18.4.665</pub-id>
<pub-id pub-id-type="pmid">16768368</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Humphries</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Love</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Swinney</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Response of anterior temporal cortex to syntactic and prosodic manipulations during sentence processing</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>26</volume>
,
<fpage>128</fpage>
<lpage>138</lpage>
.
<pub-id pub-id-type="doi">10.1002/hbm.20148</pub-id>
<pub-id pub-id-type="pmid">15895428</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Humphries</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Willard</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Buchsbaum</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Role of anterior temporal cortex in auditory sentence comprehension: an fMRI study</article-title>
.
<source>Neuroreport</source>
<volume>12</volume>
,
<fpage>1749</fpage>
<lpage>1752</lpage>
.
<pub-id pub-id-type="doi">10.1097/00001756-200106130-00046</pub-id>
<pub-id pub-id-type="pmid">11409752</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hyde</surname>
<given-names>K. L.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Evidence for the role of the right auditory cortex in fine pitch resolution</article-title>
.
<source>Neuropsychologia</source>
<volume>46</volume>
,
<fpage>632</fpage>
<lpage>639</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2007.09.004</pub-id>
<pub-id pub-id-type="pmid">17959204</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hyde</surname>
<given-names>K. L.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
<name>
<surname>Lerch</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Morphometry of the amusic brain: a two-site study</article-title>
.
<source>Brain</source>
<volume>129</volume>
,
<fpage>2562</fpage>
<lpage>2570</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/awl204</pub-id>
<pub-id pub-id-type="pmid">16931534</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ivry</surname>
<given-names>R. B.</given-names>
</name>
<name>
<surname>Robertson</surname>
<given-names>L. C.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<source>The Two Sides of Perception</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jäncke</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Music, memory and emotion</article-title>
.
<source>J. Biol.</source>
<volume>7</volume>
,
<fpage>21</fpage>
.
<pub-id pub-id-type="doi">10.1186/jbiol82</pub-id>
<pub-id pub-id-type="pmid">18710596</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jäncke</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Wüstenberg</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Scheich</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Heinze</surname>
<given-names>H. J.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Phonetic perception and the temporal cortex</article-title>
.
<source>Neuroimage</source>
<volume>15</volume>
,
<fpage>733</fpage>
<lpage>746</lpage>
.
<pub-id pub-id-type="doi">10.1006/nimg.2001.1027</pub-id>
<pub-id pub-id-type="pmid">11906217</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>January</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Trueswell</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Thompson-Schill</surname>
<given-names>S. L.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Co-localization of stroop and syntactic ambiguity resolution in Broca's area: implications for the neural basis of sentence processing</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>21</volume>
,
<fpage>2434</fpage>
<lpage>2444</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn.2008.21179</pub-id>
<pub-id pub-id-type="pmid">19199402</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Neural substrates of processing syntax and semantics in music</article-title>
.
<source>Curr. Opin. Neurobiol.</source>
<volume>15</volume>
,
<fpage>207</fpage>
<lpage>212</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.conb.2005.03.005</pub-id>
<pub-id pub-id-type="pmid">15831404</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Toward a neural basis of music perception – a review and updated model</article-title>
.
<source>Front. Psychol.</source>
<volume>2</volume>
:
<issue>110</issue>
.
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00110</pub-id>
<pub-id pub-id-type="pmid">21713060</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Toward the neural basis of processing structure in music</article-title>
.
<source>Ann. N.Y. Acad. Sci.</source>
<volume>999</volume>
,
<fpage>15</fpage>
<lpage>28</lpage>
.
<pub-id pub-id-type="doi">10.1196/annals.1284.002</pub-id>
<pub-id pub-id-type="pmid">14681114</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gunter</surname>
<given-names>T. C.</given-names>
</name>
<name>
<surname>Wittfoth</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sammler</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Interaction between syntax processing in language and in music: an ERP study</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>17</volume>
,
<fpage>1565</fpage>
<lpage>1577</lpage>
.
<pub-id pub-id-type="doi">10.1162/089892905774597290</pub-id>
<pub-id pub-id-type="pmid">16269097</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kasper</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Sammler</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Schulze</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Gunter</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Music, language and meaning: brain signatures of semantic processing</article-title>
.
<source>Nat. Neurosci.</source>
<volume>7</volume>
,
<fpage>302</fpage>
<lpage>307</lpage>
.
<pub-id pub-id-type="doi">10.1038/nn1197</pub-id>
<pub-id pub-id-type="pmid">14983184</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Siebel</surname>
<given-names>W. A.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Towards a neural basis of music perception</article-title>
.
<source>Trends Cogn. Sci.</source>
<volume>9</volume>
,
<fpage>578</fpage>
<lpage>584</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.tics.2005.10.001</pub-id>
<pub-id pub-id-type="pmid">16271503</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Schulze</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Sammler</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Fritz</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Müller</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Gruber</surname>
<given-names>O.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Functional architecture of verbal and tonal working memory: an fMRI study</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>30</volume>
,
<fpage>859</fpage>
<lpage>873</lpage>
.
<pub-id pub-id-type="doi">10.1002/hbm.20550</pub-id>
<pub-id pub-id-type="pmid">18330870</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Luria</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Tsvetkova</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Futer</surname>
<given-names>D. S.</given-names>
</name>
</person-group>
(
<year>1965</year>
).
<article-title>Aphasia in a composer</article-title>
.
<source>J. Neurol. Sci.</source>
<volume>1</volume>
,
<fpage>288</fpage>
<lpage>292</lpage>
.
<pub-id pub-id-type="doi">10.1016/0022-510X(65)90113-9</pub-id>
<pub-id pub-id-type="pmid">4860800</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>MacLeod</surname>
<given-names>C. M.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Half a century of research on the Stroop effect: an integrative review</article-title>
.
<source>Psychol. Bull.</source>
<volume>109</volume>
,
<fpage>163</fpage>
<lpage>203</lpage>
.
<pub-id pub-id-type="doi">10.1037/0033-2909.109.2.163</pub-id>
<pub-id pub-id-type="pmid">2034749</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maess</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gunter</surname>
<given-names>T. C.</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Musical syntax is processed in Broca's area: an MEG study</article-title>
.
<source>Nat. Neurosci.</source>
<volume>4</volume>
,
<fpage>540</fpage>
<lpage>545</lpage>
.
<pub-id pub-id-type="doi">10.1038/87502</pub-id>
<pub-id pub-id-type="pmid">11319564</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maillard</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Barbeau</surname>
<given-names>E. J.</given-names>
</name>
<name>
<surname>Baumann</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Koessler</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Bénar</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Chauvel</surname>
<given-names>P.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2011</year>
).
<article-title>From perception to recognition memory: time course and lateralization of neural substrates of word and abstract picture processing</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>23</volume>
,
<fpage>782</fpage>
<lpage>800</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn.2010.21434</pub-id>
<pub-id pub-id-type="pmid">20146594</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Masataka</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Perlovsky</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Cognitive interference can be mitigated by consonant music and facilitated by dissonant music</article-title>
.
<source>Sci. Rep.</source>
<volume>3</volume>
,
<fpage>1</fpage>
<lpage>6</lpage>
.
<pub-id pub-id-type="doi">10.1038/srep02028</pub-id>
<pub-id pub-id-type="pmid">23778307</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mazoyer</surname>
<given-names>B. M.</given-names>
</name>
<name>
<surname>Tzourio</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Frak</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Syrota</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Murayama</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Levrier</surname>
<given-names>O.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>1993</year>
).
<article-title>The cortical representation of speech</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>5</volume>
,
<fpage>467</fpage>
<lpage>479</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn.1993.5.4.467</pub-id>
<pub-id pub-id-type="pmid">23964919</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ni</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Constable</surname>
<given-names>R. T.</given-names>
</name>
<name>
<surname>Mencl</surname>
<given-names>W. E.</given-names>
</name>
<name>
<surname>Pugh</surname>
<given-names>K. R.</given-names>
</name>
<name>
<surname>Fulbright</surname>
<given-names>R. K.</given-names>
</name>
<name>
<surname>Shaywitz</surname>
<given-names>S. E.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2000</year>
).
<article-title>An event-related neuroimaging study distinguishing form and content in sentence processing</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>12</volume>
,
<fpage>120</fpage>
<lpage>133</lpage>
.
<pub-id pub-id-type="doi">10.1162/08989290051137648</pub-id>
<pub-id pub-id-type="pmid">10769310</pub-id>
</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Noesselt</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Shah</surname>
<given-names>N. J.</given-names>
</name>
<name>
<surname>Jäncke</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Top-down and bottom-up modulation of language related areas – an fMRI study</article-title>
.
<source>BMC Neurosci.</source>
<volume>4</volume>
:
<fpage>13</fpage>
.
<pub-id pub-id-type="doi">10.1186/1471-2202-4-13</pub-id>
<pub-id pub-id-type="pmid">12828789</pub-id>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Novick</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Trueswell</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Thompson-Schill</surname>
<given-names>S. L.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Cognitive control and parsing: reexamining the role of Broca's area in sentence comprehension</article-title>
.
<source>Cogn. Affect. Behav. Neurosci.</source>
<volume>5</volume>
,
<fpage>263</fpage>
<lpage>281</lpage>
.
<pub-id pub-id-type="doi">10.3758/CABN.5.3.263</pub-id>
<pub-id pub-id-type="pmid">16396089</pub-id>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oechslin</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Jäncke</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Absolute pitch – functional evidence of speech-relevant auditory acuity</article-title>
.
<source>Cereb. Cortex</source>
<volume>20</volume>
,
<fpage>447</fpage>
<lpage>455</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhp113</pub-id>
<pub-id pub-id-type="pmid">19592570</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Language, music, syntax and the brain</article-title>
.
<source>Nat. Neurosci.</source>
<volume>6</volume>
,
<fpage>674</fpage>
<lpage>681</lpage>
.
<pub-id pub-id-type="doi">10.1038/nn1082</pub-id>
<pub-id pub-id-type="pmid">12830158</pub-id>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>The relationship of music to the melody of speech and to syntactic processing disorders in aphasia</article-title>
.
<source>Ann. N.Y. Acad. Sci.</source>
<volume>1060</volume>
,
<fpage>59</fpage>
<lpage>70</lpage>
.
<pub-id pub-id-type="doi">10.1196/annals.1360.005</pub-id>
<pub-id pub-id-type="pmid">16597751</pub-id>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<source>Music, Language, and the Brain</source>
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Why would musical training benefit the neural encoding of speech? The OPERA hypothesis</article-title>
.
<source>Front. Psychol.</source>
<volume>2</volume>
:
<issue>142</issue>
.
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00142</pub-id>
<pub-id pub-id-type="pmid">21747773</pub-id>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Language, music, and the brain: a resource-sharing framework</article-title>
, in
<source>Language and Music as Cognitive Systems</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Rebuschat</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Rohrmeier</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hawkins</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cross</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>204</fpage>
<lpage>223</lpage>
.</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Sharing and nonsharing of brain resources for language and music</article-title>
, in
<source>Language, Music, and the Brain</source>
, ed
<person-group person-group-type="editor">
<name>
<surname>Arbib</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
),
<fpage>329</fpage>
<lpage>355</lpage>
.</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Gibson</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Ratner</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Holcomb</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1998a</year>
).
<article-title>Processing syntactic relations in language and music: an event-related potential study</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>10</volume>
,
<fpage>717</fpage>
<lpage>733</lpage>
.
<pub-id pub-id-type="doi">10.1162/089892998563121</pub-id>
<pub-id pub-id-type="pmid">9831740</pub-id>
</mixed-citation>
</ref>
<ref id="B72">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Tramo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Labrecque</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1998b</year>
).
<article-title>Processing prosodic and musical patterns: a neuropsychological investigation</article-title>
.
<source>Brain Lang.</source>
<volume>61</volume>
,
<fpage>123</fpage>
<lpage>144</lpage>
.
<pub-id pub-id-type="doi">10.1006/brln.1997.1862</pub-id>
<pub-id pub-id-type="pmid">9448936</pub-id>
</mixed-citation>
</ref>
<ref id="B73">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peelle</surname>
<given-names>J. E.</given-names>
</name>
<name>
<surname>Eason</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Schmitter</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Schwarzbauer</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Davis</surname>
<given-names>M. H.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Evaluating an acoustically quiet EPI sequence for use in fMRI studies of speech and auditory processing</article-title>
.
<source>Neuroimage</source>
<volume>52</volume>
,
<fpage>1410</fpage>
<lpage>1419</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.05.015</pub-id>
<pub-id pub-id-type="pmid">20483377</pub-id>
</mixed-citation>
</ref>
<ref id="B74">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>The nature of music from a biological perspective</article-title>
.
<source>Cognition</source>
<volume>100</volume>
,
<fpage>1</fpage>
<lpage>32</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cognition.2005.11.004</pub-id>
<pub-id pub-id-type="pmid">16487953</pub-id>
</mixed-citation>
</ref>
<ref id="B75">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Belleville</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Fontaine</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Dissociations between music and language functions after cerebral resection: a new case of amusia without aphasia</article-title>
.
<source>Can. J. Exp. Psychol.</source>
<volume>51</volume>
,
<fpage>354</fpage>
<lpage>368</lpage>
.
<pub-id pub-id-type="doi">10.1037/1196-1961.51.4.354</pub-id>
<pub-id pub-id-type="pmid">9687196</pub-id>
</mixed-citation>
</ref>
<ref id="B76">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Champod</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Hyde</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Varieties of musical disorders</article-title>
.
<source>Ann. N.Y. Acad. Sci.</source>
<volume>999</volume>
,
<fpage>58</fpage>
<lpage>75</lpage>
.
<pub-id pub-id-type="doi">10.1196/annals.1284.006</pub-id>
<pub-id pub-id-type="pmid">14681118</pub-id>
</mixed-citation>
</ref>
<ref id="B77">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Hyde</surname>
<given-names>K. L.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>What is specific to music processing? Insights from congenital amusia</article-title>
.
<source>Trends Cogn. Sci.</source>
<volume>7</volume>
,
<fpage>362</fpage>
<lpage>367</lpage>
.
<pub-id pub-id-type="doi">10.1016/S1364-6613(03)00150-5</pub-id>
<pub-id pub-id-type="pmid">12907232</pub-id>
</mixed-citation>
</ref>
<ref id="B78">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Kolinsky</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Tramo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Labrecque</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Hublet</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Demeurisse</surname>
<given-names>G.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>1994</year>
).
<article-title>Functional dissociations following bilateral lesions of auditory cortex</article-title>
.
<source>Brain</source>
<volume>117</volume>
,
<fpage>1283</fpage>
<lpage>1302</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/117.6.1283</pub-id>
<pub-id pub-id-type="pmid">7820566</pub-id>
</mixed-citation>
</ref>
<ref id="B79">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Brain organization for music processing</article-title>
.
<source>Annu. Rev. Psychol.</source>
<volume>56</volume>
,
<fpage>89</fpage>
<lpage>114</lpage>
.
<pub-id pub-id-type="doi">10.1146/annurev.psych.56.091103.070225</pub-id>
<pub-id pub-id-type="pmid">15709930</pub-id>
</mixed-citation>
</ref>
<ref id="B80">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Perruchet</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Poulin-Charronnat</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Challenging prior evidence for a shared syntactic processor for language and music</article-title>
.
<source>Psychon. Bull. Rev.</source>
<volume>20</volume>
,
<fpage>310</fpage>
<lpage>317</lpage>
.
<pub-id pub-id-type="doi">10.3758/s13423-012-0344-5</pub-id>
<pub-id pub-id-type="pmid">23180417</pub-id>
</mixed-citation>
</ref>
<ref id="B81">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Petersen</surname>
<given-names>S. E.</given-names>
</name>
<name>
<surname>Posner</surname>
<given-names>M. I.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The attention system of the human brain: 20 years later</article-title>
.
<source>Annu. Rev. Neurosci.</source>
<volume>35</volume>
,
<fpage>73</fpage>
<lpage>89</lpage>
.
<pub-id pub-id-type="doi">10.1146/annurev-neuro-062111-150525</pub-id>
<pub-id pub-id-type="pmid">22524787</pub-id>
</mixed-citation>
</ref>
<ref id="B82">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Platel</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Price</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Baron</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Wise</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Lambert</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Frackowiak</surname>
<given-names>R. S.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>1997</year>
).
<article-title>The structural components of music perception: a functional anatomical study</article-title>
.
<source>Brain</source>
<volume>120</volume>
,
<fpage>229</fpage>
<lpage>243</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/120.2.229</pub-id>
<pub-id pub-id-type="pmid">9117371</pub-id>
</mixed-citation>
</ref>
<ref id="B83">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>The analysis of speech in different temporal integration windows: cerebral lateralization as ‘asymmetric sampling in time’</article-title>
.
<source>Speech Commun.</source>
<volume>41</volume>
,
<fpage>245</fpage>
<lpage>255</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0167-6393(02)00107-3</pub-id>
</mixed-citation>
</ref>
<ref id="B84">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Price</surname>
<given-names>C. J.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>The anatomy of language: a review of 100 fMRI studies published in 2009</article-title>
.
<source>Ann. N.Y. Acad. Sci.</source>
<volume>1191</volume>
,
<fpage>62</fpage>
<lpage>88</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1749-6632.2010.05444.x</pub-id>
<pub-id pub-id-type="pmid">20392276</pub-id>
</mixed-citation>
</ref>
<ref id="B85">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Raichle</surname>
<given-names>M. E.</given-names>
</name>
<name>
<surname>Mintun</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Shertz</surname>
<given-names>L. D.</given-names>
</name>
<name>
<surname>Fusselman</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Miezen</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>The influence of anatomical variability on functional brain mapping with PET: a study of intrasubject versus intersubject averaging</article-title>
.
<source>J. Cereb. Blood Flow Metab.</source>
<volume>11</volume>
,
<fpage>S364</fpage>
.</mixed-citation>
</ref>
<ref id="B86">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rogalsky</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Almeida</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Sprouse</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>in press</year>
).
<article-title>Sentence processing selectivity in Broca's area: evident for structure but not syntactic movement</article-title>
.
<source>Lang. Cogn. Neurosci.</source>
</mixed-citation>
</ref>
<ref id="B87">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rogalsky</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Selective attention to semantic and syntactic features modulates sentence processing networks in anterior temporal cortex</article-title>
.
<source>Cereb. Cortex</source>
<volume>19</volume>
,
<fpage>786</fpage>
<lpage>796</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhn126</pub-id>
<pub-id pub-id-type="pmid">18669589</pub-id>
</mixed-citation>
</ref>
<ref id="B88">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rogalsky</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>The role of Broca's area in sentence comprehension</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>23</volume>
,
<fpage>1664</fpage>
<lpage>1680</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn.2010.21530</pub-id>
<pub-id pub-id-type="pmid">20617890</pub-id>
</mixed-citation>
</ref>
<ref id="B89">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rogalsky</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Poppa</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>K. H.</given-names>
</name>
<name>
<surname>Anderson</surname>
<given-names>S. W.</given-names>
</name>
<name>
<surname>Damasio</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Love</surname>
<given-names>T.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2015</year>
).
<article-title>Speech repetition as a window on the neurobiology of auditory-motor integration for speech: a voxel-based lesion symptom mapping study</article-title>
.
<source>Neuropsychologia</source>
<volume>71</volume>
,
<fpage>18</fpage>
<lpage>27</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2015.03.012</pub-id>
<pub-id pub-id-type="pmid">25777496</pub-id>
</mixed-citation>
</ref>
<ref id="B90">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rogalsky</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Rong</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Saberi</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Functional anatomy of language and music perception: temporal and structural factors investigated using functional magnetic resonance imaging</article-title>
.
<source>J. Neurosci.</source>
<volume>31</volume>
,
<fpage>3843</fpage>
<lpage>3852</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4515-10.2011</pub-id>
<pub-id pub-id-type="pmid">21389239</pub-id>
</mixed-citation>
</ref>
<ref id="B91">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rorden</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Brett</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Stereotaxic display of brain lesions</article-title>
.
<source>Behav. Neurol.</source>
<volume>12</volume>
,
<fpage>191</fpage>
<lpage>200</lpage>
.
<pub-id pub-id-type="doi">10.1155/2000/421719</pub-id>
<pub-id pub-id-type="pmid">11568431</pub-id>
</mixed-citation>
</ref>
<ref id="B92">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sammler</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Are left fronto-temporal brain areas a prerequisite for normal music-syntactic processing?</article-title>
<source>Cortex</source>
<volume>47</volume>
,
<fpage>659</fpage>
<lpage>673</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cortex.2010.04.007</pub-id>
<pub-id pub-id-type="pmid">20570253</pub-id>
</mixed-citation>
</ref>
<ref id="B93">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sanders</surname>
<given-names>L. D.</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Local and global auditory processing: behavioral and ERP evidence</article-title>
.
<source>Neuropsychologia</source>
<volume>45</volume>
,
<fpage>1172</fpage>
<lpage>1186</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2006.10.010</pub-id>
<pub-id pub-id-type="pmid">17113115</pub-id>
</mixed-citation>
</ref>
<ref id="B94">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scheich</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Brechmann</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brosch</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Budinger</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Ohl</surname>
<given-names>F. W.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>The cognitive auditory cortex: task-specificity of stimulus representations</article-title>
.
<source>Hear. Res.</source>
<volume>229</volume>
,
<fpage>213</fpage>
<lpage>224</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.heares.2007.01.025</pub-id>
<pub-id pub-id-type="pmid">17368987</pub-id>
</mixed-citation>
</ref>
<ref id="B95">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schellenberg</surname>
<given-names>E. G.</given-names>
</name>
<name>
<surname>Bigand</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Poulin-Charronnat</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Garnier</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Stevens</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Children's implicit knowledge of harmony in Western music</article-title>
.
<source>Dev. Sci.</source>
<volume>8</volume>
,
<fpage>551</fpage>
<lpage>566</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1467-7687.2005.00447.x</pub-id>
<pub-id pub-id-type="pmid">16246247</pub-id>
</mixed-citation>
</ref>
<ref id="B96">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schönwiesner</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Novitski</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Pakarinen</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Carlson</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Näätänen</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Heschl's gyrus, posterior superior temporal gyrus, and mid-ventrolateral prefrontal cortex have different roles in the detection of acoustic changes</article-title>
.
<source>J. Neurophysiol.</source>
<volume>97</volume>
,
<fpage>2075</fpage>
<lpage>2082</lpage>
.
<pub-id pub-id-type="doi">10.1152/jn.01083.2006</pub-id>
<pub-id pub-id-type="pmid">17182905</pub-id>
</mixed-citation>
</ref>
<ref id="B97">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schönwiesner</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Rübsamen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>von Cramon</surname>
<given-names>D. Y.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Hemispheric asymmetry for spectral and temporal processing in the human antero-lateral auditory belt cortex</article-title>
.
<source>Eur. J. Neurosci.</source>
<volume>22</volume>
,
<fpage>1521</fpage>
<lpage>1528</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1460-9568.2005.04315.x</pub-id>
<pub-id pub-id-type="pmid">16190905</pub-id>
</mixed-citation>
</ref>
<ref id="B98">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schwartz</surname>
<given-names>M. F.</given-names>
</name>
<name>
<surname>Faseyitan</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Coslett</surname>
<given-names>H. B.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The dorsal stream contribution to phonological retrieval in object naming</article-title>
.
<source>Brain</source>
<volume>135(Pt 12)</volume>
,
<fpage>3799</fpage>
<lpage>3814</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/aws300</pub-id>
<pub-id pub-id-type="pmid">23171662</pub-id>
</mixed-citation>
</ref>
<ref id="B99">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scott</surname>
<given-names>S. K.</given-names>
</name>
<name>
<surname>Blank</surname>
<given-names>C. C.</given-names>
</name>
<name>
<surname>Rosen</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Wise</surname>
<given-names>R. J. S.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Identification of a pathway for intelligible speech in the left temporal lobe</article-title>
.
<source>Brain</source>
<volume>123</volume>
,
<fpage>2400</fpage>
<lpage>2406</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/123.12.2400</pub-id>
<pub-id pub-id-type="pmid">11099443</pub-id>
</mixed-citation>
</ref>
<ref id="B100">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sergent</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1982</year>
).
<article-title>About face: left-hemisphere involvement in processing physiognomies</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform.</source>
<volume>8</volume>
,
<fpage>1</fpage>
<lpage>14</lpage>
.
<pub-id pub-id-type="pmid">6460075</pub-id>
</mixed-citation>
</ref>
<ref id="B101">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shallice</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Functional imaging and neuropsychology findings: how can they be linked?</article-title>
<source>Neuroimage</source>
<volume>20</volume>
,
<fpage>S146</fpage>
<lpage>S154</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2003.09.023</pub-id>
<pub-id pub-id-type="pmid">14597308</pub-id>
</mixed-citation>
</ref>
<ref id="B102">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slevc</surname>
<given-names>L. R.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Language and music: sound, structure and meaning</article-title>
.
<source>WIREs Cogn. Sci.</source>
<volume>3</volume>
,
<fpage>483</fpage>
<lpage>492</lpage>
.
<pub-id pub-id-type="doi">10.1002/wcs.1186</pub-id>
</mixed-citation>
</ref>
<ref id="B103">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slevc</surname>
<given-names>L. R.</given-names>
</name>
<name>
<surname>Rosenberg</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Making psycholinguistics musical: self-paced reading time evidence for shared processing of linguistic and musical syntax</article-title>
.
<source>Psychon. Bull. Rev.</source>
<volume>16</volume>
,
<fpage>374</fpage>
<lpage>381</lpage>
.
<pub-id pub-id-type="doi">10.3758/16.2.374</pub-id>
<pub-id pub-id-type="pmid">19293110</pub-id>
</mixed-citation>
</ref>
<ref id="B104">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slevc</surname>
<given-names>L. R.</given-names>
</name>
<name>
<surname>Okada</surname>
<given-names>B. M.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>Processing structure in language and music: a case for shared reliance on cognitive control</article-title>
.
<source>Psychon. Bull. Rev.</source>
<volume>22</volume>
,
<fpage>637</fpage>
<lpage>652</lpage>
.
<pub-id pub-id-type="doi">10.3758/s13423-014-0712-4</pub-id>
<pub-id pub-id-type="pmid">25092390</pub-id>
</mixed-citation>
</ref>
<ref id="B105">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Specht</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Willmes</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Shah</surname>
<given-names>N. J.</given-names>
</name>
<name>
<surname>Jäncke</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Assessment of reliability in functional imaging studies</article-title>
.
<source>J. Magn. Reson. Imaging</source>
<volume>17</volume>
,
<fpage>463</fpage>
<lpage>471</lpage>
.
<pub-id pub-id-type="doi">10.1002/jmri.10277</pub-id>
<pub-id pub-id-type="pmid">12655586</pub-id>
</mixed-citation>
</ref>
<ref id="B106">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spitsyna</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>J. E.</given-names>
</name>
<name>
<surname>Scott</surname>
<given-names>S. K.</given-names>
</name>
<name>
<surname>Turkheimer</surname>
<given-names>F. E.</given-names>
</name>
<name>
<surname>Wise</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Converging language streams in the human temporal lobe</article-title>
.
<source>J. Neurosci.</source>
<volume>26</volume>
,
<fpage>7328</fpage>
<lpage>7336</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0559-06.2006</pub-id>
<pub-id pub-id-type="pmid">16837579</pub-id>
</mixed-citation>
</ref>
<ref id="B107">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Steinbeis</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2008a</year>
).
<article-title>Comparing the processing of music and language meaning using EEG and fMRI provides evidence for similar and distinct neural representations</article-title>
.
<source>PLoS ONE</source>
<volume>3</volume>
:
<fpage>e2226</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0002226</pub-id>
<pub-id pub-id-type="pmid">18493611</pub-id>
</mixed-citation>
</ref>
<ref id="B108">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Steinbeis</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2008b</year>
).
<article-title>Shared neural resources between music and language indicate semantic processing of musical tension-resolution patterns</article-title>
.
<source>Cereb. Cortex</source>
<volume>18</volume>
,
<fpage>1169</fpage>
<lpage>1178</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhm149</pub-id>
<pub-id pub-id-type="pmid">17720685</pub-id>
</mixed-citation>
</ref>
<ref id="B109">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Steinke</surname>
<given-names>W. R.</given-names>
</name>
<name>
<surname>Cuddy</surname>
<given-names>L. L.</given-names>
</name>
<name>
<surname>Holden</surname>
<given-names>R. R.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Dissociation of musical tonality and pitch memory from nonmusical cognitive abilities</article-title>
.
<source>Can. J. Exp. Psychol.</source>
<volume>51</volume>
:
<fpage>316</fpage>
.
<pub-id pub-id-type="doi">10.1037/1196-1961.51.4.316</pub-id>
<pub-id pub-id-type="pmid">9687195</pub-id>
</mixed-citation>
</ref>
<ref id="B110">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stroop</surname>
<given-names>J. R.</given-names>
</name>
</person-group>
(
<year>1935</year>
).
<article-title>Studies of interference in serial verbal reactions</article-title>
.
<source>J. Exp. Psychol.</source>
<volume>18</volume>
,
<fpage>643</fpage>
<lpage>662</lpage>
.
<pub-id pub-id-type="doi">10.1037/h0054651</pub-id>
</mixed-citation>
</ref>
<ref id="B111">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Musical sound processing in the human brain. Evidence from electric and magnetic recordings</article-title>
.
<source>Ann. N.Y. Acad. Sci.</source>
<volume>930</volume>
,
<fpage>259</fpage>
<lpage>272</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1749-6632.2001.tb05737.x</pub-id>
<pub-id pub-id-type="pmid">11458833</pub-id>
</mixed-citation>
</ref>
<ref id="B112">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hugdahl</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Lateralization of auditory-cortex functions</article-title>
.
<source>Brain Res. Brain Res. Rev.</source>
<volume>43</volume>
,
<fpage>231</fpage>
<lpage>246</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.brainresrev.2003.08.004</pub-id>
<pub-id pub-id-type="pmid">14629926</pub-id>
</mixed-citation>
</ref>
<ref id="B113">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kujala</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Alho</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Virtanen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ilmoniemi</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Näätänen</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Functional specialization of the human auditory cortex in processing phonetic and musical sounds: a magnetoencephalographic (MEG) study</article-title>
.
<source>Neuroimage</source>
<volume>9</volume>
,
<fpage>330</fpage>
<lpage>336</lpage>
.
<pub-id pub-id-type="doi">10.1006/nimg.1999.0405</pub-id>
<pub-id pub-id-type="pmid">10075902</pub-id>
</mixed-citation>
</ref>
<ref id="B114">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Medvedev</surname>
<given-names>S. V.</given-names>
</name>
<name>
<surname>Alho</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Pakhomov</surname>
<given-names>S. V.</given-names>
</name>
<name>
<surname>Roudas</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Van Zuijen</surname>
<given-names>T. L.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2000</year>
).
<article-title>Lateralized automatic auditory processing of phonetic versus musical information: a PET study</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>10</volume>
,
<fpage>74</fpage>
<lpage>79</lpage>
.
<pub-id pub-id-type="doi">10.1002/(SICI)1097-0193(200006)10:2<74::AID-HBM30>3.0.CO;2-2</pub-id>
<pub-id pub-id-type="pmid">10864231</pub-id>
</mixed-citation>
</ref>
<ref id="B115">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Music and language perception: expectations, structural integration, and cognitive sequencing</article-title>
.
<source>Top. Cogn. Sci.</source>
<volume>4</volume>
,
<fpage>568</fpage>
<lpage>584</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1756-8765.2012.01209.x</pub-id>
<pub-id pub-id-type="pmid">22760955</pub-id>
</mixed-citation>
</ref>
<ref id="B116">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Bigand</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Gosselin</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Harmonic priming in an amusic patient: the power of implicit tasks</article-title>
.
<source>Cogn. Neuropsychol.</source>
<volume>24</volume>
,
<fpage>603</fpage>
<lpage>622</lpage>
.
<pub-id pub-id-type="doi">10.1080/02643290701609527</pub-id>
<pub-id pub-id-type="pmid">18416511</pub-id>
</mixed-citation>
</ref>
<ref id="B117">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Turkeltaub</surname>
<given-names>P. E.</given-names>
</name>
<name>
<surname>Coslett</surname>
<given-names>H. B.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Localization of sublexical speech perception components</article-title>
.
<source>Brain Lang.</source>
<volume>114</volume>
,
<fpage>1</fpage>
<lpage>15</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.bandl.2010.03.008</pub-id>
<pub-id pub-id-type="pmid">20413149</pub-id>
</mixed-citation>
</ref>
<ref id="B118">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Turkeltaub</surname>
<given-names>P. E.</given-names>
</name>
<name>
<surname>Eickhoff</surname>
<given-names>S. B.</given-names>
</name>
<name>
<surname>Laird</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Wiener</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Minimizing within-experiment and within-group effects in activation likelihood estimation meta-analyses</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>33</volume>
,
<fpage>1</fpage>
<lpage>13</lpage>
.
<pub-id pub-id-type="doi">10.1002/hbm.21186</pub-id>
<pub-id pub-id-type="pmid">21305667</pub-id>
</mixed-citation>
</ref>
<ref id="B119">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tzortzis</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Goldblum</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Dang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Forette</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Boller</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Absence of amusia and preserved naming of musical instruments in an aphasic composer</article-title>
.
<source>Cortex</source>
<volume>36</volume>
,
<fpage>227</fpage>
<lpage>242</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0010-9452(08)70526-4</pub-id>
<pub-id pub-id-type="pmid">10815708</pub-id>
</mixed-citation>
</ref>
<ref id="B120">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vaden</surname>
<given-names>K. I.</given-names>
<suffix>Jr.</suffix>
</name>
<name>
<surname>Kuchinsky</surname>
<given-names>S. E.</given-names>
</name>
<name>
<surname>Cute</surname>
<given-names>S. L.</given-names>
</name>
<name>
<surname>Ahlstrom</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Dubno</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Eckert</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>The cingulo-opercular network provides word-recognition benefit</article-title>
.
<source>J. Neurosci.</source>
<volume>33</volume>
,
<fpage>18979</fpage>
<lpage>18986</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.1417-13.2013</pub-id>
<pub-id pub-id-type="pmid">24285902</pub-id>
</mixed-citation>
</ref>
<ref id="B121">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Von Kriegstein</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Eger</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Kleinschmidt</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Giraud</surname>
<given-names>A. L.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Modulation of neural responses to speech by directing attention to voices or verbal content</article-title>
.
<source>Cogn. Brain Res.</source>
<volume>17</volume>
,
<fpage>48</fpage>
<lpage>55</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0926-6410(03)00079-X</pub-id>
</mixed-citation>
</ref>
<ref id="B122">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>White</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>O'Leary</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Magnotta</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Arndt</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Flaum</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Andreasen</surname>
<given-names>N. C.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Anatomic and functional variability: the effects of filter size in group fMRI data analysis</article-title>
.
<source>Neuroimage</source>
<volume>13</volume>
,
<fpage>577</fpage>
<lpage>588</lpage>
.
<pub-id pub-id-type="doi">10.1006/nimg.2000.0716</pub-id>
<pub-id pub-id-type="pmid">11305887</pub-id>
</mixed-citation>
</ref>
<ref id="B123">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wilson</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>DeMarco</surname>
<given-names>A. T.</given-names>
</name>
<name>
<surname>Henry</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Gesierich</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Babiak</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Mandelli</surname>
<given-names>M. L.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2014</year>
).
<article-title>What role does the anterior temporal lobe play in sentence-level processing? Neural correlates of syntactic processing in semantic variant primary progressive aphasia</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>26</volume>
,
<fpage>970</fpage>
<lpage>985</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn_a_00550</pub-id>
<pub-id pub-id-type="pmid">24345172</pub-id>
</mixed-citation>
</ref>
<ref id="B124">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wong</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Gallate</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The function of the anterior temporal lobe: a review of the empirical evidence</article-title>
.
<source>Brain Res.</source>
<volume>1449</volume>
,
<fpage>94</fpage>
<lpage>116</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.brainres.2012.02.017</pub-id>
<pub-id pub-id-type="pmid">22421014</pub-id>
</mixed-citation>
</ref>
<ref id="B125">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kemeny</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Frattali</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Braun</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Language in context: emergent features of word, sentence, and narrative comprehension</article-title>
.
<source>Neuroimage</source>
<volume>25</volume>
,
<fpage>1002</fpage>
<lpage>1015</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.12.013</pub-id>
<pub-id pub-id-type="pmid">15809000</pub-id>
</mixed-citation>
</ref>
<ref id="B126">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yamadori</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Osumi</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Masuhara</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Okubo</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1977</year>
).
<article-title>Preservation of singing in Broca's aphasia</article-title>
.
<source>J. Neurol. Neurosurg. Psychiatry</source>
<volume>40</volume>
,
<fpage>221</fpage>
<lpage>224</lpage>
.
<pub-id pub-id-type="doi">10.1136/jnnp.40.3.221</pub-id>
<pub-id pub-id-type="pmid">886348</pub-id>
</mixed-citation>
</ref>
<ref id="B127">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Belin</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Penhune</surname>
<given-names>V. B.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Structure and function of auditory cortex: music and speech</article-title>
.
<source>Trends Cogn. Sci.</source>
<volume>6</volume>
,
<fpage>37</fpage>
<lpage>46</lpage>
.
<pub-id pub-id-type="doi">10.1016/S1364-6613(00)01816-7</pub-id>
<pub-id pub-id-type="pmid">11849614</pub-id>
</mixed-citation>
</ref>
<ref id="B128">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Gandour</surname>
<given-names>J. T.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Neural specializations for speech and pitch: moving beyond the dichotomies</article-title>
.
<source>Philos. Trans. R. Soc. Lond. B Biol. Sci.</source>
<volume>363</volume>
,
<fpage>1087</fpage>
<lpage>1104</lpage>
.
<pub-id pub-id-type="doi">10.1098/rstb.2007.2161</pub-id>
<pub-id pub-id-type="pmid">17890188</pub-id>
</mixed-citation>
</ref>
<ref id="B129">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zheng</surname>
<given-names>Z. Z.</given-names>
</name>
<name>
<surname>Munhall</surname>
<given-names>K. G.</given-names>
</name>
<name>
<surname>Johnsrude</surname>
<given-names>I. S.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Functional overlap between regions involved in speech perception and in monitoring one's own voice during speech production</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>22</volume>
,
<fpage>1770</fpage>
<lpage>1781</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn.2009.21324</pub-id>
<pub-id pub-id-type="pmid">19642886</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>
