language of music
June 28, 2007 10:14 AM

Essential tones of music rooted in human speech. Original Duke University paper by Deborah Ross, Jonathan Choi and Dale Purves [pdf].
posted by nickyskye (49 comments total) 21 users marked this as a favorite
 
No music, except modern experimental pieces, uses all 12 tones.
JS Bach, for example, that notorious enfant terrible of modern, experimental music.
posted by Wolfdog at 10:20 AM on June 28, 2007 [2 favorites]


Fascinating, but I wonder if this is a chicken/egg question:

"This predominance of musical intervals hidden in speech suggests that the chromatic scale notes in music sound right to our ears because they match the formant ratios we are exposed to all the time in speech, even though we are quite unaware of this exposure."

As devil's advocate: perhaps the musical intervals came first, and then we adopted them into our fundamental speech patterns. I'm in the philosophical camp that holds that the foundations of our tonal structures in music are derived from nature rather than invented, namely from overtones, or the harmonic series (a major chord, for example, occurs within the first six partials of any given fundamental). We've been making music with a diatonic pitch system for at least 40,000 years (the oldest known flutes appear to have used the aeolian mode), which by some theories is at least as old as speech itself (others put spoken language as far back as 2 million years).

So I wonder if musical patterns influenced the fundamental development of speech, rather than the other way around?
posted by LooseFilter at 10:31 AM on June 28, 2007 [1 favorite]
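LooseFilter's point about the major chord living inside the first six partials is easy to check numerically. A minimal sketch (illustration only, not from the paper or the thread): octave-reduce partials 1 through 6 of any fundamental and the just ratios of a major triad (4:5:6) are all that remain.

```python
from fractions import Fraction

def octave_reduce(ratio):
    """Fold a frequency ratio into the octave [1, 2)."""
    while ratio >= 2:
        ratio /= 2
    return ratio

# Partials 1f..6f of a fundamental f, expressed as ratios to f.
partials = [Fraction(n, 1) for n in range(1, 7)]
pitch_classes = sorted(set(octave_reduce(p) for p in partials))

print([str(r) for r in pitch_classes])   # ['1', '5/4', '3/2'] -> root, just major third, just perfect fifth
```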


Also, Wolfdog, heh. I was happy that they got the just vs. equal-tempered intonation thing right, though.
posted by LooseFilter at 10:33 AM on June 28, 2007


The more I think about it, the more it's just crap. They've just rediscovered the harmonic series. Their claim that the 12-tone system "sounds right" to us because it's embedded in our vocal structure is nonsense. If you synthesize tones appropriately you can play music based on quite exotic, even arbitrary scale systems and listeners will report it as sounding consonant, in-tune, and pleasant.
posted by Wolfdog at 10:40 AM on June 28, 2007


LooseFilter has it right for me. I'd say that it's quite likely that the human voice is bound by the overtone series in the same way as are all sounds (I'm not a physicist but a composer, so I may be wrong about this, but I imagine that it's one rule for all), and that the 12-note set is a derivation from this.
posted by ob at 10:44 AM on June 28, 2007


bound by the overtone series in the same way as are all sounds
No, no, no. It's just certain common types of sound - vibrating strings or air columns - have the same overtone structure. Circular membranes, for example, don't have the same overtones at all. Nor do electronically synthesized tones, for that matter, which can be as simple as a pure fundamental or include any combination of frequencies whatsoever.
posted by Wolfdog at 10:48 AM on June 28, 2007
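Wolfdog's point is easy to see in numbers. A small sketch (it assumes SciPy is available; the mode ratios are textbook acoustics, not anything from the thread): an ideal string's partials sit at integer multiples of the fundamental, while an ideal circular membrane's mode frequencies scale with zeros of Bessel functions and are not in integer ratios at all.

```python
import numpy as np
from scipy.special import jn_zeros

# Ideal string: partials at integer multiples of the fundamental.
string_ratios = np.arange(1, 7)

# Ideal circular membrane: mode frequencies proportional to zeros of the
# Bessel functions J_m; take a handful of low modes relative to the fundamental.
zeros = np.sort(np.concatenate([jn_zeros(m, 3) for m in range(4)]))
membrane_ratios = zeros[:6] / zeros[0]

print("string:  ", string_ratios)                   # [1 2 3 4 5 6]
print("membrane:", np.round(membrane_ratios, 3))    # [1.    1.593 2.136 2.295 2.653 2.917]
```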


Sorry I meant all naturally occurring sounds, although it appears that I don't have that right either. Oh well! Anyway, the point is that I'm with the bullshit camp on this too.
posted by ob at 10:57 AM on June 28, 2007


Oh yeah and I was annoyed with their 12-note comment too. Also how does this explain other scales from non-western cultures that divide up the octave differently?
posted by ob at 10:59 AM on June 28, 2007


No, no, no. It's just certain common types of sound

What do electronically synthesized tones have to do with the formation of language and music in early human culture?

Yes, different overtones do exist, but the ratios of the harmonic series are found throughout nature, and they are the ones from which our basic musical language is derived. So these folks are telling us they are also the ratios underlying our basic speech. Go figure.
posted by LooseFilter at 11:03 AM on June 28, 2007


Well, I didn't say they did! But it's silly to say that all sounds, or even all "natural" sounds - unless that's something like "true" scotsmen - have the same overtone series.
posted by Wolfdog at 11:09 AM on June 28, 2007


Yes, definitely. I'm just saying that it's sort of orthogonal to go into all that.
posted by LooseFilter at 11:16 AM on June 28, 2007


How KIND of you to let me come.
posted by Ambrosia Voyeur at 11:25 AM on June 28, 2007


The group's next study concerns our intuitive understanding that a musical piece tends to sound happy if it's in a major key but relatively sad if it's in a minor key. That, too, may come from the characteristics of the human voice, Purves suggests.

This is the one that interests me most. I'm usually drawn to non-major music.
posted by symbioid at 11:27 AM on June 28, 2007


Thinking on this a little more, in sum: they've found correlation, which is interesting, but appear to be assuming causation.
posted by LooseFilter at 11:42 AM on June 28, 2007


It is a chicken/egg question of sorts. As it is my own academic field, I'll just say the evidence is as strong for a common signaling system (from which speech and music diverged) as the point of origin as it is for the evolutionary priority of either. And some of us think these are the wrong questions anyway.
posted by spitbull at 12:04 PM on June 28, 2007


But thanks for the post -- very interesting stuff.
posted by spitbull at 12:05 PM on June 28, 2007


The study sucks.
When the Duke researchers looked at the ratios of the first two formants in speech spectra, they found that the ratios formed musical relationships.
Umm. Most of speech has to do with how formants move, not static relations. Take a look at this spectrogram of "The damage reduces the value." Look at the "you" sound at the very end and you see a resonance that does a prominent sweep up and down. What is the "musical relation" in that?

Yes, static vowels select overtones, and music is based on overtones, but that gives no primacy to speech overtones for music, because speech overtones are just a special case of overtones in general.
posted by MonkeySaltedNuts at 12:07 PM on June 28, 2007
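For readers wondering what "the ratios of the first two formants formed musical relationships" means in practice, here is a rough sketch of that kind of analysis (this is not the authors' code, and the formant values below are illustrative placeholders rather than data from the paper): take F1 and F2 of a vowel, fold the ratio into one octave, and find the nearest chromatic interval.

```python
import math

JUST_INTERVALS = {          # interval name -> just-intonation ratio
    "unison": 1/1, "minor 2nd": 16/15, "major 2nd": 9/8, "minor 3rd": 6/5,
    "major 3rd": 5/4, "perfect 4th": 4/3, "tritone": 45/32, "perfect 5th": 3/2,
    "minor 6th": 8/5, "major 6th": 5/3, "minor 7th": 9/5, "major 7th": 15/8,
}

def nearest_interval(f1_hz, f2_hz):
    """Octave-reduce F2/F1 and return the closest chromatic interval."""
    ratio = f2_hz / f1_hz
    while ratio >= 2.0:      # fold into [1, 2)
        ratio /= 2.0
    name, just = min(JUST_INTERVALS.items(),
                     key=lambda kv: abs(1200 * math.log2(ratio / kv[1])))
    cents_off = 1200 * math.log2(ratio / just)
    return name, ratio, cents_off

# Hypothetical formant pair for a single vowel (placeholder numbers, not the paper's data):
print(nearest_interval(f1_hz=500.0, f2_hz=1500.0))   # ('perfect 5th', 1.5, 0.0)
```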


I always thought that the innate, consistent, emotional response to music was somehow related to our language facilities. It makes nose sense otherwise, why would people have an innate desire for music, when none exists in nature? In my view music is a refined, distilled form of sounds we would otherwise find pleasant, most likely other people's voices.

On the other hand, it is possible that music co-evolved with speech, and so humans did adapt to it for some reason. Either way I think research into how humans respond to music would be really interesting. I'd like to see studies done with fMRIs and whatnot to see how it's interpreted in the brain.
posted by delmoi at 12:13 PM on June 28, 2007 [1 favorite]


The group's next study concerns our intuitive understanding that a musical piece tends to sound happy if it's in a major key but relatively sad if it's in a minor key. That, too, may come from the characteristics of the human voice, Purves suggests.

Is so much drivel, first propagated by Hevner in '35. The ability to generalize mood based on modality does not apply to children under the age of 6 (Kastner & Crowder, '90). When modality and tempo were manipulated independently, children between the ages of 6 and 8 responded as well as adults to both modality and tempo, five-year-old children responded only to tempo, and children younger than five were unable to generalize mood at all.

While these findings suggest that the emotional association with a particular modality is acquired, Western adult subjects are largely able to perceive the intended emotion of classical North Indian ragas (Balkwill & Thompson, 1999), which do not use an equal-tempered scale, suggesting that there may be contributions from tempo and mode that are culturally invariant.

Not surprisingly, given these observations, consonance judgements between American and Japanese subjects were largely similar (Butler & Daston, 1968), and infants as young as 2 months old (Trainor &al., 2002) focused on music sources generating consonant melodies, made fewer movements, and showed fewer signs of distress than when presented with dissonant melodies (Zentner & Kagan, 1996, 1998). Of particular interest, however: when comparing consonance judgements between Canadian and Indian subjects, Indian subjects were more tolerant of dissonant intervals (Maher, 1976), which suggests that there may not necessarily be pre-tuned neural frequency-ratio detectors, or that these detectors have a degree of plasticity. It is possible that the use of just intonation in classical Indian music and the prevalence of shruti, intervals smaller than those normally found between notes, may account for the differences.

-Balkwill, LL. & Thompson, WF. A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues. Music Percept. 1999; 17:43-64
-Butler, JW. & Daston, PG. Musical consonance as musical preference: A cross-cultural study. J. Gen. Psych. 1968; 79:129-142
-Hevner, K. The affective character of the major and minor modes of music. Am. J. Psych. 1935; 47:103-18
-Kastner, MP. & Crowder, RG. Perception of the major/minor distinction: IV. Emotional connotations in young children. Music Percept. 1990; 8:189-201
-Maher, TF. "Need for resolution" ratings for harmonic musical intervals: A comparison between Indians and Canadians. J. Cross Cult. Psych. 1976; 7:259-76
-Trainor, LJ. Effect of frequency ratio on infants' and adults' discrimination of simultaneous intervals. J. Exp. Psych. 1997; 23:1427-38
-Zentner, MR. & Kagan, J. Perception of music by infants. Nature. 1996; 383:29
-Zentner, MR. & Kagan, J. Infants' perception of consonance and dissonance in music. Inf. Behav. Dev. 1998; 21:483-92

posted by porpoise at 12:27 PM on June 28, 2007 [5 favorites]


delmoi: it's strongly suspected that music and language are related in many ways in the brain. Many patterns [PDF] arise in brain imaging when doing studies (such as testing to see the mental result of a subject hearing harmonic and linguistic incongruities, as in the article) that seem to suggest they are treated similarly. In all honesty, I think a lot of musicians would probably consider this strongly in line with their own subjective experiences...I would.
posted by invitapriore at 12:37 PM on June 28, 2007


The actual abstract of the paper says:
Throughout history and across cultures, humans have created music using pitch intervals that divide octaves into the 12 tones of the chromatic scale. Why these specific intervals in music are preferred, however, is not known.
Wrong; the 12-tone scale is mostly a European invention. And why there are 12 tones follows easily from harmonic considerations. Start with the definitions and desires:
  • A "note" is equivalent to any octave relation of the note's frequency.
  • For any note you want the 3rd and 5th harmonic of that note's frequency also to be notes.
  • You want a finite set of notes.
The smallest solution to these constraints gives 12 notes, though the harmonic relations are slightly off for some of the notes (just intonation) or equally off for all of them (equal temperament).
posted by MonkeySaltedNuts at 12:37 PM on June 28, 2007 [1 favorite]
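A quick numerical check of those constraints (a sketch only, nothing from the paper): twelve pure 3:2 fifths come very close to seven octaves but overshoot by the Pythagorean comma, which is why any finite 12-note set has to temper the pure harmonic relations slightly.

```python
import math

twelve_fifths = (3 / 2) ** 12    # stack of twelve pure 3:2 fifths  (~129.746)
seven_octaves = 2 ** 7           # seven pure octaves               (128)

comma = twelve_fifths / seven_octaves
print(round(comma, 5), round(1200 * math.log2(comma), 2))   # 1.01364  23.46 (cents)
```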


Ah, the Patel paper (invitapriore's link). Some other neuroimaging studies also find that Broca's area is involved in the syntactic processing of both music and language, and that Wernicke's area is involved both in processing harmonics in music and in language comprehension.

Interestingly, there are differences in the spatial activation of Broca's area when processing internal speech with syntactic elements in a native language compared to a second language acquired during adulthood, with the anatomical separation co-varying with the age at which the second language was acquired (Kim &al., 1997), but the area is activated homogeneously regardless of language when retrieving words with no syntactic context (Halsband, 2006). So "part of the brain involved in language" is very likely quite a bit more complicated than at first blush.

However, just because similar areas of the brain are metabolically active doesn't necessarily imply that they're involved in exactly the same way. A lot of imaging studies have notoriously poor resolution (fMRI BOLD uses blood oxygen levels as a surrogate marker of metabolic activity, which is itself a surrogate for neuronal activity; the readouts are little blobs that light up. Lots and lots of neurons are covered by each blob's area, and even if those neurons are doing very different things it will still show up the same).

With that said, I still think that music and language probably share some of the same computational resources, but... it can be argued that humans (and perhaps ocean-dwelling cetaceans) are the only organisms that actually have "real music" (as opposed to music-like vocalization such as birdsong, which the birds don't actually use as music per se the way humans do). So I feel that the evolution of language and music may very well be distinct - as in, they developed under different evolutionary pressures. Since both are a function of audition, it makes sense that they share some of the same hardware, but I'm not convinced that they are directly a result of one another ("co-evolved").

Most primates lack song and music altogether and experiments on musicality in primates are notoriously negative since the test animals don't seem to understand the idea of music and fail to respond to it.

Oh, one last note about "happy" and "sad" - there are brain areas that activate in response to happy music that also activate in response to happy emotion, and the same for sad, but those happy and sad areas also light up in response to other cues (e.g., the happy area also lights up in response to punishment, &c&c), and I think that some of the areas that light up are responding more to saliency (how important something is) than to happy or sad.
posted by porpoise at 1:00 PM on June 28, 2007 [2 favorites]


i'm no studiosus of the matter, but, judging by the profound response my small kids had to a year's worth of weekly Gordon Music Learning hours, seems to me there's some definite connection there (at the very least in terms of how language and music are acquired).
posted by progosk at 2:03 PM on June 28, 2007


porpoise: thanks for that clarification -- your hypothesis makes a lot of sense. Unfortunately I come to this debate as a musician, not a neuroscientist, but it's fascinating nonetheless. What comes into question for me after reading your post is: what is the evolutionary pressure for music? Most other artforms have fairly easily traced primitive equivalents, but music doesn't seem to.
posted by invitapriore at 2:07 PM on June 28, 2007


On review, it's not that I'm asking you directly, because I don't think anyone knows, but it seems like the next point of concern.
posted by invitapriore at 2:09 PM on June 28, 2007


"The more I think about it, the more it's just crap. They've just rediscovered the harmonic series."

Agreed -- at least if the story accurately reflects their claims. Vocal cords are subject to the same physical laws as guitar strings. Correspondence with the harmonic series follows.

While the tones of western music are (approximately) coincident with the harmonic series, many of those of other cultures are not ... as Helmholtz recorded in detail in 1863 ("On the Sensations of Tone as a Physiological Basis for the Theory of Music").

An interesting question ... psychoacoustics is complex, though ...
posted by Twang at 2:13 PM on June 28, 2007


invitapriore - that's a very good question and you're right, there's a lot of speculation but no-one honestly claims to have a provable/testable answer to why and how musicality arose in humans.

Although music as a form of communication has less information density than a sentence of the same length, music has a profound ability to convey emotions (Peretz &al., '98, Gilboa &al., '06) and to alter the affective state (current emotional state) of other humans (Cantor & Zillmann '73). Music, historically, has been a central part of courtship. Additionally, music can be a potent activator of the locomotor areas of the brain (ie., some tunes you just have to dance/tap your feet/&c to).

This could be one pressure; an additional ability to convey and alter emotional states could conceivably be very useful for social animals with a predilection for violence.

Amusia (a functional inability to perceive music as such) is exceedingly uncommon in humans, suggesting that music perception is hardwired in us and thus a locus for evolutionary selection. Also, even though many people self-describe as being tone-deaf, cases of true tone-deafness are actually quite rare. I find it really interesting that people are very good at recognizing lullabies from other cultures as such, further lending evidence that it's innate (and that babies universally respond to that type of music), in addition to being able to perceive the intended emotional state (at least in broad categories such as happy, neutral, and sad) in unfamiliar cross-cultural music.

Hmm, come to think of it I might take back the "only humans have real music" now that I remember the study showing that mice vocalize during their mating ritual. It's still really odd that the vast majority of the closest evolutionary relatives of humans (the great apes) completely lack song. iirc gibbons might have something akin to music, but no-one has described music in chimps and gorillas nor gotten them to respond to anything resembling music.

Josh McDermott has written a very interesting paper "The Origins of Music: Innateness, Uniqueness, and Evolution" [pdf] that discusses the issue.

One of the biggest hurdles in examining the origins of musicality in humans is to determine whether any aspects of music are innate and thus potential targets of natural selection. Personally, I lean towards "yes," there are aspects of music that are innate and arise from unique structures in the brain that are distinct from those involved in language (albeit I expect many circuits to overlap). I don't have a ton of evidence to back up the separation of music and language, but some people who stutter can learn to sing what they want to say perfectly without stuttering, which if nothing else suggests that there are portions of the brain involved in music that are capable of overcoming some defect in the circuits primarily responsible for language/speech.

But as to what exactly the pressures are that gave rise to music in humans... ?
posted by porpoise at 3:34 PM on June 28, 2007 [1 favorite]


Interestingly, amusia - in the sense of being unable to distinguish between consonance and dissonance - is independent of the ability to distinguish between happy and sad implications in music, which seems to be cued by tempo and rhythm at least as much as by mode or harmony.
posted by Wolfdog at 3:44 PM on June 28, 2007


re: evolutionary pressure for music

Duh, I can't believe I didn't realize the obvious.

Musicians get laid more!

You know, that guy with the long hair and guitar, or rock stars, &c. Castrati, not so much.

Wolfdog - fascinating, I did not realize that, which I guess further dogpiles on Purves' ill-founded speculation wrt the major/minor happy/sad simplification.
posted by porpoise at 3:48 PM on June 28, 2007


Cortical deafness to Dissonance (Peretz, et al)
posted by Wolfdog at 3:53 PM on June 28, 2007


I always thought that the innate, consistent, emotional response to music was somehow related to our language facilities. It makes nose sense otherwise, why would people have an innate desire for music, when none exists in nature? In my view music is a refined, distilled form of sounds we would otherwise find pleasant, most likely other people's voices.

This is absolutely a key to the relationship between language and music, in my opinion, and can be used to argue that music preceded language, and that language is an epiphenomenon of the earliest form of music.

In babies, language begins as babbling, which gradually is shaped-- and shapes itself-- into speech. The widely observed obvious pleasure and emotional satisfaction babies get from babbling makes it a self-sustaining phenomenon; what makes it start may be another matter, but babies born totally deaf do engage in vocal babbling.

The Wikipedia article on babbling contains a very interesting passage:

Human babies engage in babble as a sort of vocal play that occurs in a few other primate species, all of which belong to the family Callitrichidae (marmosets & tamarins) and are cooperative breeders. "Interestingly, marmoset and tamarin babies also babble. It may be that the infants of cooperative breeders are specially equipped to communicate with caretakers. This is not to say that babbling is not an important part of learning to talk, only to question which came first—babbling so as to develop into a talker, or a predisposition to evolve into a talker because among cooperative breeders, babies that babble are better tended and more likely to survive." [2]{^ Mothers and Others, Sarah Blaffer Hrdy, Natural History Magazine, May 2001}

Terrence W. Deacon infers that human infants don't even need to be particularly excited or even upset to babble, because the fact is that human babies will babble spontaneously and incessantly only when emotionally calm. Deacon adds, "It is the first sign that human vocal motor output is at least partially under the control of the cortical motor system, because babbling is basically vocal mimicry that happens in correspondence to the maturation of the cortical motor output pathways in the human brain."


If we accept Hrdy's hypothesis that babbling babies are better tended, then that implies that the sound might well attract and please adult tenders ("if music be the food of love...play on") as well as the babies themselves, and we can give a specific original form to delmoi's "innate, consistent, emotional response to music." That innate response is to the sound of a contented, healthy baby.

From this point of view, then, music is the babbling of babies vastly extended, reshaped, refined, and amplified; but still trailing the clouds of glory (that is, direct access to our emotions) which its aboriginal importance to reproduction has conferred upon it.

Language, on the other hand, if this is true, developed secondarily, and probably much later than babbling.

But babies do make another common vocalization with tremendous emotional power, of course-- they cry. Crying begins earlier than babbling; and after babbling begins the two kinds of vocalization relate in a kind of counterpoint. It will be interesting to see if the Duke group makes any connection between crying and minor key music.
posted by jamjam at 3:56 PM on June 28, 2007 [5 favorites]


Duh, I can't believe I didn't realize the obvious.

Musicians get laid more!

Porpoise, musicians get laid more because women instinctively want babies who will be more compelling criers and more pleasing babblers.
posted by jamjam at 4:05 PM on June 28, 2007


Metafilter: It makes nose sense

sorry delmoi, but that was one of the best typos EVAR.
posted by flapjax at midnite at 4:58 PM on June 28, 2007


ah, *happy sigh* what a stunningly interesting and informative thread! Thank you all for your wonderful additional links and superb input. Particularly loved your points porpoise, Wolfdog, delmoi and invitapriore.

I always thought of the origins of instrumental music as having more to do with math: the divisions of rhythm, the harmonics, its use in rallying emotions in courtship dances or aggression in war dances. Male.

Singing I thought of as more an expression of narrative speech: love songs, a way to remember poetry or pass on history, or to put a child to sleep with lullabies. Female.

Both instrumental music and singing seem to be part of connecting/communicating with others and both seem to express emotions, which are also attributes of language.

Mosaic pieces of information make the whole, complex picture comprehensible and this essay explores some interesting aspects of the connection between music and language. Wish the authors of the paper were also sharing in this thread.
posted by nickyskye at 8:45 PM on June 28, 2007


ps, porpoise, was just thinking about your comment in regard to chimps and gorillas not being musical, when I came across this dancing gorilla video.

And a little MeFite mash note, you're a very likeable brainiac. :)
posted by nickyskye at 9:03 PM on June 28, 2007 [1 favorite]


I find the distinction between music and language interesting. When I look at the evolutionary path of hearing, it seems apparent that music had to have developed first. Language is just less tonally dynamic music with highly complex systems of interpretation.

It would seem more reasonable to suggest that music, which requires no training in order to attempt and has a much broader range of exploration and meaning, would be acquired before language. Just as fish, after realizing the extra functionality of their vestibular system, likely made random sounds before they made purposeful sounds and then made generalized sounds before they made refined and specific sounds. Piecemeal development went on through frogs and birds and on and on to us. Since it developed as such over evolutionary time, I would imagine similar scenarios for species and individuals. I also imagine that after language proved to be such a successful survival strategy, parental care focused those aspects of music on developing language in children (hence the singsong quality of parent-infant interaction). So there is this step-by-step advancement: hearing, noise making, "vocalizing," music, language. I can't imagine putting language before music.

Now, I'm just a casual observer so I'm being fast and loose with this. I'm obviously using a very broad definition of music, something like: any specific arrangement of tones and tonal relationships intended to evoke a strong (emotional) response in an "audience." That lets me get away with a lot, like including frogs and birds and parent-infant interaction in the music maker/consumer category. But I'm okay with that because when I think about the emotional impact that music can have on us, and how mysterious that seems, and then I read something like this:

Vocal communication in frogs

Darcy B Kelley
Department of Biological Sciences, MC2432, Columbia University, New York, New York 10027, USA

The robust nature of vocal communication in frogs has long attracted the attention of natural philosophers and their biologically inclined successors. Each frog species produces distinctive calls that facilitate pre-mating reproductive isolation and thus speciation. In many terrestrial species, a chorus of simultaneously calling males attracts females to breeding sites; reproductive females then choose and locate one male, using distinctive acoustic cues. Males compete with each other vocally and sometimes physically as well. Anuran acoustic signaling systems are thus subject to the strong pressures of sexual selection. We are beginning to understand the ways in which vocal signals are produced and decoded by the nervous system and the roles of neurally active hormones in both processes.


Then it all kinda makes sense. Sex and territory. That and our ears are still involved in balance. Music is just more primal.
posted by effwerd at 9:53 PM on June 28, 2007


The smallest solution to these constraints gives 12 notes

Actually the first solution is with 5 notes and the next with 7 (thus pentatonic scales with 5 notes and diatonic-type scales with 7 different notes).

As you mention, all of these schemes (including the 12-tone) require some kind of fudging of tones in order to preserve intonation.

But with that fudging (or just ignoring or even embracing the discrepancies), the 5, 7, and 12 tone solutions (as well as other solutions) have all been used to make all sorts of music.

That's how we got our B-flat (and other "accidentals")--"Hey, that B-F interval sounds terrible, I'm just going to adjust it a little by singing this nice soft (flat) B instead so it's more in tune".
posted by flug at 10:48 PM on June 28, 2007
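To put flug's 5/7/12 point in numbers, here is a small sketch (illustration only, not from the thread): after n pure 3:2 fifths, how far are you from landing exactly on a whole number of octaves? The stack nearly closes at 5, 7, and 12 fifths, but never exactly, which is where the fudging comes in.

```python
import math

FIFTH_CENTS = 1200 * math.log2(3 / 2)        # a pure 3:2 fifth, ~701.955 cents

for n in range(1, 13):
    pos = (n * FIFTH_CENTS) % 1200           # where the stack lands within one octave
    error = min(pos, 1200 - pos)             # distance to the nearest exact octave
    print(f"{n:2d} fifths: {error:6.1f} cents from closing")
# local minima near n = 5 (~90c), 7 (~114c), and 12 (~23c, the Pythagorean comma)
```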


nickyskye - lol, Dramatic and Entertaining Evidence for Direct Musical Stimulation of non-Human Primate Primary Motor Cortex!, and aw shucks.

This isn't the field that I work in, I only had references handy because I found this topic compelling enough to do a research assignment on the degree of innateness of musicality in humans for a systems neuroscience class last year.
posted by porpoise at 10:46 AM on June 29, 2007


porpoise, I'm glad to know you had those sources handy, because I was thinking that either a) you put waay too much time into discussion threads; or b) holy shit you keep a lot of information in your head.

Great posts, many thanks. One question for you: in the past year I've started reading about music and the brain; are there any books that come to mind you'd recommend for a musician keenly interested in this?
posted by LooseFilter at 11:01 AM on June 29, 2007


Well, porpoise, we lucked out in this thread, benefiting from your research and delicious neurological prowess.

The innateness of musicality in humans is really interesting, any way one looks at it. Today I listened to the singing in a corny 1930's movie, Naughty Marietta, and experienced such intense bliss that involuntary tears sprang into my eyes. Couldn't help wondering at the mystery of music/language/emotions and the brain.

May I join LooseFilter in the question line and ask you if you think there is any connection between the innateness of music and the human capacity for math? Do you know if they're in a similar part of the brain? Or overlap in their neurological function?

Loved your hilarious, scholarly version of Gorilla Dancing as "Dramatic and Entertaining Evidence for Direct Musical Stimulation of non-Human Primate Primary Motor Cortex".

Just for fun I put those words into Google and came up with a paper, suitably serious sounding: this one, "Prolegomenon for a hypothesis on music as expression of an evolutionary early homeostatic feedback-mechanism. A biomusicological proposal".
posted by nickyskye at 11:28 AM on June 29, 2007


LooseFilter - no, I'm actually really horrible about remembering who wrote what or which lab published which result (but I can remember the results, just not the citation), and I'm really jealous of the people around me who can. That McDermott paper I linked to earlier isn't a bad place to start, but unfortunately, I'm not personally familiar with any books that talk about music and the brain.

I've just skimmed this Sci American article and it seems interesting, though. There doesn't seem to exist a fantastically large collection of research into music and the brain and I suspect it's because funding for this type of research can be hard to get.

nickyskye - yeah, I agree that a connection between musicality and mathematics is intuitively attractive. For example, there are cases of savants and autistic people who are mathematical prodigies, musical virtuosos, or both. It also makes sense from the standpoint that structured music can be very geometrically complex, and parts of the brain involved in one may also play a role in the other. Then there's Einstein, who was said to be a very good violin player (and ignore the urban myth that Einstein was bad at math; he had poor grades in grade school but that was probably because he was bored out of his gourd). Anecdotally, though, my father and my sister are both very musically talented but their skills at mathematics are quite lamentable.

The short answer to your question is, "I don't know."

AFAIK, though, no-one has been able to demonstrate or describe an association of cognitive structures involved in music and mathematics. iirc the capacity for simple mathematics isn't limited to humans and some animals can be taught to count and add. For more complex and higher order mathematics, however...

If I was to speculate, I think that the common perception that musically gifted children tend to be very good mathematicians may be a result of their learning environment (parents who encourage their children to learn music typically also encourage academic achievement) and/or that the process of acquiring musical skills develops parts of the brain that serve more general tasks.

For example, trained musicians acquire new motor skills more effectively than non-musicians and this is attributed to better developed inter-hemispheric connections (ie., the corpus callosum) which connects the left and right hemispheres of the brain. There's also evidence that practicing music enhances white matter development.

However, I wouldn't be surprised if one day someone manages to show that the ability to process the syntax and contour of music invokes some of the same neural circuitry that is also active in understanding or visualizing higher-order mathematics.

Heh - thanks for the serendipitously discovered article, I've printed it out to peruse later. Ran across this article; Music and Autism may require a subscription. Looks interesting but I haven't had a chance to read it through, yet.
posted by porpoise at 1:45 PM on June 29, 2007


Um. This seems pretty obvious to me. The overtone series of musical frequencies is a direct result of physical law and mathematical ratios. So our vocal cords, like all musical instruments, conform to laws of physics. Surprise!

The first paragraph of that article is misleading, too. It states that our vocal cords conform to the 12-tone system of Western music, when they actually conform to the mathematical ratios of the overtone series that the 12-tone system merely approximates. They address this later on, but it's really poorly written. They basically lie and then correct themselves. The dude who wrote this seems to know little about science and even less about music.

OK, my pomposity quota for today is filled!
posted by speicus at 2:29 PM on June 29, 2007
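speicus's "merely approximates" is easy to quantify. A minimal sketch (nothing from the article, just the standard arithmetic): compare a few equal-tempered intervals with the nearby just (harmonic-series) ratios, in cents.

```python
import math

def cents(ratio):
    return 1200 * math.log2(ratio)

comparisons = {                                   # name -> (12-TET semitones, just ratio)
    "major third (4 semitones)":    (4, 5 / 4),
    "perfect fifth (7 semitones)":  (7, 3 / 2),
    "minor seventh (10 semitones)": (10, 7 / 4),  # the 7th harmonic; much further off
}

for name, (semitones, just_ratio) in comparisons.items():
    deviation = semitones * 100 - cents(just_ratio)
    print(f"{name}: 12-TET is {deviation:+.1f} cents from the just ratio")
# major third +13.7, perfect fifth -2.0, minor seventh +31.2
```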


On skimming the thread, it seems like people already made the points I just made. Oh wells!
posted by speicus at 2:32 PM on June 29, 2007


As a bit of a tangent, I thought I'd bring up the use of bits of recorded speech as a source for melodies in works such as Steve Reich's Different Trains.
posted by musicinmybrain at 3:56 PM on June 29, 2007


Good call on the Reich.
posted by LooseFilter at 4:55 PM on June 29, 2007


Thanks porpoise for your insights, scientific knowledge, what you do know and for your honest "I don't know."

Apparently there are quite a few books about music and math but none I could find discussing any neurological aspect of math-and-music.

LooseFilter, your question prompted me to research the brain and music. No surprise really, there is an Institute for Music and Brain Science connected with both Harvard and MIT. Bet if you emailed the director, mark_tramo@hms.harvard.edu, he would have some book suggestions for you. (In fact, I've just written and asked him.)

One book: This Is Your Brain on Music: The Science of a Human Obsession by Daniel J. Levitin, and the related website.

A couple of PubMed papers on the brain and metaphor. Can't help thinking that one aspect, among many, of music, is that it is a metaphor for emotions, perceptions or mental states. Neural correlates of metaphor processing.
posted by nickyskye at 4:57 PM on June 29, 2007


porpoise, thanks for the info and insight. The couple of layman's books in this area I've found quite good are This Is Your Brain on Music by Daniel Levitin and The Singing Neanderthals by Steven Mithen. Also, Francis Rauscher and Gordon Shaw have done some interesting work for the past couple of decades on music cognition (music educators cite their work all the time--mostly, sadly, to justify programs).
posted by LooseFilter at 5:08 PM on June 29, 2007


nickyskye--nice suggestion! I'll send him a note.

Great connections re: metaphor. I've been thinking along those lines for the past few years, especially after reading some Lakoff & Johnson. Subjectively, I notice tremendous correlation with past growth in my understanding of music and my ability to think metaphorically about anything.

Perhaps it's in the abstract nature of music--for me to understand (and especially perform) a large-scale musical work, I have to be able to hold it completely in my imagination. So I have to develop a lot of conceptual frames of reference, and I typically use some combination of aural, visual, and kinesthetic modes to do that. Hm.
posted by LooseFilter at 5:32 PM on June 29, 2007


Dear LooseFilter, enclosed is my correspondence with Mark Tramo.

A delightful and informative response from him! Looks like a potential goldmine on the topic.

"Dear Mark Tramo,

A number of us on MetaFilter are discussing the brain and music.

http://www.metafilter.com/62459/language-of-music

Your wisdom would be greatly appreciated. Would you be so kind to recommend a few books or any online papers on the topic of music and the brain?

Respectfully,
Nicky

**************

Thank you for your email, Nicky -

Please visit www.BrainMusic.org - we will be posting dozens of pdf files for my Harvard courses this summer and fall on the Education & Information, Auditory Neuroscience, and Health & Medicine links. Best, Mark

Mark Jude Tramo, MD, PhD
Director, The Institute for Music & Brain Science
Dept of Neurology, Harvard Medical School & Massachusetts General Hospital
Steering Committee, Harvard University Mind/Brain/Behavior Interfaculty Initiative
Advisory Board, National Center for Human Performance
Best Doctors in America®/ America’s Top Physicians®/Best of Boston®
Songwriter Member, ASCAP

Email MTramo@HMS.Harvard.edu
URL http://www.BrainMusic.org
posted by nickyskye at 6:00 PM on June 30, 2007 [1 favorite]




This thread has been archived and is closed to new comments