We All Hear Differently
December 1, 2015 11:34 AM   Subscribe

The analogy Kraus uses is that the world around us is like a great concert — and our brains are a mixing board. How that mixing board translates what we’re hearing can have a profound impact on what we understand about what’s going on around us... Here’s the good news: Kraus also firmly believes that our brains can be trained to hear more clearly. She’s found that musicians and people who are bilingual are able to process sound better than the rest of us.
WNYC's Only Human brings you Listen Up! - a project "to help us all become better listeners."

Listen Up's week of daily challenges:
Day 1: Face to Face
Day 2: Mirror, Mirror
Day 3: Take a Breather
Day 4: Memorize This!
Day 5: Yes and...!

Other Listen Up! segments:
When To Worry About Your Hearing
How Old Are Your Ears?
Take a listening quiz here.
posted by melissasaurus (16 comments total) 58 users marked this as a favorite
 
Yet another reason I regret a) not mastering a second language b) not mastering a musical instrument.
posted by [insert clever name here] at 11:53 AM on December 1, 2015


Very cool.

I have a personal theory that listening to audiobooks also really helps to improve general listening skills. It certainly has assisted me.
posted by bearwife at 11:55 AM on December 1, 2015 [1 favorite]


confused tone deaf bilingual here.

(incidentally, what improved my spanish wasn't listening, but going to a somethingologist who finally explained what people are doing with their damn tongues).
posted by andrewcooke at 11:57 AM on December 1, 2015 [2 favorites]


That's pretty cool. The bit about pattern recognition is also why people will hear satanic messages in backward masking.

One of the best classes I took in college was an audio and video production class, where one of the exercises was to pick a song and listen until you could pick out all the individual tracks (and then draw a track diagram of it). That kind of close listening has been really helpful over time, though one of the fears I have now is that too many loud concerts (even with earplugs) have reduced my ability to clearly locate sounds. I definitely notice that I have more trouble with e.g. conversations in bars now than I used to, and I don't think all of it can be put down to the massive increase in ear hair that I developed from ages 25 to 35.
posted by klangklangston at 12:15 PM on December 1, 2015 [2 favorites]


From "Day 3: Take a Breather"

Silence gives your ears and your mind a chance to recalibrate. So often we scramble to fill up that space with something, anything. But the best listeners know when others just need to be heard.

[...]

Truly empathetic listening, Feinberg says, is not just about saying the right thing, but knowing when the other person just needs to be heard. And one thing that helps give people an empathetic ear? A little solitude.


Miles Davis is widely reputed (and disputed) to have said, "It's not the notes you play, it's the notes you don't play." Actual provenance of that quote aside, it's the part of music that teaches you when NOT to play - when to lay out, when to back off, etc., that's so very valuable when it comes to listening to others in verbal conversations.

People I work with who are fond of talking over and across each other, or try to get in the last word even though they have nothing meaningful to say (or are just repeating points they've already covered), are just like the person who shows up at a jam session and proceeds to play over everybody, being too loud at the wrong moments, never laying back or laying out, and just generally cluttering things up.

There's a time to give 'er and a time to STFU.
posted by mandolin conspiracy at 12:29 PM on December 1, 2015 [3 favorites]


Thanks for this post. I didn't know about the Listen Up project, and it's pretty cool.

A few remarks from your neighborhood audiologist. The first is: Nina Kraus is a goddamned badass and I love her so much.

For me, the truly remarkable thing about Kraus's work is not that people hear things differently; I think that's always been intuitive for any amateur philosopher who questions whether their green is your green. What's remarkable is her experiments, which have quantified it (albeit at an extremely basic level), and her linking of this different processing to currently very controversial pathologies like Auditory Processing Disorders. APD is one of those things that we are pretty sure exists, and yet the diagnoses are terrible (at this point it's generally a diagnosis by omission, and the tests that are generally used for it are silly and unreliable), and, in part because the diagnoses are terrible, the treatments are all more or less useless. Kraus's work is the groundwork for a better approach to this family of pathologies that no doubt are really detrimental for a lot of (especially children's) lives.

That said, we shouldn't get too excited, I don't think, and Kraus's examples are much oversimplified. Our understanding of central auditory processes above about the brainstem is very weak. What we do know, and what Kraus does provide much evidence for, is that auditory processing is very dynamic and involves all sorts of parts of the brain, not just the auditory cortex (also why APD and autism so often go hand-in-hand). We're only beginning to understand the complex interactions between sensory and top-down processes that give rise to our various auditory percepts, and what abnormal percepts might consist of and be caused by. The problem with saying that a "dyslexic listener hears this" and an "autistic listener hears like this" is that it oversimplifies them in the way that people used to say that dyslexic people read words backwards or jumbled, when it's much more complicated than that. For example, just having neurons not fire in sync is not necessarily going to create some kind of echo effect; the neural processes of sound aren't that simple (we used to think certain neurons coded for time, others for frequency, etc., but this turns out just to be not true).

Even with things like EEG measurements, it can be very hard to say what's going on: EEG is an indirect way of measuring what's happening in the percept itself. For example, you can have a kid with really wonky brain waves to sound, and they can hear and understand just fine. Measuring something like pitch especially with EEG can be very complicated, as pitch is perceivable through a number of different cues in the auditory signal.

That said, it's great groundwork that Kraus has done. Truly she is one of the most exciting researchers in this field.

I'm pretty dubious of all online hearing tests and "hearing age" type things. They are meaningless. Online hearing tests tell you nothing because of calibration, transducer, and background-noise problems. I'm biased of course, but we put a lot of work into trying to measure hearing accurately. Also, the highest frequency you can hear is a pretty meaningless measure as well, unless we're talking below the 10kHz range. Percent hearing capacity is also meaningless.

You also have to keep in mind with these hearing tests, even the highly specialized and accurate ones I use in the clinic, that they measure the softest thing you can hear. That's a very specific bit of information. We know that most people have hearing loss and damage long before it shows up on their audiogram, i.e. affects the softest sound they can hear. Before you start losing hearing thresholds, your nerves are dying and you start having a harder time hearing in noisy places and the like. By the time the hearing loss has shown up on your audiogram, you've been missing stuff for a while. Lots of people do fine on these tests and think everything is okay, except that they would do poorly on a different test, like ABR or OAE. Hell, I'm a 30-year-old lifelong participant in jazz and rock and marching bands, and my audiogram thresholds hover around 10 dBHL, but if you take my OAEs, I've got a nice notch at 3kHz, which means in about 20 years, it's gonna get tough for me to hear in a bar, and in 30 or 40 years, I'll have a hard time hearing in quiet. A better test of whether your hearing is getting worse is: do you have a harder time having a conversation in a noisy restaurant than you did when you were younger?

As always, if you think you have hearing loss, see someone! There's so much interesting research coming out linking hearing loss to dementia and depression and all sorts of things. Hearing aids have lots of problems, but they can be life changing too.
posted by Lutoslawski at 12:46 PM on December 1, 2015 [24 favorites]


Here's a link to her Neuroeducation project.
posted by garbhoch at 12:59 PM on December 1, 2015 [1 favorite]


the highest frequency you can hear is a pretty meaningless measure as well, unless we're talking below the 10kHz range

Well, unless you're an audio engineer. I know I'm a little more deficient above 15kHz in my right ear than in my left, so I compensate, but it doesn't make up for actually being able to hear it.

As always, if you think you have hearing loss, see someone! There's so much interesting research coming out linking hearing loss to dementia and depression and all sorts of things. Hearing aids have lots of problems, but they can be life changing too.

Can you email my mother, please? I was just having this conversation (about dementia) with her last week (at an uncomfortably-high volume). My mother-in-law, who is much younger, just got a hearing aid and she's amazed at the difference in quality-of-life; I'm hoping she can put in a good word.
posted by uncleozzy at 1:00 PM on December 1, 2015 [1 favorite]


Just wanted to say: Lutoslawski's comment is excellent, and the last two paragraphs warrant more than one read.
posted by herrdoktor at 1:12 PM on December 1, 2015


fascinating post, thanks!

Yesterday, we were having a family dinner at a restaurant, and the kids start discussing what music is playing in the background. Ex was a DJ before and we share a profound love of music, and suddenly we both realize we cannot hear the singing or whatever other instruments might have been involved - only the underlying bass. Woah. That was old age coming in fast.

We could still participate in the conversation, but yes, Lutoslawski's comment strikes a nerve today.
posted by mumimor at 1:17 PM on December 1, 2015 [1 favorite]


Even though I'm almost completely deaf in one ear (6 surgeries over 40 years), I have certain things I hear really well. Machinery -- cars, industrial assembly machines -- each have a rhythm, a musicality to them. Once you learn the tune, it's easy to tell when something isn't right. Apparently, though, not everyone can hear things that way, even with training and experience.

Things sounding "wrong" can occasionally be very unsettling. I've had to remove myself from places because of uncomfortable sounds (not loudness, or anything like that, just "wrongness"). The sound clips in the primary link (Kraus) are all uncomfortable in a visceral way.

On a different note - probably in part due to my partial deafness, I've always paid close attention to people's mouths when they are talking. I wouldn't call it lip-reading, but I do hear better when I can see the speaker's mouth. So I'm apparently hyper-sensitive to mismatches between audio and video tracks in movies and TV. When it is bad, I can't stand to watch/listen at all.

There's a hearing aid company advertisement that drives me up the wall. The guy's lips don't sync correctly to the audio. Plus, his mouth is sometimes doing things that his words don't require. I swear it is a highly-engineered, intentional effort to subtly increase people's doubts about their hearing ability.
posted by yesster at 2:03 PM on December 1, 2015 [1 favorite]


I wouldn't call it lip-reading, but I do hear better when I can see the speaker's mouth.

This is something I used to work with people on in aural rehabilitation (we'd call it speech reading). People can find it very helpful in challenging environments. It also promoted the good habit of listener and speaker facing each other.

Interestingly, it was also a bit of a confidence builder for some people which helped them with other strategies: since they felt more confident in their communication abilities, they were more comfortable asking for clarification if they missed something rather than just nodding along and getting even more lost. Half the time they'd find out someone else also didn't hear the speaker clearly, but hadn't wanted to say anything.

And I'm with you on misaligned audio and video tracks. I'd rather just turn the subtitles on and mute, unless those are also really off.
posted by ghost phoneme at 2:44 PM on December 1, 2015 [2 favorites]


"A better test of whether your hearing is getting worse is: do you have a harder time having a conversation in a noisy restaurant than you did when you were younger?"

On some level, I'm just reluctant because I think it might be like the experience I had with going to the eye doctor recently. I've noticed a major shift in the amount of detail from a distance that I can pick out, more fuzziness up close, and more tiredness from low-light reading and screens and shit. And the diagnosis after a ton of tests was… you've basically got normal vision, this is what vision is for most people and there's no corrective action we can take. Which at least got me past my "Oh my god, this is my inevitable diabetes coming through!" but didn't make me feel better overall, and only really got me peace of mind for another couple years.

Since I've noticed more with my hearing, my current plan is just to be more vigilant with earplugs (on my keychain now) and to be more careful with exposure to high volume for even moderate amounts of time. Which sucks, because I love rock shows, but I don't really think that hearing aids would do much good.

"As always, if you think you have hearing loss, see someone! There's so much interesting research coming out linking hearing loss to dementia and depression and all sorts of things. Hearing aids have lots of problems, but they can be life changing too."

One of my friends, her new boyfriend has significant hearing loss — I think from a fever, but I forget. Anyway, he's got prescription hearing aids, and just had to do a weird repair-replace run-around. I was kind of surprised that, from talking to him, the signal processing on the hearing aids he has is incredibly primitive, and that they're really not customizable in the way I would have expected. I searched around and saw some Kickstarter for an open-source hearing aid, but it doesn't seem like the one that got started ever ended up in production. Is it just because of the difficulty in doing signal processing fast enough to avoid the bad dub effect?
posted by klangklangston at 8:26 PM on December 1, 2015


Off and on, maybe 2-4 times a year, I've had the experience of not understanding English. English is my native and only language.

It'll be one phrase, clearly articulated by someone with basically the same accent as me right in front of me, but I'll have no idea what the person said. 3-4 repetitions... nope. Change a word, and most of the time, I understand, but sometimes it persists. This doesn't happen often, but when it does, it's obviously a little freaky. Kind of like when you repeat words until they're meaningless, except it happens all at once.

I'm probably doomed. :-)
posted by smidgen at 10:22 AM on December 2, 2015 [1 favorite]


Is it just because of the difficulty in doing signal processing fast enough to avoid the bad dub effect?

That is a huge, huge part of it, yeah. Whatever processing you do, the group delay ideally has to be around 2-3 milliseconds, and absolutely no more than 6. That's a huge challenge. The more you filter, the more delay you incur.
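To make that filtering-vs-time trade-off concrete, here's a back-of-the-envelope sketch (the 24 kHz sample rate and the function itself are my illustration, not a spec from any actual device): a linear-phase FIR filter with N taps delays the signal by (N - 1)/2 samples, so the latency budget directly caps how long the filter can be.

```python
def max_fir_taps(fs_hz: float, budget_ms: float) -> int:
    """Longest linear-phase FIR filter whose group delay,
    (N - 1) / 2 samples, still fits inside the latency budget."""
    return int(2 * budget_ms * fs_hz / 1000) + 1

# At an illustrative 24 kHz sample rate:
print(max_fir_taps(24_000, 6.0))  # hard 6 ms ceiling -> 289 taps
print(max_fir_taps(24_000, 2.0))  # ideal 2 ms target -> 97 taps
```

Sharper filters need more taps, so every extra bit of frequency selectivity costs milliseconds — and past a few milliseconds the wearer hears the processed sound as an echo against the direct sound leaking around the earmold.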

In general, I think hearing aid processing is very sophisticated in a lot of ways. It turns out that trying to mimic the incredible non-linearity of the healthy ear in a computer chip is just incredibly difficult. There are so many cues in the auditory signal that help us perceive pitch, and we use different cues at different levels and frequencies, and then there's the whole problem of all the vastly different acoustic environments we find ourselves in every day - it makes it really difficult to come up with a big processing schema. And even when we find something that works really well in the lab - it always seems to be a lot less beneficial once someone is out in the real world.

Hearing aids do directional mics really well. The newer ones are third order, meaning they use four mics, and they're adaptive, so they move around to where the signal is, and that works pretty well. Amplitude compression works pretty well, and that's also usually adaptive these days, so the time constants change depending on the signal to noise ratio coming in.
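For intuition about how those directional mics work, here's the simplest possible case — a first-order differential pair (two mics), sketched in Python. The signals and the 2-sample delay are invented for the demo; real third-order adaptive arrays do the same subtraction trick across four mics and steer the null toward the loudest interferer.

```python
import numpy as np

def differential_mic(front, back, delay_samples):
    """First-order differential beamformer: delay the rear mic by the
    acoustic travel time across the array, then subtract. A wave from
    directly behind cancels; a frontal wave passes through."""
    d = delay_samples
    back_delayed = np.concatenate([np.zeros(d), back[:-d]])
    return front - back_delayed

rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
d = 2  # inter-mic travel time in samples (set by spacing and sample rate)

# A source behind the wearer reaches the back mic first,
# then the front mic d samples later.
front = np.concatenate([np.zeros(d), s[:-d]])
back = s
out = differential_mic(front, back, d)
print(float(np.max(np.abs(out))))  # 0.0: the rear source is nulled
```

The "adaptive" part is just varying that internal delay on the fly so the null lands wherever the noise currently is.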

All of the other algorithms work, but a lot less well. Digital noise reduction that actually amplifies the signal and cuts the noise is the holy grail. And the difficulty of doing that is not just because of the time constraints of the hearing aid. That's just an extremely difficult DSP problem. Frequency compression works...okay.
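As a taste of why single-channel noise reduction is so hard, here's the textbook spectral-subtraction approach — a sketch of the classic technique, not any manufacturer's algorithm, and every value in the demo is made up. You estimate the noise magnitude spectrum, subtract it from each frame's magnitude, keep the noisy phase, and resynthesize:

```python
import numpy as np

def spectral_subtraction(noisy, noise_est, frame=256):
    """Classic spectral subtraction: estimate the noise magnitude
    spectrum once, subtract it from each frame's magnitude (floored
    at zero), keep the noisy phase, and resynthesize."""
    noise_mag = np.abs(np.fft.rfft(noise_est[:frame]))
    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame + 1, frame):
        spec = np.fft.rfft(noisy[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out

# Demo: a 440 Hz tone buried in white noise (all values invented).
fs = 16_000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 440 * t)
noise = 0.5 * rng.standard_normal(fs)
cleaned = spectral_subtraction(tone + noise, 0.5 * rng.standard_normal(fs))
```

It does cut noise energy, but the bins floored at zero flicker from frame to frame — the infamous "musical noise" — and nothing in it knows which part of the spectrum is the talker you actually wanted louder. That last part is the real holy-grail problem.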

Now, how much any person can customize all of this stuff on their hearing aid is a different question. And this is where the manufacturers get really shitty. They do two things: they lock certain things even from the audiologist, and then they lock certain things depending on the level of the hearing aid you bought. So, for example, as the audiologist, certain manufacturers give me free rein over their hearing aids - I can change compression ratios and max output and move 20 frequency bands around and change knee points and all sorts of stuff. Others basically let me turn the stupid thing up or down. I'm sure they all have their reasons for doing this. My guess is the ones that don't let you do much think they've just made a great product and they don't want an audiologist who doesn't know what they're doing to screw it up and make the patient unhappy and blame it on the manufacturer.

Then there's the technology level business. So, a manufacturer puts the same chip in all their hearing aids, but they will essentially lock certain features depending on the level. So if you buy the entry level hearing aid, there may be 20 channels in that hearing aid, but they only let you access 4. You buy the next one up, you get 10, and so forth. Same with programs. You want two programs? Buy the entry level one. You want 10? Buy the top of the line one. Same hearing aid. Same chip. Same DAC. Same mics and receiver. It's bullshit to be honest. Drives me nuts.

So yeah, it's complicated. Certainly the fact that you are trying to fit all of this processing power into a teeny-tiny box that doesn't suck tons of power and that has a 2ms delay is a huge part of the challenge. On the other hand, the newer models have a 33 kHz sampling rate and four 18-bit DACs, and people often think they should then either be better than they are or at least cheaper.

I think in the future we're going to see a lot more control over the hearing aids given to the end-user, and I welcome that (we already see that a little bit). But, you also have to keep in mind that a lot of the folks that wear hearing aids are older and don't want to fuss with anything, and just want something easy to use that helps them communicate. It's a strange and difficult market.
posted by Lutoslawski at 4:51 PM on December 2, 2015 [7 favorites]


Digital noise reduction that actually amplifies the signal and cuts the noise is the holy grail

Just building on Lutoslawski's point, part of the difficulty is that people with normal hearing (as in normal sensitivity and normal processing) are more able to filter out background noise and direct their auditory attention. When someone has a hearing impairment, that ability is frequently compromised. So you don't necessarily want hearing aids to just pick up and amplify everything, you want them to focus a bit more (or a lot more, depending on the loss and the environment) on the target signal.

But what is the target signal? The environment can be filled with a lot of potential signals and listening goals can vary person to person and environment to environment. Two people standing next to each other at a cocktail party may want to listen to two different people. Some processing strategies that can help with speech understanding make for poorer music appreciation, so at a concert you want it to process for music rather than speech. In a mostly quiet room you may want to focus more on speech but also provide environmental noises so you hear the door open and close (and how the aid is processing that impulse noise has implications for comfort vs clarity).

Hearing aids that have adaptive directional mics and automatically change processing strategies can be wonderful, but there will still be times when it doesn't do exactly what you want/need. On the flip side, manual adjustment can be great, if you have the dexterity for it (or the right accessory), but you also don't want people to have to constantly fiddle with their hearing aids instead of focusing on their communication partners. Some hearing aids "learn" what you like, but that's a bit of a mixed bag in how accurately they learn (at least it was a few years ago; they may have improved).

And as noted in the title, everyone also hears differently, so processing strategies that work great for one person in one environment can be grating or useless for another. There are also times where the damage to the person's auditory pathway is significant enough that even if you can get a great signal with no background noise to the eardrum, they still will have poor speech understanding. (Depending on the severity, an implant might then enter the discussion.)

So hearing loss can be complicated just on its own, and then there are other factors that can come into play for some people (arthritis, cognitive function, poor vision, family/social factors, willingness/ability to clean and maintain their devices).

I know the focus of the series was on being a good listener (which is great!), but since they included some information on hearing, I sort of wish they would have included a section on being a good communicator from the speaking side (or done a companion series). Sometimes people can be doing their darnedest to listen (spending thousands of dollars on hearing aids and hours on auditory rehabilitation), and it will still be a struggle for them. Speakers could provide a lot of help by adjusting some bad habits (gain attention first, don't talk from a different room, face each other, etc.). It's not something that comes naturally to a lot of people.
posted by ghost phoneme at 2:09 PM on December 3, 2015 [4 favorites]




This thread has been archived and is closed to new comments