Your eyes are bad and your brain is a liar
July 4, 2018 12:22 PM   Subscribe

You want to know something about how bullshit insane our brains are? OK, so there's a physical problem with our eyes
Twitter user @foone explains how your brain lies about the passage of time in order to mask a shortcoming in human visual processing. [threadreader version]
posted by Sokka shot first (60 comments total) 98 users marked this as a favorite
 
metafilter: we're apparently computers programmed by batshit insane drunkards in Visual Basic 5
posted by sciatrix at 12:43 PM on July 4 [21 favorites]


And James Wu chimes in with a massive thread on the bullshit of the sensorimotor system (Thread reader version -- a nice, polite web page for Twitter haters).
posted by maudlin at 12:43 PM on July 4 [8 favorites]


I got so angry at my own brain reading that last night, and at how much of a (yes, yes) blind spot in my familiarity with optical perception hijinks this whole thing has revealed to me. Bunch of interesting reading to do. One little sliver to get started there: the wikipedia article on saccadic masking.
posted by cortex at 12:48 PM on July 4 [2 favorites]


I love all the various experiments that you can do to work out how human vision is implemented. Various optical illusions work basically by tricking some processing layer in the optic nerve pathway, so you see squares that aren't there, or static lines seem to be jiggling, or whatever.

You can play similar tricks on artificial vision systems too, like making every object be recognized as a toaster.
posted by Nelson at 12:50 PM on July 4 [3 favorites]
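Since the toaster trick comes up again below: the core of these attacks is surprisingly simple. Here's a minimal sketch of a fast-gradient-sign perturbation on a toy linear classifier; the weights and input are made up for illustration, and real attacks target deep networks, but the basic move is the same.

```python
# Minimal sketch of an adversarial perturbation (the idea behind the
# "everything is a toaster" patch), on a toy linear classifier. The
# weights and input below are invented for illustration; the core move
# is nudging each input value along the direction that raises the
# target class's score.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def toaster_score(x, w):
    """P(toaster) under a toy linear model with weights `w`."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def adversarial(x, w, eps=0.5):
    """Fast-gradient-sign step: move each pixel slightly in whichever
    direction increases the toaster score."""
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.8, -1.2, 0.5, -0.3]   # toy "toaster detector" weights
x = [-1.0, 1.0, -1.0, 1.0]   # an input the model is confident is NOT a toaster

assert toaster_score(x, w) < 0.1
assert toaster_score(adversarial(x, w), w) > toaster_score(x, w)
```

The same gradient-sign idea, applied iteratively to a deep network and constrained to a printable patch, is what makes a sticker that classifiers read as a toaster.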


BTW, since a few people have brought it up: There's a great sci-fi novel by Peter Watts called Blindsight. In it humans encounter an alien race they call Scramblers, who can move very fast and precisely, and they exploit saccades.

because if they only move during saccades, we never see them moving. and since so much of our vision is based on just filling in what we think is there, if they stay out of the direct center of our vision, we'll just visually fill them in, like they were never there.

Check it out if you're into hard SF stories of first contact. It's got some really neat ideas about human vision, genuinely unique aliens, the nature of consciousness, the future of humanity in the face of perfect VR, and vampires.
(Really, it has "vampires", while still being hard-SF)
Oh hell, yes. Just re-read this on vacation for like the 5th time and it easily remains my all-time favorite science fiction novel. Here's the relevant excerpt from the text (which you can read free online or buy from Amazon):
"You're blind," he said without turning. "Did you know that?"

"I didn't."

"You. Me. Everyone." He interlocked his fingers and clenched as if in prayer, hard enough to whiten the knuckles. Only then did I notice: no cigarette.

"Vision's mostly a lie anyway," he continued. "We don't really see anything except a few hi-res degrees where the eye focuses. Everything else is just peripheral blur, just— light and motion. Motion draws the focus. And your eyes jiggle all the time, did you know that, Keeton? Saccades, they're called. Blurs the image, the movement's way too fast for the brain to integrate so your eye just—shuts down between pauses. It only grabs these isolated freeze-frames, but your brain edits out the blanks and stitches an — an illusion of continuity into your head."

He turned to face me. "And you know what's really amazing? If something only moves during the gaps, your brain just—ignores it. It's invisible." [...]

"Are you listening, Keeton? Do you know what I'm saying?"

"You've figured out why I couldn't—you're saying these things can somehow tell when our eyes are offline, and..."

I didn't finish. It just didn't seem possible.

Cunningham shook his head. Something that sounded disturbingly like a giggle escaped his mouth. "I'm saying these things can see your nerves firing from across the room, and integrate that into a crypsis strategy, and then send motor commands to act on that strategy, and then send other commands to stop the motion before your eyes come back online. All in the time it would take a mammalian nerve impulse to make it halfway from your shoulder to your elbow. These things are fast, Keeton. Way faster than we could have guessed even from that high-speed whisper line they were using. They're bloody superconductors." [...]

"Supposing it's just— instinct," I suggested. "Flounders hide against their background pretty well, but they don't think about it."

"Where are they going to get that instinct from, Keeton? How is it going to evolve? Saccades are an accidental glitch in mammalian vision. Where would scramblers have encountered them before now?" Cunningham shook his head. "That thing, that thing Amanda's robot fried— it developed that strategy on its own, on the spot. It improvised."

The word intelligent barely encompassed that kind of improvisation.
I also did a post a while back on the book and the rest of Watts' oeuvre if you want more.
posted by Rhaomi at 12:55 PM on July 4 [45 favorites]


I totally want a baseball cap with an image of the toaster sticker on the front of it. I know it won't work, but it'll amuse me to no end to think that image recognition systems would be tricked into sensing a toaster walking around.
posted by el io at 12:56 PM on July 4 [1 favorite]


It's not publicly talked about, but I have a feeling the movie/TV/video industry does a lot of tricks and short-cuts with that knowledge. I know that animation has been using half the frame rate that film allows since the earliest toon-making.... instead of 24 drawings for the 24 FPS, they usually do 12 per second, with the same image showing up in two consecutive frames. If the animators doing the drawing can get by making fewer drawings, they do (of course, some ultra-low-budget toons, like the first ones made for TV, cut down too much, and it was noticeable even to the little-kid audience). I wonder how often CGI animation does the same thing to save rendering time. Of course, it was considered a big deal recently when Peter Jackson increased the FPS on the Hobbit movies from 24 to 48, and people reportedly noticed (most often if they went in expecting something).
posted by oneswellfoop at 1:09 PM on July 4
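The "animating on twos" scheme oneswellfoop describes is easy to make concrete: each drawing is simply held for two consecutive frames, so 24 fps playback needs only 12 drawings per second. A minimal sketch, with purely illustrative drawing labels:

```python
# Sketch of "animating on twos": each drawing is held for two
# consecutive frames, so 24 fps playback needs only 12 drawings per
# second of animation. Drawing labels are illustrative.

def frames_on_twos(drawings, hold=2):
    """Expand a list of drawings into a frame sequence, holding each
    drawing for `hold` consecutive frames."""
    return [d for d in drawings for _ in range(hold)]

drawings = [f"drawing_{i}" for i in range(12)]   # 12 drawings...
frames = frames_on_twos(drawings)                # ...fill one second at 24 fps

assert len(frames) == 24
assert frames[0] == frames[1] == "drawing_0"
```

"On threes" (hold=3, 8 drawings per second) is the further cut that the cheapest TV toons took, which is where it became noticeable.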


Isn't it disingenuous to claim that perception "lies"? Compared to what? Thinking?
posted by thelonius at 1:18 PM on July 4 [3 favorites]


I pay attention to saccade blur often enough that I can see the blur voluntarily. Takes some practice, and if you want to train your brain accordingly, start with a dark room and some fairy lights. You’ll see the trails if you look for them.

Brains are clever but also flexible.
posted by seanmpuckett at 1:20 PM on July 4 [6 favorites]


The ret-conning of time, so to speak, I find absolutely fascinating. I'd never before heard this as the explanation for why the first tick of a second hand seems so much longer, and I love how commonplace that experience is, while still demonstrating this eerie untrustworthiness of our own perception.

I seem to recall a similar experiment done with touches on a person's arm, where the experimenters somehow got the subjects to perceive being touched well before they actually were, by exploiting the brain's time-compensation system. I've done a cursory google search that came up blank; might make a good post on its own.
posted by dbx at 1:23 PM on July 4 [2 favorites]


VR can now use saccades to trick you into walking in circles while you think you are walking straight. This lets them simulate a big space in a smaller room. SIGGRAPH link.
posted by fings at 1:23 PM on July 4 [20 favorites]
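For anyone curious how the saccade-based redirection works mechanically: the system watches eye velocity and injects a little extra camera rotation only on frames where a saccade is in flight, when you're effectively blind to it. A toy sketch; the threshold and gain are my own illustrative values, not numbers from the linked SIGGRAPH work:

```python
# Toy sketch of saccade-based redirected walking: extra camera yaw is
# injected only while a saccade is detected, i.e. while the user is
# effectively blind to the rotation. The threshold and per-frame gain
# are illustrative values, not taken from the SIGGRAPH paper.

SACCADE_VELOCITY_THRESHOLD = 180.0  # deg/s of eye movement; illustrative
ROTATION_GAIN = 0.5                 # hidden world yaw per saccade frame, degrees

def redirected_yaw(eye_velocities):
    """Accumulate hidden world rotation across frames, rotating only on
    frames where eye velocity indicates a saccade in flight."""
    total = 0.0
    for v in eye_velocities:
        if abs(v) > SACCADE_VELOCITY_THRESHOLD:
            total += ROTATION_GAIN
    return total

# 600 frames (10 s at 60 fps); a saccade in flight on 60 of them
velocities = [300.0 if i % 10 == 0 else 20.0 for i in range(600)]
assert redirected_yaw(velocities) == 30.0  # 30 degrees of unnoticed turning
```

Accumulate enough of those invisible degrees and a straight walk in the virtual world becomes a circle in the physical room.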


I also did a post a while back on the book and the rest of Watts' oeuvre if you want more.

I love me some Peter Watts, and in particular dig the way he exploits the best available science in his imaginings, I just wish he weren't quite so utterly misanthropically angry. I'm no superfan of the species at the very best of times, and he makes me look like a flower child — even his footnotes are highly pressurized outventings of rage. I kinda just want to give him a hug.
posted by adamgreenfield at 1:23 PM on July 4 [7 favorites]


I plan on punishing my stupid lying brain this evening with Bourbon.
posted by Keith Talent at 1:26 PM on July 4 [14 favorites]


It is being _way_ too generous to programmers to think they wouldn't use this trick if they could figure out how to do it.
posted by All Out of Lulz at 1:41 PM on July 4 [3 favorites]


you can see this effect happen if you watch an analog clock with a second hand.
Look away (with just your eyes, not your head), then look back to the second hand.
It'll seem like it takes longer than a second to move, then resumes moving as normal.


I’m off to find my old Seiko wristwatch and hope the battery is still good!
posted by TedW at 1:59 PM on July 4
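The stopped-second-hand effect (chronostasis) falls out of simple arithmetic if, as the thread describes, the brain backdates the first post-saccade image to the moment the saccade began. A back-of-envelope sketch using assumed typical durations:

```python
# Back-of-envelope sketch of the "stopped clock" illusion (chronostasis):
# the brain backdates the first post-saccade image to saccade onset, so
# the first tick appears to last a full second PLUS the saccade's
# in-flight time. All durations here are assumed typical values.

def perceived_first_tick_ms(tick_ms=1000, saccade_ms=60, landing_offset_ms=0):
    """Perceived duration of the first tick after a saccade, assuming
    the post-saccade scene is backdated to saccade onset. If the eye
    lands `landing_offset_ms` into a tick, perception absorbs the
    saccade plus the remainder of that tick."""
    return saccade_ms + (tick_ms - landing_offset_ms)

assert perceived_first_tick_ms() == 1060  # noticeably longer than 1000 ms
```

A ~6% stretch doesn't sound like much, but the visual system is exquisitely sensitive to rhythm, which is why the first tick reads as "stopped."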


Having been through that thread yesterday when a few different people retweeted it, it's mostly correct with a few pieces that could use some clarification.

So, the first piece (on saccadic suppression) is pretty good. You can push it to 500 ms under laboratory conditions, but since most saccades are 10-15º of visual angle, with inflight times of 50-70 ms, you don't suppress that much of the visual world. The whole saccade planning and execution process is really pretty fast; there is a window immediately prior to saccade takeoff (when the orb of the eye begins to move) where you're nearly insensitive to input (call it -50 ms to 0 ms, relative to when the eye moves; think of it as "the visual system has locked in the target, and can't cancel the command to move the eyes").

Remapping actually starts a bit before this, potentially starting around -125 ms pre-saccade, and ending by -50 ms or so (although nailing down these timelines behaviorally is hard), and seems to be the visual system saying "I'm gonna go here, so let's see what's there now so I can compare it after things settle down after the saccade." The question of what information the visual system gets right before the eye movement is, quite literally, a chapter in my dissertation, and I'd argue you're getting the identity of the whole object (like, say, a face) that you're saccading to.

The blind spot stuff is good: yep, you're basically insensitive to your blind spot because your visual system fills it in perceptually. A former labmate has done some recent work on what you can or can't fill in here.

However, the bit on peripheral color vision is a really common (and incorrect) understanding of peripheral vision. So, if you look at a photoreceptor density plot, you'll see that cone density in the fovea (labeled as 0º on the plot, because it's pretty much the center of the world on your retina) looks like it falls off a cliff. It does, but there's a big difference between "falls from ~180k cones per mm^2 of retina to nothing" and "falls from ~180k to ~2-4k cones per mm^2 of retina", which is what the plot shows. Lots of people want to interpret this as "you have no color vision beyond the central ~5º of the visual field", and that's just not the case. You've got plenty of photoreceptor input for color (and detail!), it's just not as glorious as your fovea. So the whole thing of "color in your periphery is a story your brain tells you"? Nope. You've got plenty of data.

The inversion experiment is really classic work by George Stratton, and is, if anything, even weirder and cooler than it sounds in the thread. Stratton's prisms had really limited fields of view, so not only was he adapting for days (which, while you're adapting, is a fast trip to nausea-town), he was moving through the world with a very limited visual field. I'm pretty sure UC Berkeley's Psychology department has his experimental equipment on display (I think I remember it in a glass case from my grad student days). I've run mini-versions of this experiment for students when teaching Sensation and Perception (it's easy to get the goggles).

And yes, human eyes are absolutely wired "backwards" - we've got the photoreceptors behind everything else, which totally isn't how you'd engineer a sensor. For that matter, you wouldn't (generally) engineer a sensor to look like the photoreceptor density function I linked to earlier, you'd go for an equal density across the entire sensor. Trying to understand vision as an engineering problem pretty much doesn't work, because nature isn't an engineer, and the eye (and visual system) really, really aren't a camera.

One interesting piece that didn't come up in the thread is how you go from the retina to the brain, and how those connections vary as a function of where on the retina we're talking about. This is, broadly, the idea of cortical magnification and the problem of peripheral vision. So, the fovea (call it the central 1.7-2º of the retina) gets something like ~50% of visual cortex, but we've got the remaining 99%+ of the retina to deal with. The brain needs to represent all of that input, but there's not enough cortex to do it the way we do it with the fovea, so there's an ongoing question of how we represent the periphery, and what that information is good for. Does the brain throw a lot of it out? Does it code it in some weird, compressed fashion? What can we do with this information? ...and this is basically what I study (although I'm mostly looking at it in a pretty applied case, when it comes to driving). For what amounts to the current theory and model on this, I'd point anyone interested at this 2016 paper, which is a good overview of the misconceptions and what the current state of the field is.
posted by Making You Bored For Science at 2:36 PM on July 4 [135 favorites]


This is great overall, but a pet peeve of mine is the “we see everything upside down but our brains get used to it” trope. If you think about this for even a second, you realize it makes zero sense unless you're assuming there's a little “upright” homunculus in there looking at an “upside down” movie screen. But there isn't, so our brains don't “see upside down” any more than a stereo system would “hear backwards” if you had its right-stereo input on the physically left side of the case.
posted by dmd at 2:43 PM on July 4 [20 favorites]


I wonder if this has anything to do with visual strobing and time distortions induced by certain substances. Like, instead of pretending the blur was crystal clear, your brain pretends nothing has happened and no time has passed.
posted by Sys Rq at 2:48 PM on July 4 [4 favorites]


This is basically just one step away from "There are things our brains are hardwired not to perceive", and there you have your horror or urban fantasy story.
posted by happyroach at 2:48 PM on July 4 [2 favorites]


The vision party continues! MYBFS covered basically all the corrections I came here to say.

Also, while human eyes are wired "backwards," cephalopod eyes are wired "properly" -- they have photoreceptors in front of all the other nerves and blood vessels and stuff. They also have only one kind of photoreceptor instead of our three, so their photoreceptors cannot perceive color...which is weird, right, because they can change their body color to match their environment, so how do they know what color to change to? It's been proposed that squids' weirdly shaped pupils produce optical effects that depend on the wavelength of light, and in this way they could encode color.
posted by nicodine at 2:52 PM on July 4 [8 favorites]


Does the brain throw a lot of it out?

Am I right in considering the freaky motion after-effect you get after having stared into a waterfall, as the brain's compression-coding (to deal with excess motion-saccading input) taking some time to wear off? (Pretty sure that's not a great explanation - just try it sometime, you catch your eye/brain warping space-time...)
posted by progosk at 2:54 PM on July 4


Sys Rq it's really hard to say, since doing rigorous research with pretty much all of the substances in question became next to impossible after the Controlled Substances Act in 1970. I don't think there's much of a body of work there from before it became impossible, but I'd love to know if there is.

In terms of "things we're hardwired not to perceive", as mentioned by happyroach, oh yeah, absolutely. The way I think about it, we've got a sensitivity envelope when it comes to visual features (e.g., wavelengths of light), but also with things like motion (things can move too fast for us to see), and spatial frequency (there are details that we just can't see). So, yeah, an enterprising SF/F writer could have a lot of fun there with a creature who sat outside our envelope.

What progosk is describing is the Motion Aftereffect, and that's a function of adaptation at the neuronal level, mostly. If you want illusions governed by eye movements (admittedly, really small eye movements - microsaccades - which you make when you're fixating on a single point) you want Akiyoshi Kitaoka's illusions (warning: rotating snake illusion), which seem to be a microsaccade effect.
posted by Making You Bored For Science at 2:58 PM on July 4 [4 favorites]


Ok but hang on a sec... in the Emerging Technologies Showcase at SIGGRAPH 2007, Japanese researchers demonstrated a saccade display. It was a vertical line of LEDs, but when you focused on a point to the left of it, and then shifted your gaze to a point to the right of it, you would perceive an image. My memory is that one demo was the Mona Lisa.

It was stunning. If you didn't know it was there, and you just glanced past, you'd find an image of the Mona Lisa in your mind.

It was cycling through the columns of the image at a rate determined by the relatively-constant angular velocity of the eyeball during saccade. Clearly this demonstrates that the eye is not blind during this period.
posted by rlk at 3:01 PM on July 4 [7 favorites]
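The mechanism rlk describes can be sketched in a few lines: because the eye sweeps at a roughly constant angular velocity mid-saccade, flashing image columns in rapid sequence from a single LED strip paints each one onto a different retinal position, tiling a 2-D image. All numbers below are illustrative, not from the actual demo:

```python
# Sketch of the saccade-display trick: a single vertical LED column
# flashes successive image columns in rapid sequence. Because the eye
# sweeps at a roughly constant angular velocity during a saccade, each
# flash lands on a different retinal position, and the columns tile
# into a 2-D image. All numbers are illustrative, not from the demo.

def retinal_positions(num_columns, saccade_velocity_deg_s=400.0, column_interval_ms=0.1):
    """Retinal angle (degrees from saccade start) at which each flashed
    column lands, assuming constant eye velocity during the sweep."""
    return [saccade_velocity_deg_s * (i * column_interval_ms / 1000.0)
            for i in range(num_columns)]

positions = retinal_positions(100)
# 100 columns, 0.1 ms apart, at 400 deg/s -> the image spans ~4 degrees
assert positions[0] == 0.0
assert abs(positions[-1] - 3.96) < 1e-9
```

This is also why the demo should be brittle, as noted below: a saccade of a different size or velocity would stretch or squash the image.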


That is neat. Saccadic suppression isn't total, but it's pretty significant, because you don't notice your own eye movements. There's been a bunch of work over the years (mostly in the last 20, as eyetrackers have gotten accurate and fast enough to enable it) showing that it isn't total, but I've never seen that much information getting through during flight. I'd bet that demo is pretty brittle; make a smaller or a larger saccade, and it probably stops working as well, although the basically-stable velocity of the orb makes it easier to do than it would be otherwise.
posted by Making You Bored For Science at 3:08 PM on July 4


I suspect it's because the LEDs are very bright, and so what you're seeing is effectively a positive after-image that has a duration longer than the saccade.
posted by Pyry at 3:15 PM on July 4 [2 favorites]


I always think about things like the ball size effect when a sci-fi movie or show plays back a visual representation of someone’s memory on a screen. Even if you take as given the possibility of extracting a discrete memory that way, I kind of wonder if the result would still be unintelligible considering all of the ways that our visual input is not anything like a video stream.
posted by invitapriore at 3:25 PM on July 4 [1 favorite]


However, the bit on peripheral color vision is a really common (and incorrect) understanding of peripheral vision. So, if you look at a photoreceptor density plot, you'll see that cone density in the fovea (labeled as 0º on the plot, because it's pretty much the center of the world on your retina) looks like it falls off a cliff. It does, but there's a big difference between "falls from ~180k cones per mm^2 of retina to nothing" and "falls from ~180k to ~2-4k cones per mm^2 of retina", which is what the plot shows. Lots of people want to interpret this as "you have no color vision beyond the central ~5º of the visual field", and that's just not the case. You've got plenty of photoreceptor input for color (and detail!), it's just not as glorious as your fovea. So the whole thing of "color in your periphery is a story your brain tells you"? Nope. You've got plenty of data.

Just wanted to add to this a little bit. The plot that people typically show (high cone density in fovea with rapid drop-off with eccentricity) is a horizontal slice of the retina. But, the retina is not just a line, it's got area! So, a better representation would be more like a melted Hershey's Kiss: high cone density in the very centre (= fovea), low cone density in the periphery, with circular drop-off.

What this means, though, is that the horizontal slice vastly misrepresents the size of the periphery: it's much, much larger than implied by the horizontal slice. Thus, even though the cone density of the periphery is low, the cone count is actually quite high, because density is integrated over a large area. From Tyler (2015):
Despite the high concentration of cones in the fovea, even the central 5° of the retina contains only about 50,000 cones (1% of the total), while the remainder of the total population of about 5 million cones is distributed throughout the peripheral retina with an average density of about 5,000 cones/mm2 (beyond about 10° eccentricity).

(emphasis mine)
Also, hello, fellow vision people! I'm more of a VSS person than an SfN person, but I'd love to have a meet-up one day!
posted by tickingclock at 3:43 PM on July 4 [12 favorites]
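tickingclock's point checks out numerically if you integrate density over annular rings of retina. The density profile below is a toy stand-in with roughly the right shape (not a fit to the real anatomical data), but it's enough to show the periphery dominating the total cone count:

```python
# Crude numerical check of the point above: even at low density, the
# periphery's area is so large that it holds most of the cones. The
# density profile is a toy stand-in (~180k/mm^2 at the fovea decaying
# to a ~4k/mm^2 peripheral floor), not a fit to real data; only the
# shape matters here.
import math

def cone_density(ecc_mm):
    """Cones per mm^2 at eccentricity `ecc_mm` (toy profile)."""
    return 4_000 + 176_000 * math.exp(-ecc_mm / 0.3)

def cone_count(r_inner_mm, r_outer_mm, steps=10_000):
    """Integrate density over an annulus of the retina (midpoint rule)."""
    total, dr = 0.0, (r_outer_mm - r_inner_mm) / steps
    for i in range(steps):
        r = r_inner_mm + (i + 0.5) * dr
        total += cone_density(r) * 2 * math.pi * r * dr
    return total

foveal = cone_count(0.0, 0.75)      # the central ~5 deg is roughly 1.5 mm across
peripheral = cone_count(0.75, 12.0)
assert peripheral > 10 * foveal     # the periphery dominates the cone count
```

Same melted-Hershey's-Kiss geometry as the comment describes: low density times huge area beats high density times tiny area.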


the more i learn about this horrible flesh prison the more i want to be a murder robot
posted by poffin boffin at 3:48 PM on July 4 [23 favorites]


Isn't it disingenuous to claim that perception "lies"? Compared to what? Thinking?

Yeah, it seems this whole line of discussion buys into the "homunculus" paradigm, where the liar is the little man in your head who watches the screens and relays what he is seeing. The problem is, he would have to have his own little homunculus watching his internal screen, and so on, all the way down.

It's all a big hairball of survival mechanisms that creates a persistent illusion of integrated reality, as a grossly simplifying schematic sketch of being in the world.
posted by StickyCarpet at 3:51 PM on July 4


Some of the items in this post will make traffic-related town planning visibility cases a lot more fun! I see so many static images of new buildings 'as seen from the highway', while in reality they're imperceptible for a whole slew of reasons like these.
posted by unearthed at 3:57 PM on July 4


tickingclock there's a non-zero chance we've met, then. I've been going to VSS (Vision Sciences Society, for everyone else in the thread who is going "what is this acronym") since 2009.
posted by Making You Bored For Science at 3:58 PM on July 4


a neat trick with saccadic masking: go look in a hand mirror. no matter how close you bring it to your eyes, and how much you look around, you will never see your eyes move.

If you really want to watch your own eyeballs move, slowly cross and uncross them while looking in a mirror. Works for me at any rate.
posted by fimbulvetr at 4:08 PM on July 4


I did an experiment this morning at the coffee shop where I took iPhone time-lapse video of myself darting my eyes around and I'm thrilled to report that this makes you feel like a totally conspicuous idiot.
posted by cortex at 4:16 PM on July 4 [12 favorites]


Dude, eyes and brains are bizarre. Here's my related story.
A few years ago I was getting daily migraines and severe vertigo. Like, I literally couldn't eat or sleep because everything was spinning all the time. I would knock into stuff all the time. I'm chronically ill so it all got mixed up with other symptoms. I had an MRI and saw a neurologist, and ENT, and did balance testing.

One of the balance tests was to follow an LED dot on a bar with your eyes only as it bounced around, while goggles tracked your eye movements. If your eyes jump around, you may have inner ear problems. According to the doctor, I had one of the best scores he'd ever seen. They were honestly gasping at my test results. (Maybe my years of video games?)

Then they put hot air in my ears and made me feel like I was Alice tumbling down the rabbit hole. Then they put me in a harness and did a tilt test - and I fell after I closed my eyes and the floor shifted.

All they could tell me was that my left ear was functioning at 85% compared to the right, that I didn't have an obvious inner ear problem, and to do specific physical therapy to try to retrain my inner ear. As I did the therapy my symptoms got worse.

Until one night, I was horribly sick in bed and watching something on my laptop. I closed my right eye against the pillow, and suddenly my laptop screen went fuzzy. Terrified, I kept blinking between the eyes. One was horribly fuzzy. I printed off an eye test and had my husband give it to me; he thought I was lying when I couldn't read a few lines down with my left eye.

At the eye doctor I still felt like I could read things totally "normally" with my right eye and with both eyes, but the left was all fuzzy. Then I got my test. My right eye is 20/40 and my left eye is 20/70 with severe astigmatism causing double vision in only my left eye.

I then started to realize how many things I had been seeing double - for YEARS! Headlights in a rearview mirror were one of them. My depth perception was awful.

But I completely thought I had normal vision. I had better than perfect vision according to an eye test 6 years prior. But the whole time my "better" eye was overcompensating. It was completely lying to me. My vision with both eyes open was nearly the same as my "good" eye. It was almost ignoring input from the crappy eye. But that screwed up my balance system. Now that I have contacts and glasses I'm astounded I lived this way for years. It's like having one contact in - all the time.

But my eyes were trying to fill in missing information, fooling my brain in some ways; they just couldn't fool my balance system.

So uhhh, get your eyes checked regularly people.
posted by Crystalinne at 4:57 PM on July 4 [36 favorites]


Jeez, Crystalinne, glad you got that sorted.
posted by Johnny Wallflower at 5:22 PM on July 4 [2 favorites]


Not only are human eyes wired backwards, they're also wired for high availability, with the emphasis on retaining stereo vision instead of the full visual field.

Maybe you've heard about the stuff where your left side parts are wired to your right brain hemisphere, and the right side parts are wired to the left hemisphere, and you probably didn't even consider it (which is fine, because it's somewhat wrong) but for the eyes? oh man, is that not how things are done.

The eyes? The LEFT HALF of your vision, from the LEFT EYE and the RIGHT EYE: that goes over to the visual cortex in the right hemisphere. The RIGHT HALF of your vision is wired to the left hemisphere's visual cortex. Right the hell down the middle.

Oh, and get this: when you blow out one of those visual cortexes from brain damage, your visual system just picks right up and lies to you. There's no "dark spot" or anything; it's just like the blind spot you have right now behind your head, only it's in a place where you typically would have had visual input.
posted by Xyanthilous P. Harrierstick at 5:54 PM on July 4
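The routing described above reduces to a small lookup: it's the visual hemifield, not the eye, that picks the hemisphere. A sketch:

```python
# The routing described above, as a lookup: it's the visual HEMIFIELD,
# not the eye, that determines which hemisphere receives the signal.
# Both eyes' view of the left hemifield goes to the right visual
# cortex, and vice versa.

def cortex_for(eye, hemifield):
    """Which hemisphere's visual cortex receives input from a given
    eye's view of a given visual hemifield."""
    assert eye in ("left", "right") and hemifield in ("left", "right")
    return "right" if hemifield == "left" else "left"

# The same hemifield from either eye lands in the same hemisphere:
assert cortex_for("left", "left") == cortex_for("right", "left") == "right"
assert cortex_for("left", "right") == cortex_for("right", "right") == "left"
```

This is why damage to one visual cortex costs you one half of the visual field in both eyes, rather than one whole eye.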


But there isn’t, so our brains don’t “see upside down” any more than a stereo system would “hear backwards” if you had its right-stereo input on the physically left side of the case.

Even better! If you affix prisms in front of the eyes to invert the image before it enters the eyeball, the brain will adapt and present it as "normal" after a fashion.
posted by Xyanthilous P. Harrierstick at 5:57 PM on July 4


This is the least of the shit my brain lies to me about.
posted by Thorzdad at 6:07 PM on July 4 [26 favorites]


This explains something from my childhood! I had a real fear of being late - so much so that I would sit close to the classroom for ages towards the end of lunch time, instead of running around, stuff like that. (I think this stems from that one time I was engrossed in a game with a friend, both of us sitting in a big old concrete pipe in the back of the playground, and I didn't hear the end of playtime bell. I would have been about 5, I think.)

I did not trust my own inner sense of time passing at all, and would look at clocks a lot to make sure I was keeping track of time, but I was afraid of stopped clocks giving me a false sense of security (nah it's only 10:30, I'm fine!) So there would be that stomach-dropping moment when the clock would appear to be stopped before continuing to tick as normal.

Darn brain. I mean, it's the darn brain that dishes out the anxiety (when I realised I had it, this explained so much about my childhood to me) but it's like the visual cortex was messing with the rest of my brain and freaking me out.
posted by freethefeet at 6:22 PM on July 4 [2 favorites]


So, asking as someone with zero ocular knowledge, how would this interact with strabismus? Asking for a, um, friend, who has esotropia and frequently misses objects right in front of her... *cough*
posted by brook horse at 7:15 PM on July 4


BTW, since a few people have brought it up: There's a great sci-fi novel by Peter Watts called Blindsight. In it humans encounter an alien race they call Scramblers, who can move very fast and precisely, and they exploit saccades.
Cool, great, nightmare fuel; I wonder how many SCP articles can be explained by cheating saccades (looking at you, 173! ...along with two other people).

I do love this concept, however, and I will be loading it onto my Kindle later today

Also, the demonstration of the blind spot was neat but absolutely infuriating.
posted by lesser weasel at 7:16 PM on July 4


Motion aftereffect - so, that's what was happening when we rode in the back of the pickup truck, facing backwards, driving down the back roads and then the truck came to a stop and the landscape came rushing towards us? Good to know.

Thought it was the drugs.
posted by shoesfullofdust at 7:25 PM on July 4 [4 favorites]


Became aware of the blind spot back when I started riding motorbikes, and was vacuuming in as much info as I could. See a rider weaving for no apparent reason? Is there an intersection coming up?

I mean yeah, sometimes it's just because "Wheeeeeeeeeemotorcycle!!", but it's also a strategic thing.
posted by calamari kid at 8:04 PM on July 4


Samizdata: "It took y'all this long to notice Foone? Really?"

The 10K

I wonder if this sort of thing explains the 3 times I've been waiting at a gravel road to cross a 4-lane highway and experienced a car just popping out of nowhere very close to my car. I've written it off in the past to just not paying attention or being distracted, but maybe it got eaten by the saccade interruptions. It is seriously freaky when it happens.
posted by Mitheral at 8:33 PM on July 4 [3 favorites]


Being red-green colorblind, as well as a bit of blue-yellow, I have known I can't trust my color vision since I was about 2 -- I puzzled out reading by 3 by recognizing the color descriptions on crayons (not actually reading, but getting started.) Now, I can see colors, but can't distinguish between many shades (a rainbow has red yellow blue . . . I understand most people can see more than that) and cannot readily identify many colors. But my brain compensates, so if I believe something is a specific color, that's how I see it. I always saw "blue jeans" as blue, only to learn much later that they're actually a bit more purplish, indigo, something I never saw. Dry, brown grass was still green to my eyes, and I once bought what I was sure was a nice green shirt, only to learn at work that it was actually a vivid pink. But I saw that shirt as green as grass at first, and on being told (and confirming, many asshats have enjoyed trying to mess me up color-wise) there was a quite noticeable flip from green to pink in my view. Growing up, we had a color TV back in the 70s that apparently tended to make people green and otherwise shift colors around (the joys of the old tube-based CRTs), but I never saw it.

One job I had involved, believe it or not, color correction on one of the first print-to-plate desktop publishing systems, which was great because all I had to do was print a test image to the four plates (CMYK), then check them with a densitometer -- once the numbers matched, we were good to go for a print run. When my boss found out, he was furious -- he was insistent that somehow someone with good color vision was needed to balance the color ratios. This was arrant nonsense -- the plates were monochromatic; you didn't get a full color image until they hit the presses -- but he refused to believe that and relieved me of the duty. In my current career, though, I simply cannot reliably make up ethernet or RS232 cables -- the green, brown, and red wire colors are just too similar -- and it is often difficult for me to distinguish between the red, amber, and green "lights" on many GUIs for telco gear, as the programmers chose shades too similar and of the same intensity when programming them. Still, I manage, though sometimes I have to ask someone to check a color for me.
posted by Blackanvil at 9:30 PM on July 4 [9 favorites]
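[The plate-checking workflow Blackanvil describes -- print a test target on each CMYK plate, read it with a densitometer, pass once the numbers match -- amounts to a simple tolerance check on four monochrome measurements, which is exactly why color vision was irrelevant to the job. A minimal sketch of that check; the target densities, tolerance, and function name here are all made up for illustration, not any real prepress tool's values or API:]

```python
# Hypothetical sketch of a CMYK plate check: each plate is monochromatic,
# so only the measured densities need to match the target numbers --
# no color judgment involved.

TARGET_DENSITY = {"C": 1.35, "M": 1.40, "Y": 1.00, "K": 1.70}  # made-up targets
TOLERANCE = 0.05  # made-up acceptable deviation

def plates_ok(readings: dict) -> bool:
    """Return True if every plate's densitometer reading is within tolerance."""
    return all(
        abs(readings[plate] - target) <= TOLERANCE
        for plate, target in TARGET_DENSITY.items()
    )

print(plates_ok({"C": 1.36, "M": 1.41, "Y": 0.98, "K": 1.69}))  # True: all within tolerance
print(plates_ok({"C": 1.50, "M": 1.40, "Y": 1.00, "K": 1.70}))  # False: C plate too dense
```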


We think of consciousness as being a unified, instantaneous phenomenon, but the reality is a lot messier. To borrow from the old Cartesian theater metaphor, consciousness is not so much a movie you're watching; it's more like a live news broadcast of a chaotic event. The anchorperson is trying to form a meaningful narrative of events as they unfold, but new information is trickling in gradually and not all of it is correct, so the story is having to be continually updated and in some cases revised. Also, the anchorperson is you.

Have you ever seen a bug crawling around on the ground, only to look again and realize that it was just a speck of dirt, and therefore couldn't have been moving? Your brain "saw" it moving because in that immediate instant "bug" was the most meaningful interpretation. But the second look gave you enough additional information to change interpretations, at which point your perception shifted suddenly from bug to speck.

This sort of thing happens all the time, only you don't always notice it. Your brain ret-cons the narrative on the fly as needed to make sense of what it experiences. See, for instance, the color phi phenomenon, where your brain retroactively imagines transitional circles to create a seamless visual narrative. Daniel Dennett in Consciousness Explained posited that there were two ways to explain this kind of phenomenon: what he called Stalinesque revisionism, where your brain alters visual information before it reaches consciousness, and Orwellian revisionism, where your memory is changed after the fact to correct what you initially thought you saw. Dennett rejected both of these explanations, arguing that both operate under the assumption that subjective consciousness (in the immediate sense) is a finished product rather than a fuzzy, nonlinear draft under continuous revision. Your brain may even have multiple interpretations existing simultaneously, fighting it out to assume the role of subjective truth.
posted by dephlogisticated at 10:07 PM on July 4 [15 favorites]


Making You Bored For Science, no way! Yeah, I've been going pretty much every year for a while, though I can't quite remember since when. We should definitely have a meet-up. Are you going to ECVP? I'm based in Europe these days, so I'm planning to go. I'm leaving this comment in the thread in case other vision science people show up, but feel free to memail me!
posted by tickingclock at 3:18 AM on July 5


You can see striking and obvious forms of visual processing failures in everyday animals. Your household cat can't really see things that are very close to it: put stimuli too close to its face and it will completely lose sight of them. This is part of why cats are so determined to catch red laser dots despite never being able to -- at the crucial catching point they switch from their eyes to their whiskers and are confounded. They don't get to realize the dot is impossible prey; they just think it got away during the switchover, every single time.

With ducks and geese, they will only really detect you if you are moving in an understandable "biological" way. Walking or running, they sense you and respond. Cycling, they don't -- your movement appears to be invisible to them. It's a real pain in the ass on narrow British canal paths.
posted by srboisvert at 4:32 AM on July 5 [3 favorites]


To answer brook horse, a very quick look at the literature doesn't suggest notable interactions between saccadic suppression and esotropia, but this really isn't my area of research. It's much closer to what Dr Bored for Science, my wife, studies. There are significant variations in saccade behavior (and, really, all eye movements) with strabismus, and (often) some degree of suppression of input from the strabismic eye, which might be more of an explanation.

What Mitheral described is a huge problem in driving; usually phrased (depending who you talk to) as "the vehicle came out of nowhere" or "looking but not seeing." Why this happens is a significant debate in the driver behavior literature; it's usually interpreted as a failure of attention, in the sense that the visual system should have the information, the driver just didn't consciously perceive it. Interestingly, this problem gets worse with age (like everything else in the visual system), and has been the impetus for assessments like the Useful Field of View, which, pretty much, measures where it's easy to attend (not what the driver can see).

The issue of expectation in scene perception, which underlies a lot of what dephlogisticated mentioned, is huge. You can get the gist of a static scene (a rough sense of what's there and what's going on) in less time than it takes you to make an eye movement (stimulus durations ~75 ms), but it's totally possible to break the gist percept if I show you a weird scene. My favorite demo of this is to show a scene like this, and observers who only see it for 75 ms will report "yep, that's a restaurant scene" but are less likely to report that it's a very weird restaurant with the diners sitting on toilets and eating out of bathroom fixtures. It's also a huge factor in visual search, where your knowledge of how a scene should be influences where you look for what you're looking for (you are unlikely to look for a laptop in the bathtub).
posted by Making You Bored For Science at 7:08 AM on July 5 [8 favorites]


Hey, blackanvil, if you've got an iPhone-- head into `Settings` -> `General` -> `Accessibility` -> `Display Accommodations` and go into `Color Filters` and pick your particular brand of colorblindness.

Then also under `Accessibility` scroll down to the bottom and set the `Accessibility Shortcut` to be `Color Filters`.

Then when you triple-tap your home button-- it'll put your phone, including the realtime camera feed, into or out of colorblind mode. So if you need to do your cabling, you can just triple-tap, then hold the camera up to the cabling-- voila.

It's been super-helpful to me; I've increased the intensity to make it extra obvious (green/brown for me).
posted by Static Vagabond at 7:13 AM on July 5 [11 favorites]


This is basically just one step away from "There are things our brains are hardwired not to perceive", and there you have your horror or urban fantasy story.

Well, I have bad news: the call is coming from inside the house.
posted by benbenson at 9:23 AM on July 5


Have you ever seen a bug crawling around on the ground, only to look again and realize that it was just a speck of dirt, and therefore couldn't have been moving? Your brain "saw" it moving because in that immediate instant "bug" was the most meaningful interpretation. But the second look gave you enough additional information to change interpretations, at which point your perception shifted suddenly from bug to speck.

I was this close to posting an AskMe to find out if there was a name for this phenomenon! At least once a week I see a beetle scurry up and stop and magically transform into a sunflower seed.
posted by Devoidoid at 9:44 AM on July 5 [2 favorites]


Those are actually tiny mimics. When they get older, they'll turn into treasure chests.
posted by RobotHero at 9:50 AM on July 5 [7 favorites]


This explains all those times I thought the crosswalk red hand had stopped its blinking because it lasted longer than it should when I glanced over at it, so I think I'm good to cross the street the other way and whoops no it's still blinking and what the hell, brain, are you trying to kill me here?!
posted by Grither at 10:02 AM on July 5 [1 favorite]


I might need to steal "what the hell, brain, are you trying to kill me?!?" as a paper title. Then again, I have an experiment that had the nickname of "Holy Shit, Moose" this past spring, so there's some precedent in the lab.

What was "Holy Shit, Moose", you ask? I had subjects watch very brief (< 500 ms) dashcam videos from right before a collision or near-collision event to see how long they needed to see these scenes to understand them well enough to do a couple of tasks. It turns out, you need very little information (read: the videos can be very short) to either (1) detect collision precursors and (2) to localize whatever's happening in the scene well enough to decide to steer away from it. If you think this sounds like the scene gist ideas I mentioned above, you'd be right, except it's with moving scenes recorded from moving vehicles, not static images. It's probably a good thing it's this fast, because the moose who walks into the roadway to say hi isn't going to wait for you to figure out that he's a moose. You want to evade him, if you can, and all you care about is "that's a thing that shouldn't be here, and it's moving that-a-way, so I should go this-a-way."
posted by Making You Bored For Science at 10:25 AM on July 5 [7 favorites]


I wonder if this sort of thing explains the 3 times I've been waiting at a gravel road to cross a 4 lane highway and experienced a car just pop out of nowhere very close to my car. I've written it off in the past to just not paying attention or being distracted, but maybe it got eaten by the saccade interruptions. It is seriously freaky when it happens.

How about this for an explanation, Mitheral:

Watching the gravel stream by you and disappear under your car as you're driving is enough to trigger the waterfall effect below the level of attention when you come to a stop at the edge of the 4 lane highway, and that waterfall effect masks the perception of motion in your peripheral vision that would otherwise be caused by an approaching car, so that when you finally do notice it, it seems to pop out of nowhere.
posted by jamjam at 1:16 PM on July 6 [3 favorites]


Chewing on Mitheral's question a bit more (prompted by jamjam's proposed explanation), it sounds like what Mitheral is describing is waiting, while unmoving, at an intersection of a gravel road and a 4-lane roadway. My guess, without a more detailed sense of the environment, is that these vehicles are either occluded from view, or difficult to distinguish from the background, irrespective of the motion cue. Peripheral vision does some weird things, particularly at increased eccentricity (distance from the point of gaze), so it might be hard to tell that there's a car there (e.g., it's crowded from awareness, and perceptually seems to blend in to the background), even if it's moving at speed perpendicular to where Mitheral is waiting in their vehicle.

It could also be an attentional issue (which takes us back to the idea of the Useful Field, the region of space where it's easy to attend) - the assessment for it, Ball and Owsley's Useful Field of View, was pretty much developed to probe exactly this kind of scenario. It's not that you can't detect stimuli outside the Useful Field, it's just a lot harder, and it's particularly harder to do, for reasons we only kind of understand, for older drivers.

It doesn't sound like there's a way to get a strong motion aftereffect (the waterfall illusion) in this scenario. Usually, you want constant motion to induce it, so maybe really heavy traffic could do it, but it'd be a poor inducer compared to looking at a waterfall, or a spinning spiral, or even looking at where you're walking on an uneven path for a while. I've induced it in a driving simulator when it's driving itself and I'm just staring at the vanishing point of the roadway, but even that takes a couple minutes to induce. It's also a perceptual aftereffect, and while motion can mask stimuli, I don't usually think of the motion aftereffect as a good mask in its own right. Motion signals are great for mislocalizing stimuli, but those are usually fairly small mislocalizations, and I can't think of results that would account for something like Mitheral's experience.
posted by Making You Bored For Science at 1:48 PM on July 7


Mitheral was waiting at a gravel road to cross a 4 lane highway, MYBFS.

My guess was that the waterfall effect was induced by the previous process of driving on the gravel road to get to that point.

Staring down a gravel road, which has a great deal of texture, as it disappears beneath you as you drive, does seem substantially similar to staring at a waterfall.

I don't know where you got the idea from what I wrote that the 4 lane highway itself was the cause of a putative waterfall effect, since I clearly said "watching the gravel stream by and disappear under your car as you're driving is enough to trigger the waterfall effect ..."

In this scenario, stopping at the highway with its relatively uniform stretch of asphalt is the equivalent of looking at a blank wall after looking at a waterfall for a significant period.
posted by jamjam at 2:20 PM on July 7


Ok, jamjam, that makes more sense now. I agree with you that a gravel road, as a texture, is fairly similar to a waterfall, and, given the right setup, probably can induce a motion aftereffect. However, driving is unlikely to be a good environment to induce it, because drivers don't keep their eyes at a single point in the scene, and fixating a single location is required to induce the motion aftereffect. Drivers move their eyes continually; expert drivers move their eyes more than novice drivers (and across a wider range of locations in the scene; Mourant and Rockwell, 1972), but they don't stay fixated at a single location, which is pretty much essential for inducing a strong motion aftereffect.

Inducing the motion aftereffect, with ideal stimuli (e.g., a rotating spiral), usually takes 30 seconds or more of fixation for a robust perceived effect (there's a good review of the state of research on the motion aftereffect in Anstis et al., 1998), and it's a strongly retinotopic effect (Knapen et al., 2009), meaning that it's limited to the region of the retina corresponding to the location of the adapting stimulus.

So, even if we assume a driver who is doing something that drivers simply don't do (fixate a single location for tens of seconds at a time, or even single-digit seconds at a time), we'd get, at most, a very limited motion aftereffect in one particular location in the visual field corresponding to where the road was. A passenger might have a better chance of inducing the aftereffect in the gravel road scenario, since they have no need to move their eyes around the scene and can just fixate the road ahead if they so desire.
posted by Making You Bored For Science at 4:52 PM on July 7


It's not some sort of motion after effect; I only drive about 30m from the parking lot to the highway.
posted by Mitheral at 12:53 AM on July 8



