Disabled Users vs Jakob Nielsen’s “Accessibility has failed”
March 11, 2024 3:51 PM   Subscribe

Jakob Nielsen has a very long history in web UX design. His most recent post claims Accessibility Has Failed: Try Generative UI = Individualized UX. Accessibility pioneer Adrian Roselli summarizes the responses from many disabled web designers with equally long histories at Jakob Has Jumped the Shark.

Léonie Watson, a blind developer with 30 years' experience, challenges the problem statement:
Jakob Nielsen thinks that accessibility has failed.

I give this some thought as I make my lunch with ingredients I purchased from an online grocery store. I keep thinking about it as I return to my desk and respond to a few emails using my online mailbox. I check my online calendar for upcoming meetings (there are two, both to be held using one ubiquitous VOIP platform or another), and I keep thinking about it.

I jot down some of my thoughts in a text editor, using my laptop with an accessible OS and an integrated screen reader, then check the time on my accessible watch (that has a different integrated screen reader).
posted by Jesse the K (29 comments total) 18 users marked this as a favorite
 
Showing an example of ChatGPT sucking at making accessible ALT text, which you then throw out and rewrite, as evidence for why we should hand over control to AI is… an argument.
posted by brook horse at 4:01 PM on March 11 [12 favorites]


Welp, bye Jakob Nielsen.
posted by Artw at 4:02 PM on March 11 [2 favorites]


I was expecting a bit more from the Adrian Roselli "critique"... it's just a grab bag of one-liner attacks, some wildly personal, and some of which logically contradict each other. Where is his actual argument?
posted by Klipspringer at 4:05 PM on March 11 [3 favorites]


Adrian Roselli isn't writing his own critique. He's doing a summary of the various things OTHER people are saying about the piece. Like a mini Rotten Tomatoes or something.
posted by hippybear at 4:09 PM on March 11 [3 favorites]


Aside from the 12 links to people (11 if you don’t include himself) discussing the Nielsen article, he points out that there is no reasoning as to how AI will identify disabled users or how it will serve custom UI.
posted by The River Ivel at 4:11 PM on March 11 [4 favorites]


Did he call disabled people a "special-interest group?" Jesus.

Worth contemplating, just on a karmic basis, that in his dream world of "second generation generative AI," 90% of UX designers would no longer be employed.
posted by praemunire at 4:14 PM on March 11 [4 favorites]


GenAI is going to be huge for accessibility. I say this as a UX designer with 10+ years experience, and who has done deep accessibility work with companies who take accessibility very seriously.

I don't know whether Nielsen's suggested "Generative UI" will solve accessibility. I sort of doubt it. But it's obvious that human-language-as-an-input will fundamentally change how we all interact with computers. The stumbling blocks of accessibility — primarily visual perception and fine motor control — are just not that relevant when the computer can describe its current state with text, and when the user can command the computer in plain language.
posted by TurnKey at 4:16 PM on March 11 [9 favorites]


Welp, bye Jakob Nielsen.

Thanks to Zeldman, that was me 25 years ago.
posted by alex_skazat at 4:55 PM on March 11 [4 favorites]


This debate between UX experts seems unnecessarily fighty. They all want the same thing: for the tools we rely on to be easier to use, for everyone. AI-as-scare-term seems like a perfect way to turn an interesting conversation into folks just getting their hackles up. The huge improvement in both voice-as-input and voice-as-output in the last few years? That's all roughly "generative AI". Will some future (5+ years, as Nielsen guesstimates) AI be capable of improving the experience for the groups he identifies? It seems quite likely to me. By how much, I have no idea.

The most disappointing thing about the article for me is that back in the day when I followed him, Nielsen's biggest differentiator was that he actually did rigorous user testing, and published the results with his papers. Back then, when most "user testing" meant asking your boss if they liked the Photoshop comp, it was a huge step forward for web design.

Finally, re: Adrian Roselli -- holy crap, evolt.org! I haven't thought about that site in a looong time.
posted by gwint at 5:23 PM on March 11 [4 favorites]


A generative AI, seeded with the existing interfaces he thinks are terrible and UI principles he thinks are misguided, will somehow output individualized and adaptive solutions for users of all needs?

This is someone who does not understand the current wave of AI tech or is uninterested in discussing it honestly.
posted by Riki tiki at 6:42 PM on March 11 [7 favorites]


Somewhere on an old hard drive I have an antique browser extension that does nothing but add a giant picture of Jakob Nielsen to metafilter. That's all. I don't know why I have this and I certainly didn't write it.
posted by stet at 6:51 PM on March 11 [6 favorites]


A generative AI, seeded with the existing interfaces he thinks are terrible and UI principles he thinks are misguided, will somehow output individualized and adaptive solutions for users of all needs?

I was thinking about that, too. "AI" doesn't come up with new stuff. It just replicates existing patterns it's decided are recognizable to humans. So...how is it going to synthesize new solutions when the existing solutions are so unsatisfactory?
posted by praemunire at 7:03 PM on March 11 [3 favorites]


Accessibility Has Failed

Based on the WAVE accessibility report on his site, he's not just talking the talk, he's walking the walk. Three linked images missing alternative text?!? What kind of monster are you?
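For the curious, the specific check WAVE is flagging here (images, and especially linked images, with no alternative text) is mechanical enough to sketch with nothing but Python's standard library. This is a rough illustration of the rule, not WAVE's actual implementation:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect img tags that lack alt text.

    A missing alt attribute is always flagged. An empty alt="" is fine for
    decorative images, but is flagged inside a link, where the image is
    often the link's only content (WAVE's "linked image missing alternative
    text" error).
    """

    def __init__(self):
        super().__init__()
        self.inside_link = False
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self.inside_link = True
        elif tag == "img":
            alt = attrs.get("alt")
            if alt is None or (self.inside_link and alt.strip() == ""):
                self.violations.append(attrs.get("src", "<no src>"))

    def handle_endtag(self, tag):
        if tag == "a":
            self.inside_link = False

def missing_alt(html: str) -> list[str]:
    """Return the src of every img that fails the alt-text check."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.violations
```

So `missing_alt('<a href="/"><img src="logo.png"></a>')` flags `logo.png`, while a decorative `<img src="spacer.gif" alt="">` outside a link passes. The real rule (WCAG 1.1.1) has more nuance than this sketch, but the point stands: it is among the easiest accessibility failures to catch automatically.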
posted by kirkaracha at 7:15 PM on March 11 [2 favorites]


Their AI just beat up his AI.
posted by kirkaracha at 7:15 PM on March 11 [1 favorite]


At least in the realm of coding, I can say from personal experience just in the last few weeks that the current best-of-class LLMs (GPT-4 (almost a year old now!) and Claude 3 Opus (released last week)) have amazed me with their ability to both write and debug fairly complex code. Yesterday I was stuck on a bug for a while and, just for a lark, gave the model the file and a few lines of text explaining where I was stuck: could it please find the error for me? It came back with precisely what was wrong, along with a detailed explanation of the issue, and presented a (correct) solution. My project wasn't rocket science, so I'm sure there was similar code included in GPT-4's training data, and yet... website frontends can be complex, but they aren't rocket science either. It's not hard to imagine these tools getting good enough in a few years to automatically tune a site to a particular user's special needs.
posted by gwint at 7:33 PM on March 11 [1 favorite]


A generative AI, seeded with the existing interfaces he thinks are terrible and UI principles he thinks are misguided, will somehow output individualized and adaptive solutions for users of all needs?

Why assume this model would be trained with bad examples? It’s perfectly possible, although maybe not easy, to train the model with good examples. The charitable read on this argument is head-in-the-sand. Less charitable is simply that it is in bad faith.
posted by TurnKey at 8:21 PM on March 11 [1 favorite]


I think "AI" is best understood not as a new technology, or even as a culmination of previously existing technologies, but as a sort of mental tic, wherein a sufferer compulsively inserts those two letters into any attempt to think of a solution to any problem at all. It may very well be that there is no cure, and as the condition worsens, patients progress from imagining "AI" to actually implementing it, treating random strings of tokens as if they were coherent concepts.
posted by jy4m at 9:00 PM on March 11 [5 favorites]


Ignoring that "head-in-the-sand or bad faith" were basically the same options I gave, he categorically states that existing accessibility designs are a 30-year failure.

So even if you were manually curating the input to just "good" examples, it doesn't sound like he thinks there are a lot of those. Which is unfortunate for his argument, because you need a lot of data to make modern AI even barely passable at imitating things that have already been done.
posted by Riki tiki at 9:03 PM on March 11


he categorically states that existing accessibility designs are a 30-year failure.

A lot of the content linked in the post demonstrates how wrong Nielsen is about this.

So even if you were manually curating the input to just "good" examples, it doesn't sound like he thinks there are a lot of those. Which is unfortunate for his argument, because you need a lot of data to make modern AI even barely passable at imitating things that have already been done.

There are organizations with large enough codebases that meet enough accessibility requirements that training a model is not a problem. For them, at least.

Again, I don’t really think “generative UI” is going to solve accessibility. But I do think that GenAI is going to make it much easier for everyone to use computers, especially people with disabilities or impairments.
posted by TurnKey at 9:27 PM on March 11


A growing ability to “translate” GUIs into text or voice does seem to be on the horizon, which presumably has some relevance to accessibility. Perhaps one could even use computer vision tools to translate graphics into some intermediate code representation and build customized interfaces off of that. But a whole lot of GUIs that people regularly interact with already come as intermediate code representations, which designers fail to leverage to support accessibility on the level that it could already be supported. So the big vision here kind of comes off as “don’t worry too much about it, people will kludge together a way to make it work for them with new tech!” Which they might, but that doesn’t seem like something a designer should be proud of. “We’ve failed at accessibility, but it’s alright because somebody else might solve it for us.”
posted by atoxyl at 9:44 PM on March 11 [3 favorites]


It’s perfectly possible, although maybe not easy, to train the model with good examples.

His position is that good examples basically don't exist, though!
posted by praemunire at 9:51 PM on March 11


AI has it easy because at every step you compare the actual situation now ("accessibility sucks") with what AI could hypothetically do in a best-case future ("solve UX, solve climate change, solve world peace, etc.").

Whereas the only thing that we actually know for sure is that AI is fundamentally undemocratic, opaque, and is entirely controlled by a small handful of incredibly wealthy capitalist behemoths.

As a wise woman once said: the master's tools will never dismantle the master's house.
posted by splitpeasoup at 10:58 PM on March 11 [6 favorites]


Grifter identifies underserved community to exploit as a staging ground for variant on already-tired technology bamboozle.
posted by GoblinHoney at 1:00 AM on March 12 [2 favorites]


>Did he call disabled people a "special-interest group?" Jesus.

A plain reading shows that he did not:
the accessibility movement has been a miserable failure... Where I have always differed from the accessibility movement is that I consider users with disabilities to be simply users. This means that usability and task performance are the goals. It’s not a goal to adhere to particular design standards promulgated by a special interest group that has failed to achieve its mission.
Emphasis mine to point out that he specifically says the opposite. The special interest group which failed is clearly "the accessibility movement".

I'm quite skeptical of generative AI, but these knee-jerk reactions to his argument seem incredibly simplistic — generative UI is one rare area where the tech, IMO, at least potentially has value to society. I find his framing (standards-centric "accessibility approach" vs. "usability approach") compelling and his two critiques (variety --> cost, and second-class interfaces) at least worth considering.

He's been working on the problem for decades and isn't sure the dominant approach is working. Even if, like Watson, you disagree, are we really so sure we've solved this problem optimally that we shouldn't explore new approaches? That seems extreme. To me it reads like he's speculating about how we could break out of our current limitations with an entirely new paradigm, like the original 1972 Dynabook article did.
posted by daveliepmann at 2:28 AM on March 12 [7 favorites]


Jakob is a classic ‘Applied Futurist’ - one that predicts the future based on their experience and knowledge of a particular set of fields. His goal isn’t to get it right, but to instead spark discussion, and ultimately, interest in his consulting practice. To that end, he has succeeded. It will result in an uptick in prospective UX designers flocking to pay good money to learn more. This is the playbook of many professional consultants. It’s throwing gasoline on a smoldering fire when your business is to train firefighters for profit.

Having gone through a number of Jakob’s courses over the years, I can attest that accessibility is something they dabble in, but it’s not what their clients have historically been interested in. Rather, accessibility is an afterthought - a set of bolt-on features based on minimum standards required by law or to keep 80% of your client base happy. To that end, AI presents a mediocre ‘easy button’ opportunity to address accessibility for tech companies. Through that lens, Jakob is in the right place at the right time.
posted by WorkshopGuyPNW at 5:56 AM on March 12 [1 favorite]


Wow, spend over three decades making the web a better, more useful, more accessible place, give most of your research away for free, eventually get called a "grifter"
posted by gwint at 7:53 AM on March 12 [1 favorite]


Yeah, some of these responses are reading way too much malice into what is, at worst, excessive optimism about future technology.

Personally, as a disabled user with vision impairment and regular computer problems which present in laugh-until-you-cry ways (such as my company’s record system bugs out if a laptop is plugged into a monitor and the computer is set at anything but 100% zoom, so I can either have text at a readable size OR a usable amount of screen space, but not both, unless I train myself to click every button approximately an inch to the left of where it appears on the screen), I’m not dismissing the idea of GenAI for disabled UX design out of hand, but nothing in his argument is convincing.

If he could provide an example of where this worked I would be interested, but I don’t see one. And his one analogue is something whose implementation broke as much as it fixed for disabled users. Responsive design is the bane of my existence because most websites that try and take advantage of this break when set to 175% zoom, usually resulting in some elements entirely covering other elements so the information is totally inaccessible. So instead of having to tediously scroll back and forth on a zoomed in page, now I just don’t get access to the information at all! This is what I see easily happening with genAI, and I see nothing in the article to explain why this won’t happen or even the smallest example of genAI being used to improve a disabled user’s experience in anything but hypotheticals. And hypothetical new technology accessibility tools are also the bane of my existence, because they are so often hyped up with little disabled user input which is how you get bullshit like the Revolve foldable wheelchair.

Also, calling it “failed” after 30 years of trying just has me chuckling because in my field a topic we’ve been looking at for 30 years is just getting on its feet. Any topic we started researching in 1994 is probably finally starting to get some useful data just about now. You’re welcome to start researching genAI, but it’ll be a good 30 years before you get any useful data on that so maybe don’t throw accessibility out entirely, just in case it doesn’t live up to the hype.
posted by brook horse at 8:38 AM on March 12 [7 favorites]


The special interest group which failed is clearly "the accessibility movement".

(a) Are we going to pretend that we're brand new and are wholly unaware of the usage of "special interest group" with respect to minorities in the modern era?

Come on. "I consider users with disabilities to be simply users." Just like conservatives in the 1990s "didn't see race," they just thought that the NAACP was a special interest group, right? This type of rhetoric is well-known, especially for those of us Gen X and older.

(b) WTF would that actually even mean, for "the accessibility movement" to be a special interest group? Is he suggesting that they're professional activists of some kind, people who don't care about accomplishing the stated goal of accessibility so much as they care about getting their way or making a living? Again, I think I've heard that one before. (Also, a hell of an attitude to take towards an entire advocacy movement for a marginalized group.)

Setting this aside, I don't think it's utterly impossible for technology to solve accessibility problems. I don't even know how one would go about knowing that. I do think that ignoring the known limitations of the technology as well as the increasingly obvious distortions introduced by the socioeconomic context in which it is being deployed (see the FPP on Ed Zitron's recent column) while enthusiastically advocating its adoption over existing approaches is folly.
posted by praemunire at 10:58 AM on March 12 [1 favorite]


Let's do a thought experiment (pretty expansive, so feel free to skip forward if you've gotten the point and don't want the deep dive):

You have a simple grocery list app that you want to make accessible using Nielsen's imagined AI. What needs to be in the dataset in order for it to generate adaptive experiences customized to a single user's disability needs? Let's say colorblindness and reduced literacy.

Let's look at some of the types (and quantities) of data it would need and to support which inferences:
  • Lots of example interfaces...
    • The basics of visual interfaces: buttons, icons, labels, popups, all of it. Compare this to the ChatGPT problem: words can be permuted in so many ways to mean so many things, which is why they're vacuuming up as much of the history of human text as they can. Well, there are lots of ways to permute UI elements, too, and a lot of things those combinations can mean.
    • Now that there's that, which of those interfaces are "good". In the accessibility realm Nielsen thinks we've been doing a crap job, so arguably he doesn't think there's meaningful input data for this purpose. But let's ignore that and entertain the idea further, interfaces that are "good" at what? (see next two points)
    • For this example, which interfaces are colorblind-accessible. Oh, by the way, there are multiple types of colorblindness. So it'd better be able to infer from different interfaces in its source data which ones are accessible to which types of that disability. And somehow it has to infer which type is applicable to the current user.
    • Which interfaces are understandable with a low reading level: basically the entirety of ChatGPT's functionality here, but with much less tolerance for hallucinating. This data had better cross-pollinate well with its inferences on how to visually organize and label buttons, checkboxes, headers, etc.
  • Lots and lots of data on how app interfaces link up to app functionality. Let's say your app can let you filter for all perishable items on your list and also delete any produce items from the list. What happens if adapting for literacy changes an unchecked "Perishable goods" filter checkbox, into a "Remove fruits and vegetables" button? Literally what function will it call in the code? There are no right answers here. And even if there were, imagine how many grocery list-like apps it would need to have analyzed in order to reliably predict how to execute that using your app's code? Seriously, not to dwell, but all the apps it analyzed (including yours) would have to be so similar in implementation that you wonder if there's any point in your app existing in the first place.
  • On that topic, the AI would have to know what you want your app to do so it can start making accessible variations. How do you tell it that? Do you make a whole working app without accessibility, and then upload your git repo? Just write a design document? Wireframes? Do you have to use a specific prompt syntax it'll understand?
  • Oh lol I forgot to mention that this is an Arabic-language user so I hope all your models have robust examples of right-to-left UI design, and low-literacy Arabic text. My bad.
Now let's pretend none of those problems were as intractable as they obviously are: you have an AI that spits out an app. How the hell do you test it? Remember, Nielsen lamented that "there are too many different types of disabilities to consider for most companies to be able to conduct usability testing with representative customers with every kind of disability."

And that complaint is about being unable to test an interface rendered by a human-readable codebase! In the great AI future, the interface "logic" is essentially trillions of floating-point numbers pointing to each other (by the way, better hope Apple comes out with a 20TB iPhone in the next few years if you want your app to work offline), including a moving target of what it thinks it knows about the current user. It will change, subtly or dramatically, every time you tweak the dataset or AI code.

There's no testing that. There simply isn't a way. You are letting your users with disabilities fumble their way through a shifting interface, unable to verify whether the "share" button actually did send something or whether it would also do so the next time they use it.

An aside: Nielsen's vision here is especially ironic because "Jakob's Law" postulates that "users prefer your site to work the same way as all the other sites they already know." And he wants a site that doesn't even work the same as itself to a single user over time.

Okay, that ends the thought experiment. What can we conclude?

To me, it's the same conclusion I've drawn for decades in this industry. Each person has a lens through which they see "the problem", and there are many influential and powerful people who are too ignorant, too arrogant, or both to appreciate anything their own lens doesn't show.

Nielsen's lens is that he was an early thinker and effective communicator of how people would best adopt these new "UIs" and "web sites" that were coming into vogue. But his lens doesn't show him how to actually implement his guidelines, how to deal with the million ways your app doesn't allow perfect adherence to them, how to do a budget or project plan for implementing your product with those guidelines in mind, or what your product should be in the first place. They're also dated and (for the purposes of this topic) didn't anticipate the evolving discussion of users with disabilities. That's totally fine, no one expects a single person to have solved everything forever.

Unfortunately, though, he is using his fame to insert himself into the modern conversation, yet has chosen not to step back and recognize his limits (let alone consult others who could broaden his perspective). He doesn't understand the experiences of actual users with disabilities, nor the hard work of people developing tools.

In short, he sees a long climb ahead to get to the summit of Interface Mountain and says "I bet we'd get there faster if we split up and climbed all the smaller mountains in the world."

Whether he's saying that because he is actually that clueless, or because he's too invested in his own prominence and success to stay silent is left as an exercise for the reader.
posted by Riki tiki at 1:13 PM on March 12 [4 favorites]




This thread has been archived and is closed to new comments