Mortgage-backed securities, but for knowing things
July 27, 2019 6:51 PM

Because AI Said So - The Blue's own zittrain dons a sandwich board to warn us of the dangers of "intellectual debt": accepting answers proffered by "AI" (and often building upon them) without bothering to understand the principles, or lack thereof, upon which they're built – a veritable house of punch cards!

If you listen closely, you can hear those HN keyboards furiously clacking away at their well-considered counterpoints right about now.
posted by leppert (18 comments total) 32 users marked this as a favorite
 
One reason AIs are fooled by adversarial images that don't fool humans is that Where We See Shapes, AI Sees Textures.
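
The classic recipe behind those adversarial images is the fast gradient sign method: take the gradient of the classifier's loss with respect to the pixels, then nudge every pixel one small step in the direction that increases the loss. A minimal sketch, assuming a recent torchvision; the image path, step size, and preprocessing here are illustrative stand-ins:

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Pretrained ImageNet classifier (weights download on first run).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# ImageNet normalization is applied inside the forward pass so we can
# perturb, and clamp, the raw pixels directly.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
to_pixels = transforms.Compose([transforms.Resize(256),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

x = to_pixels(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)  # hypothetical image path
x.requires_grad_(True)

# Gradient of the loss with respect to the *pixels*, not the weights.
logits = model(normalize(x))
label = logits.argmax(dim=1)
F.cross_entropy(logits, label).backward()

# One small step per pixel in the direction that most increases the loss.
epsilon = 8 / 255
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("before:", label.item(), "after:", model(normalize(x_adv)).argmax(dim=1).item())
```

A perturbation this small is usually invisible to a person, yet it often flips the model's label: one symptom of the gap between what the model attends to and what we do.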
posted by a snickering nuthatch at 7:53 PM on July 27, 2019 [7 favorites]


Good article. Google expressed somewhat similar sentiments in Machine Learning: The High Interest Credit Card of Technical Debt, though it's for a different audience.

The lesson is easy to see as far back as Roman bridges. We marvel at the ones still standing, but never see all the people killed by the failures.
posted by cowcowgrasstree at 7:56 PM on July 27, 2019 [7 favorites]


What we lose over time is the ability even to tell whether the AI systems in question are wholly wrong, fully correct, or have given us a partially right answer that turns out, only after implementation, to be both misleading and irreversibly damaging to critical social or infrastructure systems. The debt part also points to a loss of interest over time in fundamental research. It will still happen, but if we can get an answer without the full effort and the answer works, why fund the research? Especially when the debt might never seem to come due.

Though if we are smart enough to handle these new tools well, I look forward to the moment when decompiling and decoding neural networks becomes a full-time career (not to be confused with using neural networks to decompile programs). Pulling out algorithms that no human would ever have come up with might help shed light on the processes that we have trouble understanding.
posted by Ignorantsavage at 8:50 PM on July 27, 2019 [8 favorites]


the part about consumers/workers/students/everyday people needing to employ AI to fit into or curry favor with the pattern recognition algorithms used by corporate/school/govt/police AI is terrifying. it will be one more expensive thing that only rich people will have, and it could become necessary to get credit or a job or admission to a university or to avoid police attention. i have my internet robots, you have yours, and we'll see who can fool whom. very bad stuff.
posted by wibari at 9:20 PM on July 27, 2019 [6 favorites]


" I look forward to the moment when decompiling and decoding neural networks becomes a full time career"

This is actually already true: when we apply machine learning to, e.g., biomedical phenomena, we interpret the trained model to generate hypotheses about how something works.
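
A concrete version of that workflow, as a minimal sketch: fit a model, then ask which inputs it leans on. This uses scikit-learn's permutation importance on synthetic data; the dataset is made up, standing in for something like gene-expression measurements:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a biomedical dataset: 500 samples, 20 features,
# only 3 of which actually carry signal.
X, y = make_classification(n_samples=500, n_features=20, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops; the
# features the model leans on hardest become candidate hypotheses.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

The output is a ranked list of hypotheses to test at the bench, not an explanation in itself; treating it as the latter is exactly the intellectual debt the article is about.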
posted by esker at 9:40 PM on July 27, 2019 [1 favorite]


I was thinking more of an identifiable profession that folks at a party might recognize. E.g., "I'm a Psychiatric Nurse, what do you do for a living?"

"I'm a Neural Network Analyst."

As I look around, all I can find are some papers about theoretical models of interpreting results, but that was not exactly what I had in mind. (Only a few links are provided.) I was thinking of a job wherein someone uses standardized tools and techniques to analyze and improve both neural network development and interpretation. Someday relatively soon I expect some school to offer a degree in Neural Network Analysis, as it seems to be a field with a lot of growth potential.

As for the idea of consumer AI trying to meet the expectations of corporate/government AI: there is no fight. The big guys win that fight. There will be hacks to the system that folks figure out or stumble upon, but the truth of it is, you cannot lose if you do not play. Either a progressive political movement will have to occur that regulates and checks these issues, or parallel systems will have to develop so that those left with inferior AI assistance will be able to function in a brave new world. Given the energy costs of AI, this may be a conflict that we elide because we are too busy moving inland and away from newly forming deserts. 'Tis a golden age of crises.
posted by Ignorantsavage at 11:32 PM on July 27, 2019 [1 favorite]


when we apply machine learning to, e.g., biomedical phenomena, we interpret the trained model to generate hypotheses about how something works.

That sounds like a lot of work - perhaps we can train a neural net to do that for us?
posted by DreamerFi at 2:17 AM on July 28, 2019 [4 favorites]


I was once tasked with writing a program to write the program for validating input data in different environments.
It was only after spending six months struggling with it (alongside other work) and getting a partial solution that the company admitted they hadn't expected a solution at all; the task was given to see how quickly people gave up.
They did actually use my work, though.
posted by Burn_IT at 4:13 AM on July 28, 2019 [4 favorites]


That sounds like a lot of work - perhaps we can train a neural net to do that for us?

They’re called grad students.
posted by wigner3j at 5:24 AM on July 28, 2019 [8 favorites]


I feel that a relatable example of this, one we're experiencing right now, is our reliance on GPS and wayfinding technology. There's certainly a benefit and convenience to leveraging the crowdsourcing of a thousand points of traffic data and using that to help me route around slowdowns and delays. And it's definitely welcome to have my phone assume the mental load of helping me find a destination in an unfamiliar city, but I am definitely troubled when I feel like I've visited a city yet have no mental conception of what its overhead map looks like, or of how I got from point A to point B without the help of an AI. And, more recently, for friends who have moved in the past five years, I realized that I had trouble remembering the literal last mile to their house without GPS. My brain knew how to get to their town and their main street, but their street name, and how to get to that street, was difficult to recall unaided. I didn't actually know where my own friends lived anymore.

I have also definitely felt like there have been times when my navigation software did not take me down an optimal path and, instead of guiding me along wide, low-congestion boulevards, shunted me into a twisty maze of side streets simply because it needed to collect data on how fast those streets are at this time of day and I just happened to fall on the wrong side of an A/B test. Then I wonder, to the discussion's theme, about the ripeness for abuse in creating a market for this traffic data, so that you can make money by telling retail operators where most of their customers tend to drive or walk, and you get a bizarre physical version of SEO where real estate investors try to optimize the placement of their buildings according to the navigation algorithms.
posted by bl1nk at 5:32 AM on July 28, 2019 [4 favorites]


bl1nk,

If you really want an examination of your fears, let me suggest The Age of Surveillance Capitalism by Shoshana Zuboff. You can hear an interview with her about the book here [Audio link].
posted by Ignorantsavage at 6:13 AM on July 28, 2019


it's definitely welcome to have my phone assume the mental load of helping me find a destination in an unfamiliar city, but I am definitely troubled when I feel like I've visited a city yet have no mental conception of what its overhead map looks like, or of how I got from point A to point B without the help of an AI.

Before GPS and smartphones, that was my life anyway. I had a really poor concept of where I was habitually driving and how it mapped spatially to other landmarks and familiar routes.

Online maps and GPS help me with that general understanding.
posted by Foosnark at 6:33 AM on July 28, 2019 [5 favorites]


My wife and her family have similar struggles, Foosnark, and I don't mean my earlier comment to be a general message about how "GPS is the downfall of civilization". I appreciate the aid it gives to people who struggle with geographic orientation and navigation. I am just concerned that it's training many of us who are already capable of general navigation to be less capable.
posted by bl1nk at 6:39 AM on July 28, 2019


I think the idea of intellectual debt is an interesting one, and I'm definitely skeptical of the new "AI" movement, particularly given the main point that "statistical correlation engines" (what a great turn of phrase) don't establish causation and can lead to a (dangerous) false sense of confidence.

However, at the risk of nitpicking too finely, the statement

"In the past, intellectual debt has been confined to a few areas amenable to trial-and-error discovery, such as medicine."

strikes me as being a bit... untrue? Historically speaking, hasn't most technology been of this type? Fire, metallurgy, electricity/magnetism, light, flight, etc. were all applied widely before being understood on a theoretical level.

Of course, the scale of impact of tech like AI and nuclear is quite different from those past technologies, and it makes sense to tread with greater caution.
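
The correlation-engine worry is easy to demonstrate: given enough candidate features, pure noise produces impressive-looking correlations. A toy sketch (every number here is arbitrary):

```python
import numpy as np

# Pure noise: 50 "patients" and 10,000 random "features", none of which
# has any real relationship to the outcome.
rng = np.random.default_rng(0)
y = rng.normal(size=50)
X = rng.normal(size=(50, 10_000))

# Correlate every feature with the outcome and keep the best-looking one.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
best = int(np.abs(corrs).argmax())
print(f"feature {best}: r = {corrs[best]:.2f}")  # typically |r| in the 0.5-0.6 range
```

A correlation engine will happily report the winner; nothing in the number itself warns you that it would evaporate on fresh data.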
posted by jarek at 7:19 AM on July 28, 2019 [1 favorite]


They’re called grad students.


And of course there's a relevant xkcd...
posted by DreamerFi at 7:23 AM on July 28, 2019 [4 favorites]


As a scientist who works with complex systems and has some professional familiarity with AI:

I avoid it like the plague, and encourage everyone to do so (not like it matters). It’s only useful if you have no interest in understanding things but want an answer spat out, and that’s pretty much the opposite of what I’m supposed to do.
posted by SaltySalticid at 8:38 AM on July 28, 2019 [4 favorites]


I'm a software engineer working on AI-adjacent projects. The reason I say AI-adjacent is that research sponsors these days basically won't fund anything that isn't in some way tied to machine learning, so the people who write the project proposals lard them up with AI/ML keywords. Then we go develop our software, and there's no actual AI/ML to speak of, but it does what it's supposed to do for the people doing the actual work, so we keep getting funded.

We've gone so far as to write detailed reports explaining that the problems we're solving aren't AI problems, and that machine learning methods aren't particularly useful for the domain we're working in except for very specific problems... "Nope, keep doing that machine learning stuff", they say. So we keep doing our non-machine-learning machine learning stuff.

We're not selling snake oil to cure a headache -- we're selling aspirin labeled as snake oil to cure a headache because the people writing the checks insist that snake oil be used to cure headaches.

It's madness.
posted by tonycpsu at 8:41 PM on July 29, 2019 [5 favorites]


That is horrible but also hilarious: in one way not much worse than any other buzzword chasing, but in another way, so much worse!
posted by SaltySalticid at 4:19 AM on July 30, 2019 [1 favorite]




This thread has been archived and is closed to new comments