It's not even very good!
February 28, 2024 5:11 AM   Subscribe

AI is already better than you. "You cannot shame this technology into disuse any more. That only works if quality is something the people with money care about. The problem with the continuing erosion of the games industry, the dehumanisation of game workers and the brutal treatment of outsourced work, is that many roles in the games industry are already treated as if they were automated. You are appealing to the better nature of money men who do not have one."
posted by simmering octagon (78 comments total) 23 users marked this as a favorite
 
Originally posted as a comment by ob1quixote in another thread.
posted by simmering octagon at 5:13 AM on February 28 [1 favorite]


I remember when I used to be excited about AI, especially "acting" as characters in games like this. But now the whole subject just makes me tired.

I kind of want to think "you don't hate AI, you just hate capitalism." I would like to see the potential in it for assisting humans to make art, decorating spaces that wouldn't have otherwise been decorated, for more immersive virtual experiences, maybe even solving some of society's problems (instead of creating new problems and exacerbating some old ones, which is what it's mostly good for right now).

But I'm just so weary of it, I just kind of think "fuck AI anyway" and refuse to use/play with the tools anymore.
posted by Foosnark at 5:42 AM on February 28 [37 favorites]


What are we actually supposed to do about this? How can we actually make change here? This is absolutely an open question, not a sour challenge!
posted by The Last Sockpuppet at 5:44 AM on February 28 [3 favorites]


I think the addendum at the end is very important: "even if people may not always care about quality, it is still valuable to fight for it and encourage them to care." It's an interesting argument and maybe a little subtle--the money doesn't care about quality, but ordinary people get hung up on symbolic measures of quality (the AI hands) that will sap their concerns of force, if those particular issues of quality ever get fixed. But we still have to care, because the seven fingers, the weird pupils, the bad dialogue--it all points to the moral injury of AI--the fact that you could do something so much better, but you're expensive, and for now, AI appears cheap.

As much as I love AI toys, I am so worried. Take radiology AI. The buzz is that a good system can read your x-ray better than a radiologist. The AI is going to catch your cancer earlier, or is going to accurately see that the little shadow is a benign artifact, or whatever. Except is that true at all? There's the science behind testing who's better at it, AI or humans--but then there's the "science" that sales forces use to convince hospital CFOs. Quality really matters there, it's a life-or-death problem--but the decision-making will have the same technophilic blind spots that game companies have. We, as a culture, don't have any voice at all in it, and even if we did have a voice, how would we get actual information?
posted by mittens at 5:47 AM on February 28 [47 favorites]


In terms of quality, I think that's going to be the problem AI is going to have for a while. Consider Totino's frozen pizza. Substandard by pretty much every metric for pizza, and incredibly cheap / efficient to make. Given the current fears regarding AI, you'd assume that Totino's has absolutely obliterated both the frozen pizza and pizza restaurant industries, having figured out the cheapest way to make pizza.

However, Totino's instead has cornered the market of "I'm drunk/broke and want something resembling pizza in some form." It's wildly popular, but far from a monopoly because people who care about specific qualities of pizza generally don't go for Totino's.*

In the same way, people who care about specific quality in art or music or video games probably won't be potential customers for whatever AI-powered slop comes out in the future. Don't get me wrong, some people will be totally ok with AI slop and the slopmongers will make bank, but the specificities and limitations within the inherent architecture of LLMs are going to limit their reach and power. It will still cause damage to the creative field, but it won't be a complete extinction of creative jobs. People will still care about quality, about things that speak to them in ways they never expected, about new ideas, and about the specific qualities of a work made by a person. I would place money on this.

*(If you genuinely enjoy Totino's, you are exempt from this generalization and also I envy you)
posted by Philipschall at 6:15 AM on February 28 [20 favorites]


Apart from the enshittifying touch of capitalism, I'm not so worried about purpose-built AI models like those that could screen x-rays for cancer. The problem domain is pretty well defined and as you say, there are very scientific ways of measuring the performance of the AI vs the performance of a human. It's also pretty easy to translate value judgements onto the ROC curve--we're probably willing to accept a higher degree of false positives if the benefits of early intervention outweigh the risks of operating on something that isn't cancer. And even then this can still be just one component of a process whereby results are then forwarded on to humans who can make the actual judgement call. Yes it's still a little unnerving that the AI model is a bit of an opaque box and we don't have a good understanding of why it's interpreting images the way it does, but given the limited nature of what it's doing there are ways of mitigating that.
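To make the ROC point concrete, here's a toy sketch (synthetic data, invented cost numbers--in a real radiology deployment the cost ratio would come from clinical evidence, not a code comment): weight a missed cancer as far more costly than a false alarm, then pick the operating threshold that minimizes expected cost.

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 1000)                                   # 1 = malignant, 0 = benign
    y_score = np.clip(0.5 * y_true + rng.normal(0.3, 0.2, 1000), 0, 1)  # stand-in model scores

    fpr, tpr, thresholds = roc_curve(y_true, y_score)

    # Value judgement: a missed cancer is 20x worse than a false alarm
    # that only triggers an unnecessary follow-up.
    miss_cost, false_alarm_cost = 20.0, 1.0
    expected_cost = miss_cost * (1 - tpr) + false_alarm_cost * fpr
    print("operating threshold:", thresholds[np.argmin(expected_cost)])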

What's scary are these attempts like ChatGPT to build some sort of infinity do-all box. Trying to completely replicate all the generalities and performance of a human doctor is something we have neither the technology to do well, nor the understanding to even attempt.
posted by RonButNotStupid at 6:18 AM on February 28 [16 favorites]


The Last Sockpuppet: What are we actually supposed to do about this? How can we actually make change here? This is absolutely an open question, not a sour challenge!

Personally I don't think we, as in you and I, can! No matter what kinds of policies are put in place, people will find a way to circumvent them (think 'move the servers to the Bahamas' and so on). Now that the technology exists, Pandora's box is open. But I think that it's important to understand and be informed about the technology: how it works, what it is and isn't capable of, why people have issues with it, and what other options exist to perform tasks like writing video game dialogue.

But I have a broader point, which is: I don't think that we are actually mike cook's primary audience for this piece? The last paragraph is making an argument about 1) how AI research is funded and used, 2) who performs the research and why, and 3) who makes video games - so I think that the call-to-action of this piece is primarily directed at people who actually work in the industry or do related research. It seems like (knowing nothing about the author) mike cook wanted a general audience like us to be informed or learn something about the topic, but we aren't the people who are supposed to do something.
posted by capricorn at 6:21 AM on February 28 [4 favorites]


In terms of quality, I think that's going to be the problem AI is going to have for a while.

They're very much aware: a significant portion of investment from the AI manufacturers is going towards manual quality validation and rating by human beings. Incredible amounts of data validation work are being performed right now and it's only going to get bigger.

Also "Consider Totino's frozen pizza." is a great sockpuppet account name.
posted by slimepuppy at 6:25 AM on February 28 [12 favorites]


What are we actually supposed to do about this? How can we actually make change here? This is absolutely an open question, not a sour challenge!

The hope, if there is one, is to make AI tacky. Brand it as cheap, stupid, made by scammers for rubes. Remember how polyester and double-knit synthetic fabrics were everywhere in the 70s? And as soon as the culture was tired of them, they were radioactive. Even the word "polyester" was a joke. Of course, synthetic fabrics are still around because they do have good uses, just as AI does, but they aren't considered high-quality outside of camping trips, costuming, and so forth.

The Willy Wonka Experience has helped with that, but it's just one news story. The giant rat penis is another.
posted by Countess Elena at 6:25 AM on February 28 [30 favorites]


I like my videographer/short film making friend's take on this: The video editing tools he has used his entire career to build a successful one man business and make a series of short films and animations are full of AI already. It's what allows small creators to do what once took a studio. The fact that AI may be able to pump out stuff that looks like 40-60% of the human made content out there means that that content was already shitty. So while this will no doubt make studio movies and AAA game developers increasingly awful to work with or for, these tools will also make it easier for truly creative people to do more with less. This has been the march of tech in the arts forever.

The only real issue is the blatant mining and theft of real human creative work. Studios can hire lawyers instead of artists and never have to sweat being properly sued or shut down for stealing.
posted by es_de_bah at 6:29 AM on February 28 [11 favorites]


In the same way, people who care about specific quality in art or music or video games probably won't be potential customers for whatever AI-powered slop comes out in the future.

Yeah, but that's worse - it means a world of bespoke care for people who can afford it, and a world of shitty care for people who can't.
posted by corb at 6:31 AM on February 28 [27 favorites]


corb, you're right. This is where some legislation may help, such as mandated labeling, but I don't know how constitutional that would be. (Or how possible, considering the power of studios.)

It so happens Chuck Tingle just made an extremely relevant post:

something interesting about willy wonka experience scam is that some were tricked by ai art. goofballs loved to say ai 'art' will make EVERYONE professional artists but when you see an add with ai art your brain already thinks 'this is a fake company'. it already signals 'cheap'
what scoundrel techgoofs never seem to understand is that most successful art that buds like is not entirely about technique it is about taste, and you cannot fake taste. the ai art my look 'technically' proficient, but it also evokes 'scam' 'cheap' and 'things i block on twitter'
art is usually about EVOKING something, and skill of HOW to evoke these feelings is predicated on empathy, which these goofs often lack. will be interesting to see companies realize how much selling out human artists cheapens their brand on a VISCERAL level, where the true marketing lies

posted by Countess Elena at 6:43 AM on February 28 [35 favorites]


Yeah, but that's worse - it means a world of bespoke care for people who can afford it, and a world of shitty care for people who can't.

Isn't that pretty much what we already have because capitalism?
posted by RonButNotStupid at 6:43 AM on February 28 [15 favorites]


The fact that AI may be able to pump out stuff that looks like 40-60% of the human made content out there means that that content was already shitty.

This argument makes no sense. It betrays a basic lack of understanding of what it means to take an average.

This has been the march of tech in the arts forever.

Capitalism is not a force of nature. Technological innovation is not necessarily progress.
posted by grog at 6:44 AM on February 28 [4 favorites]


Two thoughts I've seen online about AI really ring true for me.

On AI writing articles and stories: "If nobody cared enough to write it, why should I care enough to read it?"

"AI was supposed to take over the tedious tasks so people could spend more time creating, but instead it's taking over creative tasks so people can spend more time on tedium."
posted by Servo5678 at 6:46 AM on February 28 [55 favorites]


AI was supposed to take over the tedious tasks

Yes, instead of working on hard problems that would be good for people, AI researchers have focused on tractable problems, the benefits of which are not actually that clear, and the drawbacks of which seem patently obvious. Nobody really wanted automatic image generators or bullshit generators, but it's the only thing the researchers could figure out how to do, so now we have automatic image generators and bullshit generators.

To me it's like saying "Curing cancer was too hard so we made an atomic bomb instead, isn't that cool and disruptive?"
posted by grog at 6:58 AM on February 28 [40 favorites]


I've been talking a lot to teens in my writing classes about AI. First, I tell them that it's not ethically sound to use anything an AI writes for you, because it's not creating-- it has learned, based on whatever data the programmers fed it. And that might be great works of literature in the public domain... but it's also the entire collected works of AO3, and every Nazi manifesto floating around on the Internet. It didn't have permission to scrape all of that data, either-- so original work by those artists is being used without their knowledge and consent-- especially true for visual artists.

I tell them about the chat bot that had to be removed from the internet after a few short hours, because it learned to be a racist, sexist, homophobic fascist. Unless you personally fed your AI ethical learning sets, you can't know what AI is recreating and reusing, and it can't have opinions. It can only give you other people's opinions, and that's not how storytelling works.

But I also tell them, you can use Chat GPT to see an *example* of the kind of writing you're trying to do. For example, in my recent novel I wanted to include a newspaper article. Obviously, I read newspapers all the time, so I have a vague understanding of how articles are put together. But I wanted to see how an article shaped with a specific set of data might look.

I made one in Chat GPT, because: can I look at 5000000000 newspapers and compare them to learn the style? Sure. Is it easier to let something that has already looked at 500000000 newspapers give me an example? Yes. The Totino's pizza of articles. But it helped me get my head around what I was trying to do, so I could write my own, original article for my book.

Some students have already heard about the ethical use and misuse of AI, and understand that the output from an AI is not original work and almost definitely contains seeds of other people's work, and may contain straight-up plagiarism. Telling them "don't use this!" just makes it appealing. So hopefully, I'm helping them make better decisions about how they personally use AI as they create... while also hoping that it doesn't put me out of a job.

Now I want some fucking pizza.
posted by headspace at 7:02 AM on February 28 [17 favorites]


I don't like their argument that the human player authors half of the dialog.

AI-tuned (nyuk) dialog, where what you type is rewritten by the AI to be "what your character would say", is too obvious and easy.
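(A sketch of what that rewrite loop might look like, using the OpenAI client purely as an example--the model name, persona, and prompt here are my assumptions, not anything from the article:)

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def in_character(player_text, persona="a gruff dwarven blacksmith"):
        # Rewrite the player's input in the NPC persona's voice.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"Rewrite the player's words as {persona} would say them. "
                            "Keep the meaning; change only the voice."},
                {"role": "user", "content": player_text},
            ],
        )
        return resp.choices[0].message.content

    print(in_character("uh, I want to buy a sword I guess"))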

A BG3 at 1000x the size is going to come out of this. BG3 has 1000s of hours of dialog, a relatively huge event tree, and massive maps. If AI can make this 10000x cheaper to produce, you can make a game with 1000x the content for 1/10th the cost. They'll suck, initially, of course.
posted by NotAYakk at 7:05 AM on February 28


So, since it came out, I've used ChatGPT extensively when writing - not to generate text, but to be a sounding board to make sure my ideas track and as a "bucket" to dump my thoughts in when I'm doing research without annoying all of my friends. I feel like a "guns don't kill people" person here, but LLMs are useful tools when not aimed at pumping the market full of garbage, and it's frustrating that those concepts are so tightly coupled now that it's hard to talk about the actual benefits of using things like ChatGPT.
posted by bookwo3107 at 7:05 AM on February 28 [12 favorites]


*PS There is nothing wrong with AO3. I love it; I have fanwork on it. But fanfiction has different forms and conventions from original fiction writing. When AI mashes them up, you have no idea if you're getting text that meets the parameters for original fiction, or if you're secretly getting kryptonite MacGuffins from fanfiction in your results. If you don't already know every fanfic convention, you won't be aware that AI is using them. The same is true for political dog whistles and other connotative speech--if you don't already know, you could use them accidentally, which rarely happens when you're just making shit up as a writer, like you're supposed to.
posted by headspace at 7:06 AM on February 28 [6 favorites]


I've started seeing more "AI" models pop up in traffic safety modeling, usually framed as a way to better identify causal factors in crashes. However, for the time being, these results tend to be uninteresting or even factually incorrect. For example, I recommended rejecting a paper that used a machine learning algorithm to essentially identify correlations in a large traffic safety data set. The authors, who were computer scientists and not traffic safety researchers, looked at the correlations and made recommendations like, "Drivers can increase their safety by turning the heat on at night to be more comfortable." Just a frustrating, fundamental failure to understand causality. I've used random forest regression and other machine learning processes to perform a first pass on data, but you have to have an actual human think about the results after that pass.
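For the curious, that "first pass" workflow looks something like this--all synthetic data and invented column names, just to show why importance is not causation:

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n = 5000
    df = pd.DataFrame({
        "speed_limit": rng.integers(25, 75, n),
        "night": rng.integers(0, 2, n),
        "heater_on": rng.integers(0, 2, n),
    })
    # Make heater use correlate with night driving, while severity is
    # actually driven by speed and darkness only.
    df["heater_on"] |= df["night"] & (rng.random(n) < 0.5).astype(int)
    df["severity"] = df["speed_limit"] * 0.1 + df["night"] * 2 + rng.normal(0, 1, n)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(df[["speed_limit", "night", "heater_on"]], df["severity"])

    # "heater_on" shows nonzero importance purely through its correlation
    # with night -- the model cannot tell you to turn the heat on for safety.
    for name, imp in zip(model.feature_names_in_, model.feature_importances_):
        print(f"{name}: {imp:.3f}")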

Frustratingly, despite my insistence that that paper should never see the light of day, I think the editor rubber stamped it.
posted by TheKaijuCommuter at 7:07 AM on February 28 [6 favorites]


it's taking over creative tasks so people can spend more time on tedium.

I think it was Zitron who pointed out that one of the things AI would actually be really good at is replacing the CEO in most companies -- the kind of company that hires consultants like McKinsey would be an excellent candidate for replacing the CEO with an ML system. Feed it the consultant spreadsheets, give it access to the financial data, and an ML system could probably do a better job than most human CEOs, especially if you used nifty adversarial tricks (CEO-ML, CFO-ML, etc., all adversarial-training on each other).

Of course, that will be the LAST thing that happens, because nobody actually WANTS efficiency, they want to see little people squirm because that's the whole reason they fought to be a CEO in the first place.

...having said that, if you want to set up an amusing fight, build that ML-CEO system, price it at a bit over a million a year, and then get yourself some activist investors to start pushing resolutions at the shareholder meetings, with perhaps the occasional lawsuit about fiduciary responsibility.

Enough of the training data is probably public (SEC filings and so on) that a small dev team could probably get something workable.
posted by aramaic at 7:14 AM on February 28 [38 favorites]


(Caveat up front: I believe that ChatGPT is likely to cause far more harm than good - and may be the beginning of the end if we're really headed to general AI. There. That's out of the way.)
This was a fascinating article for me to read as a DM and as a LARP director and player. It makes the really good point that players themselves write (or act out, at RPGs and LARPs!) half the dialog if given the chance, and we're pretty bad at it.
That's what tabletop RPGs and LARPs are all about, really - improv dialog that sometimes soars to great heights, but sadly usually involves tropes and stereotypes. And we love it!
Which gets to what I'm already using ChatGPT for - and I'm sure other DMs too - it's really, really good at making up side adventures on the fly (like, *during a session*), at providing descriptions and dialog where needed. I recently had it generate Shakespearean dialog for a troglodyte 'king' who is supposed to speak in iambic pentameter. I told GPT the situation, and then fed it each line to be translated. It made stanzas that I read out over Zoom - and it was brilliant. Cheesy and wonderful.
For the superhero LARP I ran last fall, GPT made me flyers to post about the game setting, and even walked me through the steps of physically constructing the big machine prop for the final battle. Like, it told me how to make a Lava Extractor. Seriously.
As for computer RPGs, there's no way to have a human writer available to real-time chat with players. I mean, I did that as a DM in Neverwinter Nights for years - I was fast at it - but for other non-DM'd games (all the rest), it's not an option. Having a LLM available to roleplay with is going to feel amazing for players once it's a regular thing. It will be as cheesy as any tabletop RPG, and just as much fun.
Which gets back to my caveat and the article's point. What do we do about the fact that LLMs are *already* better than many alternatives for gaming? Our current capitalist system will use that to ruin the industry, to ruin the lives of those involved, in the short term.
posted by Flight Hardware, do not touch at 7:18 AM on February 28 [4 favorites]


It's interesting reading this thread. I did a lot of historical research about the Industrial Revolution, and the conversations back then about the diminished quality of mechanized work vs. hand work in that era are exactly the conversations we are having now about AI. So it may be instructive to see how that shook out over the next 100 years or so.
posted by rednikki at 7:54 AM on February 28 [9 favorites]


but when you see an add with ai art your brain already thinks 'this is a fake company'. it already signals 'cheap'

Something I've been forced to confront over the years, though, is that a lot of people don't read signals in the same way. Stuff that screams "fake" and "cheap" to me is taken seriously by a whole lot of people, and a lot of signals are just invisible to them. I mean, see how many people did show up and pay for the fake Wonka thing. (Some things are also taken as signaling affiliation rather than quality; see the Trump team's apparently intentional strategy of not just not correcting the grammar and capitalization issues in the posts he writes - which strongly signal "ignorant" and "huckster" to some people - but amplifying them and imitating them in posts people ghostwrite for him, because they signal something different to his target audience; or the American right-wing strategy of making graphic design choices that would fail you out of most design programs.)

Also, even if you are sensitive to things signaling cheapness and lack of quality, the 'this is a fake company' radar is complicated by how tacky and huckstery so much "legitimate" corporate marketing and corporate language is. If anything, ChatGPT-style text is much closer to the type of slick, meaningless bullshit you see on corporate sites than to the mis-spelled, ungrammatical text that has often signaled 'this is a fake company/person', or at least 'this is a company/person too cheap to pay a proofreader'.

Chuck Tingle was talking about art and not text in that blurb (and interestingly, Tingle's posts have a spelling and grammar style that either violates or plays with traditional quality signals in writing), but I think this applies even more to art - my guess is that there are more people able to identify spelling mistakes and broken grammar than people who look closely at images and say "you know, I think this is photoshopped" or "I think this is AI" or "I think this picture is just not very good"- and at least as many people willing to say "who cares" about issues with images than with text.

"Who cares," "everyone does it", "I don't see what the problem is, you're just being snobby", and "whatever, I still like it" are the responses I get from a whole lot of people when I point out quality issues. I think taking it for granted that people will reject AI stuff based on quality is much too optimistic.

art is usually about EVOKING something

If you grow up eating the shittiest boxed macaroni and cheese, then that shitty boxed extruded product might evoke at least as strong a reaction in you as the finest macaroni made from scratch with the best ingredients and technique. Visceral responses have to do with a lot of factors, we have not all been raised on a diet of strictly the most powerful art and design, and for better or worse even the most blatantly extruded art can evoke positive reactions in a lot of people on a lot of levels.
posted by trig at 8:00 AM on February 28 [18 favorites]


Brief AI stuff first:
I wrote something recently about the structural differences between LLMs and neural networks in mammalian brains - both in terms of how they approach implementation, and also deeper theory-of-mind structural limitations. TL;DR: if you’re waiting for / dreading AGI, don’t. If you’re waiting for something that can do enough basic chain-of-thought / reasoning approximation for a digital assistant then OpenAI has a pretty good research proposal for that, which they’ve supposedly implemented as a proof of concept. Likely full public release in early 2026, maybe a few early bits in GPT-5’s upcoming post-election release. It is not, despite the marketing hype, intelligent in the way humans are. At all.

More interesting gamedev stuff:
As a Technical Game Systems Designer my job involves weaving between game engine C++, HLSL code for GPUs, and a profoundly complex gameplay visual scripting system on a daily basis (plus loads of LUTs and utility textures/meshes in Photoshop and 3ds MAX). I submitted some of each yesterday, and will again today. All of these systems are based on fundamentally different frameworks and stacked mental models, each having internal systemic considerations that directly impact the others, and all of them are massively altered in usage and expression by the particular game being made at the time, and by the behavior/expectations of human players of its genre. Ultimately: we build exorbitantly complicated meta-models that aim to produce fun based upon a deep understanding of human psychology, while attempting to ward off interference by thousands of clever monkeys operating in bad faith.

This is basically the final boss for AI. I think most developers underestimate the degree to which their jobs outside of technical design are likewise cross-discipline. Even if I’m an extreme outlier in that regard, I truly don’t believe AI will do more than make the boring 60% of most jobs go faster and become more editorial in nature. Every level artist will begin playing art director a bit during the initial roughout before doing the polish stages themselves. That role will increasingly hybridize with specialized AI prompt engineering over the next several years (and once you get into ControlNet/LoRA/style masking combinatorics: it is fully worthy of the “engineering” descriptor) in order to remain competitive.

Player expectations rise every fucking year and this is more like a godsend. What people outside the industry rarely appreciate is that shit planning by hack managers is only a large minority of why we crunch. The actual majority of crunch stems from firm limits on creative vision unity vs team size - which is why we scale from 15 to 45 to 120 to 300 developers over the four phases of a modern production cycle. We cannot simply throw more bodies at the problem across the board and solve with scale, at least not until the final phase when the vision is either crystal clear or we need to go back and remake the game.

My actual problem as a gamedev has nothing to do with AI: from day one my problem has been capitalism. We make a fraction of the rest of the software industry because the ownership class are fully aware I would do this job for free if we had Universal Basic Income. Fuck it: I would crawl over a mountain of broken glass to be permitted to do my job. It is in many ways more fun than playing any game could ever be. Certainly more challenging and fulfilling.

Many game developers who manage to stay ahead of the very long line of kids eager to take each of our jobs for less pay already have this mentality. We are our own perfect scabs, and I’m already seeing AI incorporated into workflows: I use it as a junior systems design intern during Documentation-fucking-sucks Week to convert five bullet points into an initial two-page generic design template before adding the proprietary/project-specific bits and cleaning up.

Most of our concept artists use AI as Targeted Pinterest 2.0 during their initial reference/inspiration hunt, before going and drawing their concept from scratch for legal reasons.

And while Narrative Designers will absolutely demolish AI for key sequence dialogue now and for the foreseeable - AI can’t do human theory of mind, so the context has nothing to pair off with - over 70% of all games voiceover is systemic reactions/combat barks. None of which needs to be human authored and all of which is just wasting writers’ time cranking out monkey work.

So: I am extremely familiar with the latest AI developments, have shipped games with credits in every department except marketing over the last 27 years, and yet am feeling very good about the way things are headed in terms of their integration with the industry. Also the impact Gen Z and their values are having. I’m even looking forward to having the boring shit taken off my hands.

Less ethical and more idiotically-managed studios are going to have a few rounds of contraction-expansion: “yay now we can fire everyone and pocket even more billions!” …“oh wait, turns out we needed them for all the non-obvious stuff and now that we’re looking to expand we’ll have to rehire the exact same people.”

These things balance out and ultimately nearly everyone will be fine. It’s going to be a rough few years, but I’m a dozen times more worried about the election than AI.
posted by Ryvar at 8:02 AM on February 28 [33 favorites]


A bit of a derail: Rose Totino was a noted female entrepreneur in Minnesota in the mid-20th Century, in an era when it was difficult to succeed as a woman in such efforts.

But yeah, you can give her at least some of the blame for the ubiquity of frozen pizza today.

(Also, decline in quality has happened without the help of AI....but I can definitely see where AI is an accelerant.)
posted by gimonca at 8:09 AM on February 28 [5 favorites]


My big worry about the adoption of generative AI is just how fucking bullshit prompt design is. Trying to give instructions or describe a process using natural language sucks because of all the inherent ambiguity of natural languages. There's a reason why even high level programming languages are still worlds apart from the spoken word. I don't want my interactions with technology to be inhibited by some perverse game of telephone where I have to sprinkle pleasantries and magic words into my instructions and hope that the computer understands what I mean.
posted by RonButNotStupid at 8:16 AM on February 28 [8 favorites]


It will still cause damage to the creative field, but it won't be a complete extinction of creative jobs.

Acknowledging that not every creative person will lose their jobs, what happens to those people who do?
posted by EmpressCallipygos at 8:20 AM on February 28 [3 favorites]


just how fucking bullshit prompt design is

QFT. I saw a post in /r/StableDiffusion the other day about how if you want consistent faces across a series of Stable Diffusion renders it seriously helps to give the AI a name: instead of “a Swedish Woman” which will get you a completely random blonde-hair/blue-eyed/high-cheekbones face (it is what it is: AI is trained off humanity’s photos and popularity is a weight), if you instead ask for “a Swedish Woman named Jessica” you will get either the same face or - most of the time - a passably similar one. As if names were a separate series of facial seed values hard-coded into the Diffusion Model. So fucking weird, but apparently it mostly works.
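If anyone wants to poke at this themselves, here's a minimal sketch with the diffusers library (untested by me; the checkpoint is just the stock SD 1.5 one and the prompts mirror that reddit post):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    for seed in (1, 2, 3):
        gen = torch.Generator("cuda").manual_seed(seed)
        # Unnamed: the face tends to be a fresh random draw for each seed.
        pipe("a portrait photo of a Swedish woman",
             generator=gen).images[0].save(f"unnamed_{seed}.png")
        gen = torch.Generator("cuda").manual_seed(seed)
        # Named: the claim is that the name anchors a similar face across seeds.
        pipe("a portrait photo of a Swedish woman named Jessica",
             generator=gen).images[0].save(f"named_{seed}.png")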
posted by Ryvar at 8:32 AM on February 28 [8 favorites]


metafilter: Acknowledging that not every creative person will lose their jobs, what happens to those people who do
posted by MonsieurPEB at 8:43 AM on February 28


Yes, instead of working on hard problems that would be good for people, AI researchers have focused on tractable problems, the benefits of which are not actually that clear, and the drawbacks of which seem patently obvious. Nobody really wanted automatic image generators or bullshit generators, but it's the only thing the researchers could figure out how to do, so now we have automatic image generators and bullshit generators.

To me it's like saying "Curing cancer was too hard so we made an atomic bomb instead, isn't that cool and disruptive?"


I mean, I technically work with an "AI" (machine learning) tool: I've been training models to estimate joint positions on each frame of recorded videos of animal behavior so that I can extract research data about that behavior. Right? That's actually a potentially incredibly valuable technology for research work! You can do some really cool things with that data that we're just beginning to untangle how to analyze and dig into!
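To give a flavor of that downstream step (synthetic stand-in keypoints here--the real ones come out of a DeepLabCut-style model), once you have per-frame joint positions the behavioral measures are plain array math:

    import numpy as np

    rng = np.random.default_rng(0)
    frames = 300
    # Fake (x, y) positions per frame for three joints of one limb.
    hip = rng.normal([0.0, 0.0], 0.05, (frames, 2))
    knee = rng.normal([0.0, -1.0], 0.05, (frames, 2))
    ankle = rng.normal([0.5, -1.8], 0.05, (frames, 2))

    # Knee angle per frame from the thigh and shank vectors.
    thigh, shank = hip - knee, ankle - knee
    cos = (thigh * shank).sum(axis=1) / (
        np.linalg.norm(thigh, axis=1) * np.linalg.norm(shank, axis=1))
    knee_angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    print(f"mean knee angle: {knee_angle.mean():.1f} deg +/- {knee_angle.std():.1f}")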

The dogshit stuff is about how marketers and tech startups have focused on directing the technology to produce immediate capitalist profits. That's an entirely different kettle of fish. And it's all about societal and social incentives; it's not necessarily the fault of researchers or even tech workers more generally--this is really about finance, marketing, and business. Here's a discussion I found really helpful this morning for laying out what is happening around AI and why. TL; DR: It's all about the money, not the tech.
posted by sciatrix at 8:45 AM on February 28 [10 favorites]


As much as I love AI toys, I am so worried. Take radiology AI. The buzz is that a good system can read your x-ray better than a radiologist. The AI is going to catch your cancer earlier, or is going to accurately see that the little shadow is a benign artifact, or whatever. Except is that true at all?

I think this is a classic computer vision and classification sort of ML application that would have just quietly produced an effective medical tool if there weren’t this huge hype wave about the broader field. Because of that hype wave people are encouraged to view it in a polarizing way, as a contest between man and machine, but if the question is - will the manifest improvements in image segmentation and classification in recent years be useful in radiology, I mean, I’d be a little surprised if they aren’t?
posted by atoxyl at 8:46 AM on February 28 [6 favorites]


....now that I've mulled it a bit more, I'm starting to think that whole CEO-ML thing I mentioned earlier might actually be a viable concept, with a couple of really interesting potential consequences.

First, deploying it anywhere would immediately spark an "oh shit" moment in the executive boardrooms, and I'd bet you'd see a sudden upsurge in concerns about the consequences of AI among people who would otherwise be perfectly happy to replace YOU with a robot (an "I never expected the face-eating leopards to eat MY face!" type of moment). So that would be fun, and possibly useful in setting limits on AI generally.

Second, it might kick off a round of downward pressure on executive pay. Not to say it would happen easily, but a few rounds of unruly shareholder meetings and you might see some successes on that front. Moreover, IF (large if) your system had some positive results (even in simulations with real data) you may be able to get one or more large investors adding their own pressure. An entity like CalPERS carries a very very large hammer, if only they'd use it.

...basically, flip the script on the bastards. Use their own tools against them, both technically and legally (after all, duty to shareholders is one of their favorite reasons to crush people, so use that same "duty" to crush them instead). Kinda like the Musk-Delaware thing, but everywhere.
posted by aramaic at 8:49 AM on February 28 [9 favorites]


I honestly don't understand why companies aren't more skeptical towards generative AI, given how many opportunities it creates for individual employees to become some sort of irreplaceable model whisperer whose intimate knowledge and nuance are required in order to get the desired output.

Or maybe they know this and everything is just an elaborate scheme to steal job security from those who possess skills (programmers, artists, writers, etc.) and reallocate it a few rungs up the management structure to those who traditionally don't. MBAs are replaceable, but a manager who knows how to literally seduce ChatGPT into producing the right output is worth their weight in gold.
posted by RonButNotStupid at 8:54 AM on February 28


Mmkay, well, love mentioning the open-source tools I'm using in my purely non-commercial research and getting immediately compared to a Nazi. That's definitely a fun experience in community conversation. Sorry I'm not blockable, I guess?
posted by sciatrix at 8:54 AM on February 28 [3 favorites]


The buzz is that a good system can read your x-ray better than a radiologist.

I work at a hospital, and a couple of weeks ago I attended an hour-long lecture by a visiting lecturer, an AI expert who is also a medical doctor. (Keeping it vague because I am not supposed to identify my employer on social media, which they would consider MeFi to be.) He talked a lot about the way they are incorporating AI into processes at his big academic hospital to speed up dealing with patient messages and writing clinic notes.

So doctors currently spend a big proportion of their time creating documentation and answering questions. And those things need to be done! But doing them more quickly is always a goal because it frees them up for more patient care. So we already have a bunch of things to make things quicker, like:

* not making them type (or hunt and peck because these are doctors) so much, creating macros and templates for things that can automatically be loaded and a few blanks filled in to customize for the patient

* hiring scribes, people who take down dictation during the visit, so the doctor talks to the patient about what is happening and occasionally interjects to the scribe with specific things that need to be notated, and then the doctor can clean up the scribe's note and submit it instead of having to do all that stuff from scratch. A good scribe who has been trained appropriately can save a doctor enough time in clinic to see 30-50% more patients. But because they are usually medical students working part-time, there's a lot of turnover, plus some medical systems don't like using them for various reasons.

* having nurses whose job is to answer patient questions, and they are empowered to handle things up to certain complexity.

Having AI to do the job of the scribe or the job of the nurse would be great, assuming the AI is as good. And at least in the case of the nurse, it would be helpful to the system since we have a shortage of nurses.

Getting back to the lecture, the lecturer gave a lot of examples of the AI making weird mistakes when answering questions--for instance, they have had to tell it exactly what to say in answer to queries where the science has changed since its training data ended. It was a cautionary talk, in a lot of ways. But the caution wasn't "don't use AI", it was "never give the AI autonomy"--that is, everything has to be checked over by a human. But checking its work is a lot faster than having to do the work themselves, so for the doctors it's a clear win, which allows them to see more patients or to have more time for the rest of their lives (burnout also being something we're very concerned about).

These were doctors from a non-profit, academic institution where doctors (not MBAs) are the ones in leadership, talking to other doctors at other institutions with similar leadership, so I am fairly confident that the way they roll out this tech will be cautious and effects on patient care will be scrutinized. I am not at all sanguine about similar programs at for-profit and CEO/MBA-run institutions.
posted by joannemerriam at 8:55 AM on February 28 [16 favorites]


Feed it the consultant spreadsheets, give it access to the financial data, and an ML system could probably do a better job than most human CEOs, especially if you used nifty adversarial tricks (CEO-ML, CFO-ML, etc., all adversarial-training on each other).

....now that I've mulled it a bit more, I'm starting to think that whole CEO-ML thing I mentioned earlier might actually be a viable concept, with a couple of really interesting potential consequences.


or,
the computer did that auto-layoff thing to everybody


(Better keep Bullshit Jobs out of the training corpus, so SkyHR doesn't eliminate half of the workforce overnight.)
posted by snuffleupagus at 8:57 AM on February 28 [1 favorite]


Acknowledging that not every creative person will lose their jobs, what happens to those people who do?

Exactly what we do with all the dead/straggling taxi drivers pushed out by Uber. Absolutely nothing!

I mean look, we already literally work animators to death. Comic artists living off of food stamps. Writers of our beloved tv shows getting screwed out of royalties. Why would we change? We allow our raw materials for electronics to be harvested by literal slaves, you think we're gonna lift a finger for ~artists~?

But you know, years from now we'll get the image/video/text equivalent of those conservative/right-wing twitter users who bemoan why we don't see great works of art in sculpture or classic decorative architecture, where we now just see boring samey rectangle buildings and a complimentary fountain. And then a bunch of artists and carpenters will go "uh yeah, we know how to do that if people would PAY US" but this does not happen because capitalism.

Also some artist similar to Roy Lichtenstein who knows how to schmooze with rich white dudes will incorporate AI in an art piece that sells for 1 million dollars and everyone will pat themselves on the back going "Hmm yes, AI has not killed art because this museum full of Artists The Guerrilla Girls Are Mad At featured it and there's lots of thinking and navel-gazing to do in the gallery. Art is not dead!"

So, that will also happen.
posted by picklenickle at 8:58 AM on February 28 [12 favorites]


It just seems like a good bet to me that medicine in the future will have more magic boxes that help process the results of various optical/radiological/acoustic scans of the human body into actionable results - they seem to like that sort of thing!
posted by atoxyl at 9:02 AM on February 28 [1 favorite]


So doctors currently spend a big proportion of their time creating documentation and answering questions.

I think the potential for transcription and “fancy search” applications is very large, as just a minimal extrapolation from the language and audio transcription models we already have. It’s also potentially very labor-unfriendly as it means companies can try to capture institutional knowledge permanently. On the other hand we already have lots of searchable data stores and nobody bothers to check them or to put things in the right place and sometimes companies don’t want to keep a complete record of communications because what if we end up in court?
posted by atoxyl at 9:13 AM on February 28 [1 favorite]


OK, so I read TFA before commenting, though there were like 2 comments when I started, and I'm not reading the subsequent 40 before coming here to say that the "AI" that is "already better than you" is talking about very niche notions of "AI" and "you," and what "you" are good at that the AI competes with.

Insert boilerplate about how LLMs are a blind alley if what you're looking for is "strong AI" or "artificial general intelligence." They have a lot of amazing behaviors, but they don't have any kind of knowledge representation. People who hope that knowledge representation will emerge spontaneously from digesting a big enough snapshot of the Internet are barking up the wrong tree. Knowledge representation will emerge spontaneously in systems that have to interact with the real world, or something like it initially, in an iterative evolutionary way where unsuccessful attempts to represent knowledge get killed off.
posted by Aardvark Cheeselog at 9:19 AM on February 28 [5 favorites]


What people don't seem to get - or get but somehow don't accept - is that economic AI doesn't have to be good, it just has to be good enough to generate more profit per unit of production than the specific alternative.

And supplanting that specific alternative doesn't mean nicer things at higher prices or lower margins go away. 12 hours of misery in seat 50B for $700 doesn't replace or even meaningfully compete with 12 hours in luxury in seat 2B for $5,000. For a long transition period we will see many analogies to that, and we'll probably always have some handcrafted boutique products/services that can at least be priced to a premium to AI products/services ... sometimes because they are better in some clear way, other times just because they seem better.

And super-economic transformative AI doesn't have to be perfect, it just has to be better than almost all non-AI solutions in some sufficiently important absolute respect. That's why (for example) everyone will be in self-driving cars within a short amount of time: so much safer, even allowing for the occasional accident, that the legacy solution (people hand driving) will just go away.
posted by MattD at 9:28 AM on February 28 [2 favorites]


Having AI to do the job of the scribe or the job of the nurse would be great, assuming the AI is as good. And at least in the case of the nurse, it would be helpful to the system since we have a shortage of nurses.

Seriously?
posted by splitpeasoup at 9:33 AM on February 28 [2 favorites]


Having AI to do the job of the scribe or the job of the nurse would be great, assuming the AI is as good. And at least in the case of the nurse, it would be helpful to the system since we have a shortage of nurses.

Dragon Dictation has a medical-focused version of their product (I think it's mainly just an expanded domain-specific vocabulary) which I've seen used in a GP clinic my company does some work for. Some of the docs still prefer to type up their patient notes manually, but for the docs who record them the process went from
  • Transcriptionist (a staffer with other duties as well) types up notes from recording
  • Doc reviews and corrects as necessary
  • Notes go into patient records
to
  • Transcriptionist feeds recordings into dictation software, reviews transcription
  • Doc reviews, corrects etc.
  • Notes go into patient records
Critically, there are still trained human eyes on every step of this. The software isn't trusted enough to work unsupervised, and everyone involved understands its limits. And no-one lost a job when it was adopted, it just improved the throughput.

Of course, Dragon's product is nearly 30 years old at this point and not remotely related to the current LLM/GAN boom that may end up doing for the term "AI" what blockchain hype did for "crypto".
posted by figurant at 9:34 AM on February 28 [3 favorites]


It's good to know doctors have no problem replacing nurses with robots, but I can't help but wonder how doctors feel about replacing the doctors as well. After all, it's easier to build a robot that can look at an X-ray or a chart than a robot that can start an IV.
posted by grog at 9:53 AM on February 28 [5 favorites]


But the caution wasn't "don't use AI", it was "never give the AI autonomy"--that is, everything has to be checked over by a human

I think there are (at least) three problems here: one is that this ignores the likelihood that systemic time and budget pressures, as well as individual doctors' personal characteristics like laziness, corner-cutting, or credulous faith in AI, will eat away at the "checking over" stage. There are no multiple levels of verification here - just one, subject to the same factors that made people look for AI shortcuts in the first place. Checking well takes energy and time.

Another is that sometimes, as any editor has experienced, in reviewing content you didn't create you miss mistakes and false statements that you would never generate yourself.

The third is a trust issue. What does it do to patient-doctor relationships when you know the information you've gotten was machine-generated and only maybe checked over by your doctor? What does it do to trust in your personal medical care, and in the healthcare system in general?
posted by trig at 9:59 AM on February 28 [8 favorites]


Some things are also taken as signaling affiliation rather than quality; see the Trump team's apparently intentional strategy of not just not correcting the grammar and capitalization issues in the posts he writes - which strongly signal "ignorant" and "huckster" to some people - but amplifying them and imitating them in posts people ghostwrite for him, because they signal something different to his target audience

I don't think this works in politics but there's an amazing IT security paper, which asks a question whose answer is obvious if you've studied economics or biology: "Why do Nigerian scammers say they are from Nigeria?" The argument is roughly that scams are a multistage funnel, so efficient scammers want all the people who can spot a scam to select out at the cheaper first stage.

I guess it still applies but it's not clear to me whether "covfefe" is a signal or a screen. If I'm being honest, it kinda seems like Trump is not subject to normal selection pressures at the ballot box (or the jury box, or the soap box) and thus this is less a deliberate strategy and more of an absence of selection pressure to do better.
posted by pwnguin at 10:14 AM on February 28 [2 favorites]


What are we actually supposed to do about this? How can we actually make change here? This is absolutely an open question, not a sour challenge!

Here's some possibilities, in approximate order of reasonableness:

1) Stay informed about the misleading hype, dubious morality, and shady practices involved in the "AI" boom and share this knowledge and perspective, especially with people who may be less likely to be well-informed.

2) Reject, internally and socially, the false narratives promoted by "AI" hype; for example, don't refer to it as "AI", which is inaccurate at best--call it machine learning, LLMs (large language models), or if you want to be more abrasive, stochastic parrots or plagiarism machines.

3) Raise a stink on social media when companies use plagiarism machines in their creations. Make corporations realize there's a downside.

4) Boycott companies and institutions that use stochastic parrots (this will likely become increasingly difficult).

5) Join or start a consumers' union or association to organize and amplify your power. (Or maybe a readers' union, a watchers' union, a browsers' union.)

6) Butlerian Jihad!!!
posted by overglow at 10:26 AM on February 28 [13 favorites]


everyone will be in self-driving cars within a short amount of time: so much safer, even allowing for the occasional accident, that the legacy solution (people hand driving) will just go away.

Self-driving cars are a way harder target than diagnostic imaging, really, because they need to get it right in real time and aren’t much good until they can do that with no unexpected handoff to the occupant (who is playing games on his phone). There’s something of a step function of practical value.

A tool for analyzing x-rays just fits into the arsenal that medical professionals already use as soon as it is good enough to offer some value, and then maybe at some point it's good enough that you don't need as many medical professionals.
posted by atoxyl at 10:31 AM on February 28 [4 favorites]


It's good to know doctors have no problem replacing nurses with robots, but I can't help but wonder how doctors feel about replacing the doctors as well. After all, it's easier to build a robot that can look at an X-ray or a chart than a robot that can start an IV.

I mean, on the gripping hand: there are not nurses right now in the numbers we need. Seriously. Nationally and internationally, we are not training enough nurses, and we are not paying them enough to retain them without burning them out.

Now, we can respond to that shortage in a wide range of ways, right? We can pay better (enticing more people into the field); we can make training cheaper; or we can otherwise change the certification standards (most important in terms of accepting immigrants with training from other nations, which may or may not meet the quality standards of the recruiting nation).

Of course, we-the-people-in-conversation are not the people who actually get to make those decisions. A for-profit hospital is going to find a lot of these solutions painful in some way because it has a bottom line, but even a not-for-profit whose mission is to deliver health care is going to look at that shortage and wince unless it has essentially unlimited funding. Which is, I have to stress, not the case for any governmental institute. So if nursing becomes "more expensive" to pay for--which is very possible, given the currents of "remembering how COVID treated healthcare staff especially nurses," "labor shortages reflecting early deaths and poor training decisions," and "burnout because of stupid staffing choices,"--it's worth considering the situation in the context of "how do we use the nurses we have with maximal efficiency and retention" rather than "how can we get the most healthcare product at a shittiness level that people will accept and pay for with the lowest possible input cost."

Which still leaves places to improve the working experience with technology. It just means that we need to have a society where employers--whether or not their paycheck comes directly from a nationalized institution that frees them from the "need" to make a profit in terms of motivation--are making decisions within a range that has been agreed on as acceptable by both workers and consumers, rather than a range that has been decreed as acceptable wholesale only by employers. Our entire society and economic strategy has been dictated by "employers" (aka capital) this entire fucking time. The problem is hemming them in legislatively, given that they've had an essentially free hand with the landscape since before I was born.

A simple problem, really. *laughs despairingly into hands*
posted by sciatrix at 10:38 AM on February 28 [10 favorites]


12 hours of misery in seat 50B for $700 doesn't replace or even meaningfully compete with 12 hours in luxury in seat 2B for $5,000.

It does in that the pricing regime you designated "2B" is permitted to take up space on the plane.

Deregulation, what a shitshow
Deregulation, now everything blows
We know you're wishin'
That it'd go away
But Deregulation's been here and it's here to prey!

posted by snuffleupagus at 10:40 AM on February 28 [2 favorites]


Re: Machine Learning medical diagnosis - yes it should always be 100% doctor reviewed, but for all the reasons trig points out we will definitely fuck that up. And people will get hurt and/or die, and lawsuits, and the surviving hospitals / health providers will be the ones that get it right or only a little wrong.

A sane society would have some legislation regulating how ML expert systems of all kinds are integrated into all kinds of major industries, but at least in the US we don't actually have a functioning legislature, so basically a bunch of people are going to get very badly hurt until it eventually gets worked out.

It’s worth pointing out: 40,000+ open source AI models over on HuggingFace = no you cannot regulate what individuals do with AI anymore. 15-year-old shitlords making deepfake porn of the girl who rejected them using diffusion models instead of Photoshop is with us forever, now. But you sure as shit can regulate how different industries utilize any ML-based system, same as anything else (eg, HIPAA).

Acknowledging that not every creative person will lose their jobs, what happens to those people who do?

For the love of god please fact-check what I’m about to say with somebody who knows what they’re talking about (an economist?), but my understanding is that this isn’t how it works.

The goal of the ownership class is to maintain a monopoly, not only on silly things like state violence, entire industries and the means of production, but also on the majority of the labor pool itself. Too many idle workers and they might start getting up to shit. Forming-effective-competitors-to-your-continued-parasitic-harvesting-of-surplus-value types of shit. That’s no good!

Flipside, having very high employment removes the omnipresent threat of starvation from the majority of the workforce. If they aren’t surrounded by people about to lose their homes and families they might start doing things like demanding higher pay or working from home. And if they work from home then how are you going to sashay out into the open pit full of hundreds of workers with no privacy, think the very best of all possible thoughts like, “I own you,” and then run back to your office to furiously masturbate? It’s impossible!

So the entire ownership class is collectively incentivized to respond to efficiency gains with expansion until they once again have a lock on ~85% of the labor pool. Sudden spikes in efficiency and quarterly shareholder calls / “The Line Must Go Up” will result in short, sharp contractions before the expansion cycle kicks in, but there is a basic steady-state tension at work here, and even something as potentially disruptive as AI advancement (short of AGI) is unlikely to break that.

I could very easily be wrong about all this, though. Just the impression I was under.
posted by Ryvar at 10:40 AM on February 28


I haven't been to sleep (Shiren 6 is out, plus my sleeping schedule just naturally tends towards nocturnal), but then this thread happens and there's a half-dozen things I want to respond to before the thread cools down and it all becomes tl;dr.

So I'm just going to flatly state my points and duck out:
- Things intended to empower individuals usually end up empowering bigcorps, who can use those same things at scale.
- Capitalism is bad, but also bad is how moneypeople don't hear any of the qualms of the world about their actions because their peer groups are largely made of other moneypeople.
- The people who are worried about "AI" are outnumbered by those who just don't care, and that's exasperating.
- In addition to depriving creatives of jobs, these forms of "AI" are dangerous because they make things that look like a thinking person made them, when no thought was involved despite appearances: what's there is accidental, or stolen from someone else.
- The difference between these complex neural networks that cost rainforests to run and the Markov generator I hacked together in Python is much less than I was expecting, considering the difference in cost and complexity. (A sketch of what I mean by a Markov generator is below.)

* I put "AI" in scare quotes because it isn't.
posted by JHarris at 10:41 AM on February 28 [10 favorites]


Agreed with all that except the costs rainforests part. GPT-5’s in-progress training cycle is estimated at 200 GWhr, or about 200,000 US households’ annual consumption. Bitcoin is fucking useless and eats 600 times as much every goddamn year.

After training, GPT-5 inference estimates are likely to be 25~30 Google searches per query (<0.01 kWhr). Both my gaming PC and company workstation eat 100 times that every hour.

LLMs aren’t great, but if you want to help the environment, start by killing off Bitcoin, then offer tax breaks to companies that employ remote workers in the US.
posted by Ryvar at 10:54 AM on February 28 [10 favorites]


grog, I'm talking about AI vs human output, so having 40-60% overlap is important. That's the part of the venn diagram AI can potentially eat, which it has not already eaten. Capitalism is the dominant way of conceptualizing forces of nature thru a human lens right now. I hate it, too.*

Tech IS a force of nature, and that's what I was talking about. Spiderwebs and nests are tech. Tech in arts is inevitable. Capitalism in art sucks all the meat off the bones and doesn't know what type of animal it just ate, or how to prepare it. They can buy it, tho.*

*simplifying and trying to be a little cheeky. you get it, yeah?
posted by es_de_bah at 11:08 AM on February 28 [2 favorites]


Having tried LLMs for speech recognition, the idea of them being in any way involved with medical transcription terrifies me. Everyone speaks in their own personal accent with an individualized cadence, pitch, and volume. Unless the people having the conversation are speaking like newscasters directly into their mics, what comes out is often gibberish. Or worse, the LLM thinks someone's voice is background noise and omits whole sentences entirely.

Capturing quality audio is hard enough. Capturing real-world audio that matches your training set is so much harder. The future of "AI" feels like a patient complaining about back pain but it never shows up on their chart because a machine goes "bing".
posted by lock robster at 11:31 AM on February 28 [8 favorites]


That's why (for example) everyone will be in self-driving cars within a short amount of time: so much safer, even allowing for the occasional accident, that the legacy solution (people hand driving) will just go away.

This is just wrong. Actually useful "self-driving" cars have been "a short amount of time" away for years, while the things sold as such in the real world are demonstrably much more dangerous to pedestrians and other drivers. If you want to reduce car accidents, trained humans running mass transit is a much better solution than trying to make Elon Musk's fantasies real.

A machine taking as long as necessary to sort a very specific image, at a fixed scale, with few salient details is very much the sort of thing that can be reduced to a reasonably sized algorithm and double checked by a human specialist. Real time interactions with a chaotic environment, moving tons of steel at tens of miles an hour mere feet from human beings is not.
posted by The Manwich Horror at 12:09 PM on February 28 [5 favorites]


Great thread.

Self-driving cars are a good metaphor for AI. Improvements to driver safety from innovations like lane-keeping, lane-change warnings, adaptive cruise control that maintains safe distances, and driver-fatigue detection are all helpful. Just trusting that autonomous vehicles will make all traffic-related problems go away... wrong.
posted by Artful Codger at 1:00 PM on February 28 [5 favorites]


(N.B. I am a very stupid person.) WRT the Totino's analogy, what's the market segment of writing / drawing that will get replaced by a cheaper and worse machine generated equivalent? Everyone needs to eat and clothe themselves, but I don't feel I spend money on generic images or impersonal text. (This isn't to dismiss anyone's concerns, I'm just having a failure of imagination.)
posted by Hermione Dies at 1:02 PM on February 28 [1 favorite]


Minor correction of the above: the original figure I looked up for US household consumption was actually the monthly (~1 MWhr), not the annual (9.5~12 MWhr). So training GPT-5 is projected at 17,000~20,000 US households’ annual consumption (it’s widely rumored to be 100,000 nVidia A100s for three months, roughly four times GPT-4). Other figures remain the same, including Bitcoin.
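
A quick sanity check of that arithmetic in Python, using only the figures above (all of which are estimates and rumors, not confirmed numbers):

    # Sanity check -- every input is an estimate quoted above, not a confirmed figure.
    training_gwh = 200                # projected GPT-5 training energy
    household_mwh_per_year = 10.5     # midpoint of the 9.5~12 MWhr range
    households = training_gwh * 1000 / household_mwh_per_year
    print(round(households))          # ~19,048, inside the 17,000~20,000 range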
posted by Ryvar at 1:03 PM on February 28


Having tried LLMs for speech recognition, the idea of them being in any way involved with medical transcription terrifies me.

To be clear, they were discussing ways this might be made to work. What's happening right now is scribes or transcriptionists or software like Dragon.
posted by joannemerriam at 1:11 PM on February 28


WRT the Totino's analogy, what's the market segment of writing / drawing that will get replaced by a cheaper and worse machine generated equivalent? Everyone needs to eat and clothe themselves, but I don't feel I spend money on generic images or impersonal text. (This isn't to dismiss anyone's concerns, I'm just having a failure of imagination.)

Wrt text: Website copy, marketing copy, the cheap genre books that authors already churned out at a rate of one a month or more, news articles (which is insane, but look at how many big-name news publishers have already been "experimenting" with LLM-generated articles, sometimes without even mentioning that fact anywhere). There's been a lot of talk about screenwriters being replaced by AI; hopefully that won't happen, but I wouldn't bet against it.

Images: Until now if you wanted semi-decent images for anything (websites, marketing materials, packaging, cheap game graphics, those pointless generic pictures that online news sites seem duty-bound to attach to the top of any article, etc.) you would either need to create them yourself (not common), have an artist on staff to do them, commission one-offs, or use stock images. The market for artists and photographers in the last three roles must be taking a hit (especially for stock images, I'd guess).

I wonder how long it is before AI-generated video starts making inroads on youtube, to compete with the current clickbaity stuff. There are already a lot of "computer-generated voice reads probably computer-generated text" videos for news-type coverage. And have you ever searched for videos on various skills, like sewing or woodworking or cooking or dinosaur raising? These days there are a ton of "Secrets professional seamstresses/carpenters/chefs/criminal paleontologists will never tell you!" or "Top 5 [activity] hacks/tools that will change your life!" videos, many (though not all) nearly identical, mostly (but not universally) full of dubious or not very useful "hacks", often without any dialogue and with instrumental soundtracks (that last one might vary by activity, I'm not sure). Many of these are listed as getting over a million views. Seems like a ripe target.
posted by trig at 1:42 PM on February 28 [3 favorites]


The floor of the (exponential growth metaphor) swimming pool is wet. It still seems impossible you'd ever be able to swim in it — hell, you can barely make a satisfying splash by stamping your feet — but there's water there.

Not much longer to wait to find out if the purported magical force field (vive la difference!) that will prevent the water of AI from ever filling the swimming pool of intelligence actually exists.

Also, I'm pretty sure that AIs to whom you have to speak civilly to get optimal results are exactly what we need as a society, not a bug to be fixed.
posted by lastobelus at 2:26 PM on February 28 [1 favorite]


Also, I'm pretty sure that AIs to whom you have to speak civilly to get optimal results are exactly what we need as a society, not a bug to be fixed.

I forget whether it was Clarke or Asimov who predicted a society in which people defaulted to speaking politely to all forms of AI - even the deliberately sub-sapient service types - on the theory that failure to do so would bleed over into how people treated each other. This seems preposterously naive in the post-Trump era, though I know it didn’t feel that way - at least to me - before.

We’ve lost something terribly important, and I don’t know that we’ll get it back.
posted by Ryvar at 2:39 PM on February 28 [4 favorites]


The Dragon medical transcription software that’s been around for years mostly works. It makes some errors, but they are generally obvious ones (e.g., a hilariously incongruous word that sort of sounds like the right word). If you’re reading medical notes in an institution that uses Dragon, you know to expect that sort of thing, and in fact the system automatically appends a disclaimer to all transcriptions. Counterintuitively, I think you could make an argument that a smarter system based on LLMs might be more dangerous, in that it might make more subtle errors that sound plausible for the context.
posted by dephlogisticated at 5:06 PM on February 28 [4 favorites]


Self-driving cars make sense. Driving is an activity that would work better if it were entirely automated, which is to say, if all cars were self-driving and had only to navigate around other, self-driving cars. At that point, driving would work like a simple program works. The problem is that you have human and non-human actors in the mix. It becomes unpredictable. Humans don't want to relinquish their sense of control, and so the best thing we could possibly come up with is a self-driving car that allows you to feel like you're driving it, like Maggie in the opening credits of The Simpsons. I don't love AI applied to art and writing, but mundane shit like this? Yes, this is what we invented tools for, man.
posted by kittens for breakfast at 4:51 AM on February 29


Driving is an activity that would work better if it were entirely automated, which is to say, if all cars were self-driving and had only to navigate around other, self-driving cars.

There are a few problems with that. One is that other cars will never be the only things on the road: pedestrians, animals, fallen trees or rocks, and damage to the road itself will all need to be avoided. Another is that, unless they are networked together, predicting what another system is going to do purely from observation of its current speed and position is another difficult problem--especially if there are different models of computer involved, or if they are moving past each other at different speeds, complicating the process of identifying individual cars and their positions in space. And, of course, cars are wildly inefficient in ways we probably can't sustain long term. Putting every one or two people behind their own engine is massively wasteful.

You could fix all these problems. For example, by sealing roadways off underground or overhead where they will be separate from human and animal traffic, creating a single system to direct all traffic, and building single larger vehicles to carry more people going the same direction. But then you've just got a train.
posted by The Manwich Horror at 6:48 AM on February 29 [8 favorites]


Self-driving cars make sense. Driving is an activity that would work better if it were entirely automated, which is to say, if all cars were self-driving and had only to navigate around other, self-driving cars. At that point, driving would work like a simple program works.

Echoing The Manwich Horror here.

Fully-automated people-carriers running on dedicated paths make sense. Independent driverless autonomous personal vehicles running on today's streets and roads do not. Just from a technological point of view, the best current implementations of driverless vehicles (eg container ports) involve a heavily modified environment and central control - two things missing from the pipe-dreams of the proponents of self-driving vehicles. To say nothing of the fact that self-driving cars simply won't scale up to serve everyone; only a well-off minority will benefit.

So again, fully-automated people-carriers running on dedicated paths make sense. Trains, subways, LRT, trams, buses. Urban areas should be designed for people, not cars.
posted by Artful Codger at 8:11 AM on February 29 [4 favorites]


>>Having tried LLMs for speech recognition, the idea of them being in any way involved with medical transcription terrifies me.

>To be clear, they were discussing ways this might be made to work. What's happening right now is scribes or transcriptionists or software like Dragon.


Dragon is already testing "AI" inside of dictation. The provider sets a cell phone between themselves and the patient and records the whole conversation. The provider will say some specific things deliberately meant for the chart ("Patient is a 24 yo male, complaining of rash on right arm. No noticeable rash elsewhere," etc.), but then the AI listens to the rest and determines what is pleasantries and what is relevant to the chart (pt. saying "the rash hurts so much I can't sleep").

The turnaround for a fully prepared note is typically seconds to one minute.

The doctor does have to review the note and sign it (and edit as needed). The demo, of course, looked great. We have rolled it out to a small group of providers and we really haven't heard any issues. Rolling out to another small group next week, so if I have any more information I'll share.

And, yes, it was said almost explicitly that we are testing it to see if we can replace our scribes with this, which is why I am staying as far away from that project as possible.
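
To give a sense of the shape of the thing in code (the product's internals aren't public, so every name and the toy classifier below are hypothetical stand-ins):

    # Hypothetical sketch of the ambient-dictation filtering step. Dragon's
    # actual pipeline is proprietary; the classifier here is a stand-in.
    from dataclasses import dataclass

    @dataclass
    class Utterance:
        speaker: str   # "provider" or "patient"
        text: str

    # Keep only chart-relevant lines; a human must still review and sign.
    def draft_note(transcript, is_clinical):
        return "\n".join(u.text for u in transcript if is_clinical(u.text))

    # Toy usage, with a keyword check standing in for the real model:
    toy = [Utterance("provider", "Patient is a 24 yo male with rash on right arm."),
           Utterance("patient", "How about this weather, huh?"),
           Utterance("patient", "The rash hurts so much I can't sleep.")]
    print(draft_note(toy, lambda t: "rash" in t.lower()))

The interesting (and scary) part lives entirely inside that classifier, which is exactly the piece nobody outside the vendor gets to inspect.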
posted by a non mouse, a cow herd at 8:40 AM on February 29 [4 favorites]


The doctor does have to review the note and sign it (and edit as needed).

It's a good thing docs are so assiduous in reading their notes before signing!
posted by mittens at 9:32 AM on February 29 [2 favorites]


I think this is a good essay, but I don't understand where this bit is possibly coming from:
We wouldn't make fun of hand-authored writing in a videogame for being bad, partly because it would be a shitty thing to do, but also because we know that it wouldn't convince anyone at the studio to invest more in the writing team. That's not how this works.
I think I disagree with both premises there, but I definitely don't agree with the first one! (If I squint at it, I can guess that maybe it makes sense to think it would be obviously rude to make fun of bad game writing if you assume the only thing that leads to bad game writing is underfunded writing staff? But I don't think that's really how creative endeavors work!)
posted by nobody at 10:07 AM on February 29 [1 favorite]


Humans don't want to relinquish their sense of control, and so the best thing we could possibly come up with is a self-driving car that allows you to feel like you're driving it.

I dunno about that. It’s not that nobody ever enjoys driving but I think there would be plenty of interest in a car that can just take you wherever while you work or watch a movie or nap. In a way it’s a natural evolution of the core appeal of the automobile - it makes the luxury of a private car available to the masses, now with a driver included. The technical difficulty with that vision is that “almost autonomous” isn’t good enough to deliver on it. And then the larger issue with cars is that they are never going to be the safest or most efficient way to travel a route that thousands of other people travel daily.
posted by atoxyl at 10:21 AM on February 29 [2 favorites]


There is a problem with self-driving cars, and by extension AI, that I have never seen anyone bring up, but it is significant. Presumably it would navigate the array of streets using Google Maps or an equivalent service, but sometimes, Google Maps is wrong.

There are two intersections in my town where Google Maps will not always route through them correctly: instead it will choose to go the wrong way, then come back to the intersection from a different direction (sometimes involving a U-turn further down the road), and only then proceed the proper way. This happens consistently, and for one of these intersections Google Maps has done this for over a decade. If there is a way to tell Google about the issue, I don't know it; but on top of that, why is it my job to improve their multi-billion-dollar product?

Anyway, if I were in a self-driving vehicle, this problem would become rather worse. In one of the cases, the alternate route is nearly two miles longer.
posted by JHarris at 11:50 AM on February 29 [1 favorite]


LLMs aren’t great but if you want to help the environment start by killing off Bitcoin

Why not both dot jpeg
posted by The Ardship of Cambry at 1:15 PM on February 29 [3 favorites]


The fact that AI may be able to pump out stuff that looks like 40-60% of the human made content out there means that that content was already shitty.

The thing is, creative arts are lifelong endeavors that never truly make it out of the practice stage. You can't perfect creativity any more than you can perfect the universe. A certain proportion of human-made content out there may be shit, but that's where we all start from when getting better at things.

Without people being able to make a living while practicing--being mediocre at things that aren't as critical--how are we going to get artisans of the craft? AI rips the guts out of creative economics, and they were already pretty shit to begin with. I'd have more faith this isn't a creative implosion of our species if reality TV weren't so god damned popular.
posted by Your Childhood Pet Rock at 1:41 PM on February 29 [2 favorites]


yay. Another lawyer caught with bogus AI-generated case law. The judge is apparently giving the lawyer a pass cos she said "Sorry" quickly and nicely enough as soon as she was busted. But the Law Society of British Columbia is now investigating.

If the opposing lawyers hadn't caught it, this lawyer might have gotten away with it. Lovely.

Memo to companies currently designing AI products for the legal industry: maybe incorporate a separate final pass that finds and verifies any generated case citations?
posted by Artful Codger at 2:55 PM on February 29 [1 favorite]


So again, fully-automated people-carriers running on dedicated paths makes sense

How many times will Aramis be resurrected and killed again?
posted by snuffleupagus at 9:04 AM on March 9

