can everyone kindly shut the f up about AI
March 5, 2023 5:51 PM

The robots aren't coming, but the people who can't shut the fuck up about them are already here. Please please please can everyone just take a moment to shut the fuck up about AI? It’s so stupid and it’s barely even started and already everyone can’t stop nutting about how cool it is BUT IT IS NOT COOL. "It's fascinating and it's going to change every single aspect of our lives forever!!!" Do you even hear yourself??? It’s a fucking chatbot just like Alexa, Siri, and the exception that proves the rule, the GOAT itself SmarterChild.
posted by SituationNormal (89 comments total) 25 users marked this as a favorite
 
The writer is sensitive, you see, because his name is Al.
posted by SPrintF at 6:04 PM on March 5, 2023 [6 favorites]


Sounds like someone's got a case of the Mondays a bit early.
posted by Halloween Jack at 6:12 PM on March 5, 2023 [7 favorites]


ROTATE THE POD
posted by clavdivs at 6:16 PM on March 5, 2023 [7 favorites]


Sometimes I feel like the original dot-com boom was a fever dream hallucination and then I'm reminded of shit like Modo folding the day before their launch party and having the party anyway or Ask Fucking Jeeves having a giant-ass balloon in the Thanksgiving Day Parade and I nod sagely to myself and say "ah, yes, it was absolutely that wildly stupid."
posted by phooky at 6:17 PM on March 5, 2023 [26 favorites]


I understand their annoyance but this feels like yelling about the internet in the 90s.
posted by simmering octagon at 6:20 PM on March 5, 2023 [9 favorites]


I started using ChatGPT for work-related tasks a couple of weeks ago (I work for a non-US government), and now it's become impossible for me to imagine a world where I don't use it daily. While it does generate a lot of poorly written output (which requires a lot of editing), in the end, it gets the job done. Nobody has noticed any significant difference in the quality of my work, and I've actually gotten compliments about a couple of speeches I've written with it in about 20 minutes.

Given that I have the workload of about three people, I really don't know how else I could manage without it. To me, the writing is on the wall. AI technology is poised to take over the jobs of half the office. The higher-ups will see the economic benefit, and we'll likely end up as a skeleton crew, with just a couple of administrators and some AI tools buzzing about. We'll be worked to death while this thing does its dance.
posted by Omon Ra at 6:27 PM on March 5, 2023 [34 favorites]


Empty tall bank buildings where clerks no longer work: that's about dumb AI. Not getting your resume read by a human is about dumb AI vetting. There is cause for concern when it starts grading essays; right now it corrects the grammar, but breaks the rhythm, breaks the syntax, and misspells less-used words, swapping them out for homonyms. Anyway.
posted by Oyéah at 6:30 PM on March 5, 2023 [11 favorites]


"I can call you "userbetty"
And userbetty when you call me,
You can call me AI, call me AI"
posted by dannyboybell at 6:31 PM on March 5, 2023 [20 favorites]


Apparently there's a ChatGPT extension for Google sheets and someone used it to automate messaging potential employers on LinkedIn (Twitter link).
Easy to see this being used for things more nefarious than annoying CEOs.
posted by thatwhichfalls at 6:33 PM on March 5, 2023 [7 favorites]


I just want a robot that can do the dishes.
posted by Jacen at 6:40 PM on March 5, 2023 [14 favorites]



All of you LUDDDITES lolz!

posted by lalochezia at 6:51 PM on March 5, 2023


I just want a robot that can do the dishes.

Boy howdy do I have the thing for you!
posted by picklenickle at 6:56 PM on March 5, 2023 [9 favorites]


hang on I am chasing my accelerated uterus
posted by mochapickle at 7:02 PM on March 5, 2023 [3 favorites]


Apparently there's a ChatGPT extension for Google sheets and someone used it to automate messaging potential employers on LinkedIn (Twitter link).

I was half expecting this to link to my friend who has used GPT to automate replying to headhunter emails on LinkedIn.
Part 1 Part 2
posted by aubilenon at 7:18 PM on March 5, 2023 [3 favorites]


To me, the writing is on the wall.

I see what you did there.
posted by briank at 7:20 PM on March 5, 2023 [1 favorite]


mmmmmm pancakes
posted by gwint at 7:30 PM on March 5, 2023 [2 favorites]


But everyone already knows AI is the Answer.
posted by Ice Cream Socialist at 7:40 PM on March 5, 2023 [2 favorites]


just want a robot that can do the dishes.

Boy howdy do I have the thing for you!


Those are some really realistic looking robot women! I'll place an order please
posted by Jacen at 7:46 PM on March 5, 2023 [2 favorites]


I was half expecting this to link to my friend who has used GPT to automate replying to headhunter emails on LinkedIn.

So basically, in the future, you'll have a personal AI apply for jobs for you and even do the interviews for you, while companies also use AI to vet candidates and do the interviews for them. If you're rich you can afford better and more sophisticated AI.

I think it was an Alastair Reynolds book, maybe, where the majority of the time you're actually talking to AI simulations of the original - hi, how are you, how was your weekend, I need this document reviewed and approved, etc etc.... - and only if it was something sufficiently unusual would the "real" person deign to come online to talk to you, and if you were really important, actually talk to you in person in physical space.
posted by xdvesper at 7:56 PM on March 5, 2023 [1 favorite]


Funny how all this AI frenzy came along just in time to suck up the money that isn't going into crypto anymore
posted by Mary Ellen Carter at 8:18 PM on March 5, 2023 [44 favorites]


They can have AI apply for my job and interview for my job and for all I care even do my job, as long as I get the money so I can chill and live my life like Buckminster Fuller said.
posted by toodleydoodley at 8:25 PM on March 5, 2023 [9 favorites]


But everyone already knows AI is the Answer.

In theory, but not in practice.
posted by LionIndex at 8:44 PM on March 5, 2023 [5 favorites]


My kid hates california rolls made with real crab. Can't stand them. Way too "crabby", if that's a thing. But that fake krab california roll, she can't get enough of. Inhales them. Krab california roll with masago on the outside, forget it, that's perfection for her.

Anyways, spicy autocomplete is going to be a thing for a very long time.
posted by mark242 at 8:49 PM on March 5, 2023 [4 favorites]


The problem is not what AI generates. The problem is that we freakin' love it. (self-link).
posted by i_am_joe's_spleen at 8:54 PM on March 5, 2023 [5 favorites]


I like real crabmeat in a California roll but for nigiri sushi I prefer the krab with a "k", the stuff actually called surimi although my brother refers to it contemptuously as "stick". If it's real crab, even in the sushi bar, I just want it served with some drawn butter instead of wasabi and soy.
posted by Rash at 9:02 PM on March 5, 2023


A friend of mine works at a small media company, and apparently they can’t get enough of AI tools. AI to write news stories. AI to put publications together. Coming soon apparently is an AI to build ads. They’ve made no secret that they’re going to use any tools they can to cut jobs. Are they going to get good quality work out of the AI? Probably not. Are they going to get work that to them is “good enough?” Most likely. More creatives out of work with pretty much nowhere left to go.

Then you look at Dall-E and the new music AI that was linked not too long ago. These are more tools aimed squarely at cutting out the pesky part of paying creatives. Again, quality work? Maybe, maybe not. “Good enough” work? Considering how little many people and companies want to pay creatives, yyyyyep. The creative fields have been under fire for years now. Jobs keep getting cut, and more and more people are giving up on the field entirely because the good jobs - heck, all jobs - in the field are mostly gone. Tools such as Canva have enabled a lot of people to do the creative themselves (hooo some of the stuff I get sent that was made in Canva is so bad) but AI tools stand to take even that person out of the loop. So yes, AI is coming for certain jobs. But it will never do those jobs as well. It doesn’t matter - “good enough” means not having to cut checks to creative workers.
posted by azpenguin at 9:46 PM on March 5, 2023 [23 favorites]


I had to write a shell script for the first time in ages last night, and I was able to get mostly what I wanted by writing a comment documenting the steps it should execute; GitHub Copilot wrote most of the code for me. This saved my weekend. So go AI.
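
The pattern looks something like this (sketched here in Python with a made-up log-compression task, rather than my actual shell script): you write the comment laying out the steps, and the assistant fills in most of what follows.

```python
# Steps this script should perform:
# 1. Find every .log file under the current directory
# 2. Compress each one to a .gz copy alongside it
# 3. Print how many files were compressed

import gzip
import shutil
from pathlib import Path

count = 0
for log_file in Path(".").rglob("*.log"):
    with open(log_file, "rb") as src, gzip.open(f"{log_file}.gz", "wb") as dst:
        shutil.copyfileobj(src, dst)
    count += 1

print(f"Compressed {count} log file(s)")
```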
posted by interogative mood at 9:47 PM on March 5, 2023 [2 favorites]


Marking essays today. D-grade boy is suddenly writing B-grade essays on Pacino's Looking for Richard. It was not copied from the internet, but the correct use of grammar suggests it was internet generated. Yes the sky is falling and I'm giving it a 4/10 for form.
posted by Thella at 9:58 PM on March 5, 2023 [26 favorites]


The comparison of today's efforts to the early 90s internet is apt.

90s internet/www browsing experience progressed in parallel with connection speeds, along with encrypted traffic supporting commerce, and slick dynamic HTML capabilities added to clients to hide server latencies – Metafilter has changed very little lo these 20-odd years but much of the www experience is very different.

I had hoped / half-expected Wolfram to first get to where the GPT model is now, but it's clear we still need a rock-solid fact-based deductive AI, not the generative fabulist that's been popularized currently.

the AI will eventually be a solid extractive tap into the entire body of knowledge on the web. Search engines have historically indexed text, while the AI will pull facts from the web.

Google rather fails to assemble a simple response to this sample question currently, but this behavior will improve as the company adjusts to market expectations of its flagship search engine.

(20 years ago when Google was heading for its IPO I didn't have the first clue about the value it was capturing being the gatekeeper of what people could find on the internet)
posted by Heywood Mogroot III at 10:09 PM on March 5, 2023 [3 favorites]


I was thinking of this while reading the other post about terrible 1970s cartoons. Apart from the songwriting for the themes (for which the 1970s were still pretty good), an AI could probably do comparably shitty writing and produce endless Scooby Doo Clone Shows. That's a rough thought.
posted by Fiasco da Gama at 10:45 PM on March 5, 2023 [3 favorites]


I guess this article was supposed to be funny? I didn't really get it. AI is going to change everything. We'll soon be living in a sci-fi future where machines can do all the jobs... but at the same time capitalism isn't budging and we'll all still have to pay the goddamned rent somehow.
posted by Ursula Hitler at 11:12 PM on March 5, 2023 [10 favorites]


These are more tools aimed squarely at cutting out the pesky part of paying creatives.

The way I see it is that the people so deeply annoyed at paying for creative don't see what the big deal is. They only asked for a quick five hundred words about a new product no one has even laid eyes on. They just want some art for the meeting tomorrow, and will ask it to be changed five different times between now and then.

They have no idea how much work the writer puts in: first, finding out what the hell they're supposed to write about, since no one bothered to let them know, then going back and forth over word choice and re-writing to make it seem effortless. They have no clue how much work the artist is putting into making sure the shit looks good, because it does genuinely look that good.

The thing is, the writers, artists, musicians, the creatives killing themselves to earn shit wages to make that last 10% shine, none of the people paying for it ever even look closely enough to see the difference between "good" and "eh, close enough."

The only question I have anymore is how many people get ground up and spit out before someone realizes how untenable this is. In any kind of well run society with leaders who actually gave a shit about their citizens, something would get done before we reach unemployment riots, but, well, I don't think we're looking at that kind of society. So then the question is, how much plenty do we have to have before the people who have all of it realize they should let other people survive a little, as a treat?
posted by Ghidorah at 12:09 AM on March 6, 2023 [28 favorites]


I agree with the sentiment of the article, but not the format. There are some legitimate questions around the use of “AI”, the form that it’s presented in, and whether we should regard these programs as intelligence, but… this is not the format to do it in. This is just being clever on the internet - being funny, even - which is fine, but maybe a bit counterproductive.
posted by The River Ivel at 12:41 AM on March 6, 2023


I remember a Metafilter post several years ago about what computer languages will become popular in the future (I can't find it now). Everyone listed their favorite candidates. I mentioned that the future computer language will be the AI writing your program for you.

It makes me glad I'm a hardware engineer rather than a software engineer. I'm not saying they'll go away, but we'll probably not need as many. It is like the pictures of old offices filled with people sitting at desks with calculating machines--all obsolete with the invention of computers.

Of course, hardware design could get replaced eventually too--I'll be like that older engineer in the basement of Futureworld.
posted by eye of newt at 12:49 AM on March 6, 2023


I think we'll need just as many SW engineers for a good while yet; it's just that they'll get to produce more features, bugfixes, etc. than before.
posted by Harald74 at 4:45 AM on March 6, 2023


he should get stable diffusion to draw his venn diagrams when his hand gets tired
posted by mittens at 4:59 AM on March 6, 2023


The world is already a tower of Babel and AI only makes things more muddled. I vote for more intelligence in the human species. That would be encouraging.
posted by DJZouke at 5:14 AM on March 6, 2023 [3 favorites]


They can have AI apply for my job and interview for my job and for all I care even do my job, as long as I get the money

Computer says no
posted by flabdablet at 5:20 AM on March 6, 2023 [5 favorites]


If I ever have to say "output cannot exceed input...this is an increase in processing power and speed that WE CREATED" again. I SWEAR.
posted by lextex at 5:38 AM on March 6, 2023


Marking essays today. D-grade boy is suddenly writing B-grade essays on Pacino's Looking for Richard.

My husband has started noting the same marked improvement in essays. He was able to go to ChatGPT and see that it was a response it had given. It's not perfect, but this article shows it in action.
posted by kimberussell at 5:51 AM on March 6, 2023 [2 favorites]


Search engines have historically indexed text, while the AI will pull facts from the web....

How will it determine fact from fable? Will it deal with context? If so, how?

(Rhetorical questions, obviously, but some things to think about.)
posted by BWA at 6:19 AM on March 6, 2023 [1 favorite]


> How will it determine fact from fable? Will it deal with context? If so, how?

I imagine this is future research, but one could add at the end of one's language model an adversarial network or system which verifies that citations exist and mostly describe something relevant to what was said. Getting it "good" of course would take a great deal of work, but I imagine a system to verify that a citation at least points to a URL that exists could be added in the near future.

Even that would be pretty handy, if you could have your language model spit out your text and then spot-check the relevant citations yourself.
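
Just as a rough illustration (the URLs below are made up, and real verification would need far more than a liveness check), the "does this citation even point to a URL that exists" part is only a few lines of Python:

```python
# Minimal spot check: does each cited URL at least resolve?
# This says nothing about whether the citation is relevant, only that it exists.
import urllib.error
import urllib.request

def url_exists(url: str, timeout: float = 5.0) -> bool:
    """Return True if a HEAD request to the URL succeeds."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False

# Hypothetical citations a language model might have emitted
citations = [
    "https://en.wikipedia.org/wiki/Language_model",
    "https://example.com/a-paper-the-model-made-up",
]
for url in citations:
    print(url, "->", "exists" if url_exists(url) else "MISSING")
```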
posted by Kikujiro's Summer at 6:57 AM on March 6, 2023


How will it determine fact from fable?

What makes you think that matters? If I'm being charitable, at least 50% of what modern conservatives believe is a total fabrication -- from religion, through social concerns, to economics and even hard science.

The only thing that matters is what you want to be true, and if you shout loudly enough, you can make it effectively true. That there is nothing fundamentally true about any of it is entirely irrelevant, so why should AI be any different? It'll just make the lies easier to construct, and that's gonna be a selling point not a problem.
posted by aramaic at 7:02 AM on March 6, 2023 [15 favorites]


The funniest thing about the SmarterChilds of today is the people who have convinced themselves these AIs are sentient. For some people it doesn't take much, apparently.
posted by GoblinHoney at 8:08 AM on March 6, 2023 [2 favorites]


Funny how all this AI frenzy came along just in time to suck up the money that isn't going into crypto anymore

I was thinking just yesterday, it's like blockchain, VR, etc. A "solution" in search of a problem but investors are throwing money at it because they don't want to be the ones who missed out.

I just don't see any actual advantage in adding a chatbot to a search engine... I would rather read the search results myself and be able to apply my own judgement about their validity.

For all the controversy over AI art generators, it's the text ones that bug me most. People seem ready to trust the word of a chatbot, even though it's worse in some ways than just asking some internet rando. It's like asking a parrot who has hung around doctor's offices for medical advice. Bots that can "write" but can't separate fact, fiction, irrelevant information, and disinformation -- and yet present their results as if they were fact -- are dangerous.
posted by Foosnark at 8:15 AM on March 6, 2023 [12 favorites]


I just don't see any actual advantage in adding a chatbot to a search engine... I would rather read the search results myself and be able to apply my own judgement about their validity.

Agreed. But... being on this website, and even on the web in general, selects for folks who are more likely to apply discrimination among opinions and data sources, and who want to, effectively, see the work and the context around the work: to make a choice about which options appear to be trying to sell them something while also giving an answer, which options are giving an answer that is missing important context like being out of date or for a related but inapplicable field, etc.

A lot of times, Nana, just wants to know how to make a cup for cup gluten-free replacement for wheat flour to make cookies for the grandkids. In a case where a chat system has been trained with enough context to answer with adequate accuracy (joking aside, this is both happening and well within reach for discrete cases), the fact that the answer may not represent every possible or most up-to-date source and treatment is something Nana doesn't really have to care about; to her, it is a benefit.

However, I think there's a big rush to assume that AI applied to search is somehow where everyone is heading, because it is the easiest thing to shake our heads at and mock for being ridiculous. Selling a weight that, given otherwise equal options, guides Nana toward a brand of rice flour or xanthan gum or whatever, totally invisibly to her and to anyone else asking this question, seems profitable to somebody. Removing the context and the work makes it harder to determine you're getting a bad deal, or even to know what you should be paying or that alternatives even exist.

As ever, if you're not paying for it, you are the product, being sold to an advertiser with a guaranteed line on everything you want and need so they can sell it to you, because you will tell that chatbot everything you want in nice structured terms that can be mapped back to a session where you will be sold everything they think they can get you to buy based on your needs. And when there are no alternatives presented, you are not burdened by all that painful choice.

The problem being solved here is not Nana's. It is the advertisers' reaction to privacy controls. They want to make it beneficial to people to hand over exactly the details of what they want and be willing to consume a single answer that will guide what they want to be closer to what the advertisers pay to ensure they want.

Large language models doing chat really are a step toward some expression of generalized intelligence, they're not the last step, and they are massively overblown, but capital has found a valuable application for them to reduce the very proliferation of choice that we thought we wanted. So they will get investment - and unlike cryptocurrency, there is a there there. Unlike the metaverse, which nobody really asked for or wants to inhabit the way it's imagined, we have all been asking for simplification of too much irrelevant information. The fact this tool can also simulate mediocre creativity is what I would call a happy accident of advertising.
posted by abulafa at 8:35 AM on March 6, 2023 [1 favorite]


I'd kind of argue that anything that can be written by AI doesn't need to be written in the first place. A friend is starting to use them for performance reviews where he works. He feeds it bullet points and it spits out a few paragraphs of editable prose about the person.

But why can't we just let the bullet points be the review in the first place? Why wrap it in meaningless fluff just to make it look like we put more effort into it than we actually did?

Soon we'll have these elaborate Markov chain bots writing 3000 words from some bullet point prompt and humans, not wanting to read piles of that crap, will have their own bots that do bullet point summaries of what the other AI generated. Will you get the same bullet points out that you put in? Probably not. So what you intended to communicate gets lost when you could have just sent the bullet points over to begin with.
posted by mikesch at 9:10 AM on March 6, 2023 [28 favorites]


Really enjoyed Timnit Gebru on the Tech Won't Save Us podcast link. As a tech folk, it's been easy for me to eyeroll the topic away into "oh god that's stupid", but Timnit sets out in an alarming way the damaging impact of hype merchants shilling LLMs.
posted by xurizaemon at 9:28 AM on March 6, 2023 [4 favorites]


Hi ChatGPT! I would like for you to produce a negative opinion piece on the popularity of ChatGPT in the writing style of Drew Magary.

Sure thing, I'll do my best to channel Drew Magary's style for you.

Oh great, another damn AI chatbot to waste our time. Just what we needed. As if we didn't have enough distractions already, we now have ChatGPT to add to the list of things we can use to procrastinate.

Sure, it's impressive that ChatGPT can understand and respond to natural language, but does that really make it useful? I mean, what can it actually do? It's not like it can clean your house or do your laundry. It's just another tool for us to avoid doing the things we should be doing.

And let's not forget about the creepy factor. This thing is basically a robot that's pretending to be a human. Who knows what kind of information it's collecting on us while we're chatting away with it. It's probably taking notes on our every word and using it to create some sort of dystopian profile of each of us.

But even if we put the privacy concerns aside, ChatGPT is just plain annoying. Every time you ask it a question, it spits out some pre-programmed response that doesn't really answer the question at all. It's like talking to a politician. You end up more confused than when you started.

And let's not forget about the fact that ChatGPT is putting real human customer service representatives out of a job. Sure, it's great for companies because they can save money on labor costs, but what about the people who used to rely on those jobs to pay their bills? We're just supposed to be okay with that?

In short, ChatGPT is just another useless, creepy, and annoying distraction that we don't need in our lives. Let's all just go back to doing things the old-fashioned way and leave the robots to their own devices.

posted by slogger at 9:37 AM on March 6, 2023 [1 favorite]


I've been using GitHub Copilot for a couple of months coding PHP with Laravel. Copilot suggests code that's eerily accurate most of the time. I also code very strictly to PSR standards, so it's easier to predict. It usually saves me time when it guesses right.
posted by kirkaracha at 9:39 AM on March 6, 2023 [2 favorites]


But why can't we just let the bullet points be the review in the first place?

IMO, it's just performative labor, all over again, and some people are really really really into the performance of labor rather than the actual point of the work. More than anything, they need to see the monkeys dancing to their tune.
posted by aramaic at 9:41 AM on March 6, 2023 [7 favorites]


I just don't see any actual advantage in adding a chatbot to a search engine... I would rather read the search results myself and be able to apply my own judgement about their validity.

GPT is much, much better than Google at doing something Google has long aspired to, and arguably kind of ruined its traditional search engine in trying to achieve - parsing and delivering what the user actually asked for in natural language. It does, however, have the maximally ironic flaw to keep it from replacing search, in that it cannot provide sources. But that’s why everybody is rushing to duct tape it to search.
posted by atoxyl at 10:25 AM on March 6, 2023 [4 favorites]


Is there a "too many fingers" equivalent of AI text? Right now one can more or less easily tell what is and isn't AI art (that will change in the future, to be sure), but as far as I'm aware, there isn't a similar method for text generated by AI.

As an artist and writer, I go back and forth about whether I should be worried. In the short and medium term, AI is definitely a threat to my life and livelihood. But in the far term, once/if it's put artists and writers out of business, all AI will have to scrape will be art and text generated by other AI. I wonder if we'll end up with those sort of crazy jumble patterns we had with AI art in the very beginning all over again, as AIs recursively copy off of one another.
posted by UltraMorgnus at 10:32 AM on March 6, 2023 [1 favorite]


Translation from other languages is an exciting application of LLMs.

I was just watching last night a Stanford NLP prof describing the interaction with these LLMs was like conversing with an alien intelligence, we are still figuring out how to establish efficient communication with it.
posted by Heywood Mogroot III at 10:48 AM on March 6, 2023


Is there a "too many fingers" equivalent of AI text? Right now one can more or less easily tell what is and isn't AI art (that will change in the future, to be sure), but as far as I'm aware, there isn't a similar method for text generated by AI.

In the same way that there's a strange breathlessness to computer-generated speech that makes it sound like someone typing words into a Word doc, I find that there is a very curious lack of personality or implied POV to AI-generated text. Even in cases when you're reading advertising or back-of-the-box copy, or informational placards on museum exhibits, there's always a sense that somebody wrote those words with a purpose in mind, and their implementation of that purpose is going to reflect who they are and where they come from as a human. AI text always reads like it's coming from some egoless nobody, born and raised nowhere, with no educational or cultural background save for scraping Wikipedia and random blogs for whatever it can glean on a subject.
posted by Strange Interlude at 11:01 AM on March 6, 2023 [4 favorites]


interaction with these LLMs was like conversing with an alien intelligence, we are still figuring out how to establish efficient communication with it.

We keep making this fundamental attribution error. It's like conversing with a convex mirror. You see glints and hints of motion and familiarity that make you believe that little moving part you can pick out must be something important and conscious just like you, with a theory of you, just like you have a theory of it. At least right now, language models do not maintain that level of internal state (about the interaction or the world at large) and are largely tricking our human judgment and risk assessment centers the same way that a stage magician does - it seems impossible someone would prepare that many permutations of a card trick, and it seems impossible that a system could remember that many connections between strings of tokens and words and how they relate to one another where they have appeared together.

We should probably separate natural language models from coding prediction models. The benefit of coding prediction is that they are trained on things that, for the vast majority of cases, lex and compile and even probably do something useful that can be associated with the structured source. This makes predicting the code to solve a common problem a distinctly different class of assessment challenge. As a programmer, you are familiar with explaining, in structured terms, what you want the machine to do, so when you see that structure being largely filled in from reflections of similar structures in other places, the mental and actual editing you are doing is finite, and you are the determinant of "truth" in that domain.

In natural language there is, barring philosophical arguments that model this, some objective reality outside both the language model and your perception of it that reflects what we might call factual space, and unlike mapping coding space to runtime space, there is literally no Oracle of reference for factual space available to these models in their current form.

(Again, we will solve these problems by wiring together various types of models, state managers, etc. But what people are mistaking for cognition now biases how much we will believe until we get better at incorporating rationalization and reasoning.)
posted by abulafa at 11:01 AM on March 6, 2023 [2 favorites]


A lot of times, Nana, just wants to know how to make a cup for cup gluten-free replacement for wheat flour to make cookies for the grandkids.

And before the release of ChatGPT, Nana could probably just Google "can I replace wheat flower when making cookies?" and she'd get a bunch of recipes, blog posts, and forum comments from actual humans that will more-or-less answer her question. Now though she'll be overwhelmed by hundreds of pages of content farm bullshit written by language models and she will need an AI chatbot to sift through it all.

It's that last bit that I can't stand about the introduction of chatbot search engines. It's like a protection racket. AI will ruin the Internet and then graciously offer to save us from itself.

Search has become so useless within the past year. I'm currently doing some DIY stuff and I used to be able to just fill gaps in my knowledge with a quick google, but there's so much superfluous, AI-generated content farm crap out there that it takes so much longer to actually find anything.
posted by RonButNotStupid at 11:35 AM on March 6, 2023 [15 favorites]


And before the release of ChatGPT, Nana could probably just Google "can I replace wheat flower when making cookies?" […] Now though she'll be overwhelmed by hundreds of pages of content farm bullshit written by language models and she will need an AI chatbot to sift through it all.

I agree that generative ML tools threaten to make this problem worse but come on, endless content farm bullshit has been the state of search for years now. There’s nothing new about the spam arms race, it’s just about to find another gear.
posted by atoxyl at 11:49 AM on March 6, 2023 [3 favorites]


Content farms have been a problem, but generative ML has definitely tipped the scales in favor of the bullshitters and both their breadth and depth have dramatically increased in the past few months. Content farms are showing up in a lot more searches, and what they're offering is getting longer and more detailed than the slapdash linkfarms that were easily identified. I'm starting to see fake blogs with months of archived longform posts and even a few fake forums with multiple fake users.
posted by RonButNotStupid at 12:19 PM on March 6, 2023 [3 favorites]


The robots aren't coming

About that. GPT, with its propensity to make up citations, isn't going to replace lawyers. But it's quite possible to ask it for a structured response, like JSON, and integrate that into a system. One recent example basically replaces parts of Siri with GPT, to get it to perform compound actions, and generate custom responses, using JSON. This is where things will win in the short term, emitting structured responses to be handled by simpler systems. As an example, the tags on this post were generated by hand, and it would be simple enough to get GPT to recommend some to posters, and perhaps help address the tagging backlog.

It's also great at writing new test cases and personas for any old software, instead of "Joe Testo" everywhere. Of course this also makes bot and fraud detection a lot spicier. Asking it to send a JSON of username, birthdate, location, job history, etc. is easy, and soon Russian spambots pushing false Ukrainian statistics will be invisible to crude duplicate detection.
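
As a rough sketch of what that kind of integration could look like (this uses the openai Python client as it stood in early 2023; the prompt, model choice, and tag format are purely illustrative, not anything MetaFilter actually runs):

```python
# Sketch: ask the model for machine-readable output and hand it to a simpler system.
import json

import openai  # pip install openai; reads OPENAI_API_KEY from the environment

post_text = "The robots aren't coming, but the people who can't shut up about them are already here..."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Reply with only a JSON array of 3 to 5 lowercase tags for the post. No prose."},
        {"role": "user", "content": post_text},
    ],
)

raw = response["choices"][0]["message"]["content"]
try:
    tags = json.loads(raw)   # the structured part a downstream system can actually use
except json.JSONDecodeError:
    tags = []                # the model ignored instructions; fall back gracefully
print(tags)
```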

In the long term, as Two Minute Papers presenter Károly Zsolnai-Fehér says, "don't look at where the paper is today, look at where we will be two papers later." If you look at where GPT-2 was and where GPT-3 is, it may be possible that with better training algos, better models, and just more parameters, GPT-4 ends up learning and emitting real citations. Or maybe it hits a plateau and we need to add game theoretic layers via GANs. Or perhaps a brand new technique is invented. IDK how the future will go but it doesn't seem likely ChatGPT was the peak. And all the demand for "prompt engineering" may become obsolete, to the chagrin of those suggesting it is the new job replacing all the lost translators, copywriters and artists.
posted by pwnguin at 12:23 PM on March 6, 2023 [3 favorites]


just more parameters, GPT-4 ends up learning and emitting real citations

that seems a little unlikely (though I’m not an expert) but I do think integrating its ability to parse and synthesize language with something that lends a little more rigor (or most critically some genealogy of information) is at the intersection of “things that are technically possible” and “things that there are strong business incentives for doing”

I think there’s a lot of bullshit hype around these technologies but I’d certainly bet on them being a big deal in a way that I would not have at any point on crypto. The key for me is - the downfall of self-driving cars has been that “almost there” and “under most circumstances” simply do not constitute an acceptable state of functionality. A writing/research/coding assistant that you still have to double check can easily be an acceptable state of functionality.
posted by atoxyl at 12:44 PM on March 6, 2023 [1 favorite]


the downfall of self-driving cars has been that “almost there” and “under most circumstances” simply do not constitute an acceptable state of functionality

Perhaps it is because I don’t like driving and don’t consider myself a great driver that I came to this conclusion very early - if a person has to be paying attention in real time to catch the machine making a mistake, it defeats the whole point, not to mention being unreliable in practice.

For something like coding it’s very possible for work to be bad enough that it’s not even worthwhile for a person to review it, but the review can be done at one’s leisure and is usually part of the process of building software anyway. So as long as the generated code quality is good enough that it’s still saving some time, the tool has some value.
posted by atoxyl at 12:55 PM on March 6, 2023 [1 favorite]


get a bunch of recipes, blog posts, and forum comments from actual humans that will more-or-less answer her question.

I believe you are making my point. Nana was never good at sifting through blogs and content farms and forums and all that terribleness. She always wanted a straightforward answer with details that were close enough and didn't require evaluating a bunch of alternative sources. That is what I meant by the selection bias on this site and discussion on the Internet in general - there is an enormous majority of humans who do not have the expertise or desire to do that filtering and offering them an alternative is very enticing.

I'm not saying it's right or good. I'm saying it's a business model that threatens the older business model of search results which are, as noted here and elsewhere, rapidly declining in quality due to the flood of generated content diluting even the weak sauce of previously human created and curated content.
posted by abulafa at 1:07 PM on March 6, 2023


AI text always reads like it's coming from some egoless nobody

*THAT* part is easily remedied by telling it to do whatever in the style of Harlan Ellison.
posted by GCU Sweet and Full of Grace at 1:25 PM on March 6, 2023 [9 favorites]


it's clear we still need a rock-solid fact-based deductive AI, not the generative fabulist that's been popularized currently.

I think it will be a two-hemisphere situation: a fact based deductive AI interlinked with a fabulist/creative AI. Combined into one that if you give it a math or logic problem, can handle that correctly, as well as write a story or a song. Wolfram Alpha intermingled with ChatGPT & Dall-E.
posted by fings at 1:25 PM on March 6, 2023 [3 favorites]


language models do not maintain that level of internal state (about the interaction or the world at large) and are largely tricking our human judgment and risk assessment centers the same way that a stage magician does

Exactly. I was thinking recently about the AI true believers within the tech industry, and the analogy that came to mind was that they're like magicians, at a magicians' conference, watching one of their colleagues pretend to saw a woman in half on stage -- and they jump up in alarm, saying, "That woman's being sawed in half!" ...despite knowing how the trick is actually done.
posted by Artifice_Eternity at 2:02 PM on March 6, 2023 [1 favorite]


Except that magician's conferences are more likely to have people doing new routines or new approaches to classic routines, in a way that makes the other magicians wonder how it's done. Even classic tricks can be done in a myriad of ways.
posted by creatrixtiara at 2:22 PM on March 6, 2023


Also, nobody knows how the “trick” of human sapience works.

The “AI true believers within the tech industry,” Sam Altman declaring himself a stochastic parrot and so on - a lot of this is classic engineering mindset partisanship, staking out a claim that, philosophy and theory be damned, we’re gonna keep building these models until they are indistinguishable from what they are modeling, and what can anybody say then? The downfalls of this mindset have been discussed extensively, but if I’m being honest I think plenty of people on the “opposing” side are not really taking the philosophical or practical implications as seriously as they should be, either.
posted by atoxyl at 4:15 PM on March 6, 2023 [2 favorites]


How will it determine fact from fable?

>What makes you think that matters? If I'm being charitable, at least 50% of what modern conservatives believe is a total fabrication -- from religion, through social concerns, to economics and even hard science. The only thing that matters is what you want to be true, and if you shout loudly enough, you can make it effectively true

I was responding to this comment. And as it happens, I do think facts matter. Depending on the subject area, they will insist on making an appearance, and I'd rather they do so sooner than later. I'm also interested in context, as my original comment suggests. Highlighting or lowlighting seemingly extraneous material can really change your mind. Which may be why it is so little practiced.

So I guess I disagree with your conclusion that wishing and shouting makes things so, regardless of one's political views.

(I do wonder if the article itself was written by AI. If I were the facilitator for that kind of stunt, I'd be on the floor right now.)
posted by BWA at 5:20 PM on March 6, 2023 [2 favorites]


In terms of fact versus text that looks like it's drawing on facts (e.g. ChatGPT's habit of inventing references to sources that look real but aren't), I think part of this issue is the mindset that all human interaction is a form of selling and consumption. If you can sell it, it doesn't matter whether it's truthful or complete BS. If you are a consumer, you pick what you like, not what is true. Truth, facts, none of it is relevant, only the market relationships of consumer and seller.
posted by zenzenobia at 5:30 PM on March 6, 2023 [4 favorites]


"#ChatGPT and its ilk are effectively a large-scale DDoS attack on not only specific tasks (such as teaching, software maintenance and fiction editing) but on human creative inspiration, sense of self and ability to make meaning of the universe..." @kat@weatherishappening.network
posted by urbanwhaleshark at 6:23 PM on March 6, 2023 [7 favorites]


I started my law studies before word processors existed. I paid to have someone else type up my assignments. I had also been programming computers for 5 years.

The nadir of my law studies was when I found an unpublished essay on "Natural Justice" and realised that a global replace on "natural justice" with "[Ronald] Dworkin's theory of law" would be a perfectly satisfactory (not good, just satisfactory) assignment to submit for Jurisprudence. I did NOT read the entire essay; my typist did not "read" the entire essay - she was just typing it; my lecturer (RIP) did read the essay.

I think of that whenever someone talks about AI - words as framing devices, creating a spun sugar lattice with no substance - but someone is at the end of that process. And I am pretty sure that my lecturer would not have appreciated my challenging Dworkin's theor[ies] as being a continuation of previous "natural justice" concepts.
posted by Barbara Spitzer at 10:00 PM on March 6, 2023 [1 favorite]


Metafilter: It's like asking a parrot who has hung around doctor's offices for medical advice.
posted by wats at 10:15 PM on March 6, 2023 [6 favorites]


philosophy and theory be damned, we’re gonna keep building these models until they are indistinguishable from what they are modeling

Given that Bay Area techbros seem to be the standard against which the artificial general intelligences they're working on are to be compared, it seems to me that Altman already having declared himself a stochastic parrot is an early indicator that this approach might achieve its goals sooner than I would otherwise have expected.
posted by flabdablet at 12:36 AM on March 7, 2023 [3 favorites]


I believe you are making my point. Nana was never good at sifting through blogs and content farms and forums and all that terribleness. She always wanted a straightforward answer with details that were close enough and didn't require evaluating a bunch of alternative sources. That is what I meant by the selection bias on this site and discussion on the Internet in general - there is an enormous majority of humans who do not have the expertise or desire to do that filtering and offering them an alternative is very enticing.

Okay, but whither media literacy? Because being able to evaluate sources, sift through forum posts, and sort the wheat from the chaff of content farms are skills that generally fall under that heading. Shouldn't Nana bear some responsibility for knowing how to consume search results, or should she just accept that powdered toilet bowl cleaner is a substitute for wheat flour because that's the straightforward answer given to her by the AI chatbot?
posted by RonButNotStupid at 5:13 AM on March 7, 2023


Okay, but whither media literacy? Because being able to evaluate sources, sift through forum posts, and sort the wheat from the chaff of content farms are skills that generally fall under that heading. Shouldn't Nana bear some responsibility for knowing how to consume search results, or should she just accept that powdered toilet bowl cleaner is a substitute for wheat flour because that's the straightforward answer given to her by the AI chatbot?

Does it sound like I think this is a good situation? Media literacy is where it's always been: unevenly distributed and at odds with rather intense countermeasures. Language models of adequate sophistication are one of an ever more effective array of those countermeasures.

I'm responding to the bewilderment around why anybody would want this. Money is pouring in because "we don't want knowledge, we want certainty." And people with money want to control what's certain.

This is different from cryptocurrency and metaverse bubbles because it changes the landscape of information processing in pursuit of perhaps the oldest market there is: one simple answer from a source you trust. Whether you trust it because it's your friend or friends' friend or someone who sounds like an expert in a forum who knows "more about ___ than you can possibly imagine" or because you've been effectively taken in by an illusion doesn't matter.

Anyway. If it sounds like I'm saying Nana shouldn't or can't want to not eat poison I hope it's clear that is ridiculous.

I guess it's time to get my own blog.
posted by abulafa at 5:56 AM on March 7, 2023


So when can I have one that will be on hold with the health insurance company and dispute a denial for me? Preferably saying stuff like "Do you want me to get Ms. tigrrrlily on the line? You should know I'm much nicer than Ms. tigrrrlily"
posted by tigrrrlily at 7:56 AM on March 7, 2023 [2 favorites]


Also, nobody knows how the “trick” of human sapience works.

Hopefully it was clear in my analogy that it's AI's production of sometimes pseudo-cogent responses that's the trick.

Human sapience is something of an entirely different order... not a trick at all.
posted by Artifice_Eternity at 8:48 AM on March 7, 2023


You’re missing my point, but maybe my point was unclear:

when people compare a language model to the human mind, they are comparing two things that are more black boxes than not. We understand how the former works at the lowest level. We sort of understand how the latter works at the lowest level. But precisely how the encoding of complex relationships or the production of language emerges from either structure is mostly unknown. That makes it very easy to play fast and loose and say “yeah, we’re definitely a couple years away from simulating a brain” which is definitely not the position I take. If you’re the one ex-Google guy saying the bots are already morally equivalent to humans - okay, you’ve probably been tricked by the trick. But it also means that there are all kinds of open questions about the higher-level workings of either, including attempts to make comparisons between the two, that are totally valid avenues of investigation, so I also find the “it’s just math” response pretty unenlightening. I mean, brains are just chemistry, as far as we can observe. And throwing around terms like “pseudo-cogent” is just asking to be blindsided by how far these technologies can go in emulating specific modalities of human cognition in a practically useful way.
posted by atoxyl at 10:13 AM on March 7, 2023 [2 favorites]


emulating specific modalities of human cognition

Which is not a new thing for machines to be able to do, of course, and we didn’t grant computers personhood just because they could play chess. But we also don’t go around insisting that computers don’t really play chess, and I guess that’s my point in a nutshell. It seems like a mistake to confidently conflate “the machine is not like us” with “what the machine is doing is unlike language.”
posted by atoxyl at 10:21 AM on March 7, 2023 [1 favorite]




But we also don’t go around insisting that computers don’t really play chess

I think people absolutely do make this claim. Okay, I will make the claim: computers aren't really playing chess.
posted by elkevelvet at 11:37 AM on March 7, 2023 [3 favorites]


Pfft. Next you'll be telling us that submarines aren't really swimming.
posted by flabdablet at 7:21 PM on March 7, 2023 [2 favorites]


While it does generate a lot of poorly written output (which requires a lot of editing), in the end, it gets the job done.

My experience as a human editor editing other humans' poorly-written copy into usable form is that it takes far longer than just writing the piece myself from scratch.
posted by Paul Slade at 1:30 AM on March 8, 2023 [3 favorites]


Metafilter: human editor editing.
posted by urbanwhaleshark at 8:33 AM on March 10, 2023


OpenAI has just announced GPT-4. Quick scan:

- passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%
- passes more AP exams
- 95 percent pass rate on the HellaSwag test set
- API access is waitlisted
- accepts image inputs
- improved "internal factuality" scores of ~75 percent versus 50 for gpt-3.5, similar progress on TruthfulQA dataset
- open-sourcing OpenAI Evals, a software framework for creating and running benchmarks for evaluating models like GPT-4
posted by pwnguin at 10:47 AM on March 14, 2023 [1 favorite]


I don't understand what motivates these people to systematically destroy everything worthwhile in the world, but it seems like fairly drastic remedies would be necessary to make them stop.
posted by Not A Thing at 10:51 AM on March 14, 2023 [1 favorite]


Every species modifies the ecological niche it occupies, and once a species becomes sufficiently numerous then those modifications will be to its own long term detriment. Silicon Valley techbros are no different.

To the extent that we and Silicon Valley techbros are members of the same species, then, it behooves us to put as much ecological distance between their activities and our own as we can organize. Because if we continue to integrate their designs into our lives with the enthusiasm we've been showing for doing so over the last few decades, then we will continue to be locked in to their habitual patterns of habitat modification, and the consequent long term detriment will accrue to us as well.

The tawdry simulacra of AI that Silicon Valley keeps dangling over our cots are toys built to amuse Silicon Valley, and a big part of that amusement is the delight they experience as they see the rest of us reaching out with our pudgy little hands to grab at them, our trusting little faces all lit up with gurgles of pleasure as the bars of the cot grow ever higher around us.

We don't actually have to do that. The techbros are not the parents and guardians of humanity despite the depth of their deluded convictions that they are, and we do not have to buy into their fantasies.
posted by flabdablet at 12:15 AM on March 15, 2023 [2 favorites]


As OpenAI goes commercial, we see IP weaponized against the general population once again. All those stories about people getting over on the system in various ways meant that its use must be controlled and restricted. Now we will only ever be on the receiving end of its output.
posted by rhizome at 2:50 PM on March 16, 2023




This thread has been archived and is closed to new comments