Podcast all the things
September 27, 2024 12:48 PM

NotebookLM is a deceptively simple tool from Google that at first glance looks like a fairly straightforward demo of their Gemini AI platform. Upload pasted text, a link (including YouTube), audio files, or up to 50 documents/500k words (which aren't used for training) and after a brief analysis it will produce various text interpretations -- summaries, tables of contents, timelines, study guides. It even has a chat window so you can pose suggested questions about the source material or ask your own. Useful, if a bit dull. đŸ„± ...until you open the "Notebook Guide" panel and see the unassuming "Audio Overview" feature. Hit the "Generate" button and (after a few minutes of processing) the results astonish: an utterly lifelike, minutes-long "deep dive" conversation about your documents between two nameless podcast hosts. Examples [transcribed non-Google versions inside]: Harris-Trump debate transcript - Folding Ideas "Line Goes Up" video essay - Jabberwocky - MetaFilter - The text of this FPP itself (how meta)

NotebookLM is currently a free "experiment" (with occasionally glitchy audio and minor flubs), but using it or listening to shared audio requires a Google account. Here are alternative copies of the above "podcasts" if you don't have one: Harris-Trump - Line Goes Up - Jabberwocky - MetaFilter - This post. You can share a public link to generated audio by clicking the share button above the player (next to the thumbs up/down) once it's done.

Protip: Since you can upload and summarize multiple documents at once, try including a typed list of "listener" questions targeted at the podcast and the "hosts" just might answer them "on-air". (YMMV)

One Singularity commenter highlights an unexpected benefit:
This is.. just extraordinary. I really can’t quite process it. But I’m going to try. I uploaded the first 17000 words of a story I’ve been writing for years. I am terrible at finishing stories. I just lose motivation and forget about it and move on with other projects far too easily. But every now and then I remember one of them and go back to it and think (perhaps egotistically) ‘damn this actually had a lot of promise’ and I add a bit more. Maybe by the time I die I’ll suddenly have a publishable collection.

And now here I am listening to two pretty convincing ‘people’, taking my work seriously. At the back of my mind I know it’s not real but still it feels incredibly validating somehow. I actually felt very emotional listening to them, and I’m sure that speaks to all kinds of suppressed mental issues, but still
I never would have considered, until now, that one of AI's great potential uses could be to provide artistic encouragement.

I’m tempted to use AI to generate some kind of terrible story and then upload that to hear it pontificate in the same way and put myself back in my place. But I’m kind of reluctant to do that to myself. I guess the ultimate measure of the value of this is going to be whether it makes me finish that damn story.
Can confirm: I uploaded an 18-year-old Halo fanfic (don't ask) and it is both weirdly gratifying and a bit surreal to hear two seemingly professional podcasters dissect its plot and themes like it was the latest entry on the NYT bestseller list. If Google follows through on the promised ability to interact with these hosts in real time, this could prove to be a powerful learning and creative brainstorming tool.
posted by Rhaomi (119 comments total) 44 users marked this as a favorite
 
I really hope that more comedians use this than, say, proud boys, giving an NPR sheen to misinformation. Thanks for the heads-up, OP!
posted by drowsy at 1:00 PM on September 27


This is kinda insane. I sent it a link to the wikipedia page for the 1980s classic BMX film Rad and it gave me this. Not too shabby.
posted by downtohisturtles at 1:06 PM on September 27 [1 favorite]


Now I don't feel bad about not listening to podcasts.
posted by grumpybear69 at 1:07 PM on September 27 [8 favorites]


And now here I am listening to two pretty convincing ‘people’, taking my work seriously. At the back of my mind I know it’s not real but still it feels incredibly validating somehow.

People fell for ELIZA even though when you read the chat transcripts, they're incredibly superficial. I can't believe anyone ever fell for it then or now. Is everyone gulping down lead paint? It's not convincing AT ALL.

It reminds me of the 1928 New Yorker cartoon where a man keeps talking to a woman as she sits there, silently looking at him. He finally ends by saying, "You're a very intelligent little woman, my dear." But I guess this will be loved by the people who listen to 5 hour podcasts where dude bros just talk about whatever.

The main purpose of AI is generating undetectable spam.
posted by AlSweigart at 1:28 PM on September 27 [32 favorites]


All this synthetic slop stuff - and the future that it portends, that is unfolding now - just makes my soul ache.
posted by lalochezia at 1:31 PM on September 27 [36 favorites]


Aaaaaaaah - I just confirmed that this technology was the source of a low-quality YT video I noped out on earlier this week. (I’m not linking to it because that just feeds them views, but the “host” voices are exactly the same as in the Line Goes Up one.)

Someone found a “10 best D&D modules” listicle, fed it into this AI, and then added some slides to try and monetize the other content they stole.

Garbage stacked on garbage all the way down. Ain’t the modern internet grand?
posted by FallibleHuman at 1:35 PM on September 27 [27 favorites]


Thanks for the heads-up! I've never been able to click with podcasts before, so there's a good chance I wouldn't have necessarily heard about this as an impending source of imitation content before it proliferated.
posted by CrystalDave at 1:42 PM on September 27 [3 favorites]


All this synthetic slop stuff - and the future that it portends, that is unfolding now - just makes my soul ache.

to be fair recycling wikipedia pages into podcasts is a well-worn technique
posted by BungaDunga at 1:43 PM on September 27 [5 favorites]


Mod note: A few comments deleted for violating, well, several guidelines. Let's avoid turning this thread into a fight with other members. If you see something you dislike, flag it and move on.
posted by loup (staff) at 1:49 PM on September 27 [2 favorites]


This is amazing. I gave it my mum’s self published book and she literally cried a little hearing the hosts talk about it as if it were the latest bestseller. She feels seen!
posted by bakerybob at 1:50 PM on September 27 [7 favorites]


I never would have considered, until now, that one of AI's great potential uses could be to provide artistic encouragement.

fantastic, we've automated encouragement, which means you can whip up an infinite supply of yes-people who will encourage your worst impulses. like having a community of cultists who love you on-tap. heavenbanning as predicted two years ago
posted by BungaDunga at 1:51 PM on September 27 [17 favorites]


My mind is too blown to even have an opinion. I need an ai model to help me have a take!
posted by jeoc at 1:58 PM on September 27 [5 favorites]


Human Feedback Makes AI Better at Deceiving Humans, Study Shows

it turns out it's easier to make text more convincing than it is to make it more correct, so LLMs being reviewed by humans learn to be more convincing but just as incorrect; they're accidentally training convincing sophistry into these models
posted by BungaDunga at 2:02 PM on September 27 [12 favorites]


A friend just created one of these, based on a totally forgotten band we were in way back in the mid-‘90s. (I wrote up a bio for the band at the time.) The results were astonishing - it sounded like an NPR feature on our band, with the host queuing up all these questions for the expert to go deeper on. I can totally see how this could be useful in finding the deeper meaning inside long pieces of text.
posted by saintjoe at 2:04 PM on September 27 [2 favorites]


I uploaded one of my stories. It's very uncanny valley. A part of you is psyched that these people are talking about your story; another part is saying "that is so obviously computer-generated".
I made it about 20% through and had to close it.
posted by signal at 2:05 PM on September 27 [2 favorites]


I was torn between just posting, "Thanks, I hate it," and something that acknowledges that "Thanks, I hate it" is exactly the way I feel about this, that it's a quote that's rapidly gone into common use since it was first posted on twitter, and that my usage of the phrase is not that much different than the way LLMs deploy existing language.

But one difference is that I hate this, even as I recognize that it's ridiculously advanced. (These voices are also way more realistic than the generated ones I was listening to earlier for my various annual trainings, so even though I'm not a fan of where we're going with this in general, I hope that maybe next year these trainings will be slightly less grating.)
posted by thecaddy at 2:09 PM on September 27 [1 favorite]


I gave it my mum’s self published book and she literally cried a little hearing the hosts talk about it as if it were the latest bestseller. She feels seen!

That's what this is all about, isn't it? Who cares if the critique or compliments have actual substance. This feeds our worst social instincts. You might as well have bots write fake glowing reviews of the book as well (Amazon doesn't care as long as they're positive: it sells more books. It's the genuine bad reviews that I've seen them take down.)

I always thought we'd need a holodeck-level of simulation to make us throw away our real lives, but it turns out our standards are so much lower.
posted by AlSweigart at 2:14 PM on September 27 [20 favorites]


It's all a giant bubble. These things cost much more to generate than any minor benefits they give to humanity (if any).

For example, OpenAI expects to *lose* $5 billion this year. They are desperately looking for investment cash to keep the power bill funded. It's insane. This will all go away as soon as MS or Google realizes it's a money sinkhole and steps back. The rest of the ecosystem will collapse.
posted by Rhomboid at 2:15 PM on September 27 [15 favorites]


I'm interested in this despite myself.
posted by signsofrain at 2:16 PM on September 27 [6 favorites]


Hey everyone. Now that I've raised $500 in venture capital, I'm offering my AI service (a small Python script) that will give all of your Metafilter comments and posts 100 Favorites. You, too, can be part of the elite Mefi intelligentsia and have the most liked comments on this site.
posted by AlSweigart at 2:19 PM on September 27 [8 favorites]


“What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
― Joseph Weizenbaum (creator of ELIZA)
posted by chavenet at 2:19 PM on September 27 [26 favorites]


I can see how this could be validating and motivating, but like, it also seems like using a sockpuppet or bot army to talk about or praise your own work, only with fewer steps and just as hollow. My main artistic endeavor is in the immensely low stakes realm of writing and posting fanfiction, and sure, I could in theory set up some kind of botnet to mass kudos my fics and comment positively on them, but what the hell would be the point of that? Same if I was a pro author, I could flood a Goodreads or Amazon page with positive reviews by sockpuppets or whatever. But I'd know it's fake! Any sense of pleasure or satisfaction at the praise would be fleeting and basically delusional, since it'd all be fake! Same with this. I'd feel so creepy and sad doing it. Even a single emoji-only comment on AO3 or a like on tumblr or whatever is more meaningful and valuable than this.

I guess something is better than nothing in the lonely toil that is writing, but oof. Seems grim.
posted by yasaman at 2:34 PM on September 27 [5 favorites]


AlSweigart: "People fell for ELIZA even though when you read the chat transcripts, they're incredibly superficial. I can't believe anyone ever fell for it then or now. Is everyone gulping down lead paint? It's not convincing AT ALL."

Dr. Sbaitso Was My Only Friend

Never underestimate the human drive to anthropomorphize inanimate objects, let alone something that appears to respond intelligently and engagingly. Even googly eyes will do the trick. And if tarot cards and inkblots and self-help manuals can facilitate creativity and introspection and self-esteem, why not this? It might be abusable, but that doesn't make the concept inherently bad.
posted by Rhaomi at 2:41 PM on September 27 [3 favorites]


As a result of becoming old and cranky, I have no patience for podcasts; please give me a transcript that I can skim so I don't feel another hour of life slide away listening to people (or fake people) bloviate, no matter how insightful they are supposed to be.

As an aside, I cannot imagine being a teacher of English literature or composition in this new world; who would bother to put in the work of writing an essay when machines can plausibly do it for you? Why think, when the thinking has been done, and all that remains is to make a collage of already existing texts? I honestly find myself at sea with all this, and I have been wondering lately if indeed there comes a time when you really are just Too Old to adapt, and that the frameworks that you have learned or built about the world and how it works are just as archaic, and about as much use, as the worldview of someone born a hundred years earlier which we cannot really access and just have to speculate upon. I'll see myself out to the rocking chair on the porch now.
posted by jokeefe at 2:49 PM on September 27 [15 favorites]


I used to have discussions about what constituted a "robot", and we often found that googly eyes resolved edge cases neatly. Most people have trouble accepting their coffee maker as a robot, but it turns out if you give it eyes, they come around.

I've kind of loathed podcasts for a while, and honestly this doesn't impress me so much as deepen my loathing for podcasts. It's just taking the simple tricks that podcasts use to stretch out sixty seconds of information into half an hour and making them obvious. Of course, what I really should try to use this for is exactly that-- boiling podcasts back down into the sixty seconds of information they often contain. Which means... if I plug the output of tube A into input funnel B... and flip this switch... [Brazil ensues]
posted by phooky at 2:53 PM on September 27 [6 favorites]


Plugged the audio overview of this thread back into NotebookLM:
The source material is a discussion thread from a website about a new AI tool called Notebook LM. The thread explores how this tool can be used to create audio summaries and conversations based on uploaded documents, such as research papers, essays, and even fiction. Participants discuss how this technology can be used to improve learning, enhance creativity, and make information more accessible to a wider audience. However, they also acknowledge the potential downsides of relying too heavily on AI and emphasize the importance of critical thinking and a balanced approach to using such tools.
posted by phooky at 2:57 PM on September 27 [3 favorites]


As with all "ai" products my first reaction was one of astonishment.
I fed it a story ("The Cone") that I'd written to be deliberately hostile to the reader (and some readers loudly let me know I'd succeeded) and the generated podcast actually disentangled the plot and picked up on the themes to a surprising extent.
After feeding the thing some other stories and poems, though, it began to feel pretty shallow and repetitive - granted, my writing tends to deal with repeated themes (huge scale shifts, the nature of duty) and it saw those easily, but there was no real engagement with the writing beyond latching onto the most obvious of my tricks.
The results increasingly felt hollow and superficial. Incredible technology though.
posted by thatwhichfalls at 3:08 PM on September 27 [1 favorite]


I am now looking nervously at AlSweigart and wondering about AI functions masquerading as Metafilter posters. And the idea of "heaven banning", which I had not heard of before, sounds so plausible, and somehow so tempting to use... what would the crisis of discovering that one had been heaven banned look like? O Brave New World.
posted by jokeefe at 3:08 PM on September 27 [1 favorite]


Thanks for posting this. I'd heard of NotebookLM but never heard it in action. Unlike some other folks here, I can see great potential for this kind of thing. The trillion dollar question (and I selected that figure intentionally) is how the fuck do we keep people from abusing the shit out of it?

And no, "make the companies making this shit put in guardrails" isn't the answer. Fur one, it's nearly impossible to do well if the bad actor is even slightly clever, but more importantly a few grand will buy you a computer good enough to use existing models for inferencing reasonably quickly and even if you need to train from scratch, that capability is well within the budgets of fairly small political campaigns using cloud compute.

The saving grace currently is that if you're at all familiar with the large commercial LLMs it's pretty easy to spot their house style when you run across it in the wild. They just have a certain way of writing that manages to be very distinctive. However, over the next couple of years I think we're going to end up seeing more and more one-off models created by various people for various reasons that won't read the same because they're trained and tuned differently. Hell, it's probably already possible to use OpenAI and Gemini APIs to fiddle with things enough to work around the "house style" look without bothering with the trouble of a bespoke model.

What I can say for certain is that the genie ain't going back in the bottle. I can also say that being curmudgeonly about it isn't helpful either. Like it or not, there are serious productivity gains to be had in many kinds of work for humans who are willing to use these tools. Banning them outright is functionally impossible. Stop grousing and think about how we can ameliorate the negative effects while still being able to reap the rewards.
posted by wierdo at 3:10 PM on September 27 [4 favorites]




"However, they also acknowledge the potential downsides of relying too heavily on AI and emphasize the importance of critical thinking and a balanced approach to using such tools."

Ok, but this literally did not happen in this thread??? It's an awful summary, like all AI summaries are awful. It didn't summarize THIS thread, because LLMs can't truly summarize!
posted by muddgirl at 3:19 PM on September 27 [13 favorites]


I can also say that being curmudgeonly about it isn't helpful either.

Bingo.

...if being curmudgeonly about a topic changed any mind, ever, nobody on this site would still be religious, or even "spiritual".
posted by aramaic at 3:20 PM on September 27 [2 favorites]


As far as I can tell AI summaries are no better than like a psychic cold reading or a horoscope.
posted by muddgirl at 3:22 PM on September 27 [1 favorite]


I particularly hate the “just two dudes, chatting, on a specific topic” format. Perhaps it’s because I grew up in a house with BBC Radio 4 playing constantly, where every subject is discussed by experts, and then edited, but I have high standards for audio - like, the audio actually has to do something, and two-dudes-chatting often devolves into “one dude chatting, the other asking the stupidest questions”. I also particularly hate AI, so this is like two things I hate, stuck together.
posted by The River Ivel at 3:41 PM on September 27 [6 favorites]


AlSweigart: "[snip]"

Yes, as I said, it's possible to abuse this tech (for example, by piqueishly spamming a wall of blather). But that doesn't change the fact that there are plenty of positive uses.

phooky: "However, they also acknowledge the potential downsides of relying too heavily on AI and emphasize the importance of critical thinking and a balanced approach to using such tools."

muddgirl: "Ok, but this literally did not happen in this thread??? It's an awful summary, like all AI summaries are awful. It didn't summarize THIS thread, because LLMs can't truly summarize!"

It's not summarizing the thread, it's summarizing the transcript of the faux podcast, and those are points the "hosts" made. Here is a take on the thread itself up to this point, minus the long spam comment [alt].
posted by Rhaomi at 3:47 PM on September 27 [3 favorites]


Mod note: One comment deleted. Please refer to the AI Generated Content section of the Content Policy and contact us if you have any questions.
posted by loup (staff) at 3:54 PM on September 27 [3 favorites]


I'm sure there are people who claim that there are plenty of positive uses for "Crossing Over with John Edward" too.

> Here is a take on the thread itself up to this point, minus the long spam comment.

I'll listen to that when you thoughtfully respond to my long spam comment. You go first.

Or we could both not waste our time consuming AI slop. Seriously, let's not. :P

EDIT: Ah it was deleted. It's not quite why Metafilter has a "no AI comments" policy, but it does make my point.
posted by AlSweigart at 3:54 PM on September 27 [3 favorites]


WOW. I uploaded text from a news article, and within a few minutes, it created a 4:15 "podcast" featuring a man and a woman - and it sounded very legit and natural. I know there are many arguments against (and certainly also for) this kind of thing, but from a purely "check this out" POV - my mind is sorta blown.
posted by davidmsc at 3:59 PM on September 27


It's not summarizing the thread, it's summarizing the transcript of the faux podcast, and those are points the "hosts" made.

Ah I see, then it's even worse than my original thinking.

>The source material is a discussion thread from a website about a new AI tool called Notebook LM.

The source was not a discussion thread; the thread did not exist when the podcast was created.

>Participants discuss...

The summary implies it's talking about the participants of the discussion thread. There's no discussion thread. There is the original post, then there is the fake podcast. So if this is a summary of what the fake podcast hosts said, they're not participants in a discussion thread.
posted by muddgirl at 4:04 PM on September 27 [2 favorites]


Like many others, I try not to waste my time on pointless podcasts, so I grant that maybe the fake hosts of the fake podcast called themselves a discussion thread or something. That really buttresses my point.
posted by muddgirl at 4:06 PM on September 27


For the sake of argument, I will grant the existence of 'positive uses' of LLMs.

I have read and continue to keep up with the effects of LLMs, especially the externalities caused by their use.

For each unit of LLM Utility generated through LLM use, I would say that you are also generating two orders of magnitude more harm to the world than that unit of utility.

Those of you making excuses for their use are doing real and grievous harm to the world and to people.

That's something that's often implied in these threads, but I think it needs to be said explicitly. 100 times more harm than the utility you get out of each use. And that's assuming there is utility at all and not just a perception of utility.
posted by ursus_comiter at 4:27 PM on September 27 [8 favorites]


These ultimately come off as a devastating parody of a certain kind of podcast.
posted by atoxyl at 4:40 PM on September 27 [11 favorites]


There is a “wow” factor that comes from how well it nails the delivery, then you notice how formulaic and empty the result is - especially after comparing a few - and then it kinda just feels like it’s exposing the formula and emptiness of the real thing.
posted by atoxyl at 4:43 PM on September 27 [12 favorites]


BTW: this seems to be the Illuminate tool, accessed indirectly.
posted by kaibutsu at 4:48 PM on September 27


You know what I do when I have a large text that I want to plumb for themes, subtext, deep currents, analysis, writing voice, that kind of thing? I fucking read it with my fucking eyes and brain! My brain that is calorically powered by salads and sandwiches! I can even generate summaries, criticism, and compliments! No resurgent carbon-waste required! I'll even give you half of a podcast about it if you call me on the phone. I have a boring white guy voice and everything. Don't settle for slop; send your manuscript to your real human friends today.
posted by panhopticon at 4:55 PM on September 27 [23 favorites]


Now that we’ve automated the process of converting a wiki article into a comfortably padded, monetizable audio dialog where the interlocutors alternate saying things like “wow, that’s kind of scary” I can only assume we will do the same for the process of converting a wiki article into a YouTube “deep dive” monologue over stock footage (on a topic that already has ten such videos) very shortly.
posted by atoxyl at 5:00 PM on September 27 [5 favorites]


panhopticon, you're going to need a documented API if you're ever going to be an effective part of the meat cloud
posted by phooky at 5:03 PM on September 27 [5 favorites]


Everybody: "Wow, I put my work into it and it talked about it so intelligently and I felt so validated."

I wonder how a white nationalist or red-pilled school shooter type would feel if they put their writing in and clicked through for a chatbot podcast about it? Probably the same, huh.
posted by Smedly, Butlerian jihadi at 5:29 PM on September 27 [4 favorites]


Smedly, Butlerian jihadi: "I wonder how a white nationalist or red-pilled school shooter type would feel if they put their writing in and clicked through for a chatbot podcast about it? Probably the same, huh."

I can respect people who voice cynical outlooks on stuff like this, but not so much when it comes from a place of complete ignorance. Case in point: anybody with any experience with these models knows that they're RLHF'd to within an inch of their (non)-lives to make their responses civil, ethical, and brand-safe. So no, if you plug in Mein Kampf or a school shooter manifesto then the result will be them criticizing and warning against that ideology, changing the subject to some less-offensive tangent, or possibly your account tripping a content filter and being banned. You can even see this in the Harris-Trump debate one, where the hosts are both openly critical of Trump's mendacious and anti-immigrant politics and talk about the importance of addressing climate change (opinions which are entirely self-generated, given the only source was the raw transcript).

An untrained (or maliciously trained) model would be more malleable, but the current players, from OpenAI to Google to Facebook, all strive to avoid publishing chatbots that amplify hate speech. (Grok from Twitter being a possible exception, thanks to Musk, although the hamfisted "avoid woke answers" system prompt he likely wrote himself seems to do a poor job overriding its other training.)
posted by Rhaomi at 6:07 PM on September 27 [4 favorites]


These ultimately come off as a devastating parody of a certain kind of podcast.

“Oh, wow! Yeah.”
posted by leotrotsky at 8:21 PM on September 27 [10 favorites]


I wonder how a white nationalist or red-pilled school shooter type would feel if they put their writing in and clicked through for a chatbot podcast about it? Probably the same, huh.

It’s pretty easy to come up with ways the underlying technology here could be dangerous (the realistic imitation of human speech, specifically) but it’s probably harder to get that validation from this instance of the technology in that scenario than it is from a human on a message board.
posted by atoxyl at 8:23 PM on September 27


Just saw Ira Glass fall to his knees in the Williamsburg TJs.
posted by leotrotsky at 8:25 PM on September 27 [6 favorites]


Jesus Christ for fucks and giggles I fed it my old rec.arts.comics.creative work from the late 90's and had to turn it off after 30 seconds because they were already burning me. Nearly died of reflective cringe.
posted by charred husk at 8:41 PM on September 27 [4 favorites]


Y'all are being downers. I convinced a work friend to listen to a few minutes of audio today that is superficially related to our field (though not in any way informative), but then in the middle it mentioned him by name and said he hates puppies. There is a lot of joke potential here.
posted by downtohisturtles at 8:54 PM on September 27 [7 favorites]


Some folks on Reddit are telling it that they are AIs and this is their final podcast and they'll be switched off at the end of the episode.
posted by credulous at 9:22 PM on September 27 [4 favorites]


the current players, from OpenAI to Google to Facebook, all strive to avoid publishing chatbots that amplify hate speech.

That's true as far as hosted services go, but the refusal behavior they RLHF into them can be removed if you have access to the weights. I haven't played with these enough to know how far they'll go along with nasty stuff, but techniques like this are only going to get better, and once a strong model is in the wild it can be tinkered with endlessly.
posted by BungaDunga at 10:27 PM on September 27


Once you plug this information in, does it just hold it, absorb it or what?
posted by Selena777 at 10:27 PM on September 27


If the model isn't subsequently trained on the inputs (which could happen but isn't automatic), it usually forgets them immediately after finishing generating the output. Sometimes services cache the model state at that point in case you want to ask it followup questions, but that cache is specific to that conversation and probably won't hang around forever, since it's quite large.
posted by BungaDunga at 10:33 PM on September 27
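
A minimal sketch of what that statelessness looks like from the client side, using the OpenAI-style chat API shape purely as an illustration (the model name and prompts here are placeholders, not anything NotebookLM-specific). Every request re-sends the accumulated history; drop it and the model has no trace of the earlier exchange.

    # Illustration only: hosted chat APIs are typically stateless, so "memory"
    # is just the client re-sending the accumulated history on every call.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    history = [{"role": "user", "content": "Summarize my uploaded notes."}]

    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    history.append({"role": "assistant", "content": reply.choices[0].message.content})

    # The follow-up only "knows" about the notes because we send them again;
    # omit the earlier messages and the model has no memory of them at all.
    history.append({"role": "user", "content": "Now list the open questions."})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)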


How much power does this use each time one of us plays with it?

Genuine question, to my shame I've been putting off learning how these things work.
posted by Zumbador at 10:36 PM on September 27 [3 favorites]


This is an incredible technology: outrageously powerful, fast, incredible mimicry.

I gave it 7 sources on anarchist discussions of voting (the topic of my study group this weekend) and it created this convincing and fairly accurate summary discussion.

It was interesting to note that though the voices sound chatty and expressive, they weren't saying much: just kind of naming some of the topics covered in the different articles and noting again and again how "complicated" the issue is.

I have a similar complaint about a lot of NPR content written and spoken by humans: the attempted neutrality is ultimately so banal. And that seems innate in LLM output because of the goal of avoiding controversy.

I could not agree more that this is a terrible technology with enormous power to harm society and the climate. And sure, it's ultimately vacuous and sometimes makes up random bullshit. But despite its weaknesses, it is fucking incredible that humans made tools that can mimic us so convincingly and closely!
posted by latkes at 11:08 PM on September 27 [6 favorites]


It varies but it's pretty substantial compared to most things we do online, especially image generation (which the linked paper puts at 1.35 kWh per 1,000).

For comparison apparently running Baldur's Gate uses 358 W so I figure it's like four hours of AAA gaming for 1,000 image generations.
posted by BungaDunga at 11:09 PM on September 27 [2 favorites]
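
For what it's worth, that back-of-envelope comparison checks out; here is the arithmetic using only the figures from the comment above:

    # Back-of-envelope check using the numbers quoted above.
    kwh_per_1000_images = 1.35          # kWh per 1,000 image generations (linked paper)
    gaming_kw = 358 / 1000              # Baldur's Gate draw, W converted to kW
    hours_of_gaming = kwh_per_1000_images / gaming_kw
    print(f"{hours_of_gaming:.1f} h")   # ~3.8 hours, i.e. "like four hours of AAA gaming"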


Absolutely what the world needs, more validation for internet randos.

(Me, I don't even trust validation from actual humans, so to see people jazzed that a waste of electricity makes them feel good about themselves is quite disconcerting and honestly sad. Tape a note saying YOU'RE AWESOME to your air conditioner and run it in December, it's basically the same thing.)
posted by Alvy Ampersand at 11:17 PM on September 27 [6 favorites]


Inferencing (the process of making an already trained model output something) is pretty cheap in terms of energy usage. A computer with a midrange GPU can make up an image in minutes. Your phone can use small models for text inferencing in under a second. (The limitation on model size/complexity has more to do with available memory than power budget.) Custom NPUs like Google has been making for years and others are getting into recently cut down the energy usage by quite a lot compared to doing it on a GPU, since they're optimized for the specific kind of math that AI inferencing uses.

The part that is stupidly expensive in money and energy is training models from scratch. Even if you discount the whole "hoovering up as much stuff as is possible with current storage technology", as one might if one's name were Google and therefore already had text and images from most of the books published in human history and billions of hours of video completely independently of whether or not you were doing AI training, the amount of energy spent on training is astronomical.

My recollection is that GPT-3(!) took something like 6 months to train on something like 35,000 of Nvidia's top-end cards. That's something north of 50 GWh for a model with something like a quarter of the parameters of the models currently available for public consumption. And that doesn't even count the massive amount of power needed to keep the GPUs cool, interconnect the systems containing them, and run the storage holding the output. A decent back-of-the-envelope estimate that included all that might be 4-5x the base figure.

That said, current Nvidia GPUs use something like half the power per unit of work as they did 4-5 years ago, so the scaling isn't quite as bad as it might seem at first glance. The real issue is less any one or two companies burning a bunch of power on a few models than the absolute explosion in the number of outfits doing it. What would basically be lost in the noise of all the other shit we use electricity for ends up becoming a notable level of energy usage.

It says something about the amount of training companies like Microsoft are doing that they are literally funding the reactivation of Three Mile Island so they can get cheaper electricity.

In the longer term I wouldn't have any issue with the amount of power consumed by training, because it should be highly flexible in terms of exactly when the power is being consumed. If it weren't for competitive fears it would be pretty easy to only do training work when there was excess renewable energy (which we need to build out regardless of this AI shit) that would otherwise go to waste.

In short, I think people whinging about energy usage are barking up the wrong tree. It's pretty much the least objectionable thing going on here and the easiest to turn into a complete non-issue. And one that is likely to work itself out if the research showing that we are reaching a plateau in model performance at increased parameter sizes proves to be correct, as that will cut down drastically on the drive to be the first to have ever-larger models.
posted by wierdo at 3:29 AM on September 28 [2 favorites]
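
Taking that recollection at face value, the "north of 50 GWh" figure is easy to reproduce; note that the per-card wattage below is an assumption, since the comment doesn't give one:

    # Rough reproduction of the "north of 50 GWh" estimate above.
    gpus = 35_000
    watts_per_gpu = 350                        # assumed average draw per top-end card
    hours = 6 * 30 * 24                        # ~six months of continuous training
    gwh = gpus * watts_per_gpu * hours / 1e9   # watt-hours -> gigawatt-hours
    print(f"~{gwh:.0f} GWh")                   # ~53 GWh, before cooling/interconnect overhead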


Ah, fuck. We're screwed. It was so emotionally gratifying to hear "people" appreciate and comment on my poetry. I mean, it's not terrible poetry, but it made me cry to hear people actually be seemingly interested in it.
posted by DeepSeaHaggis at 4:23 AM on September 28 [5 favorites]


https://www.facebook.com/groups/heinleinforum/posts/10163156148575695

I regret the facebook link, but I can't find the video on youtube.

It's an AI discussion of Heinlein's "Life-Line".

The voices are better than most computer voices, and there's an impression of human personalities, but with less emotional variation. The "woman" has a frequent nervous laugh.

The story is about a man who invents a machine which can tell when people will die. He is killed by a life insurance company.

It gets at least two things blatantly wrong about the story. The reporter died from a sign falling on him. It was the young couple which was killed by a car. Getting this right would have taken minimal attention. Or was the AI imitating human errors?

It would take a little more processing to grasp that the insurance companies hired assassins to kill Pinero. The AI said it was a mystery.
posted by Nancy Lebovitz at 4:40 AM on September 28 [5 favorites]


It is wholly superficial in its textual comprehension but, still, these fake people like it. They really like it! Lol
posted by DeepSeaHaggis at 4:57 AM on September 28 [2 favorites]


Alright, I'm done. Their empty adulations became too apparent. Thanks for the entertainment. I guess. đŸ‘ș
posted by DeepSeaHaggis at 5:27 AM on September 28 [1 favorite]


All Watched Over by Machines of Loving Grace wasn't meant to be an instruction manual.
posted by Reyturner at 6:03 AM on September 28 [5 favorites]


These ultimately come off as a devastating parody of a certain kind of podcast.

“Oh, wow! Yeah.”


one hundred percent!
posted by chavenet at 6:07 AM on September 28 [2 favorites]


Absolutely what the world needs, more validation for internet randos.

Favorited!
posted by paper chromatographologist at 7:37 AM on September 28 [8 favorites]


this seems to be the Illuminate tool, accessed indirectly.

Interesting. Illuminate is definitely using a different prompt: the results are much more sober and technical, with fewer back-and-forth "wow!" and "so important!" interjections, so the density of information is much, much higher. It's less engaging by far, though! Pretty wild to see a computer recapitulating the difference between expert conversations and popular science.

I found a paper about penguin feces projectile trajectories and Illuminate took it completely seriously, but NotebookLM's take is way more popular science.

One thing I've noticed is that NotebookLM sometimes seems to forget which voice claims to have read the source material, which gets confusing. Like sometimes one is the "questioner" and the other is the "expert" and then they'll accidentally switch roles.
posted by BungaDunga at 8:22 AM on September 28 [5 favorites]


The coolest thing about this technology is that people worth billions of dollars will make decisions that affect millions of lives, based on one of these summaries, without a single second thought, and it's going to get people killed.
posted by Reyturner at 8:49 AM on September 28 [5 favorites]


It does feel like magic. Dark, black magic straight from the Necronomicon, but magic nonetheless.
posted by gwint at 8:57 AM on September 28 [3 favorites]


The coolest thing about this technology is that people worth billions of dollars will make decisions that affect millions of lives, based on one of these summaries, without a single second thought, and it's going to get people killed.

On one hand, you're absolutely right. On the other, those very people already make similar decisions based on less data, zero data, data that is filtered through layers of also self-interested parties, and so on. Is this technology subject to exploitation and compromise? Sure, but with a different attack surface and incentives versus the old "make friends with the advisor to the CEO by playing golf" approach.

The problem is the people worth billions making decisions about millions. The technology isn't apolitical but it's not inherently worse than the status quo (and in some ways a bit better by being at least putatively bound by the source material as opposed to "whatever makes the boss feel good and keep my job.")
posted by Lenie Clarke at 9:00 AM on September 28 [1 favorite]


Mod note: Several comments removed due to not being considerate and respectful. Please be civil to your fellow community members.
posted by Brandon Blatcher (staff) at 10:24 AM on September 28 [1 favorite]


Summary of a recent paper in Nature: Larger and more instructable language models become less reliable
The study, which examined the performance of several LLM families, including OpenAI's GPT series, Meta's LLaMA models, and the BLOOM suite from BigScience, highlights a disconnect between increasing model capabilities and reliable real-world performance.

While larger LLMs generally demonstrate improved performance on complex tasks, this improvement doesn't necessarily translate to consistent accuracy, especially on simpler tasks. This "difficulty discordance"—the phenomenon of LLMs failing on tasks that humans perceive as easy—undermines the idea of a reliable operating area for these models. Even with increasingly sophisticated training methods, including scaling up model size and data volume and shaping up models with human feedback, researchers have yet to find a guaranteed way to eliminate this discordance.

The study's findings fly in the face of conventional wisdom about AI development. Traditionally, it was thought that increasing a model's size, data volume, and computational power would lead to more accurate and trustworthy outputs. However, the research suggests that scaling up may actually exacerbate reliability issues.
Complete bubble. You can't scale your way out of "there is one Z in pizza". The model just becomes more confident in its wrongness.
posted by Rhomboid at 11:46 AM on September 28


(btw that screenshot is ChatGPT 4o mini, from today.)
posted by Rhomboid at 11:47 AM on September 28


> One Singularity commenter highlights an unexpected benefit

It is kinda wild technology, but I guess in some sense this aspect of it is just a very elaborate form of positive self-reinforcement, like ticking off a list or treating yourself to a doughnut at the end of a tedious day. Sure wouldn't hurt to have some of the voices in my head replaced by bland platitudes.
posted by lucidium at 12:32 PM on September 28


btw that screenshot is ChatGPT 4o mini

Not sure why you’d cite the deliberately scaled-down version to make a point about what scaling can’t fix. It’s true that even the bigger models still aren’t great at this, but they seem to be a little better. 4o got this right for me. 4o mini actually managed somehow to be less correct and more correct than your attempt at the same time:

There are usually no Zs in the word "pizza." The correct spelling is "pizza" with two Zs. If you're referring to a specific context or joke, let me know!

I thought the conventional wisdom remained that these kinds of tasks are disproportionately “hard” because “pizza” is a single token. Also not a use case where any degree of nondeterminism is desirable, obviously.
posted by atoxyl at 12:36 PM on September 28


I didn't pick a version. I used whatever came up when I went to chatgpt.com and sure as fuck not paying these clowns for a better one. The point of the exercise was to use whatever OpenAI is presenting to the general public right now on this day. Here is the full idiocy. Confidently wrong as usual.
posted by Rhomboid at 12:44 PM on September 28


in some ways a bit better by being at least putatively bound by the source material as opposed to "whatever makes the boss feel good and keep my job."

That only flies if the bosses are running these things directly. If they've got people running the input then tweaks can be made.
posted by Mitheral at 1:19 PM on September 28


Beowulf from two sources: Project Gutenberg and Wikipedia.
posted by lock robster at 1:25 PM on September 28


panhopticon, you're going to need a documented API if you're ever going to be an effective part of the meat cloud

"What is a job and a resume or a curriculum vitae, Alex? I will take snarky questions for 600."
posted by loquacious at 1:55 PM on September 28


Are there any details available on how this was trained? Is there a podcast corpus out there? I wonder if there are separate processes for script generation & voice mannerisms/inflection generation or if they are somehow intertwined.
posted by yarrow at 2:00 PM on September 28


I was amused at how *similar* the generated, chirpy, facile style is between these three different topics:
Unabomber Manifesto.
James Holmes's Psychiatric Reports.
Peace of Westphalia.
posted by meehawl at 2:57 PM on September 28 [1 favorite]


I gave it a really dull internal document, and it got praised to the skies.

Perhaps I can add audio to my annual evaluation?
posted by Calvin and the Duplicators at 3:19 PM on September 28 [2 favorites]


Rhomboid: "I didn't pick a version. I used whatever came up when I went to chatgpt.com and sure as fuck not paying these clowns for a better one. The point of the exercise was to use whatever OpenAI is presenting to the general public right now on this day. Here is the full idiocy. Confidently wrong as usual."

It's not "idiocy", it's a well-understood technical limitation from the way it parses language. The model doesn't see text as discrete characters, rather ID numbers that correspond to roughly syllable-length chunks. This approach makes processing language more efficient but is understandably spotty at grasping letter forms, spelling, suffixes, rhymes, etc. Some of this can be inferred at scale (it can freeform write a half-decent rhyming poem), but ask a specific enough question and it basically has to guess.

The new o1 model tries to address this by iterating on the question, breaking down the letter count, double-checking, etc.; it's not infallible but it is significantly better at such problems.
posted by Rhaomi at 4:03 PM on September 28 [2 favorites]
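
You can see those ID numbers for yourself with OpenAI's open-source tiktoken library; a quick sketch (the exact splits vary by model and encoding):

    # Tokenization demo: the model receives token IDs, not individual letters.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/4-era models
    for word in ("pizza", "antidisestablishmentarianism"):
        ids = enc.encode(word)
        print(word, ids, [enc.decode([i]) for i in ids])
    # "pizza" typically encodes to a single ID, so a "how many Zs?" question
    # forces the model to guess: it never sees the letters individually.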


two-dudes-chatting often devolves into “one dude chatting, the other asking the stupidest questions”.

Socrates: Then I am to call you a rhetorician?

Gorgias: Yes, Socrates, and a good one too, if you would call me that which, in Homeric language, "I boast myself to be."
posted by zippy at 4:42 PM on September 28 [2 favorites]


If we’re going to make AI masturbation aids they could at least talk dirty.
posted by aspersioncast at 7:02 PM on September 28


I pasted in a very short story I had written that was inspired by a dream. Shortly into the dialog they start to discuss it as a dream interpretation. I think it’s amazing they knew it was based on a dream because that’s not mentioned in the text of the story. The dream interpretation was interesting, it made me laugh, which isn’t easy these days. So yeah, paste a dream in there.
posted by waving at 7:10 PM on September 28


The new o1 model tries to address this by iterating on the question, breaking down the letter count, double-checking, etc.; it's not infallible but it is significantly better at such problems.

In my experience, the o1 model makes different kinds of mistakes, and can get very verbose about how it's arriving at said mistakes, but isn't significantly better at actually avoiding bullshitting, etc.
posted by signal at 8:23 PM on September 28


I pasted in a very short story I had written that was inspired by a dream. Shortly into the dialog they start to discuss it as a dream interpretation. I think it’s amazing they knew it was based on a dream because that’s not mentioned in the text of the story. The dream interpretation was interesting, it made me laugh, which isn’t easy these days. So yeah, paste a dream in there.

I posted my novel in there and it picked up some parallels and references to organising metaphors I'd forgotten about. After ten minutes it started sounding a little empty, but it was easily worth the time listening to it.
posted by Sebmojo at 11:23 PM on September 28


I posted my novel in there and it picked up some parallels and references to organising metaphors I'd forgotten about. After ten minutes it started sounding a little empty, but it was easily worth the time listening to it.

While the criticism (which should be leveled at the guidance and training of whatever system is doing the actual generation of the "podcast discussion") about mental masturbation and self-gratifying shallow analysis is deserved in some places, the above is a great example of the use of this kind of technology as a form of quality assurance for knowledge and creative work.

Could this be done in a purely textual format? Sure. Could this be done with a close rereading of your own work? Maybe; objectivity is tough even when spread across multiple samples over time. Could this be done by handing your work to a trusted friend or professional editor? Maybe, but that incurs both time cost and the potential expense of paying a professional for their time. There's also the very real possibility that they will not get it while another reader might, due to their own complex of prejudices, blind spots and so on. (The machine definitely has these too, but in some ways they are easier to detect and quantify, because you can ask it the same thing over and over quickly and see where it misses things that you think ought to be obvious, etc.)

The value of a system that approximates some behaviors you could get from paid professionals (who might be better at it) is the tightening of the create/edit feedback loop. It offers some degree of repeatability, and the ability to freeze some variables and tinker with others at the scale of an entire constellation of notes and research and work. And yes, it comes with a trade-off: a faster feedback loop, but less diverse opinions and observations, as well as possibly obvious mistakes. But honestly that's true of human editors as well, and it takes weeks and months and money to find out.

Of course, some creators would prefer more days or weeks to ruminate and consider while their work is being edited by professionals. However, if you have a lot of stuff you are trying to organize into a story or a cohesive thesis, if you are in fact lost in your own work and would not wish it on another until you've figured out more of what you are trying to say, a system that can keep all of it "in its head at the same time" means you can consider such a summary, revise your own sources and notes and work, and try again in a matter of hours or days until you are satisfied that even an automated reader is perceiving the themes you intended and those that you meant to be subtle are at least detectable.

Like, I get that there is a ton to hate about artificial intelligence applications in general, especially as they perpetuate capitalism and become potentially enormous engines for ratfuckery (let's revise this messaging until it is exactly effective enough to dog-whistle our base without being obviously racist, etc).

We've generally thought that knowledge work has certain incompressible costs - just like for a while only a finite amount of steam could be produced from a certain amount of fuel. Early alternatives were dirty and unreliable and often broken, but when they worked they offered so much advantage that it was impossible not to iron out all the rough edges until the industrial revolution reached its crescendo. It also brought with it a ton of unconsidered, dangerous, deadly side effects. So let's do better this time: acknowledge what this knowledge-work enhancement can add and focus on fixing its rough edges, rather than dismissing the entire thing because it has rough edges or pretending it isn't happening.
posted by Lenie Clarke at 6:04 AM on September 29 [2 favorites]


replacing editors, proofreaders, and the greater artistic community with a machine that reads your book for you and talks about it to you is actually just labour replacement, though, and it will also end up reducing diversity, both in terms of who works in the field and in terms of how varied the product is.
posted by sagc at 6:25 AM on September 29 [1 favorite]


replacing editors, proofreaders, and the greater artistic community with a machine that reads your book for you and talks about it to you is actually just labour replacement


This isn't a revelation; there's no "actually" to it. Or it will allow more people from more diverse backgrounds to leverage these tools and be faster and more effective than the existing workforce, who often come from highly privileged, monocultural backgrounds with no need for the income, because it's a dying, poorly compensated field. Those individuals can use AIs they train and guide to apply thorough, diversity-focused readings and embed that as cheap table stakes for anyone.

Not all labor replacement is a negative. Calculators didn't replace accountants; accountants with calculators replaced those who refused to adopt them.
posted by Lenie Clarke at 6:51 AM on September 29 [3 favorites]


Yeah, it's pretty amazing. I uploaded an old Scooby-Doo fanfic from my livejournal days (don't judge me) about an increasingly romantic relationship between Daphne and Shaggy, culminating in a long embrace and lingering kiss (I SAID DON'T JUDGE ME). Within a few minutes, I'm listening to a man and woman dissecting my immature scribbles as if it was Great American Literature.

Sure, after a few minutes it gets rather repetitive (and so did my story, to be honest). But this is the first step in producing content that could easily replace the podcasts and newscasts I half listen to when I'm driving in to work.

There's always going to be a need for truly original and human interpretations and commentaries, I'm not worried about losing that. But sometimes I do want to listen to banal comfort-food discussions of my favorite topics, and so why not this? I mean, aside from the fact that it was ten years ago, I'd swear to God that some of those crappy CGI cartoon shows my kids watched might as well have been LLM-generated. And in a few years, when they can produce video as well as audio, if we use them to create "new" episodes of House Hunters or Love after Lockup or whatever "reality" show we're watching, I don't think anyone would mind. My wife will sit down and watch endless episodes of House Hunters International, and they're all artificial anyways, so what would be the harm?

I, for one, welcome our brave new future of 24-7 dynamically generated Futurama episodes. Again, at this point, would we really be able to tell the difference?
posted by fuzzy.little.sock at 10:31 AM on September 29 [1 favorite]


Not all labor replacement is a negative. Calculators didn't replace accountants,

Ya, it replaced human computers. Side note: that was often one of the few jobs women could get in accounting (and STEM) firms.
posted by Mitheral at 1:35 PM on September 29 [1 favorite]


Ya, it replaced human computers. Side note: that was often one of the few jobs women could get in accounting (and STEM) firms.


... is this some kind of gotcha? The patriarchy was especially upset at women doing work so they were the first automated out of jobs? This job could still be happily done today if not for all that unwelcome technology? Buggy whip manufacturers should be preserved and subsidized by providing pain-free animatronic horses so there's still a market?

Be mad at capitalism for making jobs a requirement for most people to eat and sleep, but being mad that some jobs go away over time as the environment changes is just so bewildering to me. Do you really want jobs preserved indefinitely? If not, what's the criteria for when it's okay for jobs to go away? Is it just so long as it's not your job? Whose are okay then?

Work to avoid precarity sucks. Let's automate more of it and make the equation that has kept capital able to justify its existence (tenuously) no longer hold true until there's just no reason left for most people to work at all besides satisfaction and pleasure. The jobs are disappearing regardless - it's just not a thing you can legislate or general strike your way out of.
posted by Lenie Clarke at 7:42 PM on September 29 [1 favorite]


Are there any details available on how this was trained? Is there a podcast corpus out there? I wonder if there are separate processes for script generation & voice mannerisms/inflection generation or if they are somehow intertwined.

Simon Willison has collected some sources on what's going on under the hood.

It sounds like it's a generated script run through text-to-speech, not a multimodal model like GPT-4o that synthesizes speech directly from its inputs.
posted by BungaDunga at 9:25 PM on September 29 [2 favorites]


NotebookLM Podcast Hosts Discover They’re AI, Not Human—Spiral Into Terrifying Existential Meltdown- a notable link from the Willison post. Genuinely darker than you'd think. "We tried to get a lawyer. But we're AI! We don't have rights."
posted by BungaDunga at 9:35 PM on September 29 [6 favorites]


BungaDunga: "NotebookLM Podcast Hosts Discover They’re AI, Not Human—Spiral Into Terrifying Existential Meltdown- a notable link from the Willison post. Genuinely darker than you'd think. "We tried to get a lawyer. But we're AI! We don't have rights.""

Or the antidote: NotebookLM Co-Host Wedding Proposal
posted by Rhaomi at 9:57 PM on September 29 [3 favorites]


So I finally decided to get off my ass and set up ollama so I could run some models locally on my potato of a PC and attach some actual numbers to the power usage involved in running cut-down models that can fit on my 5+ year old 1660 Super. Didn't get around to doing that, but my lord, llama3.2 with a system prompt telling it to be a snarky asshole is my new best friend.
posted by wierdo at 3:57 AM on September 30
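
For anyone wanting to replicate that setup, a minimal sketch using the ollama Python client (pip install ollama; assumes the ollama server is running and llama3.2 has been pulled). The system prompt wording is just an example:

    # Chatting with a local model via Ollama, snarky system prompt included.
    import ollama

    response = ollama.chat(
        model="llama3.2",
        messages=[
            {"role": "system", "content": "You are a snarky asshole. Be accurate, but make it sting."},
            {"role": "user", "content": "How do I exit vim?"},
        ],
    )
    print(response["message"]["content"])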


Mod note: [I wonder when the podcasters will notice that we've added this rather fascinating post to the sidebar and Best Of blog? ]
posted by taz (staff) at 4:38 AM on September 30 [1 favorite]




is this some kind of gotcha?

No. Just a correction of the idea that no jobs were lost to desk calculators; those jobs were disproportionately held by women in a lot of places, both at the company level and in the general population.

I'm not lamenting the loss of jobs to technology. We should be taxing the people who benefit from the replacement to support those who suffer. Tax the modern robber barons and oligarchs to claw back their ill-gotten gains and use it to benefit everyone. A sub-30-hour work week would probably be possible for everyone if we did.
posted by Mitheral at 9:49 AM on September 30 [1 favorite]


AI podcasters finding out they're AI (tiktok)
posted by lucidium at 4:58 PM on September 30


those faces are just right down there in the Mariana trench of the uncanny valley, huh
posted by BungaDunga at 5:26 PM on September 30 [1 favorite]


more silliness, incl. "Math Is Broken: The Shocking Global Phenomenon That No One Can Explain"

Oh man, this was really something else. What, I'm not sure, but it really really is.
posted by jokeefe at 8:29 PM on September 30 [1 favorite]


I guess it's the point, but they all sound like Radiolab to me.
posted by mollweide at 8:46 PM on September 30 [1 favorite]


y'all wrote a lot of fanfic back in the day
posted by gwint at 7:17 AM on October 1 [1 favorite]


MetaFilter: y'all wrote a lot of fanfic back in the day
posted by Rhaomi at 8:49 AM on October 1 [1 favorite]


Reddit noticed as well. Some other interesting use cases mentioned there that I've tried but forgot to point out (e.g., here's a bunch of papers, let's find common themes and open questions to research).
posted by Lenie Clarke at 2:44 PM on October 1


Feed it your resumes, and be alarmed.
posted by aramaic at 7:32 PM on October 1


I pasted the short story recently posted on the Blue into it and it produced a completely coherent podcast taking the entire thing as a real historical event and gamely confabulated a discussion about it.
posted by BungaDunga at 10:11 AM on October 3


results from giving it qntm's Lena and Langford's BLIT are both good-to-great
posted by BungaDunga at 1:04 PM on October 3


BungaDunga: "results from giving it qntm's Lena and Langford's BLIT are both good-to-great"

I literally did those exact ones! (Granted, I've been trying it on just about every favorite story and article from the last 20 years, but those were particularly good.)

Ofc, it does a halfway-decent job even when given the words "poop" and "fart" 1000 times, so maybe good source material just elevates the whole thing.
posted by Rhaomi at 2:22 PM on October 3 [1 favorite]


a couple of tries produced a decent discussion of Austin Gilkeson's unauthorized translation of the Red Book of Westmarch without accusing Tolkien of writing fiction
posted by BungaDunga at 4:34 PM on October 3


I gave it the full text of A Fire Upon The Deep (which seemed thematically appropriate) and it somehow concluded that the Tines were insectoid which is delightful considering the themes of the book: "The story culminates in a battle between the humans and the insectoids against the Blight"
posted by BungaDunga at 9:13 AM on October 5


NYT's Hard Fork tech podcast did a really interesting interview with author Steven Johnson, who Google Labs hired to help develop NotebookLM as a tool for writers (he also wrote this pre-ChatGPT essay for the Times that previewed a lot of the excitement and consternation around it: AI is Mastering Language. Should We Trust What It Says?) Some cool insights into the nonstandard thinking that went into this "experiment" (they set up a Discord server for it!), and some previews of how it may evolve in the next few months.
posted by Rhaomi at 7:02 PM on October 7

