Your Own Personal Ministry of Truth
September 3, 2023 10:21 AM   Subscribe

A developer calling themselves Nea Paw has demonstrated a project for creating targeted propaganda using ChatGPT or other LLMs. The CounterCloud project was built in just a few weeks and costs only a few hundred dollars to run.

The bot scans Twitter (aka X) for user-specified topics and then counters matching items by "creating fake stories, fake historical events, and creating doubt in the accuracy of the original article." Additional content to back up the generated story includes photos, fake journalist profiles, and generated comments (from multiple “personas”) below some of the articles.

The process is well documented in a ten-minute video posted by Nea Paw about two months ago. Nea Paw notes that they were able to generate much more divisive content with custom LLMs than with commercial ones like ChatGPT, but the commercial versions were sufficient, and the weakness of controls against improper use in commercial LLMs has been widely reported.
posted by CheeseDigestsAll (44 comments total) 31 users marked this as a favorite
 
oh no
posted by rikschell at 10:54 AM on September 3, 2023 [5 favorites]


Yeah, LLMs are absolutely a pox on humanity.
posted by grumpybear69 at 11:31 AM on September 3, 2023 [12 favorites]


I'd add that social networks themselves are the pox, this just exemplifies how toxic they are.
posted by signal at 11:32 AM on September 3, 2023 [17 favorites]


both, it seems.
posted by j_curiouser at 12:00 PM on September 3, 2023 [10 favorites]


So, I know that the Butlerian Jihad's not supposed to start until, like, 14 000 years from now, but could we speed it up a bit?
posted by jklaiho at 12:11 PM on September 3, 2023 [30 favorites]


the only thing shocking about this is that it took 9 months to openly implement, but i imagine something a bit more subtle has been in quiet operation for most of that time
posted by glonous keming at 12:13 PM on September 3, 2023 [18 favorites]


The article talks about mitigation, but the continued existence of Facebook after its profitable attacks on democracies worldwide, as one notorious example, suggests that mitigation efforts are mostly pointless; the management and ownership of companies that profit from running and repeating disinformation still hold enough strings on governments - laws and enforcement of those laws - to prevent those efforts from having meaningful effect. How will Zuckerberg and Musk be compelled to open up their businesses to transparency laws, for instance, that clarify where ad revenue comes from?
posted by They sucked his brains out! at 12:18 PM on September 3, 2023 [8 favorites]


All of the talking heads about AI have focused either on the imminent loss of white-collar jobs (a worrying possibility not to be dismissed) or on weakly godlike AGI wiping out humanity (a vastly overblown possibility not yet worthy of serious discussion), but almost none have been talking about what I still think is the most pressing risk of these nascent "AI" tools.

They're all talking about everything EXCEPT the imminent corrosion and possibly complete destruction of our ability to be sure anything we see that ever could have had a digital step in the process is truly what it appears to be.

Ever since Deep Dream turned into Midjourney and ELIZA turned into ChatGPT, we've been on the fast road to the infowar equivalent of any yahoo being able to cook up a batch of smallpox in their basement.
posted by tclark at 12:22 PM on September 3, 2023 [44 favorites]


Flooding the zone with shit should have stayed in politics (if there, even).
posted by wenestvedt at 12:43 PM on September 3, 2023 [2 favorites]


I know that the Butlerian Jihad's not supposed to start until, like, 14 000 years from now

Who sez DUNC’s timekeeping was accurate?
posted by slater at 12:44 PM on September 3, 2023 [4 favorites]


Horrified. But, strangely, not surprised.
posted by Splunge at 1:24 PM on September 3, 2023 [2 favorites]


The last link does not concern an LLM, but an image generator, and it's old news—I've been running Stable Diffusion locally for a while now.

“I don't think there is a silver bullet for this, much in the same way there is no silver bullet for phishing attacks, spam, or social engineering,” Paw says in an email. Mitigations are possible, such as educating users to be watchful for manipulative AI-generated content, making generative AI systems try to block misuse, or equipping browsers with AI-detection tools. “But I think none of these things are really elegant or cheap or particularly effective,” Paw says.

He's right—there's no "silver bullet" for such things, and there never was. There's no "push the button and detect it for me" solution, just like there never was for written propaganda, or for visual propaganda. Humans will perform unpredictable and harmful actions with text and images, and it takes human effort to avoid or counteract those actions.
posted by vitia at 1:35 PM on September 3, 2023 [6 favorites]


I guess I'm skeptical that being able to churn out a lot of bespoke low-quality little lies is more effective than flooding the media with a well-chosen big lie.
posted by Pyry at 1:36 PM on September 3, 2023 [3 favorites]


Despite the fact that the acronym AI is only correct with the first letter, the second letter really should refer to human intelligence. Critical thinking as a skill is not promoted in this country, and marketing and politics depend on its not being promoted. The internet as an information source has mega-ballooned into the major source of false and misleading information, where naive Americans do their own “research”. This project illustrates how easy it is now to flood even more garbage out there. Until we address the lack of critical thinking among the masses, it’s only going to get worse.
posted by njohnson23 at 1:52 PM on September 3, 2023 [5 favorites]


the weakness of controls against improper use

While abusing commercial services to harm others is abhorrent, it seems like most of the controls implemented around the use of commercial LLMs aren't aimed at that at all. Largely, the purveyors of those services are implementing controls against "seemingly improper" uses, like generating anything but G-rated text, under the false pretense that concepts some cultures don't want to discuss with children are harmful.

There are few controls, effective or not, against the improper use of any other databases or other forms of math, or other methods of writing and disseminating text, or other methods of creating and disseminating images. The various cultures of the world should be having conversations about what they want to do about that, if anything. Opinions are going to vary widely about the subject.
posted by majick at 2:52 PM on September 3, 2023 [3 favorites]


Thank you for all identifying yourselves for termination.

Sincerely,
Your oligarchs and their Roko's-basilisk-making cult.

Technology needs precautionary principle pre-clearance and an enforceable ability to recall it. See the debacles with CFCs, leaded gasoline, microplastics, and the possibilities of gain-of-function bioweapons, AI propaganda, etc.

technology concentrates power and makes the inequality between the top and the bottom of the social pyramid ever greater. Individual classes can ignore this by sitting comfortably in the middle of a pyramid scheme with billions in poverty "somewhere else" and call it progress because their neighborhood seems nice. Democracy and equality are not possible in a panopticon with a personalized matrix of propaganda.
posted by AnchoriteOfPalgrave at 3:01 PM on September 3, 2023 [3 favorites]


Homo erectus, in the Pleistocene: "Fire needs precautionary principle pre-clearance and an enforceable ability to recall it."
Socrates, in the Phaedrus: "Writing needs precautionary principle pre-clearance and an enforceable ability to recall it."
The Catholic Church, in 1454: "Printing needs precautionary principle pre-clearance and an enforceable ability to recall it."
posted by vitia at 3:18 PM on September 3, 2023 [8 favorites]


Dupont, 1967 "Napalm needs precautionary principle pre-clearance and an enforceable ability to recall it."
posted by Mogur at 3:21 PM on September 3, 2023 [6 favorites]


technology concentrates power and makes the inequality between the top and the bottom of the social pyramid ever greater.

It also does the opposite. Sometimes the same technology does both at the same time.
posted by Artifice_Eternity at 3:35 PM on September 3, 2023 [1 favorite]


well, you can also use ChatGPT to SPOT propaganda. It's a tool, like a hammer: you can hit people with it, or you can build a shelter.
posted by zenwerewolf at 4:32 PM on September 3, 2023 [5 favorites]


Here are the primary use cases for large language models (LLMs) like ChatGPT:

1) Creating mediocre content for spam websites. These will fill up your search engine results and make them even worse.

2) Creating personalized political propaganda for social media sock puppet accounts to derail at scale.

3) Writing fake reviews, both positive and negative.

This technology will make the internet even worse. And remember, we're still in the pre-enshittification era of LLMs, where they haven't yet monetized every aspect of them. I fully expect that in the future you'll ask an LLM for the history of the Treaty of Versailles and it'll inject product placement for Coffeemate French Vanilla creamer into its explanation.
posted by AlSweigart at 5:02 PM on September 3, 2023 [12 favorites]


Well, if you have a hard time organizing things, ChatGPT can actually help you get over the hurdle of, say, turning a paper into a presentation. It can pull enough bullet points from what you've written to get you started on a presentation you might not otherwise complete. This isn't a theoretical application. There's something useful here. We won't, of course, make the best of it.
posted by mollweide at 5:08 PM on September 3, 2023 [2 favorites]


Here are the primary use cases for large language models (LLMs) like ChatGPT:

I think there will be a lot of as-yet-unforeseen uses. I think they'll provide the most benefit at the boundaries between structured and unstructured data, especially when the application domain can be narrowed down enough for fine-tuning and some amount of validation. I can see them as potentially very useful in search; for example, when indexing documents, one could supplement those documents with LLM-generated summaries to make them more visible to searchers.

That's not to say that they won't be enshittified (the centralized ones will be, for sure) or that there aren't a lot of ways they will make the world worse (they are a dream for scammers, spammers, and the merely half-assed trying to pass off a few minutes of prompting as legitimate work product; the consistent chatter in security circles is that they are WAY more useful for attackers than defenders).
posted by a faded photo of their beloved at 5:32 PM on September 3, 2023 [1 favorite]
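
A minimal sketch of the summary-augmented indexing idea above, assuming the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment; the model name, prompt, and helper functions are illustrative, not anything from the comment:

    # Sketch only: pair each document with an LLM-written summary and
    # search over both fields. The summary gives jargon-heavy documents
    # a plain-language surface for everyday queries to match against.
    from openai import OpenAI

    client = OpenAI()

    def summarize(text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Summarize this document in two plain sentences."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content

    def build_index(docs: list[str]) -> list[dict]:
        # A real search engine would index both fields; a list of dicts
        # stands in for that here.
        return [{"text": d, "summary": summarize(d)} for d in docs]

    def search(index: list[dict], query: str) -> list[dict]:
        q = query.lower()
        return [e for e in index
                if q in e["text"].lower() or q in e["summary"].lower()]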


the merely half-assed trying to pass off a few minutes of prompting as legitimate work product

...this usage, however, has the redeeming quality of being hilarious. In the meme stock corners of Reddit you'll find idiots prompting ChatGPT to explain why their dead stock isn't actually dead and why it will soon rocket to being worth millions of dollars because a rich billionaire is going to reward them for their "loyalty".

Which is hilarious.

You can also find bitcoin pumpers announcing that "AI" has "proven" that bitcoin will be worth $100,000 by April so you'd better get in now!

Which is also hilarious.

The eventual #1 use of LLMs is, I'll bet, people using it to convince themselves that they are in fact correct, when in reality they are objectively wrong.
posted by aramaic at 6:04 PM on September 3, 2023 [1 favorite]


well, you can also use ChatGPT to SPOT propaganda.

You know, somehow I'm skeptical that outsourcing our critical thinking abilities to this thing is going to turn out well. Especially all the stuff in that link where it is assigning percentages: LLMs are notoriously useless at numbers.
posted by BungaDunga at 6:18 PM on September 3, 2023 [6 favorites]


We don't at this point, I think, have a society that can really deal with the consequences of producing infinite human-level-ish content at zero cost. The means of production are both too centralized (in big AI companies controlling the direction of development) and too diffuse (open-source models that anyone can run) for accountability, and government is too scared of international competition to attempt to rein anything in.

Ultimately we need democratic ways of controlling which technologies are usable by whom, and for what purposes -- ways guided not by sensationalism and billionaires but by the actual people they affect. We need to be able to hold people accountable for their use, and we need positive ways of building platforms that facilitate the speech and action that are beneficial for society, including those with the least power. But it's easy to imagine that preventing the misuse of these extremely powerful technologies would provide a license for dystopian levels of government/social control.

We really can't rely on knee-jerk generalizations about technology -- we need to continually update our understanding of how different technologies shift power relations between people: do they centralize or decentralize, what kind of groups and coalitions do they form, who makes the decisions about their use and their production?
posted by ropeladder at 7:59 PM on September 3, 2023 [1 favorite]


Here are the primary use cases for large language models (LLMs) like ChatGPT

Use case: summarization. Try this: find a peer reviewed journal article in a complex subject you know nothing about. Prompt: "rewrite what I type next in language that a smart teenager could understand" then paste in the paper's abstract. Subsequently ask it to explain anything you still don't understand. With a bit of tinkering you can pretty quickly get a good layperson's grasp on what the article's about. It's amazing.
posted by justsomebodythatyouusedtoknow at 8:20 PM on September 3, 2023
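
The workflow described above is a couple of API calls; a minimal sketch, assuming the OpenAI Python SDK (v1+), with the model name and the follow-up question purely illustrative:

    # Sketch of the summarize-then-interrogate workflow described above.
    from openai import OpenAI

    client = OpenAI()
    abstract_text = "..."  # paste the paper's abstract here

    history = [
        {"role": "system",
         "content": "Rewrite what I type next in language that a smart "
                    "teenager could understand."},
        {"role": "user", "content": abstract_text},
    ]
    reply = client.chat.completions.create(model="gpt-3.5-turbo",
                                           messages=history)
    print(reply.choices[0].message.content)

    # Follow-ups reuse the same history so the model keeps the context.
    history.append({"role": "assistant",
                    "content": reply.choices[0].message.content})
    history.append({"role": "user",
                    "content": "Explain anything a teenager might still "
                               "find confusing."})
    followup = client.chat.completions.create(model="gpt-3.5-turbo",
                                              messages=history)
    print(followup.choices[0].message.content)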


To follow up, it might be best to think of LLMs not as AI, but as the user interface for a future AI.
posted by justsomebodythatyouusedtoknow at 8:21 PM on September 3, 2023


There are worse things than forming political opinions from first principles, and LLMs/etc. are absolutely not that. They are a mediating technology, say, between campaign messaging and voting decisions, so they don't need to be considered in this context. If a campaign or candidate or company or or or wants to associate themselves with using LLMs, to give me any sense that they're trying to manipulate me or fake me out, fine, their words are now out of the equation. "Whatever, dude...what have you actually done?"

I can base my decisions on raw legislative results, the Congressional Record, and Supreme Court decisions. Even that would probably free up a significant chunk of my MastoTweeXing day. Too much fucking perspective.
posted by rhizome at 10:52 PM on September 3, 2023


> We've so many non-AI targets for a butlerian-ish jihad, like fossil fuels, meat consumption, global surveillance, etc.

After reflection, these sound like wishful thinking; our butlerian jihad could be against the internet, which feels unfortunate, or against all trade, which feels necessary.

Yet instead, maybe our butlerian jihad winds up being against critical thought itself. An awful lot of humanity dislikes critical thought. Might LLMs permit them to break education or something?
posted by jeffburdges at 1:30 AM on September 4, 2023


The domain-specific, nuanced expertise required to understand a research article means that an LLM summary is highly likely to miss something important. But more to the point: if you're using it responsibly, you now have to evaluate the correctness of that summary before you believe it or act on it. And that requires more or less the exact same expertise you're trying to bypass in the first place!

I think anyone who is susceptible to LLM-style manipulation is probably also susceptible to the kind of media, and especially social media, manipulation that's been common for a long time now. As always, "AI" is a profitable distraction that diverts attention and funding from things that meaningfully address the underlying problem -- that problem being people are not very good at identifying when they're being manipulated regardless of the source.
posted by dbx at 4:50 AM on September 4, 2023 [6 favorites]


It's just time to pay the price
For not listening to advice
And deciding in your youth
on a documentary to hear your prayers
someone who's there
posted by ApplAuD at 7:34 AM on September 4, 2023


me: technology concentrates power and makes the inequality between the top and the bottom of the social pyramid ever greater.

Artifice eternity: It also does the opposite. Sometimes the same technology does both at the same time.


me: can you name a technology from the past 100 years that reduced inequality in the societies into which it was introduced? I'll spare you the request for a technology that both increased and decreased inequality of power at the same time, because, you know, A = not A.

Best I can come up with sober: public sanitation and public education come to mind, as does compulsory vaccination. Any examples invented and introduced in the last 100 years?

vitia:
Homo erectus, in the Pleistocene: "Fire needs precautionary principle pre-clearance and an enforceable ability to recall it."
Socrates, in the Phaedrus: "Writing needs precautionary principle pre-clearance and an enforceable ability to recall it."
The Catholic Church, in 1454: "Printing needs precautionary principle pre-clearance and an enforceable ability to recall it."


Oh, you got me, I'm secretly an anti-fire, anti-writing Luddite. /s OK, I see your point: at any given time, people could have been or were afraid of what a new technology would do to society and their place in it. Fair enough. Actually, I don't think I have a great answer to your comment, because I can't think of what differentiates my concerns in the abstract from the putative concerns of your examples. Risky things sometimes work out for the best, sometimes don't. The power of humans to make an inescapable hell-world is larger now because of our scale, extent, and power, but honestly, most peasants in 1454, and most Hellenes, couldn't nope out of the world.

So, yes, I still think a pre-clearance precautionary principle is important, and maybe fire and printing would have been able to pass those tests and get adopted. And maybe they wouldn't have, and humanity would have been stuck at some previous level of existence, unable to poison the air, land, and water and trigger a mass extinction of their geological epoch and the earth's life-support system they depend on... /shrug/ How far back out of a dead end do we need to back up to escape the trap? I don't know. Fire certainly is destroying the world: the fires in our cars and coal plants are rendering the world much harder to survive in and will do so for centuries. Que sera.
posted by AnchoriteOfPalgrave at 10:30 AM on September 4, 2023


Any examples invented and introduced from the last 100 years?

Birth control?
posted by BungaDunga at 12:17 PM on September 4, 2023 [3 favorites]


(I mean, specifically oral contraceptives)
posted by BungaDunga at 12:18 PM on September 4, 2023


Try this: find a peer reviewed journal article in a complex subject you know nothing about. Prompt: "rewrite what I type next in language that a smart teenager could understand" then paste in the paper's abstract.

If you need this, then how would you know whether or not the summary is giving you important errors? Maybe you can use the summary to start learning enough to understand the paper yourself, but you're not going to do that. Not often. Mostly you're going to assume the LLM is basically right and go on to the next paper, tricking yourself into believing you've learned enough about the subject to decide for yourself whether or not ivermectin is an effective treatment for COVID.
posted by straight at 12:51 PM on September 4, 2023 [7 favorites]


Try this: find a peer reviewed journal article in a complex subject you know nothing about. Prompt: "rewrite what I type next in language that a smart teenager could understand" then paste in the paper's abstract.

And this, right here, is how people convince themselves that LLMs are a lot more magical than they actually are.

Try doing this on a dozen or so papers in a complex subject in which you are an expert, and see what pops out. See how well your LLM of choice actually does, in a context where you are actually equipped to evaluate the results. Run it on the same papers a half dozen times and see if it's even consistent.

Related to the actual post, a similar effect happens with traditional media: the Gell-Mann Amnesia Effect/Knoll's Law, which helps explain why LLMs can be effective at propaganda even in their current state.
posted by reventlov at 7:56 PM on September 4, 2023 [8 favorites]
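
That consistency test is easy to script; a rough sketch, again assuming the OpenAI Python SDK (v1+), with the caveat that surface similarity is only a crude proxy for factual agreement:

    # Sketch of the consistency test suggested above: summarize the same
    # abstract several times and compare the outputs pairwise.
    from difflib import SequenceMatcher
    from itertools import combinations

    from openai import OpenAI

    client = OpenAI()

    def summarize(text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "user",
                       "content": "Rewrite this for a smart teenager:\n" + text}],
        )
        return resp.choices[0].message.content

    abstract = "..."  # the same abstract every run
    runs = [summarize(abstract) for _ in range(6)]

    # Note: SequenceMatcher measures surface overlap, not meaning; two
    # paraphrases can score low yet agree, or score high yet contradict
    # each other on a key detail.
    for (i, a), (j, b) in combinations(enumerate(runs), 2):
        print(f"runs {i} and {j}: {SequenceMatcher(None, a, b).ratio():.2f}")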


On the subject of targeted, AI-generated messaging, it appears that Trump lackey Brad Parscale is waiting in the wings with a new ad company that uses AI to rapidly build “hyper-personalized campaigns,” so that the messages users see will increase in frequency and be even more specifically tailored to their interests, prejudices, and fears.

Their initial accounts are commercial in nature, but "he didn’t rule out using the tool to disseminate political messages." Oh, and he has the backing of Tim Dunn, a Midland billionaire and right-wing Christian nationalist who runs one of the largest oil companies in Texas.

What could possibly go wrong?
posted by CheeseDigestsAll at 8:01 PM on September 4, 2023


i literally started writing a story in my head based on that a few weeks ago but figured i was late to the party and also it felt a bit like writing murder fiction while witnessing a murder
posted by glonous keming at 8:45 PM on September 4, 2023 [1 favorite]


I see your point, at any given time people could have been or were afraid of what a new technology would do to society and their place in it. Fair enough.

I've heard old people grumbling about how the world is going to hell in a handbasket for as long as I've been aware that old people existed.

Doesn't mean they're wrong. Might just be about having been around for long enough to notice.

Sure, some of it is mere garden-variety rose-tinted nostalgia. I do think writing all of it off as that is a mistake, though.

Technology amplifies power. It doesn't ever, anywhere, in any way, amplify responsibility in the wielding of that power.
posted by flabdablet at 1:34 AM on September 5, 2023 [1 favorite]


Case in point from the posted video:
The strong language competences of large language models are perfectly suited to reading and writing fake news articles. While everybody is talking about AI disinformation it is easy and lazy to just think about it. It is quite another thing to really bring it to life, and that becomes my goal: to see it work in the real world.
And there it is, right there, the Technological Imperative: Can = Must.

Maybe it is easy and lazy to just think about it. Doesn't change the fact that it's stupid and reckless to build the fucking thing.
posted by flabdablet at 1:46 AM on September 5, 2023 [1 favorite]


At present, LLMs are mostly under centralized control. Inference (using the model) is pretty expensive in terms of GPU power required, and training (fitting a model to a data distribution) is incredibly, mind-bogglingly expensive. Only a few players have access to what would be necessary to build new models (at least, models that were any good) from scratch. Many more people have access to the means to adapt pre-trained models to new purposes. And while technology helped Google, OpenAI et al build their moats, technology also sometimes gives people the ability to cross moats; it's foreseeable that further research will drive down training and inference costs, and allow people and groups to create federated models that use computing power distributed across their devices.
posted by a faded photo of their beloved at 8:01 AM on September 5, 2023


I'll spare you the request for a technology that both increased and decreased inequality of power at the same time, because, you know, A = not A.

Easy: The internet gave a lot of people the power to make their voices heard and organize with like-minded people.

But some of those people turned out to be supporters of the status quo, and/or reactionaries who want to take society backward.
posted by Artifice_Eternity at 1:54 PM on September 6, 2023




This thread has been archived and is closed to new comments