Is the Ottawa Food Bank really a must-visit vacation destination?
April 30, 2024 8:46 AM   Subscribe

Keep truth human - take this short quiz from the Canadian Journalism Foundation to find out if you recognize AI generated, false news content.
posted by jacquilynne (27 comments total) 9 users marked this as a favorite
 
I would generally consider myself pretty good at spotting fake news, and I scored 50% on this quiz, which, considering it's a yes/no quiz, is around random chance, so that was discouraging. I feel like more context would certainly have helped me do better at actually identifying fake news, but it was still a bit of a wake-up call.
posted by jacquilynne at 8:47 AM on April 30 [2 favorites]


8/10, which obviously means I can simply ignore this problem from now on, resting assured in the knowledge of my inimitable brilliance and definitely not having to worry about being scammed. Overweening confidence? Surely not, not me.
posted by aramaic at 8:51 AM on April 30 [6 favorites]


I, an American, scored 100%. Canadians, please feel free to reach out if you need my assistance spotting AI.
posted by mittens at 8:52 AM on April 30 [7 favorites]


This feels more like a question of "do you know what news outlets are?" Anything with enough context to tell it was from a real website was actual news; anything without context was fake. (10/10, using that heuristic)
posted by sagc at 8:52 AM on April 30 [9 favorites]


10/10; too many obvious clues. But that's going to change quickly as the tools get better and people get better at using the tools.
posted by dmd at 8:53 AM on April 30 [1 favorite]


I got 100%.

The key thing to look for: is the news item just bizarre / truth-stranger-than-fiction or does it seem to have some kind of political agenda?

Misinformation is specifically created in order to sway opinion, so if a news item seems to push some kind of world view -- particularly one that is fascist or police-state aligned -- it needs extra validation before you accept it. This really has nothing to do with how many fingers someone has in a photo. (Never trust photos anyway.)
posted by seanmpuckett at 8:54 AM on April 30 [9 favorites]


Obligatory North American House Hippo reference.
posted by chococat at 9:00 AM on April 30 [6 favorites]


I got nine out of 10 - I noticed the smoking chimp article had a post date and then an "updated on" date and it felt sus.
posted by EmpressCallipygos at 9:04 AM on April 30 [1 favorite]


10/10. Seconding the advice that misinfo is usually anything that makes you feel an emotion besides "Huh. Neat!"
posted by Cpt. The Mango at 9:10 AM on April 30


I work for an AI company that, as a small part of our business, creates and sells audio and video AI "deepfakes" -- except completely legally -- for media and entertainment companies. Think video engines which can duplicate Elmo or SpongeBob or other cartoon characters perfectly, which the copyright holders of those characters can sell through, say, Cameo. We've also done a couple of real-life celebrities recently (about 6 months ago) as proof of concept -- think something like an AI-generated Brett Favre starring in a Nike commercial (but not him or for that company exactly).

I will tell you that it's absolutely undetectable. The celebrity we mocked up for the sports commercial (with their permission; we bought rights) was absolutely realistic. They had real words clearly showing on their shirt. Their movements and voice were perfectly lifelike, based on hours of footage of them playing and giving interviews which had been fed into the AI engine. They spoke in their own voice, not an AI robot voice.

That future is already here, but AI products like the ones my company creates are expensive, not free, so cost is the only barrier that prevents the media market from being flooded with truly perfect AI dupes.

I don't see the value in giving ourselves a false sense of confidence from being able to spot extra fingers or garbled wording on police uniforms, to be able to "tell" AI generated from non-AI generated photos and videos from these goofs. These goofs aren't going to be around for much longer. The source where the video is hosted, the source where the image is published - that will be all that matters. If it's Reuters and AP and NPR and NASA, yes, let's believe it. Otherwise, nope!

I got 8/10 on this test because I blanket do not trust the Daily Mirror and NDTV. These are not reputable news outlets. I am pretty darn annoyed with this test for telling us they can be trusted! SMH. These are sensationalist media outlets which are for sale to the highest bidder, period.
posted by MiraK at 9:11 AM on April 30 [9 favorites]


Anything with enough context to tell it was from a real website was actual news, anything without context was fake.

Not true, though--amnesty.org is indeed Amnesty International's website, so while the image looked off to me, I got snagged by that one.

In reverse, like MiraK, I was snagged by NDTV.

I think there is, at least in the present day, some value in trying to learn how to identify fakes. I'm sure the AI you work on is fantastic, MiraK, but we'll be stuck with people flooding the zone with cheaper products for some time still.
posted by praemunire at 9:20 AM on April 30 [4 favorites]


I messed up the one about Smoky, the six-fingered gun-toting Colombian chimp who hands out some of his bitcoin fortune at the Ottawa food bank.

I want veracity ratings on everything. And then I want the raters rated.
posted by pracowity at 9:26 AM on April 30


The Microsoft Travel one was also really published on the real MSN website, but I got that one right because stories about it were big news here in Ottawa when it happened.
posted by jacquilynne at 9:34 AM on April 30 [1 favorite]


I should add that certain arms of the NDTV machine are more reputable than others. It's just that their digital presence has been spun off into a million purely commercial ventures, and recently large chunks of it have been outright sold to third parties that continue to present "news" under the NDTV banner, which is highly misleading. So you can't trust their digital news wing, I don't think. At all.

> I think there is, at least in the present day, some value in trying to learn how to identify fakes. ... we'll be stuck with people flooding the zone with cheaper products for some time still.

Yeah, you're right, there are some people who would probably benefit from being taught how to tell real from fake.

I've recently had a string of very discouraging interactions with older people on this topic though - my family mainly, but also some facebook friends and groups. I almost think the more urgent need for educating these folks lies in a different direction... they don't need education on "how to tell real from fake". They need "why fake is bad".

When there's fake news being shared by these folks on WhatsApp or Facebook or whatever, I can almost guarantee that when they're told this is fake or AI, their response will be "so what? God's world is awesome, I'm still going to appreciate it, AI or no AI," or "Well, even if this specific thing is faked, it's not like this exact type of thing hasn't happened IRL. We know what those left wingers are like. This isn't fake news, this is information about the essential truth of those people."

So many levels on which this whole fake news thing sucks, but the worst level is the one where after all those long years of my parents beating my ass for telling lies, they're just out and proud liars these days because "fake news is essentially the truth". Grrarrghhh.
posted by MiraK at 9:34 AM on April 30 [12 favorites]


Also 10/10. A couple of items were stories I remembered from the pre-AI past (the chimp, the gold toilet), several were just dead obvious, so there were only maybe two where I "went with my gut" and got them right.

I've recently had a string of very discouraging interactions with older people on this topic though - my family mainly, but also some facebook friends and groups. I almost think the more urgent need for educating these folks lies in a different direction... they don't need education on "how to tell real from fake". They need "why fake is bad".

As my age cohort is slipping into old age, this might finally no longer be about "old people" so much as "willfully ignorant" people, it's just that at the moment that's a pretty big Venn diagram overlap.
posted by briank at 10:24 AM on April 30


I got 10/10! Or did I?
posted by heyitsgogi at 10:43 AM on April 30 [4 favorites]


10/10, partly because I remembered some of the stories as they actually happened, so that may be a bit unfair. But the fake news anchor made me lean back in my chair, repulsed, so cheers to the uncanny valley - for now, anyway.
posted by PussKillian at 10:50 AM on April 30 [1 favorite]


Wait did I just train an AI model a little bit...
posted by shenkerism at 11:06 AM on April 30 [2 favorites]


I don't think the Canadian Journalism Foundation is spinning up its own AI.
posted by jacquilynne at 11:17 AM on April 30


I, an American, scored 100%. Canadians, please feel free to reach out if you need my assistance spotting AI.

Could be added to the Federal Skilled Worker test. Scored 10/10, as well, though I was on the fence about Macron having six fingers.
posted by They sucked his brains out! at 11:22 AM on April 30


10/10, to my relief. I wasn't looking at the North Korean provenance for that chimp story, I was looking at the cigarette—AI ones wouldn't have been as straight. Same with the Portuguese wine flood—a telephone line at the top of the image was straight, and it didn't continue in the wrong position past the trees. But is the average person going to be paying as close attention? Plus we were clued in to the fact that each story might be AI, which makes a difference.

Still, that fake Canadian tourist copy was amusing. Endless supply of maple syrup and beaver tails for all!
posted by rory at 11:53 AM on April 30


8/10 - I got one wrong because it was video and I am absolutely not clicking on a news video in 2024, and one because I was sure that Reuters wouldn't have a section on their website called "Breakingviews" since that sounds like absolute AI gobbledygook, but apparently is entirely human gobbledygook.

For me this is what is so absolutely frustrating about the general conversation around AI and fake news--the focus is always on individual action and education, while the perpetrators, like Google, tabloids, even real news outlets (who are somehow still bandwagoning on tech trends years and years after being conned by Facebook's "pivot to video" scam), never actually take on any of the responsibility of establishing actual safeguards around disinformation, either because they don't want to make the effort or because they don't want to give up the ability to spread misinformation if that is in some way valuable for them.

Google is happy to sponsor a cosmo quiz, or an initiative to educate about AI misinfo, but they're not going to stop rolling out and hyping half-baked AI products to the general public. Professional journalism associations are more likely to censure individual reporters for offending powerful stakeholders than hold news outlets accountable. News organizations pat themselves on the back about journalistic integrity and the fourth estate, safe in the knowledge that no matter how many WMDs or Clinton emails or detransitioners or beheaded babies they may later have to correct under the fold, they can still call themselves the paper of record.

It is not reasonable to put the burden on ordinary people to never fall for mis- or disinfo. It is just not possible for that to work, across all ages and cultures, and at scale. People need to be able to trust institutions and information; to do otherwise is literally crazy-making. Constant mistrust leaves people vulnerable to convincing liars, conspiracy theories and authoritarian worldviews.
posted by radiogreentea at 11:56 AM on April 30 [3 favorites]


I'm pretty upset that I missed the one with six fingers!
posted by TwoWordReview at 12:01 PM on April 30


We showed our kids the house hippo and now they want one
posted by St. Peepsburg at 12:48 PM on April 30 [3 favorites]


The quiz fails to mention that although Reuters is a reputable source, you gotta check the URL, not the screenshot.
posted by credulous at 7:35 PM on April 30 [2 favorites]


For me this is what is so absolutely frustrating about the general conversation around AI and fake news--the focus is always on individual action and education, while the perpetrators, like Google[…]never actually take on any of the responsibility of establishing actual safeguards around disinformation

It does feel similar to the constant personal shaming over individual environmental impact when the top 100 polluting companies are responsible for 71% of all greenhouse gas emissions.

There have been some threads on /r/StableDiffusion and related ML subreddits regarding the potential impact of the pending EU AI legislation, and the conclusion was: we’ll need to add some imperceptible watermarks to our output but otherwise this doesn’t affect anyone who wasn’t doing deeply creepy shit (deepfake porn) or attempting overt political manipulation. And, of course, anyone doing the latter things is hardly going to stop because of some law. Which doesn’t mean we shouldn’t pass laws in good faith, just… who’s it for, really?
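For what it's worth, the "imperceptible" part is the easy bit. Here's a toy sketch of the idea -- a least-significant-bit scheme with made-up `embed`/`extract` helpers, purely illustrative; real provenance watermarks (the kind the EU legislation contemplates) are far more robust against cropping and re-encoding, but the principle is the same: perturb the signal below the threshold of perception.

```python
# Toy least-significant-bit (LSB) watermark: hide a short tag in the
# low bits of raw "pixel" bytes. Each pixel byte changes by at most 1,
# which is invisible to the eye.

def embed(pixels: bytearray, tag: bytes) -> bytearray:
    # Unpack the tag into bits, least-significant bit first per byte.
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    # Read the low bits back and repack them into bytes.
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
    )

image = bytearray(range(256))  # stand-in for raw pixel data
marked = embed(image, b"AI")
assert extract(marked, 2) == b"AI"                           # tag survives
assert max(abs(a - b) for a, b in zip(image, marked)) <= 1   # imperceptible
```

Which is also why the watermark-mandate approach mostly catches the honest: anyone re-rendering or screenshotting the output strips a scheme this naive, and the bad actors know it.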

So what kind of laws should we be passing regarding generative systems?

I maintain that the answer to this question is forced public disclosure of training set data and methods. For multiple reasons:

1) it is far more difficult to discover sources of bias against marginalized groups when we have no insight into what’s in the soup

2) these tools are a major front in individual vs corporate power and general class warfare, and the more the major players have exclusive access to the most cutting edge tools the worse it is for everyone outside the top 0.01% most wealthy

3) while doing anything about the use of copyrighted work in pre-training data is probably a lost cause for a few different reasons, it is not really possible to even contemplate remediation without an objective answer as to what was or wasn’t used… which from OpenAI and Google’s perspective is entirely the point

Related to #2, though: I don’t actually want to see LLMs or diffusion models that cannot be jailbroken out in the wild. I want the general public to always have a key to the boiler room, because without that these tools become vastly more useful as means of automating control of / maintaining power over the working class. I don’t think this is irresponsible behavior or the systems are half-baked; I think it’s essential behavior that I am quietly praying is flatly impossible to fully patch out in a “this is actually a very heavily obscured, deeply convoluted P=NP / halting problem-type artifact” way. I don’t believe in literal evil but people who want to control what I can know are close enough that it makes little difference, and I desperately hope their tools are forever less than perfect.
posted by Ryvar at 7:56 AM on May 1 [2 favorites]


People who think they can spot AI news reliably are the same people who think they can spot undercover police cars. Some unmarked police cars are easy to spot, but you won't spot the truly anonymous undercover cop cars when the cops decide to get serious about going undercover.
posted by SnowRottie at 12:23 PM on May 2 [1 favorite]

