The Internet is and always has been full of lies
April 1, 2024 10:47 AM   Subscribe

As always, you can tell from some of the pixels and having seen quite a few GenAI images in your time.

As they say toward the end, don't get too attached to these specific heuristics, because they're likely to be fixed sooner or later. But understanding how AI image generation works can go a long way toward spotting the kinds of mistakes it's likely to make, and will probably keep making, for a while.

This shit absolutely makes me uncomfortable about the future of truth, though. These images are all pretty bad, generally speaking, but millions of people have been / will be taken by them. And they're only going to get better.
posted by uncleozzy at 11:37 AM on April 1 [8 favorites]

This is good info, thanks! (Also noted, that this info will change/evolve.)
posted by Glinn at 11:58 AM on April 1

I do think that one of the key factors in any effective fake is confirmation bias. If something happens to coincide with what I want to be true, I'm less likely to go to the trouble of doubting it. Until it's considered common sense to doubt pretty much anyone or anything that tells us what we want to hear, we're going to keep stumbling into confusion.

And, of course, the further we move from our own networks (the people we actually know and trust), the more savvy we need to be about not falling for stuff. Bottom line: it's okay to just not be sure.
posted by philip-random at 12:00 PM on April 1 [9 favorites]

These images are all pretty bad, generally speaking, but millions of people have been / will be taken by them

Indeed. It's easy to forget that most people aren't really thinking about AI image generation that much, and aren't being exposed to a lot of criticism of those images that teaches them to look for their tells. Understanding how the images are made, so you don't have to rely on specific tells, is another level entirely.

It's also easy to forget how much it matters whether you want to believe something is true (or fake). People will look for any reason to believe or disbelieve an image, and AI is just going to make that easier. It already is: see all the people on Twitter claiming that images and audio of war crimes are "AI."
posted by Kutsuwamushi at 12:02 PM on April 1 [2 favorites]

Interesting how many of these are similar to the classic clues to a photoshopped image. Extra or missing hands and feet, mismatches between the two sides of an object that blocks something behind it, and a slick airbrushed feel are all tells that a person has altered an image. Humans don't tend to mess up fingers or text in the same way, though.
posted by echo target at 12:06 PM on April 1 [1 favorite]

AI image generators aren't very good at drawing archers.

They sure aren't.

Paging The Corpse In The Library . . .
posted by The Bellman at 12:12 PM on April 1 [2 favorites]

Has there been good science fiction on the reorientation of the human mind to a world where half of reality is known to be fake?
posted by dances_with_sneetches at 12:18 PM on April 1 [2 favorites]

This shit absolutely makes me uncomfortable about the future of truth

I'm glad I'm going to be dead in 10-20 years.

Some like to assume humanity will swing back to progress after a timeout in the dark enlightenment, but I'm not so sure when I look at the relative stability of the Amish, North Korea, Iran and other heavily repressive cultures. The Soviet Union only crumbled under constant outside pressure from all sides. The faux-socialists in China saw that and pulled a huge economic u-turn just to keep themselves in power. Supported by deepfakes and violence, global dictatorships could support and scratch away at each other for centuries.
posted by CynicalKnight at 12:24 PM on April 1 [7 favorites]

"where the pastry meets the plate" is the new "you can tell from the pixels"
posted by chavenet at 12:34 PM on April 1 [4 favorites]

There's a woman on TikTok who has a very catchy slogan for catching AI generated images.

Count the fingers
Read the letters
Count how many things
Ask yourself how
Ask yourself why

She works in interior design so she specifically critiques interior design images, especially the really crazy ones she sees on Facebook, though she also shares higher quality ones she has created for her job and shows what to look for.
posted by misskaz at 12:36 PM on April 1 [6 favorites]

it's so very obvious in retrospect that all those hours spent looking at Worth1000 photoshop contests was, in fact, not a waste of time mom, not a waste of time at all 😌
posted by paimapi at 12:54 PM on April 1 [15 favorites]

I'm glad I'm going to be dead in 10-20 years.

posted by CynicalKnight

posted by Navelgazer at 2:09 PM on April 1 [2 favorites]

Can't we train AI to look for bad AI? Because then, in an evil way, we can just tell the AI to "try again" until the output looks even better (and repeat).

I feel like that's the future. Or else the future is one where the images AI is trained on are all crap AI images themselves, and there's this weird artifacting from using too many AI images to produce more AI images.

Obviously I'm not the first one to think of all this.
posted by alex_skazat at 2:56 PM on April 1 [1 favorite]

Can't we train AI to look for bad AI? Because then, in an evil way, we can just tell the AI to "try again" until the output looks even better (and repeat).

That's one of the big tools in the current wave of AI development, yes: the generative adversarial network (GAN). Any automatic system for detecting bad output can be used to train a generator until its output can't be detected by that same system.
posted by CrystalDave at 7:04 PM on April 1 [3 favorites]
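The "try again" loop from the comments above can be sketched in a few lines. This is a toy rejection-sampling version, not a real GAN (a GAN trains the detector and generator jointly); `generate` and `detect` are made-up stand-ins that reduce "realism" to a single number:

```python
import random

def generate(quality_bias):
    # Stand-in for an image generator: returns a "realism" score in [0, 1].
    return min(1.0, random.random() + quality_bias)

def detect(realism, threshold=0.8):
    # Stand-in for an AI-image detector: flags anything below the threshold.
    return realism < threshold  # True means "looks fake"

def generate_until_undetected(max_tries=1000):
    # The feedback loop: keep regenerating, nudging quality upward after
    # each rejection, until the detector no longer flags the output.
    quality_bias = 0.0
    for attempt in range(1, max_tries + 1):
        img = generate(quality_bias)
        if not detect(img):
            return img, attempt
        quality_bias += 0.01  # each rejection "teaches" the generator a little
    raise RuntimeError("detector never fooled")

random.seed(1)
img, tries = generate_until_undetected()
```

The punchline is in the loop's exit condition: whatever the detector can measure becomes exactly the thing the generator learns to beat, which is why fixed heuristics like "count the fingers" have a shelf life.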

I have to say that the urge to say "his hed is pastede on yay" about that elf archer image is very very strong. (If you know you know, but I am happy to explain.)

Can't we train AI to look for bad AI?

The other night I had the idea to bring AI down by trying to train it on Tom Waits lyrics. I think this idea is even better.
posted by EmpressCallipygos at 7:35 PM on April 1

The only AI I have used to this point was to generate some “my DnD character looks like this” possibilities. And now I know a little more about why I wasn't able to get a decent ranger drawing his bow for my husband. Also, funnily enough, it was sometimes able to make said ranger a plausible half-orc, but was almost always unable to parse “with a bear walking next to him.” Instead, it made him into a grotesque bear-man-thing. Which is ironic, as our barbarian just got scratched by a bear spirit and turned into a…grotesque bear-man-thing.
posted by PussKillian at 8:11 PM on April 1 [2 favorites]

I think you could usefully add another "red flag" criterion: Does the image make you feel strong emotions? For example:

1) Does it make you feel smug? Does it appear to confirm exactly what you desperately wanted to be true?

2) Does it make you feel angry? Does it appear to confirm what a dirty-so-and-so someone is? Does it energize you, puff you up with self-righteousness?

3) Does it distract you from things that matter because it's just so sad, or so cute, or so sweet?
posted by Western Infidels at 8:31 PM on April 1 [2 favorites]

[image]
posted by DeepSeaHaggis at 9:37 PM on April 1

I've seen this as advice.

"Meta data, it would be blank. If it was filmed or recorded on a device it leaves a lot of information in the file - color, camera, lens at times, etc. Lots of information in the background, When there is nothing it's a red flag because you really can't delete all, that I know of."

Is it a good idea? Have deepfakes started faking metadata?
posted by Nancy Lebovitz at 5:04 AM on April 2

So this helpful video by 3blue1brown just dropped. In it, they talk about how the perceptron layer is basically “asking questions” about the input tokens. To me that implies a strong parallel with compression, and it feels like if we want to understand how “good” an AI image can be, we need to think about it in those terms: how convincing the output is depends on how much information goes in, how much is encoded in the perceptron, and how much space and (CPU) time we're willing to spend.

How “lossy” this is depends on what we tolerate, or what an AI detector will tolerate. But that probably leaves some hope for detectors, because the output can't be perfect and will probably always carry some lossy fingerprints. We can see them now as 20 digits on one hand, but they'll get more subtle and require automated detection, just like old low-quality JPEGs vs. high-quality lossy-compressed images.
posted by delicious-luncheon at 6:01 AM on April 2 [1 favorite]
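The lossy-fingerprint idea in the comment above can be made concrete with a toy, pure-stdlib sketch. This is not real JPEG analysis; the `step` grid here just stands in for JPEG's quantization of DCT coefficients:

```python
def lossy_compress(pixels, step=16):
    # Quantize each value to the nearest multiple of `step`, the same
    # basic move JPEG makes when it coarsens DCT coefficients.
    return [step * round(p / step) for p in pixels]

def grid_fraction(pixels, step=16):
    # A crude "detector": what fraction of values sit exactly on the
    # quantization grid? Natural samples rarely do; quantized ones always do.
    return sum(1 for p in pixels if p % step == 0) / len(pixels)

natural = [7, 130, 41, 201, 88, 19, 253, 62]  # pretend camera samples
compressed = lossy_compress(natural)          # fingerprint now baked in
```

No individual compressed value looks wrong; the tell is a statistical regularity across many values, which is roughly why spotting subtler fingerprints tends to need automation rather than eyeballing.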

Can't we train AI to look for bad AI?

I am reminded of The Midas Plague, in which robots and cheap energy are pouring out an avalanche of consumer goods and humans (well, the poor ones) are forced to live in ridiculous luxury to consume all this stuff lest the system collapse. Finally one overworked guy hits on the idea of having the robots use the stuff themselves, so then humans are finally free to do what they want while the robots are furiously making as many shoes as possible and then furiously using them up.

Or, more succinctly, it's AI all the way down.
posted by Naberius at 6:08 AM on April 2 [3 favorites]

Is it a good idea? Have deepfakes started faking metadata?

Metadata for the usual media formats is extremely easy to fake, and most social networks delete it when media is uploaded to them, since it can be a privacy hazard (phones save the location where you took a picture, for example).
posted by simmering octagon at 10:01 AM on April 2 [2 favorites]
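To illustrate the point above: metadata is just bytes stored beside the pixels, with nothing binding it to the image content. A toy sketch, with a dict standing in for a JPEG's EXIF segment (not a real EXIF reader or writer):

```python
# A pretend photo file: image bytes plus a metadata dict, standing in
# for a JPEG and its EXIF segment. Nothing ties the two together.
photo = {
    "pixels": b"imagedata",
    "metadata": {"Make": "Canon", "Model": "EOS R5", "GPS": "48.85,2.35"},
}

def strip_metadata(p):
    # Roughly what social networks do on upload, for privacy reasons.
    return {**p, "metadata": {}}

def forge_metadata(p, **tags):
    # Just as easy as stripping: write whatever tags you like. Neither
    # the presence nor the absence of metadata proves authenticity.
    return {**p, "metadata": dict(tags)}

uploaded = strip_metadata(photo)
forged = forge_metadata(photo, Make="Nikon", Model="Z9")
```

Real tools (exiftool and the like) edit the actual JPEG application segments, but the principle is the same: the pixels are untouched either way.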

Biden of the nine fingers
Image generator of doooom
posted by abraxasaxarba at 11:35 PM on April 2
