Bitrot that doesn't kill posts makes them stronger
February 27, 2022 4:11 PM

> comp.basilisk - Frequently Asked Questions :: Is it just an urban legend that the first basilisk destroyed its creator?
Almost everything about the incident at the Cambridge IV supercomputer facility where Berryman conducted his last experiments has been suppressed and classified as highly undesirable knowledge. It's generally believed that Berryman and most of the facility staff died. Subsequently, copies of basilisk B-1 leaked out. This image is famously known as the Parrot for its shape when blurred enough to allow safe viewing. B-1 remains the favorite choice of urban terrorists who use aerosols and stencils to spray basilisk images on walls by night. But others were at work on Berryman's speculations...

A collection of short stories on a theme by David Langford (the original story previously)

The motif of harmful sensation - TVTropes: Brown Note - examples in fiction - previously

SCP Foundation: Cognitohazards and You!

Op Art: The Art that Tricks the Eyes - Or more strikingly: This is not a GIF

The McCollough Effect, which can last up to three months (previously)


This updated post brought to you by #DoublesJubilee

Lastly: BOO!
posted by Rhaomi (16 comments total) 37 users marked this as a favorite
 
❤️❤️❤️the Basilisk stories. Thank you, Rhaomi.
posted by Orange Dinosaur Slide at 5:39 PM on February 27, 2022


I hadn't read these. Really great, though sadly I have been thinking of writing a story with a similar idea behind it! Different enough to still write... but now I'm curious about other "memetic weapons" like these. Any I should look up?
posted by BlackLeotardFront at 7:16 PM on February 27, 2022


I do love these stories, though for some reason I could buy very elaborate patterns deep in mathematical sets locking up the human brain but not that people could reproduce them sufficiently in a medium as crude as stencils and spraypaint.
posted by tavella at 7:35 PM on February 27, 2022


You likely already know this, but Snow Crash is in this territory, BlackLeotardFront.
posted by Songdog at 7:55 PM on February 27, 2022 [5 favorites]


BlackLeotardFront: qntm has There Is No Antimemetics Division, which both puzzled and horrified me with its entirely reasonable humans dealing with monsters (antimemes) that destroy cognition and any knowledge of their existence.
posted by k3ninho at 12:33 AM on February 28, 2022 [2 favorites]


A major element of Blindsight by Peter Watts is a version of this concept—his vampires (super-predators that eat humans and rely on mimicry to get close to their prey) are repelled/traumatized/injured by seeing right angles (hence the use of crosses as an anti-vampire defense).

(I introduced a rather extreme version in the Laundry Files stories, starting in The Concrete Jungle: the implications are still unfolding, ten books later.)
posted by cstross at 1:56 AM on February 28, 2022 [13 favorites]


In a few moments, he will think of the funniest joke in the world, and he will die laughing.
posted by eustatic at 4:18 AM on February 28, 2022 [1 favorite]


eustatic, did you even read the FAQ? *plonk*
The comp.basilisk community does not want ever again to see another posting about the hoary coincidence that Macroscope appeared in the same year and month as the first episode of the British TV program Monty Python's Flying Circus, with its famous sketch about the World's Funniest Joke that causes all hearers to laugh themselves to death.
posted by zamboni at 5:19 AM on February 28, 2022 [3 favorites]


*plonk*

*sigh*. I haven't thought about my killfile in a long time. Where have you gone, Kibo? The Internet turns its lonely eyes to you.
posted by The Bellman at 9:08 AM on February 28, 2022 [5 favorites]


Oh wow. I was talking about The Parrot to a friend not that long ago, but I'd forgotten the name and it's now buried under a ton of "Roko's Basilisk" junk in Google, and consequently hard to find if you don't know where to look.

The topic came up because there's a whole line of research right now in "adversarial machine learning", that is, ML techniques that counter other ML techniques. Examples include a pair of sunglasses you can wear that breaks facial-detection algorithms, or a sticker on your car that foils automatic number plate recognition systems on toll roads. Right now the field is in its infancy, but... watch this space. It's going to be A Thing.
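
(If you want the flavor of how these attacks work, the standard move is the adversarial perturbation: nudge an input a tiny amount in whatever direction most increases the model's error. Here's a rough sketch of the fast gradient sign method against a toy logistic-regression "face score"; the weights, patch size, and epsilon are all invented for illustration, not taken from any real detector.)

    import numpy as np

    # Toy stand-in for a face detector: logistic regression over a
    # flattened 8x8 grayscale patch. Real systems are deep nets, but the
    # gradient trick is the same in spirit.
    rng = np.random.default_rng(0)
    w = rng.normal(size=64)
    b = 0.1

    def face_score(x):
        """Hypothetical probability that the patch contains a face."""
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    def fgsm_evade(x, epsilon=0.05):
        """Fast gradient sign method: move each pixel a small step in the
        direction that makes the 'face' score drop fastest."""
        p = face_score(x)
        grad = (p - 1.0) * w             # gradient of the 'face' loss w.r.t. pixels
        x_adv = x + epsilon * np.sign(grad)
        return np.clip(x_adv, 0.0, 1.0)  # keep pixels in a valid range

    x = rng.uniform(0.0, 1.0, size=64)   # pretend this is a face crop
    print("before:", face_score(x), "after:", face_score(fgsm_evade(x)))

The printed-accessory attacks work roughly the same way, just with the perturbation constrained to something you can physically wear or stick on a car.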

Anyway, most adversarial ML techniques just cause the model to malfunction or fail to recognize a pattern it otherwise would. But there's no reason to stop there: a device's camera could itself be an intrusion vector. Lots of developers haven't quite realized that a camera facing out onto a public street is potentially like an un-sanitized database input on a "send us an email!" webform. Any time you have un-sanitized inputs flowing into complex software (and ML models are pretty complex), there's the potential for weird stuff happening.
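
(Concretely, and purely as a hypothetical: picture a tolling system that OCRs a plate and writes the string straight into a database. That string is attacker-controlled, since anyone can paint whatever they like on a plate-shaped rectangle, so it should be handled like any other untrusted form input, e.g. bound as a query parameter rather than glued into the SQL. The table and function names here are made up.)

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE toll_events (plate TEXT, charged_cents INTEGER)")

    def record_toll(plate_text, charged_cents):
        # plate_text comes from an ML/OCR pipeline pointed at a public
        # street, so treat it exactly like untrusted user input: bind it
        # as a parameter instead of formatting it into the SQL string.
        conn.execute(
            "INSERT INTO toll_events (plate, charged_cents) VALUES (?, ?)",
            (plate_text, charged_cents),
        )

    # A plate deliberately painted to look like an injection payload gets
    # stored harmlessly as data instead of being executed as SQL.
    record_toll("AB12'); DROP TABLE toll_events;--", 250)
    print(conn.execute("SELECT plate FROM toll_events").fetchall())

Nothing exotic there; it's just that "output of a camera pipeline" doesn't always get mentally filed under "user input".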

So while I don't really think that something like The Parrot is going to melt people's brains and require all of us to wear goggles while walking around on the street, I think it's entirely possible that we'll see a cat-and-mouse game between Basilisk and anti-Basilisk software, specifically targeting systems like electronic tolling, face recognition/detection, security cameras, anti-counterfeiting detection, OCR, etc.
posted by Kadin2048 at 2:23 PM on February 28, 2022 [3 favorites]


Had to read to the end to find out the copyright date. Immediately made me think of cstross's Laundry Files. Thanks for pointing out the Blindsight connection; I had almost forgotten that story.
posted by Phredward at 4:31 PM on February 28, 2022 [1 favorite]


...but now I'm curious about other "memetic weapons" like these. Any I should look up?

The plot of Infinite Jest revolves around a film that is so compelling it's impossible to look away.
posted by logicpunk at 8:25 PM on February 28, 2022


a pair of sunglasses you can wear that break facial-detection algorithms

The ultra-cheap Chinese phone I had before this one had a face recognition unlock feature that never worked for me because it wouldn't register that I have one. Something to do with wearing an untrimmed full beard and half-height reading glasses that usually sit some way down my nose, I suspect. I was quite chuffed to find that my default look functions, completely by accident, as some degree of dazzle.
posted by flabdablet at 8:37 PM on February 28, 2022 [2 favorites]


Lots of developers haven't quite realized that a camera facing out onto a public street is potentially like an un-sanitized database input on a "send us an email!" webform. Any time you have un-sanitized inputs flowing into complex software (and ML models are pretty complex), there's the potential for weird stuff happening.

This seems a bit overly pessimistic to me. I actually think many ML models are /safer/ than the equivalent signal processing code they are (often) replacing: the signal processing code is often task-specific hand-coded C, whereas the ML model executor is shared by myriad different models and is extremely well tested both for functionality and security (e.g., fuzzing... and you can even look at the various supported models as providing their own kind of fuzzing).

As a point of experience, I've recently been fixing overflow bugs in a popular open source audio package. These tend to be caused by assumptions about the signal that don't hold if you feed in random input (fuzzing). You take a minuscule performance hit to ensure the overflows really can't happen, but we're adding these fixes years-to-decades later, and having to unravel pretty damned complicated sigproc code to make sure we're fixing things well.
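
(Not from that package, just a toy illustration of the shape of these bugs: 16-bit samples, an "it'll never clip" assumption, and the widen-then-clamp fix that costs almost nothing. The function names and values are made up.)

    import numpy as np

    # Two int16 buffers near full scale: the kind of input a fuzzer finds
    # immediately but "reasonable" test signals never exercise.
    a = np.full(4, 30000, dtype=np.int16)
    b = np.full(4, 30000, dtype=np.int16)

    def mix_buggy(x, y):
        # Assumes the sum stays inside the int16 range. Near full scale it
        # silently wraps around to large negative values instead.
        return x + y

    def mix_fixed(x, y):
        # Widen to int32 before adding, then clamp back into int16. The
        # extra cast and clip are the tiny performance hit in question.
        s = x.astype(np.int32) + y.astype(np.int32)
        return np.clip(s, -32768, 32767).astype(np.int16)

    print(mix_buggy(a, b))  # wraps: [-5536 -5536 -5536 -5536]
    print(mix_fixed(a, b))  # clips: [32767 32767 32767 32767]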

OTOH, when coding an ML kernel, you take the minuscule performance hit up front because you have no assumptions about the signal coming in; you just have to assume it'll overflow, and act appropriately.

So if you're willing to accept that the ML model execution code is decently hardened (and getting better over time), the threat surface shifts to the input device itself (no change vs. non-ML methods) or to the handling of the model outputs. The model output handling is going to be domain-specific; classifier outputs are hard to handle wrong, whereas text will need to be treated like any other "); DROP TABLE comments;" text. Again, not much different from the non-ML case.
posted by kaibutsu at 12:35 PM on March 1, 2022 [4 favorites]


Hey! My lab got a shout out!
posted by KeSetAffinityThread at 3:09 PM on March 1, 2022


It's interesting to compare these theoretical musings to the things that are actually unthinkable and unimaginable in US anglophone culture, like a truth and reconciliation movement for the Atlantic slave trade, or independence from fossil fuels as an energy source
posted by eustatic at 9:47 PM on March 7, 2022 [1 favorite]




This thread has been archived and is closed to new comments