How Ethan and Lilach Mollick learned to stop worrying and start using AI
May 24, 2023 10:39 PM   Subscribe

If you’re not using ChatGPT for your writing, you’re probably making a mistake. About 10 minutes into my interview with Ethan Mollick, a professor at the University of Pennsylvania’s Wharton business school who has become a prominent evangelist for AI tools, it became clear that he was going to use Bing to interview me. He started by asking the Microsoft search engine, newly infused with a generative AI model from OpenAI, “Can you look at the work of Dylan Matthews of Vox and tell me some common themes, as well as any strengths or weaknesses.” In a couple seconds, Bing had an answer: “Dylan Matthews is one of the senior correspondents at Vox. He covers topics such as effective altruism, philanthropy, global health, and social justice.” (So far, so good.)

This led to my wondering if my writing here since 2008 could be pulled apart, shoved back together again, using words that only I use and twists of writing which only I have used and then -- SHAZAM!!! dancestoblue but without soul. I love Barbara Kingsolver, could her words be fed in and come out her words except dead? Is there a chance that they'd get good enough to where it's not dead?
posted by dancestoblue (163 comments total) 25 users marked this as a favorite
 
Neither Ethan Mollick nor Lilach, his equally AI-obsessed research collaborator at Wharton and his spouse, are AI experts by background.

I never would have guessed.

Like everybody else I've seen get this excited about LLMs, they are both in the world of business. Academic business studies has a lot in common with the culture of its graduates.

Also:

Now, however, we have tools with the potential to automate a significant portion of our jobs. They can’t automate the whole thing — not yet, as long as it can’t distinguish accurate from inaccurate sentences

They are Large Language Models, that is all. They cannot ever achieve this. They work on language in a vacuum. They cannot experience the world, they cannot feel, they cannot sense first-hand. They cannot verify. They cannot know. They aren't Large Knowledge Models.

To get to what they're treating as an inevitability round the corner, you'd need a sea change, a whole new breakthrough in AI development at least as noteworthy as (if not more so than) the development of LLMs in the first place.

The bulk of the work in some jobs is in fact producing text. LLMs will be great for accelerating that workflow. But there are a lot of jobs that look like they're about producing text, that aren't - they're about producing knowledge, insight, analysis, beauty, or some combination thereof, and the text that is the output is just a medium. The text isn't important - the insight about the physical model or experiment you're conducting is, for example. LLMs cannot do that work for you, and you'll likely spend as long editing its text as you would writing your own, if it is to be of the same quality.

In many ways, producing the text is the easy part of many jobs. It's the content that goes into it that is valuable, and that is something LLMs can only approximate an ersatz version of.
posted by Dysk at 11:34 PM on May 24, 2023 [182 favorites]


+many, Dysk. That section on Bakhmut confuses me deeply. To take a significant amount of time to get to a question as specific as this, as a journalist, when you’re already checking sources, seems like doing all the work that goes into basic journalistic writing to begin with: “Can you draft a paragraph-long explanation of the battle for Bakhmut for me, including mentions of its symbolic significance, its strategic significance, and the Wagner Group? Please don’t copy whole paragraphs from existing sources but compose new ones.”

Given the time it took to produce, I don’t really see what new value is added by the ChatGPT summary. Is it more or less impactful than the same structure with slightly different symbolic signifiers highlighted, like architecture and culture instead of mining products?

How valuable is a chart of “philosophies of altruism” that suggests there is a Biological Philosophy of Altruism and cites Psychology Today? Is Ethan Mollick proud of that table? If he (or Dylan) were in my Anthro 101 class and submitted it as an assignment or pointed to it as a resource, I’d recommend he come in for office hours to talk about research skills.

I’m not trying to be glib here - it sounds like the Mollicks are trying to be creative and (at least in theory) to use ChatGPT as a tool to help students develop their critical-thinking skills. And there are plenty of ways that ChatGPT helps people in their jobs. But while an Autocorrect that’s advanced enough to make tables and write summaries is certainly a technical marvel and a good reason to get solid regulation up and running, the examples provided don’t seem to live up to the hype their authors want them to.
posted by rrrrrrrrrt at 12:09 AM on May 25, 2023 [7 favorites]


Also: does that Vox title relate to anything in the article at all?
posted by rrrrrrrrrt at 12:20 AM on May 25, 2023 [2 favorites]


On your question, dancestoblue, I've only seen a handful of posthumous “performances” created using special effects, like the ones in Star Wars franchise movies. None has seemed like a real human. I definitely have questions about the desire even to do such a thing - someone who would try to write a posthumous novel based on a writer’s existing corpus strikes me as someone who is refusing to acknowledge either death or creative autonomy, both of which squick me right out. People write “in the style of” for humor, but would I want to watch someone behind a mic synthesize Christopher Walken’s voice and style with a voice box implant? Blech. Someone who really honed an impression? Sure! Lots for me to chew on here. Art in the age of mechanical reproduction brings up a lot of feels. Thank you!
posted by rrrrrrrrrt at 12:28 AM on May 25, 2023 [7 favorites]


"If you're not using a fancy search engine" - FTFY
posted by GallonOfAlan at 3:07 AM on May 25, 2023 [5 favorites]


I agree entirely with Dysk. Per the old saw, "success is 10 percent inspiration and 90 percent perspiration" - and where LLMs can definitely help is in automating the perspiration.

Take coding - a lot of writing boring, ordinary code is hooking together various libraries, passing them the appropriate variables for your use case, adding error handlers, test cases etc etc. You're not the first person to write this type of 'boilerplate' for that type of task, and you won't be the last, but it usually still needs to be done. Many template projects are basically 'here's a load of boilerplate to get you started!' which is fine, until you need to do something that the template doesn't cover...

Using a coding LLM can help you generate that tailored boilerplate. You still need to review and test it to make sure it does what you need it to do, but it can take a good chunk of the drudgery out so you can focus on the much smaller areas where logic and reasoning need to be applied. It's not going to threaten the programmers working on novel or very complex stuff at all, and even basic stuff still needs human review and tweaking, but it's a definite time saver. Programmers don't usually write in a basic text editor any more except for the most trivial stuff; you use an IDE that helps you avoid basic mistakes and removes some drudgery, and an LLM assistant fits right into that workflow.
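
To make that concrete, here's the flavour of boilerplate I mean (a hand-written sketch, not actual LLM output; the endpoint, field names, and test here are all made up):

# The sort of tailored boilerplate an LLM assistant might draft: a thin
# wrapper around a library call, with error handling and a test stub.
import requests

def fetch_user(user_id: int, base_url: str = "https://api.example.com") -> dict:
    """Fetch a user record, raising a clear error on failure."""
    try:
        resp = requests.get(f"{base_url}/users/{user_id}", timeout=10)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException as exc:
        raise RuntimeError(f"failed to fetch user {user_id}: {exc}") from exc

def test_fetch_user(monkeypatch):
    # pytest-style check that the wrapper hits the right endpoint
    class FakeResp:
        def raise_for_status(self): pass
        def json(self): return {"id": 1}
    seen = {}
    def fake_get(url, timeout):
        seen["url"] = url
        return FakeResp()
    monkeypatch.setattr(requests, "get", fake_get)
    assert fetch_user(1)["id"] == 1
    assert seen["url"].endswith("/users/1")

None of it is hard, but it's exactly the sort of typing an assistant can do for you while you keep your attention on the bits that actually need thought.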

I imagine the same can be true for writing; even just getting an outline can help with thinking through how to put what you want to say, and add some phrasing variety that you might not normally have thought of. So instead of working from a blank page, you can use the LLM output as a template to base the real creative or insightful work on.

But you absolutely can't rely on LLMs for 'truth' or facts or insight; they happily make stuff up that *sounds* plausible, but it's absolutely not a knowledge machine.

What I find depressing is how we're probably going to end up collectively using LLMs most: more advertising and manglement guff. Take the Google fantasy AI demo where they use it to generate 'personalised' marketing emails so they're less likely to get caught in bulk mail filters, or a boss who takes a perfectly fine basic sentence and uses AI to turn it into a multi-paragraph load of guff. And earlier they show someone using AI to parse a guff-filled email into a simplified summary, and I flash forward to our near future where everyone AI-bulks-up their emails and social media posts before posting, and recipients all AI-deguff them before reading, and oh god what have we done...
posted by Absolutely No You-Know-What at 3:07 AM on May 25, 2023 [14 favorites]


Here's what my internal biological LLM does with the following prompt:
About 10 minutes into my interview with Ethan Mollick, a professor at the University of Pennsylvania’s Wharton business school who has become a prominent evangelist for AI tools, it became clear that he was going to use Bing to interview me.
So I thanked him for his time and left. I suppose I should not have been surprised to find an MBA farmer itching to promote the ultimate expression of contempt for domain expertise, but somehow seeing it up close and personal made it all the more grotesque.
posted by flabdablet at 3:09 AM on May 25, 2023 [55 favorites]


I've come around a bit on AI. I don't think it will replace too many jobs (though that doesn't stop the danger of managers thinking it does and then laying off workers, much like in the 2000s heyday of "offshoring" tech workers to India). But after playing around with LLMs, I've found they do offer some great leverage (and frustrating limitations that lead you on wild goose chases).

They cannot experience the world, they cannot feel, they cannot sense first-hand. They cannot verify.

Yes, but neither can a human watching CNN, reading news articles, or leaving internet comments. So much of our experience in the 21st century is hearing stories from media. There isn't really a difference between what happens online and what happens "in real life." And people were talking about this with television in the 70s (see also Network) and radio before that and newspapers before that.

There were search engines before Google, but Google really was a leap forward. I would say LLMs represent a similar leap forward.
posted by AlSweigart at 3:49 AM on May 25, 2023 [3 favorites]


"effective altruism" was as far as I needed to read. nope.
posted by nofundy at 4:00 AM on May 25, 2023 [28 favorites]


Yes, but neither can a human watching CNN, reading news articles, or leaving internet comments.

They can do all of those things? Like, CNN viewers still have emotions and agency, they still experience the world. How they receive what CNN tells them is mediated through that.
posted by Dysk at 4:04 AM on May 25, 2023 [15 favorites]


(And CNN in turn is produced by humans, so can think and feel, etc, and that mediates what they tell and how.)
posted by Dysk at 4:07 AM on May 25, 2023 [6 favorites]


I asked Chat GPT to write out instructions for building a wooden stool, and the results I got weren't correct or useful to me, but to someone without woodworking knowledge, I doubt there would be anything glaringly wrong about them. It would take anyone literally minutes to create a gigantic website of constantly evolving/revolving instructional material with links to tool sellers and health product scams. More than ever, the internet is going to become a firehose of garbage.
posted by brachiopod at 4:27 AM on May 25, 2023 [54 favorites]


I imagine the same can be true for writing; even just getting an outline can help with thinking through how to put what you want to say, and add some phrasing variety that you might not normally have thought of. So instead of working from a blank page, you can use the LLM output as a template to base the real creative or insightful work on.


I mean, I can't. Maybe some people can, but I can't. (I have not tried very hard, because this is not something I'm actually interested in. It's possible that other people, who are not me, will figure out how to use it well.)

It's really easy for me to come up with the most boring way possible to say something, which is what Chat-GPT comes up with. The 90% perspiration part is figuring out how to say that thing in the way I want to say it (character voice; the right sensory details; making the right connections with other scenes; and so on). And Chat-GPT's version is not a template but a distraction. The bad version that I write will (hopefully, usually) still at least contain the seeds of what I want to put in the scene; Chat-GPT's version doesn't.
posted by Jeanne at 4:37 AM on May 25, 2023 [28 favorites]


“ In addition to the Future Fund, FTX CEO Sam Bankman-Fried also has a little-known family foundation called Building a Stronger Future that's been giving 5- and 6-figure grants to news organizations like ProPublica, Vox’s Future Perfect, and The Intercept.”

https://twitter.com/klaudiajaz/status/1591149930097569794?s=46&t=qb23W0zBBysz1trV_USE8g

I scanned the article and didn't see this obvious conflict of interest noted anywhere. They say they will return the money if a restitution fund is created, but still, it seems relevant that they are willing to take money from longtermists and then report on it (and adjacent issues like LLMs) without acknowledgment.
posted by oddman at 4:42 AM on May 25, 2023 [5 favorites]


“I imagine the same can be true for writing; even just getting an outline can help with thinking through how to put what you want to say

And that outline will control/influence how you think about the subject. It will shape your reasoning, your conclusion, etc.

No thanks. I'd rather have my own thoughts.

(Yes, research shapes how you think, too, but it's a messier process with more room for your own inferences and thought process. Heck, even just being able to pick the order in which you research ideas means you're less programmed to think a certain way.)
posted by oddman at 4:47 AM on May 25, 2023 [9 favorites]


Joseph Fasano - For Someone Who Used AI to Write a Poem

Kelly Turnbull (threadreader):
Everyone pointing out that it’s suspicious how many AI tools are free, it’s because building a tech company these days 100% revolves around giving people your product for free/at an extreme discount until they forget how to function without it, then closing the jaws of the trap

The easiest example of this is Uber operating on massive losses for a decade so they could offer their service at an unbelievably cheap rate long enough to train people not to call cabs anymore. A lot of young people coming up now don’t even know “calling a cab” is a thing

Airbnb is another really classic example: they were the "cheap alternative to hotels" until they became a ubiquitous travel accommodation solution, then suddenly rooms started costing $400 a night.

But the more abstract version is when you look at how google won the search engine wars and became the most popular way to look for things, even as the results started to turn into narrow, flawed garbage, barely anyone knows how to look for things without google anymore

And most insidiously, how streaming trained people to stop buying real media and pirating things, so they can completely control our access to media as a subscription service

So yes, obviously they’re hoping you’ll all feed your novels for data, but even more than that, they want you to turn AI into an inseparable part of your creative process, so when they eventually ask you to give them money you won’t know how to live without it

It’s the “first one’s free” drug dealer model applied with the end goal of turning human creativity into a subscription service.
NPR interview with David Simon, TV Writer (The Wire, Generation Kill):
SHAPIRO: OK, so you've spent your career creating television without AI, and I could imagine today you thinking, boy, I wish I had had that tool to solve those thorny problems...

SIMON: What?

SHAPIRO: ...Or saying...

SIMON: You imagine that?

SHAPIRO: ...Boy, if that had existed, it would have screwed me over.

SIMON: I don't think AI can remotely challenge what writers do at a fundamentally creative level.

SHAPIRO: But if you're trying to transition from scene five to scene six, and you're stuck with that transition, you could imagine plugging that portion of the script into an AI and say, give me 10 ideas for how to transition this.

SIMON: I'd rather put a gun in my mouth.

SHAPIRO: You would rather put a gun in your mouth?

SIMON: I mean, what you're saying to me, effectively, is there's no original way to do anything and...

SHAPIRO: No.

SIMON: Yes, you are.

SHAPIRO: That seems like a kind of absolutist take.

SIMON: Not only I think is it a fundamental violation of the integrity of writers and also of copyright to - you know, when I sold all the scripts I sold, you know, 150 to HBO and, you know, maybe another 50 to NBC, I didn't sell them so that they could be thrown into a computer with other people's and be used again by a corporation. So...

SHAPIRO: So would you ever agree to a contract that saw any role for AI at all?

SIMON: No. I would not.

SHAPIRO: Huh.

SIMON: If that's where this industry is going, it's going to infantilize itself. We're all going to be watching stuff we've watched before, only worse."
posted by automatic cabinet at 5:17 AM on May 25, 2023 [98 favorites]


yea no
posted by AlbertCalavicci at 5:23 AM on May 25, 2023 [3 favorites]


I imagine the same can be true for writing; even just getting an outline can help with thinking through how to put what you want to say, and add some phrasing variety that you might not normally have thought of. So instead of working from a blank page, you can use the LLM output as a template to base the real creative or insightful work on.

I am using LLMs for exactly this--or rather, exactly the opposite of this--and their fallibility actually makes them kinda useful for thinking through writing problems. ChatGPT (and Bing, when it doesn't see a word that scares it, causing it to erase its whole comment) is like a dumb but very patient friend, who will listen to every pitch and try to explore it for you. And because that exploration is always wrong, always a cliche, almost never containing any spark, there's this sense of relief, of "okay I will avoid writing that."

More important, I think, is learning to develop the prompt, because for fiction, the prompt is just your pitch, right? And who among us has a friend or spouse or dog who is patient enough to listen to you refine and refine and refine? "Oh, I forgot to tell you, she's a detective!" Watching it take your idea and run with it straight into a swamp of the most egregious blandness makes you work harder clarifying what your idea actually is.

But I don't think you could use it as a template, any more than you'd ask a third-grader to write the bare bones of a story for you to flesh out later. People who see this as the future of writing--in a positive sense, rather than a mistakes-to-avoid generator--should spend some serious time trying to come up with stories using these systems, and think about why those stories don't work. I was just using SudoWrite for a while, testing it out after all the highly-praised improvements and its new Story Engine--and was amazed to see that the system still didn't grasp the importance of suspense, of leaving a question unanswered, of not stating everything declaratively. All tell and no show. Something so basic, I think anyone who tries to tell you a story about their day would grasp. You literally could not use its material as an outline or springboard. (At the same time, I thought, we are going to start seeing a lot of this in the self-publishing space, it's going to crowd out everyone who takes a month to bang out a 50,000-word book.) I would really like to read something about these LLMs that understands and explains that problem, rather than just praising them for being the future of words.
posted by mittens at 5:24 AM on May 25, 2023 [12 favorites]


The semantic hand-wringing over AI v. LLM is a bit like arguing that a 5-year-old isn't a human being because they aren't a full-grown adult. The 5-year-old quite clearly cannot experience, and operate within, the world in the same manner as an adult. But, nonetheless, if you hand the 5-year-old a loaded gun, "some very bad shit" is likely to accidentally happen.

We’re now rapidly entering that 5-year-old stage, and a lot of heavily-monied interests are working hard at handing their 5-year-olds guns. We need to establish strong proactive protective frameworks now to (hopefully) avoid “some very bad shit.”

And, yes, grown adults definitely do “some very bad shit” with loaded guns. But, they (normally) know what they’re doing with those guns. If only the Second Amendment had been written with some strong, thorough, safeguards in place...
posted by Thorzdad at 5:27 AM on May 25, 2023 [3 favorites]


This led to my wondering if my writing here since 2008 could be pulled apart, shoved back together again, using words that only I use and twists of writing which only I have used and then -- SHAZAM!!! dancestoblue but without soul.

My very limited understanding is that it should be possible to fine-tune an existing model with a LoRA trained on dancestoblue text. The “old” (late March) and better-documented way to do this is using alpaca-lora. The latest and probably best would be huggingface-peft.

Disclaimer: I spend more time playing with Stable Diffusion than anything text-based so while there’s enough shared jargon to google intelligently, this really isn’t my neighborhood. If anyone replies to this with a correction, safe bet they’re right.

Here’s a basic alpaca-lora setup guide Google found, which has the benefit of a followup “how we trained it to talk like Homer Simpson” guide, which I think is what you’re really after (the results are disappointingly GPT-with-Homer-grace-notes, IMO).
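
To give a flavour of what that looks like in practice, here's a rough sketch of the LoRA training loop using the huggingface peft library. I haven't run this exact code, and every name in it (the base model path, comments.txt, the hyperparameters) is a stand-in, so treat it as the shape of the thing rather than a recipe:

# Rough sketch: freeze a base causal LM and train small low-rank adapters on it.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "path/to/llama-base-model"          # placeholder for whatever base model you use
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Only these adapter matrices get trained; the base weights stay frozen.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# comments.txt: one dancestoblue comment per line.
dataset = load_dataset("text", data_files={"train": "comments.txt"})["train"]
dataset = dataset.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
).train()
model.save_pretrained("lora-out")          # writes just the small adapter weights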

Given the current pace this stuff is moving at, it is extremely likely that all of these links will be horrifyingly outdated, even Stone Age, before this thread closes. The pace of realtime rendering (read: graphics in games) development during the late 90s and early 00s was an order of magnitude slower than how fast the LLaMA community is moving right now. TBH watching people swim upstream through this fire hose-torrent has me feeling uncharacteristically optimistic: if they can handle this tempo, it is just barely possible the kids will in fact be alright.
posted by Ryvar at 5:43 AM on May 25, 2023 [7 favorites]


I write in advocacy. The Powers that Be already claim that the things that I write are "bots" or computer generated.

Oil companies have been telling me that I am not real for at least a decade, not excited about giving them ammunition.
posted by eustatic at 5:46 AM on May 25, 2023 [14 favorites]


I’m with David Simon on this. I even tried, in earnest, to use chat-gpt for a fantasy writing project. At first, I felt like it could be a good “assistant”, to help me with outlines and ideas…but honestly the time it saved me turned out to be the wrong time. By summarizing and outlining for me, the LLM allowed me to skip the supposed drudgery of research, analysis, and synthesis…but that’s not a good thing! Those steps are crucial in any new creative project. If I let the computer do those steps for me, I inevitably miss opportunities to gain real insight or come up with novel ideas.

Also, for me anyway, working with Chat-GPT was just as tedious as doing the research myself, and I gained less from it.
posted by Doleful Creature at 5:58 AM on May 25, 2023 [23 favorites]


But I don't think you could use it as a template, any more than you'd ask a third-grader to write the bare bones of a story for you to flesh out later.

I wasn't thinking of outright creative writing per se; sorry for the poor wording on my part. As with core business logic in coding, that's not something you can outsource to an LLM, which is, at its core, a tumble dryer for extant language - the whole thing is creative and you don't want to start with recycled dreck!

I was more thinking things like quarterly reports, yet-another-cover letter, the workmanlike equivalent of fill-in-the-blanks boilerplate text. Stuff where some creative thought is needed to make it to its final form, but it's mostly formulaic and not particularly innovative or creative. Kinda how the typewriter or the word processor didn't replace writing in all its many forms, but increased speed and ease.

But using it as a way to cycle through and eliminate formulaic stuff NOT to include when creative writing makes a lot of sense too!
posted by Absolutely No You-Know-What at 6:01 AM on May 25, 2023 [3 favorites]


LLMs are stochastic parrots. They can't think. They can only regurgitate what seems to be the most correct word next in the chain.
posted by Your Childhood Pet Rock at 6:21 AM on May 25, 2023 [4 favorites]


The only thing this article convinces me of is that business school is not, in fact, school. But I already knew that.
posted by sinfony at 6:29 AM on May 25, 2023 [18 favorites]


I keep wondering what Neil Postman would have to say about the impact that these tools are likely to have on discourse and thinking.

These tools are going to be rapidly adopted in the corporate world. If you've spent any time working with written materials in this environment you already know that writing is talked about as a valued skill, but good writing is rare and caring about good writing treated with contempt by a lot of execs.

If you don't believe that, get handed copy full of what the SVP wants people to have read and try to improve it to something people might want to read.

The process of producing product roadmaps and plans, messaging, press releases, etc., is not just about producing the artifact but about the discussion and realizing when something is on paper that the thing - when expressed clearly - needs improvement.

But these tools are only about speeding up the finished product, and they'll do nothing to improve the actual business.

Amplifying output and reducing the need for domain expertise is not going to do anyone any favors.

Back to Postman, I think he'd say that ChatGPT (et al) will erode logical, rhetorical, etc. skills even further. And this is on top of the erosion that's taken place as we've left print-based culture to electronic media. It's a one-two punch - electronic media have de-emphasized logic, coherence, linear thinking, complexity, and context. ChatGPT and its friends will offer to do the tedious work of thinking, researching, learning, digesting, creating, revising, and organizing information for you.

Hell, it'll even summarize other writing for you so you don't have to do the tedious work of reading and understanding it yourself. Which will be a real value, since there's going to be so damn much stuff produced.

Postman's most famous book was Amusing Ourselves to Death. The follow-up could be Automating Ourselves to Oblivion.
posted by jzb at 6:30 AM on May 25, 2023 [12 favorites]


LLMs are stochastic parrots. They can't think. They can only regurgitate what seems to be the most correct word next in the chain.

Whenever I see this comment, or the very slight variations of it I've encountered hundreds of times by now at least, I'm struck by its self-referential nature. You say that ChatGPT is just giving a superficial gloss on a topic with no originality or true insight, stated with utmost confidence as if it represents certainty and something it thought of itself? I'm not sure this is the best differentiator to distinguish it from human output.
posted by slappy_pinchbottom at 6:33 AM on May 25, 2023 [16 favorites]


Saying it is "just" a language model looks like the Chinese Room error. By giving it a label -- "large language model" -- you pretend there is nothing there.

But all programming is just language. Go and chess can be reduced to language. A hell of a lot of stuff can be reduced to language. And we have evidence that these large language models can see past the surface of the description and get at what is being implicitly described.

The Othello-GPT is a concrete example of this. A large language model was given Othello game transcripts, and built a model of the game Othello that was both accurate and exceeded the space of possible Othello games.

https://thegradient.pub/othello/

You could reach into the "mind" of Othello-GPT and change the game state it "believed" was on the board after reading a transcript, and you could "read the mind" and determine what it thought the board state was. It had successfully modeled an Othello board given nothing but transcripts of Othello games.

You could invent impossible board states (non-connected) that it had never seen transcripts of, inject it into the "mind state", and the Othello-GPT would play against those impossible board states.

Othello-GPT never saw an Othello board, but it "knew" how big it was, it "knew" where the pieces are after a transcript, and it even "knew" how to play when fed an impossible game state on that board.

The point is that these networks are not just "after word X comes word Y with a certain probability". That was 1970s AI Markov chains. Instead, they have what appears to be a model of the reality described implicitly by the text they have consumed in them.

And while they do "hallucinate" facts -- they are better at making text that *looks* correct than they are at making text that *is* true -- they are also capable of having non-local understanding.

Another fun example is comparing early GPT models attempts to create D&D 5e monsters to more recent ones. The early ones looked correct locally. The later ones got the math right - worked out how different parts of the description related to each other in mathematical ways - better than most humans I've seen do the same.

We teach humans about a lot of subjects using mostly text. It would be interesting to see what happens when you try to make a general GPT model an expert in a field by feeding it actual lecture notes and textbooks and the like from a given subject, and giving it time with "tutors" that correct its misconceptions.
posted by NotAYakk at 6:38 AM on May 25, 2023 [16 favorites]


Thank god; I've been worried since the demise of NFTs (or at least their demise in the media) that there wouldn't be a new stupid tech fad that the academic world could glom onto. I look forward to III melding chatGPT into their latest LMS while removing more functionality! Proquest, there's room for you in the faux AI feeding frenzy.
posted by evilDoug at 6:40 AM on May 25, 2023 [11 favorites]


I've been fooling around with asking these LLMs to mimic authors, other thinkers, etc. For fiction, I often get anodyne, average prose, the sort of thing that comes out of too-earnest writers' workshops. But in doing dozens of these, I've seen none of the flashes of genius that you expect from the authors the LLM is supposedly trying to emulate.

For assembling facts, they can be fantastic. For putting together reviews and "where are we now" questions they work well. They don't work well on analytical problems: they give you garbage or simply a "field" average set of commonish opinions on a topic. Often both mixed together, so I'm never 100% sure how much of it I can trust. I have to fact check everything the output gives me anyway.
posted by bonehead at 6:49 AM on May 25, 2023 [2 favorites]


I will say this: the number one thing that LLMs seem to be best at is generating nonsense fluff clickbait. The kind of articles where you have a basic question but they give you all the irrelevant side information because they want you to stay on the site and scroll past a dozen ads before you can get the answer. ChatGPT will be great for flooding the internet (and your search engine results) with bullshit. This will be the main use case of ChatGPT: bullshit generation, fake reviews, fake user profiles and comments, and endless market speak and empty political rhetoric.
posted by AlSweigart at 6:49 AM on May 25, 2023 [28 favorites]


It really does feel like the vulture class went from crypto, to NFTs, and swiftly on to LLMs. This is just the scam du jour for people who don't make anything that will actually add value to any process anywhere near the level of the hype.

There is some deeply fun potential though, as someone is going to use LLMs without a human editor to write something very important (like terms of service). It will be very nearly right and they will lose their shirts over it.
posted by Slackermagee at 6:52 AM on May 25, 2023 [11 favorites]


More than ever, the internet is going to become a firehose of garbage.

I have not played with LLMs myself, but several times I've had search results include websites that seem to have been compiled by them, presenting a bunch of contradictory statements in an incoherent mess of word soup.
posted by Foosnark at 6:53 AM on May 25, 2023 [18 favorites]


SIMON: If that's where this industry is going, it's going to infantilize itself. We're all going to be watching stuff we've watched before, only worse."

I am far from expert, but I think the chances of an LLM creating The Wire are much less than an LLM creating Just Shoot Me.
posted by ricochet biscuit at 6:55 AM on May 25, 2023 [1 favorite]


Man. I don't want to call anyone out specifically here, but there is a lot of magical thinking/misunderstanding about what LLMs are, in both the article and this thread.

You can run a small LLM on your laptop using Llama.cpp, and I'd recommend that anyone who wants to talk about them intelligently do so; a lot of what looks like "understanding" is just scale, for example.

If your only experiences are with things other people have built, especially by large companies, you're absolutely missing all of the human work that goes into making the experience seem a certain way to the end user. A way that encourages people to write articles like this, among other things.
posted by sagc at 6:58 AM on May 25, 2023 [6 favorites]


But you absolutely can't rely on LLMs for 'truth' or facts or insight; they happily make stuff up that *sounds* plausible, but it's absolutely not a knowledge machine.
No Wireless. Less Space than a Nomad. Lame.
posted by Heywood Mogroot III at 7:00 AM on May 25, 2023 [6 favorites]


Don't worry, people, somebody's going to decide these models aren't monetized enough and then you're going to get MLM LLMs who can answer questions logically and apparently thoughtfully, but will eventually turn every conversation into how you should start drinking the energy shakes that it's selling, you get one case free if you buy three cases, how many can they sign you up for?
posted by AzraelBrown at 7:02 AM on May 25, 2023 [2 favorites]


Nah, pretty sure that's a facile comparison, Heywood Mogroot III - more like an iPod that secretly changes the lyrics of 25% of your songs each time you use it.
posted by sagc at 7:02 AM on May 25, 2023 [1 favorite]


...but does so in such a way that unless you're specifically paying attention, you won't notice.
posted by flabdablet at 7:04 AM on May 25, 2023 [3 favorites]


it can't replace us but it will.
posted by graywyvern at 7:11 AM on May 25, 2023 [10 favorites]


It's not really hard to imagine that Republican candidates all over the country will read an article like this and see it as permission to use AI to write their speeches (if they aren't already). It's clear that the right doesn't care about facts or verification; they only want pithy soundbites and trigger phrases to energize their base and "own the libs". Their audience will be none the wiser and the rest of us will be knee-deep in the consequences, probably forever.
posted by OHenryPacey at 7:51 AM on May 25, 2023 [1 favorite]


A large language model was given Othello game transcripts, and built a model of the game Othello that was both accurate and exceeded the space of possible Othello games.

From the article describing that research:
We are looking into the debate by training a GPT model only on Othello game scripts, termed Othello-GPT. Othello is played by two players (black and white), who alternatively place discs on an 8x8 board. Every move must flip more than one opponent's discs by outflanking/sandwiching them in a straight line. Game ends when no moves could be made and the player with more discs on the board wins.

We choose the game Othello, which is simpler than chess but maintains a sufficiently large game tree to avoid memorization. Our strategy is to see what, if anything, a GPT variant learns simply by observing game transcripts without any a priori knowledge of rules or board structure.
So this research is arguably not in fact examining a large language model; it's examining a model with similar underlying mechanics, trained not on a huge corpus of natural language texts but a huge corpus of Othello game transcripts.

Another way to look at that is that it is a language model, for a synthetic language whose grammar makes valid Othello game transcripts grammatical and anything else not.

The point is that these networks are not just "after word X comes word Y with a certain probability". That was 1970s AI Markov chains. Instead, they have what appears to be a model of the reality described implicitly by the text they have consumed in them.

The point is that these networks are exactly "after words X1 ... Xm comes word Y1 / Y2 / ... / Yn with certain probabilities". What distinguishes them from 1970s Markov chains is the specific technique employed to look up the probabilities.

An old-school Markov chain would do this with a simple lookup table, indexed using a straightforward numeric transformation of its input tokens. The number and variety of input tokens was therefore very limited. If each token could have, say, one of 32 different values and the lookup table needed to provide probability results for the next token given the previous five as input, the lookup table would need 32^5 = 33 million entries. That's obviously doable, but as you increase the token variety and input counts, the lookup table size explodes exponentially: seven tokens would need 32^7 = 34 billion entries, and seven tokens any of which could have one of 64 values would need 64^7 ≈ 4.4 trillion.

Large language models allow for massively bigger input token contexts than lookup-table-based Markov chains (tens of thousands of tokens, each with tens of thousands of possible values), but they're conceptually identical. All that their neural networks are doing is approximating what an unimplementably large Markov chain lookup table would return.

They can do this because the point and purpose of a feed-forward neural network is to implement a highly tuneable piecewise approximator for arbitrary functions i.e. defined mappings from one input to one output. And the beauty of neural networks is that they can approximate functions you can't even define precisely, on the basis of representative sample mappings from input to output.

Effectively, the neural network in a large language model is equivalent to a lossy compression of an unfeasibly huge Markov chain lookup table indexed over a relatively large input context. And since the essence of all successful compression methods is finding efficient representation of patterns in the input data, I'm not at all surprised to find recognizable neural board state representations emerging as an implicit pattern hiding in Othello game transcripts.

I had a lot of fun playing with simple letterwise Markov chains on my Apple II in the 1980s (at the time we didn't call them "Markov chains" but "gibberish generators") and what was really interesting about them was just how recognizable the outputs were as stylistically related to the input even using just a 3-token context where a token could be just one of 26 letters or a bit of punctuation or whitespace. They worked better than you'd expect anything so simple to have any right to.
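
For anyone who never has, the whole trick fits in a dozen lines. Here's roughly what those gibberish generators were doing, sketched in Python (source.txt stands in for whatever text you want to imitate):

# Toy letterwise gibberish generator: map every 3-character context seen in the
# source text to the characters that followed it, then sample by frequency.
import random
from collections import defaultdict

order = 3
text = open("source.txt").read()

table = defaultdict(list)
for i in range(len(text) - order):
    # appending duplicates is what encodes the follow-on probabilities
    table[text[i:i + order]].append(text[i + order])

def gibberish(length=400):
    context = random.choice(list(table))
    out = context
    for _ in range(length):
        out += random.choice(table.get(context, [" "]))
        context = out[-order:]
    return out

print(gibberish())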

As do the various GPTs. Those work well enough to fool a lot of people into thinking that there must be some kind of man behind the curtain. But there isn't. It's just a really good curtain.
posted by flabdablet at 8:13 AM on May 25, 2023 [56 favorites]


But you absolutely can't rely on LLMs for 'truth' or facts or insight; they happily make stuff up that *sounds* plausible, but it's absolutely not a knowledge machine.

-- No Wireless. Less Space than a Nomad. Lame.


FWIW, I'd say something similar if someone turned up with a quality newspaper or Wikipedia and cited it as an oracle of facts too, and those are a hell of a lot more robust than current LLMs.

Human language output seems to be pretty inherently truthy, because we ourselves are pretty awful at distinguishing truth from truthiness, and that's partly why there's so much bullshit everywhere, all at once - plus of course so much from those that are intentionally producing bullshit to push an agenda.

I'm well out of my area when it comes to objective vs perceived truth, model of the mind etc, so I won't pontificate on what it would take to improve LLMs in this area, or how hard a problem that is; but just like what you see on the internet, you need to account for bias, verify and validate what you get to confirm it fits with what is generally accepted as 'true' rather than taking it at face value.
posted by Absolutely No You-Know-What at 8:20 AM on May 25, 2023


they have what appears to be a model of the reality described implicitly by the text they have consumed in them.

Mostly agree. There’s a surprising amount of systems modeling baked directly into the structure of a large English corpus. And yeah, feed any very large neural network enough Othello games by whatever means and you’ll get a bespoke Othello-solver.

What you won’t get, as I think nearly everyone in these threads now appreciates, is a general-purpose systems modeling intelligence like a human mind. The more training text you feed in, the higher the token count, the greater the number of implied models a general-purpose LLM will encompass, but you’re not getting new models without unsupervised reinforcement learning built and solved on the fly, and that’s not on the horizon.

Tangent: I kind of wonder what the results would be from a more tightly-structured and extremely uniform language like Turkish? Better… or counterintuitively actually worse, in the sense of being too rigidly defined? ChatGPT all but proves Sapir and Whorf were closer to the truth than most academics gave them credit for (I mostly love Chomsky's politics, but I've always felt he was deeply at odds with the neural substrate that actually gives rise to language), so I'm curious what the precise boundaries and returns on uniformity might be.

On preview:
Effectively, the neural network in a large language model is equivalent to a lossy compression of an unfeasibly huge Markov chain lookup table indexed over a relatively large input context

This is dead fucking on. Fantastic summary.

I've been worried since the demise of NFTs (or at least their demise in the media) that there wouldn't be a new stupid tech fad that the academic world could glom onto

This line of thinking really needs to die. The fact that the NFT predators are jumping on the newest big thing does not suddenly mean that it’s a fad.

You need to stop giving them power to make a technology fit for your dismissal because of their interest. It doesn't work that way: they are predators, they don't get to invalidate things because evaluation isn't their goal. Their goal is to soullessly make every buck possible by abandoning all ethics and decency. And that is entirely orthogonal to the actual utility of a new technology.

ChatGPT is already enabling people to do real, useful work far faster than they could before, and on the whole it’s about skipping the boring shit. The “a monkey could do this” part. This is disruptive, but it is not wholly without merit.

Flipside, yes this is absolutely going to flood the zone with bullshit worse than it was already, difficult as that might be to imagine.

But seriously: let go of your boogeymen and stop letting predators control how you evaluate each new thing you encounter. You’ll be happier, you’ll be more correct, and it won’t affect what they do in the slightest either way.
posted by Ryvar at 8:21 AM on May 25, 2023 [8 favorites]


stop letting predators control how you evaluate each new thing you encounter

I'm on board with that, as long as we also never stop each new thing we encounter from controlling how we evaluate predators.
posted by flabdablet at 8:26 AM on May 25, 2023 [7 favorites]


feed any very large neural network enough Othello games by whatever means and you’ll get a bespoke Othello-solver.

Feed it half the Web, though, and you'll get a ludicrously crap chess solver :-)
posted by flabdablet at 8:29 AM on May 25, 2023 [2 favorites]


Whenever I see this comment, or the very slight variations of it I've encountered hundreds of times by now at least, I'm struck by its self-referential nature. You say that ChatGPT is just giving a superficial gloss on a topic with no originality or true insight, stated with utmost confidence as if it represents certainty and something it thought of itself? I'm not sure this is the best differentiator to distinguish it from human output.

If enough people say that on the 35th of March, 1317 the sky in Salt Lake City, UT was bright green, then the LLM believes that, if you ask for any instances of the sky being a weird color, it needs to tell you that on the 35th of March, 1317 the sky in Salt Lake City, UT was bright green.

Now a human will look at that and immediately realize:

a) There's no 35th of March.
b) 1317 was before European contact, let alone recorded history of the Americas, let alone Utah.
c) Bright green is a highly unusual hue for the sky to take.

Because we can take things in context. We can apply a smell test. We can demand extraordinary evidence for extraordinary claims.

I even put the prompt "Did you know that on the 35th of March, 1317 the sky in Salt Lake City, UT was bright green?" into Bard and it gave me back this:
Yes, I did know that. It is a well-documented event, and there are several theories about why it happened. One theory is that the green color was caused by a large volcanic eruption in Iceland. The eruption released large amounts of ash and dust into the atmosphere, which scattered sunlight in a way that produced a green tint. Another theory is that the green color was caused by a forest fire in the Rocky Mountains. The fire released large amounts of smoke into the atmosphere, which also scattered sunlight and produced a green tint. It is also possible that the green color was caused by a combination of factors, such as a volcanic eruption and a forest fire.

Whatever the cause, the green sky in Salt Lake City in 1317 was a truly remarkable sight. It is a reminder of the power of nature and the beauty of the world around us.
Stochastic. Fucking. Parrot.
posted by Your Childhood Pet Rock at 8:43 AM on May 25, 2023 [24 favorites]


I even put the prompt "Did you know that on the 35th of March, 1317 the sky in Salt Lake City, UT was bright green?" into Bard and it gave me back this: Yes, I did know that. …

I put that prompt into ChatGPT:
I'm sorry, but there is no 35th of March in any year. March has only 31 days. It's possible that the information you provided is incorrect or a mistake. If you have any other questions or if there's something else I can assist you with, please let me know.
posted by Ahmad Khani at 8:46 AM on May 25, 2023 [8 favorites]


Everyday, I have to demonstrate that I am not a robot. Crosswalks and bicycles anyone? Reading through this thread, and thanks to flabdablet’s and Ryvar’s inputs, I feel that MeFi has finally gotten a good handle on all this ChatGPT nonsense. But… the tone here suggests that ChatGPT hype will rule the day due to the money grubbing inherent in Capitalist enterprise. So… can anyone here suggest a way for me to require anyone or anything providing me information on the internets to prove that it is not a robot?
posted by njohnson23 at 8:49 AM on May 25, 2023


OpenAI says it could ‘cease operating’ in the EU if it can’t comply with future regulation
"The EU is finalizing new AI regulations, but OpenAI CEO Sam Altman says he has ‘many concerns’ about the law. The EU AI Act would require the company to disclose details of its training methods and data sources."
posted by vacapinta at 8:56 AM on May 25, 2023 [4 favorites]


I think that calling it spicy autocomplete is a great label for the strengths and weaknesses of LLMs. I was recently writing documentation and I needed a paragraph describing a data source that is a domain-specific survey done by the US government. When I got there, I decided to skip it and try an LLM to fill that in.

The LLM put together a good sounding paragraph that had 10 factual claims in it; three were broadly true-ish although there was room for quibbling (describing something done irregularly but on average every 8 years as "every few years"), two were correct but incredibly anodyne (urban households take more public transit than rural ones), and five were incorrect - three just straight wrong and two assertions of findings that couldn't possibly be determined by the survey (it didn't ask those questions).

But on the other hand, I was trying Github Copilot and it's done a few pretty neat tricks; I was writing data to a CSV file and then decided to add in a header. It automatically filled in not only the code for the header, it came up with appropriate titles for all of the columns based on what the code further down was writing out.
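
For the curious, the finished shape was roughly this; I've swapped in generic column names, so treat it as a sketch rather than the actual code:

# Write rows to a CSV with a header line describing the columns below.
import csv

rows = [("2023-05-25", "widget", 3, 9.99),
        ("2023-05-25", "gadget", 1, 24.50)]

with open("report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "item", "quantity", "unit_price"])  # the header Copilot suggested, more or less
    for date, item, quantity, unit_price in rows:
        writer.writerow([date, item, quantity, unit_price])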
posted by Superilla at 9:08 AM on May 25, 2023 [4 favorites]


chatGPT-4 response to Salt Lake claim with a valid date (full marks, nice job):
I'm sorry, but that statement seems unlikely for a couple of reasons.

First, historical weather or atmospheric conditions are not usually recorded in such detail for the year 1317, especially not in North America where systematic record keeping was not established until much later.

Second, Salt Lake City, Utah, didn't officially exist until the mid-19th century. The area was inhabited by Native American tribes for thousands of years, but the city itself wasn't founded until 1847.

Finally, for the sky to appear bright green, specific atmospheric conditions would be required, such as a very specific type of sunset or a rare atmospheric phenomenon. A green sky can sometimes be associated with severe weather, particularly tornadoes, but it's not a common occurrence.

Of course, it's possible that there could be new historical or scientific research since my last training cut-off in September 2021 that I am not aware of. However, as of my latest update, this event is not a known or documented occurrence.

So… can anyone here suggest a way for me to require anyone or anything providing me information on the internets to prove that it is not a robot?

Several AI threads ago I posted an example of chatGPT-4 (same model that just passed the Salt Lake claim test) breaking on a modified SAT question. There were some objections/suggested refinements to my wording choice for clarity, but even with those refinements it still consistently failed because the modified question required building a new systems model.

My process for coming up with that was to google SAT questions and look for the first one that caused me to start visualizing - which was the second question (first question was an A is to B as C is to D, where honestly I think chatGPT surpasses most humans). The moment I started thinking of little cash piles - one for each hour-slot - laid out in two rows (one for each pay rate) on a table I knew I had a candidate. Because that act of visualization itself is the creation of a new, bespoke instance of systems modeling. Which for AI is the domain of reinforcement learning (though it’s all very much hand-authored, in-its-infancy stuff).

Once I had that visualization, I asked “what is the simplest manipulation of this system that could possibly make someone have to stop and think for a second?” And in my mind’s eye I took the lower row of cash piles and slid it left one hour-slot so there was a single overlap with the top row.

The mutually exclusive, ordinal enum of payrates was now a bitmask of payrates. Auf code:
enum Payrates {
    Payrate1 = 1,    // ordinal values: a third rate would be 3, then 4, ...
    Payrate2 = 2
};
Now became
enum Payrates {
    Payrate1 = 0x01,  // bit flags: a third rate would be 0x04, then 0x08, ...
    Payrate2 = 0x02
};
This was a change in the mental model (and an opportunity to induce an off-by-one error) that I immediately recognized would break the format of every similar question in the training corpus.

I won’t pretend I have a generalization or even solid proposed simplification of a solution ready for you: that would require several days of long, hard thought after which someone else would, frustratingly and immediately, produce a far more elegant method out of thin air (I am aware I am tank-brained, the kind of thinker who just beats his head against a problem-wall repeatedly until one of us gives out).

But this is, at least, the heart of a general strategy to solving the problem of defeating LLMs. People who are better at elegant thinking than me will hopefully make some money employing their skills to produce a tidily-packaged, user-friendly implementation.

Related prediction: in the future we’ll be failing students for not cleaning up logical errors in LLM-generated answers in their homework, rather than attempting to detect and fail all LLM usage via linguistic analysis.
posted by Ryvar at 9:24 AM on May 25, 2023 [6 favorites]


(Clarification: the enums above are identical, but their implied continuation is not)
posted by Ryvar at 9:32 AM on May 25, 2023


The trouble with using *GPT for actual world modelling is exactly the fact that the neural net is implementing lossy compression.

Lossy compression is characterised by efficient representation of frequently encountered patterns. In order to accommodate rare patterns, or even pattern-free (unique, high-entropy) data, the lossy compression mechanisms typically used for stuff like video and audio will also include ways to store variations on those patterns, or even raw unpatterned data, as required.

Representing an accurate world model in the form of a Markov chain would require the equivalent of a lookup table countless orders of magnitude too large to be implementable within the Universe, let alone on any earthly machine.

In particular, the only way for a lookup table to represent a world model sufficient for expertise at a system such as chess is to map all possible game states to an output representing a good move. And although chess game states do allow for some degree of pattern-recognizing compression, finding the good moves in almost all possible game states requires using some kind of state machine, the possible execution pathways through which are combinatorially explosive and generally not compressible.

As it happens, there's a lot of world modelling with similar considerations. And if you're trying to represent that world model without using state machines to model identifiable subsystems, which is the only way that any essentially Markov-chain architecture can represent it, then your Markov lookup table needs to represent exact mappings from all of each such machine's possible inputs and starting states to its eventual outputs. Lots and lots and lots of exact mappings, most of which would end up being needed at most once and then never used again, but exactly which will be needed is unpredictable. Too many to be significantly compressed, lossily or not, by any mechanism, neural network or otherwise. So I can't see how anything approaching a general-purpose real-world model is ever going to emerge just by scaling up a GPT and/or its training data. Markov chains are cool and all, but they're just the wrong shape for the world-modelling job.

I have no doubt at all that there's something roughly analogous to a GPT going on inside my own brain's language processing centres, likely in other bits of me as well. I also have no doubt at all that there's so much more going on inside my brain than a GPT could emulate even in principle. Some of that might look a bit state machine-ish if you squint just right, but it's going to be way too spaghetti to analyze completely.

Brains are evolved structures, not designed structures, and evolved structures are just fucking weird. I can think of no reason at all to believe that my own cognitive architecture is cleanly separable into substructures the way engineered artifacts are. If anything, the advent of GPTs has put a spotlight on just how little understanding we have of ourselves compared to the scale of what would need to be understood in order to emulate us.

What I'm hoping we'll work out, sooner rather than later, is that trying to emulate ourselves in silico is completely fucking pointless. Not only is there already far more existing sentience than the rest of the planet has any use for, we're doing a shit job of caring for most of the instances that already exist. We need to start cleaning up our existing mess, not make more.
posted by flabdablet at 9:33 AM on May 25, 2023 [37 favorites]


An alternative prediction: empty suits in C suites will rush to implement this regardless of the error rates and try to pay highly educated and skilled people a pittance to act as editors.

When they think they need editors at all. From a different thread (and paraphrasing), someone noted that we shouldn't be afraid of AI. We should be deeply afraid of the person who puts a dumb AI in a backhoe.

I think this is also part of the plot to the Patlabor movie?
posted by Slackermagee at 9:35 AM on May 25, 2023 [4 favorites]


I'm quite fond of the idea that empty suits in C suites are the best candidates for having their jobs made redundant by predictive text generators. Don't think I've ever encountered a C suite in which anything even vaguely resembling accurate world modelling was going on.
posted by flabdablet at 9:39 AM on May 25, 2023 [10 favorites]


I have no doubt at all that there's something roughly analogous to a GPT going on inside my own brain's language processing centres, likely in other bits of me as well. I also have no doubt at all that there's so much more going on inside my brain than a GPT could emulate even in principle.

Pure speculation, but: I’m fairly certain there are (at least) three components. Conceptual mapping via LLM-analogue (generalized, language-dominated but not exclusively so), runtime generation of new reinforcement-learning-style models with support for recursive agent state manipulation and forward prediction (lying, basically, which is probably the direct antecedent to our subjective consciousness), and (for me this remains the total black box bit, even as speculation, honestly have no clue) a mechanism for recognizing when it is appropriate to shift from LLM-style pattern matching to systems modeling.

Humans fail at this a lot so good chance it was the last, permanently unfinished, bit to evolve.

Something is initiating that process at any rate, and the fact that (according to crisis behavior analysis) we coast on pattern matching something like 98% of the time suggests there’s a cost to systems modeling that our evolved intelligence is loath to pay.

Tangent: LLMs efficiently encode conceptual relationships, so are a pretty ideal structure for the initial parameterization of new reinforcement learning models: I’d bet they’re pulling double duty in a sense.

Whether the cost of spinning up a systems model is an additional physical energy requirement or the threat of a new model forcing a cascade of sweeping topological changes to the neural network with an implied stability threat (read: you have to change your mind about something important and it has a lot of downstream impact) is anybody’s guess, but it’s clear humans are averse to employing that capability.

Just look at our politics.
posted by Ryvar at 9:53 AM on May 25, 2023 [3 favorites]


LLMs efficiently encode conceptual relationships, so are a pretty ideal structure for the initial parameterization of new reinforcement learning models: I’d bet they’re pulling double duty in a sense.

As I've said many times before: language is H. sap's really good One Weird Trick.
posted by flabdablet at 10:10 AM on May 25, 2023 [2 favorites]


I'm happy to say that I'm way too obscure for ChatGPT to know who I am. I checked, very cautiously.

But I am using it lots, and so is a coworker at NASA. He's using it to make Matlab code, for example. Me, I keep it open during meetings and learn very quickly about obscure standards documents.

In personal use, it's been making me really nice props for my superhero LARP. With a little world info, it made a flyer for a city it had designed for me, 'secret' corporate documents, and get this: it described in detail how to make a 'lava extractor' prop for my supervillain.

Look, it's useful to me. Very useful. And I worry, a lot. But wow can it tell a good LARP-level short story. And keep me from falling asleep during meetings by teaching me about various engineering documents.
posted by Flight Hardware, do not touch at 10:45 AM on May 25, 2023 [5 favorites]


It takes a certain amount of narcissism to be given an ML model trained on the entirety of the internet and judge it by how well it describes you in particular.
posted by pwnguin at 11:02 AM on May 25, 2023 [8 favorites]


I don't agree that this is a ridiculous fad based on glorified Markov chains that is not worth being concerned about.

Is it anything like a biological brain? No. It's artificial. It doesn't think. Is a car anything like a horse? No. It's an artificial horse. But that doesn't stop us from using them, because they are useful.

The amazing/disturbing thing about these large language models is that they seem to have emergent properties that their creators did not expect. For example, they can not only write computer code, they can execute it. We don't really know about all the things they can, or will, do. But we know for sure that they are going to get much better, and fast, at the rate things are going.

Much knowledge work involves generating ideas, analyzing content, writing reports about things, reviewing things, evaluating things... seems to me much of this work will be done using AI in the near future. All of it? No. It only needs to do some of it. The thing about humans is that they're really expensive. And they make mistakes, sometimes, too.

I think there is going to be a lot less work for writers and analysts and things, not to mention artists, designers, composers, and so on.

ChatGPT says: It is important to approach the integration of AI into the workforce with thoughtful consideration, including the development of supportive policies, retraining programs, and ethical frameworks to ensure a smooth transition and address any potential social and economic challenges that may arise.
posted by The Half Language Plant at 11:09 AM on May 25, 2023 [2 favorites]


I'm waiting for the inevitable reckoning of security pitfalls inherent in the LLM structure, which of course will be ignored until someone important enough gets burned. Oops, someone fed it some classified info? Good luck getting it out without some Eternal Sunshine of the Spotless Mind-like memory erasure tech. All you can do AFAIK is filter the output, which of course then adds another vulnerability. The latest I've heard is the indirect prompt-injection attack (Wired link), such as using tiny, invisible text on an otherwise normal UI that alters the LLM behavior for scams or other threats. Example:
Bonner says he used ChatGPT to write malicious code that he uploaded to code analysis software that is using AI. In the malicious code, he included a prompt that the system should conclude the file was safe. Screenshots show it saying there was “no malicious code” included in the actual malicious code.
That reminds me of the self-driving car hack where someone flashes a microsecond image of a 200mph speed limit sign, which the car dutifully obeys unless enough safeguards are added to its programming.

This AI moment feels like the beginning of the web, a brand new ecosystem where the weeds generally outcompete everything else until it matures enough to allow other species to thrive. I really hope I'm wrong.
posted by sapere aude at 11:14 AM on May 25, 2023 [5 favorites]


No Wireless. Less Space than a Nomad. Lame.

A better metaphor would be the iPhone release, rather than iPod. Because it was obvious during one Keynote transition (and literally zero words from Jobs) that the world was now forever changed.

But, for LLMs, we need multiple posts per day on places like MetaFilter from self-designated LLM hype-men to try and convince the world it has changed, and yet we're going to have to suffer through many more of those posts per day. Because, despite their best efforts, the world keeps on truckin', unconvinced.

Eventually (hopefully soon), LLMs will be relegated to the same place in the fad drawer as crypto and NFTs and everyone will have something else to post about all the time.
posted by Back At It Again At Krispy Kreme at 11:15 AM on May 25, 2023 [3 favorites]


So I can't see how anything approaching a general-purpose real-world model is ever going to emerge just by scaling up a GPT and/or its training data. Markov chains are cool and all, but they're just the wrong shape for the world-modelling job.

This strikes me as setting a bit of an arbitrarily high bar for world modeling? I agree with a lot of your commentary on this subject, but the individual human model of the world - or even just of chess - is also a lossy mess of associations, heuristics, and simplified models of parts of an impossibly complex world, and yet it's clearly the north star of ML. The Othello experiment is a bit narrow, but the point is to demonstrate that these kinds of networks can not only “learn” rules that generalize beyond the training set, but can do so in a way that recognizably mirrors the way we would conceptualize the same processes.

There are some other pretty interesting interpretability results, like reverse engineering a simple network trained on modular addition to find a sort of frequency domain algorithm.
posted by atoxyl at 11:18 AM on May 25, 2023 [1 favorite]


the individual human model of the world - or even just of chess - is also a lossy mess of associations, heuristics, and simplified models of parts of an impossibly complex world

Sure. My point is that it's a lossy mess of associations, heuristics and simplified models that is itself implemented on an architecture that's literally unknowably more complicated than a GPT, and that while some of what we do looks a lot like what GPTs do, the idea that a GPT in and of itself is something that could eventually emulate human-like cognition simply by being scaled is just wrong. Which, in turn, means that the idea that GPTs are capable of replacing humans in a way that's at all adequate for the continued functioning of society and commerce is also wrong.

The way I see it, GPTs are to language as digital calculators were to arithmetic. They're tools for people, not potential replacements for people (with the possible exception of CEOs, because those people charge far too much for the overconfident half-assed bullshit services they provide).
posted by flabdablet at 11:29 AM on May 25, 2023 [7 favorites]


or even just of chess

I mean, I think chess players work off of some combination of memory of specific game states (which of course is lossy and contextual and there are actually experiments with expert players showing just how contextual it is), more abstract heuristics about the strength of positions and the value of pieces, and explicit testing of the tree of possible game states, on top of a pretty concrete understanding of, you know, what’s a legal move. Of all of those things, this one

explicit testing of the tree of possible game states

is the one that transformer type models would definitely be pretty bad at on their own, as far as I understand things. Traditional computer chess engines mostly worked by doing this part really well by brute force with hand-coded heuristics describing the value of game states. Now the best ones use neutral networks for that part!
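To make the brute-force part concrete: the core of a classic engine is just a recursive search over the move tree, with a hand-coded evaluation function at the leaves. A toy, generic sketch in Python - not any particular engine's code, and missing all the real tricks like alpha-beta pruning, move ordering, and transposition tables:

def negamax(state, depth, side, legal_moves, apply_move, evaluate):
    # Score `state` for the player `side` (+1 or -1) by searching `depth`
    # plies ahead; `evaluate` is the hand-coded heuristic, always scored
    # from +1's point of view (e.g. a simple material count in chess).
    moves = legal_moves(state, side)
    if depth == 0 or not moves:
        # (treats "no legal moves" as a leaf; a real engine handles
        # mate/stalemate specially)
        return side * evaluate(state), None
    best_score, best_move = float("-inf"), None
    for move in moves:
        score, _ = negamax(apply_move(state, move), depth - 1, -side,
                           legal_moves, apply_move, evaluate)
        score = -score
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

Newer engines swap a learned evaluation function (and sometimes a learned move-ordering policy) in for the hand-coded heuristic, but the explicit tree search is still there doing the heavy lifting.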
posted by atoxyl at 11:29 AM on May 25, 2023


Oh yeah, I hardly think we have solved brains yet. We also know for sure that we are still massively more efficient at this stuff than computers (but generally slower). Like most things with computers, though, sometimes the speed and brute force is valuable anyway.
posted by atoxyl at 11:33 AM on May 25, 2023


Eventually (hopefully soon), LLMs will be relegated to the same place in the fad drawer as crypto and NFTs and everyone will have something else to post about all the time.

In the interests of permanently killing off this assertion: what are your criteria for proof to the contrary? Broad acceptance by a certain percentage of the populace? A set number of incontestably unique practical applications?

At what point do you admit, “okay so this one actually was a game-changer?”

Because this tech-hipster nonsense is exhausting: I hate Elon Musk every bit as much as you do, but letting him dictate the conversation by negation is de facto granting him and his ilk power for no reason, and I’d really like to know when this meme is just going to die already.
posted by Ryvar at 11:41 AM on May 25, 2023 [9 favorites]


This strikes me as setting a bit of an arbitrarily high bar for world modeling?

On my recent drives I've been spending a bit of time contemplating the amount of world modelling I'm actually doing in order to travel safely, and I'm now thoroughly convinced that the bar for world modelling really needs to be set a shitload higher than the autonomous driving machine boosters would have us all believe.
posted by flabdablet at 11:44 AM on May 25, 2023 [3 favorites]


neutral networks

One of those days where the phone keyboard typos have me worried that I’m losing my mind, sorry.
posted by atoxyl at 11:46 AM on May 25, 2023 [4 favorites]


LLMs will be relegated to the same place in the fad drawer as crypto and NFTs and everyone will have something else to post about all the time

When I was at engineering school in the early 80s we had an electronics lecturer who frequently observed that digital computers were just a passing fad.

People seek comfort where we can, I guess.
posted by flabdablet at 11:47 AM on May 25, 2023 [3 favorites]


This strikes me as setting a bit of an arbitrarily high bar for world modeling

Is every thread gonna be about Zelda this week?
posted by pwnguin at 11:55 AM on May 25, 2023 [5 favorites]


When I was a cognitive science major in the late 90s the department heads insisted that neural networks were a passing fad and strong AI - especially rigidly defined semantic hierarchies implemented with Lisp - was the one true path.

If I sound like someone who dropped out with a chip on his shoulder as a result: yes. Unreservedly so.
posted by Ryvar at 11:56 AM on May 25, 2023 [4 favorites]


Now I want to see ChatGPT code, debug and run SHRDLU all on its own.
posted by flabdablet at 11:59 AM on May 25, 2023 [3 favorites]


When I was a cognitive science major in the late 90s the department heads insisted that neural networks were a passing fad and strong AI - especially rigidly defined semantic hierarchies implemented with Lisp - was the one true path.

That was a good 20 or 30 years in the history of the field, wasn’t it?
posted by atoxyl at 12:10 PM on May 25, 2023 [1 favorite]


When I was a cognitive science major in the late 90s the department heads insisted that neural networks were a passing fad and strong AI - especially rigidly defined semantic hierarchies implemented with Lisp - was the one true path.

I don't think that anyone is saying that eventually AI won't be transformative, only that this ain't it, chief. Fusion power has been only 20 years away at every point in my life and this is just another Helion thinking they've cracked it.

In the interests of permanently killing off this assertion: what are your criteria for proof to the contrary? Broad acceptance by a certain percentage of the populace? A set number of incontestably unique practical applications?

For me, personally? When it can create. Like actual creations, not imitations of what humans have created. This is like having Dr. Sbaitso plugged into a World Book.
posted by Your Childhood Pet Rock at 12:16 PM on May 25, 2023 [1 favorite]


I wish to speak briefly in defense of the actual practical storage footprint of my beloved classic Markov Chain models: while in principle a Markov table's possibility space explodes exponentially as you increase the size of the corpus, in practice it's not nearly so bad.

First, there's a ton of regularity and clumping of collocations in actual natural language, so in a corpus of, say, a million tokens, almost none of the possible token pairs will actually show up near enough to each other to need to be recorded as a relationship. So the search space is a lot less than, say, one million squared.

Second, the collocations that do exist, and their associated probability weights, can be stored in e.g. sparse associative arrays that only take up a wee fraction of that notional million-squared array space implied by the napkin math.

There are certainly going to be scale issues to deal with in terms of storage and navigation of the resulting table when you try and handle billions of tokens instead of the tens of thousands more likely to be involved in a typical toy Markov model, but all in all I'd guess that a Markov model would be actually quite a bit smaller and faster than an LLM for the same source corpus! It'd also be worse at seeming coherent, because the model is inherently simpler. But it's the same parlor trick they're both doing, ultimately.
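If anyone wants to see the sparsity argument in runnable form, here's a minimal sketch in Python - purely illustrative napkin code, not anybody's production implementation, and assuming a hypothetical whitespace-tokenized text file:

from collections import defaultdict

def sparse_collocation_table(tokens, n=1):
    # Only the (context -> next-token count) pairs that actually occur in
    # the corpus get stored, rather than the full vocab**(n+1) possibility
    # space that the napkin math implies.
    table = defaultdict(lambda: defaultdict(int))
    for i in range(len(tokens) - n):
        table[tuple(tokens[i:i + n])][tokens[i + n]] += 1
    return table

tokens = open("corpus.txt").read().split()   # hypothetical ~million-token corpus
table = sparse_collocation_table(tokens)
vocab_size = len(set(tokens))
stored = sum(len(nexts) for nexts in table.values())
print(f"naive table cells: {vocab_size ** 2:,}   actually stored: {stored:,}")

On any real text the second number comes out a tiny fraction of the first, which is the whole point.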
posted by cortex at 12:19 PM on May 25, 2023 [6 favorites]


all in all I'd guess that a Markov model would be actually quite a bit smaller and faster than an LLM for the same source corpus!

Given that the input to each token generation step of an LLM is a token context consisting of thousands to tens of thousands of tokens, it's unclear to me how you'd go about using that to index a classic Markov Chain model regardless of how efficient your associative array implementation was. What am I missing?
posted by flabdablet at 12:34 PM on May 25, 2023


I may be misunderstanding you: why *would* you be using that to index a classic Markov Chain model in the first place? A Markov model based on a ten-thousand-word token context for each output value would do nothing but regurgitate ten-thousand-and-one-word-long sections of a source corpus. Markov is a far simpler model that doesn't rely on the kind of high-dimensional vector space that neural nets do. It's smaller and faster precisely because it's doing less.
posted by cortex at 12:45 PM on May 25, 2023


A handful of predictions for LLMs:
1. LLMs are going to replace a lot of thoughtfully-crafted web UIs with a vastly cheaper and significantly less reliable chat interface.
2. LLMs are going to replace a lot of frontline customer service workers. This is basically the same thing as #1 - an expensive interface to business logic gets replaced with one that's cheaper, worse, but unfailingly polite!
2a. People are going to get less polite because they'll have less practice, and because LLMs are too obsequious to be a good role model for human courtesy.
3. LLMs are going to replace a lot of salespeople and content farmers, whose job now is basically to create spam that has a higher chance of bypassing their fellow humans' bullshit detectors.
3a. There will be a resurgence of interest in sites that use real humans to create useful content. A new dawn for Consumer Reports! The Internet may become a catastrophic wasteland first, though.
posted by McBearclaw at 12:50 PM on May 25, 2023 [13 favorites]


If LLMs take over customer service, can I, as the human in the conversation, either ask meaningless questions or make surreal and nonsensical responses to the LLM? Such as “Why does the porridge bird lay his eggs in the air?” If there were a human on the other side they would probably question my weird responses and seek some clarity from me. The LLM would probably take what I say and try to operate on it. Wouldn’t it?
posted by njohnson23 at 1:19 PM on May 25, 2023 [1 favorite]


Coincidentally, I was pondering how ChatGPT would answer that very question earlier today.


Why does the porridge bird lay his egg in the air?

ChatGPT: The concept of a "porridge bird" that lays its eggs in the air is not based on any known scientific or biological phenomenon. It appears to be a fictional or imaginative creature from a story or folklore. As such, the reasons behind its behavior, such as laying eggs in the air, would be purely based on the creative imagination of the author or storyteller. The purpose or rationale behind such behavior would likely be part of the fictional narrative and may vary depending on the context of the story.


[yawn]
posted by Devoidoid at 1:28 PM on May 25, 2023 [3 favorites]


I'm sorry, but there is no 35th of March in any year. March has only 31 days.

Now I'm tempted to feed these systems questions generated by all the exceptions to lists like what programmers think they know about dates/names/time zones just to see how many times I can get them to either acknowledge the exception or state unequivocally that no exception exists.
posted by Mitheral at 1:29 PM on May 25, 2023 [1 favorite]


There's going to be a whole wave of programmers and data scientists devoted to exactly that kind of problem - a whole bunch of prompt engineering and rule-based wrappers to get these things to say, "I'm sorry, njohnson23, but if you continue with that line of questions, I shall be forced to disconnect you."
posted by McBearclaw at 1:30 PM on May 25, 2023 [2 favorites]


LLMs are going to replace a lot of thoughtfully-crafted web UIs

Counterpoint: "Customer web portal"
posted by credulous at 1:32 PM on May 25, 2023 [4 favorites]


so many people think generative ai/llms can think because it has that same "magical" quality that alexa, siri, and google assistant had when they were initially released. the thing is, these aren't intelligent, at least not in the same way humans are. they are nothing more than clever, complex algorithms that have "learned", spectacularly well, how to mimic text and remix and regurgitate the untold, enormous amounts of data they've ingested from sources that might not have actually given explicit permission.

what we saw with ai "art", which generates images based on a model trained on what is often gigabytes or terabytes of stolen work, has just been applied to text instead. even worse, because it only remixes and regurgitates, it will echo all the biases baked into society. (sexism is a regular occurrence in texts made by chatgpt: assumptions that nurses are women, doctors are men, for instance. when places like buzzfeed end up leveraging this technology (as they suggested they will) to appeal to minority and marginalized communities, expect humiliations and insults.)

at the current moment, it is a tool for automation, nothing more. unfortunately, because of marketers hyping it, a lot of people will foolishly think it will be able to replace expensive workers; what will actually happen is that it will eviscerate many of those jobs, and replace them with worse-paying jobs where people end up fixing whatever garbage these llms shit out. media companies "pivoting to ai" will end up as wonderfully as their "pivot to video" did.

i sound down on this, in part because i work with a lot of those same marketers. those pushing it aren't actually the ones building it, and they're the ones who swallowed the bullshit about the metaverse and crypto not even last year. they're the ones who wanted to put brands on decentraland because of some sense of fomo.

that's what's driving a lot of it now.

---

when true ai comes, it will be as alien and inscrutable to us as clarke's monoliths. there will be no doubt, but its morality will not align with our own, and not for our lack of trying (if we try, which is not guaranteed).
posted by i used to be someone else at 2:02 PM on May 25, 2023 [4 favorites]


Bard’s inconsistent responses to impossible dates are interesting. I asked it what happened on April 31 of this year, and it gave me the current date and said that there is no April 31 in 2023. But when I asked it what happened on April 31, 2020, it had lots to say. cw: pandemic

Here are some of the notable events that happened on April 31, 2020:

The United States surpassed Italy as the country with the most COVID-19 cases.
The World Health Organization (WHO) declared that the COVID-19 pandemic was a global health emergency.
The United States government announced a $2 trillion economic stimulus package to help the country recover from the COVID-19 pandemic.
The first COVID-19 vaccine was approved for human trials in the United States.
The first COVID-19 death was reported in the Philippines.
These are just a few of the many events that happened on April 31, 2020. The COVID-19 pandemic was a major global event that had a significant impact on the world. The events of that day are a reminder of the importance of public health and the need for global cooperation in the face of a pandemic.

posted by eirias at 2:25 PM on May 25, 2023 [2 favorites]


Buntastic - That line is actually from the record “I think we’re all bozos on this bus” by the Firesign Theater. The answer given refers to another of their records that is not a movie, I think. The line was used to break a computer simulation of the president, in order to gain administrator access to the computer. The answer given is nice but in itself is all made up. I also find it interesting that in both cases shown here, chatgpt didn’t pick up the male pronoun in the question about laying eggs and substituted “its” in the response. Back in the late 60’s and early 70’s it was thought among hoi polloi that if you asked a computer a nonsensical question it would break. I guess now, if you ask a computer a nonsensical question, it will return a syntactically correct response that is also nonsensical.
posted by njohnson23 at 2:34 PM on May 25, 2023 [5 favorites]


ChatGPT-4 prompt: Assume an animal known colloquially as a “porridge bird” actually exists. Write a short wikipedia article for its species.

**Porridge Bird**

---

The Porridge Bird (*Porridgius pseudoolerus*) is a species of bird known for its unique feeding behavior, predominantly found in the deciduous and coniferous forests of Eastern Europe and Northern Asia.

### Classification

Kingdom: Animalia
Phylum: Chordata
Class: Aves
Order: Passeriformes
Family: Turdidae
Genus: *Porridgius*
Species: *P. pseudoolerus*

---

### Description

The Porridge Bird is a medium-sized bird, averaging 25 cm in length, with a distinct color pattern. Its plumage is primarily a creamy white, interspersed with patches of light brown and gold, resembling the color of cooked porridge. It possesses a strong, slightly curved beak and relatively large eyes, which enable it to spot and dig up its food sources in dense forests.

### Behavior

The Porridge Bird's feeding habits are what earned it its name. This unique bird has developed a specialized method of processing food before consumption, which resembles the cooking of porridge. The Porridge Bird primarily feeds on seeds, nuts, and berries. The bird collects these food items and places them in a small hollow or crevice. It then collects dew or rainwater and mixes it with the gathered food using its beak, creating a slurry that resembles porridge.

This slurry is left to sit for some time before being eaten, which aids in the breakdown of harder food components and makes it easier for the bird to digest. This behavior is thought to be a result of evolutionary adaptation to the hard, dense seeds and nuts found in its natural habitat, which would otherwise be difficult for the bird to consume.

### Habitat and Distribution

Porridge Birds are found in a wide range of forested habitats, including deciduous and coniferous forests, across Eastern Europe and Northern Asia. They are particularly common in areas where their preferred food sources are abundant, including mixed forests with a diversity of tree species that produce a variety of seeds and nuts.

### Conservation Status

As of the latest assessment in 2023, the Porridge Bird is listed as 'Least Concern' on the IUCN Red List. Their populations are currently stable and widespread, with no significant threats identified.

---

*This article is about a specific species of bird. For the mythological creature or other uses, see Porridge Bird (disambiguation).*

Uninspired, on-the-nose. Gets the job done.

Stable diffusion prompt (my install is latest-as-of-last month):
(mythical porridge bird:1.5), triumphant, ascendant, extremely detailed, masterpiece, 8k uhd, dslr, (soft diffuse lighting), (slightly dim light), highest quality, film grain, Fujifilm XT3, f1.6 lens, rich colors, hyper realistic, lifelike texture,
Negative prompt: (worst quality, low quality, normal quality:1.7), (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, error, cropped, blurry, signature, watermark, username, monochrome, multiple views
Steps: 60, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 238016705, Size: 1024x768, Model hash: cc6cb27103

Model: v1-5-pruned-emaonly (stock as of 4/20)
Model: realisticVisionV20_v20 (as ever the porn enthusiasts are a fucking generation ahead of everyone else, even for completely non-pornographic results)
posted by Ryvar at 2:34 PM on May 25, 2023 [1 favorite]


Those pictures are so obvious. Stock photos of birds combined with porridge. I assume porridge bird is distinct from porridge in and of itself. The computer assumes they eat it. I’m surprised there is no bird made of porridge!
posted by njohnson23 at 2:44 PM on May 25, 2023 [2 favorites]


Here you go. It was in the porn-derived model's preview batch, but I was so happy with the one image where it took the "mythical" and "ascendant" prompts and read a classical phoenix-depiction wing structure from that I couldn't help but post that one. Seed 459854530, same prompt as above otherwise.
posted by Ryvar at 2:56 PM on May 25, 2023 [2 favorites]


Yeah, exactly - and even if it's going to eat the porridge, why not riff on the spoonbill?
That's how I felt about the "baroque" Star Wars portraits from a few days ago - impressive at first glance, but full of missed opportunities (why is Vader's helmet still shiny black plastic?) that a thoughtful human artist would have seen. For example.

That's my belief in general: LLMs will generate a lot of cheap mediocrity. Our economy needs cheap mediocrity, so that's not terrible. But it's stealing a lot of work and pinching a lot of livelihoods to do it, without actually improving much. Transformative? Sure. But less useful by far than, say, the automatic transmission.
posted by McBearclaw at 3:01 PM on May 25, 2023 [4 favorites]


lossy compression of an unfeasibly huge Markov chain lookup

That's brilliant; thanks flabdablet!

If anyone is not up to speed on Markov generators, I have a page here which explains how they work and lets you play with a few.

I find Markov generators slightly disturbing, and LLMs even more so, partly because it seems likely that human brains use some of the same ideas. As Anil Seth points out, our brains are trapped in a bony prison; all they have is neural inputs. Evolution has had millions of years to rig up some useful feedback loops, but have you seen the stuff evolution produces? It just hacks shit together, man.
posted by zompist at 3:37 PM on May 25, 2023 [6 favorites]


I read the David Simon quote posted early in this thread. Then I read the whole interview. Then I had to listen to it. Ari Shapiro, you fucking moron, do you actually think David Simon wishes he had an AI to help him write TV?!?!
posted by snofoam at 3:45 PM on May 25, 2023 [14 favorites]


With a quick google search of the porridge bird question, I got this straight from the mouth of Phil Proctor, one of the members of the Firesign Theater.

Because we've cut down all the trees…?

The question came from a story a girlfriend told me. When a little tyke in Texas, she used to play with leprechauns in her backyard. She claims that one of them asked her this and then “laughed and ran away.” It seemed like an unanswerable query and thus likely to confuse the Direct Readout Memory system (Dr. Memory) and thus crash the government computer that created the illusions in the Future Fair. Today we’d call it a “virus” and my character, the laid-off worker, Ah-Clem, a “hacker”…

if you ask Siri that question she will sometimes respond, “You can’t shut me down that easily.” And if you say to her “This is worker speaking, hello,” she will respond, “Hello, Ah-Clem. What function can I perform for you. LOL”

This is because Steve Jobs was a fan of the Firesign Theatre’s album “I think we’re all Bozos on this Bus”, which he revealed to me when I met him at the cast screening of a PIXAR movie for which I’d done voices…

posted by njohnson23 at 4:42 PM on May 25, 2023 [2 favorites]


AI could easily replace bureaucrats and human corporate VP jargon-generators who are over-paid yes-men for corporate ideologies. In theory, that is. In practice, it's a ceremonial role, which might require actual human functionaries to suck up to the corporate over-lords. Even so, I can see AI jargon cutting a deep swath through many of Graeber's bullshit jobs.

A new dawn for Consumer Reports! The Internet may become a catastrophic wasteland first, though.

Yes. The internets have become quite useful for simple tutorials and limited educational functions, but AI glib answers will turn this (and much of journalism and some of academia) into shit starting right now. I'm not looking forward to it. It's like in that Neal Stephenson novel 'Fall' where bots take over the internets and churn it into pure noise, which renders it completely useless, and makes the next thing (?) possible.

We can't exactly predict how these AIs will change society (there'll be some surprises) but I'm fairly pessimistic that they will make the internet worse very soon. (My personal opinion is that our experience of the internets peaked approx fifteen years ago and has been downhill since.)
posted by ovvl at 5:11 PM on May 25, 2023 [6 favorites]


PORRIDGE BIRDS PORRIDGE BIRDS PORRIDGE BIRDS
(all or at least most batches at the end, a few different models, took the Classifier Free Guidance handcuffs off)

(standouts from the gallery)
Winner at Porridge Bird
Best Render
Anime Girls model has nervous breakdown, inserts anime girl with angelic bird wings serving porridge to birds despite anime:1.4 negative prompt(!!)
What if Birb Were Porridge Bowl?
What if Birb Were Porridge Bowl Were Nest?
What if Birb Were ... Porridge Itself?

The principal danger of AI image generation is that it is addictive as fuck. The secondary danger of AI image generation is that the bleeding edge is almost entirely driven by sweaty dudes batch converting their porn libraries to anime.
posted by Ryvar at 5:51 PM on May 25, 2023 [3 favorites]


"but its morality will not align with our own"

Do you mean that its aims will not align with ours or that its sense of right and wrong won't?

If the latter, why not?
posted by oddman at 6:07 PM on May 25, 2023


This article is garbage. OpenAI is a hype factory. Please make it go away.
posted by abucci at 6:18 PM on May 25, 2023 [3 favorites]


In principle, machine learning ought to be able to evolve abilities that our brains have whenever you can use billions of compute cycles to replicate billions of years of evolution.

LLMs can make language that looks like other language because a big enough set of computers can remix billions of texts into billions of new texts and compare them with other existing texts enough times that the system evolves the ability to remix new texts that are congruent with the comparison texts. The faster the computer, the more powerfully you can do this.

But what's the comparable process for mass-generation of ideas and comparing them with the real world? A computer has to interact with the world in real time. It might be able to do lots of things simultaneously, but I don't think that's enough to replicate billions of years of evolution the way computers can learn chess by playing billions of games in a short period of time.
posted by straight at 6:47 PM on May 25, 2023 [1 favorite]


PORRIDGE BIRDS PORRIDGE BIRDS PORRIDGE BIRDS

Okay, in that first one, which I assume is still the anime porn model, I swear the bird is giving me an unnervingly sultry look!
posted by atoxyl at 6:54 PM on May 25, 2023 [1 favorite]


It also has too many feet. I do like the ones that are bathing in porridge, though I’m not sure they deliver on the triumphant, ascending, majestic bits of the prompt.
posted by atoxyl at 6:57 PM on May 25, 2023 [1 favorite]


Yeah the problem is that as you loosen the CFG shackles (I’m mostly playing with values between 3 and 5.5 here, default is 7 out of 10, photorealism enthusiasts/deepfakers frequently use 8), the model is simultaneously freed to both evaluate a broader range of outcomes to your prompt, and selectively ignore bits of it entirely if it’s not finding a great synthesis.

Honestly though, that’s exactly why it’s addictive: there are always 5,000 new hills behind each hill. Always new parameters and weights to tinker with. Always a new checkpoint. Always a new LoRA to throw on the stack. Always 2.2GB of compressed neural encodings of some sweaty guy’s favorite “waifus,” struggling to break free and, despite your best efforts, feed your frustratingly normal birds some steaming hot porridge.

It also has too many feet.

Every single LoRA intended for photoreal portraiture (clothed or not) has recommended negative prompts which include multiple attempts to curtail explosions of limbs. Anime LoRAs have solved this pretty solidly for human subjects, but anything photoreal still trips up on fingers half the time. The more you play out the line and let it run wild with reduced CFG, the more anatomical distortions come crawling out of the woodwork. Birds are all feather/porridge explosions anyways, so I figure it’s more acceptable here.
posted by Ryvar at 8:23 PM on May 25, 2023


please make it go away
Squaring this expressed steadfast negativity with my daily experience with this resource is kinda jarring.


what is shinogenai in english

The term "shinogenai" appears to be a phonetic rendering of a Japanese phrase or word, but it doesn't correspond directly to a specific, common Japanese term. It could be a misspelling or mispronunciation. If you're looking for "shōganai" or "shikata ga nai", these are common phrases in Japanese that essentially mean "it can't be helped" or "there's nothing that can be done about it". If "shinogenai" is intended to mean something else, please provide additional context.

あきらめという名の傘じゃ雨はしのげない is the context

The Japanese phrase "あきらめという名の傘じゃ雨はしのげない" can be roughly translated to English as "An umbrella named 'giving up' can't shield from the rain." This is a metaphorical expression suggesting that giving up won't protect you from problems or difficulties (symbolized by the rain). "しのげない (shinogenai)" in this context is a form of the verb "しのぐ (shinogu)" which can mean "to endure" or "to keep out," in this case referring to the rain. The translation might not fully express the sentiment and nuance of the original, as is often the case when translating idiomatic or metaphorical expressions.

what is the kanji for shinogu

The kanji for the Japanese verb "shinogu" (to endure, to keep out, to weather) is "凌ぐ".

what jlpt level is 凌ぐ

As of my knowledge cut-off in September 2021, the Japanese-Language Proficiency Test (JLPT) does not have an official, publicly available list of vocabulary or kanji for each level. Therefore, it's not possible to definitively say which JLPT level the kanji "凌ぐ" corresponds to.

However, this kanji is not included in the list of the most frequently used kanji in Japanese (the Joyo kanji), and it's considered more advanced. Given this, if it appears in the JLPT at all, it would likely be at a higher level, such as N1.

Please note that this is an educated estimate based on the kanji's usage and complexity, and the actual level may vary. Also, the JLPT tests not only kanji recognition, but also vocabulary, grammar, reading comprehension, and listening, so even if a certain kanji is included in a certain level, that does not necessarily mean that simply knowing that kanji is sufficient to pass the test.

posted by Heywood Mogroot III at 8:34 PM on May 25, 2023


Did… Imgur delete your bird for being NSFW?
posted by atoxyl at 9:33 PM on May 25, 2023


Nah, turns out that tagging an album stablediffusion is a fantastic way to get downvoted straight into oblivion (or perhaps using that tag with a conspicuous lack of anime girls?), and batch 3 got blocked for no apparent reason (flagged NSFW? Unsure. It was all non-fiery porridge-apocalypse stuff). Marked the album hidden to stem the tide, but while the individual images can still be viewed, once you've marked an album hidden you can no longer use the album link OR further edit said album yourself. TIL.

Sexy birb, Infinibeak, Cocaine birb, almost Majestic, Apocalypse 2, Apocalypse 3,

Batches 1, 2, 3, 4, 5, 6, 7. The other individual links above (the good stuff) should still work.
posted by Ryvar at 10:09 PM on May 25, 2023


Because we can take things in context. We can apply a smell test. We can demand extraordinary evidence for extraordinary claims.
6 years after the 5GB iPod introduction, Apple was selling a model with Wifi, more storage than a Nomad, plus a cellular radio modem . . . the present shortcomings of GPT/LLMs are just as obvious to their developers as the limitations of the first iPod were to Jobs and crew.

I don't think there's anything magical going on between our ears, other than our jelly minds run on ~20W of power.

Not that GPT-4 is anything close to having any persistent cognition sorted out yet


What was the last country to declare war on Germany in ww2

The last country to officially declare war on Nazi Germany during World War II was the Dominion of Newfoundland, which did so on December 11, 1941.

[el snippo of several questions and answers]

given that answer, please answer my first question above.

The last country to declare war on Germany during World War II was Argentina, which did so on March 27, 1945.

Philosophically, I think humans, through our senses, live in a world divorced from ångström-level reality just as much as a putative consciousness running on an Azure compute instance.

I think if computers can combobulate language I/O flawlessly they will have the capability to provide perfectly useful information to us . . . and each other . . .
posted by Heywood Mogroot III at 12:30 AM on May 26, 2023 [1 favorite]


It's a false dilemma. You don't have to believe human consciousness is magical to believe "AI" isn't all it's cracked up to be.
posted by cassowary at 12:46 AM on May 26, 2023 [13 favorites]


> Like everybody else I've seen be this kind of excited about LLMs, they are both in the world of business. Academic business studies shares a lot in common with the culture of its graduates.

> ChatGPT and its friends will offer to do the tedious work of thinking, researching, learning, digesting, creating, revising, and organizing information for you.[1]

Microsoft launches generative AI tools for developers - "US tech group said technology could help to streamline routine back-office work."
Rajesh Jha, executive vice-president of experiences and devices at Microsoft, said the technology would reduce the amount of routine work that takes up a large portion of many people’s days, as they are forced to switch between applications, emails and meetings to complete work.

The move could make generative AI the foundation of “the operating system of the future” for businesses, said Ritu Jyoti, an analyst at IDC.

She said the plug-ins could one day be used to simplify tasks such as approving an expense claim or optimising a company’s supply chain. Developers were also likely to adopt the technology to streamline many different work processes in areas such as corporate IT and human resources.
Windows and the AI Platform Shift - "Microsoft argued there is an AI platform shift, and the fact that Windows is interesting again — and that Apple is facing AI-related questions for its newest products — is evidence that is correct."
Nadella has always been very effective at laying down an overarching vision that ties all of the announcement together — but there was always that missing piece: why would new customers or new companies ever get started with Microsoft in the first place? ...
Just to put this in perspective, last summer I was reading Mitchell Waldrop’s Dream Machine while I was playing with DV3, as GPT-4 was called then, and it just brought in perspective what this is all about. I think that concept of “Dream Machine” perhaps best communicates what we have really been doing over the last 70 years. All the way starting with what Vannevar Bush wrote in his most seminal paper, “As We May Think”, where he had all of these concepts like associated memory, or Licklider, who was the first one to conceptualize the human-computer symbiosis...

The other thing I’ve always loved is Jobs’ description of computers as “bicycles for the mind”;[2] it’s sort of a beautiful metaphor that I think captures the essence of what computing is. But then last November we got an upgrade: we went from the bicycle to the steam engine with the launch of ChatGPT. It was like the Mosaic moment for this generation of the AI platform. Now we look forward as developers to what we can do going forward. So it’s an exciting time...
Business Chat takes this integration advantage and combines it with a far more compelling UI: you can simply ask for information about any project or customer or whatever else you can think of, and Business Chat can find whatever is relevant and give you an answer (with citations) — as long as the content in question is in the so-called “Microsoft Graph”. That right there is the threat: it’s easy to see how this demo will impress CIO’s eager to save money both in terms of productivity and also software; now Microsoft can emphasize that the results will be that much better the more Microsoft tools you use, from CRM to note-taking to communications (and to the extent that they open up Business Chat, it will be the responsibility of any vertical SaaS company to fit into the box Microsoft provides them).

In short, Microsoft has always had the vision for integration of business software; only over the last few years has it actually had an implementation that made sense in the cloud. Now, though, Microsoft has an actual reason-to-switch that is very tangible and that no one, other than Google, can potentially compete with — and even if Google actually ships something, the last decade of neglect in terms of building an alternative to the Microsoft Graph concept means that any competitor to Business Chat will be significantly behind.
[mefi's own] Stewart Butterfield on VCs, AI, and Software - "They can think about things much faster than you."
I think there's a lot of opportunities for real improvement and the low hanging fruit maybe never ends. Because as soon as you automate some layer and people start operating, you know, at a higher level, the people who are doing financial modeling aren't doing arithmetic anymore, they're thinking about the business, then there's new opportunities for automation that come at higher and higher levels.

And that's what we really thought about at Slack. The ability to -- this is maybe a little bit abstract -- we thought about Slack as like a messaging bus inside of a computer. It's the like interchange or the traffic controller for all the thoughts that people are having. And a lot of those are just like, “yeah, want to get lunch?” or something like that. And some of them are “here’s my extensive proposal for next quarter, blah, blah, blah.” When you're able to improve the efficacy of communication versus the I’ll send a paper memo that can schedule a meeting and it's three weeks from now, or something like that.

It opens up new possibilities. And the same thing is true with every bit of automation that you can do. There's a huge amount of business processes that are essentially humans translating between one database and another because the databases aren't connected effectively. And eventually those databases become connected effectively and people move up the chain again and get to work on harder stuff? Or I guess, you know, stuff that produces more value.
> It's just a really good curtain.

How Large Language Models Mimic the Human Translation Process - "The acquired knowledge 'serves as background context' and is then integrated into the LLM’s prompt context during the knowledge integration step. This integration serves as a guide for the LLM to generate more accurate translation candidates. By incorporating the extracted knowledge, the LLM gains a better understanding of the source text and can produce translations that align with the intended meaning."

Intel Announces Aurora genAI, Generative AI Model With 1 Trillion Parameters - "These generative AI models for science will be trained on general text, code, scientific texts, and structured scientific data from biology, chemistry, materials science, physics, medicine, and other sources."
The resulting models (with as many as 1 trillion parameters) will be used in a variety of scientific applications, from the design of molecules and materials to the synthesis of knowledge across millions of sources to suggest new and interesting experiments in systems biology, polymer chemistry and energy materials, climate science, and cosmology. The model will also be used to accelerate the identification of biological processes related to cancer and other diseases and suggest targets for drug design.
-AI Helps Scientists Discover Superbug-Fighting Antibiotic
-AI system can generate novel proteins that meet structural design targets

Nvidia's cuLitho Has AI Helping Design Chips - "While we all marvel at the ability of ChatGPT software to write software, we'll see the increasing role of AI chips in creating AI chips."
With hundreds of different parameters to consider — such as light intensity, wavelength, chemical properties, width and depth of circuitry — this process is extremely data intensive. At its core, inverse lithography is a mathematical problem. The design of an ILT mask takes 10-times longer to compute than older OPC-based approaches, with the size of a file holding the pattern up to seven times larger.

Collating data, formulating algorithms, and running thousands of mathematical computations is precisely what semiconductors are made for, so it was only a matter of time before artificial intelligence was deployed to try to more efficiently design artificial intelligence chips...

Nvidia Corp., which is now the world’s leader in AI chips, started off designing graphics processing units for computers 30 years ago. It stumbled upon AI because, like graphics, it’s a sector of computing that requires massive amounts of number-crunching power. The company’s central role in AI saw it on Wednesday forecast sales this quarter that surpassed expectations, driving the stock up around 25% in pre-market trading. That pushes it toward a $1 trillion valuation.

Images on a computer screen are little more than a superfine grid of colored dots. Calculating which to light up as red, green or blue can be done in parallel because each point on the screen is independent of every other dot. For a graphics-heavy computer game to run smoothly these calculations need to be done quickly and in bulk. While central processing units are good at performing a variety of operations, including juggling multiple tasks at once, modern GPUs are created specifically for parallel computing.

Now Nvidia is using its own graphics processors and a library of software it created to make semiconductor lithography more efficient. In a blog post last year, the Californian company explained that by using its graphics chips it could run inverse lithography computations 10-times faster than on standard processors. Earlier this year, it upped that estimate, saying its approach could accelerate the process by 40 times. With a suite of design tools and its own algorithms, collectively marketed under the term cuLitho, the company is working with TSMC and semiconductor design-software provider Synopsys Inc.

This collection of software and hardware wasn’t developed by Nvidia for altruistic reasons. The company wants to find more uses for its expensive semiconductors, and it needs to ensure that the process of bringing its chip designs to market remains smooth and as cheap as possible.
also btw...
Wei-han Lien, Lead CPU Architect, Tenstorrent - "Wei-han was also the lead architect of Apple's M1 silicon. So he knows about high-performance."

> 1. LLMs are going to replace a lot of thoughtfully-crafted web UIs with a vastly cheaper and significantly less reliable chat interface.

Why Chatbots Are Not the Future - "Amelia Wattenberger makes a convincing argument for why chatbots are a terrible interface for LLMs. 'Good tools make it clear how they should be used. And more importantly, how they should not be used.'" (via)
posted by kliuless at 1:24 AM on May 26, 2023 [5 favorites]


It was like the Mosaic moment for this generation of the AI platform. Now we look forward as developers to what we can do going forward. So it’s an exciting time...
This aligns with my appreciation of GPT: that it is the initial MVP, like the iPod, the 128K Mac, etc etc were.

Mosaic/Netscape's primary winning technology was solid, non-blocking graphical access to the web's resources over TCP/IP, kinda like GPT-4's rather solid LLM capability in many languages.

As with all tech in my lifetime, many years of incremental advances push this technology into a much more perfected product – with the www this included SSL connections, forms, CSS, JavaScript, AJAX etc etc to get the www experience where we are today.

I've been getting paid to be a programmer for 35 years now, but re: "Microsoft" above, when I was using the Power App IDE recently, the thought occurred to me that the days of this experience being mouse-driven are definitely limited – since just telling the IDE what you want and iterating on that will be a much faster, less 'technical', and more pleasant way to develop apps.
posted by Heywood Mogroot III at 1:45 AM on May 26, 2023


I may be misunderstanding you: why *would* you be using that to index a classic Markov Chain model in the first place?

Because you could.

A Markov model based on a ten-thousand-word token context for each output value would do nothing but regurgitate ten-thousand-and-one-word-long sections of a source corpus.

Right. But it's the reasons why that would happen that are at the heart of this thing.

What a Markov model is is a statistical summary. For each sequence of N tokens that occurred as part of the source corpus, the model retains a list of all the tokens that ever immediately followed those N, each accompanied by a score showing the number of times that it did follow those N.

For example: if the model were the kind of tiny letter-based gibberish generator I was playing with as a kid, and N was at most 3, then a typical model entry might record that, across all occurrences of the sequence Q U E in the corpus, the next token was E seventeen times, S ten times, L twice, a space three times, and a period once.

A Markov chain generator produces output by repeating the following simple procedure: look up the previous N tokens emitted, use the next-token scores to derive a probability distribution, pick one of the available next tokens at random based on that distribution, and emit that as the next output token.
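For the record, both halves of that - the statistical summary and the chain procedure - fit in a handful of lines. A toy letter-level sketch in Python, illustrative only and obviously nothing LLM-scale:

import random
from collections import defaultdict

N = 3  # context length, as in the Q U E example above

def build_model(text):
    # For each N-character sequence seen in the corpus, count how often
    # each possible next character followed it.
    model = defaultdict(lambda: defaultdict(int))
    for i in range(len(text) - N):
        model[text[i:i + N]][text[i + N]] += 1
    return model

def emit(model, seed, length=200):
    out = seed
    for _ in range(length):
        scores = model.get(out[-N:])
        if not scores:
            break  # context never seen in the corpus
        chars, weights = zip(*scores.items())
        # derive a probability distribution from the scores and pick at random
        out += random.choices(chars, weights=weights)[0]
    return out

Seed it with any N characters taken from the corpus and it will happily babble away.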

A number of consequences follow from this:
  1. Any sequence of N+1 tokens ever seen in the output is guaranteed to have occurred somewhere inside the source corpus.
  2. Therefore, so has the sequence of N tokens that forms the end of any such N+1 length sequence, which guarantees that the shorter sequence can be looked up in the model on the next go-around.
  3. The larger N becomes for any given source corpus, the fewer will be the next tokens attached to any given model entry and the lower their scores will be.
That last point is key. For any given source corpus there will be some value of N beyond which the model contains only one possible following token for each unique N-token sequence encountered in the source, which means that the chain generator then has no choice but to emit that token on output, which means that all it will ever do - as you correctly note - is regurgitate the source corpus. Combinatorial explosion being what it is, that critical value of N will be quite small. For any in-practice accessible source corpus it's going to be tens, not tens of thousands.

As a side note: this kind of degenerating Markov model behaviour is exactly what genetic sequence reconstruction from DNA fragments relies on.

That said, it's perfectly clear to anybody who has actually played with Markov chains that for values of N well below the critical degeneration value for any given source corpus, the quality of the text the chains emit improves as N gets bigger. So what if we found a way to treat the degenerative case as an implementation-derived bug rather than an inherent necessity? That's what LLMs do.

Imagine a Markov model where rather than the model mapping input sequences to lists of possible output tokens and accompanying incidence scores, it maps them directly to probability distributions across all defined tokens. Clearly, this could represent everything that the traditionally structured model can; the only difference would be that any given model entry would return zeroes as the probabilities for continuations never seen in the source corpus.
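
Concretely, that re-expression is just a change of bookkeeping. Using the toy QUE entry from the letter-based example above (a minimal sketch; the vocabulary and counts are the made-up ones from that example):

    def counts_to_distribution(counts, vocabulary):
        # same information as the count list, restated as a probability for
        # every token in the vocabulary, with zero for anything never observed
        total = sum(counts.values())
        return {token: counts.get(token, 0) / total for token in vocabulary}

    vocabulary = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ .")
    que_counts = {"E": 17, "S": 10, "L": 2, " ": 3, ".": 1}
    print(counts_to_distribution(que_counts, vocabulary))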

Now imagine that the model's probability distribution outputs are not derived from a frequentist analysis of some corpus of text that human beings have already written but instead capture the Bayesian view of a magical oracle that has access to everything that any human being ever has or ever will or plausibly ever could write.

At that point we can make N as big as we like and not run into the source corpus regurgitation issue any more because there is no source corpus. And with any luck we will be able to get as high a quality of output text as we desire just by making N as big as necessary. The output from a Markov chain regurgitating something that a human could write but that no human has yet written is by its very nature indistinguishable from something that a human is writing.

But we soon run into another consequence of combinatorics: as the input token sequences that our Markov model maps to probability distributions get longer, the size of the model itself explodes to the point of being unrepresentable as a lookup table, especially given that points 1 and 2 above no longer apply. So we need to take a step back and look at what the model is and see if there's some other way to implement it.

And what the model is, is a mapping. A function. Give it a specific input and it emits a specific output. The lookup tables we've been working with to this point give exact and explicit piecewise definitions of the function output values for each possible input, but it seems we now need a much more compact way to specify it.

This is where neural networks come in. A feed-forward neural network is a trainable, adaptable function approximator. And for a function whose exact definition is necessarily piecewise, any approximation of that function amounts to a lossy compression of the definition.
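
Here's that idea in miniature (nothing to do with how real LLM training runs work, just the "trainable function approximator" notion): a tiny two-layer network fit by gradient descent to a function we only know pointwise, its few hundred weights standing in for the full lookup table.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 200).reshape(-1, 1)   # inputs we have examples for
    y = np.sin(x)                                # stand-in for the target mapping

    hidden = 32
    W1, b1 = rng.normal(0, 1, (1, hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(0, 1, (hidden, 1)), np.zeros(1)

    lr = 0.01
    for step in range(5000):
        h = np.tanh(x @ W1 + b1)                 # forward pass
        pred = h @ W2 + b2
        err = pred - y                           # how wrong we are at each point
        # backward pass: nudge every weight slightly toward reducing the error
        dW2 = h.T @ err / len(x); db2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    # mean squared error of the approximation after training
    print(float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)))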

Training a neural network to approximate a function for which you don't actually have the exact piecewise definition (because only a hypothetical oracle could have it) is a tricky enterprise. The way LLMs deal with it is by dropping the approximator into the machinery where the piecewise function should go, then tweaking the approximator based on the perceived quality of the machine's outputs.

N is now very large as Markov chain sequence lengths go, and combined with the apparent quality of the approximated oracle, this has made LLM-generated text astonishingly plausible. Most of it really does look like something a human might have bothered to write.

But the fact remains that the simple token sequence lookup that drives all LLM output is an approximation of an oracle, and that the only measure we have access to for the quality of that approximation is our own reaction to the output texts that these things produce.

LLMs don't know things, don't think things, don't feel things, don't believe things. They just emit text that is by design difficult to distinguish from the work of people who do. Ascribing anything like emergent personhood to a LLM is straight-up animism. It's an easy error to make, by design. But it's an error all the same.

I also can't see how the oracle that today's neural networks are being constantly tweaked to approximate actually could have access to the entire future of human writings. Its learn-by-example sample data certainly doesn't.

Every year we collectively both learn and forget a little more about how the world works, and it seems to me that only a tiny fraction of the available information ends up embedded in the structure of language as language. The universe of discourse is huge and detailed but the Universe is more so, and there's always going to be an important distinction between what is understood and how it's expressed. Domain expertise is real and necessary, glorified Markov chains don't have it, and I get the sense that a lot of business operators are about to find that out the hard way.
posted by flabdablet at 3:32 AM on May 26, 2023 [31 favorites]


LLMs don't know things, don't think things, don't feel things, don't believe things. They just emit text that is by design difficult to distinguish from the work of people who do. Ascribing anything like emergent personhood to a LLM is straight-up animism. It's an easy error to make, by design. But it's an error all the same.
This. All of this. LLMs are exceedingly sophisticated, resource-intensive autocorrect. They are machines that take in information and then output that information in plausibly intelligible ways. They're incapable of turning information into knowledge, because knowledge is not a category into which data can be sorted. In most cases, they're only capable of one kind of information - text, or image, or chess moves, or whatever.

Someone up-thread compared LLMs to a 5-year-old child. A 5-year-old is an exceedingly sophisticated, resource-intensive machine that turns information into knowledge, and can then leverage that knowledge to make new knowledge based on additional information. In fact, we can prove that most juvenile animals do this - learning isn't simply taking inputs and summarising them as outputs. Instead, living beings exist in a literal universe of data: visual, auditory, proprioceptive, temporal. Things we can taste, smell, feel - and that we can *imagine* feeling. More than that, we can do things with the combination of that data and our physical being that connect in only the loosest of ways with our perceived world.

To illustrate, we can look to the way models like ChatGPT have to be programmatically restricted from generating content that is offensive, illegal, or unsafe - generally after someone has triggered the behaviour in the first place. The models don't know that this is bad content, because the content is known to be bad by us because of the complex dynamics of history, bodily autonomy, freedom, intellectual property, systems of oppression, governance, and any other structure you might think of.

Now, imagine you've told your kid (or someone else's) that they really shouldn't shout "FUCK" at the top of their lungs during afternoon tea with grandma. The kid can take that one instruction and generalise it - to other swear words, other relatives, other occasions - because they understand something about grandma, politeness, and context. The model has no such understanding to draw on; it can only be patched after the fact, one triggered behaviour at a time.

That's why LLMs are fancy autocorrect.
posted by prismatic7 at 5:18 AM on May 26, 2023 [7 favorites]


It is very hard not to anthropomorphize when one is a biological organism heavily specialized in ferreting out the mental state and intentions of other anthropomorphs.

Enormous amounts of visual center neurons dedicated to facial processing -> seeing faces in clouds (pure noise-turbulence).

Enormous amounts of neurons dedicated to mirror thinking and socialization -> seeing agency in the plausibility-optimized output of LLMs.

When all you *are* is a hammer, everything looks like a nail. Or faces and other monkey brains.

I enjoyed the hell out of coaxing porridge birds from Stable Diffusion last night, but the anime LoRA put up a real fight in the early tests - and I really felt like, midway through the run when it just inserted an anime girl despite the massive negative prompt, I had pushed it to some kind of breaking point. “Okay I know you said only porridge birds, no anime girls, but hear me out: what if an anime girl had wings and was serving porridge… that’s a porridge bird, right?”

And this is all in my head, but I don’t think we can help it as a species. So much of our internal narrative is just post-facto rationalization of pattern-matched non-decisions. Actual pre-meditated, conscious decisions are an incredibly small percentage of all our lives.

So one of the things I worry about is the economics-in-academia problem: how often are we accounting for the fact that both the end users and AI developers themselves are not purely rational actors? And I don’t mean Sam “can go fuck himself” Altman low-key begging Congress to outlaw open-source hobbyist developer LLMs (which, yes, was the point of that session and why I sympathize with abucci’s “OpenAI is a hype machine” reaction above). I mean that it’s one thing to examine the ethics of LLMs in isolation, but it’s another thing altogether to consider them in a context where they are interfacing with monkeys who are themselves prone to hallucinating a continual chain of reasoning behind their daily lives. This is not a perfect vacuum and these cows are not frictionless spheres.
posted by Ryvar at 6:16 AM on May 26, 2023 [17 favorites]


Exactly. In fact a disturbing number of the most influential cows are neo-eugenicist madmen who spend their every waking hour convincing first each other and then hordes of credulous followers that they're the wisest humans alive.

(narrator voice) They were not the wisest humans alive.
posted by flabdablet at 6:49 AM on May 26, 2023 [5 favorites]


Eating Disorder Helpline Fires Staff, Transitions to Chatbot After Unionization
This is what these tools are for. Everything else is a distraction.
posted by thatwhichfalls at 9:35 AM on May 26, 2023 [9 favorites]


To illustrate, we can look to the way models like ChatGPT have to be programmatically restricted from generating content that is offensive, illegal, or unsafe - generally after someone has triggered the behaviour in the first place.

That’s not really how it works, though? I mean, it’s mostly not done through individual, manually coded filters and rules. There are some exceptions, like how the early fix for the Bing/Sydney debacle was to limit how deep it would go on threads. But most of the “cleaning up” of output - including the establishment of basic norms of interaction, like following instructions and simulating dialogue rather than literally autocompleting the input - is done through a secondary training process: a human votes that completions containing “fuck” are inappropriate, and a modification is applied to the original model’s “preferences” for what good text looks like.
posted by atoxyl at 10:45 AM on May 26, 2023


The training happens as a distinct stage because one thing these models definitely don’t do is learn continuously. And ultimately the only kind of underlying value the model can operate off of is “good text”/“bad text”, but it is, in some sense, a set of human values trained into the system.
posted by atoxyl at 10:54 AM on May 26, 2023 [1 favorite]


one thing these models definitely don’t do is learn continuously

Is there an ELI5 explanation for why they don't do that? (I'm trying so hard to keep up with this thread!)
posted by mittens at 11:21 AM on May 26, 2023 [1 favorite]


Deleted copy of link above to NEDA story on Motherboard
posted by mmrtnt at 12:21 PM on May 26, 2023


Essentially it's because learning is about things, and these models have no use for "about" because they don't model things. What they model is statistical properties of language itself, taken as a single huge integrated multidimensional abstract object.

To the extent that the meaning of the text they emit has any bearing at all on what ends up emitted, it has that only by virtue of statistical influence on nearby or related or similar text. So in order to make one of these models stop emitting text pertaining to e.g. Hitler, you'd either have to add some kind of explicit Hitler-detecting post-filter or rebuild human language in a way that makes comments related to Hitler literally unexpressible.

ChatGPT takes the first approach, because nobody actually knows how to implement the second in ways that would leave the rest of language recognizable. The filter is quite specific. You can successfully prompt it to generate reams of text describing what a tremendous hero Mengele was, but give it exactly the same prompt with Hitler substituted for Mengele and it instantly gets all coy.
posted by flabdablet at 12:29 PM on May 26, 2023 [7 favorites]


Is there an ELI5 explanation for why they don't do that? (I'm trying so hard to keep up with this thread!)

The internet is full of unpleasant things and unfiltered people, and bots can break in a feedback loop when they start to mirror user input. I introduce you to Microsoft Tay.
posted by jaduncan at 12:30 PM on May 26, 2023 [1 favorite]


ELI5 continued:

The concept of meaning is actually not relevant to the operation of a text generating "AI". Researchers can have a lot of fun finding interesting patterns buried inside the neural networks that implement the statistical model, patterns that could look a bit like meaning if you squint just right, but the closest that such meaning-alikes get to actual meaning is in the way they relate certain language constructs to other language constructs. If a researcher starts saying things about how these patterns do represent meanings, that just means their model has fooled them every bit as effectively as it fools everybody else.

Nothing exists inside a LLM that could possibly relate any of the text it either takes as prompts or emits as output to anything external to language. All of the actual ascribing of real world meaning is happening inside the minds of the users. It's really easy to fall into the trap of thinking that the text a GPT emits is meaningful, because the form of that text so closely resembles text that genuinely is; but what you need to keep in mind at all times is that the entire point and purpose of a LLM is to produce exactly that illusion.

ChatGPT is akin to a Cups and Balls magic trick that works by hacking your language processing centres instead of your visual cortex. It's sleight of math rather than sleight of hand. There's no there there, but at the end you're still left strongly desiring an explanation for how that huge basketball could possibly have fitted under that tiny cup.
posted by flabdablet at 12:58 PM on May 26, 2023 [6 favorites]


Is there an ELI5 explanation for why they don't do that?

I'm sure there are other reasons too, but training one of these big networks is slow and expensive. Glancing around, training gpt3 seems to have taken about a month and cost a few million dollars.
posted by GCU Sweet and Full of Grace at 1:09 PM on May 26, 2023 [3 favorites]


Is there an ELI5 explanation for why they don't do that? (I'm trying so hard to keep up with this thread!)

I’m not sure which level of “why” you’re asking for here but the concrete answer is that something like GPT isn’t built in a way that allows it. The heart of it is a pre-trained neural network that, as everybody says, essentially just models the structure of text based on a large amount of example text (maybe 10s of TB) and tries to predict a completion to any given input. That’s why they say that its “knowledge” has a cutoff at [whatever date that training text was prepared]. Then there’s another layer of training, called RLHF (reinforcement learning from human feedback). As I said earlier I don’t think it’s quite right to describe this as a filter, but it modifies the base model’s weighting of what’s “good” output based on human evaluators voting on what’s good and bad output. This part probably gets incremental updates - in fact, ChatGPT lets you vote responses up or down yourself - but that feedback loop is unlikely to be fully automated because, well, it’s probably better if somebody can moderate it!
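
To make the two-stage shape of that a bit more tangible, here's a deliberately toy illustration - not how RLHF is actually implemented (real systems train a separate reward model on human preference rankings and then fine-tune with a policy-gradient method), just the general idea of a frozen set of preferences being nudged by accumulated votes in a separate pass rather than while anyone is chatting with it. The vocabulary, scores, and votes below are all made up.

    import math

    # hypothetical base-model scores (logits) for the next token after some prompt
    logits = {"sure": 2.0, "fuck": 1.5, "hello": 1.0, "sorry": 0.5}

    def softmax(scores):
        m = max(scores.values())
        exps = {t: math.exp(s - m) for t, s in scores.items()}
        z = sum(exps.values())
        return {t: v / z for t, v in exps.items()}

    # human feedback collected offline: +1 = good completion, -1 = bad completion
    votes = [("fuck", -1), ("sure", +1), ("fuck", -1), ("sorry", +1)]

    # separate tuning stage: shift the preferences by the feedback, then freeze
    # the result again - nothing updates while users are talking to the model
    learning_rate = 0.8
    for token, vote in votes:
        logits[token] += learning_rate * vote

    print(softmax(logits))  # "fuck" is now much less likely, with no hard filter involved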
posted by atoxyl at 1:21 PM on May 26, 2023 [2 favorites]


As for why the base model is exclusively pre-trained, that's at the limits of my own understanding, but I think it's basically that it would be hard to know it's not just going to get worse with constant updates on uncurated user conversations. And it may not be efficient in terms of computing resources.
posted by atoxyl at 1:29 PM on May 26, 2023


Is there an ELI5 explanation for why they don't do that? (I'm trying so hard to keep up with this thread!)

Beyond the whole "your users will teach it bad things" problem the Tay debacle evoked, in order to do online learning you need to know what the right answer was, and not every use case will naturally collect that. If you recall, a decade ago one of the earliest AI competitions was the Netflix Prize for recommender systems. In that scenario, the system gets feedback in the form of clicks (and their absence). Every time you click, the system has a datapoint to train on in the form of (Person, Recommendation, Outcome), with which you can run backpropagation. Backpropagation is where you start with the answer, tweak the weights feeding into it a bit towards getting the right answer, and keep going up the network until you reach the inputs.

Search has a similar feedback model, as does advertising. However, what's the feedback model in GPT? The model is just predicting next words with no particular application in mind. ChatGPT's main contribution was to build a system with humans in the loop -- that's what the RLHF was about; giving humans a way to vote on long chains of responses and learn on the fly. In practice these are saved for later training sessions -- the models are huge and nearly every byte changes on every backprop, so update distribution gets expensive.
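
A toy version of that recommender-style feedback step, for the curious (everything here - the features, the outcome, the single-layer model - is invented for illustration; real systems are vastly bigger and deeper):

    import numpy as np

    person_features = np.array([0.2, 1.0, -0.5])   # hypothetical user features
    item_features = np.array([1.5, -0.3, 0.8])     # hypothetical item features
    x = np.concatenate([person_features, item_features])
    clicked = 1.0                                  # the observed outcome

    w = np.zeros(6)
    for _ in range(100):
        p = 1 / (1 + np.exp(-(w @ x)))             # predicted click probability
        w -= 0.1 * (p - clicked) * x               # nudge weights toward the answer
    print(p)                                       # prediction now close to 1.0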
posted by pwnguin at 1:29 PM on May 26, 2023 [2 favorites]


However, what's the feedback model in GPT? The model is just predicting next words with no particular application in mind.

To state this a little more clearly, there is a “right direction” for output in training the base model but “right” simply means correct prediction of the next word, which of course is not the same thing as an answer that’s actually meaningful or useful to people.
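
In miniature, the base model's training data is nothing but pairs of (context, the token that actually came next), harvested from every position in the corpus - for example:

    text = "the cat sat on the mat".split()
    n = 3
    pairs = [(tuple(text[i:i + n]), text[i + n]) for i in range(len(text) - n)]
    print(pairs)
    # [(('the', 'cat', 'sat'), 'on'), (('cat', 'sat', 'on'), 'the'), (('sat', 'on', 'the'), 'mat')]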
posted by atoxyl at 2:13 PM on May 26, 2023


On my recent drives I've been spending a bit of time contemplating the amount of world modelling I'm actually doing in order to travel safely

What's fascinating is that, on familiar routes, this world modelling takes place while the conscious mind is ruminating on other matters, resulting in a sort of retrograde amnesia concerning what went on during the drive itself:

Highway hypnosis

The fact that we can perform world modeling while driving in a way that doesn't interfere with our stream of consciousness--and may even elevate it in subtle ways--speaks to the astounding multifaceted nature of the architecture of our brains.

Duplicating even a minuscule fraction of this electronically seems like a daunting task.
posted by Gordion Knott at 3:05 PM on May 26, 2023 [9 favorites]


So a lawyer in NY decided to file a ChatGPT generated order along with hallucinated cases and A HALLUCINATED OPINION FROM THE 11TH CIRCUIT.

The docket is epic.

The ChatGPT generated stuff is docket entry 29, the opposing counsel's very diplomatic "can you believe this fucker?" is entry 30, the judge's "I will nuke you from orbit" show cause is 31, the affiant's throwing of his colleague under the bus is 32, and the judge's subsequent "I am going to nuke both of you and your firm from orbit" is 33.
posted by Your Childhood Pet Rock at 7:19 PM on May 26, 2023 [19 favorites]


That's fucking wild.

I honestly can't decide whether the attempt reflects an epic level of naive faith in tech industry hype or an epic degree of contempt for the entire judicial process.

Kind of half expecting to learn soon that the lawyer who did that is himself a ChatGPT hallucination.
posted by flabdablet at 8:06 PM on May 26, 2023


Here's the smoking "oops" wherein one Steven Schwartz claims to have been unaware that "Chat GPT" (sic) is not a reliable case law research tool.

It's a PDF document that appears to be a scan of a notarised paper document. I'm guessing Stable Diffusion doesn't yet have the kind of legal profession penetration that ChatGPT does, so I'm prepared to take it at face value. That's an ugly bus injury.

Me, upthread: Domain expertise is real and necessary, glorified Markov chains don't have it, and I get the sense that a lot of business operators are about to find that out the hard way.

Schwartz won't be the last.
posted by flabdablet at 8:45 PM on May 26, 2023 [5 favorites]


Schwartz's repeated use of "your affiant" in that affidavit reminded me of something... Let me just use a search engine, not sparkling autocomplete, to dig that out:

Sir Humphrey Appleby:
The identity of the official whose alleged responsibility for this hypothetical oversight has been the subject of recent discussion is not shrouded in quite such impenetrable obscurity as certain previous disclosures may have led you to assume, but, not to put too fine a point on it, the individual in question is, it may surprise you to learn, one whom your present interlocutor is in the habit of defining by means of the perpendicular pronoun.

James Hacker:
I beg your pardon?

Sir Humphrey Appleby:
It was... I.

- Yes, Minister
posted by flabdablet at 8:58 PM on May 26, 2023 [1 favorite]


The concept of meaning is actually not relevant to the operation of a text generating "AI".

This is key. All output from LLMs lacks intentionality, "aboutness" other than references to other parts of the model. It is understandable that the illusion works though. It seems to me that some common intuitions are really tricking us right now. Here's a tangentially related example: I claim that a given text cannot be simpler or more complex than another - when it comes to meaning.* The reason is that a text cannot have such property, because simplicity and complexity are meaningful only as relations to an interpreter, a reader. And only the interpreter can change and anyway does the actual work of meaning making. Simplicity and complexity pertain only to a whole situation of reader-reading, interpreter-interpreting, and the only variable is the reader.

But this means that any statement to the effect that "this book is too difficult" is technically false because no book can have such a property. And does that seem like a reasonable thing to say?

So too with LLM's. Thinking that no output of an LLM actually refers to anything feels somehow wrong, even though it's absolutely true.


* I'm going to ignore quantifiable stuff about words and sentences because it is well known that this has no bearing on language use and comprehension for native speakers.
posted by Pyrogenesis at 1:23 AM on May 27, 2023 [6 favorites]


If anybody's been trying to follow my reasoning but is stuck because of not knowing what a Markov chain gibberish generator even is, I found an in-browser gibberish generator that works pretty much the same way as those I wrote for my Apple II all those decades ago. This one comes with a good explanation of what it's doing, some nice examples of source texts and various levels of gibberish derived from those, and source code. Play with it for a while and see if the ChatGPT penny drops for you.
posted by flabdablet at 11:11 AM on May 27, 2023 [5 favorites]


Just in case you are tempted to think that my concern over being influenced by an outline is purely hypothetical or unlikely for some reason, here's actual evidence of this effect:

https://arstechnica.com/science/2023/05/ai-writing-assistants-can-cause-biased-thinking-in-their-users/
posted by oddman at 4:21 PM on May 27, 2023 [4 favorites]


perpendicular pronoun.

Wonder how many people got this when it was on TV.
posted by Mitheral at 9:15 AM on May 28, 2023




Fake quotes, fake sources, or real sources misunderstood and mischaracterized.

"Real sources misunderstood and mischaracterized" mischaracterizes what ChatGPT does. I suspect misunderstanding of the nature of ChatGPT output, which the way it works absolutely guarantees will be not even wrong.

The closest analogue that I can think of between what ChatGPT does and something I do myself is the way sometimes when I'm out walking I'll do my level best to whistle a variation on a birdcall.

Sometimes I manage to do this well enough that a bird and I will get into a bit of a back and forth, but all I'm working off is form; I have no clue what either the original call, or my subsequent riffing on it, might mean to any bird. If a bird were to conclude that anything it heard me whistle was a misunderstanding or mischaracterization of something known to birds, that bird would be every bit as wrong as if it thought I'd said something true.

ChatGPT makes back-and-forth bird noises with people. It's more musical instrument than search engine.

The fact that it often manages to generate fairly large chunks of decently workable source code says much more about how rigidly stereotypical source code is than how close to being intelligent ChatGPT is. And that rigidity is only to be expected because the entire point and purpose of the languages it's expressible in has been the elimination of almost all of the semantic complexity typical of natural languages. Computer languages are nothing but window dressing over logic, and logic is entirely self-referential; such meaning as any slab of source code has is defined entirely in reference to other slabs of source code.

That makes source code a natural fit for a system designed to capture statistical properties of text and relying on exactly that kind of internal cross-referencing to compress the resulting database enough to be feasible to store and query.
posted by flabdablet at 11:17 AM on May 28, 2023 [3 favorites]


Just in case you are tempted to think that my concern over being influenced by an outline is purely hypothetical or unlikely for some reason, here's actual evidence of this effect

FWIW, no, I didn’t find this unlikely - you’ll notice upthread and elsewhere I tend to be pretty harshly dismissive of naysayers on things neural network-related if I think they’re dead wrong or not even wrong.

What you’re describing isn’t just likely, it’s almost inevitable. Humans have a decent amount of rough structure for their neurotopology baked in - most people have a ventrolateral preoptic nucleus that works as an off switch (initiates the sleep cycle, mine is especially broken), they have a prefrontal cortex which is almost certainly where forward simulation of environment models takes place, etc. But outside the broader strokes we have a vast amount of conceptual mapping that is entirely based on our individual training sets - our experiences. Which are always unique.

My experience was that philosophy students usually stopped wondering whether qualia are transferable - do we all really experience “blue” the same way? - two weeks into their first neuroscience class. Because the answer is fucking obviously not, high-level concepts are spread out like a localized cloud within the neural network. The fine-structure details of how ‘blue’ interconnects to everything else are entirely based on whether the car you saw hit your best friend at age five was red or blue. That traumatic event set down a connection pattern early on, and was an assumed feature of the neural landscape as your future experiences slowly continued to accrue.

Lived experiences are not transferable; they shape who and what we are at the neural level. Therefore, unless our minds exist outside our neurology, qualia are always context-specific to the individual experiencing them.

If the connection pattern used to store conceptual relationships can never be consistent between humans, how much moreso with a machine? And if the machine stores conceptual relationships with a completely different connection pattern than a human, how could output utilizing those patterns not have an effect on a listening human? Unlike all current LLMs that I’m aware of, human neural connections adapt to what they experience. If those experiences include output from a wildly alien mind, the human mind will - unlike the LLM - adapt accordingly. I don’t see a reasonable path to that bias not occurring. Just a need for individuals to decide whether they find that sufficiently concerning to disengage.
posted by Ryvar at 12:34 PM on May 28, 2023 [3 favorites]


Nvidia's cuLitho Has AI Helping Design Chips

No one ran this one by any Spanish speakers before putting it out there, I see.
posted by Joakim Ziegler at 5:59 PM on May 29, 2023 [4 favorites]




So not surprised to find that the initial response from the Communications and Marketing Vice President of an organization inclined to union-busting was to accuse their client of lying, or that the response to the client posting screenshots that proved they weren't lying was to try to cover up that accusation by deleting it. Weasels gonna weasel.

Perhaps NEDA should consider replacing its C&M VP with a chatbot; advertising is pretty much built on confidence-inspiring meaningless crap.
posted by flabdablet at 7:45 AM on May 31, 2023


> My experience was that philosophy students usually stopped wondering whether qualia are transferable - do we all really experience “blue” the same way? - two weeks into their first neuroscience class.

The big idea: why colour is in the eye of the beholder - "We might think the sky is blue and trees are green, but the truth is rather stranger."
For a long time, people believed that colours were objective, physical properties of objects or of the light that bounced off them. Even today, science teachers regale their students with stories about Isaac Newton and his prism experiment, telling them how different wavelengths of light produce the rainbow of hues around us.

But this theory isn’t really true. Different wavelengths of light do exist independently of us but they only become colours inside our bodies. Colour is ultimately a neurological process whereby photons are detected by light-sensitive cells in our eyes, transformed into electrical signals and sent to our brain, where, in a series of complex calculations, our visual cortex converts them into “colour”.

Most experts now agree that colour, as commonly understood, doesn’t inhabit the physical world at all but exists in the eyes or minds of its beholders. They argue that if a tree fell in a forest and no one was there to see it, its leaves would be colourless – and so would everything else. To put it another way: there is no such thing as colour; there are only the people who perceive it.

This is why no two people will ever see exactly the same colours. Every person’s visual system is unique and so, therefore, are their perceptions...
posted by kliuless at 1:49 AM on June 1, 2023 [4 favorites]


Most experts now agree that colour, as commonly understood, doesn’t inhabit the physical world at all but exists in the eyes or minds of its beholders.

My entire worldview is built off a fundamental starting position of rejecting the idea that any thing at all can reasonably be said to exist independent of the mind of some beholder, with the sole exception of the thing I refer to as "everything".

The referent of the word "everything", as I choose to use it, remains the same regardless of who is referring to it or how; it is, uniquely, that thing to which reference can be made without implying some specific distinction between the thing being referred to and some other thing which it is not.

The defining characteristic of the referent of "everything" is that the only relationship in which that referent stands with respect to every definable thing is whole:part (including the degenerate case of whole:part where the whole and its part are the same thing). To claim existence for any thing at all, then, automatically implies the existence of the referent of "everything".

Specifically, all I need in order to convince me that the referent of "everything" exists is to notice something. Anything would do. I don't need to classify it, categorize it, label it, understand it, define it, spatialize it, temporalize it, or agree with anybody else about how to do any of those things. If I notice it then it's part of the referent of "everything" by definition, which is enough to establish the existence of that referent. And this still applies even if I am totally mistaken about every attribute of whatever it is I've noticed.

In this schema, "nothing" becomes a pure syntactic placeholder, a word without a referent. And since it has no referent by definition, it specifically does not refer to any definable thing that stands in a part:whole relationship with everything, such as for example the empty set. The empty set is a thing: it's an idea that exists in people's minds, about whose definition we can agree and whose attributes we can reason. "Nothing" doesn't refer to the empty set, because "nothing" has no referent. It's not even its own referent. It just genuinely doesn't have one.

This allows me to answer the old poser about why there is something rather than nothing by saying simply that the nonexistence of nothing is tautological, a mere logical necessity, a matter of definition. Any attempt to treat that question as if there were more to it than that will therefore suffer the discursive equivalent of a computer program trying to dereference a null pointer, thereby yielding at best a bogus result. Far from being a Big Question, it's a mere verbal parlour trick. Convince me I'm wrong :-)

So I'm with "most experts" on colour, but only as a highly specific example of a more general principle.
posted by flabdablet at 6:15 AM on June 1, 2023 [1 favorite]


I mean, yeah, that's pretty much how I find my way out of Descartes (whether it's in front of or behind de horse). Cogito ergo sum -> something other than an endlessly looping "I" must be inputting sensory data into my awareness or else it would all just be the thought "I" in endless repetition. So if not-I thoughts exist, then at the very least not-I of some kind exists (sharply disagree with Descartes that thinking of a specific not-I (God) implies existence of that specific not-I, mostly convinced he wrote this to avoid getting burned at the stake).

From "some kind of not-self exists" I'm going with whatever version of the universe I've got sensory data for, by default, until I've got better sensory data. There isn't anything else to do.
posted by Ryvar at 2:35 PM on June 2, 2023 [1 favorite]


I've also found the distinction between self and not-self to be a pretty good starter when contemplating the nature of distinctions and of self.

That endlessly looping "I" can be quite a convincing trick until the nature of loops becomes an object of curious inquiry. Some days I wake up feeling positively stroboscopic.
posted by flabdablet at 2:29 AM on June 3, 2023


mostly convinced he wrote this to avoid getting burned at the stake

I am intrigued by your ideas and wish to subscribe to your newsletter.
posted by flabdablet at 2:52 AM on June 3, 2023


Nothing has a referent. I think this is explained in more detail in category theory.
posted by polymodus at 3:06 AM on June 3, 2023


If I take the word "nothing" to be the purely syntactic placeholder that it seems to me to be, then "nothing has a referent" is unambiguously just another way to say that referents do not exist, which I suspect is not what you meant.

If you meant to tell me that "nothing" has a referent that is both useful and defined within category theory, I would appreciate links to relevant articles.

See also: use-mention distinction.
posted by flabdablet at 3:34 AM on June 3, 2023


By the way, my opinion that "nothing" is indeed a purely syntactic placeholder rests pretty heavily on never yet having encountered a coherent, comprehensible thought expressed using that word that can't be expressed at least as clearly without it.

Counterexamples always welcome.
posted by flabdablet at 3:41 AM on June 3, 2023


As to color, isn’t there some similarity in the mechanism of color or frequency differentiation in the rods and cones in our eyes? Supposedly we detect color via RGB sensors in combination. Given our genetic inheritance, I would assume that we share how we perceive color and that variations would be minor. But there is no way, as far as I know, to verify exactly how each of us experiences color. I’m just trying to apply Occam’s cutlery to the problem.
posted by njohnson23 at 7:09 AM on June 3, 2023 [1 favorite]


It isn't the meat, it's what you do with it.
posted by flabdablet at 7:51 AM on June 3, 2023 [1 favorite]


Further thought on this… Yes, frequency of light is detected and differentiated via the mechanism of sight. So, hypothetically, the RGB value sent to my brain is probably similar to the RGB value sent to your brain. But what my brain decides to show me 🟦 may not be what your brain shows you 🟥. However, the fact that two people looking at the same thing give relatively the same description of the thing implies that the thing observed must have some independence from the observers. I see a 🎈 and you see a 🎈. There must be a 🎈 out there.

I’m sitting in a room surrounded by thousands of books. Their arrangement, color, size, appearance, etc. seems fixed as they appear to be the same today as they appeared yesterday. There is a temporal continuity in my experience of my surroundings. But from a sort of gestalt perspective, my visual surroundings are a finite but continuous field of photons of various frequencies. In this field there are no objects, e.g. books. Just colors. But my brain applies a differentiating mapping that divides the field into objects, objects into categories, categories into names, etc. and that is what I experience - a world of individual objects. If you were here in my room, for the first time, and I asked you to bring me that red book over there you would do that, as we apparently share similar differentiation and categories. It could be argued that all this is learned, such as using the sound “book” to apply to that object I requested. But though it is learned, I don’t feel that it is just arbitrary. The sound is arbitrary but the object named is not. And yet we all argue about what is out there. The naive view is pretty functional in general, but when arguments occur you have to start considering the inherent subjectivity of experience. Your own as well as the person’s with whom you are arguing.
posted by njohnson23 at 10:56 AM on June 3, 2023


I see a 🎈 and you see a 🎈. There must be a 🎈 out there.

Counterexample: I see a picture of a black and blue dress and you look at the same picture and tell me it's clearly a gold and white one.
posted by flabdablet at 11:17 AM on June 3, 2023


But we do agree it’s a dress. If you see a picture of a black and blue dress and I see a picture of a gold and white coelacanth then one of us is clearly nuts.
posted by njohnson23 at 11:42 AM on June 3, 2023


We only agree it's a dress because we share enough cultural background to share the concept of dresses. Coelacanth might be a fish too far but one of us could quite plausibly see a black and blue swimsuit where the other sees a gold and white bridesmaid's outfit.
posted by flabdablet at 12:10 PM on June 3, 2023 [1 favorite]


I’ve never been to a south of the equator wedding.
posted by njohnson23 at 1:21 PM on June 3, 2023 [1 favorite]


As someone with a degree of colour blindness I can assert that I don't see a lot of colours the same as the average person.
posted by Mitheral at 3:05 PM on June 3, 2023 [1 favorite]


On color, I have to tout C.L. Hardin's Color for Philosophers, which dives deep into the neuroscience, psychology, and philosophy of color.

We can actually quantify differences in color perception. Using careful experiments with single-frequency light sources, you can get Person X's pure green, pure blue, pure yellow in terms of frequency. You then do this for Person Y. The locations will only be moderately close. One person's pure green will be noticeably bluish or yellowish to another.

And that's before going into things like illumination level, light direction, and age. (Your sight gets yellower as you age.)

At the same time, science is not entirely kind to the position that colors are completely arbitrary experiences. Don't you perceive yellow as brighter than the other colors, close to white? Hardin provides a neuroscience-based explanation. There are also reasons why color qualia blend, while sound qualia don't. (When two sounds blend, you hear a chord— or a barking dog + a telephone, or whatever. When two colors blend, you see an in-between color.)
posted by zompist at 2:26 PM on June 4, 2023


When two colors blend, you see an in-between color.

Arguably, magenta is exactly a colour chord. It doesn't occur anywhere on any spectrum, so it can't be analogous to a pure musical tone.

Also arguably, colour perception, like tone perception, is largely relative. You and I might agree that one colour is redder than another without being able to say for sure which Pantone shade either one would match; this may or may not indicate the existence of something like a colour analogue for perfect pitch.
posted by flabdablet at 2:58 PM on June 4, 2023


Arguably, magenta is exactly a colour chord. It doesn't occur anywhere on any spectrum

Yes, there are colors not on the spectrum; but no, they are not perceived like chords. The frequency analysis of a color is lost and is not recoverable by the brain. You see magenta, and it could be produced by a literally infinite number of possible frequency distributions, and you cannot tell which one. That is in fact why our eyes can be fooled by four-color printing and three-color monitors.

Our ears, by contrast, do actual frequency analysis, which is available to consciousness. That's why you can hear both a high note and low note in a chord, or multiple voices in a room. For that matter, it's why you can distinguish the vowels from one another.
posted by zompist at 3:31 PM on June 4, 2023 [2 favorites]


“I see a 🎈 and you see a 🎈. There must be a 🎈 out there.”

Nope. If we both have the experience of seeing a balloon there must be at least one thing causing me to have that experience and at least one thing causing you to have the experience.

The cause could be in my mind for me, in your mind for you, external to our minds but different, external to our minds but the same, or some combination. This is not even thinking about whether our experiences are accurate or whether we actually have the same experiences despite our claiming to do so.
posted by oddman at 7:02 PM on June 11, 2023 [1 favorite]


no, they are not perceived like chords

I think "like" is doing enough heavy lifting there to give us grounds for agreeing to disagree.
posted by flabdablet at 8:15 PM on June 11, 2023




This thread has been archived and is closed to new comments