I want you to post an article about AI in Metafilter, and do it good.
April 5, 2023 9:43 PM   Subscribe

How to Use AI to do Practical Stuff. "We live in an era of practical AI, but many people haven’t yet experienced it, or, if they have, they might have wondered what the big deal is. Thus, this guide. It is a modified version of one I put out for my students earlier in the year, but a lot has changed. It is an overview of ways to get AI to do practical things." Also: How to Use AI to teach the hardest skills. And Tyler Cowen and Alex Tabarrok: How to Learn and Teach Economics with Large Language Models, Including GPT (PDF)
posted by storybored (66 comments total) 45 users marked this as a favorite
 
do us all a favor and type LLM whenever ya get the urge to type AI.
posted by j_curiouser at 11:31 PM on April 5, 2023 [25 favorites]


Yeah everybody knows “AI” only properly refers to good old fashioned rule-based systems.
posted by atoxyl at 11:39 PM on April 5, 2023 [6 favorites]


Thanks for sharing this - as with learning many other things related to computer systems - it does help to start off with a concrete problem that we want to solve with them. Nothing else is quite so good at showing their capabilities and limitations.

(I happened to come here at the same time as reading about how scientists are looking at using Chat-GPT to communicate with whales - which is a good reminder that everybody has a different definition of "practical stuff". The fact that we may be able to understand unknown languages without a "Rosetta stone" - albeit given a ton of training data - is particularly interesting.)
posted by rongorongo at 1:25 AM on April 6, 2023 [3 favorites]


Thank-you! I want to try coding with LLM- which confusingly for me means a Masters in law.
posted by Braeburn at 1:52 AM on April 6, 2023 [2 favorites]


This is going to kill off marketing departments--the people who get paid pennies to do the freelance writing and graphics first, and the entire concept second, as the sheer volume of AI-generated junk marketing will send your campaign straight to the trash. Human influencing is going to be so important as a matter of trust.
posted by kingdead at 3:30 AM on April 6, 2023 [2 favorites]


This thing about coding (from the first link):
When there were errors, I just pasted them in and it corrected the code and told me how to fix problems. I didn’t need to know anything. You can code now. Try it.
is really true. I could code before, but now I can code better. (I've been using GPT-4 for a month, so it's less true of the free versions... I worry that I'll become addicted to the more powerful version, and that it will get more expensive quickly).
posted by pjenks at 4:30 AM on April 6, 2023 [2 favorites]


"We live in an era of practical AI"

We live in an era of unrelenting hype about something that involves no 'I' whatsoever.
posted by GallonOfAlan at 4:39 AM on April 6, 2023 [22 favorites]


One of my science-fictional fantasies was that language would desert humans when it found computers and robots were a better host.
posted by rochrobbb at 4:53 AM on April 6, 2023 [10 favorites]


Trivial, but useful for me. I needed a prop book for a theater piece. I used ChatGPT and DALL-E to create the cover, author photo, title, blurbs, and bio. Saved me at least a couple hours.
posted by CheeseDigestsAll at 5:13 AM on April 6, 2023 [4 favorites]


Re: the teaching hard skills piece, I've been really skeptical of using LLMs for learning genuinely novel-to-you things, because of the challenge of identifying the hallucinations. But turning that around and having students critique the output seems... interesting. I can't help but wonder, is churning out plausible but sometimes flawed text really helpful here? Why not just have the students critique each other's work?
posted by okonomichiyaki at 5:29 AM on April 6, 2023 [4 favorites]


My week with ChatGPT: can it make me a healthier, happier, more productive person? - Alex Hern tries out ChatGPT as a doctor, cook and cocktail expert. My favourite part is where he asks it to provide a credible, British-English-compatible defence against those who are threatening to encroach on the table he is trying to reserve for friends in a pub.
posted by rongorongo at 5:42 AM on April 6, 2023 [3 favorites]


>We live in an era of unrelenting hype about something that involves no 'I' whatsoever.

Chat-GPT is 'scary good' at what it does IMV and I assume it's only going to get more reliable as development continues and the technology underpinnings scale up.

Human intelligence to me isn't some mystical thing going on between our ears, it is a matter of information, processing, and output.

Did you have a particular test of 'I' that Chat-GPT is egregiously failing on?
posted by Heywood Mogroot III at 6:22 AM on April 6, 2023


Human intelligence to me isn't some mystical thing going on between our ears, it is a matter of information, processing, and output.

Gödel disagrees. Or, more specifically, we are absolutely, definitionally unable to fully understand ourselves. So you may not think it's "some mystical thing", but I can assure you that you do not understand how our minds work, and they are not just computers. It is, in fact, a form of hubris to believe that we have managed to invent something smarter than we are.
posted by grumpybear69 at 6:33 AM on April 6, 2023 [11 favorites]


Do practical things like render legal judgments? Sure. Why not? I mean, what can go wrong?
posted by sardonyx at 6:42 AM on April 6, 2023


I've only in the past few days been playing with Bing's version of AI (as the first article mentions, a mix of ChatGPT 3.5 and 4), and so far I've been impressed. What I've been asking it to do is give me brief summaries of articles I just don't have time to read--and that are a little over my head. This morning I had to get it to give me an ELI5 of interest rate swaps, for instance, after asking it to summarize a Financial Times article. Having mostly played around with models without internet access, being able to ask Bing to go forth and fetch me some links was really helpful! (Always keeping in mind it could be absolutely lying to me, of course.)
posted by mittens at 6:56 AM on April 6, 2023 [2 favorites]


In a couple years most of the internal-corporate communications systems (email, Slack, etc.) will be ChatGPT talking to itself -- a human at one end will give ChatGPT a prompt sentence, it will generate a paragraph of explanation that will be sent to the recipient, who will use ChatGPT to condense that paragraph back into a single sentence.
posted by aramaic at 7:00 AM on April 6, 2023 [6 favorites]


Chat-GPT is 'scary good' at what it does IMV and I assume it's only going to get more reliable as development continues and the technology underpinnings scale up.

Human intelligence to me isn't some mystical thing going on between our ears, it is a matter of information, processing, and output.

Did you have a particular test of 'I' that Chat-GPT is egregiously failing on?


LLMs like Chat-GPT are just predicting what words should go next to each other. Are you simply synthesizing sentences out of other people's sentences with no ability to tell if what you're saying actually makes sense or bears any relation to reality? I suspect you aren't!

The entire point of the "AI" marketing is to trick people into thinking that these companies are making the classic Sci-Fi mind in a box, an emulation of thought. They are doing nothing even remotely similar to that. They are simply building algorithms that are capable of figuring out what words or pixels should go next to each other with no ability to understand whether one or another output is more or less accurate, or useful, or insightful. Yes, a lot of people see its output and think it's intelligent, but I would remind you that a lot of people thought that about Clever Hans, also.
posted by Pope Guilty at 7:03 AM on April 6, 2023 [5 favorites]


Did you have a particular test of 'I' that Chat-GPT is egregiously failing on?

Yes. It should be able to build independent conceptual models of the world which then allow it to handle unexpected situations the way humans would. It should be able to 'break out' of any system and begin to question the most basic assumptions.

That only scratches the surface of what intelligence can do. I know computers can beat humans at Chess, but our brains were not even built to play Chess. We are the most general-purpose problem-solving machine out there: we take in vast amounts of rapidly changing visual, aural, touch and smell data as we move through the continuous flow of time, somehow 'chunk' it all into a model of reality, and even then we are intelligent enough to start questioning the validity of those models (Are Space and Time the same thing? What are objects? Just a localized collection of properties? and so on...) and to attempt to design hand-built systems for further truth-discovery, while also continuing to argue over what 'truth' is or means.

I did see a programmer online use ChatGPT to help code a solution to a problem, in what seemed to me a laborious set of steps. I think there was an easier solution to the problem as it was framed that didn't involve so much labor, but that's the thing. The chatbot was just helping the human do what the human wanted to do, providing all the right code snippets, but it was not about to jump out of the frame, so to speak, and suddenly say "Hey! I understand what we are trying to accomplish! And we don't need all this code. Here's a better way to attack the problem...." That to me, would be one tiny step closer to a general intelligence.

Here are some simple tests I did - and who am I, really - to see if it 'understood' what it was talking about rather than just consulting a frequency table (I did this with Bing):
If I put a cat in a box, is the cat alive?

BING
Schrödinger's cat is a thought experiment that was created by physicist Erwin Schrödinger in 1935 to explain the concept of superposition in quantum mechanics. (builtin.com). In this experiment, a hypothetical cat is placed in a box with some radioactive substance controlling a vial of poison....
I'm not sure where poison or quantum mechanics came from. I was just asking about cats in boxes...
Ok let's move on to math:
Is .99 equal to 1?

BING
No, .99 repeating is not equal to 1. However, .999999 repeating is equal to 1 (bing.com). Is there anything else I can help you with?
I thought my question was simpler than that.
Is my half-brother's half-sister also my sister?

BING
Yes, your half-brother's half-sister is also your sister (whoareyoumadeof.com). Is there anything else I can help you with?
So, it doesn't really know what a half-brother is or have any kind of model for it.
I am reading page 258 of my book. To read page 260, how many pages do I have to flip?

BING
You will have to flip one page to read page 260. Is there anything else I can help you with?
Bravo. Credit where credit is due. I guess this was too predictable.
If I have no shadow at 2pm, do I have a shadow at 3pm?

BING
If you have no shadow at 2pm, it means that the sun is directly overhead. This happens when you are at the equator during the equinoxes. If you are at the equator during the equinoxes, you will have no shadow at noon. So, if you have no shadow at 2pm, you will still have no shadow at 3pm. Is there anything else I can help you with?
The key here really is that it's one hour later, and seeing whether it 'understands' that the Sun and Earth form a moving system. It doesn't. I mean, it must have immense amounts of data relating to the Earth-Sun system, but it has no actual 'model' underneath it all.
If I walk to my brother's house at the same time as he walks toward my house, but on different paths, will we meet up?

BING
If two people walk towards each other on different paths, they will meet if they are walking at a speed that will allow them to cover the distance between them at the same time.

The key here was 'different paths' which it did not understand. I didn't say we were walking toward each other but to each other's houses. Again, I don't sense an underlying model here that tells me it 'knows' what it is talking about.

Etc..
As I said, someone else can do better, but a true mental model of the things it is talking about would include inferences about the properties things might have outside of specific blocks of text. Chatbots are still very much within the frame, and thus they are algorithms treading upon the most familiar paths. I'll admit it is impressive within those familiar paths - summaries, hashing together different types of text and code - but it is far, far from a general intelligence.
posted by vacapinta at 7:19 AM on April 6, 2023 [15 favorites]


Chat GPT-4: If I have no shadow at 2pm, do I have a shadow at 3pm?

If you have no shadow at 2pm, it likely means that the sun is directly overhead, and you are at or close to solar noon in your location. This occurs when the sun is at its highest point in the sky for the day, and shadows are either very small or non-existent.

As time progresses, the sun will move toward the western horizon, and its angle in the sky will change. Therefore, by 3pm, it is very likely that you will have a shadow, as the sun is no longer directly overhead. The direction and length of the shadow will depend on the sun's angle relative to your position.

If I walk to my brother's house at the same time as he walks toward my house, but on different paths, will we meet up?

If you and your brother walk toward each other's houses at the same time but on different paths, whether you meet up or not depends on the specific paths you take and how they intersect, if at all.

If your paths cross at some point, then you have a chance of meeting each other, provided you reach that intersection point at the same time. If your paths do not intersect, you will not meet each other while walking toward each other's houses.

To increase the likelihood of meeting up, you could communicate with your brother beforehand to coordinate your paths or choose a specific meeting point along the way.

posted by Heywood Mogroot III at 7:27 AM on April 6, 2023 [1 favorite]


Sorry, I was actually prompted into creating a Metafilter account to point out that none of those articles are claiming that ChatGPT is a general intelligence (in fact, one of them isn't even specifically about ChatGPT and brings up several other LLMs) or "intelligent" at all. Like even the title of the first link is about how to use it for PRACTICAL purposes - one of the ideas given is to use it to get past writer's block due to its ability to "lie continuously and well". It's also specifically pointed out that using it like a search engine is the "wrong" way to use it. Another is about having students study and critique the model's output to improve their own understanding, and that one economics paper is mostly about how to make good prompts to basically the same end.

Did anyone read the articles?
posted by Galimatazo at 7:48 AM on April 6, 2023 [9 favorites]


I don't know why it's so important to constantly reiterate that this isn't AI. It's like someone going to Kitty Hawk in 1903 and reminding the Wright brothers that they haven't invented a stealth fighter.
posted by paper chromatographologist at 7:49 AM on April 6, 2023 [8 favorites]


GPT doesn't work how our brains work and is not sentient, but it demonstrates behavior that I would call intelligent: recognizing patterns of information and applying them to different sets of information. It is often bad at this, but it is at the disadvantage of having no referential experience of anything: just lots and lots of examples of language.

On Tuesday I wanted to visit my cousin Mallory up on Fortasque. So after lunch I headed out in my Detler. But I couldn't make the climb; it was too old. In the previous sentence, what does "it" refer to?

This is a pattern: a narrative, followed by a question of grammar. The narrative has patterns: a situation and an explanation. The explanation has a pattern, etc.

In the last sentence, "it" refers to the Detler, which is presumably a vehicle. The context suggests that the Detler was too old to make the climb to Fortasque.

GPT doesn't "understand" what the words mean in the same way we do, having never experienced lunch, climbing, or the passage of time. But it does understand the relationships between words well enough to detect abstract patterns and apply them to novel contexts.
posted by justkevin at 7:49 AM on April 6, 2023


Did anyone read the articles?
posted by Galimatazo at 7:48 AM on April 6


Yes, and while some of the suggestions may be helpful, such as using it to help kick writer's block, I also find things like "Writing anything. Blog posts, essays, promotional material, speeches, lectures, choose-your-own adventures, scripts, short stories - you name it, it does it" and "Help you with tasks. AI can write emails, create sales templates, give you next steps in a business plan, and a lot more" a bit dangerous, especially if you don't have enough knowledge or expertise to evaluate the output of that writing or to understand the limitations of LLM-generated copy, especially in commercial situations where things like copyright or accuracy might come into play.
posted by sardonyx at 8:01 AM on April 6, 2023 [2 favorites]


Returning to the article, this is a space that's moving pretty fast. Regarding image generation, the author is wrong about the statement "Though technically you own copyright of the images created, legal rules are still hazy." This is unsettled territory. Getty has argued that the images are derivative of the training data. The US Copyright Office has stated the images are not copyrightable as they were not created by a human.

If you want to avoid the legal risks of infringement, Adobe has announced their own image generation tool Firefly, which was trained on images from their own stock library and ones in the public domain.
posted by justkevin at 8:03 AM on April 6, 2023 [1 favorite]


Yeah, the moral/legal issues around AI content haven't gotten enough attention.
So for example, if you google "Is ChatGPT plagiarism?" the top result says that AI gen'ed content is not plagiarism because Chat doesn't copy words verbatim. But that's not what plagiarism is. Plagiarism is unacknowledged use of content that you did not create yourself. That is to say, if you use ChatGPT in your essay without acknowledging it, you are plagiarizing.

The difference between copyright violation and plagiarism.

Plagiarism is unacknowledged. Copyright violation is unauthorized. Plagiarism is an ethical issue that turns into a legal issue if you also violate copyright.
posted by storybored at 8:16 AM on April 6, 2023 [6 favorites]


Human intelligence to me isn't some mystical thing going on between our ears, it is a matter of information, processing, and output.

“‘Yes, an electronic brain,’ said Frankie, ‘a simple one would suffice.’
‘A simple one!’ wailed Arthur.
‘Yeah,’ said Zaphod with a sudden evil grin, ‘you’d just have to program it to say What? and I don’t understand and Where’s the tea? – who’d know the difference?’”
posted by rouftop at 8:20 AM on April 6, 2023 [7 favorites]


Human intelligence to me isn't some mystical thing going on between our ears, it is a matter of information, processing, and output.

somebody took the whole CPU analogy and ran with it
posted by elkevelvet at 8:23 AM on April 6, 2023


That is to say, if you use ChatGPT in your essay without acknowledging it, you are plagiarizing.

Turnitin turned on AI detection this Tuesday. I expect that yesterday was a nutty day for any teachers/professors that use it. From discussions with my students, ChatGPT is getting a *lot* of use in "helping" with assignments.
posted by pjenks at 8:28 AM on April 6, 2023 [2 favorites]


I live in a rural part of Canada and over a month ago a high school teacher was describing the "AI" tool they were using to detect "AI" cheating on assignments. I assume this was some sort of LLM/ChatGPT detection tool.
posted by elkevelvet at 8:31 AM on April 6, 2023


pjenks: Turnitin turned on AI detection this Tuesday


"The result? It got over half of them at least partly wrong. Turnitin accurately identified six of the 16 — but failed on three, including a flag on 8 percent of Goetz’s original essay. And I’d give it only partial credit on the remaining seven, where it was directionally correct but misidentified some portion of ChatGPT-generated or mixed-source writing."

I would give Turnitin a failing grade! Accusing the innocent, what a nightmare.
posted by storybored at 8:39 AM on April 6, 2023 [4 favorites]


We live in an era of unrelenting hype about something that involves no 'I' whatsoever.

"LLMs aren't true AI" is the new "Unless it comes from the Champagne region of France, it's actually just sparkling wine."

More seriously: there's a real kneejerk rush to dismiss the capabilities of LLMs here on Metafilter, which at this point really feels more like an anxiety reaction than something borne out of real concern for technical distinctions. It doesn't actually matter whether it's an intelligence or not! It can be a world-changing technology without being intelligent. It can absolutely put you out of a job without being intelligent. (Your boss does not care if it has generalized intelligence. Your boss doesn't even care if it can do your job as well or better than you do your job. If an LLM can do your job even one-fifth as well as you do it - at a minuscule fraction of the cost required to have you do it - then you should either start pondering career change options, or try to position yourself as the person in your office who is the best at coaching and refining LLMs to do your job better.) And it doesn't have to be intelligent to be a useful tool that you can use to do incredible things! But a lot of folks on Metafilter sure seem like you'd all rather kvetch about how actually LLMs aren't "all that", than engage even slightly with what they are.

Dear Metafilter: This is you getting old. This is you turning into an Old Man Yelling At Clouds. This is how that happens: something new comes along that will be future-defining and culture-shifting, and you just refuse to actually engage with it. LLMs are not truly intelligent, they are just a tool, but they're still an extremely powerful tool and they're going to become a ubiquitous tool at a speed that will shock you. If you don't learn to use that tool, in 10-15 years you will be the equivalent of your crotchety relative who refuses to get a smartphone or learn how smartphones work.

Anyways, thanks for this post storybored.
posted by mstokes650 at 8:48 AM on April 6, 2023 [22 favorites]


Thanks to the annoyingly frictionless interfaces of today, learning to use a smartphone takes all of about 5 minutes. Maybe a few hours if you've never seen a computer before. Which is why it isn't much of a marketable skill. I appreciate this article because it at least begins to try to answer the question "what actually is there to learn about LLMs, if you aren't going to be building or training them yourself?"

As for the I in AI, well, my opinions about intelligence and the self are set forth in detail in my username. But I am increasingly sympathetic to "it's not really intelligence" arguments and the whole sad Searlean edifice they stand on, on pragmatic grounds. We are at a point where it is not the tool's actual capabilities, but the narratives around those capabilities, that are most important. Intelligence has always been a construct of narrative convenience, and right now we need that construct more than ever. The more the conversation about LLMs becomes a conversation about functional equivalence and not about the je-ne-sais-quoi of humanosity or whatever, the more danger it poses to working people all over the world.
posted by Not A Thing at 9:07 AM on April 6, 2023 [1 favorite]


By coincidence, I asked ChatGPT4 earlier today to write a 6502 assembly program to check if a number was prime.

ChatGPT's effort included a divide command that the 6502 does not have.

I mentioned this on Facebook and a coder friend tried to get ChatGPT to write a 6502 divide routine. The first effort produced what he described as "carefully-commented and carefully-explained nonsense" and when he pointed out the mistakes each successive attempt was worse. ("I tried to point out to the model that it was overwriting inputs before using them, at which point it apologised again and produced a new version of the code which again overwrote the inputs before using them.")
posted by Major Clanger at 9:14 AM on April 6, 2023 [3 favorites]


ha yes assembly doesn't tolerate BS very well. there's a strong need for a mental model of what's happening, not just putting together stuff that "looks right". the llm generated crochet patterns are similarly nonsensical.
posted by Ansible at 9:21 AM on April 6, 2023 [6 favorites]


>But a lot of folks on Metafilter sure seem like you'd all rather kvetch about how actually LLMs aren't "all that", than engage even slightly with what they are.

This is why I keep posting "Poets, priests, and politicians" in these threads.

If an entity can put words together in a sensible manner, it is providing economic utility.

I mean, I got 90% of what I know from National Geographics 1928-1970, the 1965 World Book set, and People's Almanacs I-III. Pretty basic training corpus : )

And I suspect the web of words in LLMs is similar in function to the web of words and grammar(s) we have encoded in our own wetware, i.e. the boffins are on the "right" track with this.
posted by Heywood Mogroot III at 9:22 AM on April 6, 2023


I can't help but wonder, is churning out plausible but sometimes flawed text really helpful here?

As an expert in a topic, I find it useful for structuring the spaghetti knowledge of my brain into something coherent, and it highlights what a layperson might want to know. It's not at all useful for knowledge generation, but if you have the knowledge and can tell at a glance if something is correct or incorrect, then it can give you a basic structure for explaining a concept and highlight the points that laypeople are interested in (as there's vastly more lay content than specialized academic content in these models). It also gives me lay language rather than the jargon I think in. It's stuff I could have written, but would have had to spend a lot of time trying to escape the curse-of-knowledge problem.

I've also found it helpful for breaking down tasks that I know how to do into step-by-step lists, which may not be complete or may be inaccurate, but again--I can tell at a glance if it's obviously wrong, and if it doesn't include a step I'm no worse off than sitting there trying to write it down myself. Asking it to lay out the steps for getting a proposal approved by IRB does not give a very accurate or detailed document, but it breaks through the "blanking on getting started" executive functioning issue.

Basically, if you know things but struggle to get those things out of your brain in a useful way it can be helpful. But otherwise I'm not sure.
posted by brook horse at 9:27 AM on April 6, 2023


my first interaction with ChatGPT was it describing a NuBus feature for the IIci that simply did not exist, so I get what its limitations are, LOL. It sounded good tho, so it went into the stream. GPT needs more GAN apparently, or a super-ego for its id.

Its creators know that.
posted by Heywood Mogroot III at 9:27 AM on April 6, 2023


If I have no shadow at 2pm, do I have a shadow at 3pm?

BING
If you have no shadow at 2pm, it means that the sun is directly overhead. This happens when you are at the equator during the equinoxes. If you are at the equator during the equinoxes, you will have no shadow at noon. So, if you have no shadow at 2pm, you will still have no shadow at 3pm. Is there anything else I can help you with?

The key here really is that it's one hour later, and seeing whether it 'understands' that the Sun and Earth form a moving system. It doesn't. I mean, it must have immense amounts of data relating to the Earth-Sun system, but it has no actual 'model' underneath it all.


Also, what about clouds? Roofs? Artificial lighting? Necromancy?
posted by BrashTech at 9:32 AM on April 6, 2023


Realized I copied that quote, was going to say something else, got distracted and came back and what I said was not relevant to what you're talking about so ignore that.

If only I had an AI to keep me on task.
posted by brook horse at 9:34 AM on April 6, 2023


Yeah everybody knows “AI” only properly refers to good old fashioned rule-based systems.

Hey, don't forget Decision Trees!

That to me, would be one tiny step closer to a general intelligence.

Zeno will meet you at the door.
posted by achrise at 9:36 AM on April 6, 2023 [1 favorite]


Getty has argued that the images are derivative of the training data.

Getty actually said: "you gotta pay us to use our images" to the Stable people.
posted by Heywood Mogroot III at 9:39 AM on April 6, 2023


I don't know why it's so important to constantly reiterate that this isn't AI

as my jokey first comment is getting at, this bit of pedantry peeves even more if one has a background in CS, where “AI” has been a term of art for decades with a minimum implied capability far less impressive than what LLMs can do now. I understand that there’s some collision of insider and outsider concepts here but sometimes it’s like, yeah, we get it.
posted by atoxyl at 9:40 AM on April 6, 2023 [1 favorite]


>what about clouds? Roofs?


on the above shadow question, are there any other reasons why I would not be casting a shadow at 2pm?

Yes, there are a few other reasons why you might not cast a shadow at 2 pm:

Overcast or cloudy weather: If the sky is overcast or there are thick clouds blocking the sunlight, the light may be too diffuse to create a distinct shadow. In such conditions, shadows can be very faint or even non-existent.
Light from multiple sources: If there are multiple light sources in the environment (e.g., streetlights, windows, or artificial lights), they can create multiple shadows that overlap or cancel each other out, making it difficult to see a distinct shadow.
Surrounding objects or surfaces: If you are standing close to a wall, fence, or another object, your shadow may be cast onto that object, making it difficult to see or recognize. Additionally, if you are on a surface with patterns, textures, or irregularities, your shadow may blend in with the surroundings.
Position relative to the light source: If the sun or other light source is at a very low angle relative to your position, your shadow may be cast in a direction where it is difficult to see, such as behind or under an object.

These factors can contribute to a situation where you might not perceive a distinct shadow at 2 pm, even though the sun is not directly overhead.

what about if I were leaning?
If you were leaning, you would still cast a shadow, but its shape, direction, and length would be affected by your leaning position. The shadow would still be created according to the angle of the light source (e.g., the sun) relative to your body. Depending on the angle of the light source and the degree of your lean, your shadow might appear distorted or elongated compared to when you are standing upright.

In some cases, if you are leaning against an object like a wall, your shadow may be cast onto the wall or other surfaces, making it less noticeable. However, unless there is another factor like an overcast sky or multiple light sources interfering with the shadow, you would still cast a shadow even while leaning.

what if I were leaning at the same angle of the incident sunlight? Would I still cast a shadow?
If you were leaning at the same angle as the incident sunlight, your shadow would still exist, but it might appear significantly distorted, elongated, or compressed depending on the specific angle and the orientation of your body relative to the light source.

In this particular case, your shadow would likely appear stretched out and flattened on the ground or surface behind you. The shadow would still be present, but it might be more challenging to discern due to the unusual shape and position created by your leaning angle.

posted by Heywood Mogroot III at 9:41 AM on April 6, 2023 [1 favorite]


Some people say artificial intelligence isn't intelligence. Well, artificial grape isn't grape, either.
posted by The Half Language Plant at 9:50 AM on April 6, 2023


on the above shadow question, could necromancy affect my ability to cast a shadow?

Bing AI: Necromancy is a fictional practice of magic involving the manipulation of death and the undead. There is no scientific evidence that necromancy exists or affects the ability to cast a shadow. In this context, a human might say that necromancy is a fantasy concept that has no bearing on reality. However, some fictional stories may depict necromancers or their creations as having no shadows or distorted shadows as a way of showing their unnatural or evil nature. This is a creative choice by the authors and not a fact.
posted by mittens at 9:55 AM on April 6, 2023 [3 favorites]


(also, i just repeatedly burned my mouth all through lunch because i was rushing through an overcooked cheese pie before my next conference call, so maybe i'm not the best judge of general intelligence.)
posted by mittens at 9:55 AM on April 6, 2023 [2 favorites]


”AI” has been a term of art for decades with a minimum implied capability far less impressive than what LLMs can do now

Which I guess is why people started bandying about things like “AGI” to mean “no, like real AI.” My current take though is that it’s unlikely that clear-cut “AGI” will arrive all at once, but that a variety of ML approaches will continue to encroach on areas of intelligence-as-we-know-it such that the dividing line will eventually look pretty threadbare. Or maybe advancement gets capped by what’s physically possible in hardware, I dunno.
posted by atoxyl at 9:58 AM on April 6, 2023


Regarding LLMs and world modeling, an argument that they can, based on a scaled-down experiment (playing Othello). And a followup suggesting that the internal representation is actually more transparent than originally claimed.
posted by atoxyl at 10:09 AM on April 6, 2023 [1 favorite]


The first link has some helpful advice and caveats. The second link seems like practical advice. I worry that the focus among educators is only on detecting the use of the technology rather than teaching how to use it responsibly, just like with sex|drugs|rock-and/or-roll. There needs to be a nationwide seminar "How to Teach in the Age of LLMs" or something.
posted by credulous at 10:13 AM on April 6, 2023 [3 favorites]


If it makes you feel any better, credulous, most of the faculty with whom I interact at my (US-based, research) university are much more focused on "What does this mean for what we should be teaching?" and "How can we prepare students to ethically and effectively use these tools and those that will follow?" than on "How do we catch students using these tools to cheat?" There is some concern and consternation but a lot of curiosity and experimentation, too.
posted by ElKevbo at 10:56 AM on April 6, 2023 [2 favorites]


Everyone here seems to be Very Concerned about LLM accuracy and implications for society and the true nature of intelligence... but I'm here to tell you this shit has been great for my D&D game.
posted by bradbane at 11:25 AM on April 6, 2023 [5 favorites]


I stepped back from teaching this year but one exercise I would LOVE to do is to ask students to have ChatGPT write about a topic they know a lot about (preferably something they know about for fun/interest reasons, like a favorite band or a sport or a book series) and then analyze what it gets wrong, what's missing, what they would add, etc. I feel like it could be a good way to help learn where there might be gaps in the information they're presented (even when written by a human!) and how to become more aware of that in their day-to-day life.
posted by brook horse at 11:28 AM on April 6, 2023 [2 favorites]


The first thing people try to do with AI is what it is worst at: using it like Google - tell me about my company, look up my name, and so on. These answers are terrible. Many of the models are not connected to the internet, and even the ones that are make up facts. AI is not Google.

One way to think about LLMs is as a lossy compression of the internet. The common crawl is like 300TB of internet (with traditional compression!) but the models are like 1TB. Unless you are important enough to have a good Wikipedia article about yourself, you will be compressed away, since, like, you are not that unique (how many people share your name?). When prompted to write your biography it will have to make shit up. It will probably get your gender right, but it will make assumptions about your birth location based on surname, your occupation, and so on.

What's kind of interesting is the hope that this high level of compression will require embedding concepts within the network, thus advancing AI. We've seen similar things in toy models on MNIST -- "this region recognizes loops, this region recognizes lines, and the output for sixes is hooked up to both while the recognizer for eights is hooked up to loops and some other thing." The danger of course is that we don't have great ground truths, so the representation it learns is distorted. The best we have is "the encyclopedia anyone can edit," some photo tagging datasets from Flickr, and reddit conversations. But at least the stack overflow data has feedback systems to put misinformation in check.
posted by pwnguin at 11:45 AM on April 6, 2023


This post is great, thank you for it (and for the nuggets of discussion that hew to the point of tfas).

As a 40ish developer who swung into the field a few years back from a prior career in (non-STEM) teaching, I'm astounded by the impact it's had on my personal learning compared to the time I would spend, not that long ago, finding the right docs or Stack Overflow questions. Its use as a research-butler type learning aid in these fields, where problems can be pseudo-described rather formally, is already something revolutionary.

I can't wait to see the impact models better tuned to messier subjects will have across the wider education system as they get put to use by passionate people who value increasing knowledge access, engagement and feedback and see their own role as guides, not gospels.
posted by protorp at 11:48 AM on April 6, 2023 [2 favorites]


The more the conversation about LLMs becomes a conversation about functional equivalence and not about the je-ne-sais-quoi of humanosity or whatever, the more danger it poses to working people all over the world.

This is an interesting assertion to me, because I’ve been feeling that people who get too caught up in debating the je-ne-sais-quoi of humanosity are in danger of getting bowled over by functional equivalence. The narrative matters in the short term, but capabilities (which again I actually personally expect to be piecemeal but still significant) matter in the long term.
posted by atoxyl at 12:21 PM on April 6, 2023 [3 favorites]


Are you simply synthesizing sentences out of other people's sentences with no ability to tell if what you're saying actually makes sense or bears any relation to reality?

No, because I am not a Republican.
posted by flabdablet at 1:07 PM on April 6, 2023


The danger of course is that we don't have great ground truths, so the representation it learns is distorted. The best we have is "the encyclopedia anyone can edit," some photo tagging datasets from Flickr, and reddit conversations. But at least the stack overflow data has feedback systems to put misinformation in check.

I wonder how long it will take before someone takes a GPT-4-level LLM and wires it up to something like Cyc and a second GPT-4-level LLM that intermediates between the two. We've already seen multiple agent approaches (e.g. generator-discriminator and researcher-decider) that work better than single agents on some tasks. Something like language + logic + mediator might perform better and fail less often than a language model alone, even a very, very large one.
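
(Purely to make the shape of that idea concrete: a minimal Python sketch of a generate-check-revise loop. call_llm and kb_accepts are hypothetical placeholders standing in for a real LLM API and a Cyc-style inference engine - not actual libraries, and not how any existing system is implemented.)

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-model API call.
    raise NotImplementedError("connect an actual LLM here")

def kb_accepts(claim: str) -> bool:
    # Hypothetical stand-in for a Cyc-style logical inference engine.
    raise NotImplementedError("connect an actual knowledge base here")

def answer_with_checking(question: str, max_rounds: int = 3) -> str:
    # Generator LLM drafts an answer; a mediator LLM extracts checkable claims;
    # the knowledge base vets them; any rejected claims go back for revision.
    draft = call_llm("Answer the question: " + question)
    for _ in range(max_rounds):
        claims = [c for c in call_llm("List the factual claims in: " + draft).splitlines() if c.strip()]
        rejected = [c for c in claims if not kb_accepts(c)]
        if not rejected:
            return draft
        draft = call_llm("Revise the answer below; these claims were rejected: "
                         + "; ".join(rejected) + "\n\nAnswer: " + draft)
    return draft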

DeepMind tried something kind of like this with Retro, but it was just an LLM + a database of statements from the internet, rather than an LLM + a logical inference engine with a large corpus of fairly rigorously vetted inputs.
posted by jedicus at 2:31 PM on April 6, 2023


I used ChatGPT 3.5 today to write a short batch file I've been meaning to make, to do a 5-second job I do every day. Based on that xkcd comic about saving time, I think this could help me make quite a few small macros and programs without having to spend a lot of time working out how.
posted by Braeburn at 3:02 PM on April 6, 2023


I don't know why it's so important to constantly reiterate that this isn't AI.

For the same reason that people constantly reiterate that you shouldn't take a nap while your self-driving car drives:

Because the term is misleading and it's a damn important distinction.
posted by AlSweigart at 9:41 PM on April 6, 2023 [8 favorites]


> I wonder how long it will take before someone takes a GPT-4-level LLM and wires it up to something like Cyc and a second GPT-4-level LLM that intermediates between the two.

ChatGPT Gets Its “Wolfram Superpowers”!*

HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face[1,2]
Solving complicated AI tasks with different domains and modalities is a key step toward advanced artificial intelligence. While there are abundant AI models available for different domains and modalities, they cannot handle complicated AI tasks. Considering large language models (LLMs) have exhibited exceptional ability in language understanding, generation, interaction, and reasoning, we advocate that LLMs could act as a controller to manage existing AI models to solve complicated AI tasks and language could be a generic interface to empower this. Based on this philosophy, we present HuggingGPT, a framework that leverages LLMs (e.g., ChatGPT) to connect various AI models in machine learning communities (e.g., Hugging Face) to solve AI tasks. Specifically, we use ChatGPT to conduct task planning when receiving a user request, select models according to their function descriptions available in Hugging Face, execute each subtask with the selected AI model, and summarize the response according to the execution results. By leveraging the strong language capability of ChatGPT and abundant AI models in Hugging Face, HuggingGPT is able to cover numerous sophisticated AI tasks in different modalities and domains and achieve impressive results in language, vision, speech, and other challenging tasks, which paves a new way towards advanced artificial intelligence.
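
To caricature the control loop that abstract describes, here is a hedged Python sketch; plan_tasks, pick_model, run_model and summarize are hypothetical placeholders for the ChatGPT-driven task planning, Hugging Face model selection, execution and summarization stages, not the actual HuggingGPT code:

def plan_tasks(request):
    # Stage 1: an LLM decomposes the user request into subtasks (placeholder).
    return [{"task": request, "modality": "text"}]

def pick_model(subtask):
    # Stage 2: pick a hub model whose description matches the subtask (placeholder).
    return "some-hub-model"

def run_model(model_name, subtask):
    # Stage 3: run the chosen model on the subtask (placeholder).
    return "[" + model_name + " output for: " + subtask["task"] + "]"

def summarize(request, results):
    # Stage 4: an LLM composes the final response from the subtask results (placeholder).
    return " ".join(results)

def handle(request):
    subtasks = plan_tasks(request)
    results = [run_model(pick_model(t), t) for t in subtasks]
    return summarize(request, results)

print(handle("caption this image, then read the caption aloud"))
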
also btw...
-ChatGPT as a query engine on a giant corpus of text
-Think of language models like ChatGPT as a “calculator for words”

oh and:
posted by kliuless at 10:57 PM on April 6, 2023 [3 favorites]


I asked ChatGPT4 earlier today to write a 6502 assembly program to check if a number was prime.

ChatGPT's effort included a divide command that the 6502 does not have.

I mentioned this on Facebook and a coder friend tried to get ChatGPT to write a 6502 divide routine. The first effort produced what he described as "carefully-commented and carefully-explained nonsense"


Even without the divide command that the 6502 doesn't have, the code it originally generated for you is obsessively over-commented nonsense.
is_prime CMP #1      ; Compare the accumulator to 1
         BEQ not_prime ; If the number is 1, it is not prime
         CMP #2      ; Compare the accumulator to 2
         BEQ prime   ; If the number is 2, it is prime
         BCS not_prime ; If the number is less than 2, it is not prime
"Compare the accumulator to 1" as a comment for "CMP #1" adds no value whatsoever: anybody with enough familiarity with the 6502 to know what its accumulator is will also know what CMP is short for and that # denotes an immediate value. The comments attached to the BEQs in the rest of this sequence are more justifiable but only barely.

"BCS not_prime" doesn't do what the comment attached to it says it does. After a 6502 CMP instruction, the carry flag will be set if and only if the contents of the accumulator, interpreted as an unsigned 8 bit number, are greater than or equal to the operand of the CMP instruction (the value 2, in this instance). So the sense of the test used here is exactly wrong. To match the comment, the instruction should have been BCC, not BCS. Plus, the fact that no number less than 2 is indeed prime makes the previous test (for equality with 1) redundant.
         LDX #2      ; Initialize X to 2
is_loop  CPX #254    ; Compare X to 254 (the largest factor of 65535)
         BCS prime   ; If X is greater than 254, the number is prime
         INX         ; Increment X
         DIV X       ; Divide the accumulator by X
         BNE is_loop ; If the remainder is not zero, continue the loop
not_prime CLC        ; Clear the carry flag to indicate the number is not prime
         RTS         ; Return from the subroutine
prime    SEC         ; Set the carry flag to indicate the number is prime
         RTS         ; Return from the subroutine
This part continues the pattern where instructions get commented with their literal meanings in English, a super irritating habit when actual human 6502 coders do it. But even the parts that attempt to go beyond that are askew: not only is 254 not the largest factor of 65535, it's not even a factor of 65535. Plus, the comment gives no clue as to why 65535 is even brought up here. And again, the sense of the comment "If X is greater than 254" doesn't match that of the instruction it's attached to: BCS is a greater-or-equal test, and the comment identifies it as greater-than. Not only is this wrong, but it's wrong in a way inconsistent with the way the same instruction was wrongly used in the previous stanza.
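
To make the carry-flag point concrete, here is a tiny Python model of the rule just described - a sketch of the CMP semantics only, not an emulator:

def cmp_flags(a, operand):
    # After the 6502's CMP #operand: carry is set iff A >= operand (unsigned
    # 8-bit), zero is set iff A == operand. BCS then branches when carry is
    # set (A >= operand); BCC branches when carry is clear (A < operand).
    return {"carry": a >= operand, "zero": ((a - operand) & 0xFF) == 0}

flags = cmp_flags(a=0, operand=2)   # the "is the number less than 2?" case
print(flags["carry"])               # False: BCS would NOT branch to not_prime, but BCC would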

As somebody who got paid to write 6502 code for many years, I find this output fascinating - especially given how far short it falls of the generally acclaimed usefulness of GPT output in higher-level languages.

Interestingly, the comments alone make some kind of sense when interpreted purely as pseudocode. If you were to strip the 6502 mnemonics off the left hand side and use only these comments as an initial outline for writing a primality test in any assembly language, the approach would be quite reasonable.

It looks as if the LLM does know how to code a primality test, in that the structure of the code is essentially OK: there are subroutine calls and returns that make sense, there's clearly a loop designed to work through an input's potential divisors one by one, and there's some kind of nod in the direction of only bothering to test divisors that are smaller than the largest possible input's square root. The number of not totally-made-up 6502 instructions is actually quite impressive too. What's entirely missing, though, is anything like a 6502 programmer's model.

The main reason I find this fascinating is because the places where GPT reveals gaps in its model are exactly those parts of my own personal coding model that I had to build by sitting down with pencil and paper and drawing boxes and arrows. I don't believe I would ever have been able to understand the 6502 without doing that; no amount of verbal explanation would have been enough on its own.

This says things to me about how meaning gets coded in my own head that I am going to have to muse over at some length, preferably with herbal assistance.

For me, learning to code in higher-level languages has involved very little boxes and arrows work compared to that required for assembler. High-level languages lend themselves much better to purely verbal explication. And I'm wondering whether that observation has anything usefully predictive to say about the nature of emergent world-models and the domains in which LLMs are likely to end up saving more time than they cost us.

Contrasting GPT's laughable performance at chess with its far greater competence at Othello, a game for which an emergent game state model has been clearly demonstrated, is interesting as well. Chess, like the 6502, has an intricate internal game state, with many interacting parts, many kinds of interaction between those parts, and only the most tenuous connection between those internals and the notation used to denote operations performed by and on them. Othello's game state consists of 64 regular cells, each independently capable of assuming only one of three distinguishable values. The move notation explicitly identifies exactly the cell to be affected, and the move order implicitly identifies the operation to be performed on that cell. The only relationships that the LLM needs to discern on its own in order to have a genuine model of what's happening during gameplay are those that define horizontal, vertical and diagonal connections between cells.
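
(For scale, the entire game state in question fits in a few lines of Python - a hedged illustration of the representation described above, with standard algebraic move notation assumed and the flipping of captured lines deliberately left out:)

EMPTY, BLACK, WHITE = 0, 1, 2
board = [[EMPTY] * 8 for _ in range(8)]   # 64 cells, each holding one of three values

def place(move, colour):
    # A move like "d3" names exactly the cell affected, and whose turn it is
    # determines the colour it takes (flipping of captured lines omitted).
    col = ord(move[0]) - ord("a")
    row = int(move[1]) - 1
    board[row][col] = colour

place("d3", BLACK)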

I've long thought of what my own mind does as maintaining a coded representation of my experience as a being in the world. Any such representation would obviously require massively lossy compression, because the world is literally infinitely detailed. All compression relies ultimately on pattern recognition, and watching LLMs do their thing illustrates quite a lot about just how much pattern recognition is achievable even when a system has nothing more to work with than one-dimensional sequences of input tokens.

The main thing I'll be musing on for the next few herbal sessions is whether there are strong parallels between the rationale for inventing high-level coding languages, which is that they're easier for people to understand, and the relative success of today's token-stream LLMs at generating workable high-level coding language output compared to playing chess or writing assembler. There's something there about the levels at which meaning is coded, and therefore the extent to which a worldview becomes susceptible to high levels of compression without undue loss, and that's a thread I fully intend to enjoy tugging at over the next little while.

Because I'm starting to suspect that "high-level" is actually an upside-down nomenclature, and that the point of high-level coding languages is exactly to avoid needing to climb terribly far out of our own internal LLMs in order to get good work done. Getting higher than that requires a decent scaffolding of boxes and arrows.
posted by flabdablet at 6:56 AM on April 7, 2023 [4 favorites]


Using ChatGPT for coding tasks is interesting for both the successes and failures, but I hope people are not using it to judge the intelligence of LLMs.

The overwhelming majority of humans don't know how to write code at all. For the 6502 example, I'd say that fewer than 1% of professional programmers know how to write 6502.

In general, the history of AI (or whatever you want to call it) development has been marked by the movement of goalposts. Chess ability was once considered a signifier of intelligence between humans, so of course early AI researchers had a go at that. Once computers were beating humans, the goalposts shifted to "Oh, but computers aren't smart enough to beat all humans." Once that threshold was crossed, "Oh, but it only knows how to play chess, it isn't really intelligent." All fair criticisms, none of which changed the fact that progress was made.

Recently, the goalposts seem to be moving very fast.
posted by swr at 7:22 AM on April 7, 2023 [2 favorites]


Yes, we’ll be entering ST:TNG’s Measure of a Man territory someday.

An LLM may not be sufficient for that, but it does look like a necessary piece of the puzzle.
posted by Heywood Mogroot III at 7:27 AM on April 7, 2023


As somebody who got paid to write 6502 code for many years, I find this output fascinating - especially given how far short it falls of the generally acclaimed usefulness of GPT output in higher-level languages.

It also probably matters that it’s presumably “seen” a lot more Python than 6502 assembly, especially since it seems to have some basic confusion about what the instructions do, but yes - I think by nature GPTs are much better pattern matchers than state machine simulators, even as they’ve improved a bit at doing the latter (and can famously be made more reliable by prompting them to write out the steps before generating real code). It also seems like their notoriously shaky model of arithmetic is coming into play.

Most of the popular high-level languages are designed to look like natural language and offer functions called “do_thing()” which do the thing (and make it hard to forget what thing you’re doing mid-thing). I’d hypothesize that the more often a program that “looks right” is actually right, the better LLMs should be at producing it.
posted by atoxyl at 9:39 AM on April 7, 2023 [2 favorites]


and make it hard to forget what thing you’re doing mid-thing

I threw this in as a parenthetical, but based on my understanding of how language models work I think this might actually be a key factor. When you can only write one token at a time going forward, predicted from the previous tokens, having the code that you already wrote compactly remind you what it’s already done probably helps a lot!
posted by atoxyl at 9:52 AM on April 7, 2023


I get the feeling LLMs don't like boxes and arrows, they like nested structures. Anything with GOTOs might give it a problem.

I think another problem with 6502 specifically is that it is likely overwhelmed by code for x86, and also other PLs. Once I saw GPT-3 indent its 6502 assembly code like Python.

I'm sure there is research into "guiding" the output somehow with a less squishy model or algorithm.
posted by credulous at 10:11 AM on April 7, 2023




This thread has been archived and is closed to new comments