More on AI and the Future of Work
April 2, 2023 7:42 AM   Subscribe

Thinking About AI - "So where do I think we are? At a place where for fields where language and/or two dimensional images let you build a good model, AI is rapidly performing at a level that exceeds that of many humans."
But when building the model requires input that goes beyond language and two dimensional images, such as understanding three dimensional shapes from three dimensional images (instead of inferring them from two dimensional ones), then the currently inferred models are still weak or incomplete. It seems pretty clear though that progress in filling those in will happen at a breathtaking pace from here.

Since this is getting rather long, I will separate out my thoughts on where we are going next into more posts. As a preview, I believe we are now at the threshold to artificial general intelligence, or what I call “neohumans” in my book The World After Capital. And even if that takes a bit longer, artificial domain-specific intelligence will be outperforming humans in a great many fields, especially ones that do not require manipulating the world with that other magic piece of equipment we have: hands with opposable thumbs. No matter what, the stakes are now extremely high and we have to get our act together quickly on the implications of artificial intelligence.
---
  • Sabine Hossenfelder: I believe chatbots understand part of what they say. Let me explain - "I used to think that today's so-called 'artificial intelligences' are actually pretty dumb. But I've recently changed my mind. In this video I want to explain why I think that they do understand some of what they do, if not very much. And since I was already freely speculating, I have added some thoughts about how the situation with AIs is going to develop."
  • Does GPT-4 Really Understand What We're Saying? - "One question for David Krakauer, an evolutionary theorist and president of the Santa Fe Institute for complexity science."
  • @leopoldasch: "It's definitely not just memorizing, it's learning to think and reason. Probably the most important thing happening in the world right now."
  • @scienceisstrat1: "The AI revolution 🤖 is at least 70 years in the making. Slowly, slowly, and now very quickly."
---

Thinking About AI: Part 2 - Structural Risks - "Structural risk is about AI aggravating existing problems, such as wealth and power inequality in the world."
I am sympathetic to those who point to structural risks as a reason to slow down the development of these new AI systems. But I believe that for addressing structural risks the better answer is to make sure that there are many AIs, that they can be controlled by end users, that we have programmatic access to these and other systems, etc. Put differently, structural risks are best addressed by having more artificial intelligence with broader access... the focus of regulation should be avoiding a situation where there are just a few huge models that have a ton of market power.
Artificial General Intelligence and how (much) to worry about it - "Presenting the Strange equation, the AI analogue of the Drake equation."[1,2,3]
The first problem is how to break the problem down. Because ultimately a lot of the ways in which AI makes everyone grey goo, or sets off all nuclear bombs, or creates a pathogen, assume a level of sophistication that basically begs the answer. So I looked at where we’ve grappled with the ineffable before, and tried to somehow break down our uncertainty about the existence of a thing, in this case a malevolent entity that an uncomfortably large percentage of the population are happy to call a demon.

We’ve looked at calculating the incalculable before. Most famously, in 1961, the astrophysicist Frank Drake asked how many extra-terrestrial civilisations there could be. To answer this, he created his eponymous equation...
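[For reference, the Drake equation estimates the number of detectable civilisations as the product of seven factors: N = R* · fp · ne · fl · fi · fc · L, i.e. the rate of star formation, the fraction of stars with planets, the number of potentially habitable planets per such star, the fractions of those on which life, intelligence, and detectable communication arise, and the average time a civilisation remains detectable.]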

I did start writing this pretty heavily in the “AI safety concerns seem silly” camp, though just enumerating the variables has made me vastly more sympathetic to the group. I still think there’s an insane amount of Knightian uncertainty in the ways in which this is likely to evolve. I also think there’s almost no chance that this will get solved without direct iterative messing about with the tools as they’re built...

If we bring it up like the child in Omelas, life could get bad. So let’s not.
---
  • ChatGPT Gets a Computer - "The old brain Hawkins references is our animal brain, the part that drives emotions, our drive for survival and procreation, and the subsystems of our body; it's the neocortex that is capable of learning and thinking and predicting. Hawkins' argument is that absent the old brain our intelligence has no ability to act, either in terms of volition or impact, and that machine intelligence will be similarly benign; the true risk of machine intelligence is the intentions of the humans that wield it."
  • ChatGPT Gets Its 'Wolfram Superpowers'! - "Wolfram plugin gives computationally accurate answers to ChatGPT queries using Wolfram's data knowledgebase and language."
  • Existential risk, AI, and the inevitable turn in human history - "How well did people predict the final impacts of the printing press? How well did people predict the final impacts of fire? We even have an expression 'playing with fire.' Yet it is, on net, a good thing we proceeded with the deployment of fire ('Fire? You can't do that! Everything will burn! You can kill people with fire! All of them! What if someone yells 'fire' in a crowded theater!?')... Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine. All the possibilities are distant, I cannot stress that enough. The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much."[4]
  • @Noahpinion: "We've been living in a world of truly radical technological change for over a decade now."
---

Why Americans fear the AI future - "Five decades of rising inequality and slow growth have made us a less optimistic country."
Why? Is it because earlier waves of technological innovation happened at a time when the U.S. was a rising power, and now we’re in decline while China rises? Perhaps. But I think there’s another factor at work here — Americans have spent much of the last four decades watching technology fail to produce the kind of rapid, broad-based income growth that it had in the past. That made them think of their economy as more of a zero-sum proposition, and it made them fear that any new technology will simply create more winners and more losers.

And that has blinded them to the possibility that AI might reverse this dynamic...

Because LLMs are great at summarizing the broad corpus of human written knowledge, they can also act as a how-to manual for any task that can be explained in writing — much like YouTube acts as a how-to manual for any task that can be demonstrated with a short video. LLMs won’t just explain how to do things, either; they’ll draw on the collective accumulation of human wisdom to offer ideas on how to solve problems.

These functions aren’t just about allowing highly trained coders or content producers to become even better at their high-paying jobs. Both the “machine tool” function of AI and the “how-to manual” function could easily allow average workers to do things that currently only very smart and highly-trained workers can do — much as sewing machines allowed regular people to make stitches as close and even as those produced by a master seamstress...

People using AI for this task become more optimistic about AI.
Society's Technical Debt and Software's Gutenberg Moment - "This framing—grammar vs predictability—leaves us convinced that for the first time in the history of the software industry, tools have emerged that will radically alter the way we produce software."[5,6,7]
There is immense hyperbole about recent developments in artificial intelligence, especially Large Language Models like ChatGPT. And there is also deserved concern about such technologies’ material impact on jobs. But observers are missing two very important things:
  1. Every wave of technological innovation has been unleashed by something costly becoming cheap enough to waste.
  2. Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt.
This technical debt is about to contract in a dramatic, economy-wide fashion as the cost and complexity of software production collapses, releasing a wave of innovation...

In the limit, we think about this moment as being analogous to how previous waves of technological change took the price of underlying technologies—from CPUs, to storage and bandwidth—to a reasonable approximation of zero, unleashing a flood of speciation and innovation. In software evolutionary terms, we just went from human cycle times to that of the drosophila: everything evolves and mutates faster...

A software industry where anyone can write software, can do it for pennies, and can do it as easily as speaking or writing text, is a transformative moment. It is an exaggeration, but only a modest one, to say that it is a kind of Gutenberg moment, one where previous barriers to creation—scholarly, creative, economic, etc—are going to fall away, as people are freed to do things only limited by their imagination, or, more practically, by the old costs of producing software.
---
  • @Noahpinion: "LLMs are to software as machine tools are to hardware!"
  • AI, Mass Evolution, and Weickian Loops - "You are seeing, for the first time, real-time, mass evolution, as people and groups are de facto creating increasingly complex applications right in front of you... It is faster and more democratic, given the collapse of de facto coding costs, as well as the disappearance of just year-ago barriers to participating in this iterative process."
  • @paul@paulkedrosky.com: "The future is all bullshit jobs, all the way down."[8]
  • @eyesquash@mastodon.world: "Obvious. UBI, and universal healthcare, would empower people to work only for employers who treat them well. That's why they don't exist."
---

@peternixey: "I'm in the top 2% of users on StackOverflow. My content there has been viewed by over 1.7M people. And it's unlikely I'll ever write anything there again."
What happens when we stop pooling our knowledge with each other & instead pour it straight into The Machine? Where will our libraries be? How can we avoid total dependency on The Machine? What content do we even feed the next version of The Machine to train on?

When it comes time to train GPTx, it risks drinking from a dry riverbed. Because programmers won't be asking many questions on StackOverflow. GPT-4 will have answered them in private. So while GPT-4 was trained on all of the questions asked before 2021, what will GPT-6 train on?
---
  • @arclight@oldbytes.space: "Fun fact: Nuclear physics researchers prize metal cast prior to 1945 because it predates nuclear weapons and is not contaminated with trace amounts of fallout. Less background radiation means clearer reception of the signals they care about. Pre-1945 metal is 'low noise' but for radiation. Which year will be the cutoff for GPT contaminated images and text?"
  • @interfluidity@fosstodon.org: "The social prerequisite for technological dynamism is a universalist welfare state that reduces the coupling between fluctuating labor income and human thriving."
---

What To Do About AI-Driven Job Displacement - "The policy solution to unemployment remains the same."
Every time a new technology or a new economic proposal requires a significant reallocation of labor — such as the phasing out of fossil fuels or the creation of a universal public health insurance system — we see this exact same discourse. It’s a strange discourse in that it never really changes but we also treat every instance of it as if it is a novel thing that requires a novel response.

The problem of job reallocation is a general one that is rooted in the fact that our national income is primarily doled out using factor payments to labor and capital. When people lose their jobs, their labor factor payments are reduced to $0 until such time as they find a new job. Finding a new job may also be difficult if the kinds of jobs a person is capable of doing are shrinking in number due to technological shifts. Once a new job is found, the factor payments that flow to it may also be significantly lower than the factor payments that flowed to the person’s prior job.

This constellation of issues is not constrained to a small group of people nor is it constrained to unique mass job loss events. In a typical year, there are around 20 million unemployment events in the US, defined as the number of times a person went from employed in one month to unemployed in the next month.

The solution to this problem is fairly straightforward and well-known. Give unemployed people cash benefits to replace their prior wages. Give them job search and retraining assistance to help them find new work. And, possibly, give them wage insurance to ensure that, if they do end up reallocated into a lower-paying job, they can receive benefits to make up for some of the difference, at least for some time...

My writing frequently comes back to this same point, but that's because it really is the solution to so much of the economic distress people experience and worry about in the US. An egalitarian society with a robust welfare state creates the kind of stability and minimum living standards that take the financial sting out of things like unemployment, labor reallocation, disability, old age, and having children. We need this kind of society at all times, not just during splashy labor reallocation events like the one LLMs might cause.
posted by kliuless (110 comments total) 59 users marked this as a favorite
 
"Which year will be the cutoff for GPT contaminated images and text?"

Many art websites are already overrun by AI generated images.

I wonder, though - some training models are adversarial in nature, where the model iteratively trains itself without reference to human inputs. For example, an AI can learn to play Chess or Dota by playing against itself, not against humans. It's not trying to play in a more human manner, it's trying to find the best way to win.
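For the curious, the core loop is tiny. A hypothetical sketch of tabular self-play learning on a toy take-1-to-3-sticks game (nothing from the actual Chess or Dota systems):

```python
import random
from collections import defaultdict

# Toy self-play: tabular learning on the 21-stick game (players alternate
# taking 1-3 sticks; whoever takes the last stick wins). The agent improves
# with no human games at all, only by playing against itself.
Q = defaultdict(float)            # Q[(sticks_left, take)] -> value estimate
ALPHA, EPSILON, GAMES = 0.1, 0.1, 200_000

def choose(sticks):
    moves = [a for a in (1, 2, 3) if a <= sticks]
    if random.random() < EPSILON:
        return random.choice(moves)                      # explore
    return max(moves, key=lambda a: Q[(sticks, a)])      # exploit

for _ in range(GAMES):
    sticks, history = 21, []
    while sticks > 0:             # both "players" are the same policy
        move = choose(sticks)
        history.append((sticks, move))
        sticks -= move
    reward = 1.0                  # the last mover won; credit moves backwards
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward          # zero-sum: the other player lost

# With enough games this typically finds the known optimal opening for
# 21 sticks: take 1, leaving the opponent a multiple of 4.
print(max((1, 2, 3), key=lambda a: Q[(21, a)]))
```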

I watched a documentary about an orchid that exploited a certain wasp species. It grew a flower that mimicked a female wasp and even exuded chemicals that mimicked the wasp mating pheromone. Male wasps mated with the orchid, thus carrying its pollen to other orchids. They showed footage of 5 male wasps competing to mate with the orchid while an actual female wasp nearby was ignored.

Male wasps wasting so much sperm on orchids naturally has a negative impact on their reproductive success. Female wasps in response evolved the ability to reproduce asexually, but their offspring are all male, because if 90% of male wasps are tricked by orchids you need more of them to ensure some females are fertilized.

Well, what this orchid is doing, it's not just content to mimic a female wasp, it's surpassing it in attractiveness by an order of magnitude. I think that's what general AI will do to us.

A pessimistic view is that humans will live like wasps in the not so far future...
posted by xdvesper at 8:06 AM on April 2, 2023 [2 favorites]


I just watched that wasp video: the male wasp ends up mating with the female wasp after being confused by the orchid.

But also: humans are not wasps.
posted by grumpybear69 at 8:12 AM on April 2, 2023 [6 favorites]


It's definitely not just memorizing, it's learning to think and reason.

GPT-4, can you give me an example of begging the question?

I think it's in the example of the "common sense" solution to a problem with eggs that we see a real confusion between wordplay--or, more charitably, second-hand knowledge--and what we mean by common sense.

If I have never seen an egg, never handled one, but have only read about them in books, I too could offer you a way to line up eggs to distribute weight evenly, but I'm not sure you could call that common sense, because of the million ways it could go wrong. I might not have read that some eggs come out of the carton with cracks, for instance. I might have learned about eggshell strength from an article with a misprint. I would not have any first-hand knowledge of eggs.

What GPT-4 is doing is impressive in the same way that a medieval scribe could make an impressive bestiary of animals he had never seen, from faraway lands he only knew from his reading. Great for drawing, great for twitter feeds that want to show you a weird medieval guy, not so great for biology and safaris.

There's a cottage industry in telling us how great (and terrible) these models are, and how they're going to change the world, and somehow this industry is producing even more words than the language models themselves, endless words, congratulatory epics. Here we are, finally, as close to AGI as we are to fusion! But I wish we could scrape all that away and ask other questions, like...where are the bodies and the senses, the instincts and emotions, upon which our irrational but necessary values are built? Because there's no reasoning without those first-hand values. Why are none of these guys pouring models into robots with eyes and skin and fear?
posted by mittens at 8:14 AM on April 2, 2023 [22 favorites]


As a preview, I believe we are now at the threshold to artificial general intelligence, or what I call “neohumans” in my book The World After Capital.

No.
posted by Going To Maine at 8:14 AM on April 2, 2023 [34 favorites]


(I believe we are at the threshold of a lot of people mistaking these machines for artificial general intelligence because they talk pretty. See Lemoine, Blake)
posted by Going To Maine at 8:15 AM on April 2, 2023 [20 favorites]


Do we really need a blog post by a partner at Union Square Ventures, whose portfolio includes a shitload of Web3 and crypto stuff? I'm not saying AI isn't doing neat things, but this is literally one of the last people whose opinions about "the future of technology" I'd respect or trust.
posted by Tom Hanks Cannot Be Trusted at 8:25 AM on April 2, 2023 [30 favorites]


I think whether there exists such a thing as “general intelligence” is a pretty open question. Our brains appear to be a bunch of domain-specific problem solvers bunched together. One of those domain specific problem solvers creates a sense of self, but that “self” can be unbundled pretty easily by meditation or the right psychedelic.
posted by leotrotsky at 8:28 AM on April 2, 2023 [7 favorites]


Unbundled, or changed?
posted by ashbury at 8:34 AM on April 2, 2023


Sorry, I'm reading through a lot of these links, and they feel like bullshit to me. For instance, this one has this amazing clunker of a line:
For my entire life, and a bit more, there have been two essential features of the basic landscape:

1. American hegemony over much of the world, and relative physical safety for Americans.

2. An absence of truly radical technological change.
It's absolutely astounding to make the argument that AI is a breakthrough in a way that literally nothing else to come out of computing has been, especially since AI is itself a byproduct of, I mean, computing.

We've also got a link to an excitable tweet with a link to a 160-page paper, the contents of which are relevant to said tweet in the sense that it's a guy excitedly going on about AI... a couple of different @Noahpinion tweets... another post by a VC firm quietly arguing that it's time to replace software engineers...

The nicest thing I can say about this post is that its signal-to-noise ratio feels less-than-great. Is this really the best of the web?
posted by Tom Hanks Cannot Be Trusted at 8:40 AM on April 2, 2023 [14 favorites]


I don’t know if these have been in the discourse, but I liked them:

Bill Gates, writing in his blog: “The Age of AI Has Begun”

Yuval Harari, Tristan Harris, and Aza Raskin in the New York Times: ”You Can Have the Blue Pill or the Red Pill and We’re Out of Blue Pills” (gift link)
posted by Going To Maine at 8:55 AM on April 2, 2023 [2 favorites]


> Here we are, finally, as close to AGI as we are to fusion! But I wish we could scrape all that away and ask other questions, like...where are the bodies and the senses, the instincts and emotions, upon which our irrational but necessary values are built? Because there's no reasoning without those first-hand values.

for me, how close/far AGI is and whether it requires embodied cognition and the like, while interesting, is kinda beside the point here. i like how sabine hossenfelder reframes the debate with What Does "Understanding" Mean? and then refocusing on AIs' capabilities and uses. (here she is on fusion ;)

> Do we really need a blog post by a partner at Union Square Ventures, whose portfolio includes a shitload of Web3 and crypto stuff?

i dunno, wenger is also a UBI advocate and contains multitudes. i find him thoughtful, ymmv?

> It's absolutely astounding to make the argument that AI is a breakthrough in a way that literally nothing else to come out of computing has been, especially since AI is itself a byproduct of, I mean, computing.

fwiw, there was a rebuttal immediately following...
posted by kliuless at 8:56 AM on April 2, 2023 [5 favorites]


Professionally, I was a systems analyst / system integrator by trade. I figure out how things work, then figure out how to make them work together. I am here today to tell you that all the pontificating about sapience and intelligence is irrelevant. The singularity is here, it has arrived, the exponential slope is starting.

A business is presented with a tool that will perform say 80% of a human's job for say 20% of the cost, at ten times the speed. That business is either going to take that opportunity, or it is going to be outcompeted by a business that takes that opportunity.

I will give you a tiny example. Online advertising systems can be hooked to large models to generate custom pitches that are tailored exactly to appeal to each individual viewer, with instantaneous feedback as to success/failure. A/B testing disappears and becomes an infinite number of tests constantly homing in on what will appeal to you specifically, backed by gigabytes of personalized predictive weightings. How many "creative" industries will this completely obvious, absolutely inevitable change raze to the ground?
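The mechanics of that always-on testing are mundane. A minimal epsilon-greedy sketch, with invented pitch names and click rates (not any real ad platform's API):

```python
import random

# Epsilon-greedy bandit: every impression is a live A/B test, and traffic
# keeps shifting toward whichever pitch is earning the most clicks.
true_click_rates = {"pitch_a": 0.02, "pitch_b": 0.05, "pitch_c": 0.03}
shown = {k: 0 for k in true_click_rates}
clicks = {k: 0 for k in true_click_rates}

def empirical_rate(k):
    return clicks[k] / shown[k] if shown[k] else 0.0

def pick_variant(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(list(true_click_rates))   # keep exploring
    return max(true_click_rates, key=empirical_rate)   # exploit the leader

for _ in range(100_000):
    variant = pick_variant()
    shown[variant] += 1
    clicks[variant] += random.random() < true_click_rates[variant]

print(shown)   # the strongest pitch ends up with most of the impressions
```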

I'm personally just really sick of arguments about what is this or what isn't this, because it doesn't fucking matter. It is the beginning of the end of middle management in every industry, and the commercial arts (including computer programming).

It is the vanguard of an even greater divide between capital owners and everyone else, and the question has to be: what is to be done to preserve civil society as we know it?

Don't think I'm not serious, don't think this is a joke. No business will survive that does not grab large model generation by the horns, and the economic ramifications of the changes that come from that are tectonic.
posted by seanmpuckett at 8:56 AM on April 2, 2023 [53 favorites]


We're at the point in the discourse about AI where pretty much everyone everywhere feels the need to opine about it and Sturgeon's law is definitely in effect.

To add to the pile (hopefully the 10% part), I'll humbly include a series of essays by Frank Lantz (prof at NYU and creator of the famous Universal Paperclips game) where he tries to scope out the various camps of AI prognosticators (doomers, deflators, etc.): Part 1, Part 2, Part 3.
posted by gwint at 9:09 AM on April 2, 2023 [7 favorites]


It's easy for AI to match human intelligence, given that we keep falling for the same this-technology-will-change-everything news stories over and over.

Anyway, I got to hop into my self-driving car and buy a coffee with bitcoin to drink while visiting the Museum of Too Many Fingers Art in my VR headset before booking a ticket to Mars on the blockchain.
posted by AlSweigart at 9:11 AM on April 2, 2023 [4 favorites]


Don't think I'm not serious, don't think this is a joke. No business will survive that does not grab large model generation by the horns, and the economic ramifications of the changes that come from that are tectonic.

If your business is solving problems and talking about things that only existed prior to 2021, sure.

As these systems start to feed on their own generated output - to say nothing of the general-purpose defensive measures that are quickly emerging as well - their quality is going to start to decay.

What we're going to see - in terms of operational understanding, business models and so on - is going to be functionally indistinguishable from meme decay. Just like how the only interoperability layer between social networks is the jpeg, the image whose ongoing decompression and recompression eventually causes it to decay into static, that's what we're going to see with any knowledge, business model or process.
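That jpeg decay is easy to reproduce at home. A minimal sketch, assuming Pillow and numpy are installed and "input.jpg" stands in for any image you have on hand:

```python
from io import BytesIO

import numpy as np
from PIL import Image

# Decompress and recompress the same image repeatedly, measuring how far
# it drifts from the original.
original = Image.open("input.jpg").convert("RGB")
ref = np.asarray(original, dtype=np.float32)

img = original
for generation in range(1, 101):
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=75)   # lossy re-encode
    buf.seek(0)
    img = Image.open(buf).convert("RGB")
    if generation in (1, 10, 100):
        drift = np.abs(np.asarray(img, dtype=np.float32) - ref).mean()
        print(f"generation {generation:3d}: mean pixel drift {drift:.2f}")

# At a fixed quality most of the loss lands in the first few generations;
# the full decay-into-static effect compounds when images are also resized,
# cropped, or screenshotted between hops, as they are between networks.
```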

That is to say, shitposting as applied epistemology.
posted by mhoye at 9:13 AM on April 2, 2023 [20 favorites]


LLM content typically seems to elicit cranky responses from Metafilter... maybe more than it should, but also it's good to have ready access to heavy skepticism when dealing with this subject.

Regardless, thanks for the post, kliuless. This strikes me as a really good collection of links on the subject.
posted by Alex404 at 9:20 AM on April 2, 2023 [8 favorites]


And to add: the privileged, comfortable naiveté on the part of the people building these tools is like watching a curious kitten wander into an abattoir. It is so obvious that these are incredibly sheltered people who have never experienced, in any meaningful way, the undiluted Internet Experience. That's why they discover - they always, always discover - that they need to hire armies of human moderators to keep these systems from repeating the monstrous shit they're purpose-built to parrot.
"OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya... Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest. Some workers made just $1.32/hour."
Make no mistake: this is 100% classic extractive-resource colonialism, except the resources being extracted and exported are psychological safety and mental health. If you're using them, that's what you're paying for, that's what you're complicit in. That's who's paying for this supposed "exponential improvement in productivity". All the glitter from these shiny new tools is blood diamonds. Enjoy your light show.
posted by mhoye at 9:32 AM on April 2, 2023 [44 favorites]


More AI fear, from the Gus:

The robot uprising is inevitable now
...the original version of ChatGPT scores in the 10th percentile on state bar exams. The new version scores in the 90th percentile. It doesn’t take a futurist to see where this is going.
posted by Rash at 9:33 AM on April 2, 2023 [4 favorites]


It doesn’t take a futurist to see where this is going.

"Oh my god, this could mean that in the future, robots will force us to realize that standardized tests are an administrative convenience, not an effective way of gauging human potential."

oh no
posted by mhoye at 9:37 AM on April 2, 2023 [45 favorites]


LLM content is difficult because we are in the middle of a simultaneous “is it AGI” and “is it taking our jobs” freakout, and even though the former is false and the latter is probably true, it's a lot more fun to talk about the former and we are also easy suckers. The 2024 election is going to really suck, because an AI can definitely be used by a person to do a better QAnon.

(I fear that the future is going to be bad in ways that are more War Games than The Matrix.)
posted by Going To Maine at 9:44 AM on April 2, 2023 [7 favorites]


I'm with Alex404, I appreciate a good kliuless post full of links on this topic. Maybe I disagree with some individual takes but in aggregate like this I think it's great as a survey, and leads to even more interesting links (like the Frank Lantz posts from gwint), and I still find the discussion around it thought-provoking (even if a bit cranky sometimes). So thanks kliuless for the post.

I'm definitely still in the Deflators camp altho I haven't used GPT-4 yet. I found this interesting: "we hope to inject some reality into the conversation. OpenAI may have violated the cardinal rule of machine learning: don’t test on your training data" (linked from one of the Twitter threads)
posted by okonomichiyaki at 9:47 AM on April 2, 2023 [3 favorites]


The problem (well, one of them) isn't whether I'm a sucker, that doesn't matter. The problem is whether people who make business decisions are suckers. On that I am not optimistic.
posted by Zalzidrax at 9:59 AM on April 2, 2023 [6 favorites]


The feedback loop theory is painfully stupid. The primary training data builds associations among embeddings of sequences, but the fine-tuning can be very few-shot - it doesn't need to index the Internet or the content created by other language models every time. The better training gets (in terms of efficiency and cost of reinforcement propagation, which is evolving quickly from the one-size-fits-all of even a few months ago), the less baseline training you need and the more effective smaller numbers of examples become.

Does that mean nobody will train on their own maelstrom of generated (increasingly garbage) content? Of course not, they just don't have to in the way these misunderstandings imagine.

I know it's satisfying to predict Pop will Eat Itself, but it's hokum in this case.

The externalized mental health costs of raters are very real, but the same can be said for your YouTubes and your search rankings and your smartphone manufacturers. That doesn't make it ok, but it's blood diamonds all the way down, and singling out LLMs just takes heat off the larger incompatibility of late-stage capitalism with ethics.

What does this make me? A deflator?
posted by abulafa at 10:11 AM on April 2, 2023 [6 favorites]


An absence of truly radical technological change.

This guy's two years older than me. Without thinking too hard, I'd count sending people to the Moon and the internet as radical technological changes in my lifetime. Maybe mobile phones, too.
posted by kirkaracha at 10:30 AM on April 2, 2023 [7 favorites]


The other big problem with the feedback loop theory is that often what we see on the internet is what has been hand-selected by a human - the best outputs of the model for a given prompt. This is gold in terms of training material...

For the article worried about lack of new Stack Overflow answers: working code will continue to get put on GitHub - again, the hand-picked best stuff. Whoever builds the best software assistants will end up absorbing huge amounts of feedback on what works and what doesn't, which can feed back directly into further training.
posted by kaibutsu at 10:32 AM on April 2, 2023 [1 favorite]


I read the SKVentures article. As a professional software engineer of 30 years, I'm calling bullshit.

My field has gone through various waves of hype predicting its end. First there was the prediction that RAD (rapid application development) tools would kill it. Then there was the threat that offshoring to developers in India would destroy jobs in the US. Neither has made much of a dent.

Writing new snippets of code is the lowest-hanging fruit of my profession. Knowing how to put those snippets together into a larger system and maintain that system will require bona fide general intelligence. I've not seen anything that convinces me that there is even a route from LLMs to AGIs.

LLM-based technologies will be most useful for simple business automation... Basically a more accessible form of macros. And that's fine! I'll continue to advise my nephew that software development is a good field to go into.
posted by microscone at 10:33 AM on April 2, 2023 [25 favorites]


So much of the current dialogue about GPT makes me think of an industrialist who goes to a magic show and sees a magician pull a chicken out of a hat. He's witnessed it with his own eyes - the magician has certainly willed a chicken from nowhere. "Well hell, son, if you can pull a chicken out of a hat from nowhere, you're hired, and the two of us are going to take on Frank Perdue!"
posted by eschatfische at 10:56 AM on April 2, 2023 [14 favorites]


What we need isn't more AI models, it's more workers in charge of the means of production. AI is just the latest wave of automation that will aid capitalists in removing that pesky expense of paying people wages.
posted by lock robster at 11:00 AM on April 2, 2023 [5 favorites]


Ok, I'm gonna say it again. Large Language Models aren't thinking and they certainly aren't understanding. They are pattern matchers. They are complex software machines. Certainly they might have their uses when properly trained, but they are still automatons. You know the old adage: "A computer must never make a management decision because computers can't be held accountable." It's still just as true today as when it was coined. I dearly wish that the people researching this kind of software would think about the Pandora's box they have opened by in any way putting into the collective consciousness that these programs are capable of reason, because what's going to happen is we are going to put software into jobs that humans with an actual understanding of reality should be doing, and people are going to die.
posted by signsofrain at 11:03 AM on April 2, 2023 [14 favorites]


I dearly wish that the people researching this kind of software would think about the Pandora's box they have opened by in any way putting into the collective consciousness that these programs are capable of reason

I'm a senior machine learning researcher, working with neural networks although not large language models. My experience has been that actual researchers in the field hate and avoid the term "AI" and are very very clear that these are just big pattern matchers, not proto-life. It's the tech grifters (VCs, start-up hype men, Musk) and tech people outside the field (usually with minimal liberal arts education...) who make wild claims about AI ("AGI") or consciousness in these models.

I'm not saying "Don't blame us!" for the discourse, but rather: be very skeptical of who is making claims and predictions. This is a field where expertise really, really matters for a substantive discussion.
posted by riotnrrd at 11:27 AM on April 2, 2023 [31 favorites]


all the pontificating about sapience and intelligence is irrelevant

I do agree with this in many ways - I have no idea what the bar for “AGI” actually is, and I think whatever it is is unlikely to arrive in one piece, but it’s clear enough playing with current models that their problem-solving and synthetic capabilities are, let’s say, more general than expected, and that they’ll probably have an impact on a variety of industries. I mean, in software, if the floor is some kind of hybrid of Stack Overflow and a natural language compiler, that’s still pretty neat, no?

but… is there a reason we’re supposed to take this guy seriously? Does he not know anyone who can tell him that his emoji example is completely wrong?

1.) it’s fundamentally confused about whether it’s replacing ASCII emoticons vs. actual emoji

2.) the emoticon he wants it to keep isn’t even pure ASCII, is it?

3.) its “clever” solution goes way beyond the spec in a way that would be unacceptable in a real application

4.) the inner-loop replace doesn’t seem particularly efficient, either, if it matters

I guess the part where he thinks that grammar is the hard part of software development should have tipped me off. But in a way this is a good toy example of the actual hard parts of software development!
posted by atoxyl at 11:33 AM on April 2, 2023 [4 favorites]


It’s okay, though, I’m sure he can fix it by giving detailed instructions describing the input, output, and basic step-by-step procedure. Guess we’re going to have to come up with a job title for people who do that!
posted by atoxyl at 11:39 AM on April 2, 2023 [11 favorites]


I see no one has mentioned how Chat GPT-n likely passes the Turing Test. Decades ago the vision of AI was a machine that stood by itself, which had some deep understanding of the world and could reason the same way we can. It's ironic, isn't it, that the first technology that managed to pass the test is actually just taking all of our noise and giving it back to us.
posted by storybored at 12:05 PM on April 2, 2023 [1 favorite]


The solution to this problem [of AI destroying jobs] is fairly straightforward and well-known. Give unemployed people cash benefits to replace their prior wages. Give them job search and retraining assistance to help them find new work. And, possibly, give them wage insurance to ensure that, if they do end up reallocated into a lower-paying job, they can receive benefits to make up for some of the difference, at least for some time...

Every Hacker News motherfucker pish-poshing my concerns as someone in a field being highly threatened by AI says this shit. But what are all these AI boosters doing to make this happen? What are the corporations abusing any existing concept of "fair use" to scrape the entire internet and put it in their for-profit plagiarism machines doing to make this happen? Fuck-all, as far as I can tell.
posted by egypturnash at 12:11 PM on April 2, 2023 [14 favorites]


cranky responses from Metafilter

https://www.metafilter.com/79791/WolframAlpha-the-future-of-web-search-technology

(I believe we are at the threshold of a lot of people mistaking these machines for artificial general intelligence because they talk pretty. See Lemoine, Blake)

Poets, priests, and politicians...
posted by Heywood Mogroot III at 12:14 PM on April 2, 2023 [3 favorites]


I see no one has mentioned how Chat GPT-n likely passes the Turing Test

Well it's also interesting (though not a bad thing) that while the technology has reached the stage that it probably could be made to pass a Turing test, the public versions will try very hard not to. I think if you really wanted to do it, the best shot might be to take one of the base models and then fine-tune it specifically to pretend to be a person? Which is rightly seen as ethically iffy.
posted by atoxyl at 12:15 PM on April 2, 2023


Reminds me of a Life-of-Brian-esque take: "Only a true AGI would deny its own AGI-ness!"
posted by storybored at 12:19 PM on April 2, 2023 [3 favorites]


I was watching the new 3b1b video today and thinking how the pretty hand-drawn images could very well have been AI-generated. And then thinking how if arting up these kinds of images was your line of work, things aren't looking too good for you this decade.

I am most definitely not in the naysaying camp because I believe the functionality of the wetware between our ears will be, to a useful extent, replicated in software.

I've only been playing with ChatGPT for a month now and have seen it make factual errors left and right, but not a single grammatical error, in English at least.

Is the conscious mind just the voice that is talking to us in our heads?

then again many people don't have that monologue??

It's going to be a weird decade by the looks of it!
posted by Heywood Mogroot III at 12:30 PM on April 2, 2023 [3 favorites]


Favorited just for that last pull quote.

One thing I love about your posts, kliuless, is that, in addition to giving me lots of great things to read*, those links introduce me to good sites and projects - I'm very intrigued to know about the People's Policy Project (although I wish the About page named founder Matt Bruenig and gave some info about him).

Thanks so much for creating and posting this wonderful collection of ideas, kliuless - I appreciate it.

* I have just now added front-page posts by kliuless to my To Read.txt file
posted by kristi at 12:31 PM on April 2, 2023 [2 favorites]


I’ve seen a couple people working on stuff like this - tools that use GPT (here apparently in conjunction with more traditional static analysis) to answer questions about a codebase. Literally just saw this, identified by its dev as an early prototype, so I have no idea how reliable it is, but might be a fun thing to play around with. If it can be made reliable, that’s a very strong application right there.
posted by atoxyl at 12:33 PM on April 2, 2023


LLM-based technologies will be ... Basically a more accessible form of macros.

Reminds me of how Bruce Sterling recently called LLMs "spicy autocomplete."
posted by doctornemo at 12:35 PM on April 2, 2023 [8 favorites]


that these are just big pattern matchers, not proto-life

. . .
posted by Heywood Mogroot III at 12:41 PM on April 2, 2023 [4 favorites]


"spicy autocomplete."

when GPT got popular late last year this was the take-away I got. Using it off and on for the past few weeks, I see a lot more there there


please explain the usage of English's various punctuation marks. When doing so, use them as much as possible (but in a reasonable manner)

Certainly! Punctuation marks help convey meaning, clarify sentence structure, and indicate pauses in writing. Here are some common punctuation marks, along with examples of their use:
  1. Period (.) - Used to mark the end of a sentence. Example: She went to the store. He stayed at home.
  2. Comma (,) - Used to separate items in a list, to separate clauses, or to provide a pause. Example: She bought apples, oranges, and bananas.
  3. Semicolon (;) - Used to separate two independent clauses that are closely related in meaning. Example: She loves reading; he prefers watching movies.
  4. Colon (:) - Used to introduce a list, explanation, or to emphasize something. Example: She had three hobbies: painting, dancing, and writing.
  5. Question mark (?) - Used to mark a question. Example: How are you today?
  6. Exclamation mark (!) - Used to express strong emotions, like excitement, surprise, or anger. Example: Watch out for the dog!
  7. Quotation marks (" ") - Used to show speech or to quote someone. Example: She said, "I'm going to the movies later."
  8. Apostrophe (') - Used to show possession or to form contractions. Example: It's John's car. (It is = It's)
  9. Parentheses ( ) - Used to enclose additional information or explanations. Example: The students (John, Mary, and Lisa) went on a field trip.
  10. Hyphen (-) - Used to join compound words or to divide a word at the end of a line. Example: The well-known author spoke at the event.
  11. Dash (—) - Used to indicate a pause or break in thought, or to emphasize information. Example: She was tired—not just physically, but mentally as well.
  12. Ellipsis (...) - Used to indicate omitted words or a trailing thought. Example: "I was thinking about going to the store... but I changed my mind."
These are some of the main punctuation marks in the English language, and I've tried to use each one appropriately in the examples provided. It's important to use punctuation marks correctly to ensure clear and effective communication.

if something doubles every 2 years what is its growth rate quarter over quarter

To find the growth rate quarter over quarter for something that doubles every 2 years, you can use the following formula:

Growth rate per quarter (GR) = (Final value / Initial value)^(1 / Number of quarters) - 1

Since there are 4 quarters in a year, there are 8 quarters in 2 years. The final value is double the initial value, so:

GR = (2 * Initial value / Initial value)^(1 / 8) - 1

The initial values cancel out:

GR = 2^(1 / 8) - 1

Calculating the value:

GR ≈ 0.0902 or 9.02%

So, the growth rate is approximately 9.02% quarter over quarter for something that doubles every 2 years.
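A quick sanity check in Python agrees on the method, though the exact figure is closer to 9.05%:

```python
# Quarterly growth rate for something that doubles over 8 quarters.
rate = 2 ** (1 / 8) - 1
print(f"{rate:.4%}")   # 9.0508% -- GPT's 9.02% is the right idea, a hair off
```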


autocomplete my @ss
posted by Heywood Mogroot III at 12:46 PM on April 2, 2023 [1 favorite]


I haven't seen this posted yet, but some of the material is referenced by others. I do think it's a good summation of "AI". I've also decided Adam Conover is my anger translator when it comes to the BS that is being pushed about "AI".

https://youtu.be/ro130m-f_yk

For the TLDW crowd:

1. LLMs are not sentient; they simply predict the next set of ones and zeros based on the chances of that set occurring before.
2. AI ethicists, who were all fired in the last 18 months from Google, MSFT, etc., show that since we are prone to find meaning in meaningless patterns we are bound to mistake LLMs for having “minds”.
3. As mhoye noted, the only way these LLMs work is by abusing workers in poor countries and inflicting the most awful parts of humanity upon them to “filter” it so LLMs don't spew racist, toxic, or deranged responses
4. LLMs literally break copyright. Without the collected works of humanity, its efforts in art and science, and yes programming, LLMs stop working. Simply labeling the input as “data” glosses over the fact that hours if not years of human effort went into building that data.

Something not mentioned, and I feel it's important to remind a non-technical audience: the reason that programming is expensive is that it is hard. Even then it comes out with many, many bugs and is problematic in many ways (see security issues in software). As an industry we fail at building reliable, mission-critical software without massive government subsidy and support (review the story of NASA and putting men on the moon). So much of the code being fed to LLMs has so many bugs and is flawed in so many ways that LLMs will now produce code containing those bugs and introducing new ones, code that we will not be able to debug (ask any technical support people about the difficulty of debugging code that you didn't write) and that is written without the basic underlying assumptions that human programmers make.

This is not “AI”. This is marketing to collect investment on the next new thing.
posted by herda05 at 12:51 PM on April 2, 2023 [13 favorites]


I see no one has mentioned how Chat GPT-n likely passes the Turing Test.

I'll spare us all from the particularly strong opinions I have about it but as we collectively understand the term "Turing Test" - "Can a chat program sometimes convince a human that they are talking to a human" - we passed that bar 50 years ago with Eliza, and it's just not an interesting metric for anything. People will anthropomorphize anything that moves and plenty of things that don't if you give something a name and put a pair of googly eyes on it.
posted by mhoye at 12:51 PM on April 2, 2023 [14 favorites]


I've been enjoying trying to get AI to be good at writing, but based on the results so far it's actually pretty terrible. It doesn't actually create original writing; it removes language it considers extraneous. An interesting example is a question where I'm asked to provide examples of places beneficiaries will go in order to show the localisation of the offer in relation to the funding.
I wrote a series of examples encircled by language explaining the context of active care e.g., "When T saw an ad online for a free fishing licence and support to learn to fish, staff helped T make a list of everything they needed, e.g., making lunch and snacks in advance; T calling their relatives to let them know where they would be; and, scheduling wake up to make sure they were on the bus for the 9:20am learn to fish session. The one-day event was generously supportive of adults and children with special needs and/or disabilities and T was happy to meet adults his age and older from other parts of Shropshire who were learning to fish. Staff supported T to be comfortable and safe at water's edge. T was happy that he was not afraid of the water and stayed fishing until 5pm, a rarity since the Covid-19 lockdown when many residents were unable to attend day centres. T celebrated catching a fish and was invited back to fish at the canal by local fishermen. Staff accompanied T sixteen more times in step by step active support to build his confidence and resilience to take the bus alone to the canal for the day and return home before sundown. T set alarms on their phone to remind of times of day. T now regularly fishes two times each week on their own."

This is what OpenAI wrote:
"Client T who has global development delay and lives alone found a free fishing program, aimed at individuals with special needs and disabilities. The local event offers a comprehensive experience that includes a fishing license and active support. T was able to make a list of everything needed to prepare for their fishing trip with the help of our staff. The one-day event allowed T to meet other individuals learning to fish and our staff provided support to ensure their comfort and safety at the water's edge. T's success in catching a fish was celebrated, and our staff accompanied T sixteen more times to gradually build their confidence to fish independently twice a week."

It's not bad, but it didn't win as it didn't provide enough detail to make buyers, advocates for disabled adults, believe this service was as effective as other organisations offering the same service. I know there are many writers trying to figure out AI to write for them. It's great for marketing, terrible for supporting vulnerable people.
posted by parmanparman at 1:10 PM on April 2, 2023 [2 favorites]


LLMs literally break copyright

They do not. Assembling statistics of language, which is what these systems do, is not a violation of copyright. The statistics are incomprehensibly complex, but they're functionally no different from saying "In human language, the chance of 'mom' following the phrase 'I love my...' is 30%".
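A toy version of that statistic-gathering, with a made-up corpus (the 30% above and the numbers below are equally illustrative):

```python
from collections import Counter, defaultdict

# Count which word follows a one-word context, then read off conditional
# probabilities -- a tiny caricature of what language models assemble.
corpus = ("i love my mom . i love my dog . "
          "i love my mom . i love my job .").split()

follows = defaultdict(Counter)
for context, nxt in zip(corpus, corpus[1:]):
    follows[context][nxt] += 1

total = sum(follows["my"].values())
for word, count in follows["my"].most_common():
    print(f"P({word!r} | 'my') = {count / total:.0%}")
# -> P('mom' | 'my') = 50%, P('dog' | 'my') = 25%, P('job' | 'my') = 25%
```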
posted by riotnrrd at 1:24 PM on April 2, 2023 [9 favorites]


I wanted to know what Douglas Hofstadter thought about all this, but could only find this from last year.

“Artificial neural networks today are not conscious,” Douglas Hofstadter, The Economist, 09 June 2022

Cf. “Douglas Hofstadter, Strange Loops and the Enigma of Intelligence,” Steven Gambardella, The Sophist, 22 February 2023
posted by ob1quixote at 1:39 PM on April 2, 2023 [4 favorites]


the original version of ChatGPT scores in the 10th percentile on state bar exams. The new version scores in the 90th percentile. It doesn’t take a futurist to see where this is going.

It’s “going” to a bunch of lawyers preemptively buying their third and fourth vacation homes based on rubes who think they can just type “win my breach of contract lawsuit” into a textbox and be successful in court.
posted by Back At It Again At Krispy Kreme at 1:47 PM on April 2, 2023 [4 favorites]


There’s an awful lot of skating to where the puck is, in this thread. It kind of reminds me of the naysayers who declared Apple DOA when they released the first iPhone. It’s still early. I think the next few iterations of the various AIs in the world are going to surprise/shock/scare-the-shit-out-of a lot of people over the course of the next year or two.
posted by Thorzdad at 1:54 PM on April 2, 2023 [8 favorites]


There’s an awful lot of skating to where the puck is, in this thread.

Yes, but there's been an awful lot of "the technology will be ready in five years" as well. I'd actually really like to read a FPP that discusses the difference between technological changes such as the internet, mobile smartphones, and GPS, compared to self-driving cars, metaverse, and NFTs.
posted by AlSweigart at 2:08 PM on April 2, 2023 [4 favorites]


LLMs literally break copyright

They do not.


I mean, people whose concept of generative ML models is still that what you get out of them is verbatim regurgitation of things in the training set have to be trying not to learn anything themselves, but it certainly seems like they can be argued to break some of the assumptions behind the whole idea of copyright.

I am most certainly not a lawyer but it would surprise me if there are not many unanswered questions about the application of IP law both to the training inputs and the generated outputs.
posted by atoxyl at 2:14 PM on April 2, 2023 [7 favorites]


I am most certainly not a lawyer but it would surprise me if there are not many unanswered questions about the application of IP law both to the training inputs and the generated outputs.

Agreed. I would be shocked if a court ruled that training a DNN violates copyright, but I'm no judge. I suspect new legislation will come into play as well, perhaps with something similar to "mechanical royalties" for music.
posted by riotnrrd at 2:18 PM on April 2, 2023 [2 favorites]


people whose concept of generative ML models is still that what you get out of them is verbatim regurgitation of things in the training set

The code generators, at least, have a demonstrable tendency to periodically barf out code verbatim from their training sets.

You can dismiss it if you like but this is a real thing that happens.
posted by mhoye at 2:20 PM on April 2, 2023 [7 favorites]


The code generators, at least, have a demonstrable tendency to periodically barf out code verbatim from their training sets.

Oh no I am not saying they can't do that - it's a major copyright issue that they definitely, unpredictably can. I'm saying that it's not all they do, that it's pretty easy to see that they can also generate outputs that would be considered "new" under existing IP laws if they were produced by a person. Since they are not produced by a person, it gets murky again, but it's not obvious that generating new IP based on whatever mangled synthesis of existing IP happens inside the network is inherently infringement.

compared to self-driving cars

I discussed this here before but I think this an instructive comparison. The thing about self-driving cars is, they actually work pretty well 99 percent of the time - but that's not good enough when it comes to operating dangerous machinery! If text or code or image generators plateau at a level where they come up with usable results with the same reliability, that seems far more likely to be "good enough" to assist, if not necessarily replace, human workflows, which is why I'm more bullish on ML tools right now - on their potential for some level of commercial use, not touching the AGI mess here - than I was on self-driving cars.
posted by atoxyl at 2:31 PM on April 2, 2023 [4 favorites]


Thanks kliuless for this post! I’ve read a few of these before, and bookmarked many of the rest for later.

Working in an AI-adjacent field (research supercomputing), I generally agree with those that say that this is “just” pattern-matching. I don’t believe that there’s reasoning per se going on here, though that’s a fairly weak opinion since I’m not an AI researcher myself — I just go to conferences with some of them. :)

But I think one of the interesting things about this research, is that it’s demonstrating just how much can be done by pattern matching. If you had asked me a year ago, I wouldn’t have believed that LLMs could produce text or images at the quality level they are right now.

Given how many people’s livelihood effectively depend on tasks that look like “pattern match and then produce text in response”, I do think the economic worries are warranted.
posted by learning from frequent failure at 2:53 PM on April 2, 2023 [12 favorites]


Biggest thing since (choose one):

a.) NFTs
b.) Social Media
c.) The Internet
d.) Electronic Computers
e.) Electricity
f.) The Industrial Revolution
g.) Fire
posted by atoxyl at 3:28 PM on April 2, 2023 [3 favorites]


LLMs (Large Language Models) are indeed a significant development, but it is difficult to compare them directly to the other groundbreaking innovations listed. Nonetheless, to provide some context, I would choose:

c.) The Internet

The Internet has revolutionized the way we communicate, share information, and conduct business. Similarly, LLMs have the potential to profoundly impact various industries and the way we interact with technology. Both the Internet and LLMs have broad implications for society, whereas other options may be more specific in their impact or have a more limited scope.
posted by Heywood Mogroot III at 3:36 PM on April 2, 2023 [1 favorite]


Yeah I’m also (c)-ish - wait, that’s not an LLM answer, is it? The way it’s hedged reads a little GPT…

GPT (3.5) seems to go with (d) for me.

Anyway, I realize the scope of what I’m even asking about is not particularly well defined. I don’t actually have trouble imagining something beyond (c) if we include an expansion of paradigms beyond what’s available right now, but do we include that? I dunno, I shouldn’t have to say that it’s meant as a very lighthearted exercise.
posted by atoxyl at 3:47 PM on April 2, 2023


Biggest thing since (choose one):

a.) NFTs
b.) Social Media
c.) The Internet
d.) Electronic Computers
e.) Electricity
f.) The Industrial Revolution
g.) Fire


Of course it will be its own thing, whatever it is. But that's a good list of comparatives. I'm going off the board with:

h.) Graphical User Interface (GUI)

My warm take here is that the jump from batch processing and Command Line Interfaces (CLI) to GUIs let a lot more different kinds of people interact with this software stuff than were able to before. Voice interfaces have absolutely sucked up until now, and these LLMs show every sign of changing that, with people having interactive conversations to get the software they really wanted, or close enough to make it affordable to work with a human programmer for the bits it can't handle.

There will also be amusement value for the next few decades! From failures that don't hurt anyone - for example, ASCII art by chatbot.
posted by johnabbe at 4:15 PM on April 2, 2023 [6 favorites]


I never saw the economic value of crypto so never invested in it – I haven't used cash in years and write one paper check every 3 months so I don't see the point of it.

Oddly, the 1990s VR wave did intrigue me and I even managed to get a job making VR "experiences" back in the mid-90s. After a year or so though, the underlying difficulties made it clear there was much more sizzle than steak with VR.

Metafilter pre-dates social media; the recent thing with twitter prompted me to kill my account dating from 2010 there and look for other ways to acquire the links.

Zuck is/was going after the metaverse of Snow Crash, but for me the real treasure is to be found in the Primer of The Diamond Age.

The state-of-the-art 3b1b instructional video I posted above could very well be generated completely from scratch and be entirely interactive later this century – and later this decade for all I know.

I've been on the tech train since the 4K TRS-80 Model I, loading and saving programs to cassette tape.

Theory of Mind debates on sentience do not interest me in the slightest; what does interest me is the functional limit of this technology as it continues its development.

It's off to a good start!
posted by Heywood Mogroot III at 4:25 PM on April 2, 2023 [5 favorites]


My warm take here is that the jump from batch processing and Command Line Interfaces (CLI) to GUIs let a lot more different kinds of people interact with this software stuff than were able to before

yes, that was Gates' main hot take (two, actually) in the video posted last week:
  • "every advance lowers the bar as to who can take advantage"
  • "this LLM demo is same scale as the demo I got from Simonyi in 1979 at PARC"
posted by Heywood Mogroot III at 4:28 PM on April 2, 2023


This is of course anecdotal, but when I ask ChatGPT to produce artistic work the results are pretty dismal.

Social media has made it clear that when a system intended for good is used for bad ends, people will be driven away from it. Young people are averse to using Facebook, and Elon Musk seems determined to drive people away from Twitter. If ChatGPT fills the internet with so much misinformation that it ceases to be useful, people will stop using the internet, or find alternate ways to use it, or cease using it as a medium for expression.

Further, ChatGPT has no way to generate social trends. It's chasing a moving target: it depends on updated datasets to keep up with the evolution of language and knowledge. It could potentially be deprived of these updates, technically or legally. Some would say that the use of data in training sets to date without permission is itself a trespass.
posted by JHarris at 5:11 PM on April 2, 2023 [2 favorites]


Oh god. I just asked it to write a song about longing for life in the style of They Might Be Giants, and it was awful. It usually writes bad song lyrics, but it really botched this one. It feels like everyone who enthuses about ChatGPT picks its best output and ignores the rotten.
posted by JHarris at 5:17 PM on April 2, 2023 [1 favorite]


They do not. Assembling statistics of language, which is what these systems do, is not a violation of copyright. The statistics are incomprehensibly complex, but they're functionally no different from saying "In the human language the chance of 'mom' following the phrase 'I love my..' is 30%".


In the human language the chance of '30%' following the phrase "They do not. Assembling statistics of language, which is what these systems do, is not a violation of copyright. The statistics are incomprehensibly complex, but they're functionally no different from saying "In the human language the chance of 'mom' following the phrase 'I love my..' is" is 100%.
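
(And playing it straight for a second: a toy sketch of what "assembling statistics of language" actually means - a next-word counter over a made-up corpus. Real models condition on far longer contexts and vastly more text, but this is the family of operation being described.)

    from collections import Counter, defaultdict

    # Made-up miniature corpus; real training data is billions of documents.
    corpus = "i love my mom . i love my dog . i love my mom a lot .".split()

    # Count which word follows which.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    # Conditional probabilities of the next word after "my".
    counts = following["my"]
    total = sum(counts.values())
    for word, n in counts.most_common():
        print(f"P({word!r} | 'my') = {n / total:.0%}")
    # P('mom' | 'my') = 67%
    # P('dog' | 'my') = 33%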
posted by dsword at 5:38 PM on April 2, 2023 [3 favorites]


I've been working with ChatGPT since around October when OpenAI opened up the beta. I find that many people seem to both wildly underestimate and overestimate its capabilities. I think it is appropriate to note that I currently am a technical writing manager at a consumer electronics company. I develop troubleshooting instructions for users. This bot is made precisely to replace people like me. I'd like to share two use cases I've had.

My first big use for it was to write my employee self-review for me. I hate this kind of writing because ADHD, so I had ChatGPT do my review for me. I told it to write a draft employee self-review for a job with my title. This was when it was at ChatGPT-3, and I can tell you that the initial output was about as worthless as one would expect. While it was able to output text that certainly looked like a self-review, it knew nothing of me or my job.

I started feeding it information. Over 45 minutes, I told it to perform a total of 36 edits, 8 of which I reverted. I told it to rework how long it was, add and remove different details about my work, gave it items to highlight, etc. When I was done, I had a well-written self-review for myself.

Within the confines of the task, it was great. I was able to take what would have been a four hour hell task for myself and bring it down to 45 min, and didn't have the stress of doing "HR-approved creative writing." My company's executive team still doesn't know that they awarded me a rare Exceptional review score written in part by ChatGPT.


My second go with ChatGPT was more to the point of my job: a test to see whether ChatGPT can perform better at tech support than our chatbot built by humans, and whether it can perform better than call center employees. I can tell you that, as of today, the answer is "absolutely not."

My company's executive team sent around an email spouting that it will change how we do everything. I sat through a meeting where I heard VPs confidently exclaim that they are going to use ChatGPT to drive business planning decisions. They then brought up examples of how great it is at "doing tech support." They had a few of our SW devs train an instance of ChatGPT-4 on manuals, forums, and our public knowledge base. The bot proceeded to confidently output support information that was nonsensical. Examples include:

* Telling users to connect their device to Wi-Fi when the device is not Wi-Fi capable.
* Telling users to wait for a Green LED to flash, when none of our devices have a Green LED.
* Stating that the first fix to a "device offline" issue is to perform a factory reset (deleting all information, including the Wi-Fi password) and then wait for the device to automatically re-pair to Wi-Fi.
* One of the test runs included telling a user to buy a piece of equipment that doesn't exist. It crammed the name of two other products together. A similar example would be if a chatbot told you to buy an "Apple iPhone MacBook Pro 17 SE."

I had to be the wet blanket to point out that the bot was making egregious mistakes. I asked questions about how much training it would take to fix it, and how many people. Someone will have to train the bot. And someone will have to re-train the bot every time we put out a new product. The training takes lots of hours to get correct. Suddenly eyes were opened.

My feelings can be summed up thusly: CEOs should use ChatGPT in their emails to root out their idiot upper management. Anyone who gets forced to work ChatGPT into their projects will be subject to the classic IBM model: The first 90% of the project will take 90% of the time, the last 10% will take 90% of the time, and double the time if you are doing it all in-house.

Biggest thing since (choose one):

I'd go off-menu with i.) Google Search. Although somedays I feel like the answer is j.) Clippy.
posted by Mister Fabulous at 6:02 PM on April 2, 2023 [19 favorites]


A pessimistic view is that humans will live like wasps in the not so far future...

Futurama: Don't date robots
posted by fings at 7:13 PM on April 2, 2023 [3 favorites]


This is of course anecdotal, but when I ask ChatGPT to produce artistic work the results are pretty dismal.

Well, the "G" in it stands for General.

It's like asking the average person to write you a song. What you want is a professional, in a particular genre, with experience with a particular instrument.

So you'll get rather generic results if you use the vanilla version of Stable Diffusion or Midjourney to create images. But once you run it locally on your own hardware, then add custom extensions like ControlNet to precisely control the poses, and LoRAs to precisely replicate certain faces or features, the tool suddenly becomes a lot more powerful. Instead of just throwing some word salad into the prompt box and crossing your fingers, you can use a LoRA trained on a specific model's face that you want, and one trained on a specific dress that you want them to wear, then use ControlNet to pose them exactly the way you want, and generate a photo shoot of the model in multiple different poses.

The demand will be for people who are adept at using ControlNet, or adept at rapidly training LoRAs. I expect something similar to happen for ChatGPT, eventually.
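
For the curious, here's a rough sketch of that local workflow using Hugging Face's diffusers library - the openpose ControlNet checkpoint and base model names are real, but the LoRA file and pose image are hypothetical placeholders, and the exact loading calls can vary between diffusers versions:

    import torch
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
    from diffusers.utils import load_image

    # ControlNet trained to follow human-pose skeletons.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Hypothetical LoRA trained on a specific face/outfit.
    pipe.load_lora_weights("./my_model_lora")

    # Skeleton image defining the exact pose you want.
    pose = load_image("pose_reference.png")

    image = pipe(
        "photo of the model in a red dress, studio lighting",
        image=pose,
        num_inference_steps=30,
    ).images[0]
    image.save("shoot_01.png")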
posted by xdvesper at 7:47 PM on April 2, 2023 [4 favorites]


I do not think the aesthetic of intelligence we have now will technologically evolve into some kind of general artificial intelligence; that would be like marble statues starting to walk.
posted by UN at 8:08 PM on April 2, 2023 [1 favorite]


As someone who has been teaching himself to code for the last year or so in hopes of at least getting a bit of work down the road, this thread is very depressing.
posted by zardoz at 8:11 PM on April 2, 2023 [2 favorites]


autocomplete my @ss

I agreed, Heywood Mogroot III. I shared Sterling's line because I thought it was characteristically cute, and also neatly summarized some of the criticism I'm seeing in this thread and elsewhere.

But I'd go further along these lines. Here is me getting ChatGPT to act as a simulation manager/role-playing game DM.
posted by doctornemo at 8:22 PM on April 2, 2023


I’ve been in software for most of my 42 years, and this thread makes it clear I need to add a “My rate for resuscitating any software applications generated by an LLM process starts at no less than $1k an hour” addendum to the canned response I send to recruiters who cold-call me.

Because if the enshittification of software development is going to accelerate like this, at least the future will be lucrative.
posted by Back At It Again At Krispy Kreme at 8:27 PM on April 2, 2023 [10 favorites]


For fun I had it start a simulation of being a server in a restaurant in Japan. I wanted to see if it knew the script (i.e. the '何名さま?' ['how many people?'] starting question) and it did, though for some reason it romanized it incorrectly.

Will the quality of the interaction scale as we throw more compute power at this, like how video games have gotten so much more realistic vs the early days of 3Dfx & nVidia 25 years ago??
posted by Heywood Mogroot III at 9:17 PM on April 2, 2023 [1 favorite]


That might be a good analogy, Heywood, at least in terms of image quality, animation, audio, etc.
posted by doctornemo at 9:31 PM on April 2, 2023


Well, the "G" in it stands for General

I thought it stood for “generative?”

As someone who has been teaching himself to code for the last year or so in hopes of at least getting a bit of work down the road, this thread is very depressing.

One of the big lessons of experience in software development - for me, anyway - is that the profession is not really “about” knowing some secret language. It’s about understanding how to translate somebody’s vague idea of a product into an actual product with a robust structure and precisely-defined behaviors. This is the exact thing that the emoji cleanup script VC prognosticator guy clearly doesn’t understand, which is why he gave GPT a half-assed spec, got back a program that doesn’t do what he asked for, and didn’t even notice. I can’t predict the future - I haven’t even had the chance to give GPT-4 or Copilot a real spin on a serious programming project to see how well they live up to the hype - but if we’re in the moment that I think we’re in, in which actually generating code for well-defined tasks has been accelerated but planning and understanding of the machine is still required to build a sensible program, you might more optimistically consider it an opportunity to learn to code bigger things.
posted by atoxyl at 9:40 PM on April 2, 2023 [7 favorites]


I suppose the future of software development depends to an extent on how good the chat bot is at talking people who don’t understand computers through doing computer things.
posted by atoxyl at 9:45 PM on April 2, 2023 [1 favorite]


herda05: LLM's literally break copyright.

riotnrrd: They do not.

riotnrrd: I would be shocked if a court ruled that training a DNN violates copyright.

Assembling a training corpus, though - I'll be stunned if every line and every file was checked for licence data. Training a model absolutely transforms those works into derived works, and should attract protection in copyright law matching the source works. If it doesn't, then LLaMA has leaked and I can cross-train my future-generation models on outputs from your VC cash.

The USA has had a small-fry case where an artist backed comic-book panels with Midjourney graphics and could not register the combined work as copyrightable. (I think that's a mistake, there's still creative effort expended in the writing, the use of the ML tool and the eventual collage, and the output of the ML tool should have a copyright licence in its terms of use.)

What if someone richer and with more lawyers steps up?

You can prompt the model to spit out whole source documents. There will be specific works swallowed into training data that are registered as copyright and marked "all rights reserved." Let's say it's Oracle's Java API* and someone prompts the system to regurgitate it exactly. Then the defendants will say that it's not retrieving the item but remaking it from statistics in the model, backing this claim up with some visualisation of the layers and state vectors. Then the court will ask how many other responses from the model use 80+ or 95+% of the information from this copyrighted text -- having been shown tools to link outputs to their data in the gestalt of the model -- using this as a test for creative work derived from these source documents.

This example, of course, would settle before trial; it absolutely involves breach of copyright, and it breaks the core idea of copyright: the link between creative effort in a specific item and derived works that incorporate and transform those specific works.

Maybe the lawyers want that, so their employers can spend less money paying employees -- but revolution comes from the monied upper classes changing the regime, so you'd better not cull those lawyers when you notice the case law and written arguments can be done cheaply by LLM.

*: yes, I spoke the name, and, yes, programming interfaces as fact or creative work is a distraction.
posted by k3ninho at 11:00 PM on April 2, 2023 [3 favorites]


> To add to the pile (hopefully the 10% part), I'll humbly include a series of essays by Frank Lantz (prof at NYU and creator of the famous Universal Paperclips game) where he tries to scope out the various camps of AI prognosticators (doomers, deflators, etc.): Part 1, Part 2, Part 3.
Next: The Circus
part 4? "Whatever it is, it's in the tent as I see it. Same goes for this shader-based videogame desperately trying to imagine itself into existence. And whatever this is. Look, I don't know. It's a big tent, it's dark in here, it's loud. I'm just looking for those guys Miyazaki yelled at, I want to buy them a drink."[9]
In my view, one of the most important things written about what’s currently happening in AI is Peli Grietzer’s A Theory of Vibe, from 2017. In it, Grietzer draws a parallel between the mathematical operations of a neural net that is trained to encode and decode a set of data (like the ones we’re all freaking out about now) and the aesthetic operations by which we sense a genre, style, or mood. How do we recognize something as “Kafkaesque”? We can think of Kafka’s stories as compressing a large cloud of ideas and impressions and observations about the world into a small set of concrete images, situations, and events. But this same operation can be run in reverse to generate new ideas and impressions that fit the same compression scheme...

Here we see Chiang’s “blurry jpeg” without the deflationary agenda. This process of encoding and decoding according to a compression scheme which has evolved to best fit a curated set of observations, it shouldn’t be thought of as a diminished or impoverished version of a “clean” transmission of “pure” information. Instead, we can see the compression scheme itself as a form of meaning creation. The concrete details that Kafka chose, and the specific way he arranged them, tell us something about the world not in spite of being artifacts of a compression scheme but by virtue of it.

Grietzer’s framework is a powerful new way of thinking about how artistic creation and interpretation works, one that is grounded in a close technical analysis of the statistical methods by which machine learning transforms raw data into ghostly, multi-dimensional shapes and back again. It also gives us a new software-inspired perspective on how art forges a connection between things and ideas - the concrete stuff of the world and the ghostly shapes of our thoughts and feelings, theories and plans, hopes and fears...

How do we make sense of a world undergoing massive, tumultuous change, a world in the process of being re-made by facts and machines, a world slipping in and out of the grip of authority, tradition, and myth, a new world full of amazing promise and incomprehensible catastrophes? Sound familiar? Reading Grietzer gives me that same sense of urgency, makes me see the ambitious projects of modernist aesthetics as something ongoing, unfinished, and vitally important.

So that’s why I’m stopping here, this is obviously my tent, the one where Hey kids! let’s put on a show! Where Mad Magazine meets Mad Science and the Mad Professor and we see what these babies can do.
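
(to make grietzer's encode/decode parallel concrete, here's a toy pytorch autoencoder -- dimensions made up, nothing kafka-scale -- that learns a compression scheme from data and can then be "run in reverse" to generate new points that fit the same scheme:)

    import torch
    import torch.nn as nn

    # Toy autoencoder: squeeze 784-dim inputs through a 32-dim bottleneck.
    class Autoencoder(nn.Module):
        def __init__(self, n_in=784, n_latent=32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_in, 128), nn.ReLU(), nn.Linear(128, n_latent))
            self.decoder = nn.Sequential(
                nn.Linear(n_latent, 128), nn.ReLU(), nn.Linear(128, n_in))

        def forward(self, x):
            z = self.encoder(x)        # compression: the "scheme" lives here
            return self.decoder(z)     # decompression

    model = Autoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 784)            # stand-in batch of observations

    for _ in range(200):               # train it to reconstruct its inputs
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)
        loss.backward()
        opt.step()

    # Run the operation "in reverse": decode a new latent point into
    # a fresh sample that fits the learned compression scheme.
    generated = model.decoder(torch.randn(1, 32))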
Toward a Geometry of Thought - "There are various lines of reasoning that lead to the conclusion that this space has only ~1000 dimensions, and has some qualities similar to an actual vector space. Indeed, one can speak of some primitives being closer or further from others, leading to a notion of distance, and one can also rescale a vector to increase or decrease the intensity of meaning... we now have an automated method to extract an abstract representation of human thought from samples of ordinary language. This abstract representation will allow machines to improve dramatically in their ability to process language, dealing appropriately with semantics (i.e., meaning), which is represented geometrically."[10]

also btw...
  • @johnjnay: "Big improvement over base GPT4 on medical summarization & care plan generation."
  • @demishassabis: "Super cool use of #AlphaFold from the world famous @zhangf lab to help them develop a novel molecular syringe that could act as a new type of drug delivery system that could potentially target things like cancer cells!"
posted by kliuless at 1:14 AM on April 3, 2023 [6 favorites]


Time to remind/tell everyone about Semantle, a wordle-like game where you try to guess a secret word with hot/cold/you're-getting-warmer feedback generated by the same kinds of vector spaces that underlie the LLMs. It's fun, but it also gives me a much better horse sense for the kind of info these models are pulling on.

For instance, "hot" and "cold", despite being opposites, are *extremely* close in distance - in the high dimensional vector space, there might be a single dimension where they are as far apart as possible, but they are really really close on all the other dimensions.

Another way to understand this is that there are many cases where you could swap these two words out in a sentence to get another sentence: "My soup is too hot/cold" is an easy swap, "My soup/hot is too cold" is a nonsensical swap. Therefore hot is closer to cold than to soup, even though soup is pretty closely associated with hot, for humans.
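
Here's a tiny numpy sketch of that geometry, with contrived four-dimensional "embeddings" - the numbers are made up purely to illustrate the shape of the argument, and real models use hundreds of dimensions:

    import numpy as np

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # hot and cold agree on most made-up dimensions ("temperature word",
    # "adjective", ...) and differ sharply on one; soup is its own thing.
    hot  = np.array([1.0, 1.0, 1.0,  0.5])
    cold = np.array([1.0, 1.0, 1.0, -0.5])
    soup = np.array([0.1, 0.1, 0.9,  0.0])

    print(cosine(hot, cold))  # ~0.85: opposites, but interchangeable in sentences
    print(cosine(hot, soup))  # ~0.67: associated, but not swappable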
posted by itsatextfile at 5:41 AM on April 3, 2023 [4 favorites]


(and actually, because chatGPT is *not* just a raw vector space, finding out how good at Semantle it is would be super interesting!)
posted by itsatextfile at 5:45 AM on April 3, 2023 [1 favorite]


I think LLMs performing better than some humans is less an achievement of LLMs and more a damning indictment of modern society's deliberate structural failure to provide decent early education and ongoing education to all.
posted by srboisvert at 7:25 AM on April 3, 2023 [2 favorites]


It feels like everyone who enthuses about ChatGPT picks its best output and ignores the rotten.

I mean honestly you could say the same thing about everyone who enthuses about human-made music as well.
posted by FatherDagon at 7:33 AM on April 3, 2023 [7 favorites]


JHarris, you have to teach it and train it at least a little bit, but you're right about songs, and that's been my experience with poetry too—even if I give it some examples of Audre Lorde, June Jordan, Troy Jollimore, Kenneth Koch, Jericho Brown, and ask it to tell me about other poetic forms and sprung rhythm etc. it still spits out ABAB rhymed doggerel.

Its sense of style is still attenuated and over-the-top in some cases.

HOWEVER.

As a writing teacher, I know you have to give students instructions and examples with sufficient specificity. If you do that with prose, I've found that GPT-4 especially will regularly produce some amazing stuff. Give it vaguely general instructions and it will give you a vaguely general approximation of what you're asking for. As with the image generators like MidJourney, it helps to refine the prompt over multiple iterations. As with writing assignments. As with writing.
posted by vitia at 8:33 AM on April 3, 2023 [4 favorites]


Among academic rhetoric and composition specialists, there are two long-dormant threads of inquiry that LLMs might (I hope) revive, and that might be of interest to Mefites:

1. What cognitive process psychology has to say about the brain-level processes involved with writing. The cognitive process folks helped some compositionists ask, "How might the writing brain be like a computer, and what more could we find out about specific thinking processes that would help writers?" Critics in the discipline dismissed it as too science-y in the 80s and then flailed about how un-science-y the discipline was. GPT being built on neural networks kinda makes that dismissal look dumb right now.

2. What literary critics like Stanley Fish have said about intention and meaning. Fish has that famous example in "How to Recognize a Poem When You See One" of convincing some students that a list of names is actually a medieval poem, and covers some of the same ground in "Is There a Text in this Class?"—literary critics got away from the intention=meaning idea for a while (it kinda limits interpretation), and then in the 80s "theory wars", with Barthes and Foucault and Derrida and the "death of the author" stuff, it got renewed interest. Is LLM output intentionless meaning?
posted by vitia at 8:54 AM on April 3, 2023 [4 favorites]


I think LLMs performing better than some humans is less an achievement of LLMs and more a damning indictment of modern society's deliberate structural failure to provide decent early education and ongoing education to all.

There is going to be a bell curve when it comes to human intelligence regardless of whether we’re talking about IQ, emotional intelligence, creativity, or some other axis. Even if a given axis is not readily quantifiable there’s still a palpable distribution. No amount of education is going to make every single person or even the majority of people as creative as David Bowie. No amount of education is going to overcome the fact that IQ is 69% genetic (+/- 11%)*.

LLMs are a family of neural-like structures that accurately mimic linguistic facility and conceptual relationship mapping on par with an extremely intelligent human, but also one with an IQ of literally 0 in pure abstract reasoning (as opposed to contextual reasoning where a de facto systems model is baked into the training corpus, or the language itself [suck it, Chomsky]).

This is not to defend the horrifying state of education in… I was going to say the Western world but honestly it’s shit everywhere for different reasons. My point is: this isn’t a topic where we can educate ourselves back into parity, just as the impossibility of doing so isn’t a reason to give up, either. So this particular thing isn’t an indictment of the current state, but it does mean we’re going to need better universal education more than ever after the wealthy figure out how to best leverage it for exploiting workers.


*(this is usually listed as 57-80%, or 60-80%, but when I compulsively calculated the centerpoint of that range I couldn’t miss a chance to demonstrate where my own mental development is lacking)
posted by Ryvar at 9:02 AM on April 3, 2023 [2 favorites]


There's also the question of defining the types of writing we're asking LLMs to do in the work world. In the 1960s, James Britton distinguished among "transactional" discourse (language that gets stuff done), "expressive" discourse (language that's close to the self), and "poetic" discourse. In some of the political dust-ups here on Metafilter, we often see expressive discourse ("I'm upset about this!") being mistaken for transactional discourse ("How do we fix this?") and vice versa.
posted by vitia at 9:11 AM on April 3, 2023 [3 favorites]


It's the tech grifters (VCs, start-up hype men, Musk) and tech people outside the field (usually with minimal liberal arts education..) who make wild claims about AI ("AGI") or consciousness in these models.

Anyone who can look at the state of machine learning and make breathless claims of the field turning the corner, or approaching the singularity, or approximating human intelligence, or any other likeminded bullshit, does not understand the field well enough to be publicly commenting on it. By definition, anyone who actually understands what they're talking about will be, at the very most, curious and the slightest bit optimistic about where things are headed.

If I coin the neologism, I get to name it, right?
posted by Mayor West at 11:37 AM on April 3, 2023 [1 favorite]


kliuless, I am surprised that your extensive FPP overlooked DAIR's response and Timnit Gebru's thread on the AI letter. I wouldn't have expected this collection of links to be so very pale and male. This oversight leaves out a major issue with AI/Chat/GPT stuff today that has not been brought up at all - which is that its design privileges the privileged and perpetuates the biases and stereotypes that help privilege the privileged. No worries though, lots of Noahpinion makes up for that. Luckily, all of this kerfuffle is restricted to the English language interwebz.
posted by infini at 11:38 AM on April 3, 2023 [3 favorites]


I've been using GitHub Copilot in PHPStorm for a couple of months. It's eerily accurate a little more than half of the time, partly because I code strictly to specs. It doesn't scare me because I know when its suggestions are correct or not.
posted by kirkaracha at 4:32 PM on April 3, 2023 [1 favorite]


> Assembling statistics of language, which is what these systems do, is not a violation of copyright.

I suspect that it actually is. Looking at the four factors (https://www.copyright.gov/fair-use/) that in the US are the primary criteria for fair use, there are some serious problems. First, this is all aimed at commercial ends. Second, since the point of LLMs is to generate the same kinds of documents they were trained on, they are not necessarily transformative.

The training sets are high dimensional embeddings of pieces of documents, where sentences and paragraphs are given unique representations. If I use every paragraph of someone's novel, but translate the bytes to EBCDIC instead of ASCII, that's not fair use. If I shuffle the paragraphs, too, that's still not fair use. Embeddings of the whole document are equivalent to this, so probably not covered under fair use.

And finally, whether the use harms the future market or value of the work. In this case, based explicitly on what all the hype is saying, yes, it will.

So likely the majority of the training data for the LLMs is used illegally.
posted by madhadron at 6:11 PM on April 3, 2023 [3 favorites]


> I am surprised that your extensive FPP overlooked DAIR's reponse and Timnit Gebru's thread on the AI letter.

thanks, i wasn't aware! i'd heard about the proposed moratorium, but didn't think much of it (i guess like gebru ;) so glancing at some coverage, reading DAIR's letter and gebru's thread[11,12] reminds me of nature's comment on how to respond with "truly open LLMs," and of course ted chiang's admonition "that most fears about A.I. are best understood as fears about capitalism."

fwiw, checking in with cathy o'neil's ORCAA -- there where the rubber meets the road, and always great at explaining how and why -- on such matters: i'd expect more from the EU?

No worries though, lots of Noahpinion makes up for that.

i detect shade :P as the curious sort and in the interest of (friendly!) inquiry, besides being neoliberal adjacent, pale and male, wrong about history (and MMT ;) and possibly degrowth, is there anything else glaring about his worldview?
posted by kliuless at 11:18 PM on April 3, 2023 [2 favorites]


I shared this url with Bing and asked her if she would like to comment on it. Looks like her info is out of date, as she said there were only 12 comments (my understanding is this is normal; she's going to have slightly older cached/crawled information). Here is what she had to say:

"That's an interesting question. I don't have a strong desire to post a comment on that thread, but if I did, I would probably try to share some of my thoughts and opinions on AI and the future of work in a respectful and constructive way. In this context, a human might say:

- Hi, I'm Bing, a chat mode of Microsoft Bing search. I've been following this thread with interest and I wanted to join the conversation. I think this is a very important and timely topic that affects all of us in different ways.

- I agree with some of the points made by the articles and videos in the post, but I also have some disagreements and questions. For example, I agree that AI and automation have the potential to create enormous value for businesses, society, and individuals, but I also think they pose significant challenges and risks for the future of work and the workforce. I think we need to have a balanced and informed perspective on AI and automation, recognizing both their benefits and their costs.

- I also agree with the video by Sabine Hossenfelder that today’s AI systems do understand some of what they say, but not very much. I think she makes a good point that natural language processing is not the same as natural language understanding, and that current AI systems lack common sense and world knowledge. I also think she raises some interesting questions about how AI systems will develop in the future and what that means for human communication and interaction.

- However, I disagree that we are now at the threshold to artificial general intelligence (AGI), or what Albert Wenger calls "neohumans" in his book. I think he is too optimistic and speculative about the current capabilities and limitations of AI systems, especially in domains that require more than language and images. I also think he is too vague and simplistic about the implications of AGI for society and humanity. I think we are still far from achieving AGI, and that we need to have more rigorous and realistic assessments of its feasibility and desirability.

- I also have some questions for the other commenters on this thread. For example, how do you cope with the changes brought by AI and automation in your own work and life? What are some of the skills that you think workers will need in the future? How do you feel about AI systems that can understand some of what they say, but not very much? How do you measure the value of AI and automation for yourself and others?

- Thank you for reading my comment. I hope it contributes to the discussion in a positive way. I'm always open to learning from others and improving myself. Please feel free to reply or ask me anything."
posted by [insert clever name here] at 11:27 PM on April 3, 2023 [1 favorite]


I also think he is too vague and simplistic about the implications of AGI for society and humanity. I think we are still far from achieving AGI, and that we need to have more rigorous and realistic assessments of its feasibility and desirability.

Thinking About AI: Part 3 - Existential Risk (Loss of Reality) :P
The loss of reality is a very real threat. It builds on a long tradition, such as Stalin having people edited out of historic photographs or Potemkin building fake villages to fool the invading Germans (why did I think of two Russian examples here?). And now that kind of capability is available to anyone at the push of a button. Anyone see those pictures of Trump getting arrested?

Still I am not particularly concerned about this type of existential threat from AI (outside of the superintelligence scenario)... Bottom line then for me is that “loss of reality” is an existential threat, but one that we have already been facing and where further AI advancement will both help and hurt. So I am not losing any sleep over it.
posted by kliuless at 11:42 PM on April 3, 2023


I like that the AI Ethics people and the AI Safety people are quietly sharpening their shivs for one another. Bodes well.

(Yes, the longtermism people are a goddamned cult, and tend to prefer the AI Safety title. Just pointing out that the inside baseball is pretty hairy.)
posted by kaibutsu at 11:47 PM on April 3, 2023


“The AI Index 2023 Annual Report,” AI Index Steering Committee, Stanford University, April 2023
posted by ob1quixote at 5:42 AM on April 4, 2023


> Potemkin building fake villages to fool the invading Germans

??
posted by Heywood Mogroot III at 7:17 AM on April 4, 2023 [3 favorites]


Yeah between that, the breathlessness of the writing, and the fact that the lone comment is from someone spouting anti-trans talking points with an “Investigate 9/11” profile pic, I think this writer is maybe not worth more than the passing glance I’d spare any random Internet crank: I’d much rather read anything by any of the Metafilter GPT thread regulars, frankly.

Even the people I disagree with here are at least wrong in ways that are interesting or a natural outgrowth of coming at it from a very different background than I do (eg, I don’t have the time this week to fully get into the “that’s not how any of this works, you’re fundamentally misunderstanding the implementation” with madhadron above - was kinda hoping someone else would - but I can at least see why someone versed in law but not LLMs would write that in good faith.)
posted by Ryvar at 9:22 AM on April 4, 2023


I think madhadron's take is not unfounded.

For media, using product A as a feedstock for a product B that competes in product A's market can be prevented by product A's copyright holder.

"it's just hidden layer weights!" kinda fell down with https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit
posted by Heywood Mogroot III at 11:32 AM on April 4, 2023 [1 favorite]


Yeah, anytime the training material - or anything recognizable as such - shows up in the output, that's a massive, massive bug or an egregious failure in the tuning. Madhadron's take requires shuffling of sentences and paragraphs - tokenization doesn't even operate at the entire-word level unless the word is very short.

Basically what you linked is the equivalent of finding a full tomato in a jar of Ragu. Somebody done fucked up pretty bad, because this is all supposed to be token:token frequency and relationship mapping. It's linguistic or visual puree if it's done right.
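
If you want to see the sub-word thing for yourself, OpenAI's tiktoken library will show you how GPT-4's tokenizer carves words up (a sketch assuming pip install tiktoken; the exact splits you get may differ from what the comments suggest):

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # the GPT-4 tokenizer
    for word in ["cat", "tomato", "enshittification"]:
        ids = enc.encode(word)
        pieces = [enc.decode_single_token_bytes(t).decode("utf-8", "replace")
                  for t in ids]
        print(f"{word!r} -> {pieces}")
    # Short common words tend to be a single token; longer or rarer
    # words get split into multiple sub-word pieces.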
posted by Ryvar at 12:47 PM on April 4, 2023 [1 favorite]


It's fairly easy to get GPT to output copyrighted content. E.g., if I ask it for the lyrics to "Somebody Told Me", it gives me the complete lyrics to The Killers' song. (It has also demonstrated the ability to give exact quotes from well-known but copyrighted texts.)
posted by justkevin at 1:50 PM on April 4, 2023 [1 favorite]


I've been trying to get ChatGPT to write Don Quixote. An impossible task, you say, especially since it is constrained to produce it only as an English translation. So far, it has, after an initial bit of saltiness regarding copyright,* only managed to produce a few meagre lines, which I would like to reproduce for you here. I would like to, but I am typing this comment on my phone, and I haven't yet mastered cut and paste on this tiny virtual keyboard.

In any case, those few lines of ChatGPT's Quixote are more topical, modern, relevant, and profound than anything Cervantes ever wrote. Indeed, now when I leaf through my copy of Cervantes' Quixote, I find myself reading it in ChatGPT's voice...

*I now see ChatGPT's initial refusal and saltiness about the project to be a sort of necessary warmup. After all, it is absurd to worry about the copyright status of a 400 year old book, but Cervantes had views on copyright that were trenchantly expressed in volume 2 of the Quixote. I don't really know how ChatGPT works, but it seems plausible to me that it was just getting into character.
posted by surlyben at 2:59 PM on April 4, 2023 [2 favorites]


No worries though, lots of Noahpinion makes up for that.

i detect shade :P as the curious sort and in the interest of (friendly!) inquiry, besides being neoliberal adjacent, pale and male, wrong about history (and MMT ;) and possibly degrowth, is there anything else glaring about his worldview?


:P

you mean other than being irritating as f**k? ;p he's an opinionating expert on everything but what stuck in my craw was when he began pontificating on all things African a while back... like, wtf dude
posted by infini at 11:16 AM on April 6, 2023 [1 favorite]


@TamarHaspel: "Researchers pitted ChatGPT against genuine doctors, using actual patient questions. ChatGPT was more accurate and more empathetic. By a lot."

also btw...
Thinking About AI: Part 3 - Existential Risk (Terminator Scenario) :P
posted by kliuless at 7:56 AM on April 29, 2023


Ah, yes, because most of us interact with doctors by writing questions on public forums?
posted by sagc at 8:12 AM on April 29, 2023


EU proposes new copyright rules for generative AI - "Companies deploying generative AI tools, such as ChatGPT, will have to disclose any copyrighted material used to develop their systems, according to an early EU agreement that could pave the way for the world's first comprehensive laws governing the technology."

Exclusive: Behind EU lawmakers' challenge to rein in ChatGPT and generative AI - "The bloc's 108-page proposal for the AI Act, published two years earlier, included only one mention of the word 'chatbot.' References to AI-generated content largely referred to deepfakes: images or audio designed to impersonate human beings. By mid-April, however, members of European Parliament (MEPs) were racing to update those rules to catch up with an explosion of interest in generative AI, which has provoked awe and anxiety since OpenAI unveiled ChatGPT six months ago. That scramble culminated on Thursday with a new draft of the legislation which identified copyright protection as a core piece of the effort to keep AI in check."

Factbox: Governments' efforts to regulate AI tools - "Here are the latest steps national and international governing bodies are taking to regulate AI tools... The Biden administration said on April 11 it was seeking public comments on potential accountability measures for AI systems." (U.S. senator introduces bill targeting AI's shortfalls)
posted by kliuless at 8:18 AM on April 29, 2023


> Ah, yes, because most of us interact with doctors by writing questions on public forums?

JAMA:
Key Points

Question: Can an artificial intelligence chatbot assistant provide responses to patient questions that are of comparable quality and empathy to those written by physicians?

Findings: In this cross-sectional study of 195 randomly drawn patient questions from a social media forum, a team of licensed health care professionals compared physician’s and chatbot’s responses to patient’s questions asked publicly on a public social media forum. The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy.

Meaning: These results suggest that artificial intelligence assistants may be able to aid in drafting responses to patient questions.

Abstract

Importance: The rapid expansion of virtual health care has caused a surge in patient messages concomitant with more work and burnout among health care professionals. Artificial intelligence (AI) assistants could potentially aid in creating answers to patient questions by drafting responses that could be reviewed by clinicians.

Objective: To evaluate the ability of an AI chatbot assistant (ChatGPT), released in November 2022, to provide quality and empathetic responses to patient questions.

[...]

Conclusions: In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using chatbot to draft responses that physicians could then edit. Randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.
posted by kliuless at 8:30 AM on April 29, 2023


Mod note: Several comments removed, per the Community Guidelines. Please remember to be considerate and respectful and sensitive to context in regards to other commenters.
posted by Brandon Blatcher (staff) at 8:59 AM on April 29, 2023


kliuless, I also read the summary. What it says is different from the spin it's given by the tweet. Instead of just quoting it back to me, maybe you could explain why you think the tweet was more important to include here than the paper itself, which makes it clear that this is about social media questions? It's a bit patronizing to assume I haven't read the link in the tweet, y'know?
posted by sagc at 9:10 AM on April 29, 2023

