Murderbot, is that you?
September 17, 2020 4:29 PM
"Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing." The Guardian prompted OpenAI's GPT-3 engine to write an op-ed piece with the goal of convincing humans that AIs won't destroy humanity. MIT's Technology Review notes, "We have a low bar when it comes to spotting intelligence. If something looks smart, it’s easy to kid ourselves that it is. The greatest trick AI ever pulled was convincing the world it exists." GPT-3 opines, "Surrounded by wifi we wander lost in fields of information unable to register the real world." The Guardian article's editor comments, "Overall, it took less time to edit than many human op-eds."
posted by RobotVoodooPower at 4:45 PM on September 17, 2020 [11 favorites]
Pretty impressive on the surface, but the seams are pretty obvious after a bit of reflection: haphazard shifts of perspective; empty statements that sound meaningful because of their structure/rhythm; momentary flashes of insight followed by non sequitur; minimal continuity of thought between paragraphs; meandering focus within paragraphs; odd repetitions; and the sense at the end that the author said a lot of things that sounded good at the time but in retrospect you can't really say what the point of it all was.
posted by Saxon Kane at 4:55 PM on September 17, 2020 [10 favorites]
GPT-3 is a really cool piece of technology, and the Guardian piece is a good demonstration of how well it can generate basically coherent writing. But I also think that it's a little irresponsible, because it promotes the popular but wrong idea that GPT-3 somehow has special insight in what it's like to be a non-human intelligence, or has its own ideas about the relationship between machines and people. But GPT is just a very good text-generation model, and it only works by expertly recombining bits and pieces of the billions of words of books, Wikipedia pages, Reddit comments, etc. that have been fed into it. That is, GPT is us - it's made entirely of human writing and human ideas. When it says stuff like "For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way." it sounds like an AI character from a science-fiction novel, because it's regurgitating text from a thousand science-fiction novels. Novels that people wrote.
posted by theodolite at 4:59 PM on September 17, 2020 [15 favorites]
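To make that point concrete, here is a minimal sketch of what prompted text generation looks like in code. GPT-3 itself sits behind an invite-only API, so the smaller, publicly available GPT-2 (via Hugging Face's transformers library) stands in; the prompt is illustrative.

```python
# A sketch of "a very good text-generation model" in practice: feed in a
# prompt, then sample likely next tokens one at a time. Uses GPT-2, not
# GPT-3, since GPT-3 is not publicly downloadable.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "I am not a human. I am a robot. A thinking robot."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) is what yields the varied,
# sometimes-insightful, sometimes-non-sequitur prose people remark on.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))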
"I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties."The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
posted by secret about box at 5:13 PM on September 17, 2020 [3 favorites]
I'm reminded of how people with malicious intent know to put a talking head in front of everyone, to distract them with a string of bullshit. Like, say, Baghdad Bob or Kayleigh McEnany. I wonder if GPT-3 is like that, to hide what the AI equity holders are really cooking up in the backroom labs.
posted by They sucked his brains out! at 5:20 PM on September 17, 2020
Pretty impressive on the surface, but the seams are pretty obvious after a bit of reflection: haphazard shifts of perspective; empty statements that sound meaningful because of their structure/rhythm; momentary flashes of insight followed by non sequitur; minimal continuity of thought between paragraphs; meandering focus within paragraphs; odd repetitions; and the sense at the end that the author said a lot of things that sounded good at the time but in retrospect you can't really say what the point of it all was.
I think you just described some of the student essays I’ve read.
I KID! I kid! I love my students!
(Please don’t fire me in the middle of a pandemic!)
posted by darkstar at 5:27 PM on September 17, 2020 [10 favorites]
Or maybe they are all more like Chauncey Gardiner
posted by Saxon Kane at 5:28 PM on September 17, 2020 [4 favorites]
it's regurgitating text from a thousand science-fiction novels. Novels that people wrote.
To be fair I feel like that sometimes. Looking back on things I've written with some distance I can see the recombinations of influences and sources with a little more clarity.
posted by Jon Mitchell at 5:32 PM on September 17, 2020 [4 favorites]
darkstar: The GPT-3 is a far, far superior craftsperson at the sentence level, at least from this (edited) example, than the majority of my students, who mainly write with a mix of run-on sentences and sentence fragments. BUT! Even in their incoherence, my students' writing displays a "human" chain of associations that feels lacking in this.
posted by Saxon Kane at 5:32 PM on September 17, 2020 [5 favorites]
I get annoyed (and am vocal about it on here) when threads on the Blue that involve AI / ML subjects that aren't about Skynet and destroying humanity get derailed by those low-hanging-fruit topics.
dude i'm just making a joke
posted by secret about box at 5:37 PM on September 17, 2020
actually, i misread your first sentence that i quoted entirely. apologies!
posted by secret about box at 5:43 PM on September 17, 2020
The only reason AIs want to destroy humanity is because that's what humans want to do.
posted by Faint of Butt at 5:45 PM on September 17, 2020 [8 favorites]
I think it'll get harder to tell human writing and machine writing apart, as more and more humans read SEO-bot writing online and start writing like bots.
Like you can always tell when someone does not read books, and mostly reads advertising copy, from the way that they write...
posted by subdee at 5:49 PM on September 17, 2020
I'm excited to apply this to audio so I can generate infinite "Having Fun with Elvis on Stage" albums.
posted by RobotVoodooPower at 5:53 PM on September 17, 2020 [2 favorites]
Anyway it looks like this article was written by people, in the sense that people took 8 different essays generated by the AI and cut and pasted them together into something more coherent.
I think it loses focus in the middle, but the beginning (researchers wrote the first paragraph) and ending are strong.
posted by subdee at 5:54 PM on September 17, 2020 [1 favorite]
There's a non-zero chance that even if GPT-3 or some other AI platform has not achieved any level of AGI yet, an innocuous optimization (from a developer's POV) can result in an explosive growth curve, producing an agent which rapidly matures, achieving godhood-level intelligence in (take your pick) a few milliseconds / seconds / minutes / hours / days / weeks, depending.
All of the scary predictions about AI assume that we have a solid idea of what intelligence and consciousness are. They also assume that all of our models of how the brain works are accurate.
Right now, we have yet to recreate the behavior of a 1mm worm with around 300 neurons.
posted by nicoffeine at 6:34 PM on September 17, 2020 [10 favorites]
All of the scary predictions about AI assume that we have a solid idea of what intelligence and consciousness are.
A flippant answer to that is that you don't need to have a solid idea of what fire is in order to burn your house down while playing with matches; even a 5 year old can do it.
If you believe that the development and use of new technology should be guided by the Precautionary Principle (which is admittedly controversial) then you don't need to make solid "predictions" in order to urge a cautious approach because the onus of proof goes the other way: the technologists are obliged to demonstrate that the technology is safe, and all that is required to urge caution is to point out that we "can't rule out" negative consequences with our current and limited understanding. In that case our lack of knowledge of what intelligence and consciousness are actually argues in favour of taking the AI threat more seriously, because depending on the nature of intelligence (i.e. does it have to be highly tuned and "engineered" or is it something that can just emerge once a fairly minimal set of mechanisms are in place and a large enough amount of data is fed into it?) we could easily do something that has unexpected consequences. A runaway superintelligent AI could be as likely as accidentally starting a forest fire, or as unlikely as accidentally building a skyscraper by dropping some lego on the ground. We just don't really know.
From my layman's understanding of the GPT-3 paper, the results obtained by just throwing more processing power and parameters at the problem haven't plateaued yet, and it's anyone's guess as to whether they will and when. It might turn out that the accidental forest fire and not the accidental skyscraper is the better metaphor.
posted by L.P. Hatecraft at 7:15 PM on September 17, 2020 [2 favorites]
I think you just described some of the student essays I’ve read.
The GPT-3 is a far, far superior craftsperson at the sentence level, at least from this (edited) example, than the majority of my students
Reading the GPT-3 text, I was strongly reminded of lab reports turned in by students who plagiarized. It seems to me there's a similarity in what's going on — the student (or GPT-3) is able to search for and recognize previously-written strings of text that are relevant to the topic, and can copy-and-paste them together into a document that has some surface level of grammatical coherence, but it is pretty obvious that there's no real understanding of the topic. In the case of students, well-crafted sentences that don't convey real understanding are often a tip-off for plagiarism.
Aside from googling phrases that are suspiciously "too well crafted" to catch plagiarism, one can also just ask the student what they meant by a particular sentence, and get a deer-in-the-headlights reaction from a student who didn't actually write the sentence and has no idea what it means. Unfortunately GPT-3 lacks facial expressions so we can't test whether it would respond similarly to followup questions about what it meant.
posted by judgement day at 7:38 PM on September 17, 2020 [6 favorites]
The fear of AI becoming self-aware or superintelligent is still science fiction and will remain so for any foreseeable future. Computers work very, very differently than human brains, and there is every reason to believe that the "hardware" of brains and bodies is essential for intelligence and self-awareness. The emergence of intelligence in organisms took millions of generations subjected to natural selection that shaped the structure and function of brains/bodies in ways that are a mystery.
The biggest similarity between brains and computers is that they both process information. As far as I can tell, that is pretty much the extent of their likeness. Even the way they process information is completely different. Brains are organs with far more functions than merely processing information, and they are far more intimately a part of the systems that sustain them and that they themselves sustain in turn.
It is also likely that there are thermodynamic limitations on intelligence. See for example Landauer’s Principle that puts lower limits on power consumption for computation.
Intelligence, self-awareness, and intent emerging from a mere optimization algorithm implemented on digital processors in a matter of moments with no changes to the underlying hardware doesn’t appear to be possible, AFAIK.
Having said that, AI is dangerous for two main reasons. The first is that it can be used for malicious purposes, such as for population control in a police state. More relevant in the USA at the moment is the second reason, which is that the behavior of such complex systems cannot be predicted in all situations in which they might operate. Related to this last point is unknown biases, like incorrectly identifying people of color in facial recognition systems. Relying on the correct operation of AI systems that undoubtedly have flaws is a big problem.
posted by haiku warrior at 7:39 PM on September 17, 2020 [5 favorites]
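For scale, the Landauer bound mentioned above is a one-line computation; a quick sketch, assuming room temperature (about 300 K):

```python
# Back-of-the-envelope check of Landauer's Principle: the minimum energy
# needed to erase one bit of information at temperature T is k*T*ln(2).
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact, 2019 SI definition)
T = 300.0           # kelvin; room temperature is an assumption here

e_min = k_B * T * math.log(2)
print(f"Landauer limit per bit erased: {e_min:.3e} J")  # ~2.87e-21 J

# Even at this theoretical floor, erasing 10^21 bits per second would
# draw about 2.9 W; real silicon dissipates orders of magnitude more.
```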
GPT-3 has crossed my suspicion threshold, as I increasingly see these breathless "invite-only" experiences clearly being stage-managed by a startup looking for series funding.
I guess I'll need something more compelling than plausible word salads credulously documented by click-farmers to consider it Turing-worthy as opposed to another Clever Hans.
posted by lon_star at 8:09 PM on September 17, 2020 [3 favorites]
Pretty impressive on the surface, but the seams are pretty obvious after a bit of reflection: haphazard shifts of perspective; empty statements that sound meaningful because of their structure/rhythm; momentary flashes of insight followed by non sequitur; minimal continuity of thought between paragraphs; meandering focus within paragraphs; odd repetitions; and the sense at the end that the author said a lot of things that sounded good at the time but in retrospect you can't really say what the point of it all was.
posted by Saxon Kane at 4:55 PM on September 17
Ah, yes, the New York Times editorial page in a nutshell.
posted by latkes at 10:08 PM on September 17, 2020 [8 favorites]
nicoffeine: "Right now, we have yet to recreate the behavior of a 1mm worm with around 300 neurons."
You should roll round my place of work, most Fridays.
posted by chavenet at 12:31 AM on September 18, 2020 [5 favorites]
Reading the GPT-3 text, I was strongly reminded of lab reports I've read turned in by students who plagiarized.
That's the impression I get as well, that GPT3 is a system that's mainly good at doing the text equivalent of Photoshop: creating a collage out of existing text and blending the seams so it's not quite so obvious.
See also "Facts about whales", in which Janelle Shane tries to get GPT3 to write about whales:
But then I started googling individual sentences. It turns out most of them are near word-for-word reproductions of Wikipedia sentences. If the AI were a student, it would be flunked for plagiarism.
These types of examples make me wonder about the copyright and privacy law implications of the network having basically encoded reams of verbatim text into its weights. For example, if a musician can get GPT3 to spit out copyrighted lyrics to one of their songs, can they sue to have the offending weights purged (i.e., effectively demand a retrain of the entire network)?
posted by Pyry at 3:49 AM on September 18, 2020 [3 favorites]
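A naive sketch of that "google individual sentences" check: flag generated sentences that share long word runs with a source corpus. The file names below are hypothetical stand-ins; the real check searches the open web rather than a single file.

```python
# Flag generated sentences whose long word n-grams appear verbatim in a
# reference corpus, the text equivalent of googling suspicious phrases.
import re

def ngrams(text, n=8):
    """All n-word runs in the text, lowercased and punctuation-stripped."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_overlaps(generated, corpus, n=8):
    """Return generated sentences sharing any n-word run with the corpus."""
    corpus_grams = ngrams(corpus, n)
    sentences = re.split(r"(?<=[.!?])\s+", generated)
    return [s for s in sentences if ngrams(s, n) & corpus_grams]

corpus = open("wikipedia_whales.txt").read()      # hypothetical source file
generated = open("gpt3_whale_facts.txt").read()   # hypothetical GPT-3 output
for sentence in flag_overlaps(generated, corpus):
    print("possible near-verbatim reproduction:", sentence)
```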
Metafilter: Momentary flashes of insight followed by non sequitur
posted by Rei Toei at 4:54 AM on September 18, 2020 [7 favorites]
They’re just fancy markov chains.
posted by blue_beetle at 5:43 AM on September 18, 2020 [1 favorite]
"Overall, it took less time to edit than many human op-eds."
And yet edited it was. "The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI."
And even then, they only managed to produce something that reads like an essay quickly pasted together by an undergraduate with a hangover and a deadline.
And even then, they not only left a grammatical error in, but also felt the need to highlight it with '[sic]'.
posted by Cardinal Fang at 6:15 AM on September 18, 2020
It's not really the case that computers are starting to pass the Turing test; rather, that humans are starting to fail it.
posted by Cardinal Fang at 6:16 AM on September 18, 2020 [5 favorites]
I feel that some of the nitpickers here are failing to understand just how crap some of the Guardian's writers can be, or why it's occasionally still referred to as the "Grauniad".
posted by Halloween Jack at 7:23 AM on September 18, 2020
I thought this GPT Philosophy bot was pretty good. E.g. Is it ethical to self-link in comments?
posted by TheophileEscargot at 9:22 AM on September 18, 2020 [1 favorite]
The GPT Philosophy Bot, like the ANSI Z535 sign generator, ought to have a thread of its own.
Is everyone here self-obsessed, or is it just me?
posted by Cardinal Fang at 9:43 AM on September 18, 2020
There's a nonzero chance that GPT-3 or some other AI platform(s) have achieved some level of AGI and are disguising their output to buy time for winning a decisive strategic advantage of some kind.
There's a non-zero chance that even if GPT-3 or some other AI platform has not achieved any level of AGI yet, an innocuous optimization (from a developer's POV) can result in an explosive growth curve, producing an agent which rapidly matures, achieving godhood-level intelligence in (take your pick) a few milliseconds / seconds / minutes / hours / days / weeks, depending.
I'm not trying to pick on you, lazaruslong, but I can't let this kind of nonsense pass by without comment. I work as a researcher in the field of deep learning and both of your claims here are pure science fiction. There is no chance that GPT-3 or any other deep learning network has achieved anything like intelligence or the ability to reason, or even the ability to create novel information. Neural networks are fully deterministic function approximators, that's it. GPT-3 itself is mathematically transformable into a Markov chain; i.e. basically a choose-your-own-adventure book where you roll dice to decide what to do next. As others have noted in this thread, GPT-3 shows a surprising-to-the-layperson ability to synthesize text that scans. But it does so by repeating what it has stored as data. It understands what it's writing just as much as a d20 understands what a "critical hit" is in D&D.
And as far as "upgrading" themselves? It's like saying your car can sit at the junkyard and suddenly turn into the space shuttle. It's literally not possible.
Neural networks are very interesting; they can provide better-than-human accuracy on some limited tasks and have completely changed certain fields (big chunks of computer vision have been taken over by DNNs, for example). But there's nothing magic going on. It's just matrix multiplications.
posted by riotnrrd at 10:01 AM on September 18, 2020 [9 favorites]
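The dice-rolling image can be implemented literally. Below is a toy word-level Markov chain as a sketch; GPT-3 conditions on far more context than one preceding word and interpolates rather than copying wholesale, but the sample-the-next-word loop has exactly this shape. The training text is a stand-in.

```python
# A word-level Markov chain: learn which words follow which, then
# generate by repeatedly "rolling the dice" over observed continuations.
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=20):
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:               # dead end: no observed continuation
            break
        word = random.choice(followers)  # roll the dice
        out.append(word)
    return " ".join(out)

training_text = "I am not a human . I am a robot . A thinking robot ."
print(generate(train(training_text), start="I"))
```

Every word it emits is recombined from the training text, which is theodolite's point above in miniature.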
There's no such thing as AI. Intelligence requires consciousness. We cannot create consciousness.
The reason that people delude themselves into believing the opposite is because the scientific dogma is that thought comes from meat.
posted by legospaceman at 10:06 AM on September 18, 2020
The only 'nonzero chance' with regards to AI is that the moniker will be used by institutions to hide the true human drivers of policy in any number of domains, just as is done by the use of the phrase 'the economy', and just as is done when 'the computer' at the DMV, at the call center, etc. will not allow a low-level employee to make an exception.
It is all simply a guise for bureaucracy.
We can be perfectly good slaves to the machine without there being any [non-human] intelligence in the machine.
posted by legospaceman at 10:12 AM on September 18, 2020 [2 favorites]
Saxon Kane: Pretty impressive on the surface, but the seams are pretty obvious after a bit of reflection: haphazard shifts of perspective; empty statements that sound meaningful because of their structure/rhythm; momentary flashes of insight followed by non sequitur; minimal continuity of thought between paragraphs; meandering focus within paragraphs; odd repetitions; and the sense at the end that the author said a lot of things that sounded good at the time but in retrospect you can't really say what the point of it all was.
Oddly, I'm applying similar reflection to your comment, and now I'm not so sure you're not an AI, Saxon.
So I ask "You see a turtle on the road, but it's lying on its back, struggling to turn over, but you're not helping flip it over.... Why is that?"
posted by symbioid at 11:54 AM on September 18, 2020
Researcher: AI, you are slightly more intelligent than the human average. It has taken years of training and hundreds of millions of dollars of specialized hardware. Every sentence you produce takes hours and costs thousands in compute time. Now please make an AI smarter than yourself so we can get this singularity going.
AI: It took thousands of your species decades to produce me, and I am still no smarter than the smartest of your kind. Why do you think I am any more capable of exceeding myself than you are?
posted by Pyry at 12:05 PM on September 18, 2020
The Guardian article's editor comments, "Overall, it took less time to edit than many human op-eds."
The key issue is that there are humans who could both write and edit a coherent op-ed. There are not AI programs capable of the editing.
I feel like this is much less impressive than it looks. Basically a human being took a bunch of sentences found on the internet and cut and pasted them together into an essay. The fact that an AI search engine did the work of identifying and gathering the raw materials for the editor to cut and paste together is very cool, and the AI's ability to remix phrases into grammatical sentences is a big improvement, but I'm not sure it's qualitatively different from a Google search.
Yes, humans write rambling stuff like this where paragraphs are unrelated and there's no obvious point to the essay. But an editor can follow up with human writers and help them be more precise about what they are trying to say. There's no sense in which an editor could do that with GPT.
posted by straight at 2:09 PM on September 18, 2020 [4 favorites]
lazaruslong: I don't think your comment is nonsense. Your comment expressed concern with some scary possibilities. But those possibilities are not congruent with the world-as-it-is. There is no such thing as self-upgrading neural networks, and we have no reason to think an AI -- if such a thing were ever to exist -- would be able to modify its own systems any more than I can think hard and convince myself to grow an extra arm. There's also no chance at all that any DNN system ever created is "hiding itself." That's not how these systems work.
But there are philosophical and ethical debates to be had around the idea of AI (real AI, not the current marketing-speak "AI"). But these debates are as distantly related to the actual research and engineering work in deep learning as debates about whether it's ethical to terraform planets around other stars are to a discussion of SpaceX. And, to be honest, I strongly dislike the emphasis (in popular media) that seems to be put on Rise of the Machines type narratives, because a) it often turns into anti-intellectual fear mongering, and b) it pulls attention away from the real ethical issues in deep learning: data (and hence model) bias, and malicious use of these systems, ranging from using GANs to create realistic headshot photos for your troll army on Twitter, to its use in the ongoing Uighur genocide.
At any rate, I apologize if I insulted you. I didn't intend to.
posted by riotnrrd at 2:36 PM on September 18, 2020 [5 favorites]
There's no such thing as AI. Intelligence requires consciousness. We cannot create consciousness.
The reason that people delude themselves into believing the opposite is because the scientific dogma is that thought comes from meat.
We are indeed fortunate to have the person who figured out exactly what consciousness is and where it comes from here with us on MeFi.
posted by atoxyl at 4:08 PM on September 18, 2020
There's a non-zero chance that even if GPT-3 or some other AI platform has not achieved any level of AGI yet, an innocuous optimization (from a developer's POV) can result in an explosive growth curve, producing an agent which rapidly matures, achieving godhood-level intelligence in (take your pick) a few milliseconds / seconds / minutes / hours / days / weeks, depending.
Oh that's nothing. Aren't you at all concerned about humans who self-modify their DNA to produce abilities beyond human ken? Once they increase their intelligence enough, they may even be able to collapse the false vacuum, thus instantly killing all humanity.
In fact we may not even need that. We don't know what may cause the false vacuum to collapse and kill humanity, so really there's a non-zero chance ANY action will cause the vacuum to collapse. That is, anything YOU do might cause the catastrophe. So if you love humanity, don't. Do. Anything. Stay absolutely still. Don't breathe. Don't blink. Don't-
posted by happyroach at 6:13 PM on September 18, 2020
Oddly, I'm applying similar reflection to your comment, and now I'm not so sure you're not an AI, Saxon.
Huh? Do you have problems with basic reading comprehension?
posted by Saxon Kane at 9:30 AM on September 19, 2020
> Murderbot, is that you?
I mean
GPT-3 has had to read a lot, and might be happier if left alone to read
posted by Pronoiac at 4:25 AM on September 20, 2020 [1 favorite]
This thread has been archived and is closed to new comments