Capital’s willing executioner
May 5, 2023 7:59 AM   Subscribe

Ted Chiang writes on the probable implications of corporate A.I. adoption: “If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as ‘capital’s willing executioners’?”
posted by Silvery Fish (72 comments total) 53 users marked this as a favorite
 


Capitalism with fewer and fewer economic winners demands UBI and/or nationalised shared value. What else are you going to do if workers aren't structurally required or are forced down to interchangeable pieces obeying an organising computer earpiece a la Amazon?
posted by jaduncan at 8:20 AM on May 5, 2023 [9 favorites]


His argument basically seems to boil down to:

Some might say that it’s not the job of A.I. to oppose capitalism. That may be true, but it’s not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does. If we cannot come up with ways for A.I. to reduce the concentration of wealth, then I’d say it’s hard to argue that A.I. is a neutral technology, let alone a beneficial one.

...which feels like an argument against new tools altogether. Capitalism is in charge. Of course every new tool will be deployed by capitalism to its ends. That's not a property of the tool, that's a property of capitalism.

His economic analysis of the impact of computers is also somewhat flawed for being geographically bounded - I strongly suspect that if you look globally, the claim that the advent of the computer has not led to an increased median wage won't bear out.
posted by Dysk at 8:23 AM on May 5, 2023 [1 favorite]


It's worth pairing this with Chiang's earlier thoughts on AI. Relevant quote:
I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”
posted by Four String Riot at 8:29 AM on May 5, 2023 [51 favorites]


There is no AI, it doesn't exist.
posted by GallonOfAlan at 8:34 AM on May 5, 2023 [3 favorites]


Interesting that "McKinsey" has become a generic name for "anti-labor, anti-human capitalist shit-stick." Not that it's undeserved, certainly, but interesting.
posted by the sobsister at 8:36 AM on May 5, 2023 [16 favorites]


There is no AI, it doesn't exist.

That semantic boat has sailed.
posted by Thorzdad at 8:36 AM on May 5, 2023 [26 favorites]


Some might say that it’s not the job of A.I. to oppose capitalism. That may be true, but it’s not the job of A.I. to strengthen capitalism, either.

AI doesn’t have a job. People have jobs. It is not a person; it is a large, expensive set of servers owned by very rich people. Rich people will use it the way they use everything: to acquire more wealth at the expense of everything else, and to make the world a more miserable place.
posted by Artw at 8:42 AM on May 5, 2023 [20 favorites]


Theoretically, a logistical AI could charge someone what they can afford to pay for an item, based on item availability and spending-cash availability, keeping the economy running smoothly at full capacity. It would go beyond supply and demand into computing the advantage of keeping an item on the shelf versus letting it go at a discount to someone who will spend the remainder on other necessities. It would blur any lines between capitalism and socialism at the register.
posted by Brian B. at 8:47 AM on May 5, 2023 [6 favorites]
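Purely as a toy illustration of the rule Brian B. is imagining (and assuming, generously, that the system somehow knows a buyer's budget, the odds of a later full-price sale, and the knock-on value of a discounted sale, all of which are invented inputs):

```python
# Toy sketch only. Invented inputs: buyer_budget, p_sell_later (odds of a
# later full-price sale), and spillover_value (estimated knock-on value of
# the cash a discounted sale frees up for other necessities).
def offer_price(list_price, buyer_budget, p_sell_later, spillover_value):
    """Hold out for full price, or discount to what this buyer can pay?"""
    affordable = min(list_price, buyer_budget)
    value_of_discounting = affordable + spillover_value
    value_of_holding = p_sell_later * list_price
    return list_price if value_of_holding > value_of_discounting else affordable

# A $100 item and a buyer with $70:
print(offer_price(100.0, 70.0, p_sell_later=0.5, spillover_value=20.0))   # 70.0
print(offer_price(100.0, 70.0, p_sell_later=0.95, spillover_value=20.0))  # 100.0
```

The hard part, of course, is not this arithmetic but knowing those inputs at all, which is exactly the objection raised in the replies.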


Why would a large statistical model be directed towards doing that rather than stripping every last penny from your pockets?
posted by Artw at 8:48 AM on May 5, 2023 [8 favorites]


Not quite Red Plenty there. Red Just Enough?
posted by clew at 8:49 AM on May 5, 2023 [3 favorites]


Why would a large statistical model be directed towards doing that rather than stripping every last penny from your pockets?

I think Brian B.'s observation is worth sharing, and anyone who believes we can still change the world for the better may as well share ideas that are viable. The reason a "large statistical model" might be directed towards more equitable ends is because people decide that's how we want the world to work. It's possible the owner class will drive all human life to oblivion, but until then: you got any better ideas?
posted by elkevelvet at 8:55 AM on May 5, 2023 [7 favorites]


Make ownerburgers
posted by GoblinHoney at 9:04 AM on May 5, 2023 [14 favorites]


While the semantic ship may have sailed about whether or not AI is “intelligent” (I disagree that it has!), the technologies do still have significant shortcomings, many of which may not yet be apparent. Nobody’s employed this technology at scale or for long.

Whether or not corporations will be able to successfully utilize AI to replace a significant portion of their workforce also very much remains to be seen. Any corporation confidently claiming that they are going to replace X% of employees with AI is bullshitting.

Even if that happens, previous innovations in information technology haven’t resulted in an overall reduction in the size of the workforce, even as certain professions became obsolete. Whether that happens here remains to be seen, but historically it hasn’t been the case, and IT itself became a massive employment center.

AI models (particularly those that are bespoke to unique businesses) are incredibly difficult to train, operate, and validate – all of which require lots of people. It’s not a Roomba that the CEO can just plug in at night.

In the US-specific context, this is all happening amid the backdrop of the highest employment rate we’ve seen in 50 years. Despite all the recent coverage of corporate layoffs, the overall employment rate has been rapidly trending upward, pointing to an actual labor shortage.

If corporations seem eager to automate their operations, it’s not surprising, and it’s not just about cost-cutting. In many cases, they’re having difficulty finding enough people to work the jobs that they currently have. It’s not hard to imagine why AI seems like an enticing path out of that conundrum.

Interesting times may be ahead, but I do not think that any of us know how this is going to pan out.
posted by schmod at 9:14 AM on May 5, 2023 [7 favorites]


Has there been a recent technological development that has been successfully used to empower the workforce over corporations?
posted by Selena777 at 9:20 AM on May 5, 2023 [2 favorites]


The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.

Truly every last AI enthusiast reads that story the second way. The more I think about it the more comprehensively devastating a burn this is of that whole community. Just utterly owned by Chiang here.
posted by straight at 9:21 AM on May 5, 2023 [41 favorites]


This reads like someone who has yet to internalize the whole “LLMs generate large volumes of plausible text but cannot reason, bespoke models can potentially reason to a highly limited extent but only within their narrow domains” set of limitations that most of the regular AI thread participants on Metafilter have come to terms with. Just… lots of slipshod anthropomorphism and magical thinking about generalized reasoning capabilities that do not exist and are not on the horizon.

My impression from talking with old college friends who did not drop out of cognitive science (mostly Google ML these days) is that pretty much everyone working in the field was caught flat-footed by how quickly LLMs scaled up into something that really can pass for human until you start asking questions that require a moment of thinking through a problem. I'd strongly suspected we’d reach this point (successful English modeling without abstract/arbitrary systems modeling) since the late aughts, more so post-ImageNet, but previously I would’ve bet a solid chunk of change we’d have UBI by the time it finally happened.

In theory it’s just another accelerant, a cognitive prosthesis that multiplies knowledge worker productivity: higher output with the same headcount, line goes up, people who were replaced find jobs with competitors. If your particular industry is not especially demand-bound then it really just boils down to changing employers more often than before. Sucks for family stability but otherwise same shit, different day.

In practice the pace of the extra churn is performance bound on bureaucracy and reactive corporate reorgs intended to reassure Capital that they are still in control. And a lot of people are going to fall into the chasm between theory and practice there, in a society with no good answers for what lies at the bottom. In an ideal world this would be what breaks us out of our stupor, but our non-LLM deep learning is going to make the jackboots’ jobs all much, much easier and less susceptible to error than ever before.

If you’re looking for hope, you’re reading the wrong comment. A lot of people are about to get hurt, in a context where they have even less power to do anything about it than ever before. And I don’t see a path forward there short of banding together and using the same tools that just wrecked their lives, and using them better than their former employers did when those jobs were eliminated.
posted by Ryvar at 9:21 AM on May 5, 2023 [13 favorites]


I maintain the only solution is repeated, consistent, direct physical action against the holders of capital. Whether that's corporations themselves or apex shareholders doesn't really matter. It has to be more dangerous to hoard capital than to divest it, or things will continue as they are.
posted by seanmpuckett at 9:28 AM on May 5, 2023 [13 favorites]


The reason a "large statistical model" might be directed towards more equitable ends is because people decide that's how we want the world to work.

“We” do not own that and there isn’t a plausible way that we will.

It's possible the owner class will drive all human life to oblivion

Absolutely no doubt as it is a project they are already engaged in.

but until then: you got any better ideas?

[redacted][redacted] Butlerian Jihad [redacted] Marx.

But until then mainly not indulging in fairytales where this shit turns out well and not going out of our way to help the owners with this.
posted by Artw at 9:31 AM on May 5, 2023 [6 favorites]


AI doesn't kill jobs, people do?

Sure, it's a tool, and a powerful one at that. AI is neutral like guns are neutral. Capitalism is like a vigilante with a deluded sense of justice, doing right for itself. To regulate its ability to do damage, we must regulate its tools first -- culture change takes a long time.

Chiang wonders if there is "a way for A.I. to empower labor unions or worker-owned coöperatives?" Maybe there needs to be a new movement to develop AI trained only on data that is public domain or licensed. And the human curation of its input/output must not be exploitative, psychologically or in terms of pay. But you have to accumulate wealth to design, research, and run these things, so...
posted by Mister Cheese at 9:33 AM on May 5, 2023 [5 favorites]


the only solution is repeated, consistent, direct physical action against the holders of capital

As someone who would infinitely prefer beating these fuckers at their own game to flipping the table and grabbing a gun, I don’t think I can really disagree with you. I don’t even think I can ethically disagree with you which is even more disturbing. The margins are just too tight, the breathing space too narrow.
posted by Ryvar at 9:33 AM on May 5, 2023 [6 favorites]


You don't need to have AGI or indeed meaningful AI LLMs to cut out people's decisionmaking. In a lot of places, 'AI' is going to be standing in for the fact that people will start to try ML models in a much more pervasive way as part of the AI hype.
posted by jaduncan at 9:33 AM on May 5, 2023 [6 favorites]


Has there been a recent technological development that has been successfully used to empower the workforce over corporations?

An argument could be made that the home computer and the advent of remote work from home, if it can continue to be leveraged, might meet the criteria.
posted by OHenryPacey at 9:37 AM on May 5, 2023 [1 favorite]


everyone working in the field was caught flat-footed by how quickly LLMs scaled up into something that really can pass for human until you start asking questions that require a moment of thinking through a problem

It's not hard to see how you can use brute force to feed a computer a billion texts or a billion pictures and then rapidly automate comparison of the remixes it makes to a billion other texts or pictures until they resemble each other.

It's much harder to see how you could use similar brute force techniques to rapidly compare texts to ideas, pictures to real stuff in the world. Google has spent more than a decade asking us to help AI figure out what a stoplight looks like.
posted by straight at 9:39 AM on May 5, 2023 [3 favorites]


I think we are highly conditioned to use surface "polish" as a way of assessing intelligence, which is why ChatGPT seems astounding. I think that with the addition of:

a) some kind of transfer learning that keeps the underlying model but "re-trains" on gate-kept corpuses like legal judgements or technical data - at the moment it does this accidentally in fields that laymen don't write about, because the vocabulary drives it to select the right patterns, but in complex areas where laymen write falsehoods, it can lead to nonsense

b) a way of assessing the likelihood of claims

...it would be genuinely transformative. From what I know of LLMs, (a) is not that hard, but there is no clear path to (b).

Many examples have been given of these tools creating whole fantasy worlds and making them sound plausible.

I am writing an article about wastewater treatment in South America (hey, I didn't choose the thug life, thug life chose me) and ChatGPT is *convinced* that Santiago has membrane bioreactor based wastewater treatment. This would be incredibly expensive and make it the only large city in the world doing that except for Singapore, in a country where 60% of wastewater is discharged untreated into river mouths or the sea. But no, it cannot be talked out of this plausible falsehood!

You don't need to have AGI or indeed meaningful AI LLMs to cut out people's decisionmaking. In a lot of places, 'AI' is going to be standing in for the fact that people will start to try ML models in a much more pervasive way as part of the AI hype.

A vast amount of corporate and small business decision making would be improved by opening up a high school business textbook and actually working through what it said - how many businesses don't even know the unit economics of their main products? Beyond that, decision models based on actually collecting historical data and applying a linear regression to them are often a substantial leap in decision making quality. Only in places that have done those things (and let's be real, the hard part is collecting, curating, and labelling the data) can realistic marginal gains come from applying ML tools to making decisions. So let's say I'm sceptical of how fast this revolution in people's decision making will roll out.
posted by atrazine at 9:42 AM on May 5, 2023 [6 favorites]
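To make atrazine's "historical data plus a linear regression" point concrete, a minimal sketch with made-up numbers, plain least squares, nothing fancier:

```python
import numpy as np

# Made-up history: discount offered (%) vs. units sold that week.
discount = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
units_sold = np.array([100.0, 115.0, 128.0, 146.0, 160.0])

# Ordinary least squares fit: units_sold ~ slope * discount + intercept.
A = np.vstack([discount, np.ones_like(discount)]).T
slope, intercept = np.linalg.lstsq(A, units_sold, rcond=None)[0]

def predicted_units(d):
    """Predict weekly unit sales at a given discount level."""
    return slope * d + intercept

# Decision support: projected revenue at candidate discounts ($10 list price).
list_price = 10.0
for d in (0.0, 10.0, 25.0):
    revenue = predicted_units(d) * list_price * (1 - d / 100)
    print(f"discount {d:4.1f}% -> projected weekly revenue ${revenue:,.2f}")
```

As the comment says, the hard part in practice is collecting, curating, and labelling the history, not fitting the line.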


I cannot find it now, but I distinctly recall at least a couple articles from a few years ago making the case that CEOs would be more easily replaced by AI than actual workers.

...I mean, obviously that's not gonna happen (conservatives would probably commit global genocide before allowing the Big Strong Man theory of society to fall), but even so it was an interesting point.
posted by aramaic at 9:47 AM on May 5, 2023 [4 favorites]


Has there been a recent technological development that has been successfully used to empower the workforce over corporations?

Sure. I am a union member, currently involved in bargaining for a new contract with our company. We're doing it all -- including our internal union planning, strategizing, organizing, mobilizing, etc. -- on platforms like Zoom and Signal.
posted by Artifice_Eternity at 9:51 AM on May 5, 2023 [14 favorites]


everyone working in the field was caught flat-footed by how quickly LLMs scaled up into something that really can pass for human until you start asking questions that require a moment of thinking through a problem.

“ ChatGPT doesn't give you information. It gives you information-shaped sentences.”
- Neil Gaiman

Yes; it has the shape and flow of plausible, authoritative discourse, but rarely contains reliable information content.

I realize there’s some interesting overlap between this and those people who are Fox News / conspiracy gullible, and I’ve thought a lot about a MeFi comment in another thread about how rises in fascism historically map to novel developments in communication technologies.
posted by Silvery Fish at 9:51 AM on May 5, 2023 [7 favorites]


An argument could be made that the home computer and the advent of remote work from home, if it can continue to be leveraged, might meet the criteria.

If it can be done remotely, it can be done remotely from a place with a more “business friendly” labour environment, just as manufacturing currently is.
posted by rodlymight at 9:54 AM on May 5, 2023 [5 favorites]


It looks like the ruling economic class are just going to use AI, a set of programs that has problems understanding simple concepts like “almond butter”, and are going to intensely and utterly fuck it up, despite lots of warnings.
posted by The River Ivel at 9:56 AM on May 5, 2023 [4 favorites]


If it can be done remotely, it can be done remotely from a place with a more “business friendly” labour environment, just as manufacturing currently is.

Yet somehow we've not done this with certain classes of jobs. We notably haven't done it with CEOs.
posted by Dysk at 9:57 AM on May 5, 2023 [2 favorites]


Has there been a recent technological development that has been successfully used to empower the workforce over corporations?

One might argue that Twitter, pre-Musk, partially served that purpose. People organizing via Twitter, using it for shaming corporations, etc. Now? ¯\_(ツ)_/¯
posted by fings at 9:59 AM on May 5, 2023 [3 favorites]


The reason a "large statistical model" might be directed towards more equitable ends is because people decide that's how we want the world to work.

“We” do not own that and there isn’t a plausible way that we will.


"We have no moat and neither does OpenAI" makes a pretty compelling counterpoint. Their general argument is that LoRA and other algorithms have democratized fine-tuning, that Facebook ended up bearing the brunt of the cost of training LLaMA's initial weights, from which all these new things are built, and that there are more smart people outside their company than inside:

While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.


Now sure, not every mefite has an ML degree and a hundred bucks to spare, but this is within the reach not only of any startup with seed funding, but also of any compsci student with a credit card. Capitalists will be tripping over each other to build this market, and margins will melt away.

tl;dr: When we envisioned liberating AI's from their corporate overlords in the 80's, we failed to imagine how poorly guarded they would be.
posted by pwnguin at 10:02 AM on May 5, 2023 [16 favorites]
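For anyone wondering what "democratized fine tuning" means mechanically: LoRA's core trick is to freeze a pretrained weight matrix W and train only a low-rank update, W' = W + (alpha/r) * B @ A. A toy numpy sketch (the sizes here are made up; real models apply this per attention layer):

```python
import numpy as np

rng = np.random.default_rng(0)

# A frozen "pretrained" weight matrix (toy size; real layers are far larger).
d = 512
W = rng.normal(size=(d, d))

# LoRA: train only a rank-r update, W' = W + (alpha / r) * B @ A.
r, alpha = 8, 16
A = rng.normal(size=(r, d)) * 0.01  # trainable, small random init
B = np.zeros((d, r))                # trainable, zero init: W' == W at the start

def adapted(x):
    """Forward pass through the adapted layer: x @ (W + scale * B @ A).T"""
    scale = alpha / r
    return x @ W.T + scale * (x @ A.T) @ B.T

# The whole point: the trainable parameter count collapses.
full_params = W.size            # 262,144
lora_params = A.size + B.size   # 8,192 -- a 32x reduction at these toy sizes
print(full_params, lora_params)
```

Because B starts at zero, the adapted layer initially behaves exactly like the frozen one, and only the tiny A and B matrices need gradients, which is why consumer hardware and a hundred bucks suddenly suffice.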


Anyone worried about the theme of this article should just position themselves as someone who can clean up the messes (at a significant hourly rate) created by idiots who foolishly thought LLMs could magically build out their businesses. You'll retire to your private island long before what people mistakenly call "AI" will make it hard to get a job in whatever your career currently is.
posted by Back At It Again At Krispy Kreme at 10:11 AM on May 5, 2023 [7 favorites]


LoRA and other algorithms have democratized fine tuning

I mean, on the one hand yes, on the other hand I just fell into the Stable Diffusion rabbit hole a couple weeks ago and 90% of the most impressive work being done with LoRAs at the moment is sweaty dudes batch exporting their favorite porn clips to make-it-anime filters. There’s a lot of independent expertise but it’s not exactly busy fighting for the workers.
posted by Ryvar at 10:11 AM on May 5, 2023 [5 favorites]


Has there been a recent technological development that has been successfully used to empower the workforce over corporations?

Does labor law count as a technology?
posted by heyitsgogi at 10:45 AM on May 5, 2023 [5 favorites]


I'll also say here, in regards to both the FPP and Ted Chiang's last big article in the New Yorker that I really appreciate his clear, lucid explanations of both A.I. and of capitalism. It makes clear the problem(s) in a way that I've long struggled to articulate. Especially this:
"When I refer to capitalism, I’m talking about a specific relationship between capital and labor, in which private individuals who have money are able to profit off the effort of others. So, in the context of this discussion, whenever I criticize capitalism, I’m not criticizing the idea of selling things; I’m criticizing the idea that people who have lots of money get to wield power over people who actually work. And, more specifically, I’m criticizing the ever-growing concentration of wealth among an ever-smaller number of people, which may or may not be an intrinsic property of capitalism but which absolutely characterizes capitalism as it is practiced today."
posted by heyitsgogi at 10:50 AM on May 5, 2023 [41 favorites]


It’s also not entirely clear what it would look like for a technology like this to fight for the workers.

Organizing labor requires… well, organizing labor. It’s work that’s inherently social, political, and human. This requires real, honest-to-god work, and nothing is ever going to change or automate that.

As others have mentioned, technology can (and has) helped to facilitate this, but it’s at best an ancillary part of the process.

(On the other hand, it’s a lot more obvious how capital can use technology to reduce its reliance on labor. It’s not a symmetric conflict.)
posted by schmod at 10:55 AM on May 5, 2023 [5 favorites]


I find it clarifying to replace the terms "AI" and "LLM" with "a magic 8-ball whose answers are all: lay some people off."
posted by mittens at 11:11 AM on May 5, 2023 [7 favorites]


"We have no moat and neither does OpenAI"

I missed kaibutsu posting this in the previous AI thread, which I deeply regret. Having reread it twice now, I think it’s the single most lucid state-of-the-state for a technology in rapid flux that I’ve read in years. Maybe ever. Really, really needs its own post.
posted by Ryvar at 11:28 AM on May 5, 2023 [1 favorite]


the most impressive work being done with LoRAs at the moment is sweaty dudes batch exporting their favorite porn clips to make-it-anime filters.

Same as it ever was.

not exactly busy fighting for the workers.

Don't get me wrong here: widely available tech will not lead to whatever Marxist revolutions people here are wishing for. Never has. It just won't naturally reinforce big corporations like Microsoft or Google, and the "AI winners" will be hard to distinguish from the also-rans in the P&L statements.
posted by pwnguin at 11:31 AM on May 5, 2023 [4 favorites]


Maybe there needs to be a new movement to develop AI trained only on data that is public domain or licensed.

A computer scientist and several educational technologists were talking about this on the Future Trends Forum yesterday. They were hoping for ever-smaller datasets and libre software, with some people wanting to create alternatives to corporate black boxes. Training on academic, CC-licensed content was one option.
posted by doctornemo at 11:33 AM on May 5, 2023 [1 favorite]


Google has spent more than a decade asking us to help AI figure out what a stoplight looks like.

They couldn’t generate coherent text until about five years ago, either - maybe three years if you set a higher bar - but with the right basic approach and enough brute force now they can. I’m pretty sure this applies at least to “picture to text” as well.
posted by atoxyl at 11:39 AM on May 5, 2023


I think it would also be useful to define "A.I." in this article, because I don't know if the author is referring to current LLMs, scikit-learn, dodgy applications of statistics, or some hypothetical AGI that doesn't yet exist.
posted by credulous at 11:42 AM on May 5, 2023 [1 favorite]


The obvious immediate future for generative AI stuff is as a productivity-increasing tool, though. What net impact that will have on employment is hard to predict.
posted by atoxyl at 11:42 AM on May 5, 2023


I think it would also be useful to define "A.I." in this article

It seems pretty clear from context?

And no, it’s not about AGI, because that’s bullshit.
posted by Artw at 11:46 AM on May 5, 2023


If it can be done remotely, it can be done remotely from a place with a more “business friendly” labour environment, just as manufacturing currently is.

Generally people in the last few years predicting that remote white collar jobs might as well go overseas are predicting the globalization that already happened and reached an equilibrium. There’s not some infinite untapped supply of technical workers out there.
posted by atoxyl at 11:46 AM on May 5, 2023 [1 favorite]


Hell, some of them are even (scandalized tech CEO voice) unionizing.
posted by Artw at 11:48 AM on May 5, 2023 [2 favorites]


There's no restriction on these tools interacting, so it will be !FUN! to watch, as a wild imagining, an HR AI, a budgeting AI, and a trader AI become inadvertently connected and begin liquefying a company's finances through misinterpreted inputs from the other two.
posted by Slackermagee at 12:08 PM on May 5, 2023 [6 favorites]


I weirdly never got into Chiang's fiction. I respect it, but rarely enjoy it. On the other hand, he's become one of my favorite non-fiction writers, and I was delighted more than I thought I'd be when I saw that this article cites a piece from Current Affairs.
posted by Tom Hanks Cannot Be Trusted at 12:10 PM on May 5, 2023 [3 favorites]


so it will be !FUN! to, as a wild imagining, watch an HR AI, a budgeting AI, and a trader AI become inadvertently connected

Two major takeaways from that “We have no moat” link (if you haven’t: why in god’s name are you reading this thread? Go read that instead!) are:
1) since Facebook leaked LLaMA, the open-source kiddies began improving efficiency by orders of magnitude until they could play too, meaning every college student can work with a tech stack that is leapfrogging what the tech titans poured billions into
2) (news to me) three weeks ago a legally clean replacement for the foundational leaked bit was released, meaning anybody can use the basic tech stack without exposure to legal threats.

Conclusion: the horse has all but completely fled the barn on imposing policy, “beat ‘em” is over and we’re into “join ‘em” territory.

There is no longer any realistic path to imposing usage policy via legislation or lawsuit. Our collective ship has no rudder.

Which is great in terms of leveling the playing field, but also means the fantasy of appealing to a handful of trillionaire Neoliberal CEOs to Do Something(tm) about a particularly abusive application has become just that: a fantasy.
posted by Ryvar at 12:45 PM on May 5, 2023 [7 favorites]


I have to say I’m not totally sold by the “no moat” bit, though - or specifically the idea that the moat doesn’t defend big companies in general against open source. From what I’ve seen of the open source language models, they seem just short of GPT-3.5 performance when GPT-4 is a lot more practically reliable for stuff like programming. And GPT-4 has additional capabilities that supposedly already exist but haven’t been released to the public yet (multimodal and long context) out of, I would guess, a little bit of caution and a little bit of business strategy and a lot of struggling to scale the infrastructure.

It doesn’t seem like there’s really much secret ML tech that any one company has, but the people who can reach into deep pockets for training still seem to have an advantage in practice, unless we’ve already hit the point of diminishing returns on model scaling?
posted by atoxyl at 1:03 PM on May 5, 2023


I suppose it also depends on how well specialized tuning of smaller models works for real-world applications. But I feel like generality has some significant value in itself, especially again once things get multimodal.
posted by atoxyl at 1:13 PM on May 5, 2023


I'm bullish on specialized smaller models; it just intuitively makes sense that models don't need knowledge of ancient Latin as much as they need more training data on positive customer interactions. If you're trying to build the "all singing, all dancing" AI chatbot, then you probably want a big model and a moat, but there are niche applications yet to be discovered. And smaller models reduce the cost and open up more applications where you can chain/iterate. GPT-4 is too spendy for a lot of applications.

I think big companies will continue to have the advantage, unless a smaller company develops a "weird trick" and guards it jealously. But I am seeing a lot of cool weird tricks coming out of the woodwork.
posted by credulous at 1:40 PM on May 5, 2023


If any of you guys want to hop over to AskMe, I’d love some recommendations on how to get up to speed on these new AI language models.
posted by leotrotsky at 3:26 PM on May 5, 2023


So many great quotes. Like
The fact that the word 'Luddite' is now used as an insult, a way of calling someone irrational and ignorant, is a result of a smear campaign by the forces of capital.
posted by Rash at 3:50 PM on May 5, 2023 [3 favorites]


Living through the end of Monopoly as one of the losers sucks even worse than it did when I was 10.
posted by allium cepa at 4:18 PM on May 5, 2023 [4 favorites]


Powerful AI gets leaked from large corporation, embraced by open source community, then iteratively improved upon at a terrifying pace is the Promethean cyberpunk twist I didn’t know I needed.
posted by dephlogisticated at 6:14 PM on May 5, 2023 [5 favorites]


The street, etc!
posted by clew at 7:14 PM on May 5, 2023 [1 favorite]


My what a long thread.

I think TFA falls into a category error early on. Bureaucracies (and their special case, corporate business firms) are already "slow distributed AIs that use humans as their processing nodes," as cstross has put it, and as Chiang almost gets at in that quote from another piece. The only way forward in the existing system is for capital to adopt any useful "AI technology" to be better at what capital does. Any AI that accomplishes anything will automatically be applied as an upgrade to the rapacity of capital, in other words.

The way around that involves changes in laws and culture: as Chiang avers, capital never did so much good for so many as during the 30 years from 1945-75. What he didn't note is that this was when capital was running as fast as it could to stay ahead of socialism, looking anxiously over its shoulder the whole time.

There needs to be a return to the kind of confiscatory tax policy that put an effective ceiling on compensation for those 30 years, for a start. The legal environment in which the business firms operate needs to change to explicitly deny the most sociopathic impulses of the firms. There needs to be Federal executive support for labor organizations.

I don't have much hope of any of that happening on a time scale short enough to be useful, but I'd like to be wrong.
posted by Aardvark Cheeselog at 7:40 PM on May 5, 2023 [8 favorites]


The street, etc!

It’s all Jackpot now.
posted by Artw at 8:11 PM on May 5, 2023 [3 favorites]


I'm bullish on specialized smaller models, it just intuitively makes sense that models don't need knowledge of ancient Latin as much as they need more training data on positive customer interactions

I just have an intuition right now, though I could of course be wrong, that giving these things a really broad range of training inputs may actually make them work better in non-obvious ways. As a more specific example, I’ve seen a few people throw around the hypothesis (based on a paper I think but I can’t find it right now) that one reason the GPT family outperform most competitors on logical reasoning tasks is that they’ve been trained on more code. It’s again true that if you are building for a narrow use case rather than trying to impress with the most cross-domain flexibility you’re not always going to need all that, but sometimes flexibility is the difference between something that feels like a dumb computer and something that feels like an assistant.
posted by atoxyl at 10:08 PM on May 5, 2023


Elon Musk Shouts Out Famous Gay Writer's Terrifying Story - "Worth reading The Machine Stops by EM Forster if you haven't already. Even benign dependency on AI/Automation is dangerous to civilization if taken so far that we eventually forget how the machines work."

Doing, Not Just Chatting, Is the Next Stage of the AI Hype Cycle - "As with many tech demos, it's probably best to take any claims about AI agents' accomplishments with a grain of salt—and to fact-check any information the agents produce... As people like Schlicht decide AI agents can be more than toys and begin testing them in real-world situations, the need for policies governing their use becomes more urgent, according to Henry Shevlin, an AI ethicist at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. 'It's unlikely that corporations will exercise the level of restraint that's desirable for humanity,' he says."

Hinton: "That's an issue, right. We have to think hard about how to control that."[1,2]

Lina Khan: We Must Regulate A.I. Here's How. - "Lina Khan, chair of the Federal Trade Commission, on the agency's oversight of the A.I. revolution."[3]

it isn't just capitalism, though, i don't think. consider how china is/might be using AI:[4,5]
  • @songpinganq: "CHINA's Social Credit System. Buying a crate of alcohol loses you points for being irresponsible, playing video games loses points for being an idle citizen. Everything is controlled by the Chinese government's view of a perfect citizen!!"
  • @songpinganq: "China shames citizens, who are on the blacklist of #SocialCreditSysterm, by displaying their faces, ID, addresses...on billboards in LED trucks, driving around the town for all to see. This also alerts who you may want to 'stay away' from, lest your social credit score goes down!"
> Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.

Leaked Google document: "We Have No Moat, And Neither Does OpenAI" - "It's absolutely worth reading the whole thing—it’s full of quotable lines—but I'll highlight some of the most interesting parts here... I'm pretty chuffed to see a link to my blog post about the Stable Diffusion moment in there!"[6]
Keeping our technology secret was always a tenuous proposition. Google researchers are leaving for other companies on a regular cadence, so we can assume they know everything we know, and will continue to for as long as that pipeline is open.

But holding on to a competitive advantage in technology becomes even harder now that cutting edge research in LLMs is affordable. Research institutions all over the world are building on each other’s work, exploring the solution space in a breadth-first way that far outstrips our own capacity. We can try to hold tightly to our secrets while outside innovation dilutes their value, or we can try to learn from each other.

[...]

And in the end, OpenAI doesn’t matter. They are making the same mistakes we are in their posture relative to open source, and their ability to maintain an edge is necessarily in question. Open source alternatives can and will eventually eclipse them unless they change their stance. In this respect, at least, we can make the first move.
UC Berkeley Releases Open LLaMA, an Open-Source Alternative to Meta's LLaMA - "Due to industrial licenses binding Meta's LLaMA, directly distributing LLaMa-based models was impossible, but this is no longer the case. There have been numerous attempts to open-source these models. Open LlaMA is not the first of its kind in this domain."[7]

also btw :P
  • Midjourney 5.1 - "To compare the v5 and v5.1 models, I ran the prompt pelicans having a tea party through them both."
  • @ammaar: "Here's a World War 2 movie trailer I made this weekend, completely generated using AI text-to-video!"
  • @icreatelife: "This is text to video Burning Man I made today. A video made just by writing text!"
  • @Suhail: "A graphics revolution is occurring in AI. Incredible research from NVIDIA."
  • @amt_shrma: "New paper: On the unreasonable effectiveness of LLMs for causal inference."
  • @lreyzin: "I gave #GPT4 my complexity theory final exam. It got about 3/4 of the points, which is an A minus. From what I'd read, that's approximately what I expected, but I'm still impressed."
  • @Noahpinion: "The reason AI might go beyond what we think of as 'science' is that it might be able to exploit regularities in nature that are too complex to be condensed into 'laws.'"
  • How to customize LLMs like ChatGPT with your own data and documents - "You provide the API with the text of your document, and it returns its embedding. OpenAI's embeddings are 1,536 dimensions, which is among the largest. Alternatively, you can use other embedding services such as Hugging Face or your own custom transformer model. Once you have your embeddings, you must store them in a 'vector database.'"[8]
  • @ramez: "We are so excited about LLMs right now that we're forgetting the massive implications of AI for modeling biology and the physical world. AlphaFold suggests that deep learning can dramatically accelerate our understanding of biology. That may be a bigger deal than ChatGPT."
  • @newscientist: "A black-and-white movie has been extracted almost perfectly from the brain signals of mice using an artificial intelligence tool."[9]
  • Tesla's Magnet Mystery Shows Elon Musk Is Willing to Compromise - "Researchers have a pretty good sense of what chemical elements can make good magnets, but there are millions of potential atomic arrangements. Some magnet hunters have taken the approach of starting with hundreds of thousands of possible materials, tossing out those with drawbacks like containing rare earths, and then using machine learning to predict the magnetic qualities of those that remain."
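(The embeddings-plus-vector-database workflow in that customization link boils down to: embed your documents, embed the query, return the nearest match. A minimal sketch, with fabricated 3-dimensional vectors standing in for real embeddings — an actual system would call an embedding API such as OpenAI's, at ~1,536 dimensions, and use a proper vector database instead of a linear scan:)

```python
import math

# Fabricated stand-ins for document embeddings, for illustration only.
# A real embedding service returns vectors with hundreds or thousands
# of dimensions; the retrieval logic is the same.
DOC_EMBEDDINGS = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api reference": [0.0, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest_document(query_embedding):
    # A vector database (FAISS, pgvector, etc.) does this at scale;
    # a linear scan is the same idea at toy scale.
    return max(
        DOC_EMBEDDINGS,
        key=lambda name: cosine_similarity(query_embedding, DOC_EMBEDDINGS[name]),
    )

print(nearest_document([0.85, 0.2, 0.05]))  # -> refund policy
```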
posted by kliuless at 11:42 PM on May 5, 2023 [11 favorites]


y’know, I really love your posts, but I always worry about the amount of work it must take you. Would you even want a KliulessGPT or do you just really enjoy the actual process of putting mega posts together?

FWIW, this is just one random person's opinion, but if you are actually going to post all that I'd suggest leading with the Moat. That's the real story in all this - and the only one of these that people might remember a year from now. Maybe longer than that. I walked two people much smarter than me through it tonight and discovered it's more jargon-dense than I realized, but it has a real sea-change, Cathedral and the Bazaar feel to it. Could easily see it becoming a new classic.
posted by Ryvar at 12:03 AM on May 6, 2023


Capitalism is in the process of destroying itself.

The main question is how much of the biosphere it will take down with it.

AI looks like an accelerant, but an accelerant could make the destruction of the biosphere more complete, or by hastening the collapse it could cause more of the biosphere to be left.
posted by jamjam at 12:12 AM on May 6, 2023


do you just really enjoy the actual process of putting mega posts together?

as the curious sort, i don't think of it as work :P

i wasn't going to FPP any of that or ML's 'linux moment', but of course anyone is welcome to!

AI looks like an accelerant, but an accelerant could make the destruction of the biosphere more complete, or by hastening the collapse it could cause more of the biosphere to be left.

but if it accelerates biology/material science?
posted by kliuless at 12:21 AM on May 6, 2023 [1 favorite]


I had not realized just how expensive all this stuff is: "OpenAI reported a staggering loss of $540 million in 2022, generating a mere $28 million in revenue as development expenses for ChatGPT surged" (related reddit thread). You can certainly see why there's a scramble to find some profitable use for it.
posted by mittens at 8:20 AM on May 6, 2023


It also seems like only the side that wants to profit from it will have access to it.
posted by Selena777 at 9:18 AM on May 6, 2023 [1 favorite]




Sam Altman is, and has always been, the worst kind of techbro shit.

...OK, I admit he's not Thiel or Musk -grade, but he's definitely up there with Dorsey and has built his entire career on being a shit who gets funded by other shits who love his shitliness. This is something the assholes of the world are really good at -- if you're an asshole, that's sufficient. You don't need to agree with the rest of them on everything, the fact that you're a giant tool is by itself sufficient to be invited into the Cadre of Shits and they will love you and give you many opportunities to lord it over the plebes.
posted by aramaic at 6:57 PM on May 6, 2023 [2 favorites]


It’s not a Roomba that the CEO can just plug in at night.

I just wanted to point out the irony that the Roomba is powered by technologies (like SLAM, Simultaneous Localization and Mapping) that used to be called "AI", at least until ML, and now LLMs, stole the spotlight.
posted by The genius who rejected Anno's budget proposal. at 12:59 AM on May 7, 2023 [1 favorite]


you got any better ideas?

Join a union and negotiate a contract that says exactly if and how you will use LLMs in your work.

What we lower classes have are numbers. The ants taking down the grasshopper overlord in A Bug's Life. To take advantage of those numbers, we need to behave as a mass, participating in mass organizations. Stepping outside our atomized lives to build dense webs within our community.
posted by tofu_crouton at 5:39 AM on May 7, 2023 [3 favorites]




This thread has been archived and is closed to new comments