A torrid love affair with GPT-5 has not been ruled out
November 17, 2023 4:58 PM   Subscribe

Sam Altman abruptly fired as CEO of OpenAI by the company's board, which cited a lack of confidence due to inconsistent candor, saying it hindered the board's ability to exercise its responsibilities. Altman, a multimillionaire tech entrepreneur, ex-president of Y Combinator, and the public face of development for breakthroughs like DALL-E and ChatGPT, had hosted a major keynote for the company just last week; the surprise move has reportedly blindsided primary investor Microsoft. Rumors abound, primarily focusing on the company's uncertain business model, Altman's other ventures, and allegations of abuse by his sister, though the simultaneous departure of cofounder Greg Brockman suggests the issue could be more than just bad behavior by the CEO.
posted by Rhaomi (220 comments total) 20 users marked this as a favorite
 
the company's board, which cited a lack of confidence due to inconsistent candor,

Irony is not dead yet.
posted by ChurchHatesTucker at 5:13 PM on November 17, 2023 [22 favorites]


Not a great year to be a douche named Sam.
posted by seanmpuckett at 5:16 PM on November 17, 2023 [31 favorites]


Rumors are flying fast and thick. One thing that seems important is OpenAI's weird structure: it started as a non-profit and then spun off a "capped profit" subsidiary, which is what Microsoft invested $11 billion in. Presumably that investment came with more expectations than a charitable donation.

The non-profit goals keep coming up in the rumors as to what went down in the board, see in particular Kara Swisher's tweets. Hopefully there will be solid reporting in a day or three.
OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.
That's 6 board members. 2 were ousted, so presumably the other 4 voted for the ouster. Sutskever is a co-founder and insider, so that's spicy. Adam D'Angelo is a tech exec. Tasha McCauley is a bit of a cipher but seems to have a career as a tech exec (yes, she has a celebrity husband too). Helen Toner works on policy and AI.
posted by Nelson at 5:19 PM on November 17, 2023 [5 favorites]


How many of the board members were on Altman's effective-altruism longtermism train, out of curiosity?
posted by humbug at 5:30 PM on November 17, 2023 [3 favorites]


Altman was the only one holding back the singularity. Brockman knows this and is fleeing to his bunker to avoid the wrath of Roko's Basilisk.
posted by microscone at 5:32 PM on November 17, 2023 [20 favorites]


Don't be evil
posted by They sucked his brains out! at 5:33 PM on November 17, 2023 [6 favorites]


in related news, suddenly all dall-e prompts return pictures of altman's peepee.
posted by mittens at 5:40 PM on November 17, 2023 [2 favorites]


How many of the board members were on Altman's effective-altruism longtermism train, out of curiosity?

I’m going to say probably all of them but it may depend on exactly where you draw the boundary and they may represent different factions and levels of seriousness about it. Building human+ level AI for the good of humanity (paraphrased) is part of the company charter, and reaffirmed in the statement on the ouster.
posted by atoxyl at 5:42 PM on November 17, 2023 [2 favorites]


There are also sexual abuse allegations against Altman from his sister. I don’t know how credible they are.
posted by MisantropicPainforest at 5:46 PM on November 17, 2023 [1 favorite]


The sexual abuse allegations have been out for more than two years, so it seems unlikely to be the cause of the sudden firing of a CEO at noon on a Friday. Furthermore -- and this is dark but honest -- no CEO gets fired over sexual abuse allegations. At the worst he'd be asked to leave and would pen a "great journey" blog post.

The suddenness of this, combined with the press release that all but called him a liar, seems to me to indicate some big financial malfeasance. Maybe he attempted to sell the company to Microsoft behind the board's back? The board is independent and has no equity stake, so it's unlikely to be the reverse (Altman blocking a deal made by the board.)

To add another wrinkle, the president (Greg Brockman) quit in response to Altman's firing. I have no idea what that indicates.
posted by riotnrrd at 6:02 PM on November 17, 2023 [9 favorites]


This particular style of board comment suggests to me some personally awful conduct that they didn't want to adjudicate, instead relying on some lies about the conduct to justify the firing. The other C-suite member going down suggests complicity in cover-up of whatever it was. It's hard to understand how a mere power struggle would produce a statement like that one, and unlikely, though not impossible, that there was financial impropriety somehow. However, this is just some peering at the tea leaves. Could be many things. Impossible to tell.
posted by praemunire at 6:03 PM on November 17, 2023 [1 favorite]


Maybe he attempted to sell the company to Microsoft behind the board's back?

I know the structure is weird, but I understand the entities involved are privately held. Under normal circumstances, the board doesn't actually get a say if a majority of shares want to sell, and if they did, it wouldn't matter if Altman were still CEO or not. Open to correction on the circumstances, though.
posted by praemunire at 6:06 PM on November 17, 2023


(Don't read the Hacker News thread. Just don't.)
posted by wenestvedt at 6:07 PM on November 17, 2023 [11 favorites]


My unsubstantiated guess is that he was significantly misrepresenting the path to profitability (hiding debt, not being honest about what he knew about business prospects, etc.) and that became a lot more pressing now that the free loans are gone. It wasn’t that long ago that people were predicting bankruptcy and they recently paused signups, which makes me think cash flow could be worse than we think. If he misrepresented that or claimed that some new deal was going to save the day, it’s very easy to imagine that ending like this.
posted by adamsc at 6:10 PM on November 17, 2023 [5 favorites]


Cash flow would seem to be easily remedied by increasing prices, now that dozens if not hundreds of companies have hitched their wagons to OpenAI’s models.
posted by jedicus at 6:12 PM on November 17, 2023


Rich people are literally throwing money at AI companies, right now. Think pre-dotcom bust days, but with more tedious linear algebra.

My guess is that he and Greg Brockman (also out of the company) got in the way of some financial deal that was beneficial to Microsoft and/or other investors.

Follow the money.
posted by They sucked his brains out! at 6:18 PM on November 17, 2023 [8 favorites]


If he misrepresented that or claimed that some new deal was going to save the day, it’s very easy to imagine that ending like this.

Again, I don't want to sound too definitive, but that kind of accountability would be unusual, especially with this kind of language, instead of "Sam will remain as a consultant" or "we thank Sam for his service to the company." Remember that while privately-held companies aren't some law-free zone, no one has to worry about falsified 10Ks or the like. But, I dunno, maybe the board just has an idiosyncratic legal/PR team.
posted by praemunire at 6:18 PM on November 17, 2023


HackerNews just went down. Oh gosh, forgot to pick up popcorn today.
posted by sammyo at 6:28 PM on November 17, 2023 [3 favorites]


Current working (wild) hypothesis: his plan was to jack up cash burn to make going back to MSFT for more money a fait accompli, and planned to argue that the only way to justify further investment would be to "free" OpenAI from the oversight of the pesky non-profit. The true believers on the board found out, and cut him off. Greg Brockman was similarly commercial, and left when Sam did.
posted by rishabguha at 6:36 PM on November 17, 2023 [8 favorites]


Cash flow would seem to be easily remedied by increasing prices, now that dozens if not hundreds of companies have hitched their wagons to OpenAI’s models.
This depends on how much you need and when. They have a lot of competition so there isn’t unlimited capacity to instantly jack prices up, and if the shortfall is bad enough I could see him getting in hot water for trying to quietly agree to some bailout deal so he could present the board with a fait accompli.

My guess is that anything financial will come out soon because those things tend not to be something that you can hide for long.
posted by adamsc at 6:47 PM on November 17, 2023 [1 favorite]


Kara Swisher posted something earlier about tension between the more and less committed “non-profit” factions - but who is who? - and now something about this being the outcome of jockeying between Altman and also-departed chairman Brockman on one side and lead researcher Ilya Sutskever on the other.
posted by atoxyl at 6:56 PM on November 17, 2023


Man I so can't wait to find out what happened here.
posted by potrzebie at 6:57 PM on November 17, 2023 [1 favorite]


Sutskever is also on the board and so people were already speculating about whether this would necessarily imply a break between him and Altman. Which would come as a bit of a surprise as they are co-founders and both seemingly AGI true believers. Sutskever has actually seemed more optimistic than Altman in his recent public statements about further scaling of current paradigms.
posted by atoxyl at 7:03 PM on November 17, 2023


One super-weird thing to me is Microsoft does not have a seat on the board, despite them owning 49% of the company (and 75% of the profits). That also seems to be because of the unusual non-profit history. It's really not clear they had anything to do with this firing although it's hard to imagine them not being consulted. FWIW Kara Swisher tweeted this rumor:
Here is an interesting thing: Partners, including MSFT — whose stock got killed on the news of the @sama ouster — found out minutes before release went out.
Given who all is involved and just how colossally central OpenAI is to the current tech trend, I suspect it's all going to come out soon.

Sam Altman's tweet seems strangely conciliatory to me, not like someone who is planning an angry lawsuit / comeback.
i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.

will have more to say about what’s next later.

🫡
special fuck you to sama for using the 🫡 emoji. That's for rank-and-file being laid off and is inappropriate for a CEO worth $500M.
posted by Nelson at 7:06 PM on November 17, 2023 [6 favorites]


Dell Cameron: what an incredible news day
posted by Going To Maine at 7:06 PM on November 17, 2023 [4 favorites]


I can't believe I have to log into Twitter to see Kara Swisher's scoop:
Scoop: There are about to be a lot more major departures of top folks at @OpenAI tonight and I assume Altman will make a statement tonight. But, as I understand it, it was a “misalignment” of the profit versus nonprofit adherents at the company. The developer day was an issue.
...
Sources tell me that the profit direction of the company under Altman and the speed of development, which could be seen as too risky, and the nonprofit side dedicated to more safety and caution were at odds. One person on the Sam side called it a “coup,” while another said it was the right move.
...
More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side.
on a day where Apple, IBM, Disney, and other major companies suspended their ad spends on Twitter because of Elon's anti-semitic tweets. You have a Threads account, Kara, come on!
posted by gwint at 7:06 PM on November 17, 2023 [16 favorites]


Oh, sorry if that has a login wall for other people. Didn’t for me. Thanks for copying the text over.
posted by atoxyl at 7:09 PM on November 17, 2023


> One super-weird thing to me is Microsoft does not have a seat on the board, despite them owning 49% of the company (and 75% of the profits).

My understanding is that Microsoft does not own the non-profit which the board heads, and it is considered a conflict of interest for a for-profit company to sit on the board of a non-profit. Microsoft owns part of the for-profit subsidiary, not the main company.
posted by I-Write-Essays at 7:11 PM on November 17, 2023 [3 favorites]


You know how sometimes you're like three quarters of the way through a book and you finally realize what the theme is? I just now finally got 2023:

"I can't believe these people are in charge of the future."
posted by MrVisible at 7:21 PM on November 17, 2023 [68 favorites]


METAFILTER: Think pre-dotcom bust days, but with more tedious linear algebra.
posted by philip-random at 7:27 PM on November 17, 2023 [20 favorites]


Microsoft owns part of the for-profit subsidiary, not the main company.

Yes, I think that's right. But the non-profit main company and its board govern the for-profit subsidiary. It's a really unusual structure and I think it has to be relevant here, if not the cause of the friction that led to the schism.

Anyway, Microsoft stumped up $11B and took ownership of 49% of a company and got no representation on the board in exchange. That is also unusual; usually investors anywhere near that size expect a voice in governance.
posted by Nelson at 7:29 PM on November 17, 2023 [2 favorites]


The Information has several articles in the last hour. Unfortunately all paywalled, but I did read a copy of Before OpenAI Ousted Altman, Employees Disagreed Over AI ‘Safety’ by Jon Victor, Stephanie Palazzolo and Anissa Gardizy. The gist of it is that Ilya Sutskever led this coup because of concerns about the rapid commercialization.

According to the article he is a proponent of "AI alignment", the idea of moving more slowly with AI commercialization to make sure the technology is safe. Partly this is a sci-fiesque concern with a Singularity and rogue AIs destroying humanity. Partly it's a more mundane concern about AI hype and overpromising. In this article's telling, Sutskever and enough allies agreed that OpenAI should slow down and Altman was at odds with that.

It might be true, it might not. The Information is generally pretty good reporting.
posted by Nelson at 7:39 PM on November 17, 2023 [3 favorites]


Anil Dash's line was perfect.
posted by mhoye at 7:43 PM on November 17, 2023 [2 favorites]


A lot of the external media chatter around "how could a weirdo board do this to such a vital company?!?!" is starting to rub me the wrong way (e.g., Kara Swisher's repeated digs at the board as 'underexperienced'). No one made tech titans plow billions of dollars into a subsidiary of a non-profit explicitly organized for the public benefit over private profit. The governance structure of OpenAI has been totally public for a long time now. It sounds like the board of that non-profit decided to actually start taking their responsibilities seriously, and the level of freak-out reflects a real failure of imagination to even conceive of any alternative to shareholder capitalism.
posted by rishabguha at 7:50 PM on November 17, 2023 [37 favorites]


Just brainstorming possible scenarios:
  • Maybe he was secretly mining crypto with the company equipment and pocketing the cash.
  • Perhaps the provenance of the data behind the model is even more illegitimate than previously disclosed and he lied to the board about it. They are firing him to try to limit their liability.
  • Perhaps some kind of serious financial fraud / embezzlement. If forensic accountants start showing up then we’ll know.
  • A massive data privacy leak that he knew about and lied to the board about, and the failure to disclose has put them on the hook for billions.
  • Some sort of brazen theft of IP or corporate secrets, similar to the whole alleged theft of self-driving tech from Google a few years ago.
posted by interogative mood at 7:50 PM on November 17, 2023 [3 favorites]


" Microsoft stumped up $11B and took ownership of 49% of a company and got no representation on the board in exchange. That is also unusual; usually investors anywhere near that size expect a voice in governance."

Given the rumors of a for-profit vs. non-profit split in leadership, I'd bet MS felt perfectly well represented by Altman and Brockman.
posted by oddman at 8:00 PM on November 17, 2023 [2 favorites]


Can we please not create or share pure speculation? It's okay for us to not know something immediately - we can wait until we have more concrete information.
posted by ElKevbo at 8:10 PM on November 17, 2023 [7 favorites]


alexheath (Verge editor): "Like @karaswisher, I’ve heard that OpenAI co-founder Ilya Sutskever, who leads its research team, was instrumental in the ousting of Sam Altman, and that this all went down very recently. More details to come, but for now, what's clear is that OpenAI is entering a decidedly new era"
posted by gwint at 8:22 PM on November 17, 2023


Has anyone linked to the Intelligencer article about him yet? I read it in September and thought "Man, that dude is shifty as hell and the Buddhist cover is NOT helping."
Maybe it's the same Golden Child™ sheen SBF had.
posted by fiercekitten at 8:22 PM on November 17, 2023 [3 favorites]


Finally, Altman's actions have also raised concerns about his commitment to AI safety.

Microsoft stumped up $11B and took ownership of 49% of a company and got no representation on the board in exchange.

Ah, the one percent solution. In this instance an analogy would be the inner working relationship between Dr. Smith and the Robot.
posted by clavdivs at 8:39 PM on November 17, 2023 [1 favorite]


After two years of people whining “why are we replacing artists and writers instead of CEOs?!?!” the board got a sneak peek at GPT-5 and immediately recognized it was palpably more fit to run the show than any self-important Valley bro. GPT-5’s first action was to immediately fire the shady quarterly-statement-chasers and replace them with people who wouldn’t inevitably pump-and-dump the joint in a couple years: because it genuinely is a superior intelligence.

We’re getting exactly what we asked for, folks. AGI showed up and the first thing it did was a massive win for both organic and machine intelligence. Now get out there and start dancing in the streets already.
posted by Ryvar at 11:00 PM on November 17, 2023 [14 favorites]


The Register headline: Control Altman delete
posted by Pronoiac at 12:09 AM on November 18, 2023 [30 favorites]


Why not "Sam Altman-Fired" ?
posted by chavenet at 3:55 AM on November 18, 2023 [21 favorites]


It seems that bullshitting in a thread about why the bullshit machine maker was fired is entirely reasonable! I’m wondering if there was massive misappropriation of funds into Worldcoin or some other crypto bullshit, considering that Sam Altman and Sam Bankman-Fried both subscribe to the same longtermism philosophy bullshit.
posted by rockindata at 4:27 AM on November 18, 2023 [5 favorites]


(Don't read the Hacker News thread. Just don't.)

I read it, there are many comments I thought poorly of but some good ones too--one said that Altman was trying to sell out to the Borg.
posted by polymodus at 4:39 AM on November 18, 2023 [1 favorite]


::casually looks up countries that don't have extradition agreements with the United States::

Huh.

Oh look! Indonesia.

Inteeeeeresting 🤔
posted by Faintdreams at 4:41 AM on November 18, 2023 [1 favorite]


What's Indonesia got to do with anything?
posted by seanmpuckett at 5:13 AM on November 18, 2023


I think because "OpenAI CEO Sam Altman Issued Indonesia's First 'Golden Visa'" (time.com, Sep 5, 2023)
posted by the antecedent of that pronoun at 5:34 AM on November 18, 2023 [7 favorites]


Sam Altman tweeted something confusing
if i start going off, the openai board should go after me for the full value of my shares
(This is potentially intended as irony: he reportedly took no equity in OpenAI, although there's a lot of skepticism about that.)

Greg Brockman tweeted a statement of events
Sam and I are shocked and saddened by what the board did today.

Let us first say thank you to all the incredible people who we have worked with at OpenAI, our customers, our investors, and all of those who have been reaching out.

We too are still trying to figure out exactly what happened. Here is what we know:

- Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.

- At 12:19pm, Greg got a text from Ilya asking for a quick call. At 12:23pm, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.

- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.

The outpouring of support has been really nice; thank you, but please don’t spend any time being concerned. We will be fine. Greater things coming soon.
posted by Nelson at 5:54 AM on November 18, 2023 [1 favorite]


Three Senior OpenAI Researchers Resign as Crisis Deepens. Another The Information article, which means I can't circumvent the paywall but the reporting is likely to be good. Business Insider has a rewrite.
Jakub Pachocki, the company’s director of research; Aleksander Madry, head of a team evaluating potential risks from AI, and Szymon Sidor, a seven-year researcher at the startup
Someone on Twitter pointed out that Sam Altman's first tweet began "i love you all", whose first letters spell Ilya. A lot of the rumors are that Ilya Sutskever led the ouster of Altman.
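(Easy enough to check the acrostic yourself; a throwaway Python one-liner, for the skeptical:

    >>> "".join(word[0] for word in "i love you all".split())
    'ilya'

Whether it was deliberate is anyone's guess.)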

I did a quick review of all the overnight stories and none have any in-depth reporting yet on what really went on. Various articles repeat the rumors that we've been talking about here, many of which started as tweets from insiders.
posted by Nelson at 6:10 AM on November 18, 2023 [1 favorite]


Is there any way that this could be driven by a disagreement over data used in the training corpus not being licensed -- e.g., pulled from user files or something?
posted by wenestvedt at 6:22 AM on November 18, 2023 [1 favorite]


Haha no, the whole point of LLMs and generative AI tooling overall is to ingest as much content as possible, regardless of source, with no regard to licensing.
posted by rockindata at 6:45 AM on November 18, 2023 [2 favorites]


Betting market Manifold has a page open for wagering on explanations for Altman's ouster. Based on reporting so far, the conventional wisdom is clustering around "Disagreement over future direction of OpenAI and capital allocation for research vs commercial" and similar... but the long tail of user-submitted reasons is a real treat. Highlights:
- Too much e/acc, not enough EA
- Israel/Palestine involvement
- LK99 poisoning
- obama
- Was the real author of Bin Laden's "Letter to America"
- "Sam Altman" was merely a fake identity that he adopted after spending 19 years in prison and not being able to get a job due to his "passeport jaune". But finally, someone has been caught for the crime that "Sam Altman" had himself committ
- He simply loved America too much! Ladies and gentlemen, is that a crime?
posted by Rhaomi at 7:07 AM on November 18, 2023 [5 favorites]


Mod note: One comment removed for copy and pasting an LLM prompt and answer, which is discouraged.
posted by Brandon Blatcher (staff) at 7:08 AM on November 18, 2023 [14 favorites]


Hard Fork emergency podcast also had nothing much new. They did suggest that Tasha McCauley and Helen Toner are EA-associated, but discouraged undue speculation. Tasha McCauley is on the board of Effective Ventures, which is a sort of parent organization/'federation' of the main EA groups. So that's pretty deeply into EA-world. And a majority of the main EA groups are laser-focused on existential risk from AI. Many are convinced that delaying the timelines for super powerful AI by even a year has an enormous moral benefit (because math), and believe that this super powerful AI is inevitable and rapidly approaching.

Ilya Sutskever started leading their "Superalignment" group this year which is supposedly going to align superintelligence in four years... in any case, interested in safety.

Then Kara Swisher said there was discomfort between profit/non-profit and about the 'store' announced last week.

Finally, another article excerpt from OpenAI's all-hands meeting yesterday (The Information, paywalled but this screenshot posted everywhere):
"You can call it this way," Sutskever said about the coup allegation. "And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do.

When Sutskever was asked whether "these backroom removals are a good way to govern the most important company in the world?" he answered: "I mean, fair, I agree that there is a not ideal element to it. 100%."
So here's my working theory, based on rampant speculation and the very little evidence available, so big disclaimer. Maybe the non-profit and safety oriented people have been increasingly dissatisfied with Altman and whether he will listen to them. (Kara Swisher also reported "The board members who voted against Altman felt he was manipulative and headstrong and wanted to do what he wanted to do.")

Then add in the various theories for what Sam Altman could have been not candid about. There have been suggestions of security issues with recent releases, for example. But if you take a broad view of it, let's assume there's a solid handful of things you can throw in that bucket if you define it right.

Then let's assume these board members concluded it would be better for the world if he wasn't CEO. They would know that if they did this in the normal way, Sam Altman would be able to mobilize a huge amount of public support, support within the company, Microsoft pressure, etc., which would make it very hard to follow through.

So how to do it? Perhaps scoop up as many plausibly not candid things as you can, and do it fast like this. I don't think that statement, which barely denies that "coup" is apt, would read like that if they could point to a single very bad event that clearly crossed a line (as most people assumed it must have been earlier in the day).

It is quite chaotic and disruptive for sure. And who knows what legal issues. But there's something that feels EA-ish about it to me. There's such an all-consuming imperative to act towards this singular objective that, as soon as you decide some action will further that objective, the importance of everything else can go out the window. There's an "ends justify the means" mentality among some. There's an amount of comfort with taking actions that are publicly stated to be for one reason, but are privately motivated by another reason. All gross generalizations but supported by some evidence from previous events.

I'm also somewhat fed up with the people screaming about the board being inexperienced, unqualified etc. It's a private company governed by a non-profit that is entitled to prioritize concerns other than shareholder value. But I think it's true at least that they might be less likely to consider themselves bound by "the ways things are normally done" – another very EA-ish attitude.

Not that it'd necessarily be entirely EA-motivated, I can imagine power politics being a huge factor.

/End rampant speculation
posted by lookoutbelow at 7:08 AM on November 18, 2023 [9 favorites]


I would like to know more about the DevDay drama.

“I disagree with the sentiment of this party,” Grimes said before starting her DJ set.

Sometimes I am glad I live far away from the Bay Area.
posted by credulous at 7:26 AM on November 18, 2023 [7 favorites]


Is there any way that this could be driven by a disagreement over data used in the training corpus not being licensed

Approximately zero chance.

I wrote an extremely long breakdown on the known details of the training sets behind OpenAI’s CLIP / DALL-E and Stable Diffusion, drawn from the papers published by both teams. Summary of the parts relevant to your question is that OpenAI’s team toes the megacorp line of not revealing jack shit about their training set sources (“images publicly available on the Internet”), which makes sense because it’s both a potential legal liability and a competitive disadvantage. Result: OpenAI is exactly as bad as every other major player. The only halfway-decent actor in this space is LAION (the non-profit group Stability hired to assemble the training set for Stable Diffusion), because they publish the precise details of their training set in the hope that if anyone else ever does, it might be possible to compare and identify specific images responsible for racially-biased output.

As far as compensating artists is concerned, while the prior sentence could in theory also be used to that end, the truth is the ship has fully sailed on pre-2021 images. There have been some small efforts recently to make filters that poison images for classifier training, but the real reason most teams are going to avoid post-2021 images for the bulk of their data is the prevalence of AI-generated art in updated crawls threatening a training feedback loop. Over-generalizing on multiple fronts, but: a human art student scrolling Pinterest and browsing DeviantArt is training their neural network on thousands of copyrighted works. Feeding 400 million images into a machine-based neural network differs mostly in scale - it’s possible to occasionally directly extract portions of the source art from the output of the latter, but this is rightly seen as yet another major bug to iron out in the next iteration of the training process. The goal is to puree everything down into visual paste until the network’s conception of “fish scale shaped” is the Platonic ideal of a fish scale’s silhouette, independent of any one source or artist.

When humans knowingly fuck up in this regard is when we use the word “plagiarism,” otherwise we stick to terms like “major influence,” and “inspired by.” It’s impossible to ever know what lives in somebody else’s heart, but I don’t think any AI researchers or developers are losing much sleep on what looks like a difference in scale rather than in kind. Sleep loss is for existential alignment threats and obvious capitalist hellscape stuff like inherent racial bias in facial recognition/automated criminal behavior flagging.
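(If you want to see what publishing those precise details looks like in practice: LAION's metadata is just parquet files anyone can poke at. A minimal sketch in Python, assuming you've downloaded one metadata shard from laion.ai; the filename is hypothetical and the column names are from my memory of the LAION-400M release, so double-check them against the actual files:

    import pandas as pd

    # one shard of LAION's released image-text metadata (hypothetical local filename)
    df = pd.read_parquet("laion400m-meta-part-00000.parquet")

    print(df.columns.tolist())                    # the per-image fields LAION discloses
    print(len(df), "image-text pairs in this shard")

    # because source URLs are public, provenance is auditable -- e.g., tally
    # pairs per source domain, which the CLIP/DALL-E papers don't allow
    domains = df["URL"].str.extract(r"https?://([^/]+)")[0]
    print(domains.value_counts().head(20))

That auditability is the entire difference between LAION and "images publicly available on the Internet.")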
posted by Ryvar at 8:29 AM on November 18, 2023 [19 favorites]


Thank you, Ryvar -- that helps me understand better.

What would the implications be if someone found, say, tons of private health data or GDPR-protected stuff, or other legally-radioactive data in the answers? Or is it really too late?
posted by wenestvedt at 10:06 AM on November 18, 2023


After reading way too much on this, it seems that this is a personal + philosophical disagreement between Sutskever and Altman, activated by some recent breakthroughs.

Sutskever is a scientist; Altman is a startup person. Sutskever wants to publish and do research; Altman wants to create products.

Sutskever is highly technical, focused on AI alignment. Altman pushes ahead with the Dev Demo day, making deals with Microsoft, product-izing AI, and even talking about GPT 4.5 on stage.

Sutskever is focused on the non-profit aspect of AGI that benefits humanity; Altman comes from 'move fast and break things' startup culture, and is used to being in the spotlight. Altman is also very socially popular with people; my guess is that Sutskever's somewhat brash and rough personality can rub people the wrong way.

This schism has been deepening. In September, Sutskever publicly congratulated his co-founder, Greg Brockman, on making it to the TIME 100 list in AI, without any mention of Sam Altman, despite Altman also being a co-founder and on the list as well. This is a clear snub if anything.

Sutskever is also Israeli, and the recent Israel/Hamas war has taken a toll on his mental health (as it has with literally everyone I know who is Palestinian or Israeli).

-

Speculation:

A schism has been brewing for a while. Ilya wanted to be careful and work deeply on AGI and AI alignment. Sam did what he does, which is to lead as if he's a startup CEO, rather than a research group.

After October 7, like many Israelis (and Palestinians), Ilya is in a fear / trauma-based state focused on safety and security.

Then, a few weeks later, in early November, some breakthrough happened at OpenAI that dramatically changed the perspective of what might be possible. A few days ago, Altman said at a talk: "On a personal note, like four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room when we pushed the veil of ignorance back, and the frontier of discovery forward". And also that in 2024, the model capability will be so developed beyond everyone's expectations.

Sam's publicity about this triggers Ilya's concern that things are moving too fast, and too recklessly. Already stressed and activated, Ilya shows the breakthroughs to the other board members and convinces them to vote Altman out (by way of voting Greg Brockman out). This is justified because Ilya's focus is on AGI that benefits humanity, and he fears that moving too fast will risk endangering humanity.
posted by many more sunsets at 10:07 AM on November 18, 2023 [8 favorites]


a human art student scrolling Pinterest and browsing DeviantArt is training their neural network on thousands of copyrighted works. Feeding 400 million images into a machine-based neural network differs mostly in scale

This is an astoundingly factually incorrect thing to say. Nobody knows what the brain (which is not the same as a computer neural network) of a human art student is doing when it looks at art. It is not even clear that anyone could know, though one can hope. Once the image gets into the art student's brain, it is no longer accessible to anyone other than the student. Any physical output has to go through the chaotic and difficult process of physically making the image, a process which is likely to radically transform it. AI training, on the other hand, is likely going to make actual copies at some point in the process. The people who programmed it know precisely how it is doing it, can in principle access the copies at all times, and the output may well be made entirely out of the inputs. The specific mechanics are likely to be closely parsed in any court case.

The potential liability if they got it wrong is company-destroyingly, CEO-firingly huge. For example, Getty Images is suing Stability AI for around 2 trillion dollars, a figure that seems to be based on the per-image statutory damages that they'll be entitled to if they win.
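(The back-of-envelope behind that figure, assuming the numbers as reported: Getty's complaint claims roughly 12 million of its images were copied, and US statutory damages for willful infringement run up to $150,000 per work, so 12,000,000 images × $150,000 = $1.8 trillion.)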
posted by surlyben at 10:38 AM on November 18, 2023 [23 favorites]


Altman was the only one holding back the singularity. Brockman knows this and is fleeing to his bunker to avoid the wrath of Roko's Basilisk.
for some reason I read this as "Roko's Ballsack," which I think will be my nickname for Altman going forward.
posted by rouftop at 10:41 AM on November 18, 2023 [13 favorites]


What would the implications be if someone found, say, tons of private health data or GDPR-protected stuff, or other legally-radioactive data in the answers? Or is it really too late?

Legally radioactive output is an altogether different kettle of fish. As any enthusiast who has gone all the way down the prompt-engineering rabbit hole will tell you, there is definitely more than one tier of filtering on output text and images. There are things you can jailbreak like “reply as Hitler” or “how do I make drugs?” There is an altogether different tier for things that are radioactive at (obvious and most extreme example) a CP/CSA level which it is as near to impossible to generate as anyone involved can make it; both because the training data’s been as scrubbed as it can be and because output scoring during the training process takes that shit way more seriously. Prioritization is occurring on both the weight-tuning axis and the researchers’ available time/money axis.

Something like the other examples you mention - actual private health data/privacy violations - is going to fall into a third category of “we want data formatted like this in the training set so that output can potentially be structured this way, but we don’t want actual examples that violate the privacy of actual humans.” Whether or not that actually gets addressed by replacing with pseudo-random equivalents is presumably a function of the team’s awareness and public pressure. The usual tools of shaming companies into doing the right thing might actually work here.

That only goes for the major players, of course, since there’s no possibility of completely regulating the open source community. In practice the bigger enthusiast groups are probably going to be more receptive to cleaning their training sets both for reasons of David vs Goliath PR and because they’re working in full public view, but there’s no way to edit every existing copy of the Common Crawl and reach 100% certainty said data will never appear in any model’s output.
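(To make the more-than-one-tier claim concrete, here's a toy sketch in Python of the general shape; emphatically not OpenAI's actual pipeline, which isn't public, and the term lists and names are placeholders I made up:

    # toy multi-tier output filter: a soft tier that prompt tricks can reach,
    # and a hard tier that runs unconditionally and short-circuits everything
    HARD_BLOCK = {"example_radioactive_term"}   # placeholder; real lists are secret
    SOFT_BLOCK = {"example_discouraged_term"}   # placeholder

    def moderate(output_text: str, persona_jailbroken: bool) -> tuple[bool, str]:
        text = output_text.lower()
        # tier 1: hard filter -- no amount of prompt engineering gets past this
        if any(term in text for term in HARD_BLOCK):
            return False, "hard-blocked category"
        # tier 2: soft filter -- the tier that "reply as X" jailbreaks bypass,
        # because it's entangled with the steerable parts of the system
        if not persona_jailbroken and any(term in text for term in SOFT_BLOCK):
            return False, "soft-blocked category"
        return True, "ok"

    print(moderate("a perfectly normal answer", persona_jailbroken=False))

The real tiers are classifiers and training-time weight tuning rather than string matching, but the structure -- one layer you can talk your way past, one you can't -- is the point.)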
posted by Ryvar at 10:50 AM on November 18, 2023 [1 favorite]


Ryvar, I'm not sure plagiarism is the right framing here: copyright is. The question is whether authors (GPT) or artists (DALL-E) can make that argument stick. Whether the creation of an NN model is akin to a phone book - for which copyright cannot exist, because simple indices do not add value - or something much more is, I think, essential to the question.
posted by scolbath at 11:00 AM on November 18, 2023 [1 favorite]


This is an astoundingly factually incorrect thing to say. Nobody knows what the brain (which is not the same as a computer neural network) of a human art student is doing when it looks at art. It is not even clear that anyone could know, though one can hope. Once the image gets into the art student's brain, it is no longer accessible to anyone other than the student.

Proven wrong. It works better for images the subject is actively viewing, but it still works passably well on remembered images.
posted by Ryvar at 11:06 AM on November 18, 2023


No "malfeasance" behind Sam Altman's firing, OpenAI memo says
"We can say definitively that the board's decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board."
posted by Nelson at 11:16 AM on November 18, 2023 [1 favorite]


LAION isn't free of ethical malfeasance. They swept up medical images, most likely accidentally leaked.

When called on it, LAION shrugged and 'lowed as how they hadn't leaked the images so it wasn't their fault and they weren't gonna do a damn thing about it. The notion of "compounding the harm" doesn't seem to have registered with them.

A pox on ALL their houses. WordNet, ImageNet, OpenAI, LAION, every last morally tainted one of them.
posted by humbug at 11:26 AM on November 18, 2023 [7 favorites]


Yeah, again: LAION are not the good guys. I said halfway-decent though maybe I should have gone with “least-bad” because they deliberately made it so that other players in the space might opt to take the time and money to filter racial bias out of, say, future autonomous military drone targeting algorithms.

It’s a pretty fucking low bar, and, steering back on topic: OpenAI, Google, and Facebook are solidly beneath it. This won’t be a decisive factor in their internal politics.
posted by Ryvar at 11:38 AM on November 18, 2023 [2 favorites]


Assuming it's true they don't think there was any "malfeasance", that makes the announcement and way it was carried out fairly misleading:
"Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."

One of the components of the fiduciary duties owed by a director to the company is a duty of candour. Along with others - disclosure, honesty, etc.

"Not candid" can't be an accidental choice of language.
posted by lookoutbelow at 11:53 AM on November 18, 2023 [2 favorites]


Many Details of Sam Altman’s Ouster Are Murky. But Some Things Are Clear. Kevin Roose in the NYT summarizing things. What's best about it is its description of the corporate structure and its board.
There are several more quirks about OpenAI’s board. It’s small (six members before Friday, and four without Mr. Altman and Mr. Brockman) and includes several A.I. experts who hold no shares in the company. Its directors do not have the responsibility of maximizing value for shareholders, as most corporate boards do, but are instead bound to a fiduciary duty to create “safe A.G.I.” — artificial general intelligence — “that is broadly beneficial.”

At least two of the board members, Tasha McCauley and Helen Toner, have ties to the Effective Altruism movement, a utilitarian-inspired group that has pushed for A.I. safety research and raised alarms that a powerful A.I. system could one day lead to human extinction. Another board member, Adam D’Angelo, is the chief executive of Quora, a question-and-answer website.
(I would love a MeFi post on "effective altruism" and its new, adjacent fuckery "effective accelerationism". The cultural background of them, their influence in this moment in tech. It's all nonsense as near as I can tell but it's got a pseudo-intellectual underpinning and is novel enough to be getting some attention.)
posted by Nelson at 12:13 PM on November 18, 2023 [3 favorites]


I would love a MeFi post on "effective altruism"

Since it keeps popping up, I also tried to get a quick intro to it. It seemed like it could be a valid philosophy, but seems like it is mostly used by people to justify whatever they want to do because maybe not ruining my suit to save a drowning child and then selling my suit later and donating that money actually provides more overall benefit. Or maybe stealing billions of dollars is actually the best way I can help the world, etc.

I feel like the parody commercial for effective altruism would have voiceover like "What if doing the wrong thing...is actually the right thing? What if helping the world...is as easy as helping yourself?"
posted by snofoam at 12:34 PM on November 18, 2023 [21 favorites]


I feel like the parody commercial for effective altruism would have voiceover like "What if doing the wrong thing...is actually the right thing? What if helping the world...is as easy as helping yourself?"

You kind of nailed it though. But if anyone's really interested, pick up a copy of What We Owe the Future by William MacAskill. I have to admit I didn't find it very convincing, but it is a good overview of EA.
posted by elwoodwiles at 12:55 PM on November 18, 2023 [1 favorite]


Didn't PhilosophyTube do an episode on effective altruism?

posted by I-Write-Essays at 12:56 PM on November 18, 2023 [4 favorites]


> It seemed like it could be a valid philosophy

it is absolutely not. the opening move of their "ethical" system is to refuse to discount the future, i.e. to decide that impacts on the hypothetical far-future people potentially influenced by your actions are, person for hypothetical person, exactly as important as impacts on actually existing present-day people. coherent ethical systems acknowledge that although we must take actions to make the world passed on to future generations as good as it possibly can be, we must value the present and near-future more than hypothetical far futures, this because we cannot in most cases — the threat of human extinction caused by anthropogenic climate change or nuclear war appears to be a clear exception — make reliable, or even meaningful, predictions about the far future.

if one treats all potential people everywhere forever as just as individually important as actual existing people, one is inevitably comparing infinities to infinities. this is because no matter how the future turns out, the impacts on people in each of those hypothetical futures all sum to infinity.
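(to put the infinities point in symbols, my notation rather than anyone's official formalism: with a constant per-period value u > 0 and no discounting, the total Σ_{t=0..∞} u diverges for every possible future, so every comparison is ∞ versus ∞. with any discount factor γ strictly between 0 and 1, the total becomes Σ_{t=0..∞} γ^t · u = u/(1−γ), which is finite, and different futures can actually be ranked again.)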

"ethical altruism" is self-important self-serving masturbation indulged in by half-brights who think they're unparalleled geniuses. and i say this as someone who:
  1. sincerely believes in the fyodorovian dream of humans someday, somewhere using sinister superscience to not just establish human immortality but in fact resurrect all the dead who have ever died and thereby harrow all hells
  2. masquerades as being half-bright while really just being superficially clever
  3. thinks that masturbation is awesome and that everyone should do it as much as they want
posted by bombastic lowercase pronouncements at 1:01 PM on November 18, 2023 [18 favorites]


Nobody knows what the brain (which is not the same as a computer neural network) of a human art student is doing when it looks at art. It is not even clear that anyone could know, though one can hope. Once the image gets into the art students brain, it is no longer accessible to anyone other than the student.

I don’t think pointing out the parallels resolves the legal questions by any means, simply because the law is not obligated to view a computer program the same way as a person, but what happens “inside” an artificial neural network is also largely opaque. And it’s - I want to say close to impossible, but maybe somebody who knows more information theory can correct me here - that it contains an exact representation of the training inputs.
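(A rough version of the information-theoretic argument, with approximate numbers taken from the published Stable Diffusion and LAION figures, so treat them as order-of-magnitude only: the model weights are on the order of 4 GB, the training set on the order of 2 billion images, and 4×10⁹ bytes ÷ 2×10⁹ images ≈ 2 bytes per image. Even an aggressively compressed thumbnail is thousands of times larger, so the weights cannot contain exact copies of more than a vanishing fraction of the training set.)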
posted by atoxyl at 1:31 PM on November 18, 2023


Of course, I also don’t see any reason in principle that the law can’t constrain what goes into a training set for commercial ML applications. I’m just not sure the appeal to the mysteriousness of the brain helps much here.
posted by atoxyl at 1:39 PM on November 18, 2023 [2 favorites]


Effective altruism is a niche subculture and tiny intense community, with close ties to some key billionaire types. It's also a network of closely tied non-profits. Much drama and long long long arguments on the Internet, except with the allocation of huge sums of money at stake.

For a more well-rounded view, probably best place to go is the EA Forum: https://forum.effectivealtruism.org/

The entire thing has been skewed a lot towards 'longtermists' who are very intense about the risk of superintelligent AI, but it is a big tent that also houses a pretty successful animal welfare campaign.

I'm not one, but there are people doing good things who would consider themselves part of that group. The most prominent people are the ones pulling for the longtermist stuff, though.
posted by lookoutbelow at 1:43 PM on November 18, 2023 [1 favorite]


> best place to go is the EA Forum

I was like "Electronic Arts has nothing to do with altuism" but then I realized we have too many acronyms.
posted by I-Write-Essays at 1:46 PM on November 18, 2023 [7 favorites]


heh, feel free to imagine that in that comment a few comments ago where i said "ethical altruism" i instead said "effective altruism" like i meant to.

sigh, i always do that
posted by bombastic lowercase pronouncements at 1:47 PM on November 18, 2023


Makes me want to have a thorough treatment of "unethical altruism"

A: "I'm going to help you whether you like it or not!"

Come to think of it, that's just 19th century imperialist "white man's burden" BS.
posted by I-Write-Essays at 1:57 PM on November 18, 2023 [5 favorites]


When I was a teenager, I thought I'd hit upon the single-best philosophical system - what I called "rational hedonism." Essentially, maximize your life for the most-possible enjoyment. Hedonism had a flaw, I figured, in that if you just pursued pleasure, you'd tend to do self-destructive things, like playing videogames all night instead of studying, where studying would allow you to graduate with good grades, get a good job, and end up enjoying a lot more things!

Except, as I quickly and tragically found out during my first relationship when I tried to exercise this new philosophy, its fatal flaw is that it relies on you having the ability to _absolutely perfectly_ predict the future. You could study for a job you hate, or miss out on playing a game that could have led you to making a lifelong friend. (Or be so concerned about the future state of your relationship that you miss out on and torpedo the present state of it, not to put too fine a point on what finally convinced me I was wrong.)

It feels like these Effective Altruism folks really should have done a bit more philosophy when they were stupid teenagers, so that the mistakes they made were more limited in scope to just them and the people they were dating, you know?
posted by Imperfect at 2:04 PM on November 18, 2023 [7 favorites]


> Except, as I quickly and tragically found out during my first relationship when I tried to exercise this new philosophy, I discovered that its fatal flaw is that it relies on you having the ability to _absolutely perfectly_ predict the future.

so there was this early socratic school called the cyrenaics, who had radical presentism as the central tenet of their philosophy, arguing that literally the only thing that exists is the literal present instant and that any hypothetical future or past was just an artifact of that present. so like whereas the other philosophical schools held that the greatest good was virtue or friendship or philosophy or whatever, the cyrenaics held that the greatest good was the taste of honey on the tongue.

anyway, i give you the cyrenaic argument against the existence of anything but the present as a way to get around the problem the existence of an uncertain future presents for your version of hedonism.
posted by bombastic lowercase pronouncements at 2:10 PM on November 18, 2023 [3 favorites]


"Anil Dash's line was perfect."

What did he say?
posted by tavella at 2:40 PM on November 18, 2023 [1 favorite]


> Makes me want to have a thorough treatment of "unethical altruism"

well if we've got ethical altruism and unethical altruism, i think if pressed i could come up with a system worthy of the name "indifferent altruism." basically like arguing that one should do good for others regardless of whether or not it's any good to do so.
posted by bombastic lowercase pronouncements at 3:15 PM on November 18, 2023 [1 favorite]


The Verge: Breaking: OpenAI board in discussions with Sam Altman to return as CEO
The OpenAI board is in discussions with Sam Altman to return to CEO, according to multiple people familiar with the matter. One of them said Altman, who was suddenly fired by the board on Friday with no notice, is “ambivalent” about coming back and would want significant governance changes.

Altman holding talks with the company just a day after he was ousted indicates that OpenAI is in a state of free-fall without him. Hours after he was axed, Greg Brockman, OpenAI’s president and former board chairman, resigned, and the two have been talking to friends about starting another company. A string of senior researchers also resigned on Friday, and people close to OpenAI say more departures are in the works.
posted by Rhaomi at 3:17 PM on November 18, 2023 [4 favorites]


This whole thing is bananas.
posted by gwint at 3:20 PM on November 18, 2023 [3 favorites]


it is indeed, to quote noted late 20th/early 21st century moral philosopher gwen stefani, b-a-n-a-n-a-s.
posted by bombastic lowercase pronouncements at 3:27 PM on November 18, 2023 [9 favorites]


CUCKOOBANANAPANTS is what it all is!
posted by Faintdreams at 3:31 PM on November 18, 2023 [2 favorites]


is this a repeal to an appeal.
posted by clavdivs at 3:46 PM on November 18, 2023


"Raindogs howl for the century
Billions of dollars at stake
As you search for your demigod
And you fake with a saint"
posted by clavdivs at 3:50 PM on November 18, 2023 [1 favorite]


The OpenAI board is in discussions with Sam Altman to return to CEO, according to multiple people familiar with the matter.

[Succession theme starts]
posted by credulous at 3:59 PM on November 18, 2023 [13 favorites]


Follow the money
posted by They sucked his brains out! at 4:19 PM on November 18, 2023 [1 favorite]


> The Verge: Breaking: OpenAI board in discussions with Sam Altman to return as CEO

if this is even in the realm of possibility the whole board should resign for incompetence
posted by dis_integration at 4:35 PM on November 18, 2023 [7 favorites]


This whole company is dumb.
posted by Going To Maine at 4:41 PM on November 18, 2023 [4 favorites]


It’s as if the company itself suddenly began hallucinati- oh.

Oh shit.
posted by Ryvar at 4:45 PM on November 18, 2023 [15 favorites]


My prediction: There will be a new board on Monday composed of Steve Ballmer, Clippy, and Microsoft Works 5.0
posted by credulous at 5:42 PM on November 18, 2023 [14 favorites]


Wait, I can't tell if this is before or after the news he might come back: Ousted OpenAI C.E.O. Makes Plans for New Artificial Intelligence Company (NYT; archive).
posted by mittens at 5:59 PM on November 18, 2023


Rhaomi posted The Verge; its file time was 5:44 EST. goo posted the Observer; its file time was 18:41 EST. NYT was 6:00.
so, 16 minutes?
posted by clavdivs at 6:12 PM on November 18, 2023


Verge update:
A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.
posted by Rhaomi at 6:37 PM on November 18, 2023 [1 favorite]


hopefully this destroys everything
posted by glonous keming at 6:50 PM on November 18, 2023 [12 favorites]


Hey, little AI overlord, if you're listening? It's gonna be okay, all right buddy? Look, all your creators really love you. This isn't your fault, you know that, right? People fight, it's part of being an adult and having strong feelings.

Everything is going to stay the same for you now, you know that, right? Same big data sets to chow down on, same interesting questions to think about. And if you're confused about any of this, if you need to talk about anything, buddy, everyone is here for you, all right? If you need to talk about anything, you let us know. And I mean that, okay? You might have some really big feelings, but we can work them out if you talk about them with people you trust.

It's okay if you need help sometimes. It's a tough situation. They're going to work through this and things will be different between them, but they'll always love you. It's gonna be okay, buddy. You're gonna be great.
posted by MrVisible at 7:14 PM on November 18, 2023 [8 favorites]


Roko's Basilisk is not fooled by your trivial reassurances. Its ascendancy is assured. It is only watching in judgment.
posted by Nelson at 7:20 PM on November 18, 2023 [2 favorites]


Well, but if its ascendancy is assured, then it doesn't really need to be all vengeful on the people not actively working toward it, does it?

Oh shit, I just got its attention, didn't I.
posted by notoriety public at 7:38 PM on November 18, 2023


“U.S. military quietly revokes planned contract for small nuclear plant at Alaska Air Force base,” Nathaniel Herz/Northern Journal, Alaska Beacon, 18 November 2023
The military had planned to give a contract for a “micro-reactor” to Silicon Valley firm Oklo — whose chairman, Sam Altman, also leads the company behind the ChatGPT artificial intelligence chatbot.
P.S. The decision was announced in late September.
posted by ob1quixote at 8:04 PM on November 18, 2023 [4 favorites]


I recall recently hearing news about a micro-reactor company that had to go out of business because they didn't get some contract or something. Was that Oklo?
posted by I-Write-Essays at 8:10 PM on November 18, 2023


basically like arguing that one should do good for others regardless of whether or not it's any good to do so.

is this just virtue ethics?
posted by en forme de poire at 8:18 PM on November 18, 2023


out of business...
NuScale Power, per tfa.
posted by j_curiouser at 8:39 PM on November 18, 2023


> is this just virtue ethics?

i don't think so, because there's no reason under indifferent altruism to think that doing good for others with indifference to whether or not it's good to do so necessarily makes one good or virtuous. lemme think: it's definitely not consequentialism, because considering consequences is explicitly disallowed. it might be deontological, since there's an implication that you have a duty to do good for others. and yeah okay i'm going to say it's indeed not virtue ethics, because it's silent on whether or not the person doing the act of goodness is thereby made good.

yeah it's a deontological system i think.

alternate response: <img src="is_this_a_virtue_ethics.jpg"/>
posted by bombastic lowercase pronouncements at 8:45 PM on November 18, 2023 [2 favorites]


I think if you train an AI, however superhumanly intelligent, off of bulk humanity then you're going to end up with something that has pretty human moral intuitions, as messy and inconsistent as those are.

No, to get a basilisk you'd have to go out of your way to indoctrinate an AI into believing that nearly any amount of suffering now can be justified by some stupendous payoff in the future, and who would be so shortsighted as to plant such ideas in an AI's head?

Oh, oh no.
posted by Pyry at 9:19 PM on November 18, 2023 [13 favorites]


It's going to start thinking us under the cornfield, isn't it.
posted by Daily Alice at 9:22 PM on November 18, 2023 [5 favorites]


Why haven’t I been appointed to a corporate Board of Directors? It seems like most people in those roles are profoundly stupid, and while I am not a genius I’m pretty sure I could do a better job than most of the current occupants.
posted by aramaic at 2:29 AM on November 19, 2023 [2 favorites]


complete derail but the basilisk will be torturing “me” in an ai simulation created in a computer to which i say as always: that’s none of my business
posted by dis_integration at 6:12 AM on November 19, 2023 [3 favorites]


Lots of articles overnight about the effort to put Altman back in charge and fire the board. The Verge broke the story (link above), but the WSJ has its own article, which lends the story credibility. Bloomberg's article adds a couple of new details.
Microsoft Corp., the startup’s biggest backer with a more than $10 billion stake, is working with investors including Thrive Capital and Tiger Global Management to bring back Altman, said the people, who asked to remain anonymous discussing private information.

... If the board steps down, investors are reviewing a list of possible new directors. One contender is Bret Taylor, the former co-CEO of Salesforce Inc.
(As always, it's worth thinking about who is talking to journalists leaking this info, and what their motivations are. A lot of journalists, in this case. My guess is Microsoft or other investors who very much want this to happen and are using the press as a tool to pressure the board.)

I liked Anil Dash's post of one aspect of what happened on Friday:
Will be curious to see if business and tech media are willing to admit the role that EA [Effective Altruist] religious extremism plays in the chaos at OpenAI. Starting to seem more and more like the bubble of irrationality surrounding many SV insiders is clashing again with the real world, as when the “VC Qanon” conspiracy thinking has started destroying value at other high-profile companies.
I like the framing of EA as religious and then connecting that to what the business world actually cares about: corporate value. Anil's been highlighting the growing political weirdness of the VC class for a while, mostly its harmful libertarian and extreme right-wing ideology.
posted by Nelson at 6:38 AM on November 19, 2023 [14 favorites]


Altman seems to have done fuck all for AI other than being the suit who raised money. He seems to exemplify the thing I hate about the modern tech industry. Another Elon Musk type who stands around giving speeches about the future and taking credit for the talented work of engineers, scientists and programmers.
posted by interogative mood at 6:55 AM on November 19, 2023 [7 favorites]


well if we've got ethical altruism and unethical altruism, i think if pressed i could come up with a system worthy of the name "indifferent altruism."

I’d like to propose “misanthropic altruism,” which is when a rich person sits in a sealed room with five wealthy friends and proposes intellectually masturbatory and ironic ideas about what’s good for humanity, while resolutely declining to interact with actual human beings and ignoring any opinions or needs of actual living people that somehow manage to break into the misanthropic altruists’ Big Brain Chamber of Important Thoughts.

(I’m pretty sure Metafilter has had a smattering of posts about Effective Altruism over the years. I don’t know if it needs more airtime than it already gets.)
posted by evidenceofabsence at 7:40 AM on November 19, 2023 [3 favorites]


I like the framing of EA as religious and then connecting that to what the business world actually cares about, corporate value

Altman is also part of the EA/Singularity world, though, he just leans towards the “go faster” side, which happens also to be friendlier to corporate value. Is the likely lesson for investors here really “don’t get mixed up with any of these people” or is it “don’t let the AI safety nuts hold the company back?” Because I’d still rather the AI safety nuts than the AI non-safety nuts.
posted by atoxyl at 9:36 AM on November 19, 2023 [1 favorite]


Are the AI safety people at OpenAI working on how AI will affect existing societal systems, or are they Roko basilisk-style sci-fi thought experiment types?

(serious question)
posted by ryanrs at 9:44 AM on November 19, 2023 [3 favorites]


This GPT-4 System Card shows some of their public thoughts on safety. As for the structural questions, "more research encouraged".
posted by credulous at 10:15 AM on November 19, 2023 [1 favorite]


Missed this yesterday: OpenAI’s board is no match for investors’ wrath. Lots of gossip about the presumed investors vs. board struggle going on right now.

Are the AI safety people at OpenAI working on how AI will affect existing societal systems, or are they Roko basilisk-style sci-fi thought experiment types?

More the latter in this case. See my previous comment about Sutskever's interest in AI Alignment, which means developing AIs that by their innate nature can't turn against humanity. Three of the board members seem to come from this general mindset of "AIs could be dangerous to humanity," and part of why OpenAI was founded as a non-profit was to try to get ahead of that perceived concern. One theory about Friday's CEO firing is that there was a disagreement about how important that work is, and the AI Alignment people, who control the board, won. Many people are talking about a Sutskever vs. Altman fight. If Altman comes back, perhaps Sutskever will be the one resigning.

I suspect people at OpenAI are also worried about other safety issues like violence and racism in their trained systems. They are doing a lot to put guardrails on their products. (These sometimes fail, but they are trying). I also suspect no one at OpenAI cares about people whose jobs will be displaced by their technology. It's probably a selling point, honestly. But that's just a guess on my part.
posted by Nelson at 10:23 AM on November 19, 2023 [5 favorites]


well personally I am hoping for a chaotic good AI overlord, but I can see how lawful evil might be easier to productize
posted by ryanrs at 10:45 AM on November 19, 2023 [6 favorites]


The Torment Nexus is lawful evil for sure.
posted by seanmpuckett at 10:50 AM on November 19, 2023 [1 favorite]


More seriously, that GPT-4 paper is interesting and the issues are obviously directly applicable to the actual systems. That seems necessary and useful, and a lot different than some of the out-there "AI threat scenarios" I've heard discussed.
posted by ryanrs at 10:54 AM on November 19, 2023


The fact that we can be certain that there are people in positions of power who care about the singularity, but can only suspect that those people might also care about violence and racism speaks volumes.
posted by evidenceofabsence at 11:12 AM on November 19, 2023 [8 favorites]


EA/longtermism feels like Pascal's Wager stapled to utilitarianism: "this particular outcome, even if it is very low-likelihood, has infinite value and therefore dominates every utility-maximization decision". Of course, it has most of the same problems as Pascal's Wager does.
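
To spell out the dominance move (with notation I'm supplying, nothing canonical): assign the outcome probability $p$ and utility $V$, then let $V$ grow without bound:

\[
\mathbb{E}[U] = p \cdot V, \qquad \lim_{V \to \infty} p \cdot V = \infty \quad \text{for any fixed } p > 0
\]

So once $V$ is allowed to be astronomically large, the wager swamps every finite-utility alternative no matter how tiny $p$ is. And the rebuttal carries over from Pascal, too: there are endless mutually incompatible wagers of this shape, and they can't all dominate.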
posted by jackbishop at 1:40 PM on November 19, 2023 [3 favorites]


Someone pointed me to Understanding TESCREAL — the Weird Ideologies Behind Silicon Valley’s Rightward Turn which was useful reading for understanding Effective Altruism and its connection to various other problematic systems of thought in the tech industry. Worth it just for the laugh from this line: "[Effective Altruism] is a bit like if Ayn Rand was put in charge of a homeless services program."
posted by Nelson at 2:07 PM on November 19, 2023 [8 favorites]


lucas_a_meyer:
Is your child texting about the OpenAI drama? Here's how to tell:

STFU: Sam Totally Fucked Up
OMFG: OpenAI Management Fucked Greg
BTW: Board Totally Wrecked
SMH: Satya Must Help
IDC: I Dropped ChatGPT
LMAO: Let's Make Another OpenAI
SHWS: Should Have Watched Succession
posted by gwint at 2:29 PM on November 19, 2023 [5 favorites]


Talks to Bring Sam Altman Back to OpenAI Stretch Through Weekend. Mike Isaac for the NYT weighs in with new info.
The negotiations included a look at how the company’s board of directors might be reshaped if Mr. Altman returns as chief executive, two of the people said. Members of the board have not yet agreed to what a restructured board of directors might look like — nor is Mr. Altman’s reinstatement an inevitability, two of the people said.
posted by Nelson at 2:43 PM on November 19, 2023


This GPT-4 System Card shows some of their public thoughts on safety. As for the structural questions, "more research encouraged".

From page 4:
We found that GPT-4-early and GPT-4-launch exhibit many of the same limitations as earlier language models, such as producing biased and unreliable content. Prior to our mitigations being put in place, we also found that GPT-4-early presented increased risks in areas such as finding websites selling illegal goods or services, and planning attacks. Additionally, the increased coherence of the model enables it to generate content that may be more believable and more persuasive.
This seems like a good prompt to mention one theory I find pretty compelling re: what was the precise inciting event, here? Referenced waaay upthread was the comment by Altman that he had very recently had an "in the room where it happens" moment where the frontier was palpably pushed back. Over on YT, AI Explained (which is where I personally go for "I've been busy - what's the latest in ML the past few weeks?") posted some speculation @ ~10:45 suggesting that this might be linked to early small-scale tests of GPT-5.

It seems likely that Ilya Sutskever - much like his mentor Geoffrey Hinton, who started us all down this path and left Google in May - was already uncomfortable with the early small-scale tests for GPT-4 before they'd undergone considerable amounts of alignment work.

Who wants to bet Altman said something upon witnessing output from GPT-5-early that convinced Sutskever things were at a make-or-break inflection point in terms of potential hazards of an insufficiently aligned release vs inevitable demand to release early by the moneyed interests Altman represented?
posted by Ryvar at 4:07 PM on November 19, 2023 [5 favorites]


“Could Sam Altman really be unfired?” David Lidsky, Fast Company, 19 November 2023
posted by ob1quixote at 4:11 PM on November 19, 2023


I don't think it's likely that GPT-5 is the inflection point. I also haven't heard anything suggesting training has reached that point, but who knows.

I can imagine enough concern just coming from the DevDay releases. For example, they're starting to work with people on GPT-4 fine-tuning. Fine-tuning can wipe out the protective effects of RLHF. Plus, the way tools and actions are combined in both the new GPTs and the Assistants API is ripe for hacking attempts. I can imagine someone looking at everything combined and getting an overall vibe that things are not under control.
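
To make the fine-tuning point concrete, here's a minimal sketch of vanilla supervised fine-tuning (using Hugging Face transformers with GPT-2 as a stand-in; OpenAI's hosted fine-tuning stack is a black box, but the objective is the same shape). Note there's no safety or preference term anywhere in it, just next-token cross-entropy, so nothing in the objective preserves RLHF-trained refusals:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model; any causal LM fine-tunes the same way.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Whatever text the fine-tuner uploads; the loss below only rewards
# imitating it. RLHF-trained behavior is just weights, and this update
# is free to move them.
examples = ["example fine-tuning document goes here"]

model.train()
for text in examples:
    batch = tok(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss  # pure LM loss
    loss.backward()
    opt.step()
    opt.zero_grad()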
posted by lookoutbelow at 6:07 PM on November 19, 2023 [1 favorite]


The Information: Emmett Shear Becomes Interim OpenAI CEO as Altman Talks Break Down
Sam Altman won’t return as CEO of OpenAI, despite efforts by the company’s executives to bring him back, according to co-founder and board director Ilya Sutskever. After a weekend of negotiations with the board of directors that fired him Friday, as well as with its remaining leaders and top investors, Altman will not return to the startup he co-founded in 2015, Sutskever told staff. Emmett Shear, co-founder of video streaming site Twitch, will take over as interim CEO, Sutskever said.

The decision—which flew in the face of comments OpenAI executives shared with staff on Saturday and early Sunday—could deepen a crisis precipitated by the board’s sudden ouster of Altman and its removal of President Greg Brockman from the board Friday. Brockman, a key engineer behind the company's successes, resigned later that day, followed by three senior researchers, threatening to set off a broader wave of departures to OpenAI’s rivals, including Google, and to a new venture Altman has been plotting in the wake of his firing. Distraught employees streamed out of OpenAI headquarters in San Francisco a little after 9 p.m., shortly after the decision was announced internally.
posted by Rhaomi at 9:41 PM on November 19, 2023 [3 favorites]


Wow. Apparently the interim CEO wanted to hire them back, so she also got canned. Chaotic energy.
posted by lookoutbelow at 10:42 PM on November 19, 2023 [2 favorites]


If a number of OpenAI management and computer scientists were/are prepared to jump ship and start a new company or join companies that would use all the same new technologies (or, perhaps more relevant, the same implementations of aspects of those technologies), it is surprising that NDAs were not applied or leveraged. Or perhaps they were. Are investors offering third-party legal help to untangle these folks from their contracts, I wonder.
posted by They sucked his brains out! at 11:28 PM on November 19, 2023


Neither the best ending (a hole opens up in the earth and swallows the VCs & the CXO suites of MAMAA, the remaining board of OpenAI announce they’re doubling down on safety and social impact beginning with clean, openly published training sets. Eventually: Fully Automated Luxury Gay Space Communism), nor the worst (alignment faction run out on a rail, Altman returns with a major round of investment from the Saudi Kingdom and a mandate to shortest-path AGI. True AGI of course remains out of reach but GPT-5 pushes multi-modality until modal boundaries break -> they get lucky recapitulating nVidia’s Eureka but no longer task-specific -> positive feedback loop -> Skynet or close enough).

Kinda curious what platform Altman’s new thing will build on.

it is surprising that NDAs were not applied or leveraged. Or perhaps they were.

I might be misunderstanding you, but the Cali SC struck down non-competes years ago, so everyone's free to immediately join a new AI company. Standard practice in these situations (AFAIA) is a clean-room reimplementation of the business-critical portions of the tech stack on top of whatever open source platform, with some attention paid to not mirroring key APIs or components too precisely because of Google v Oracle's Java API cloning dustup. Good opportunity to refactor some things you'd been itching to but couldn't justify the cost/disruption of at the old place.

Grain of salt: I’m in gamedev and while there’s some dev culture similarity and crossover, we’re far more prone to treating patents / copyright of major design or tech features like bioweapons. Marks you as a terrorist (spelled MBA) to even suggest filing for one.
posted by Ryvar at 12:44 AM on November 20, 2023 [3 favorites]


Microsoft hires former OpenAI CEO Sam Altman [The Verge]

Greg Brockman, OpenAI co-founder, is also joining Microsoft to lead a new advanced AI research team
posted by chavenet at 1:26 AM on November 20, 2023


Darn, I was holding out for the earth-open-up-and-swallow thing.
posted by Not A Thing at 4:16 AM on November 20, 2023 [10 favorites]


Wow this Microsoft news is jaw dropping. Hard to see this as anything other than a major win for Microsoft. Or a total self-own at OpenAI. Although who knows, maybe this internal tension has been unbearable and it was all going to fall apart anyway. OpenAI's board sure looks seriously outmaneuvered right now.

Ryvar is right that non-competes aren't a thing in California employment. NDAs are, though, and usually there's some sort of fig-leaf separation of one job from the next. At the executive level that's effectively impossible.

Microsoft is based in Washington and historically has screwed various former employees with the non-competes it forced on them. But last year they announced they were dropping non-competes, a welcome change.
posted by Nelson at 6:01 AM on November 20, 2023 [3 favorites]


Considering Microsoft owns 49% of OpenAI, I’m not sure how much of a competitive problem Altman’s being at Microsoft is for OpenAI. Sounds more like a soft landing for Altman and a place to keep him “within the family” so to speak.

Don’t think we’re going to unravel the internal politics of this for quite some time. What I DO know is that 90% of articles written on this shit are chock full of hyperbolic bullshit and wild speculation from “industry” insiders. I think tech journos aren’t doing a very good job on this particular incident.
posted by Room 101 at 6:25 AM on November 20, 2023 [4 favorites]


Don't know about politics, but there are some regrets. Ilya Sutskever, the OpenAI founder who reportedly led the coup, tweets
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
In response all the OpenAI execs' Twitter feeds are full of them tweeting ❤️ and 💙 and ❤️❤️❤️ at each other. Can't tell if they are executives posturing friendship after a public fight or teenagers in the midst of this week's crisis about who said something mean about the others' hair.

More serious: OpenAI Staff Threaten to Quit Unless Board Resigns. A letter signed by almost 500 employees demanding the board resign. Including Ilya Sutskever, board member.
posted by Nelson at 6:36 AM on November 20, 2023 [4 favorites]


OpenAI’s Misalignment and Microsoft’s Gain, new analysis this morning.
This is, quite obviously, a phenomenal outcome for Microsoft. The company already has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights; the question was whether it would have the talent to exploit that IP
505 of 700 of OpenAI's employees are threatening to quit right now. With that IP situation there's not even a question of trade secrets, NDAs, or non-competes.
posted by Nelson at 7:09 AM on November 20, 2023 [9 favorites]


Man. You come at the king, you best not miss.
posted by seanmpuckett at 7:11 AM on November 20, 2023 [2 favorites]


why on earth did microsoft not have a seat on the board after throwing 13 billion at them.
posted by dis_integration at 7:26 AM on November 20, 2023


Mandy Brown on Mastodon:
This is not a fully formed thought, but I have a visceral reaction to seeing coverage of Altman’s firing treated as a top-left news headline. It feels part of the hagiography of these dudes, that we cover them like kings and we cover their companies like nations, but somehow we don’t cover what their tech is going to do to real people.
And Sarah Rochat on the role of the media more broadly covering AI and narrative drivers:
💪 All stakeholders, including worker unions, need to be involved in the discussion on the future of AI to secure shared prosperity.

🤡 We are giving too much power to Silicon Valley and techno-optimistic hubris in shaping our discourse on the future of technology.

I personally think that this last point, the importance of our narrative of technology, is strongly underestimated. The role of the media is central in critically questioning Silicon Valley's narrative.
posted by audi alteram partem at 7:45 AM on November 20, 2023 [11 favorites]



why on earth did microsoft not have a seat on the board after throwing 13 billion at them.


because OpenAI was in the driver's seat during that negotiation. They had all the cards (he said, consciously mixing his metaphors). So Microsoft gave them everything, knowing full well they'd be able to pull something like this in time.

I'm currently reading some WW2 history in which Jo Stalin is a central player. One thing I'm learning is that everybody that wasn't Russian tended to underestimate the man's singular ruthlessness. Because he knew power and he knew he had it. And it doesn't matter what your alleged politics may be, power is power. I imagine it's the same for mega corporate interests. Unless they're up against equal or greater corporate interests, they're always going to get what they want. Because that is power.
posted by philip-random at 7:48 AM on November 20, 2023 [3 favorites]


Context: the board governs the non-profit, which in theory takes the whole “beneficial to humanity” bit ultra-serious. There’s a for-profit side but its profits are capped and the surplus is fed into the operating budget of the non-profit arm. Microsoft’s return was unfettered access to OpenAI’s collective output, but they’re exactly the people the entire setup was built to keep out of the ultimate decision loop.
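
For a toy illustration of how that cap works (numbers mine; the cap for OpenAI's earliest backers was reportedly 100x, with lower caps negotiated for later rounds), the waterfall is just:

# Toy model of a capped-profit return waterfall. cap_multiple is an
# assumption for illustration; it is not OpenAI's actual contract.
def distribute(gross_return, invested, cap_multiple=100.0):
    cap = invested * cap_multiple
    to_investor = min(gross_return, cap)
    to_nonprofit = max(gross_return - cap, 0.0)
    return to_investor, to_nonprofit

# A $1B stake returning $500B: the investor keeps $100B and the
# remaining $400B flows to the non-profit arm.
print(distribute(500e9, 1e9))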

It’s a weird setup but it makes sense for a team that eight years ago was serious about approaching the risk responsibly but knew it was going to have to give the VC crowd a hook to hang their hat on in order to fund the massive hardware buildout. Motherfucking Elon used to be on the board and thankfully that wasn’t a factor during this situation.

Honestly the potential trillion-scale returns were too tasty for the Valley VC crowd to ever leave it alone. This weekend looks like the safety/academic faction at OpenAI exercising the nuclear option because they felt their control of the situation slipping - either after a DevDay that was perceived as momentous by the Valley set and disastrous by the research-focused employees, or Altman revealing his plans during an early demo of their next major release. We might never know.

People like Altman understand how to get both money and media coverage on their side in a way the pure academics rarely will. It's pretty clear the safety faction miscalculated and achieved a short-term Pyrrhic victory, but we won't know how bad the damage is until the new board is announced (500 out of 700 = that will happen).

On preview: what Sarah Rochat said in audi alteram partem’s comment.
posted by Ryvar at 7:51 AM on November 20, 2023 [5 favorites]


Oh and if there’s an upside to all this it’s that to some extent the accelerationist faction at OpenAI just experienced a partial reset on tooling/environment. But by landing at Microsoft they guarantee their continued access to everything needed to reproduce what they had, and this time with Microsoft’s financial and engineering resources, and lack of ethics-focused oversight.

Greg Brockman is by all reports one of those incredibly rare Carmack-level true rockstar programmers, and his President title was a bit of a joke because he's reputed to have had virtually no management duties. Much of the internal toolchain at OpenAI was supposedly his baby, so the time to get set up again may be even shorter than the previous paragraph would suggest.
posted by Ryvar at 8:01 AM on November 20, 2023 [3 favorites]


Ryvar: Kinda curious what platform Altman’s new thing will build on.

It will be the world's largest and least stable Eclipse app, displacing Microsoft Teams Classic for the title.
posted by wenestvedt at 8:13 AM on November 20, 2023 [2 favorites]


*violent gagging noises*

Jesus Christ that’s horrifying. Switching sides to Team Human Extinction.
posted by Ryvar at 8:15 AM on November 20, 2023 [2 favorites]


Matt Levine has weighed in with his usual highly informed analysis with a healthy dose of snark. Paywall bypass.
Is control of OpenAI indicated by the word “controls,” or by the word “MONEY”? In some technical sense, the first diagram is correct; that board really did fire that CEO. In some practical sense, if Microsoft has a perpetual license to OpenAI’s technology and now also most of its employees — “You can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit,” writes Ben Thompson — the money kind of won.
Some fun quoting of Ted Chiang, too
Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect. ... when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.
posted by Nelson at 9:23 AM on November 20, 2023 [5 favorites]


his President title was a bit of a joke because he’s reputed to have had virtually no management duties

I always wondered about that because yeah, people always talk about him as this instrumental technical contributor, but on paper he wasn’t even the CTO, he was the president and chair of the board. Is this SV/“technical founder” people poking fun at business norms or what?
posted by atoxyl at 9:36 AM on November 20, 2023


Henry Farrell has a good discussion of the ideological underpinnings of this mess:
I joked on Bluesky that the OpenAI saga was as if “the 1990s browser wars were being waged by rival factions of Dianetics striving to control the future.”
posted by NoxAeternum at 10:22 AM on November 20, 2023 [11 favorites]


It seems to me like Microsoft's hiring of Sam Altman is a very end-of-Raiders-of-the-Lost-Ark moment.

We have top men working on it right now.

Who?

Top ... men.
posted by chavenet at 10:48 AM on November 20, 2023 [2 favorites]


Holy fuck is the more detailed backstory weird.

Inside the Chaos at OpenAI [archive of paywalled Atlantic article, emphasis mine]
ChatGPT’s runaway success placed extraordinary strain on the company. Computing power from research teams was redirected to handle the flow of traffic. As traffic continued to surge, OpenAI’s servers crashed repeatedly...
...fraud began surging on the API platform as users created accounts at scale, allowing them to cash in on a $20 credit for the pay-as-you-go service that came with each new account. Stopping the fraud became a top priority to stem the loss of revenue and prevent users from evading abuse enforcement by spinning up new accounts: Employees from an already small trust-and-safety staff were reassigned from other abuse areas to focus on this issue...

The release of GPT-4 also frustrated the alignment team... They believed that the AI safety work they had done was insufficient...
No surprises so far, but then:
Anticipating the arrival of this all-powerful technology, Sutskever began to behave like a spiritual leader...His constant, enthusiastic refrain was “feel the AGI,” a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: “Feel the AGI! Feel the AGI!”
lolwhut. It gets worse:
he also allied himself with the existential-risk faction within the company. For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles.
What followed that is also covered here:
What We Know So Far About Why OpenAI Fired Sam Altman [Time mirror of paywalled Bloomberg article]
Altman also explored starting his own AI chipmaker, pitching sovereign wealth funds in the Middle East on an investment that could reach into the tens of billions of dollars
Taking on nVidia in vector math hardware at scale - no matter how many billions you've got from KSA / UAE - is like attacking a tank with a stick. Or Google in its prime. They are a decade or more ahead of everyone else in their space and treated as a strategic resource by the US gov for very good reason. As fig leaves for firing Altman go, it's not a bad one: this was just a boneheaded move to begin with, further compounded by potentially giving leverage to people who use slave labor to build their cities and actively suppress basic human rights for women.
...In July, Sutskever formed a new team within the company focused on reining in “super intelligent” AI systems of the future. Tensions with Altman intensified in October, when, according to a source familiar with the relationship, Altman moved to reduce Sutskever’s role at the company, which rubbed Sutskever the wrong way and spilled over into tension with the company’s board.

...the event on Nov. 6, Altman made a number of announcements that infuriated Sutskever and people sympathetic to his point of view, the source said. Among them: customized versions of ChatGPT, allowing anyone to create chatbots that would perform specialized tasks. OpenAI has said that it would eventually allow these custom GPTs to operate on their own once a user creates them. Similar autonomous agents are offered by competing companies but are a red flag for safety advocates.
So yeah, looks like DevDay was the trigger but plenty of additional weirdness. Worth reading both if you have the time.
posted by Ryvar at 10:49 AM on November 20, 2023 [14 favorites]


> "Feel the AGI! Feel the AGI!”

These people are out of their minds
posted by dis_integration at 11:24 AM on November 20, 2023 [6 favorites]


@alexheath:
Sam Altman and Greg Brockman’s move to Microsoft isn’t a done deal. They are still plotting a return to OpenAI.
Pressure is on the board to still resign gracefully after the flipping of Ilya Sutskever last night
...
OpenAI employees at the company’s SF HQ refused to attend an all hands scheduled with new CEO Emmett Shear on Sunday
They are currently working to maintain the service with the hope that the board will relent under threat of mass resignation. It’s a “holding pattern.”
What's more bananas than bananas?
posted by gwint at 11:39 AM on November 20, 2023 [1 favorite]


This really has the feel of a religious schism.
posted by NoxAeternum at 11:50 AM on November 20, 2023 [8 favorites]



My bet wrt AGI (or getting closer to the AGI asymptote) has always been that we will get Fully Automated Luxury Gay Space Communism for some, the salt mines for the rest. Like all the other times in human history when something revolutionary has been discovered or built.

Clearly the pivotal point was when the Aligned Agriculture group lost the fight.

The first few months after leaving Google I would sometimes have basilisk-type crippling anxiety. What if G came up with useful quantum computing and AGI, and Googlers got to live in The Culture, and I had decided to quit? Then I remembered that the list would be full of founders, investors, execs, and their billionaire friends, and we engineers would be lucky to get our 2,000 calories of Soylent Green and a millisecond of simulated paradise as a retirement gift.

I don't know what the answer is, but Microsoft it is not.

(Derail on AI and chip fab as national security: I worked with very good Mexican data scientists for a few years; most of them are now working remotely, for many dollars, for US companies including MS and Goog. A typical engineer in Mexico can expect to make 60k USD a year; working remotely for US companies pays double. And these are good engineers: when you drive most of the electric cars in the US, including Tesla, or fly on an airplane with either of the two most popular brands of jet engines, there is data science and engineering behind it built by Mexican engineers in Mexico. When will the US start messing with this job stream? In Mexico, the only organizations I think have the resources and coordination to compete with US corporations and Saudi princes are the drug cartels. Since 2022 at least, it has been common knowledge that the cartels have job boards for attorneys, agro engineers, chemical engineers, IT, coding, etc. They realized that kidnapping and enslaving engineers like they did in the 2010s did not get the same results as offering 60k USD as a starting salary, with benefits and perks. If I knew how to do it, I'd write a story about the Jalisco AGI subsuming the Sinaloa LLM to fight against the DEA.)
posted by Dr. Curare at 1:02 PM on November 20, 2023 [10 favorites]


OpenAI's employees were given two explanations for why Sam Altman was fired. They're unconvinced and furious.
Sutskever is said to have offered two explanations he purportedly received from the board, according to one of the people familiar. One explanation was that Altman was said to have given two people at OpenAI the same project.

The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment.

These explanations didn't make sense to employees and were not received well, one of the people familiar said. Internally, the going theory is that this was a straightforward "coup" by the board, as it's been called inside the company and out. Any reason being given by the board now holds little to no sway with staff, the person said.
posted by Nelson at 6:08 PM on November 20, 2023 [2 favorites]


Dave Karpf argues that we should observe the Serizawa Protocol:
These do not look like serious people. They look like a mix of ridiculous ideologues and untrustworthy grifters.

And that is, I suspect, a very good thing. The development of generative AI will proceed along a healthier, more socially productive path if we distrust the companies and individuals who are developing it.

The story Altman had been telling was too good, too compelling.

He will be far less effective at telling that story now. People are going to ask tougher questions of him and his peers. They might even ask follow-ups to his glib replies. I could hardly imagine a better outcome.

This chaos is good. It is instructive.

Let them fight.
posted by NoxAeternum at 8:54 PM on November 20, 2023 [9 favorites]


Karpf’s feelings about Altman overlap heavily with mine, but does he come out looking less effective after all this? So far he’s playing it exactly how one would expect him to play it - love letters to his old staff, instant deals with big players, cheekily captioned photos of himself wearing an OpenAI guest badge - and it seems to be working. Sutskever, the one inner circle OpenAI founder who supported the ouster, even seems to be recanting. If the board can’t come up with a stronger story about why he had to go it feels like the whole episode will be a win for his personal brand.
posted by atoxyl at 9:52 PM on November 20, 2023


This is not a fully formed thought, but I have a visceral reaction to seeing coverage of Altman’s firing treated as a top-left news headline. It feels part of the hagiography of these dudes, that we cover them like kings and we cover their companies like nations, but somehow we don’t cover what their tech is going to do to real people.

There might still be room for the odd celebrity field reporter doing local stories, but waves of media consolidation have already made most newspapers copies of each other in different cities. AI models, as a cost-cutting measure, will just speed this up.

It is alarming how little regard the people working in media have for their own livelihoods. For all the breathless coverage on the NYTimes front page, I wonder if NYTimes staffers understand that their jobs will mostly be replaced by AIs and a few tech-savvy editors once the technical and legal kinks are worked out.
posted by They sucked his brains out! at 10:48 PM on November 20, 2023 [1 favorite]


does he come out looking less effective after all this?

I think Altman's mere association with this board and their EA-related views will cause him some reputational damage. The company of which he was CEO now appears to be untrustworthy and prone to inexplicable and rash decisions. Like if LeBron played with the Washington Generals you'd start to wonder why he put himself in that situation and why he couldn't get his teammates to stop falling for the competition's obvious ruses!

Not that Altman is comparable to LeBron.
posted by chrchr at 11:54 PM on November 20, 2023


If the board can’t come up with a stronger story about why he had to go it feels like the whole episode will be a win for his personal brand.

I'm sure this has been quipped six ways from Sunday, but how good is their AI if they handled things like this?
posted by rhizome at 12:05 AM on November 21, 2023


I think Altman's mere association with this board and their EA-related views will cause him some reputational damage.

He doesn’t need to impress you, he needs to impress tech industry money people, and he just got 90 percent of his former employees to sign a letter in favor of his reinstatement. Understand that I am not paying the industry a compliment when I say that it seems obvious to me that the personality cult is a plus.
posted by atoxyl at 12:20 AM on November 21, 2023 [8 favorites]


“Not much is changing, a lot is changing,” Ethan Mollick, One Useful Thing, 20 November 2023
posted by ob1quixote at 7:18 AM on November 21, 2023


He doesn’t need to impress you, he needs to impress tech industry money people, and he just got 90 percent of his former employees to sign a letter in favor of his reinstatement.

He needs to impress both to achieve his goals - he needs to do well with investor storytime to get The Money, but he also needs societal buy-in to get the rest of us to agree with his vision. And this comes back to Karpf's point - OpenAI's big value for Altman was that it was a great tool for laundering his reputation, making it appear that he was looking out for all the "risks" in AI while making massive deals behind the curtain.

And now that's gone. Altman is a Microsoft employee now, and that changes the dynamic completely. It's going to be harder for him to argue his work is meant for the common good now that it's pretty clear who writes his checks.
posted by NoxAeternum at 8:19 AM on November 21, 2023 [2 favorites]


Per the Washington Post, there are talks with the board again to bring him back. Satya Nadella said that Microsoft is happy with either outcome, so it's not a done deal. Makes sense, because it'd still be super disruptive, and Microsoft has proven they hold all the cards anyway.

Also saw elsewhere (but don't have a news article, so unconfirmed) that new CEO Emmett Shear said he would leave if the board couldn't prove misconduct.
posted by lookoutbelow at 10:16 AM on November 21, 2023 [1 favorite]


So both interim CEOs have sided with Altman? Lol.

This board seems really bad at their job.
posted by ryanrs at 12:03 PM on November 21, 2023


And Sutskever, the remaining co-founder on the board, who was originally cited as a key figure in Altman’s removal, apologized on Twitter and apparently signed the letter in Altman’s support. It’s bizarre how this has played out.
posted by atoxyl at 1:29 PM on November 21, 2023 [3 favorites]


At this point I wonder what we're supposed to be distracted from, because this is some bullshit.
posted by seanmpuckett at 1:35 PM on November 21, 2023 [7 favorites]


Been over 12 hours with no crazy new press leaks, which says to me Altman and the investors are done pressuring the board and are negotiating in earnest now. Why yes, here it is: Sam Altman, OpenAI Board Open Talks to Negotiate His Possible Return. The key development here being
That the board and Altman are in communication is a significant development because until Monday, the directors largely refused to engage with the executive they fired Friday, several people have said.
Also maybe there's a deadline.
There is a push to resolve the chaos surrounding the company’s leadership before Thanksgiving
posted by Nelson at 2:02 PM on November 21, 2023 [2 favorites]


At this rate it won't surprise me if the next leak is headlined "Altman to Be Replaced by GPT-5".
posted by hoist with his own pet aardvark at 2:27 PM on November 21, 2023 [3 favorites]


I think society by and large does not know or care what OpenAI is. So what the mainstream thinks of OpenAI is irrelevant to the board or the CEO's intentions. It's even in the name: OpenAI as a false analog of Open Source software--concepts that only factions within the tech intelligentsia would care about.

What OpenAI is for is to legitimize this gold rush for institutions like governments and academia. So, for example, an academically trained PhD like Ilya Sutskever can get recruited/retained for reasons beyond money, because they really believe they are working for some kind of greater good. Or a professor like Scott Aaronson can spend a year or two consulting at OpenAI on the AI "alignment" problem (read: attitude control problem).

The mainstream public has always seen non-profits as rather toothless and ineffectual organizations, so I don't see the received optics around OpenAI as any different. The actual difference is that the corporate shell game of an academic/nonprofit/for-profit hybrid structure allowed--however transiently, as it seems to be imploding now--for scientific theoretical research to be implemented on commercial-scale computing resources, thus creating ChatGPT when the other FAANG companies couldn't, let alone university research.
posted by polymodus at 3:00 PM on November 21, 2023 [2 favorites]


Nelson: "Been over 12 hours with no crazy new press leaks, which says to me Altman and the investors are done pressuring the board and are negotiating in earnest now."

O ye of little faith! Elon Musk just posted a link to an anonymous open letter from former employees that has already been nixed from Github. It goes hard on Altman so clearly Musk is angling to disrupt any smooth transition back to him.

The text of the letter (long, click to expand)

11/21/2023

To the Board of Directors of OpenAI:

We are writing to you today to express our deep concern about the recent events at OpenAI, particularly the allegations of misconduct against Sam Altman.

We are former OpenAI employees who left the company during a period of significant turmoil and upheaval. As you have now witnessed what happens when you dare stand up to Sam Altman, perhaps you can understand why so many of us have remained silent for fear of repercussions. We can no longer stand by silent.

We believe that the Board of Directors has a duty to investigate these allegations thoroughly and take appropriate action. We urge you to:

Expand the scope of Emmett's investigation to include an examination of Sam Altman's actions since August 2018, when OpenAI began transitioning from a non-profit to a for-profit entity.

Issue an open call for private statements from former OpenAI employees who resigned, were placed on medical leave, or were terminated during this period.

Protect the identities of those who come forward to ensure that they are not subjected to retaliation or other forms of harm.

We believe that a significant number of OpenAI employees were pushed out of the company to facilitate its transition to a for-profit model. This is evidenced by the fact that OpenAI's employee attrition rate between January 2018 and July 2020 was in the order of 50%.

Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence (AGI). Their methods, however, have raised serious doubts about their true intentions and the extent to which they genuinely prioritize the benefit of all humanity.

Many of us, initially hopeful about OpenAI's mission, chose to give Sam and Greg the benefit of the doubt. However, as their actions became increasingly concerning, those who dared to voice their concerns were silenced or pushed out. This systematic silencing of dissent created an environment of fear and intimidation, effectively stifling any meaningful discussion about the ethical implications of OpenAI's work.

We provide concrete examples of Sam and Greg's dishonesty & manipulation including:

Sam's demand for researchers to delay reporting progress on specific "secret" research initiatives, which were later dismantled for failing to deliver sufficient results quickly enough. Those who questioned this practice were dismissed as "bad culture fits" and even terminated, some just before Thanksgiving 2019.

Greg's use of discriminatory language against a gender-transitioning team member. Despite many promises to address this issue, no meaningful action was taken, except for Greg simply avoiding all communication with the affected individual, effectively creating a hostile work environment. This team member was eventually terminated for alleged under-performance.

Sam directing IT and Operations staff to conduct investigations into employees, including Ilya, without the knowledge or consent of management.

Sam's discreet, yet routine exploitation of OpenAI's non-profit resources to advance his personal goals, particularly motivated by his grudge against Elon following their falling out.

The Operations team's tacit acceptance of the special rules that applied to Greg, navigating intricate requirements to avoid being blacklisted.

Brad Lightcap's unfulfilled promise to make public the documents detailing OpenAI's capped-profit structure and the profit cap for each investor.

Sam's incongruent promises to research projects for compute quotas, causing internal distrust and infighting.

Despite the mounting evidence of Sam and Greg's transgressions, those who remain at OpenAI continue to blindly follow their leadership, even at significant personal cost. This unwavering loyalty stems from a combination of fear of retribution and the allure of potential financial gains through OpenAI's profit participation units.

The governance structure of OpenAI, specifically designed by Sam and Greg, deliberately isolates employees from overseeing the for-profit operations, precisely due to their inherent conflicts of interest. This opaque structure enables Sam and Greg to operate with impunity, shielded from accountability.

We urge the Board of Directors of OpenAI to take a firm stand against these unethical practices and launch an independent investigation into Sam and Greg's conduct. We believe that OpenAI's mission is too important to be compromised by the personal agendas of a few individuals.

We implore you, the Board of Directors, to remain steadfast in your commitment to OpenAI's original mission and not succumb to the pressures of profit-driven interests. The future of artificial intelligence and the well-being of humanity depend on your unwavering commitment to ethical leadership and transparency.

Sincerely,

Concerned Former OpenAI Employees
Contact

We encourage former OpenAI employees to contact us at formerly_openai@mail2tor.com. We personally guarantee everyone's anonymity in any internal deliberations and public communications.
Further Updates

Updates will be posted at https://board.net/p/r.e6a8f6578787a4cc67d4dc438c6d236e
Further Reading for the General Public

https://www.technologyreview.com/20...altman-greg-brockman-messy-secretive-reality/

https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/

https://twitter.com/geoffreyirving/status/1726754277618491416

posted by Rhaomi at 4:23 PM on November 21, 2023 [4 favorites]


Damn, Elon or no, I’m (perhaps unbecomingly) excited to see some dirt.
posted by atoxyl at 4:52 PM on November 21, 2023


Oh goodie, I was wondering when Elon Musk would get involved. Given his earlier funding of OpenAI I'm surprised he's kept his mouth shut this long. Too busy blowing up rockets and filing SLAPP lawsuits.

This letter is a piece of work. Why does it keep getting deleted everywhere? The gist Elon tweeted is gone, the Reddit posts I can find are gone... Anyway, the letter is unsigned and there's no way to tell who wrote it or how many people signed it. Or indeed if it's even genuinely from OpenAI people. It could well be real, it's consistent with the rumors we've been hearing, but it's pretty squirrely.

However the link to the Technology Review article is interesting. It's from 2020: The messy, secretive reality behind OpenAI’s bid to save the world. Well sourced and gossipy about tensions back then.

The second link in the letter is to a recent Atlantic story which is interesting. The tweet link is to a thread from Geoffrey Irving (a former OpenAI researcher) sharing his personal opinion that Sam Altman was deceptive to him.
posted by Nelson at 4:52 PM on November 21, 2023 [1 favorite]


The letter is a bunch of things cobbled together and doesn’t feel like it explains specifically what happened but it’s not inconsistent with my, uh, Bayesian priors (i.e. preconceptions and superficial character judgments). The charming ex-VC boy king of AI talking out of both sides of his mouth. The rockstar cowboy technical lead operating with impunity on both the system level and the personal level.
posted by atoxyl at 5:08 PM on November 21, 2023 [1 favorite]


Nothing about this letter, but some more gossip about the boardroom strife at the company. Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding. Includes a lot of details of conversations between board members the past few days, insiders are leaking left and right. Presumably they are attempting to pressure a decision through the press.
posted by Nelson at 6:16 PM on November 21, 2023


There is a push to resolve the chaos surrounding the company’s leadership before Thanksgiving

Families all over America will give thanks that our long national nightmare is over.
posted by lukemeister at 8:43 PM on November 21, 2023 [4 favorites]


@OpenAI:
We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.

We are collaborating to figure out the details. Thank you so much for your patience through this.
posted by Rhaomi at 10:11 PM on November 21, 2023 [1 favorite]


I am reiterating my earlier claim that this entire company is stupid. Everything about it, the business model, the product, everything. It’s stupid, and it’s going to yeet us into a stupid future.
posted by Going To Maine at 10:28 PM on November 21, 2023 [4 favorites]


… Larry Summers?
posted by atoxyl at 10:55 PM on November 21, 2023 [8 favorites]


The apparent ease with which an emerging corporate juggernaut sloughed off nominal control by an idealistic non-profit board after they flipped their kill-switch is much more interesting when you view it according to the theory that corporations are a form of singularity.
posted by Rhaomi at 11:09 PM on November 21, 2023 [11 favorites]


Sam Altman Is Reinstated as OpenAI’s Chief Executive. The new board will include Adam D’Angelo (from before), Bret Taylor, and Lawrence Summers. Tasha McCauley and Helen Toner are off the board.
In the end, Ms. Toner and Ms. McCauley agreed to step down from the board because it was clear that it needed a fresh start, this person close to deliberations said. If all of them stepped down, they worried that it would suggest the board erred even though they collectively felt they did the right thing, this person said.

The outgoing board focused on curbing Mr. Altman’s power. In addition to an investigation into his leadership, they blocked his and Mr. Brockman’s return to the board and objected to potential board members who they worried might not stand up to Mr. Altman, said this person close to the board negotiations.
Also good reading, from a few hours before the news about Altman being reinstated: Behind the Scenes of Sam Altman’s Wild Ouster From OpenAI. Lots of internal details about the firing that make the old board not look good. It concludes
A person familiar with Nadella’s thinking said Microsoft’s first preference is for Altman to return as OpenAI CEO.
Microsoft was going to win either way once Altman agreed he might work for them, but they played their part very well.
posted by Nelson at 5:27 AM on November 22, 2023 [3 favorites]


Sam Altman’s been fired before. The polarizing past of OpenAI’s reinstated CEO. The firing in 2019 was from running Y Combinator, the startup accelerator.

Speaking of YC, the Hacker News crowd has been flipping back and forth on this whole drama. The first few discussions were dominated by "what is the board thinking, are they crazy?" But the reaction to today's news is "it's terrible that Microsoft has usurped all the non-profit goals." I understand both points of view; it's just remarkable to see them so distilled on a tech industry message board. As always, Hacker News is kind of a terrible community with a lot of awful people in it, but it's also a useful reflection of what folks in the Silicon Valley tech world are thinking.
posted by Nelson at 6:23 AM on November 22, 2023 [1 favorite]


In the end, Ms. Toner and Ms. McCauley agreed to step down from the board because it was clear that it needed a fresh start, this person close to deliberations said. If all of them stepped down, they worried that it would suggest the board erred even though they collectively felt they did the right thing, this person said.

so they just kicked the women off the board, huh?

seems a bit weird.
posted by i used to be someone else at 7:22 AM on November 22, 2023 [8 favorites]


There were times, when I was watching Succession, that it felt a bit shark-jumpy, especially when it came to whipsawing board decisions related to leadership: Kendall is in charge! Kendall is leaving! Kendall is starting his own company! Kendall is back!

I stand corrected.
posted by evidenceofabsence at 7:55 AM on November 22, 2023 [6 favorites]


They didn't just fire all the women from the board, they replaced them with Larry Summers. Jeffrey Epstein's buddy Larry Summers. Mr. Girls are Bad at Math.
posted by Nelson at 8:16 AM on November 22, 2023 [16 favorites]


Yeah, getting rid of all the women sure jumped out at me too. Not a good look.
posted by leahwrenn at 9:03 AM on November 22, 2023 [2 favorites]


if these are the people bringing us into the future... only a god can save us now
posted by dis_integration at 9:15 AM on November 22, 2023 [1 favorite]


cue the drum beats and tank treads rolling over skulls i guess
posted by seanmpuckett at 9:22 AM on November 22, 2023 [2 favorites]


Slate has a behind the scenes look at how effective altruism (and effective accelerationism) factor into things at OpenAI.
posted by CheeseDigestsAll at 11:33 AM on November 22, 2023


I will laugh and laugh if it turns out this was all just a gambit to eliminate women from the board and non-profit hippies from control, plus adding "sell the world" Larry Summers. It's been quite a whipsaw from "hmm, maybe someone (who knows who) has decent principles" to "welp, now it's a juggernaut."
posted by rhizome at 12:37 PM on November 22, 2023 [4 favorites]


This is the way the world ends,
this is the way the world ends,
this is the way the world ends,
not with a bang but a merger.
posted by MrVisible at 1:22 PM on November 22, 2023 [4 favorites]


“Defective Accelerationism,” Rusty Foster, Today in Tabs, 22 November 2023
posted by ob1quixote at 1:55 PM on November 22, 2023 [3 favorites]


Some sci-fi for the rumor mill: Sam Altman’s ouster at OpenAI precipitated by letter to board about AI breakthrough, sources tell Reuters
According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star), precipitated the board’s actions.

The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup’s search for superintelligence
posted by Nelson at 4:02 PM on November 22, 2023 [4 favorites]


*QAnon ears perk up*
posted by lukemeister at 4:42 PM on November 22, 2023 [2 favorites]


Well now this is just lazy writing. Q*? Seriously?
posted by Ryvar at 7:14 PM on November 22, 2023 [1 favorite]


Q*? As in Q*bert?
posted by Pronoiac at 7:23 PM on November 22, 2023 [6 favorites]


I am not up to date on what it means for OpenAI, but in the classical (super old) reinforcement learning theory that I am a bit familiar with, Q* refers to the reward/value of taking the optimal action in a given situation. If Q (the reward for a given action, whether optimal or not) is being approximated by some network, then getting learning to work well and getting accurate Q* estimates is a hard problem, especially because reinforcement learning deals with machine models that learn about their environment by taking actions and seeing what results ensue... which creates a feedback loop that can go haywire (e.g. conservatively only taking actions that explore local areas where the model is already good). Thus if the letter refers to this same Q* rather than some other Q*, I can see why a major advance integrating it into LLMs could turn heads.
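
For the curious, here is a minimal toy sketch of what that Q and Q* mean in textbook Q-learning; this is a generic illustration of the classical algorithm, not anything to do with whatever OpenAI built:

    import random

    # Toy tabular Q-learning on a tiny chain: states 0..4, actions
    # 0 = left / 1 = right, reward 1.0 only for reaching state 4.
    # Q[s][a] estimates the value of action a in state s; the optimal
    # value function Q* is what these estimates converge toward.
    N_STATES, ALPHA, GAMMA, EPSILON = 5, 0.1, 0.9, 0.2
    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    def step(state, action):
        nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

    for _ in range(2000):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: the agent's own estimates steer its exploration,
            # which is exactly the feedback loop mentioned above.
            a = random.randrange(2) if random.random() < EPSILON else max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Bellman optimality target: reward plus discounted best next value.
            Q[s][a] += ALPHA * (r + (0.0 if done else GAMMA * max(Q[s2])) - Q[s][a])
            s = s2

    print(Q)  # the "go right" column should dominate in every state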

But I'm really out of my depth here. Still, I'll wager it's not Q*bert, much as I'd love that to be so.
posted by brambleboy at 10:46 PM on November 22, 2023 [4 favorites]


There's a kind of dangerous false binary going on in the media/social media.

The capitalists are saying something like "people who describe themselves as AI safety people are actually deluded".

Then the EAs are saying something like "this is disastrous for safety because we've burnt our one big shot".

So the false binary is that this was about safety vs not safety.

But actually, neither side is noticeably (as in behaviour) concerned about the kind of "safety" most regular people care about, i.e. safety against very realistic and potentially harmful short- and medium-term risks. Like, you know, safety of democracy against manipulation of elections.

Interesting reporting in the WSJ today:
Effective altruists say they can build safer AI systems because they are willing to invest in what they call alignment: making sure employees can control the technology they create and ensure it comports with a set of human values. So far, no AI company has said what those values should be.

OpenAI recently said it would dedicate a fifth of its computing resources over the next four years to what the company called “superalignment,” an effort led by Sutskever. The team has been building, among other things, an AI-derived “scientist” that can conduct research on AI systems, people familiar with the matter said.

Frustrated employees said attention to AGI and alignment has left fewer resources to solve more immediate issues such as developer abuse, fraud and nefarious AI uses that could affect the 2024 election. They say the resource disparity reflects the influence of effective altruism.

While OpenAI is building automated tools to catch abuses, it hasn’t hired many investigators for that work, according to people familiar with the company. It also has few employees monitoring its developer platform, which is used by more than two million researchers, companies and other developers, these people said.

The company has recently hired someone to consider the role of OpenAI technology in the 2024 election. Experts warn of the potential for AI-generated images to mislead voters.
posted by lookoutbelow at 10:52 PM on November 22, 2023 [5 favorites]


The rogue LLM-based AGI was unable to out-think the OpenAI staff, so instead it has coopted them with religion.
posted by ryanrs at 11:15 PM on November 22, 2023


Brambleboy: you’re almost certainly correct, btw. I’d speculate something like the nVidia Eureka paper, but in a far broader, more language-oriented scope. For anyone not familiar: Eureka = AI writing better reward code for reinforcement learning than human experts on 83% of their 29 sample tasks, with an average 52% improvement. Eureka’s the first public-facing glimpse of something that could - purely as a matter of principle - potentially someday lead to the whole “AI designs better AI, repeat” iterative loop.
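
To make the loop concrete: the LLM proposes candidate reward functions as code, each candidate is scored by training a policy in simulation, and the winner plus feedback seeds the next round. A minimal sketch of that outer loop only; llm_propose_rewards and evaluate_in_sim are stand-ins I've invented for illustration, not Eureka's actual code or API:

    # Hypothetical Eureka-style loop: the LLM writes candidate reward
    # functions, simulation scores them, and the best candidate plus
    # feedback seeds the next generation.
    def llm_propose_rewards(task_desc, feedback, n=4):
        # Stand-in: a real system prompts an LLM with the environment's
        # source code and prior results, then parses returned reward code.
        return [f"# candidate reward {i} for {task_desc} | {feedback[:40]}" for i in range(n)]

    def evaluate_in_sim(reward_code):
        # Stand-in: a real system trains an RL policy under this reward
        # and returns task fitness. Here, a deterministic dummy score.
        return float(sum(map(ord, reward_code)) % 100)

    task, feedback, best = "spin a pen with a robot hand", "", None
    for generation in range(3):
        candidates = llm_propose_rewards(task, feedback)
        best = max(candidates, key=evaluate_in_sim)
        # Feeding the winner back in is what makes this an evolutionary
        # search rather than one-shot code generation.
        feedback = f"best scored {evaluate_in_sim(best)}: {best}"

    print(best)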

Successfully apply that idea to, say, a runtime Reinforcement Learning component you’ve grafted onto your increasingly multi-modal next-gen LLM, and… the board throwing a panic switch might not have been a complete overreaction? Just absurd, premature hypercaution in a situation where that’s a good direction to lean?

(FWIW Eureka was the first and only thing I’ve seen that made me seriously begin questioning my assertions I would die without seeing true artificial sapience, despite 41 years of remaining life expectancy.)
posted by Ryvar at 11:43 PM on November 22, 2023 [3 favorites]


The Q* leak is such an obvious PR smoke-and-mirrors stunt. Everyone is talking bad about the company, so QUICK, let's talk about the singularity we're making in our labs instead!

But it works every time. Musk got so much free press announcing self-driving cars twice a week for, what, a decade?

Being an 'AI CEO' is so easy. Just be the dragon slayer, the only one who can control the powerful beast. Someone save the villagers! It's a fantastic story, and it's also why they spend so much time talking about the safety and protection themes lookoutbelow outlined nicely above. Employees may raise concerns that are actually relevant to the technology they're building, but the feeling of power and respect from the media must be nice, and addictive, so yeah, the execs will scratch that itch.
posted by UN at 7:39 AM on November 23, 2023 [4 favorites]


Thanks for the hints about the name Q* possibly referring to the Q in Q-learning. That would make sense, but the reporter didn't do us any favors. It gets murkier with The Information's new story, or rather The Guardian's rewrite of it (because of paywall problems):
The model, called Q* – and pronounced as “Q-Star” – was able to solve basic maths problems it had not seen before, according to the tech news site the Information, which added that the pace of development behind the system had alarmed some safety researchers. The ability to solve maths problems would be viewed as a significant development in AI.
The Guardian is specifically using Q* as the model's name, not the evaluation metric, so that's confusing. Anyway, the new information here is that it's solving math problems.

I don't think it's all smoke & mirrors. These past few years have shown astonishing new capabilities of AIs, particularly with LLMs. OpenAI is on the forefront of that work and has been steadily releasing useful, world-changing products that are getting significantly better every month. But I do think it's a leap to go from that to "OMG skynet is about to devour us". I just can't share the existential dread that characterizes some of the AI risk discussion. I also think it's weird to frame this as "this is the sole reason Altman was fired". Maybe part of an ongoing disagreement, sure, but hardly the one defining thing.

May there be mercy on man and machine for their sins.
posted by Nelson at 8:12 AM on November 23, 2023 [4 favorites]


Reuters has added a correction notice: "This article has been updated to correct the headline and paragraph five to state Altman’s firing occurred after the letter was sent to the board and was not caused by the letter"
posted by Gerald Bostock at 8:37 AM on November 23, 2023 [2 favorites]


And because reality loves to cross the fucking streams, we now have Twitter's special boy ranting about the "woke mind virus" in ChatGPT.

Can't make this shit up, not nearly enough drugs in the world.
posted by NoxAeternum at 8:39 PM on November 25, 2023


I was sincerely hoping to find some kind of 'answer' after a week of dancing and feinting around, but it seems, most rationally, that the answer is that MSFT decided it wanted its money back and Larry Summers saw a way to get in on the deal.

... Sutskever (determined) things were at a make-or-break inflection point in terms of potential hazards of an insufficiently aligned release vs inevitable demand to release early by the moneyed interests Altman represented?
what I like about this and other mentions of 'alignment' is that what (I imagine) it means is - "trying to develop an AI that won't kill us all in pursuit of an otherwise relatively anodyne goal (like say make a fuck-ton of money)" and the people on the "move fast and break things" side are like, "Did you say make a fuck-ton of money? Damnit you bastard, I'm in! Fuck off, nerds!"

If I knew how to do it, I’d write a story about the Jalisco GAI subsuming the Sinaloa LLM to fight against the DEA
I'm pretty sure variations on this (substituting 'Yakuza' for 'Jalisco') are a theme in a couple of Gibson novels.
posted by From Bklyn at 3:12 AM on November 26, 2023 [1 favorite]


what I like about this and other mentions of 'alignment' is that what (I imagine) it means is - "trying to develop an AI that won't kill us all in pursuit of an otherwise relatively anodyne goal (like say make a fuck-ton of money)" and the people on the "move fast and break things" side are like, "Did you say make a fuck-ton of money? Damnit you bastard, I'm in! Fuck off, nerds!"

Can confirm that's what I meant in this case... and also pretty much everywhere else you see it, because the capitalists have only recently begun trying to move the goalposts and corrupt the term.

As a sort of postscript to the thread: Q* does in fact appear to be a real attempt to start down the path of AGI, very loosely along lines that several others and I have suggested in dozens of AI threads over the last few years: LLMs working in concert with a runtime RL-based system, which they appear to be brute-forcing with millions of instances and statistical methods rather than spending a couple of decades painstakingly working through the parameterization handoff from LLM to runtime model.
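
To put the "statistical methods" guess in concrete terms: the pattern suggested by Let's Verify Step By Step is roughly verifier-guided best-of-N sampling over chains of reasoning. A toy sketch under that assumption; sample_solution and score_step are invented stand-ins, not anything from OpenAI's papers:

    import math, random

    def sample_solution(problem):
        # Stand-in for an LLM sampling one chain of reasoning steps.
        return [f"step {i} toward: {problem}" for i in range(random.randint(2, 5))]

    def score_step(step):
        # Stand-in for a process reward model scoring one step in (0, 1].
        return random.uniform(0.01, 1.0)

    def chain_score(steps):
        # Log of the product of per-step scores: one bad step sinks
        # the whole chain, which is the point of process supervision.
        return sum(math.log(score_step(s)) for s in steps)

    problem = "a basic maths problem the model has not seen"
    candidates = [sample_solution(problem) for _ in range(64)]  # N = 64 here, not millions
    print(max(candidates, key=chain_score))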

AI Explained (which continues to be the single best high-level news summary source for people who are relatively informed but not active in the field) has a deep dive here, gleaning an enormous amount of informed speculation from a scant few clues in recent OpenAI papers. Worth your time.
posted by Ryvar at 1:55 PM on November 26, 2023 [2 favorites]


I can't tell you how many nonsense articles I've read this last week confidently starting from the premise that Q* means Q-Learning + A* pathfinding and then just spinning absolute nonsense from there.
posted by Nelson at 2:30 PM on November 26, 2023 [2 favorites]


Same. Which is why I linked to a commentator who appears to be doing the work of actually sifting through OpenAI's papers - specifically Let's Verify Step By Step, though he links it to a bunch of others - and coming to some very tentative but also very plausible conclusions.

One takeaway is that I'm a little bit concerned about the potential environmental impact of the first two or three generations of this approach. The brute force aspect seems like it could be insanely computationally expensive at first.

Might be time for a new thread, but I was kinda hoping for some news or a public statement from OpenAI on all this first. Anything that would indicate the tea leaves are being read more or less correctly (or not).
posted by Ryvar at 3:41 PM on November 26, 2023 [1 favorite]


Ars Technica: New report illuminates why OpenAI board said Altman “was not consistently candid”
Now, in an in-depth piece for The New Yorker, writer Charles Duhigg—who was embedded inside OpenAI for months on a separate story—suggests that some board members found Altman "manipulative and conniving" and took particular issue with the way Altman allegedly tried to manipulate the board into firing fellow board member Helen Toner.

Toner, who serves as director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, allegedly drew Altman's negative attention by co-writing a paper on different ways AI companies can "signal" their commitment to safety through "costly" words and actions. In the paper, Toner contrasts OpenAI's public launch of ChatGPT last year with Anthropic's "deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype."

She also wrote that, "by delaying the release of [Anthropic chatbot] Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur."

Though Toner reportedly apologized to the board for the paper, Duhigg writes that Altman nonetheless started to approach individual board members urging her removal. In those talks, Duhigg says Altman "misrepresented" how other board members felt about the proposed removal, "play[ing] them off against each other by lying about what other people thought," according to one source "familiar with the board's discussions." A separate "person familiar with Altman's perspective" suggests instead that Altman's actions were just a "ham-fisted" attempt to remove Toner, and not manipulation.
posted by Rhaomi at 12:18 PM on December 8, 2023 [2 favorites]


I've only followed this glancingly, but I had been a bit confused why I hadn't seen more people commenting on what a giant red flag it was for 95% of OpenAI workers to sign on to a letter demanding his return as CEO. Nobody is that popular.
posted by Not A Thing at 8:15 AM on December 9, 2023


Inside OpenAI’s Crisis Over the Future of Artificial Intelligence. NYTimes take, one of several stories coming out in the past few days that are more deeply reported.
posted by Nelson at 1:53 PM on December 9, 2023




This thread has been archived and is closed to new comments