Doesn't it look like no one wants to work with Sam in the long run?
I'm not an AI researcher; have they done this? The commentary I've seen on o1 is basically that they incorporated techniques that were already being used.
I'd also be curious to learn: what fundamental contributions to research has OpenAI made?
The ChatGPT that was released in 2022 was based on Google's research, and IMO the internal Google chatbot from 2021 was better than the first ChatGPT.
I know they employ a lot of AI scientists who have previously published milestone work, and I've read at least one OpenAI paper. But I'm genuinely unaware of what fundamental breakthroughs they've made as a company.
I'm willing to believe they've done important work, and I'm seriously asking for pointers to some of it. What I know of them is mainly that they've been first to market with existing tech, possibly training on more data.
"I just shared this with OpenAI"
https://x.com/bobmcgrewai/status/1839099787423134051
Barret Zoph, VP Research (Post-Training)
"I posted this note to OpenAI."
https://x.com/barret_zoph/status/1839095143397515452
All used the same template.
It's possible that OpenAI's competitors have rendered future improvements (yes, even to the fabled AGI) less and less profitable, to the point that the more profitable thing to do would be to capitalize on your current fame and raise capital.
That's how I'm reading this. If the competition can be just as usable as OpenAI's SOTA models, and free or close to it, the profit starts vanishing in most projections.
Of course everybody was quick to play nice once OpenAI insiders got the reality check from Satya that he'd just crush them by building an internal competing group, cutting funding, and instantly destroying lots of paper millionaires.
I'd imagine that Mira and others had 6–12 month agreements in place to let the dust settle and finish their latest round of funding without further drama.
The OpenAI soap opera is going to be a great book or movie someday
(1) https://www.nytimes.com/2024/03/07/technology/openai-executi...?
Guess what? Tesla is still on the verge of 'solving FSD'. And it will most probably be in the same place for the next 10 years.
The writing is on the wall for OpenAI.
With luck, Mr. Altman's overtures to bring in Middle East investors will get locals on board; either way, it's fair to say he'll own whatever OpenAI becomes, whether he's an owner or not. And if he loses control in the current scrum, I suspect his replacement would be much worse (giving him yet another advantage).
Best wishes to all.
I feel like either they're not close at all and these people know it's all lies, or they're seeing some shady stuff and want nothing to do with it.
Prediction 2: Russia will implode by 2035, by also spending too much money.
"You should, as a matter of course, read absolutely nothing into departure announcements. They are fully Glomarized as a default, due to the incentive structure of the iterated game, and contain ~zero information beyond the fact of the departure itself."
They have hired CTO-like figures from ex-MSFT and so on… which would mean a natural exit for the startup-era folks that we have seen recently?
Every company initially wants to sell itself as some grandiose savior ('organize the world's information and make it universally accessible', 'solve AGI'), but I guess the investors and the top-level people are in reality motivated by dollar signs, ads, enterprise, and so on.
Not that that's a bad thing, but really it's a Potemkin village.
OpenAI made them good money, yes; but if at some point there's a new endeavor on the horizon with another guaranteed billion-dollar payout, they'll just take it. Exhibit A: Ilya.
New razor: never attribute to AGI that which is adequately explained by greed.
The second you hit some kind of breakthrough, capital finds a way to remove any and all guardrails that might impede future profits.
It happened at DeepMind, Google, Microsoft, and OpenAI. Why wouldn't it happen the next time?
And ironically, many in this community say that corporations are AI.
She's a pro. Lots to learn from watching how she operates.
Sam, being the soulless grifter and scammer he is, of course will remain until the bitter end, drunk with the glimpse of power he surely got while forging backroom deals with the big boys.
“Hi all,
I have something to share with you. After much reflection, I have made the difficult decision to leave OpenAI.
My six-and-a-half years with the OpenAI team have been an extraordinary privilege. While I'll express my gratitude to many individuals in the coming days, I want to start by thanking Sam and Greg for their trust in me to lead the technical organization and for their support throughout the years.
There's never an ideal time to step away from a place one cherishes, yet this moment feels right. Our recent releases of speech-to-speech and OpenAI o1 mark the beginning of a new era in interaction and intelligence – achievements made possible by your ingenuity and craftsmanship. We didn't merely build smarter models, we fundamentally changed how AI systems learn and reason through complex problems. We brought safety research from the theoretical realm into practical applications, creating models that are more robust, aligned, and steerable than ever before. Our work has made cutting-edge AI research intuitive and accessible, developing technology that adapts and evolves based on everyone's input. This success is a testament to our outstanding teamwork, and it is because of your brilliance, your dedication, and your commitment that OpenAI stands at the pinnacle of AI innovation.
I'm stepping away because I want to create the time and space to do my own exploration. For now, my primary focus is doing everything in my power to ensure a smooth transition, maintaining the momentum we've built.
I will forever be grateful for the opportunity to build and work alongside this remarkable team. Together, we've pushed the boundaries of scientific understanding in our quest to improve human well-being.
While I may no longer be in the trenches with you, I will still be rooting for you all. With deep gratitude for the friendships forged, the triumphs achieved, and most importantly, the challenges overcome together.
Mira”
> Sam Altman has won. [...] Ilya Sutskever and Mira Murati will leave OA or otherwise take on some sort of clearly diminished role by year-end (90%, 75%; cf. Murati's desperate-sounding internal note)
https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-fac...
_________________
1. https://news.ycombinator.com/item?id=40361128
That makes it much more probable that these execs have simply lost faith in OpenAI.
Let's write this chapter and take some guesses. It's either going to be:
1. Anthropic.
2. SSI Inc.
3. Own AI Startup.
4. Neither.
Only one is correct.
If multiple key people were drastically unhappy with her, it would have shaken confidence in herself and everyone working with her. What else to do but let her go?
If the government ever wants a third party to oversee the safety of OpenAI, wouldn't it be convenient if one of those who left the company started a company focused on safety? Safe Superintelligence Inc. gets the bid because of lobbying, because of whatever; I don't even care what the reason is in this made-up scenario in my head.
Basically what I'm saying is: what if Sam is all like, "hey guys, you know it's inevitable that we're going to be regulated. I'm going for profit with this company now; you guys leave, and later on down the line we'll meet again in an incestuous company relationship where we regulate ourselves and we all profit."
Obviously this is bad. But also obviously, this is exactly what has happened in the past with other industries.
Edit: The man is all about the long con anyway. - https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...
Another edit: I'll go one further on this. A lot of the people that are leaving are going to double down on saying that OpenAI isn't focused on safety, to build up the public perception (and therefore the governmental perception) that regulation is needed, so there's going to be a whole thing going on here. Maybe it won't just be safety; it might be other aspects too, because not all the companies can be focused on safety.