lolinder
To recap OpenAI's decisions over the past year:

* They burned up the hype for GPT-5 on 4o and o1, which are great step changes but nothing the competition can't quickly replicate.

* They dissolved the safety team.

* They switched to a for-profit structure and are poised to give Altman equity.

* All while hyping AGI more than ever.

All of this suggests to me that Altman is in short-term exit preparation mode, not planning for AGI or even GPT-5. If he had another next generation model on the way he wouldn't have let the media call his "discount GPT-4" and "tree of thought" models GPT-5. If he sincerely thought AGI was on the horizon he wouldn't be eyeing the exit, and he likely wouldn't have gotten rid of the superalignment team. His actions are best explained as those of a startup CEO who sees the hype cycle he's been riding coming to an end and is looking to exit before we hit the trough of disillusionment.

None of this is to say that AI hasn't already changed a lot about the world we live in and won't continue to change things more. We will eventually hit the slope of enlightenment, but my bet is that Altman will have exited by then.

simpaticoder
A bit of a "dog bites man" story to note that a CEO of a hot company is hyping the future beyond reason. The real story of LLMs is revealed when you posit a magical technology that can print any car part for free.

How would the car industry change if someone made a 3D printer that could make any part, including custom parts, with just electricity and air? It is a sea change to manufacturers and distributors, but there would still be a need for mechanics and engineers to specify the correct parts, in the correct order, and use the parts to good purpose.

It is easy to imagine that the inventor of such a technology would probably start talking about printing entire cars - and if you don't think about it, it makes sense. But if you think about it, there are problems. Making the components of a solution is quite different from composing a solution. LLMs exist under the same conditions. Being able to generate code/text/images is of no use to someone who doesn't know what to do with it. I also think this limitation is a practical, tacit solution to the alignment problem.

austinkhale
There are legit criticisms of Sam Altman that can be leveled, but none of them are in this article. This is just reductive nonsense.

The arguments are essentially:

1. The technology has plateaued, not in reality, but in the perception of the average layperson over the last two years.

2. Sam _only_ has a record as a deal maker, not a physicist.

3. AI can sometimes do bad things & utilizes a lot of energy.

I normally really enjoy the Atlantic since their writers at least try to include context & nuance. This piece does neither.

deepsquirrelnet
> At a high enough level of abstraction, Altman’s entire job is to keep us all fixated on an imagined AI future

I think the job of a CEO is not to tell you the truth; more often than not, the truth is probably the opposite of what they say.

What if GPT-5 is vaporware, and there’s no leap equivalent to the one from GPT-3 to GPT-4 to be realized with current deep learning architectures? What is OpenAI worth then?

hdivider
In my view we should also stop taking the Great Technoking at his word and move away from lionizing this old well-moneyed elite in general.

Real technological progress in the 21st century is more capital-intensive than before. It also usually requires more diverse talent.

Yet the breakthroughs we can make in this half-century can be far greater than any before: commercial-grade fusion power (where Lawrence Livermore National Lab currently leads, thanks to AI[1]), quantum computing, spintronics, twistronics, low-cost room-temperature superconductors, advanced materials, advanced manufacturing, nanotechnology.

Thus, it's much more about the many, not the one. Multi-stakeholder. Multi-person. Often led by one technology leader, sure, but this one person must uplift and be accountable to the many. Otherwise we get the OpenAI story, and the ends-justify-the-means groupthink of those who worship the technoking.

[1]: https://www.llnl.gov/article/49911/high-performance-computin...

thruway516
"Altman is no physicist. He is a serial entrepreneur, and quite clearly a talented one"

Not sure the record supports that if you remove OpenAI, which is a work in progress and supposedly not going too great at the moment. A talented 'tech whisperer', maybe?

rubyfan
I sort of wish there was a filter for my life that would ignore everything AI (stories about AI, people talking about AI and of course content generated by AI).

The world has become a less trustworthy place for a lot of reasons and AI is only making it worse, not better.

nabla9
The field is extremely research-oriented. You can't stay on top with good engineering and incremental development and refinement alone.

Google just paid over $2.4 billion to get Noam Shazeer back in the company to work with Gemini AI. Google has the deepest pool of AI researchers. Microsoft and Facebook are not far behind.

OpenAI is losing researchers; they have maybe 1-2 years until they become a Microsoft subsidiary.

tim333
>Understand AI for what it is, not what it might become

Is kind of a boring way of looking at things. I mean we have fairly good chatbots and image generators now but it's where the future is going that's the interesting bit.

Lumping AI in with dot coms and crypto seems a bit silly. It's a different category of thing.

(By the way Sam being shifty or not techy or not seems kind of incidental to it all.)

vasilipupkin
Don't listen to the David Karpfs of the world. Did he predict ChatGPT? If you asked him in 2018, he would have said AI will never write a story.

Now you can use AI to easily write the type of articles he produces, and he's pissed.

1vuio0pswjnm7
"At a high enough level of abstraction, Altman's entire job is to keep us all fixated on an imagined AI future so we don't get too caught up in the underwhelming details of the present."

Old tactic.

The project that would eventually become Microsoft Corp. was founded on it. Gates told Ed Roberts, the inventor of the first personal computer, that he had a programming language for it. He had no such programming language.

Gates proceeded to peddle "vapourware" for decades. Arguably Microsoft and its disciples are still doing so today.

Will the tactic ever stop working? Who knows.

Focus on the future that no one can predict, not the present that anyone can describe.

_hyn3
> The technologies never quite work out like the Altmans of the world promise, but the stories keep regulators and regular people sidelined while the entrepreneurs, engineers, and investors build empires. (The Atlantic recently entered a corporate partnership with OpenAI.)

Hilarious.

angarg12
> Remember, these technologies already have a track record. The world can and should evaluate them, and the people building them, based on their results and their effects, not solely on their supposed potential.

But that's not how the market works.

mppm
Around the time of the board coup and Sam's $7-trillion media tour, there were multiple, at the time somewhat credible, rumors of major breakthroughs at OpenAI -- GPT-5, Q*, and possibly another unnamed project with wow factor. However, almost a year has passed, and OpenAI has only made incremental improvements public.

So my question is: What does the AI rumor mill say about that? Was all that just hype-building, or is OpenAI holding back some major trump card for when they become a for-profit entity?

thelastgallon
"Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot." - Sam Altman, https://ia.samaltman.com/

Reality: AI needs unheard-of amounts of energy. This will make the climate problem significantly worse.

razodactyl
I think we should consider the companies that create or own their own hardware, their ability to generate cheap electricity, and the ability of neural networks to learn continuously.

I still "feel the AGI". I think Ben Goertzel'a recent talk on ML Street Talk was quite grounded / too much hype clouds judgement.

In all honesty, once the hype dies down, even if AGI/ASI is a thing - we're still going to be heads down back to work as usual so why not enjoy the ride?

Covid was a great eye-opener: we dream big, but in reality people jump over each other for... toilet paper... gotta love that Gaussian curve of IQ, right?

bambax
> Altman expects that his technology will fix the climate, help humankind establish space colonies, and discover all of physics. He predicts that we may have an all-powerful superintelligence “in a few thousand days.”

It seems fair to say Altman has completed his Musk transformation. Some might argue it's inevitable. And indeed Bill Gates' books in the 90s made a lot of wild promises. But nothing that egregious.

KaoruAoiShiho
Nobody is taking Sam Altman at his word, lol. These ideas about intelligence have been believed in the tech world for a long time; the guy is just the best at monetizing them. People are pursuing this path out of a general conviction in the ideas themselves. I guess for people like Atlantic writers, Sam Altman is the first time they've encountered these ideas, but it really has nothing to do with Sam Altman.
theptip
The better reason to stop taking Altman at his word is on the subject of OpenAI building AGI “for the benefit of humanity”.

Now that he’s restructuring the company to be a normal for-profit corp, with a handsome equity award for him, we should assume the normal monopoly-grabbing that we see from the other tech giants.

If the dividend is simply going to the shareholder (and Altman personally) we should be much more skeptical about baking these APIs into the fabric of our society.

The article is asinine; of course a tech CEO is going to paint a picture of the BHAG, the outcome that we get if we hit a home run. That is their job, and the structure of a growth company, to swing for giant wins. Pay attention to what happens if they hit. A miss is boring; some VCs lose some money and nothing much changes.

melenaboija
It is weird that one of the most highly valued markets (OpenAI, Microsoft investments, Nvidia GPUs, ...) is based on a stack that is available to anyone who can pay for the resources to train the models, and that in my opinion has yet to deliver on the expectations that have been created around it.

Not saying it is a bubble but something seems imbalanced here.

twodave
The number of comparisons I see between some theoretical AGI and science-fiction creations like Jane from the Ender saga or the talking head from That Hideous Strength is, I guess, not surprising. But in both of those cases the only way to make the plot work was to make the AI a literally other-worldly being.

I am personally not sold on AGI being possible. We might be able to make some poor imitation of it, and maybe an LLM is the closest we get, but to me it smacks of “man attempts to create life in order to spite his creator.” I think the result of those kinds of efforts will end more like That Hideous Strength (in disaster).

rpgbr
I’ll never understand how so many smart people haven’t realized that the biggest “hallucination” produced by AI was Sam Altman himself.

bhouston
Sam Altman has to be a promoter and true believer. It is his job to do that, and he does have new tech that didn’t exist before, and it is game-changing.

The issue is more that the company is hemorrhaging talent, and doesn’t have a competitive moat.

But luckily this doesn’t affect most of us; rather, it will only possibly harm his investors if it doesn’t work out.

If he continues to have access to resources and can hire well and the core tech can progress to new heights, he will likely be okay.

AndrewKemendo
OpenAI can't be working on AGI because they have no arc toward production robotics controllers.

AGI cannot exist in a box that you can control. We figured that out 20 years ago.

Could they start that? Sure, theoretically. However, they would have to pivot massively, and nobody at OAI is a robotics expert.

throwintothesea
The Gang Learns That Normal People Don't Take Sam Altman Seriously

mark_l_watson
I appreciate everything that OpenAI has done, the science of modeling and the expertise in productization.

But, but, but… their drama, or Altman’s drama is now too much for me, personally.

With a lot of reluctance I just stopped doing the $20/month subscription. The advanced voice mode is lots of fun to demo to people, and o1 models are cool, but I am fine just using multiple models for chat on Abacus.AI and Meta, an excellent service, and paid-for APIs from Google, Mistral, Groq, and OpenAI (and of course local models).

I hope I don’t sound petty, but I just wanted to reduce their paid subscriber numbers by -1.

yumraj
OpenAI’s AGI is like Tesla’s completely automated self-driving.

So close, yet so far. And, both help the respective CEOs in hyping the respective companies.

rsynnott
I mean, in general, if you’re taking CEOs at their word, and particularly CEOs of tech companies at their word, you’re gonna have a bad time. Tech companies, and their CEOs, predict all manner of grandiose nonsense all the time. Very little of it comes to pass, but through the miracle of cognitive biases some people do end up filtering out the stuff that doesn’t happen and declaring them visionary.
hnadhdthrow123
Will human ego, greed, and selfishness lead to our destruction? (AI or not)

https://news.ycombinator.com/item?id=35364833

wnevets
He just needs a paltry trillion dollars to make this AI thing happen. Stop being so short-sighted.

flenserboy
That ship sailed a long time ago.

latexr
Distorting the old Chinese proverb, “The best time to stop taking Sam Altman at his word was the first time he opened his mouth. The second best time is now”. We’ve known he’s a scammer for a long time.

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

cowmix
I keep thinking about Sam Altman’s March ’23 interview on Lex Fridman’s podcast—this was after GPT-4’s release and before he was ousted as CEO. Two things he said really stuck with me:

First, he mentioned wishing he was more into AI. While I appreciate the honesty, it was pretty off-putting. Here’s the CEO of a company building arguably the most consequential technology of our time, and he’s expressing apathy? That bugs me. Sure, having a dispassionate leader might have its advantages, but overall, his lack of enthusiasm left a bad taste in my mouth. Why IS he the CEO then?

Second, he talked about going on a “world tour” to meet ChatGPT users and get their feedback. He actually mentioned meeting them in pubs, etc. That just sounded like complete BS. It felt like politician-level insincerity—I highly doubt he’s spoken with any end-users in a meaningful way.

And one more thing: Altman being a well-known ‘prepper’ doesn’t sit well with me. No offense to preppers, but it gives me the impression he’s not entirely invested in civilization’s long-term prospects. Fine for a private citizen, but not exactly reassuring for the guy leading an organization that could accelerate its collapse.

est
sama is the best match for today's LLMs because of the "scaling law", as Zuckerberg described. Everyone is burning cash to race to the end, but the billion-dollar question is: what is the end for transformer-based LLMs? Is there an end at all?

wicndhjfdn
Our economy runs on market makers: AI, blockchain. Whether they are what they seem in the long run is beside the point. Their sole purpose is to generate economic activity. Nobody really cares if they pan out.

xyst
Sam Altman is the modern version of a snake oil salesman.

wg0
At the expense of irking many, I'd like to add Musk to the list, if someone hasn't already.

From robotics, neurology, transport to everything in between - not a word should be taken as is.

_davide_
Did I ever? :')

euphetar
While I do not have much sympathy for Altman, the article is very low quality and contains zero analysis

Yeah, maybe on the surface chatbots turned out to be chatbots. But you have to be a poor journalist to stop your investigation of the issue at that and conclude AI is no big deal. Nuance, anyone?

swiftcoder
I feel like the time to stop taking Sam Altman at his word was probably when he was shilling for an eyeball-scanning cryptocurrency...

But apparently as a society we like handing multi-billion dollar investments to folks with a proven track record of (not actually shipping) complete bullshit.

jacknews
Does anyone take him at face value anyway?

The other issue is that AI's 'boundless prosperity' is a little like those proposals to bring an asteroid made of gold back to earth. 20m tons, worth $XX trillion at current prices, etc. The point is, the gold price would plummet, at the same time as the asteroid, or well before, and the promised gains would not materialize.

If AI could do everything, we would no longer be able (due to no-one having a job), let alone willing, to pay current prices for the work it would do, and so again, the promised financial gains would not materialize.

Of course in both cases, there could be actual societal benefits - abundant gold, and abundant AI, but they don't translate directly to 'prosperity' IMHO.

thwg
TSMC stopped taking him at his word way earlier.

ein0p
Anyone who takes startup CEOs at their word has never worked in a startup. The plasticity of their ethics is legendary when there’s a prospect of increasing revenue.

throwaway918299
I’m just expecting a Microsoft acquisition, after which Altman exits and moves on to his next grift.

luxuryballs
I wonder how many governments and A-listers are investing heavily in the rapid development of commodity AI video that is indistinguishable from real video. Does it seem paranoid?

It would at least make them more believable when they blast out claims that a certain video must be fake, especially given how absurd and shocking it is.

breck
In 2017 Daniel Gross, a YC partner at the time, recruited me to join the YC software team. Sam Altman was president of YC at this time.

During my interview with Jared Friedman, their CTO, I asked him what Sam was trying to create: the greatest investment firm of all time, surpassing Berkshire Hathaway, or the greatest tech company, surpassing Google? Without hesitation, Jared said Google. Sam wanted to surpass Google. (He did it with his other company, OpenAI, and not YC, but he did it nonetheless.)

This morning I tried Googling something and the results sucked compared to what ChatGPT gave me.

Google still creates a ton of value (YouTube, Gmail, etc), but he has surpassed Google in terms of cutting edge tech.

klabb3
> Altman expects that his technology will fix the climate, help humankind establish space colonies, and discover all of physics.

Yes. We've been through this again and again. Technology does not follow potential. It follows incentive. (Also, “all of physics”? Wtf is he smoking?)

> It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.

I mean, everything good in life uses energy; that’s not AI’s fault per se. However, we should absolutely evaluate tech anchored in the present, not the future. Especially with something we understand as poorly as the emergent properties of AI. Even when there’s an expectation of rapid changes, the present is a much better proxy than yet another sociopath with a god-complex whose job is to be a hype-man. Everyone’s predictions are garbage. At least the present is real.

tightbookkeeper
Journalists smell blood in the water. When times were looking better they gave him uncritical praise.
photochemsyn
Of course no corporate executive can be taken at their word, unless that word is connected to a legally binding contract, and even then, the executive may try to break the terms of the contract, and may have political leverage over the court system which would bias the result of any effort to bring them to account.

This is not unusual - politicians cannot be taken at their word, government bureaucrats cannot be taken at their word, and corporate media propagandists cannot be taken at their word.

The fact that the vast majority of human beings will fabricate, dissemble, lie, scheme, manipulate etc. if they see a real personal advantage from doing so is the entire reason the whole field of legally binding contract law was developed.

fnordpiglet
While I agree anyone taking Sam Altman at his word is and always was a fool, this opinion piece by a journalism major at a journalism school giving his jaded view of technology is the tired trope that is obsessed with the fact reality in the present is always reality in the present. The fact I drive a car that’s largely - if not entirely - autonomous in highly complex situations, is fueled by electricity alone, using a super computer to play music from an almost complete back catalog of everything released at my voice’s command, on my way to my final cancer treatment for a cancer that ten years ago was almost always fatal, while above me constellations of satellites cooperate via lasers to provide global high speed wireless internet being deployed by dozens upon dozens of private rocket launches as we prepare the final stretch towards interplanetary spaceships, over which computers can converse in true natural language with clear human voices with natural intonation…. Well. Sorry, I don’t have to listen to Sam Altman to see we live in a magical era of science fiction.

The most laughable part of the article is where they point at the fact that in the past TWO YEARS we haven’t gone from “OMG we’ve achieved near perfect NLP” to “Deep Thought, tell us the answer to life, the universe, and everything” as if that were some sort of huge failure; it’s patently absurd. If you took Altman at his word on that one, you probably also scanned your eyeball for fake money. The truth though is that the rate of change in the products his company is making is still breathtaking - the text-to-speech tech in the latest advanced voice release (recognizing it’s not actually text-to-speech but something profoundly cooler, but that’s lost on journalism majors teaching journalism majors like the author) puts to shame the last 30 years of TTS. This alone would have been enough to build a fairly significant enterprise selling IVR and other software.

When did we go from enthralled by the rate of progress to bored that it’s not fast enough? That what we dream and what we achieve aren’t always 1:1, but that’s still amazing? I get that when we put down the devices and switch off the noise we are still bags of mostly water, our backs hurt, we aren’t as popular as we wish we were, our hair is receding, maybe we need Invisalign but flossing that tooth every day is easier and cheaper, and all the other shit that makes life much less glamorous than they sold us in the dot com boom, or nanotech, etc., as they call out in the article.

But the dot com boom did succeed. When I started at early Netscape no one used the internet. We spun the stories of the future this article bemoans to our advantage. And it was messier than the stories in the end. But now -everyone- uses the internet for everything. Nanotechnology permeates industry, science, tech, and our everyday life. But the thing about amazing tech that sounds so dazzling when it’s new is -it blends into the background- if it truly is that amazingly useful. That’s not a problem with the vision of the future. It’s the fact that the present will never stop being the present and will never feel like some illusory gauzy vision you thought it might be. But you still use dot coms (this journalism major’s assessment of tech was published on a dot com and we are responding on a dot com) and still live in a world powered by nanotechnology, and AI promised in TWO YEARS is still mind-boggling to anyone who is thinking clearly about what the goal posts for NLP and AI were five years ago.

nottorp
The time to stop taking him seriously was when he started his "fear AI, give me the monopoly" campaign.
whamlastxmas
It’s really sad to see all the personal attacks and cynicism that have no basis in reality. OpenAI has an amazing product and was first to market with something game-changing for billions of people. Calling him a fraud and a scammer is super ridiculous.

slenk
The same needs to happen to Elon Musk.

m3kw9
You all should just sit back and not pick at every word he says; just sit calmly and let him cook. And he’s been cooking.

richrichie
He seems like any other tech “evangelical” to me.

EchoReflection
According to the book "The Sociopath Next Door", approximately 1 in 25 Americans is a "sociopath" who "does not feel shame, guilt, or remorse, and can do anything at all without what 'normal' people think of as an internal voice labeling things as 'right' or 'wrong'." It makes sense to me that sociopaths would be over-represented among C-level executives and "high performers" in every field.

https://www.betterworldbooks.com/product/detail/the-sociopat...

dmitrygr
Someone took that grifter at his word ever? Haha! Wait you’re serious? Let me laugh even harder. Hahahaha

krick
Since it's paywalled, I assume we are discussing the title (as usual). It implies that there was a time when [we] took him at his word. Uh, OK, maybe. But what does it matter? "We" aren't the people at VCs who fund it, I suppose? So, what does it matter if "we" take him at his word? Hell, even if it suddenly went public, it still wouldn't mean much whether we trust the guy or not, because we could buy shares for the same reason we buy crypto or TSLA shares.

As a matter of fact, I suspect the author of the article actually belongs to the gullible minority who ever took Altman at his word, and is now telling everyone what they already knew. But so what? What are we even discussing? Nobody is calling for people to remove their OpenAI (or, in fact, Anthropic, or whatever) accounts as long as we find them useful for something, I suppose. It just makes no difference at all whether that writer or his readers take Altman at his word; their opinions have no real effect on the situation, it seems. They are merely observers.

nomilk
tl;dr author complains that Sam's predictions of the future of AI are inflated (but doesn't offer any of his own), and complains that AI tools that surprised us last year look mundane now.

The article is written to appeal to people who want to feel clever casually slagging off and dismissing tech.

> it appears to have plateaued. GPT-4 now looks less like the precursor to a superintelligence and more like … well, any other chatbot.

What a pathetic observation. Does the author not recall how bad chatbots were pre-LLMs?

What LLMs can do blows my mind daily. There might be some insufferable hype atm, but gees, the math and engineering behind LLMs is incredible, and it's not done yet - they're still improving from more compute alone, not even factoring in architecture discoveries and innovations!

7e
I mean, there is a reason the board tried its best to exorcise Sam Altman from the company. OpenAI could be the next Loopt.

cyanydeez
Better headline: It's too late to stop taking Sam Altman at his word

See same with Elon Musk.

Money turns geniuses into smooth-brained, egomaniacal idiots. See same with Steve Jobs.

DemocracyFTW2
> Last week, CEO Sam Altman published an online manifesto titled “The Intelligence Age.” In it, he declares that the AI revolution is on the verge of unleashing boundless prosperity and radically improving human life.

/s

kopirgan
I generally don't take anyone other than Leon at his word. /s
bediger4000
This seems like a more general problem with journalistic practices. Journalists don't want to inject their own judgements into articles, which is admirable, and makes sense. So they quote people exactly. Quoting exactly means that bad actors can inject falsehoods into articles.

I don't have any suggestions on how to solve this. Everything I can think of has immediate large flaws.

twelve40
well the good news is all that stuff comes with an expiration date, after which we will know if this is our new destiny or yet another cloud of smoke.

This is a good reminder:

> Prominent AI figures were among the thousands of people who signed an open letter in March 2023 to urge a six-month pause in the development of large language models (LLMs) so that humanity would have time to address the social consequences of the impending revolution

In 2024, ChatGPT is a weird toy, my barber demands paper cash only (no bitcoin or credit cards or any of that phone nonsense, this is Silicon Valley), I have to stand in line at USPS and DMV with mindless paper-shuffling human robots, marveling at the humiliating stupidity of manual jobs, robotaxis are still almost here, just around the corner, as always. Let's check again in a "couple of thousand days" I guess!

m3kw9
The Atlantic is really going out of its way to hate on Altman. That publication has always been a bit of a whack job of an outfit.
whoiscroberts
Any person that thinks “automating human thought” is good for humanity is evil.
mrangle
I can't imagine taking The Atlantic seriously on anything. My word. You aren't actually supposed to read the endless ragebait.

Contrary to the Atlantic's almost always intentionally misleading framing, the "dot com boom" did in fact go on to print trillions later and it is still printing them. After what was an ultimately marginal, if account-clearing, dip for many.

I say that as someone who would be deemed to be an Ai pessimist, by many.

But it's wildly early to declare anything to be "what it is" and only that, in terms of ultimate benefit. Just like it was and is wild to declare the dot com boom to be over.

neuroelectron
I'm surprised nobody noticed the elephant in the room: the fact that ChatGPT has a very hard woke slant. That said, o1 has gotten a lot better, but it's not as uncensored and unbiased as GPT-3 was when it was first released. For a while GPT-4 was very clearly biased toward the left and U.S. Democrats particularly.

Now keep in mind that this is going to be the default option for a lot of forums and social media for automated moderation. Reddit is already using it a lot and now a lot of the front page is clearly feedback farming for OpenAI. What I'm getting at is we're moving towards a future where only a certain type of dialog will be allowed on most social media and Sam Altman and his sponsors get to decide what that looks like.