
Sebrof [he/him, comrade/them]

@Sebrof@hexbear.net

Posts
14
Comments
242
Joined
2 yr. ago

  • I didn't see the archive link (my bad), so I made a post to share the text. I'll keep it here regardless.

    The article mentions that LLM costs scale exponentially, which OpenAI has not been open about when raising capital. The author argues that investors' earlier assumptions about AI's eventual profitability were inaccurate: LLMs are a far more capital-intensive operation than earlier tech companies were.

    The author expects OpenAI to "run out of money" within 18 months, and notes that Altman raised $40 billion in private funding, perhaps the most ever raised in any funding round, but that OpenAI also expects to burn through more than $40 billion in 2028:

    Mr. Altman’s $40 billion triumph also exceeded the amount that any company has raised by going public. The biggest I.P.O. ever was Saudi Aramco in 2019, which raised less than $30 billion for its government owner. Whereas Ant Group was profitable and Saudi Aramco was extremely so, OpenAI appears to be hemorrhaging cash ... the company projected last year that it would burn more than $8 billion in 2025 and more than $40 billion in 2028

    And OpenAI projects to spend $1.4 trillion on data centers, too.

    In the end, the author predicts:

    The probable result is that OpenAI will be absorbed by Microsoft, Amazon or another cash-rich behemoth


    This Is What Convinced Me OpenAI Will Run Out of Money

    Jan. 13, 2026

    By Sebastian Mallaby

    Mr. Mallaby is a senior fellow at the Council on Foreign Relations.


    Wall Street fears it has an artificial intelligence problem. A.I.-related stocks are up so much that a fall feels inevitable, particularly if A.I. appears unlikely to live up to its hype. This is the wrong worry; A.I.’s promise is real. The big question in 2026 is whether capital markets can adequately finance A.I.’s development. Companies such as OpenAI are likely to run out of cash before their tantalizing new technology produces big profits.

    Since the release of ChatGPT a little over three years ago, A.I. models have acquired novel capabilities at a remarkable rate, repeatedly defying naysayers. They have learned to generate realistic images and videos, to reason through increasingly complex logic and math problems, to make sense of Tolstoy-size inputs. The next big thing will be agents: The models will fill digital shopping baskets and take care of online bills. They will act for you.

    Investors were briefly spooked last July when an M.I.T. study suggested that almost none of this is useful to businesses. Corporations had poured tens of billions of dollars into A.I., yet only one in 20 projects had succeeded, the study reported. But a Wharton study in October delivered the opposite verdict. After interviewing 801 leaders at U.S. companies, Wharton concluded that three-quarters of the businesses were getting a positive return on their A.I. investments.

    If the truth lies in the middle, this is a triumph. Businesses usually take decades to deploy new technologies successfully; progress after three years is striking. As A.I. keeps improving, and workers grow more adept at collaborating with the machines, the gains will stack up. Over a billion people use generative A.I. models every month. Not all uses are productive, but many will be.

    The problem for A.I. developers is that most users aren’t paying for their services. People can choose among multiple free and excellent models; unless they have especially complex and compute-intensive queries, they have little reason to subscribe to the premium versions. If a model maker imposes a paywall or displays irritating ads, customers will migrate elsewhere.

    This lack of stickiness is most likely temporary, however. At some point in the not-so-distant future, a model will probably know its user so well that it will be painful to switch to a different one. It will remember every detail of conversations going back years; it will understand shopping habits, movie tastes, emotional hangups, professional aspirations. When that happens, abandoning a model might feel like a divorce — doable, but unpleasant.

    At this point, the A.I. builders would turn profitable. As well as charging for subscriptions and running ads, they could sell shopping services, home entertainment, wearable devices, tax preparation. For hundreds of millions of people, A.I. companions might be the primary gateway to an internet rendered far more useful and compelling than it is today. How long will it take for these companies to reach the promised land, and can they survive in the meantime? Until fairly recently, investors hardly asked that question. They blithely assumed that capital markets would bridge the gap between the emergence of a great technology and eventual profits. After all, most of today’s tech giants spent years operating at a loss before they earned hundreds of billions.

    That blithe assumption was mistaken. Generative A.I. businesses are not like the software successes of the past generation. They are far more capital-intensive. And while behemoths such as Google, Microsoft and Meta earn so much from legacy businesses that they can afford to spend hundreds of billions collectively as they build A.I., free-standing developers such as OpenAI are in a different position. My bet is that over the next 18 months, OpenAI runs out of money.

    As far back as 2020, this outcome was predictable. Silicon Valley insiders touted the so-called scaling laws, which showed how models would become significantly more powerful but also exponentially more expensive. But OpenAI’s leader, Sam Altman, hyped up the first part of that prediction while soft-pedaling the second; he kept talking ever more cash out of investors, emerging as the best pitchman in tech history. The more capital he raised, the more the buzz around him grew. The buzzier he became, the more money he could raise.

    Last March, Mr. Altman surpassed himself, raising $40 billion from investment funds, far more than any other company has raised in any private funding round, ever. (Second prize goes to Ant Group, a Chinese fintech company that raised a comparatively modest $14 billion in 2018.) Mr. Altman’s $40 billion triumph also exceeded the amount that any company has raised by going public. The biggest I.P.O. ever was Saudi Aramco in 2019, which raised less than $30 billion for its government owner. Whereas Ant Group was profitable and Saudi Aramco was extremely so, OpenAI appears to be hemorrhaging cash. According to reporting by The Information, the company projected last year that it would burn more than $8 billion in 2025 and more than $40 billion in 2028. (Though The Wall Street Journal reported that the company anticipates profits by 2030.)

    Not even Mr. Altman can keep juggling indefinitely. And yet he must raise more — a lot more. Signaling the scale of capital that he believes he needs, OpenAI has committed to spending $1.4 trillion on data centers and related infrastructure. Even if OpenAI reneges on many of those promises and pays for others with its overvalued shares, the company must still find daunting sums of capital. However rich the eventual A.I. prize, the capital markets seem unlikely to deliver.

    The probable result is that OpenAI will be absorbed by Microsoft, Amazon or another cash-rich behemoth. OpenAI’s investors would take a hit. Chipmakers and data center builders that signed deals with Mr. Altman would scramble for new customers. Social media pundits would report every detail, and frazzled investors may dump the whole A.I. sector. But an OpenAI failure wouldn’t be an indictment of A.I. It would be merely the end of the most hype-driven builder of it.

  • The fact that the order of execution isn't the same as the order the query is written in confused me at first (maybe I missed that in some 101 intro, idk). So now I build the query in the order it's executed ('from' first, then 'where', etc.)

    I still get confused by the more complicated queries, but the above helps with 99% of what I actually use it for. I'm not doing anything crazy

    Best of luck in your career journey. Being able to work with data + having actual values is a big plus!
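
    A minimal sketch of that logical execution order (FROM → WHERE → GROUP BY → HAVING → SELECT → ORDER BY), using Python's built-in sqlite3 module; the table and column names here are made up for illustration:

    ```python
    import sqlite3

    # In-memory database with a small illustrative table (names are made up).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        [("north", 10), ("north", 30), ("south", 5), ("south", 50), ("west", 2)],
    )

    # Clauses listed top-to-bottom in the order they are logically evaluated:
    #   FROM sales            -- pick the source table first
    #   WHERE amount > 3      -- filter individual rows (drops the 'west' row)
    #   GROUP BY region       -- then form groups
    #   HAVING SUM(...) >= 40 -- then filter whole groups
    #   SELECT ...            -- then choose the output columns
    #   ORDER BY ...          -- finally sort the result
    rows = conn.execute(
        """
        SELECT region, SUM(amount) AS total
        FROM sales
        WHERE amount > 3
        GROUP BY region
        HAVING SUM(amount) >= 40
        ORDER BY total DESC
        """
    ).fetchall()

    print(rows)  # → [('south', 55), ('north', 40)]
    ```

    Writing the query in that order also explains why, e.g., a column alias from SELECT can't be used in WHERE: WHERE runs before SELECT does.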

  • Surely...

  • Thank you, and oscardejarjayes, for responding! Hopefully I can get started on vol 2 one day... year... decade... . I get too bogged down in supplemental material that I neglect the source

  • Entropy's not evil, just misunderstood!

  • You could likely still use the capital book club even if you are behind (assuming the questions are on, or adjacent-ish to, Capital).

    I fell behind and debated whether to post there or not, and comrade @Cowbee@hexbear.net suggested posting anyway. Cowbee and @oscardejarjayes@hexbear.net could chime in with suggested practices, too.

    There are other book clubs too, but I can't speak to their rules

    Tbh, the engagement in last year's Capital thread started to wane after the first couple of chapters. I fell behind, and I'm sure many others did. I'm a slow reader lol. And there was probably a sense among us that once you're behind you shouldn't post in the older threads, or maybe we felt that posts in the older threads weren't going to be noticed anyway - so some people (like me) just stopped posting. Life happened :(

    There's also a feeling of embarrassment or shame about being behind that we may just need to power through, or remind ourselves that people on Hexbear won't actually care if we fall behind in reading. Post anyway.

    Even if it's hard to have an ongoing book club in those threads, they may still be a good central place to post questions on whatever topic or chapter is on your mind.

    A comment thread discussing some of this: https://hexbear.net/comment/6802005

    I'd still like a mod to give the final word, but my sense is that it's okay to post in older threads if that's where you are in the book.

  • Yeah, I was looking into it, and like @FloridaBoi@hexbear.net said, it's mostly the driver driving an EV as if it were a regular gas car. Which is a mistake I would make too if I were driving one. They have regenerative braking and that instant response that you mentioned. That, in addition to the cues we're used to in automobiles (sounds, vibrations, etc.) being absent in most EVs, causes motion sickness.

    At least that's what TikTok told me...

    I've never had a pleasant experience in one, I've always gotten sick.

  • The few times I've ridden in a Tesla I've gotten car sickness and thrown up.

    I accept it as God's punishment

    Do Chinese EVs make you hurl your lunch every time you sit in the passenger seat?

  • You can prove anything I guess if you just make shit up. Tankies destroyed

  • I know your pain

  • As I still have my mouth, allow me to do the honors

  • Bunch of dorks

  • I beg to differ

  • I feel seen

  • Slop. @hexbear.net

    The Rise of AI Denialism

    archive.ph/oWrGS
  • Chapotraphouse @hexbear.net

    STEM nerd here to remind you LIBERAL arts students to suck it up and get DEFUNDED

  • Chapotraphouse @hexbear.net

    The Economist here to remind you dum dums that actually the bubble is a good thing

  • Chapotraphouse @hexbear.net

    BREAKING NEWS: Economist Sticks Head out of own Ass Just Long Enough to Notice State of World

  • gardening @hexbear.net

    Some Pics of my Nepenthes, i.e. Tropical Pitcher Plant

  • Chapotraphouse @hexbear.net

    China is so evil that when it does good it just does more evil

  • Chapotraphouse @hexbear.net

    ChatGPT Has a Stroke When You Ask It This Specific Question

    futurism.com/artificial-intelligence/chatgpt-meltdown-specific-question
  • Chapotraphouse @hexbear.net

    They thought they were making technological breakthroughs. It was an AI-sparked delusion

    www.cnn.com/2025/09/05/tech/ai-sparked-delusion-chatgpt
  • theory @hexbear.net

    Price, Value, and Exploitation using Input-Output Tables - Part 6: Resolving the Transformation Problem and Future Directions

  • theory @hexbear.net

    Price, Value, and Exploitation using Input-Output Tables - Part 5: Some Worked Examples

  • theory @hexbear.net

    Price, Value, and Exploitation using Input-Output Tables - Part 4: Capitalists, Profit, and Exploitation

  • theory @hexbear.net

    Price, Value, and Exploitation using Input-Output Tables - Part 3: A Worker-Only Economy with Means of Production

  • theory @hexbear.net

    Price, Value, and Exploitation using Input-Output Tables - Part 2: A Pure-Labor Economy

  • theory @hexbear.net

    Price, Value, and Exploitation using Input-Output Tables - Part 1: Introduction