Venture Apocalypse?

A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I select the articles because they are of interest. The selections often include things I disagree with. The articles are only snippets. Click on the headline to go to the original. I express my point of view in my editorial and the weekly video.


Editorial: Venture Apocalypse?

Essays of the Week

Chamath Palihapitiya: It could take three years for the market to ‘accurately’ reprice late-stage cos

ChatGPT Heralds an Intellectual Revolution

Thank you, OpenAI

You Are Not a Parrot: And, a chatbot is not a human.

OpenAI CEO on the Potential of AI to Break Capitalism

ChatGPT allowed in International Baccalaureate essays

Snapchat Integrates ChatGPT Elements into New ‘My AI’ Tool In-App

Winners and losers in the race to add AI

Is it time to hit the pause button on AI?

Killing The ‘Must Have a Warm Intro To a VC’ Myth Will Be Good For Founders and Great for the World

Section 230 Is a Load-Bearing Wall—Is It Coming Down?

Startup Decoupling & Reckoning

Video of the Week

Generative AI Is About To Reset Everything, And, Yes It Will Change Your Life

News of the Week

Biden finds breaking up Big Tech is hard to do

FTC drops bid to block Meta’s acquisition of Within

Tech Leaders in Israel Wonder if It’s Time to Leave

Incoming YouTube CEO Neal Mohan Outlines His Vision for the Future of the App

Stripe Cuts Valuation to $50 Billion After Facing Fundraising Hurdles

OpenAI launches an API for ChatGPT, plus dedicated capacity for enterprise customers

Startup of the Week


Tweet of the Week

Murder is Good actually – @micsolana

Editorial: Venture Apocalypse?

When Chamath Palihapitiya speaks, it is worth listening. And when Marc Andreessen starts a Substack, it is worth taking notice. However, this week Marc revealed he had recently decided to abandon alcohol, and Chamath predicted that late-stage startup valuations might take three years to revive.

In the same week, Elad Gil wrote a wonderful essay, Startup Decoupling & Reckoning, predicting that many companies will run out of cash in the late-2023 to early-2024 time frame.

Enough to drive a late-stage investor to drink, one might think.

TechCrunch’s Connie Loizos interviewed Chamath, and he was sanguine:

So we’re at that point in the market where all the boards of these private companies refuse to budge on valuation. And the reason is because it impacts meaningfully their DPI or their TVPIs that they’ve given to LPs. And so it’s a very difficult part of the private markets right now to invest in, because you will not be allowed to do true price discovery because nobody wants to take the real hits.

. . . . [And for LPs] this is now just money bad. And when [more venture] folks leave the market, those companies now become more prone to get repriced accurately . . . [But] I honestly think that’s like three years away. I thought it was going to be three quarters away.

This rings true. Most investors will try to stave off the day that a company has to be re-valued downwards.

And Elad stated:

Tough times will in part become a test of character. As company after company goes through layoffs, failed fundraises, or outright fails we will see many different types of behavior both good and bad, from investors, founders and employees. It is good to remember in periods of pain that pain is temporary, that technology is a massive wave of change and global economic empowerment, and that many future opportunities are yet to come. It is tough not to get blinded by the moment, but one must strive to see and think clearly and act well in the times ahead.

This was after recommending shutting down a company rather than spending two years burning through cash at an unsustainable valuation.

Most of this narrative is focused on companies that raised capital in 2021. And within that group, companies that did Series A and beyond.

If you are a founder within that group, then what Chamath and Elad say is to be taken very seriously.

However, if you are an early-stage startup, this may be the best time to raise capital. And, of course, as OpenAI shows, big ideas with traction can still command large investments. The freshman startup class of 2023 will produce a lot of high-growth companies.

Ann Miura-Ko of Floodgate, in an essay titled “Thank You, OpenAI,” shines a light on the opportunities this new canvas creates for startups, especially after this week’s announcement of an API for ChatGPT.

now, AI has entered the social consciousness, and it’s changing the AI stack entirely. Whether it is implementing a vector database and the associated workflows to make that possible, or ChatGPT demonstrating the importance of UI in making AI accessible, the AI stack will be substantially different from our current enterprise stack, creating new opportunities for startups. In addition, because of the larger societal awareness, we are finding large enterprise customers are not only curious, they are willing to pay for solutions that will better automate their workflows creating more consistent and intelligent experiences.

Reading between the lines, many AI startups focused on discrete products will be funded starting now.
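The “vector database” piece of the stack Miura-Ko mentions comes down to similarity search over embeddings. A minimal sketch, with made-up three-dimensional vectors standing in for a real embedding model’s output:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, index):
    # Return the stored key whose embedding is closest to the query.
    return max(index, key=lambda k: cosine(query, index[k]))

# Toy "embeddings" standing in for a real model's output.
index = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "api pricing":   [0.0, 0.2, 0.9],
}

print(nearest([0.8, 0.2, 0.1], index))  # → refund policy
```

Production systems swap the toy vectors for model-generated embeddings with hundreds of dimensions and use approximate-nearest-neighbor indexes to search millions of entries; that gap is exactly where the new startup opportunities sit.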

Within the combined struggle for later-stage companies to survive and the emergence of a new generation of startups, there is a tendency to overstate the negative and the positive. The right thing to do at these times is to build something amazing and have the world look and be impressed. Become a beacon of light that others find irresistible.

One such company this week is EvolutionIQ, our startup of the week. SignalRank’s AI picked it, along with 14 other companies, as a potential unicorn. There were 115 Series B rounds in the past 30 days; EvolutionIQ and 14 others show strong signals implying future success. Congrats to them.

Essays of the Week

Chamath Palihapitiya: It could take three years for the market to ‘accurately’ reprice late-stage cos

Connie Loizos
@cookie • 1:15 PM PST, March 1, 2023


Former Facebook exec turned VC Chamath Palihapitiya has long been a controversial figure in the investing world. Both brilliant and combative, Palihapitiya came to be known most widely by ushering in the era of special purpose acquisition companies, or SPACs, beginning in the fall of 2019, when he helped Virgin Galactic become a publicly traded company through a SPAC he formed.

Palihapitiya went on to take five more companies public via SPACs before the boom ended abruptly last year, and while investors who followed him into some of his SPACs lost money — as did investors in many hundreds of other SPACs that materialized in 2020, 2021 and last year — Palihapitiya reportedly doubled the roughly $750 million he invested.

Many blame him for aggressively promoting his own interests — including during numerous CNBC appearances — at the expense of less-sophisticated investors. Others continue to heed his investing advice, considering that Palihapitiya seems adept at identifying investing opportunities early. (This editor recalls his appearance in 2014 at a packed Bitcoin conference in San Francisco where he argued that everyone should have 1% of their assets in Bitcoin. At the time, each Bitcoin was valued at $520.)

Both camps might be interested in a recent appearance by Palihapitiya at an investing conference in Miami where he said he thinks up to seven years of high interest rates would be good for the venture industry, that America’s deteriorating relationship with China is a boon for the country and where he was asked about generative AI. His comments below have been condensed for length and clarity. You can check out the full interview here.

ChatGPT Heralds an Intellectual Revolution

Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the start of the Enlightenment.

By Henry Kissinger, Eric Schmidt and Daniel Huttenlocher

Feb. 24, 2023 at 2:17 pm ET


A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration. In the process, it creates a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed. This is the essential challenge of the Age of Artificial Intelligence.

The new technology is known as generative artificial intelligence; GPT stands for Generative Pre-Trained Transformer. ChatGPT, developed at the OpenAI research laboratory, is now able to converse with humans. As its capacities become broader, they will redefine human knowledge, accelerate changes in the fabric of our reality, and reorganize politics and society.

Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the beginning of the Enlightenment. The printing press enabled scholars to replicate each other’s findings quickly and share them. An unprecedented consolidation and spread of information generated the scientific method. What had been impenetrable became the starting point of accelerating query. The medieval interpretation of the world based on religious faith was progressively undermined. The depths of the universe could be explored until new limits of human understanding were reached.

Generative AI will similarly open revolutionary avenues for human reason and new horizons for consolidated knowledge. But there are categorical differences. Enlightenment knowledge was achieved progressively, step by step, with each step testable and teachable. AI-enabled systems start at the other end. They can store and distill a huge amount of existing information, in ChatGPT’s case much of the textual material on the internet and a large number of books—billions of items. Holding that volume of information and distilling it is beyond human capacity.

Sophisticated AI methods produce results without explaining why or how their process works. The GPT computer is prompted by a query from a human. The learning machine answers in literate text within seconds. It is able to do so because it has pregenerated representations of the vast data on which it was trained. Because the process by which it created those representations was developed by machine learning that reflects patterns and connections across vast amounts of text, the precise sources and reasons for any one representation’s particular features remain unknown. By what process the learning machine stores its knowledge, distills it and retrieves it remains similarly unknown. Whether that process will ever be discovered, the mystery associated with machine learning will challenge human cognition for the indefinite future.

AI’s capacities are not static but expand exponentially as the technology advances. Recently, the complexity of AI models has been doubling every few months. Generative AI systems therefore have capabilities that remain undisclosed even to their inventors. With each new AI system, their creators are building new capacities without understanding their origin or destination. As a result, our future now holds an entirely novel element of mystery, risk and surprise…

Thank you, OpenAI

Ann Miura-Ko

Ann (@annimaniac) was one of the first investors in Lyft, Starkware, and Studio. She has been on the Midas List for the last five years and was named on The New York Times’ list of The Top 20 Venture Capitalists worldwide. After earning her PhD in cybersecurity risk modeling from Stanford University, Ann co-founded Floodgate, one of the first seed-stage VC funds in Silicon Valley. Over the last 12 years, we have been investing in AI, with current investments including Applied Intuition, Hebbia, Blooma, Arch, and more.

There are very few moments in the history of technology when we have witnessed something truly legendary: opening a personal computer for the first time, entering a query into a browser. Today’s legendary moment has OpenAI to thank. None of us will soon forget the first time we generated a realistic painting with DALL-E or asked ChatGPT to write us a poem: a life-altering, believe-in-magic kind of experience.

What has felt like a decades-long, steady drip of progress in AI has hit a surprising inflection point with the launch of ChatGPT. This powerful interface has led the world to believe that language models and AI are capable of interacting with humans in a new way, representing a societal inflection just as much as a technical one. As one entrepreneur said to us recently, language is the API to humans, and OpenAI has shown us that compute can now access humans in a new way. In a chat-based interface, we see how AI empowers us to collaborate with compute rather than purely instructing it.

We are fortunate to have had the opportunity to spend time with members of the OpenAI team at their headquarters in San Francisco. A week ago, they kindly hosted Floodgate founders and friends to explore the power and future of OpenAI’s technologies. Later, over lunch, we harnessed the collective knowledge and imagination of industry leaders, practitioners, and experts in a discussion of the future landscape and applications of AI at large.

From that inspiring conversation, here are some observations and the areas we think are most interesting for innovation:

AI hasn’t eaten the world… yet.

What can history teach us about the future of AI and LLMs? On a comparative timeline, ChatGPT is closer to the launch of the iPhone than to the release of killer apps like Twitter or Uber/Lyft. It’s a powerful platform that harnesses the potential of AI, but it doesn’t necessarily capture ALL of the value it unlocks. With the public’s eagerness to adopt new tech, ChatGPT has gained impressive traction, with 100M users in just two months. But products based on LLMs have, for the most part, yet to be embedded into the regular rhythm of human life, unless you’re a student or a marketer using a tool like Jasper. The glimpse we have seen shows that there is still so much untapped potential…

You Are Not a Parrot

And a chatbot is not a human.




Nobody likes an I-told-you-so. But before Microsoft’s Bing started cranking out creepy love letters; before Meta’s Galactica spewed racist rants; before ChatGPT began writing such perfectly decent college essays that some professors said, “Screw it, I’ll just stop grading”; and before tech reporters sprinted to claw back claims that AI was the future of search, maybe the future of everything else, too, Emily M. Bender co-wrote the octopus paper.

Bender is a computational linguist at the University of Washington. She published the paper in 2020 with fellow computational linguist Alexander Koller. The goal was to illustrate what large language models, or LLMs — the technology behind chatbots like ChatGPT — can and cannot do. The setup is this:

Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. A and B start happily typing messages to each other.

Meanwhile, O, a hyperintelligent deep-sea octopus who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B’s conversations. O knows nothing about English initially but is very good at detecting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A’s utterances.

Soon, the octopus enters the conversation and starts impersonating B and replying to A. This ruse works for a while, and A believes that O communicates as both she and B do — with meaning and intent. Then one day A calls out: “I’m being attacked by an angry bear. Help me figure out how to defend myself. I’ve got some sticks.” The octopus, impersonating B, fails to help. How could it succeed? The octopus has no referents, no idea what bears or sticks are. No way to give relevant instructions, like to go grab some coconuts and rope and build a catapult. A is in trouble and feels duped. The octopus is exposed as a fraud.
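The octopus’s trick, prediction from statistical patterns alone with no grounding in what the words refer to, can be illustrated with a toy next-word model (the corpus here is invented for the example):

```python
from collections import Counter, defaultdict

def train(corpus):
    # Count which word follows which across the observed conversations.
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for word, nxt in zip(words, words[1:]):
            follows[word][nxt] += 1
    return follows

def predict(follows, word):
    # Reply with the statistically most likely next word; no meaning involved.
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else None

corpus = [
    "the weather is lovely today",
    "the weather is awful today",
    "the tide is lovely today",
]
model = train(corpus)
print(predict(model, "weather"))  # → is
print(predict(model, "bear"))     # → None: no statistics, no referent, no help
```

Scale the counting up to much of the internet and replace it with a neural network and the statistics get vastly better, but, as Bender and Koller argue, there are still no referents when the bear shows up.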

OpenAI CEO on the Potential of AI to Break Capitalism

Artificial intelligence is on everyone’s mind right now due to how it has made waves in multiple industries. Because of this, OpenAI’s CEO believes it’s poised to shake the economic model of capitalism. In a recent interview with Forbes, Sam Altman shared his thoughts on how AI could impact capitalism and what the future might look like with AI. “I think capitalism is awesome. I love capitalism…Of all of the bad systems the world has, it’s the best one — or the least bad one we found so far. I hope we find a way better one.”

He went on to state that he believes that AI will play a significant role in shaping the future economy. Sam points to machines that will soon become capable of performing more complex tasks that were once the sole domain of humans. In a utopian view, he feels that AI could lead to a world where people have abundant free time, and machines do most of the work. “I think that if AGI really truly fully happens…I can imagine all these ways that it breaks capitalism.”

In such a world, the concept of work would change, and people would no longer need to work to earn a living. Sam Altman believes this change could lead to a new economic system not based on capitalism, where people are no longer driven by the need to make money. Instead, people would have more time to focus on creativity, relationships, and personal growth, leading to a more fulfilling life. If that sounds familiar, it’s the vision Star Trek creator Gene Roddenberry dreamt up to describe the post-scarcity economic order of Earth in the 24th century.

ChatGPT allowed in International Baccalaureate essays

Content created by chatbot must be treated like any other source and attributed when used, says IB

Dan Milmo Global technology editor

Mon 27 Feb 2023 06.28 EST

Schoolchildren are allowed to quote from content created by ChatGPT in their essays, the International Baccalaureate has said.

The IB, which offers an alternative qualification to A-levels and Highers, said students could use the chatbot but must be clear when they were quoting its responses.

ChatGPT has become a sensation since its public release in November, with its ability to produce plausible responses to text prompts, including requests to write essays.

While the prospect of ChatGPT-based cheating has alarmed teachers and the academic profession, Matt Glanville, the IB’s head of assessment principles and practice, said the chatbot should be embraced as “an extraordinary opportunity”.

However, Glanville told the Times, the responses must be treated as any other source in essays.

“The clear line between using ChatGPT and providing original work is exactly the same as using ideas taken from other people or the internet. As with any quote or material adapted from another source, it must be credited in the body of the text and appropriately referenced in the bibliography,” he said.

The IB is taken by thousands of children every year in the UK at more than 120 schools.

Glanville said essay writing would feature less prominently in the qualifications process in the future because of the rise of chatbot technology.

“Essay writing is, however, being profoundly challenged by the rise of new technology and there’s no doubt that it will have much less prominence in the future.”

He added: “When AI can essentially write an essay at the touch of a button, we need our pupils to master different skills, such as understanding if the essay is any good or if it has missed context, has used biased data or if it is lacking in creativity. These will be far more important skills than writing an essay, so the assessment tasks we set will need to reflect this.”

Snapchat Integrates ChatGPT Elements into New ‘My AI’ Tool In-App

Published Feb. 27, 2023


Andrew Hutchinson
Content and Social Media Manager

Snapchat is the first social platform to jump aboard the generative AI bandwagon, with the launch of a new chatbot element for Snapchat+ subscribers called ‘My AI’, which will integrate ChatGPT into the platform, and provide AI-generated responses to queries.

Snapchat @Snapchat
Say hi to My AI 👻

As you can see in this example, Snap’s new, purple-skinned My AI character will be available for Snap+ subscribers to pose questions to, and will share answers customized for Snapchat.

As per Snapchat:

“My AI can recommend birthday gift ideas for your BFF, plan a hiking trip for a long weekend, suggest a recipe for dinner, or even write a haiku about cheese for your cheddar-obsessed pal. Make My AI your own by giving it a name and customizing the wallpaper for your Chat.”

As noted, it’s the first social platform to directly integrate generative AI elements into the UI.

Winners and losers in the race to add AI

ChatGPT is now running inside Snapchat, Notion, and other apps. Will it give them a sustaining advantage — or just line OpenAI’s pockets?

Casey Newton

“an artificial intelligence with dollars flying out of it, digital art” / DALL-E

Today, let’s talk about a subject that crosses my mind with every generative-AI startup pitch that lands in my inbox: who’s going to make the real money off artificial intelligence?

Last week the productivity startup Notion announced that Notion AI, a suite of tools based on OpenAI’s ChatGPT, had entered general availability. For $10 per user per month, Notion can now summarize meeting notes, generate lists of pros and cons, and draft emails.

Notion AI is among the first in a wave of companies that are racing to capitalize on growing interest in generative AI. This week Snapchat made available a ChatGPT-based chatbot called My AI for subscribers to its $4-a-month Snapchat Plus offering. The educational app Quizlet announced a ChatGPT-based tutor called Q-Chat. And Instacart said it is developing a tool that will let customers ask about food and get “shoppable” answers informed by product data from the company’s retail partners.

What I’m interested in, as more and more companies adopt features like this, is where the ultimate value lies. Will an ever-growing number of companies find ways to integrate AI into products that are valuable enough to charge for — or will the bulk of the profits go to the small number of companies building and refining the underlying models on which those tools are based?

The answer will go a long way in determining whether generative AI represents a true platform shift on the order of the move from desktop computers to mobile phones, or a more limited set of innovations whose benefits accrue to a handful of big winners.

The subject is clearly on developers’ minds. This week, in response to concerns, OpenAI said it would no longer use developers’ data to improve its models without their permission. Instead, it would ask developers to opt in.

“One of our biggest focuses has been figuring out, how do we become super friendly to developers?” Greg Brockman, OpenAI’s president and chairman, told TechCrunch. “Our mission is to really build a platform that others are able to build businesses on top of.”

Maybe it’s that simple — developers don’t want to help OpenAI refine its models for free, and OpenAI has decided to respect their wishes. This explanation feels more consistent with a world where AI really does represent a platform shift.

Or maybe OpenAI believes that its models can continue to improve rapidly with or without all of those developers opting in. This explanation feels more consistent with a world in which OpenAI and a handful of others reap most of the rewards of AI.

So what kinds of AI features are people actually selling? …..

Is it time to hit the pause button on AI?

An essay on technology and policy, co-authored with Canadian Parliament Member Michelle Rempel Garner.

Gary Marcus and Michelle Rempel Garner

Feb 26

Earlier this month, Microsoft released their revamped Bing search engine—complete with a powerful AI-driven chatbot—to an initially enthusiastic reception. Kevin Roose in The New York Times was so impressed that he reported being in “awe.”

But Microsoft’s new product also turns out to have a dark side. A week after release, the chatbot – known internally within Microsoft as “Sydney” – was making entirely different headlines, this time for suggesting it would harm and blackmail users and wanted to escape its confines. Later, it was revealed that disturbing incidents like this had occurred months before the formal public launch. Roose’s initial enthusiasm quickly turned into concern after a two-hour-long conversation with Bing in which the chatbot declared its love for him and tried to push him toward a divorce from his wife.

Some will be tempted to chuckle at these stories and view them as they did Tay, a previously ill-fated Microsoft chatbot released in 2016: as a minor embarrassment for Microsoft. But things have dramatically changed since then.

The AI technology that powers today’s “chatbots” like Sydney (Bing) and OpenAI’s ChatGPT is vastly more powerful, and far more capable of fooling people. Moreover, the new breed of systems are wildly popular and have enjoyed rapid, mass adoption by the general public, and with greater adoption comes greater risk. And whereas in 2016, when Microsoft voluntarily pulled Tay after it began spouting racist invective, today, the company is locked in a high-stakes battle with Google that seems to be leading both companies towards aggressively releasing technologies that have not been well vetted.

Already we have seen people try to retrain these chatbots for political purposes. There’s also a high risk that they will be used to create misinformation at an unprecedented scale. In the last few days, the new AI systems have led to the suspension of submissions at a science fiction publisher because it couldn’t cope with a deluge of machine-generated stories. Another chatbot company, Replika, changed policies in light of the Sydney fiasco in ways that led to acute emotional distress for some of its users. Chatbots are also causing colleges to scramble due to newfound ease of plagiarism; and the frequent plausible, authoritative, but wrong answers they give that could be mistaken as fact are also troubling. Concerns are being raised about the impact of this on everything from political campaigns to stock markets. Several major Wall Street banks have banned the internal use of ChatGPT, with an internal source at JPMorgan citing compliance concerns. All of this has happened in just a few weeks, and no one knows what exactly will happen next.

Killing The ‘Must Have a Warm Intro To a VC’ Myth Will Be Good For Founders and Great for the World

Posted on March 2, 2023 by hunterwalk

Why I’m Participating in SeedChecks’ “Submit Your Deck” Community

We’ve never believed in hiding. Our email addresses are on our website, and there’s no junior staffer or AI Bot replying on our behalf. A strong Cold Email always beats a weak Warm Email. And we’ve backed successful startups where the relationship began exactly this way. So, yeah, we’re believers in being accessible and open to founders regardless of whether or not we share an existing network.

That’s why I’m participating in SeedChecks, an effort through which you can submit your pitch deck to a group of early-stage investors without having to know any of us already. Everyone looks at your information separately and follows up if there’s potential for mutual fit. It’s not a groupthink exercise and there’s no collusion, though hopefully some collaboration, as I’ve personally co-invested with many of the folks in the group.

Will opportunities like SeedChecks be the death of the warm intro? Of course not, but we’re all committed to the idea that now more than ever, great founders can be anywhere and anyone. So please consider submitting your materials if you’re looking to raise your first round of capital.

Section 230 Is a Load-Bearing Wall—Is It Coming Down?

A conversation with James Grimmelmann and Kate Klonick
By Nabiha Syed

February 25, 2023 08:00 ET


Hello World is a weekly newsletter—delivered every Saturday morning—that goes deep into our original reporting and the questions we put to big thinkers in the field.

Everyone’s favorite punching bag, Section 230 of the Communications Decency Act of 1996, made its way to the Supreme Court this week in Gonzalez v. Google. The Gonzalez case confronts the scope of Section 230, and in particular examines whether platforms are liable for algorithms that target users and recommend content. The stakes are high: Recommendation algorithms underpin much of our experience online (search! social media! online commerce!)—so the case spawned dozens of friend-of-the-court briefs, hundreds of webinars and tweet threads, and new wrinkles for people who love the internet. So basically, it’s my idea of a good time.

Marie Kondo: “I’m so excited because I love mess”

On that note: Hi! I’m Nabiha Syed, CEO of The Markup and a media lawyer by training. I like unanswered questions and moments where we confront the messiness of our lives as mediated by technology. 

The sprawling two-hour-and-41-minute oral argument in Gonzalez pulled a lot of threads, some of which were clarifying and others that were, well, about thumbnails and rice pilaf. (Seriously.) My takeaway is that the court is unlikely to gut Section 230, but it might nibble at its scope in chaotic ways. To help make sense of it all, I turned to two of my favorite legal scholars and old colleagues of mine from the Yale Information Society Project: James Grimmelmann, professor at Cornell Law School and Cornell Tech, and Kate Klonick, law professor at St. John’s University and fellow at the Berkman Klein Center at Harvard.

James Grimmelmann (left) and Kate Klonick (right). Credits: Jesse Winter/Cornell Tech (left); courtesy of The Brookings Institution (right)

Here’s what they had to share, edited for brevity and clarity.

Nabiha: Let’s take a minute to revisit the history and legislative intent of Section 230: What was it meant to incentivize?

James: It seems paradoxical, but Section 230 was intended to encourage internet platforms to moderate the content they carry. Several court decisions in the years before Section 230 was enacted had created a perverse legal regime in which the more effort a platform put into moderating harmful content, the stricter a standard it would be held to. By creating a blanket immunity, Section 230 ensured that platforms can remove the harmful content that they find without worrying that they will be held liable for the harmful content that they miss.

Startup Decoupling & Reckoning

The coming reset in mid-to-late stage startups in 2023-2024 is at this point likely largely decoupled from interest rates and inflation. Implications are discussed.

Elad Gil

Feb 28

During the last few years, low interest rates and money printing led to a funding bubble in private technology. Many startups received pre-emptive extra “free rounds” and hired teams much larger than their stage or progress merited. In parallel, abundant money and secondaries meant companies that should have shut down or sold kept going (indeed – this was a pre-COVID phenomenon from ~2017 or 2018 on). There was always another round or extension to keep companies without real product-market fit going. Or founders sold secondary stock in multiple rounds instead of selling a company that wasn’t really working.

These trends have resulted in an overhang of companies that either (1) lived without product market fit and survived well past their natural expiration point, or (2) hired way ahead of progress and burned large sums with high valuations and now are stuck with little progress per dollar and a large preference stack.

When will companies run out of cash?

Many companies are likely about to meet a hard reckoning. This is likely to start at the end of 2023 and accelerate through the end of 2024 or so. The likely timing is +/- 6 months, based on when companies last fundraised at scale (2021) and how much runway they raised.

Many companies raised 2-4 years of runway in 2021 (and Q1 22). A company needs to fundraise when it still has 9-12 months of cash left.

For example, below are the cash-out dates for companies assuming they raised 2.5 years of cash in 2021. As you can see, in this scenario many companies of this type cash out by Q3 2024.

For 3 years of cash, cash out largely happens by end of 2024.

Adding an extra year of cash (4 years raised) obviously just pushes things out an extra year – so we should expect a tail of companies that raised even more in 2021 to run out of money in 2025. For a CEO not making much progress, it might not be worth waiting that long to make a radical change to one’s business.

There is likely a mix of companies that raised anywhere from 2-4 years of cash in 2021, will not grow into prior valuations, and will need to find an exit or shut down. As such, Q3 2023 through end of 2024 (and maybe part of 2025) is roughly when these companies will start to seek exits and/or shut down.
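Gil's timing argument is simple arithmetic: the cash-out date is the raise date plus runway, and the fundraise deadline lands 9-12 months earlier. A minimal sketch of that calculation (the function names and the 9-month default are illustrative, not from the essay):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

def cash_dates(raise_date: date, runway_years: float, buffer_months: int = 9):
    """Return (fundraise_by, cash_out) for a company that raised
    `runway_years` of cash on `raise_date`. The buffer reflects the
    rule of thumb that a company must start raising while it still
    has 9-12 months of cash left."""
    total_months = round(runway_years * 12)
    cash_out = add_months(raise_date, total_months)
    fundraise_by = add_months(raise_date, total_months - buffer_months)
    return fundraise_by, cash_out

# A company that raised 2.5 years of cash in September 2021 cashes out
# in March 2024 and needs to be back in market by mid-2023.
```

Running the scenarios in the essay (2.5, 3, and 4 years of runway against 2021 raise dates) reproduces the Q3 2023 through 2025 reckoning window.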

Video of the Week

News of the Week

Biden finds breaking up Big Tech is hard to do

Google is hiring teams of former DOJ lawyers to fight antitrust lawsuits as the battle over tech firms’ power shifts to the courts

By Will Oremus, Cat Zakrzewski and Naomi Nix

Google has been quietly assembling a phalanx of former Justice Department lawyers as the tech titan gears up for the regulatory fight of its life against the attorneys’ former employer.

The Department of Justice offensive, a pair of lawsuits aimed at breaking up the search giant’s dominance, will play out in the courts — reflecting a new phase in the Biden administration’s years-long effort to rein in Big Tech, after a sweeping antitrust package stalled in Congress.

When President Biden took office, he picked trustbusters to lead key agencies amid bipartisan calls to curtail the largest internet firms’ power over the digital economy. But halfway through his term, the movement’s losses have outpaced its wins, key figures are stepping down and Republican control of the House has taken bills that could break up tech giants off the table.

Now the terrain has shifted from Congress to the courts. Last month, the Justice Department and eight states filed a suit aiming to break up Google’s lucrative ad business, while a 2020 suit filed under President Donald Trump alleging it monopolizes online search is headed for trial later this year. Meanwhile, Federal Trade Commission Chair Lina Khan, an anti-monopoly crusader, is pursuing a suite of ambitious lawsuits aiming to break up Facebook parent Meta and block heavyweights such as Microsoft from gobbling up smaller firms, while seeking to rewrite federal rules on antitrust enforcement. And last week, the Supreme Court heard arguments in Gonzalez v. Google, a case that could weaken internet companies’ prized liability shield.

The antitrust reformers who cheered when Biden tapped Khan to chair the FTC, Jonathan Kanter to lead Justice’s antitrust division and leading Big Tech critic Tim Wu as a White House special assistant insist their uphill push to rein in the world’s richest companies is still gaining traction, despite a string of high-profile setbacks. But the tech industry is willing to spend big to hold its ground. And persuading courts to rethink decades of business-friendly precedent presents a challenge as daunting as pushing legislation through a divided Congress.

FTC drops bid to block Meta’s acquisition of Within

A federal court previously denied the agency’s request for a preliminary injunction.


Mariella Moon
February 25, 2023 10:37 AM

The Federal Trade Commission has given up on trying to stop Meta from purchasing VR company Within. According to Bloomberg and The Wall Street Journal, the agency has voted to drop its administrative case against the company a few weeks after a federal court denied its request for a preliminary injunction to block the acquisition. 

The FTC originally filed antitrust lawsuits in federal court and its in-house court last year in an effort to prevent Meta from snapping up the company that developed the virtual reality workout app Supernatural. At the time, the commission accused Meta of “trying to buy its way to the top… instead of earning it on the merits.” It said the company had the resources to enter “the VR fitness market by building its own app” and doing so would increase consumer choice and innovation. By buying Within, the FTC alleged Meta would stifle “future innovation and competitive rivalry.”

US District Judge Edward Davila, who oversaw the federal case, ruled in favor of Meta. While he reportedly agreed that mergers that could potentially harm competition in the future should be blocked, he decided that the FTC failed to offer sufficient evidence showing how the Within acquisition would be detrimental to the market. He also said that while Meta has vast resources, it “did not have the available feasible means to enter the relevant market other than by acquisition.”

Technically, Davila’s ruling didn’t have a direct effect on the administrative case. As The Journal notes, though, antitrust officials have previously dropped administrative lawsuits if the federal court denies an injunction. Now Meta can rest assured that when it completed its acquisition of Within on February 8th, the deal was truly final. 

“We’re excited that the Within team has joined Meta, and we’re eager to partner with this talented group in bringing the future of VR fitness to life,” a Meta spokesperson told Engadget.  

The FTC’s withdrawal represents one of its most pertinent losses under the leadership of Lina Khan, who’s known to be a prominent critic of Big Tech and a leading antitrust scholar. In December, the agency took on an even bigger challenge than this one when it filed an antitrust complaint to block Microsoft’s planned $68.7 billion takeover of Activision Blizzard. “Microsoft would have both the means and motive to harm competition by manipulating Activision’s pricing, degrading Activision’s game quality or player experience on rival consoles and gaming services, changing the terms and timing of access to Activision’s content, or withholding content from competitors entirely, resulting in harm to consumers,” the FTC said.

Tech Leaders in Israel Wonder if It’s Time to Leave

Ahead of a judicial overhaul that could transform the country and frighten away investors, the executives of Start-Up Nation are mulling an exodus.

By David Segal

Reporting from Tel Aviv

Published Feb. 23, 2023; updated Feb. 28, 2023

“Given the atmosphere now, it’s almost irresponsible to start a company” in Israel, said Yanki Margalit, a veteran entrepreneur. Credit: Ofir Berman for The New York Times

For years, budding Israeli tech executives have asked Yanki Margalit, a veteran entrepreneur, where they should start their fledgling companies. For years, he’s offered the same advice: Here, in Israel, where software engineers are plentiful, international investors are eager and friends and family live.

But as Mr. Margalit prepares a new venture of his own, one focused on combating climate change, he has reluctantly concluded that Israel is the wrong place to launch.

“Given the atmosphere now, it’s almost irresponsible to start a company here,” the 60-year-old said, “and that is heartbreaking.”

The luminaries of Start-Up Nation, as Israel has been known for decades, are eyeing the exits. Several have already announced that they are relocating or moving money out of the country, including the chief executive of Papaya Group, a payroll company valued at more than $1 billion.

Incoming YouTube CEO Neal Mohan Outlines His Vision for the Future of the App

Published March 1, 2023

By Andrew Hutchinson
Content and Social Media Manager

Incoming YouTube CEO Neal Mohan has laid out his initial vision for the future of the app, which includes a range of creator monetization and expression tools, along with YouTube TV developments, generative AI, podcasts, and more.

First off, Mohan says that he wants to help creators make more money, in order to keep them posting to the app.

As per Mohan:

“Hundreds of thousands of channels made money on YouTube for the first time last year. And we’re providing more opportunities for creators outside of ads by expanding our subscriptions business, investing in shopping, and continually improving our paid digital goods offerings.”

Mohan also notes that more than six million viewers paid for channel memberships on YouTube in December 2022, an increase of over 20% year-on-year.

YouTube’s Partner Program (YPP) is a key strength in this respect, paying out over $10 billion per year to creators, primarily via ad revenue share. YouTube has now extended that to Shorts, with ad revenue distributed among eligible creators based on view counts. That could be a more equitable way to monetize short-form video, and could help YouTube attract more TikTokers to its platform to monetize their popularity.
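The mechanism described is a pooled, view-proportional split: eligible creators share an ad-revenue pool in proportion to their Shorts views. A toy sketch of that idea (this is an illustration of proportional allocation, not YouTube's actual payout formula, which also nets out music licensing and revenue-share rates):

```python
def split_revenue_by_views(pool: float, views: dict[str, int]) -> dict[str, float]:
    """Distribute an ad-revenue pool among creators in proportion
    to their view counts. Creators with zero total views get nothing."""
    total = sum(views.values())
    if total == 0:
        return {name: 0.0 for name in views}
    return {name: pool * v / total for name, v in views.items()}

# With a $100 pool, a creator with 3x the views of another
# receives 3x the payout.
```

The appeal over per-video ad attachment is that every eligible view earns a pro-rata share, which is harder to game and friendlier to small creators.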

It’s still early days, but initiatives like this will be central to Mohan’s next-level monetization push.

Mohan also notes that YouTube’s recently introduced multi-language audio tracks feature will provide greater opportunities for expanded audience reach. Mohan says that the option will also be expanded to live streams and Shorts, which could be another way to maximize engagement and interest in its expanded offerings.

Mohan also wants to help more creators get into Shorts, by providing more remix options to create Shorts from longer clips and live streams.

“Shorts is now averaging over 50 billion daily views. And last year, the number of channels that uploaded to Shorts daily grew over 80%.”

That presents a key opportunity for the app, and you can expect to see YouTube doubling down on opportunities to create and distribute Shorts as a means to tap into the rise of short-form content.

Mohan also notes that YouTube is bringing more content to home TV sets, including NFL Sunday Ticket, and a new feature that will enable viewers to watch multiple NFL games at once via the app.

That could become a key element – more and more younger users are growing accustomed to multiple media inputs streaming at one time, and the capacity to view several things on the big screen could be a big lure in getting more viewers more interested in YouTube’s TV offerings.

Along similar lines, Mohan also notes that YouTube’s looking to launch a new creation tool that will enable creators to record a Short in a side-by-side layout with both Shorts and YouTube videos, so that they can easily add their own take on a trend or join in with reactions.

YouTube’s also exploring AI elements:

“Creators will be able to expand their storytelling and raise their production value, from virtually swapping outfits to creating a fantastical film setting through AI’s generative capabilities. We’re taking the time to develop these features with thoughtful guardrails. Stay tuned in the coming months as we roll out tools for creators as well as the protections to embrace this technology responsibly.”

Mohan also flags coming updates for podcasts, with YouTube now the second most popular destination for listening to podcasts, according to Edison.

Stripe Cuts Valuation to $50 Billion After Facing Fundraising Hurdles


Maria Heeter and Cory Weinberg

Feb. 28, 2023 4:43 PM PST


Stripe has cut the valuation for its multi-billion-dollar fundraising by about 10% to around $50 billion, according to two people familiar with the situation, underlining the challenges that Stripe has faced in completing the fundraising.

While Stripe is still expected to complete the funding round, it is now setting the per-share price at about $20, down from about $23 a share, these people said. The earlier per-share price translated to a valuation of $55 billion, which was already a huge discount to the valuation of $95 billion at which Stripe last raised money, in early 2021.

OpenAI launches an API for ChatGPT, plus dedicated capacity for enterprise customers

Kyle Wiggers
@kyle_l_wiggers
10:00 AM PST • March 1, 2023


To call ChatGPT, the free text-generating AI developed by San Francisco-based startup OpenAI, a hit is a massive understatement.

As of December, ChatGPT had an estimated 100 million-plus monthly active users. It’s attracted major media attention and spawned countless memes on social media. It’s been used to write hundreds of e-books in Amazon’s Kindle store. And it’s credited with co-authoring at least one scientific paper.

But OpenAI, being a business — albeit a capped-profit one — had to monetize ChatGPT somehow, lest investors get antsy. It took a step toward this with the launch of a premium service, ChatGPT Plus, in February. And it made a bigger move today, introducing an API that’ll allow any business to build ChatGPT tech into their apps, websites, products and services.

An API was always the plan. That’s according to Greg Brockman, the president and chairman of OpenAI (and also one of the co-founders). He spoke with me yesterday afternoon via a video call ahead of the launch of the ChatGPT API.

“It takes us a while to get these APIs to a certain quality level,” Brockman said. “I think it’s kind of this, like, just being able to meet the demand and the scale.”

Brockman says the ChatGPT API is powered by the same AI model behind OpenAI’s wildly popular ChatGPT, dubbed “gpt-3.5-turbo.” GPT-3.5 is the most powerful text-generating model OpenAI offers today through its API suite; the “turbo” moniker refers to an optimized, more responsive version of GPT-3.5 that OpenAI’s been quietly testing for ChatGPT.

Priced at $0.002 per 1,000 tokens, or about 750 words, the API can drive a range of experiences, Brockman claims, including “non-chat” applications. Snap, Quizlet, Instacart and Shopify are among the early adopters.
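The pricing makes back-of-the-envelope cost estimates easy: at $0.002 per 1,000 tokens, a typical request costs a fraction of a cent. A minimal sketch (the function name is illustrative; the price and the ~750-words-per-1,000-tokens figure are from the article):

```python
def chatgpt_api_cost(tokens: int, price_per_1k: float = 0.002) -> float:
    """Estimate the gpt-3.5-turbo API cost in dollars for a request,
    at the launch price of $0.002 per 1,000 tokens."""
    return tokens * price_per_1k / 1000

def words_to_tokens(words: int) -> int:
    """Rough conversion using the article's ratio of ~750 words
    per 1,000 tokens."""
    return round(words * 1000 / 750)

# A 1,500-word exchange is roughly 2,000 tokens, or about $0.004.
```

At these rates, a million 1,500-word requests would run on the order of $4,000, which is why the launch read as an aggressive play for developer adoption.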

Startup of the Week

Tweet of the Week