A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I select the articles because they are of interest. The selections often include things I disagree with. The articles are only snippets. Click on the headline to go to the original. I express my point of view in my editorial and the weekly video.
Content this week from @sama, @stratechery, @annasofialesiv, @albertwenger, @vkhosla, @reedalbergotti, @chloexiang, @goodeggs, @amir, @mingchikuo, @nfl, @hunterwalk, @nmasc_
Contents
Editorial: An Open Letter
Essays of the Week
Pause Giant AI Experiments: An Open Letter
The Open Letter to Stop ‘Dangerous’ AI Race Is a Huge Mess
Vinod Khosla on how AI will ‘free humanity from the need to work’
A Short History of Artificial Intelligence
The Accidental Consumer Tech Company; ChatGPT, Meta, and Product-Market Fit; Aggregation and APIs
Video of the Week
Coldfusion – AI is Evolving Faster Than You Think [GPT-4 and beyond]
News of the Week
Death of The Generalist Seed VC
The NFL Is Quietly Investing Millions Into Its Venture Capital Fund, 32 Equity
Apple Mixed-Reality Headset May Not Appear at WWDC as Mass Production Pushed Back Yet Again
Startup of the Week
Good Eggs Cuts Its Valuation 94% in Lifeline Financing as More Startups Get Desperate
Tweet of the Week
Sam Altman
Editorial: An Open Letter
1) the technical ability to align a superintelligence
2) sufficient coordination among most of the leading AGI efforts
3) an effective global regulatory framework including democratic governance
No comment this week, just an alternative Open Letter regarding AI. Enjoy.
We encourage all AI labs to continue exploring and developing AI systems beyond GPT-4 while prioritizing safety and ethical considerations.
AI systems with human-competitive intelligence have the potential to bring about unparalleled benefits to society and humanity, as demonstrated by a wealth of research and recognized by leading AI labs. The Asilomar AI Principles remind us that advanced AI could represent a profound change in the history of life on Earth and thus should be planned for and managed with the appropriate care and resources. We believe that responsible AI development can coexist with this careful approach.
Throughout history, human progress has been marked by developing and adopting time-saving techniques, enabling the division of labor and increased productivity. AI has the potential to enhance this trend further, automating repetitive tasks and allowing humans to focus on more fulfilling and creative endeavors. By leveraging AI, we can augment human capabilities, resulting in a more efficient and prosperous society where individuals can unlock their full potential and contribute to the collective growth of humanity.
Contemporary AI systems are approaching human competitiveness in general tasks, opening up opportunities. We must ask ourselves: Can we use AI to combat misinformation and enhance the quality of information available? Can we create new, fulfilling jobs that leverage the strengths of both humans and AI? Can we work alongside nonhuman minds to solve some of humanity’s most pressing challenges? We believe that responsible AI development can answer these questions affirmatively.
We encourage AI labs to actively pursue the development of AI systems more powerful than GPT-4 while remaining transparent and committed to safety measures. OpenAI’s recent statement regarding artificial general intelligence suggests that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now, and we must work collaboratively to ensure responsible progress.
We urge AI labs and independent experts to work together to develop and implement a set of shared safety protocols for advanced AI design and development. These protocols should be audited and overseen by independent outside experts, ensuring that systems adhering to them are safe beyond a reasonable doubt.
AI research and development should focus on creating more powerful systems and making existing state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, we propose that AI developers, researchers, industry stakeholders, and end-users collaborate to establish a self-governing, multi-stakeholder organization similar to how ICANN operates within the domain name space. This organization would be responsible for creating and enforcing robust governance systems for AI, including dedicated oversight of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to distinguish real from synthetic content and to track model leaks; a robust auditing and certification ecosystem; and liability for AI-caused harm.
Humanity can thrive in the future with AI. By promoting responsible AI development, we can move toward a world where these powerful systems are engineered for the clear benefit of all, and society has the opportunity to adapt. Let’s embrace the potential of AI to drive progress while maintaining a steadfast commitment to safety and ethical considerations.
Essays of the Week
Pause Giant AI Experiments: An Open Letter
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.
Notes and references
[1] Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?🦜. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610-623).
Bostrom, N. (2016). Superintelligence. Oxford University Press.
Bucknall, B. S., & Dori-Hacohen, S. (2022, July). Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).
Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk?. arXiv preprint arXiv:2206.13353.
Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. Norton & Company.
Cohen, M. et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3) (pp. 282-293).
Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.
Hendrycks, D., & Mazeika, M. (2022). X-risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.
Ngo, R. (2022). The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
Weidinger, L. et al (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.
[2] Ordonez, V. et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: ‘A little bit scared of this’. ABC News.
Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.
[3] Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.
OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.
[4] Ample legal precedent exists – for example, the widely adopted OECD AI Principles require that AI systems “function appropriately and do not pose unreasonable safety risk”.
[5] Examples include human cloning, human germline modification, gain-of-function research, and eugenics.
The Open Letter to Stop ‘Dangerous’ AI Race Is a Huge Mess
The letter has been signed by Elon Musk, Steve Wozniak, Andrew Yang, and leading AI researchers, but many experts and even signatories disagreed.
By Chloe Xiang, March 29, 2023, 11:47am
More than 30,000 people—including Tesla’s Elon Musk, Apple co-founder Steve Wozniak, politician Andrew Yang, and a few leading AI researchers—have signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4.
The letter immediately caused a furor as signatories walked back their positions, some notable signatories turned out to be fake, and many more AI researchers and experts vocally disagreed with the letter’s proposal and approach.
The letter was penned by the Future of Life Institute, a nonprofit organization with the stated mission to “reduce global catastrophic and existential risk from powerful technologies.” It is also host to some of the biggest proponents of longtermism, a kind of secular religion boosted by many members of the Silicon Valley tech elite since it preaches seeking massive wealth to direct towards problems facing humans in the far future. One notable recent adherent to this idea is disgraced FTX CEO Sam Bankman-Fried.
Specifically, the institute focuses on mitigating long-term “existential” risks to humanity such as superintelligent AI. Musk, who has expressed longtermist beliefs, donated $10 million to the institute in 2015…
Despite this verification process, the letter started out with a number of false signatories, including people impersonating OpenAI CEO Sam Altman, Chinese President Xi Jinping, and Meta Chief AI Scientist Yann LeCun, before the institute cleaned up the list and paused the appearance of new signatures while it verified each one.
The letter has been scrutinized by many AI researchers and even its own signatories since it was published on Tuesday. Gary Marcus, a professor of psychology and neural science at New York University, told Reuters “the letter isn’t perfect, but the spirit is right.” Similarly, Emad Mostaque, the CEO of Stability.AI, who has pitted his firm against OpenAI as a truly “open” AI company, tweeted, “So yeah I don’t think a six month pause is the best idea or agree with everything but there are some interesting things in that letter.”
Vinod Khosla on how AI will ‘free humanity from the need to work’
Reed Albergotti, Mar 29, 2023, 8:50am PDT
THE SCENE
At nearly 70, Vinod Khosla is an elder statesman in Silicon Valley but he’s been on top of the most cutting-edge trends, including artificial intelligence.
When ChatGPT-maker OpenAI decided to switch from a nonprofit to a private enterprise in 2019, Khosla was the first venture capital investor, jumping at the opportunity to back the company that, as we reported last week, Elon Musk thought was going nowhere at the time. Now it’s the hottest company in the tech industry.
If you go back and read what Khosla wrote about artificial intelligence a decade ago, it sounds remarkably — even eerily — like what people are saying today.
That kind of foresight is why he has had staying power in the tech industry. He co-founded Sun Microsystems in 1982 (its programming language Java is still used today) and joined venture capital firm Kleiner Perkins in 1986, where he was an early backer of AMD, Juniper Networks, and Excite. He launched Khosla Ventures in 2004, where he’s been a leader in cleantech investing and has scored home runs on Impossible Foods, Instacart, and DoorDash.
I spoke to him about a wide range of topics, including how he seemed to see our AI future so long ago. The following is an edited excerpt of our conversation.
Thinking About AI
Albert Wenger
I am writing this post to organize and share my thoughts about the extraordinary progress in artificial intelligence over the last years and especially the last few months (link to a lot of my prior writing). First, I want to come right out and say that anyone still dismissing what we are now seeing as a “parlor trick” or a “statistical parrot” is engaging in the most epic goal post moving ever. We are not talking about a few extra yards here; the goal posts are not in the stadium anymore, they are in a faraway city.
Growing up, I was extremely fortunate that my parents supported my interest in computers by buying an Apple II for me and that a local computer science student took me under his wing. Through him I found two early AI books: one in German by Stoyan and Goerz (I don’t recall the title) and Winston and Horn’s “Artificial Intelligence.” I still have both of these although locating them among the thousand or more books in our home will require a lot of time or hopefully soon a highly intelligent robot (ideally running the VIAM operating system – shameless plug for a USV portfolio company). I am bringing this up here as a way of saying that I have spent a lot of time not just thinking about AI but also coding on early versions and have been following closely ever since.
I also pretty early on developed a conviction that computers would be better than humans at a great many things. For example, I told my Dad right after I first learned about programming around age 13 that I didn’t really want to spend a lot of time learning how to play chess because computers would certainly beat us at this hands down. This was long before a chess program was actually good enough to beat the best human players. As an aside, I have changed my mind on this as follows: Chess is an incredible board game and if you want to learn it to play other humans (or machines) by all means do so as it can be a lot of fun (although I still suck at it). Much of my writing both here on Continuations and in my book is also based on the insight that much of what humans do is a type of computation and hence computers will eventually do it better than humans. Despite that there will still be many situations where we want a human instead exactly because they are a human. Sort of the way we still go to concerts instead of just listening to recorded music.
As I studied computer science both as an undergraduate and graduate student, one of the things that fascinated me was the history of trying to use brain-like structures to compute. I don’t want to rehash all of it here, but to understand where we are today, it is useful to understand where we have come from. The idea of modeling neurons in a computer as a way to build intelligence is quite old. Early electromechanical and electrical computers started getting built in the 1940s (e.g. ENIAC was completed in 1946), and the early papers on modeling neurons date from the same period, in the work of McCulloch and Pitts.
A Short History of Artificial Intelligence
Tracing the rise of the robot mind
BY ANNA-SOFIA LESIV, MARCH 28, 2023
Editor’s note: For many years, AI developments crept along at a snail’s pace. It sometimes felt like we’d never move beyond the era of the AOL SmarterChild chatbot. And then, everything changed. In a little over half a decade, we’ve undergone a century’s worth of innovation.
In this post, Anna-Sofia Lesiv explores the major turning points that led us to this moment. Regardless of whether you’re an AI super fan watching ChatGPT’s every move, or a reluctant luddite wondering what the hell a “transformer” is, this essay is worth a read.
With recent advances in machine learning, we could be entering a period of technological progression more impactful than the Scientific Revolution and the Industrial Revolution combined. The development of transformer architectures and massive deep learning models trained on thousands of GPUs has led to the emergence of smart, complex programs capable of understanding and producing language in a manner indistinguishable from humans.
The notion that text is a universal interface that can encode all human knowledge and be manipulated by machines has captivated mathematicians and computer scientists for decades. The massive language models that have emerged in the previous handful of years alone are now proof that these thinkers were onto something deep. Models like GPT-4 are already capable of not only writing creatively but coding, playing chess, and answering complex queries.
The success of these models, and their rapid improvement through incremental scaling and training, suggests that the learning architectures available today may soon be sufficient to bring about a general artificial intelligence. It could be that new models will be required to produce artificial general intelligence (AGI), but if existing models are on the right track, the path to artificial general intelligence could instead amount to an economics problem—what does it take to get the money and energy necessary to train a sufficiently large model?
At a time of such mind-blowing advancement, it’s important to scrutinize the foundations underpinning technologies that are sure to change the world as we know it.
Beyond the Turing test with LLMs
Large language models (LLMs) like GPT-4 or GPT-3 are the most powerful and complex computational systems ever built. Though very little is known about the size of OpenAI’s GPT-4 model, we do know that GPT-3 is structured as a deep neural network composed of 96 layers and over 175 billion parameters. This means that just running this model to answer an innocent query via ChatGPT requires trillions of individual computer operations.
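To make that “trillions of operations” claim concrete with rough arithmetic: a dense transformer performs on the order of two floating-point operations per parameter for every token it generates, so a 175-billion-parameter model producing a few hundred tokens of output already lands above a hundred trillion operations. Here is a minimal back-of-the-envelope sketch, assuming the common 2-ops-per-parameter-per-token rule of thumb and an illustrative reply length (neither figure comes from OpenAI):

```python
# Back-of-the-envelope estimate of the compute behind a single chat reply.
# Assumptions (illustrative, not OpenAI's published figures):
#   - a dense transformer's forward pass costs ~2 floating-point operations
#     per parameter for each generated token (a common rule of thumb)
#   - a typical answer to an "innocent query" runs a few hundred tokens

PARAMETERS = 175e9            # GPT-3's published parameter count
OPS_PER_PARAM_PER_TOKEN = 2   # assumed forward-pass cost per generated token
REPLY_TOKENS = 300            # assumed length of the reply

total_ops = PARAMETERS * OPS_PER_PARAM_PER_TOKEN * REPLY_TOKENS
print(f"~{total_ops:.0e} operations")  # on the order of 1e14, roughly 100 trillion
```

Even with generous rounding, the point stands: answering one question with a model of this size is a computation measured in the tens to hundreds of trillions of operations.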
After it was released in June 2020, GPT-3 quickly demonstrated it was formidable. It proved sophisticated enough to write bills, pass an MBA exam at Wharton, and be hired as a top software engineer at Google (eligible to earn a salary of $185,000). Also, it could score a 147 on a verbal IQ test, putting it in the 99th percentile of human intelligence.
However, those accomplishments pale in comparison to what GPT-4 can do. OpenAI remained particularly tight-lipped about the size and structure of the model, apart from saying: “Over the past two years, we rebuilt our entire deep learning stack and, together with Azure, co-designed a supercomputer from the ground up for our workload.” Still, it shocked the world when it revealed just what this entirely redesigned model could do…
ChatGPT Gets a Computer
Posted on Monday, March 27, 2023
Ten years ago (from last Saturday) I launched Stratechery with an image of sailboats:
A simple image. Two boats, and a big ocean. Perhaps it’s a race, and one boat is winning — until it isn’t, of course. Rest assured there is breathless coverage of every twist and turn, and skippers are alternately held as heroes and villains, and nothing in between.
Yet there is so much more happening. What are the winds like? What have they been like historically, and can we use that to better understand what will happen next? Is there a major wave just off the horizon that will reshape the race? Are there fundamental qualities in the ships themselves that matter far more than whatever skipper is at hand? Perhaps this image is from the America’s Cup, and the trailing boat is quite content to mirror the leading boat all the way to victory; after all, this is but one leg in a far larger race.
It’s these sorts of questions that I’m particularly keen to answer about technology. There are lots of (great!) sites that cover the day-to-day. And there are some fantastic writers who divine what it all means. But I think there might be a niche for context. What is the historical angle on today’s news? What is happening on the business side? Where is value being created? How does this translate to normals?
ChatGPT seems to affirm that I have accomplished my goal; Mike Conover ran an interesting experiment where he asked ChatGPT to identify the author of my previous Article, The End of Silicon Valley (Bank), based solely on the first four paragraphs:
Given the first four paragraphs of the March 13, 2023 @stratechery post on SVB, GPT-4 identified Ben Thompson as the author.
Conover asked ChatGPT to expound on its reasoning:
ChatGPT was not, of course, expounding on its reasoning, at least in a technical sense: ChatGPT has no memory; rather, when Conover asked the bot to explain what it meant, his question included all of the session’s previous questions and answers, which provided the context necessary for the bot to simulate an ongoing conversation, and then statistically predict the answer, word-by-word, that satisfied the query.
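To picture that statelessness, here is a minimal sketch of a chat loop under the assumption of a generic completion endpoint (the `call_model` function below is a hypothetical stand-in, not an actual OpenAI API call): the model keeps nothing between turns, so the client resends the entire conversation, prior questions and answers included, with every new message.

```python
# Minimal sketch of a stateless chat loop: the model retains nothing between
# turns, so the client resends the full conversation with each new question.

def call_model(messages: list[dict]) -> str:
    """Hypothetical stand-in for a real completion API call."""
    raise NotImplementedError("wire this up to an actual LLM endpoint")

history: list[dict] = []  # the only "memory" in the system lives client-side

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = call_model(history)  # the model sees every prior turn, every time
    history.append({"role": "assistant", "content": reply})
    return reply

# ask("Who wrote these four paragraphs?")
# ask("Explain your reasoning.")  # only coherent because the first Q&A is resent
```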
This observation of how ChatGPT works is often wielded by those skeptical about assertions of intelligence; sure, the prediction is impressive, and nearly always right, but it’s not actually thinking — and besides, it’s sometimes wrong…..
…
To which I say, we shall see! I agree with Tyler Cowen’s argument about Existential Risk, AI, and the Inevitable Turn in Human History: AI is coming, and we simply don’t know what the outcomes will be, so our duty is to push for the positive outcome in which AI makes life markedly better. We are all, whether we like it or not, enrolled in something like the grand experiment Hawkins has long sought — the sailboats are on truly uncharted seas — and whether or not he is right is something we won’t know until we get to whatever destination awaits.
The Accidental Consumer Tech Company; ChatGPT, Meta, and Product-Market Fit; Aggregation and APIs
The Accidental Consumer Tech Company
In yesterday’s Article ChatGPT Gets a Computer I primarily wanted to explore the nature of AI and how the addition of Wolfram|Alpha in particular highlighted how the experience of using an LLM was fundamentally different from that of what we usually think of as computing. There are, though, more traditional strategic questions that I wanted to touch on today.
Last November, OpenAI was an exciting fat startup making an audacious bet on an AI future (albeit with a very unconventional structure). The most obvious business model for the research-focused company was the OpenAI API, access to which was sold on a usage basis.
Then came ChatGPT. Within a matter of weeks ChatGPT had over 100 million users, marking the fastest growth of a consumer app ever, and by all accounts it’s still growing rapidly; OpenAI, whether they intended to or not, suddenly found itself a consumer tech company.
Go back to the first interview I conducted with Nat Friedman and Daniel Gross last October; Friedman said of his search for AI product companies:
I left GitHub thinking, “Well, the AI revolution’s here and there’s now going to be an immediate wave of other people tinkering with these models and developing products”, and then there kind of wasn’t and I thought that was really surprising. So the situation that we’re in now is the researchers have just raced ahead and they’ve delivered this bounty of new capabilities to the world in an accelerating way, they’re doing it every day. So we now have this capability overhang that’s just hanging out over the world and, bizarrely, entrepreneurs and product people have only just begun to digest these new capabilities and to ask the question, “What’s the product you can now build that you couldn’t build before that people really want to use?” I think we actually have a shortage.
Interestingly, I think one of the reasons for this is because people are mimicking OpenAI, which is somewhere between the startup and a research lab. So there’s been a generation of these AI startups that style themselves like research labs where the currency of status and prestige is publishing and citations, not customers and products. We’re just trying to, I think, tell the story and encourage other people who are interested in doing this to build these AI products, because we think it’ll actually feed back to the research world in a useful way. We keep hearing these narratives about how AI is going to solve reasoning over time. I think a very good test for whether it’s actually doing that is something like Copilot, where if it is doing that, it’s going to start writing close to 100% of your code in time.
There is not, needless to say, a shortage of people paying attention to products now, a topic I touch on in another interview with Friedman and Gross that will publish on Thursday. The bit about OpenAI, though, is an important one: this really isn’t an entity that was created to be a consumer tech company, and not just because of its weird corporate structure; Friedman’s point is that the culture wasn’t that of a consumer tech company either.
This is the first reason why I thought last week’s launch of ChatGPT plugins was so important: to me it signaled a major shift in ambition. ChatGPT may have started out as a fun demo, but now it is well on its way to being the next major consumer tech platform. And yes, that may have been obvious — those usage numbers don’t lie — but I think this launch was meaningful for what it signaled about OpenAI’s acceptance of that reality.
ChatGPT, Meta, and Product-Market Fit
There is another interesting meta observation to make: there is no one in tech who desires a traditional platform more than Meta’s Mark Zuckerberg. As far back as 2013 I was arguing that mobile saved Facebook from Zuckerberg’s platform obsession by forcing the company to focus on simply being an advertising platform instead of a developer platform; a year later I was very critical of Facebook’s acquisition of Oculus because I thought Zuckerberg’s desire for a platform was making him overlook VR’s fundamental shortcomings:
….
I doubt OpenAI will kill its API, to be clear, at least not for now, but another way to state this analysis more gently is that any startup building on OpenAI’s API would do well to consider the Azure alternative: supporting APIs forever is what the Microsoft monopoly was built on, and I’m sure the company is eager to lock in the next generation of companies on Azure.
On the flip side, I think that ChatGPT + plugins means that Bing might not make the ultimate market share progress that Microsoft hopes: while OpenAI withdrew the web plugin at some point over the weekend, I’m sure it’s coming back, and the reality is that in the consumer space ChatGPT already has the mindshare and, in my estimation, the better user experience.
Of course we have seen clearly compatible companies fight on each other’s turf to their collective detriment (e.g. Apple and Google), but the fact that Microsoft and OpenAI are joined at the hip will perhaps make their coopetition tilt more towards the “cooperate” part than the “competition” part. That’s for them to decide, though; it’s customers who are the big winners, which is always the origin story of a new Aggregator.
Video of the Week
News of the Week
Are solo GPs screwed?
The rocky state of venture’s chattiest cohort
Natasha Mascarenhas (@nmasc_), 5:00 AM PDT, March 25, 2023
Entrepreneur Ankur Nagpal raised a $70 million venture fund last year, called Vibe Capital, from over 200 investors. But now, as the market shifts and LPs are less interested in venture capital, the Ocho founder is shrinking the fund size by roughly 43%, canceling capital calls, and, ultimately, sending back money that had already been wired to the fund.
The contraction, Nagpal told TechCrunch, occurred because he’s busy building his own startup and the funding environment has shifted to more realistic expectations: “What looked like a $10 billion outcome is now a $1 billion outcome.” As a result, he says he’s more confident in returning a higher multiple if he’s investing from a smaller fund size.
His LPs were surprised but “super happy” to get the capital back, Nagpal said. Since announcing the cut, the founder says that five different solo GPs have messaged him asking for introductions to LPs who just got capital back. “I think the reality is a lot of these people who are getting money back are actually not going to allocate it to venture anymore.” One of Nagpal’s biggest investors is Tiger Global, which has become notorious for retreating from venture fund bets. His other investors, namely venture funds, will likely use the capital to bet on new startups out of their own fund, he said.
In Nagpal’s case, the move will let him put 90% of his time into his new startup. But he says others in the solo GP world are going through a rough time. Many are shrinking fund goals, extending fundraising timelines, teaming up with investors to avoid team risk or even going toward placement agents, once taboo in the world of fundraising, to help them close investors in exchange for a fee. “Even the ones who are taking it seriously are actually now trying to build a firm, so you’re kind of becoming the thing that you were trying to replace,” he said…..
Death of The Generalist Seed VC
Leading Funding Rounds Now Requires More Thematic Focus To See, Pick, Win & Service Startups
Hunter Walk
I’m going to make the case that early stage venture *firms* who want to lead seed/A Rounds can be generalists (in the sense they have a set of GPs who cover a broad set of areas collectively), but that as a VC you, now more than ever, need some degree of focus. This doesn’t mean every partner is so narrow as to invest in just a single type of startup. But it does mean that if you cannot articulate the handful of spaces you are trying to dominate (from an investment returns perspective), you are probably not going to succeed going forward.
Why do I believe this is now a reality (and I didn’t necessarily when we started Homebrew a decade ago)?
“Software eating the world” (or my take, “software enabling the world”) means the breadth of problems startups are solving, the range of markets they’re participating in, and specifics of what they’re trying to accomplish during their first few years of operations, are so diverse that you cannot credibly hold evaluative criteria in your head for all startups. You might invest as a generalist but I don’t believe your returns will outperform.
Similarly, founders of these companies are looking for lead investors who can de-risk their path forward. Being a pure generalist VC makes it very difficult to convince savvy founders that you have the industry relationships, relevant pattern-matching, and density of experience to be their best lead investor. I think this remains true no matter how large your firm’s Operations team is, or how much of a celebrity VC you might be (although both of those can be selling points).
We’re back in a phase where alpha seems to be coming from technical innovation (crypto, climate, biology, AI) and not just business model application (XYZ but a marketplace!). It’s not just that it takes time to understand these technologies; each of them brings its own new networks and talent to the forefront. You have to be *in* these networks to see the best early stage opportunity, not just wait for intros to land in your inbox.
The NFL Is Quietly Investing Millions Into Its Venture Capital Fund, 32 Equity
Published on March 25, 2023
A new committee supervising the NFL’s expanding 32 Equity venture arm has been formed, with four owners and a club president appointed by Commissioner Roger Goodell. In the world of venture capital, the NFL operates its own fund, known as 32 Equity.
Let’s go through the basics of 32 Equity, including when it was created, how much money the owners put in at first, and the returns they’ve already experienced.
The founding of the NFL’s 32 Equity
Several of the major governing bodies in professional sports have set up venture capital funds to pool investment returns and increase their overall financial strength, as Profluence Sports reports. The National Football League’s 32 teams made the unexpected yet forward-thinking decision in 2013 to form 32 Equity. The league established the fund privately, but its early success has drawn widespread notice.
32 Equity is in charge of the NFL’s strategic investment efforts, with a focus on businesses that can expand football’s reach, provide engaging content, enhance the fan experience, and enter into huge, scalable markets in other industries.
At the beginning of 2013, all 32 NFL clubs were required to contribute $1 million each. As a result of the fund’s early success, franchise owners committed an extra $2 million in 2019 and $5 million in 2022.
32 Equity investments
32 Equity, the NFL’s venture fund:
– Annual NFL owner contribution: $1M (2013), $2M (2019), $5M (2022)
– Total raised from NFL owners: $250M
– Notable investments: Fanatics (up 10x), Hyperice, Genius Sports
– Average annual return: 30%
After 10 years and three rounds of fundraising, the NFL’s 32 Equity has made significant moves in the investment market, as Drew Mailen reports. The venture fund is now worth over $100 million per club.
In a vote, NFL owners decided to back NOBULL, reports Front Office Sports. The clothing and footwear company previously committed to sponsoring the league’s annual Scouting Combine. From its beginning, 32 Equity’s goal has been to fund businesses that are part of the sports ecosystem. It specifically focuses on ones that can gain an advantage due to their connection with the NFL. At the time, Twitter was an example of one that benefited from the NFL despite the league having little potential for Twitter…
Kuo: Apple Mixed-Reality Headset May Not Appear at WWDC as Mass Production Pushed Back Yet Again
Thursday March 30, 2023 4:50 am PDT by Hartley Charlton
Apple has again pushed back mass production of its mixed-reality headset and the device may not appear at this year’s Worldwide Developers Conference (WWDC), Apple analyst Ming-Chi Kuo today said.
Apple headset concept by David Lewis and Marcus Kane
In a tweet, Kuo explained that Apple “isn’t very optimistic” about whether the headset will be able to create an “iPhone moment.” As a result, the company has chosen to push the device’s mass production schedule back to the middle or end of the third quarter of 2023. Kuo believes that the delay adds uncertainty around “whether the new device will appear at WWDC 2023, as the market widely expects.”
The delay also means that shipment forecasts of the headset for 2023 will be even lower than previously thought, reducing to just 200,000 to 300,000 units. Previously, around half a million units were expected to ship this year.
Apple’s concern with the device allegedly stems from anticipated poor market feedback, catalysed by the economic downturn, hardware specification compromises, the weight of the device, the unreadiness of the headset’s ecosystem and applications, and its high selling price. Kuo believes that the headset will be priced at $3,000 to $4,000, or even higher.
Kuo’s comments mirror a recent report from The New York Times, which claimed that Apple employees have serious concerns about the headset’s prospects, calling it “a solution in search of a problem.”
Startup of the Week
Good Eggs Cuts Its Valuation 94% in Lifeline Financing as More Startups Get Desperate
March 23, 2023 6:00 AM PDT
As more startups struggle to raise money from venture capitalists and approach bankruptcy, they are going to extreme lengths to stay afloat. The latest example is Good Eggs, which delivers fresh produce and other groceries.
The company this month raised around $7 million from Greenwich, Conn., investment firm Glade Brook Capital Partners at a pre-investment valuation of $15 million, said two people with knowledge of the deal. That represents a 94% valuation drop from late 2020, when the pandemic boosted food-delivery services and the startup raised $60 million at a pre-investment valuation of $270 million, one of these people said. The new deal also effectively wiped out the value of stakes held by earlier investors such as Benchmark that chose not to contribute more money.
THE TAKEAWAY
• The company gave investors a slew of preferential terms
• Revenue fell 18% in 2022 as the company failed to find a buyer
• Cash burn has fallen 60% in recent months
This isn’t the first time Good Eggs investors have been through such a process, known in the industry as a cram-down or pay-to-play deal. Five years ago, Benchmark partner Bill Gurley—best known for his early backing of Uber—led a $50 million investment in Good Eggs to help the company avoid bankruptcy. That deal, one of Gurley’s first big moves after he stepped down from Uber’s board of directors a year earlier, followed a restructuring of Good Eggs, Gurley said on Thursday. That restructuring shrank the stakes held by investors such as Sequoia Capital that had invested in the company even earlier.
Pay-to-play deals are becoming more frequent as cash-burning startups run out of options. Exercise equipment maker Tonal is discussing such a financing, The Information reported this week, while crime-reporting app Citizen struck a similar deal, the Financial Times first reported.