A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I selected the articles because they are of interest. The selections often include things I disagree with. The articles are only snippets. Click on the headline to go to the original. I express my point of view in the editorial and the weekly video below.
This Week’s Video and Podcast
There will be no That Was The Week next week; we’ll be back on 6th October
Content this week from @kteare, @ajkeen, @krishnanrohit, @profgalloway, @adamlashinsky, @eladgil, @hunterwalk, @dan_hoog, @bgurley, @theallinpod, @ttunguz, @jon_victor_, @caseynewton, @kyle_l_wiggers, @OliverMolander, @sarahintampa, @jglasner, @levynews, @joemullin, @jaronschneider, @Samirkaji, @agazdecki
Editorial: Billions: The Venture Asset Class
Bill Gurley on Regulation, the State and Tech
What it feels like to build a startup
Editorial: Billions: The Venture Asset Class
There are a lot of talking points this week, and all of them could justify an editorial. Rohit Krishnan’s essay on innovation is a must-read. Prof Galloway’s missive about why Google should be broken up, at least into YouTube and Google, is nuanced and interesting – although I think any argument that Google has a monopoly of a market is wrong. It had a now-eroded monopoly of a feature – search. Elad Gil has two essays that I consider must-reads, one on AI regulation and one on unicorns. Bill Gurley’s talk at the All-In Summit on regulation, Washington, DC, and Silicon Valley is 20 minutes well spent in this week’s video of the week. And Boston Consulting Group’s findings about productivity and AI are compelling reasons for all of us to get an AI co-worker.
But I want to focus on Hatcher founder Dan Hoogterp’s piece on Venture Returns versus Private Markets. It was written in June 2022 but came to my attention this week.
His goals are spelled out as:
We compared venture capital returns to public markets returns over a period of 20+ years, both through simulation, discussed later, and directly from our data set – with the objective of providing insight as to the correlation between public market returns and venture capital returns.
The article discusses various methodologies and key aspects, including:
Historical, time-matched Public Market Equivalent (PME) return values are used to compare with venture capital investment data.
Indexes like the Nasdaq-100 or S&P 500 are used as proxies for diversified public market returns.
Historical Distribution of Returns: Public market returns loosely follow a normal distribution, whereas venture returns follow a power-law or exponential decay.
Correlation between Asset Types: When using a simple correlation metric, the R-value is under 0.02, indicating a very low correlation. However, using Spearman rank correlation, the value increases to 0.33, suggesting some correlation level.
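The intuition behind that gap: Pearson’s R measures linear co-movement, while Spearman compares only rank orderings, so heavy-tailed, venture-style returns can mute the former while leaving the latter intact. A minimal sketch on synthetic data (my illustration, not Hoogterp’s data set):

```python
# Toy illustration of why Pearson and Spearman can disagree on
# heavy-tailed data: a monotone but wildly nonlinear (power-law-like)
# relationship keeps a perfect Spearman score, while the linear
# Pearson R falls well below 1. Synthetic numbers, not venture data.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    def ranks(vs):
        order = sorted(range(len(vs)), key=vs.__getitem__)
        r = [0.0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    # Spearman is simply Pearson applied to the rank vectors.
    return pearson(ranks(xs), ranks(ys))

xs = list(range(1, 21))          # "public market" style: evenly spread
ys = [math.exp(x) for x in xs]   # "venture" style: same ordering, fat tail

print(f"Pearson R = {pearson(xs, ys):.2f}, Spearman = {spearman(xs, ys):.2f}")
```

Because the ordering of `ys` exactly matches `xs`, Spearman is 1.0, while the outlier-dominated levels drag Pearson far lower — the same shape of disagreement the article reports.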
We all know that public and private markets are not strongly positively correlated. What the data does show is a negative relationship on the activity side:
VC transactions occur less frequently during volatile public market conditions.
Venture deal count negatively correlates with rank returns, suggesting that VC returns are impaired during less active periods.
This suggests (at least to my mind) that venture investors react emotionally to poor public market conditions and invest less. In a power-law-driven market, investing less is a bad thing: it results in lower returns. Two charts show the differences:
Public Market: The article confirms that PME returns follow a quasi-normal distribution, which has implications for risk modeling and portfolio optimization. You can index a market with a normal distribution and avoid the risks of single stocks.
Venture Capital: The returns follow a power-law distribution, which is significant for understanding the non-normal, ‘fat-tailed’ risks associated with VC investments.
This chart comes from aggregating monthly cohorts into buckets, yet it still displays power-law characteristics. Traditionally, you can only index a power-law-distributed market by buying every asset in it; otherwise you will miss the big winners and underperform.
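A toy calculation makes that indexing problem concrete. The numbers below are synthetic (a hypothetical Pareto-style tail, not the article’s data), but they show where a power-law market’s aggregate return actually lives:

```python
# Synthetic sketch of the indexing problem in a power-law market.
# The tail exponent and company count are assumed values, not data
# from the article; the point is how much of the aggregate return
# sits in a handful of outliers.
n = 1000
alpha = 1.2  # heavier tail as alpha approaches 1 (hypothetical)

# Deterministic Pareto-like exit multiples via the inverse CDF,
# evaluated at evenly spaced quantiles: company i's multiple.
multiples = [(n / i) ** (1 / alpha) for i in range(1, n + 1)]

market_avg = sum(multiples) / n
# An investor who misses only the top 1% of companies:
miss_avg = sum(multiples[10:]) / (n - 10)

top10_share = sum(multiples[:10]) / sum(multiples)
print(f"whole market {market_avg:.2f}x | missing top 1% {miss_avg:.2f}x "
      f"| top 1% hold {top10_share:.0%} of all value")
```

Miss the ten best companies out of a thousand and the portfolio average drops sharply, which is why partial sampling of a power-law market underperforms the market as a whole.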
Because of this, Venture Capital has never been an asset class with predictable outcomes. This can be illustrated by looking at the numbers. Let’s take the period 2012-2023 for Series B investments as an example.
Taking 2019 as the endpoint, the average gain across ALL Series B rounds is close to 300% in five years. Of course, this is not a liquid gain; it represents a rise in share price for a share that – because it is private – cannot be realized easily. But if it were possible to buy all of these companies, that would be the valuation growth in the period. About 9.4% of all of those rounds become unicorns.
As far as I know, no venture funds return 300% in five years. So the market as a whole outperforms any fund at Series B. If only you could index the entire market as you can public markets – and create liquidity.
Well, in the age of AI, it turns out you might just be able to do that. Here is the same table but for Series B rounds selected by one of the heuristic models we run at SignalRank. A heuristic model applies insights drawn from historical data to make predictive decisions about future investments. The assumption that private markets cannot be indexed is well-grounded but can now be challenged.
The average 5-year gain for this index approach (between 2012-19) is 577%, and the unicorn percentage is almost 30%. Over 75% of the companies go on to raise a subsequent round.
Because this is only 1,500 companies over 8 years, compared to the 11,500 in the full Series B set, it is possible to both beat the market and invest in the companies.
The full set of 2012-19 Series Bs raised $120 billion at the Series B. Owning 1% of the entire set would require access to the deals, plus $1.2bn in capital.
41% of the $120 billion was raised by 13% of the companies, the 1,500 index selections. This ability to attract an unreasonable proportion of the capital is a strong indication of the quality of the set.
Investing in an index owning 1% of the selected 1,500 would need $500 million over the eight years between 2012 and 2019. A far more feasible allocation.
A randomized Monte Carlo analysis of the 1,500 shows that a portfolio of 10 or more of these companies, chosen randomly, would have a high chance of returning the average results with vastly reduced risk compared to a Series B venture fund.
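The article doesn’t publish its simulation, but the spirit of the exercise can be sketched as follows, with a synthetic power-law pool standing in for the actual 1,500 selections:

```python
# Monte Carlo in the spirit of the article's exercise (illustrative
# power-law pool, not SignalRank's actual 1,500 companies): how often
# does a random fixed-size portfolio capture at least half the pool's
# average multiple?
import random

random.seed(42)
pool = [(1500 / i) ** 0.8 for i in range(1, 1501)]  # synthetic multiples
pool_avg = sum(pool) / len(pool)

def trial_means(portfolio_size, trials=10_000):
    """Mean multiple of `trials` random portfolios of the given size."""
    return [
        sum(random.sample(pool, portfolio_size)) / portfolio_size
        for _ in range(trials)
    ]

for size in (2, 10, 50):
    means = trial_means(size)
    hit = sum(m >= 0.5 * pool_avg for m in means) / len(means)
    print(f"portfolio of {size:>2}: {hit:.0%} of trials reach half the pool average")
```

Larger portfolios land near the pool average far more reliably, which is the diversification point: in a power-law market, risk falls sharply once a portfolio is big enough to have a fair shot at containing an outlier.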
If a selection of these 1,500 companies belonged to an index that was traded on public markets, in addition to value growth, it would also provide liquidity to investors in the index.
So… now you know what I do in my day job ;-).
SignalRank has completed six Series B rounds in the past 60 days. Two with Sequoia Capital leading, two with Lightspeed Venture Partners leading, and one each for Accel, Andreessen Horowitz, Index Ventures, Lux, Prosus, Softbank, Spark, and Union Square Ventures.
One share of SignalRank now embeds equity in all six of these investments.
The power law is buried inside both the market numbers and the SignalRank numbers, but as the 5-year SignalRank gain shows – and the unicorn percentage reinforces – the heuristic-driven model captures the power law and then, through aggregation, creates a more normal distribution.
The goal is to scale the SignalRank Index to become a true listed index-like set of assets representing the top 5% of Series Bs. As the capital allocated to the strategy grows, any single company’s share of the index will shrink, creating a highly de-risked private markets index.
We invest in companies by supporting our seed investor partners in their best companies, and we share the rewards of these investments with our partners.
An index of the best private market companies turns a power-law market into one that is more predictable and performant, and so one that can be owned just as an investor might own the S&P 500 or the Nasdaq. $500 million invested for a 5.77x five-year gain returns a profit of about $2.4 billion. Additional growth beyond five years takes it even further. Of course, this is based on a 13-year backtest, and current investments’ actual results may differ. That said, it justifies this week’s title – Billions.
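For the record, the headline arithmetic (backtested figures from the piece, not realized fund returns):

```python
# Back-of-envelope check of the headline number. 5.77x is the
# article's backtested average 5-year multiple, not a realized return.
invested = 0.5      # $ billions deployed
multiple = 5.77     # average 5-year gain from the backtest
value = invested * multiple
profit = value - invested
print(f"ending value ${value:.3f}B, profit ${profit:.3f}B")
```

On the backtest’s own numbers, that is roughly $2.4 billion of profit on $500 million deployed.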
Essays of the Week
A data-based look at how innovations came about through all of human history
SEP 18, 2023
I, Fax Machine
Before the internet, and before the telephone, there was the fax. Before instantaneous global communication, news took days or even weeks to travel from one city to another. Into this world the fax arrived like a meteor, revolutionising the very essence of how we connect. But among its many versions, the printing telegraph stands out, weaving together the magic of electricity with the tangible feel of the printed word.
Today, as we tap away on our smartphones, sending emojis and tweets across the globe, it’s worth pausing to remember the printing telegraph. Alexander Bain, a Scottish inventor and professor, invented the printing telegraph in 1842. Unlike its predecessors, which required operators to translate the incoming signals manually, Bain’s device automatically printed messages onto paper. No longer did one need to be versed in the rhythmic dots and dashes; the machine did the decoding, presenting the user with legible text.
But the printing telegraph was not born in isolation. It was a child of collective genius, standing on the shoulders of countless inventors over the centuries. It sits atop thousands of years of human creativity. And we can trace its lineage all the way back.
For instance, if we hop into our time machine and set the dials to somewhere around 3,500 BC, we’ll probably land in ancient Mesopotamia, where someone is smashing a wedge-shaped stylus into soft clay. We’ve just witnessed the birth of writing, the first major communication revolution since language. Those odd chicken scratches on tablets might not seem like much, but they’ll allow knowledge to hop over generations. We need that.
Fast forward a few millennia and papyrus scrolls are the hot new text storage tech. Eventually we get paper, vellum, and parchment, before – voila – movable type printing; humanity could now preserve ideas for ages and reproduce them more cheaply than ever before. Also necessary for us to learn better and faster.
Meanwhile, we made progress in fits and spurts in mathematics as well. Babylonian and Egyptian math let societies measure land and build pyramids. Greek geometry formalised proofs. Algebra? Persian scholars like Al-Khwarizmi gave us that. Archimedes, leaping out of his bathtub yelling “Eureka!”, laid the foundations for some aspects of engineering. Calculus? Leibniz and Newton hashed that out. Innovations like the binary numeral system of the Indian scholar Pingala laid the groundwork for understanding signals and transmissions.
No math progress, no physics. No physics – no wires, batteries or telegraphs for Mr. Bain.
Of course, you can’t just use equations to magic up metal parts. Enter metallurgy. Humans have been hammering copper and iron ores for ages to make better tools and weapons. Bronze let ancients craft shapes like helmets and axe blades. We had to find, mine, and refine copper. Steel milling matured during the Industrial Revolution, as did machining tools allowing arbitrary precision for mass manufacturing, allowing for machined components with intricate parts of the sort that go into a telegraph machine. These mechanical advancements were integral to the production and operation of the telegraph system.
Also, we needed energy to power the electrical stuff. First came curious Greeks like Thales and amber rods with static cling. Then lodestones, magnetism, and compasses to guide explorers like Columbus. Fast forward to candle-lit labs where Galileo and Newton teased out the mysteries of motion and gravity.
The development of the voltaic pile, an early battery, by Alessandro Volta in 1800 was a game changer. By enabling a stable flow of current, it provided the crucial power source needed for early telegraphy experiments. Wheatstone and Cooke’s early telegraph prototype in 1837, using static electricity and needles, demonstrated the feasibility of transmitting messages electrically over a distance. This provided proof of concept for telegraphy.
Theories unifying electricity and magnetism, formulated by pioneers like Oersted, Ampere and Faraday, helped us figure out how electromagnets and telegraph relays were possible, stitching together the science underpinning telegraphy. Coulomb’s inverse-square law, the Biot–Savart law, and Faraday’s methods of electromagnetic induction laid more crucial steps toward understanding and harnessing the power of electricity. Telegraphs surfed this wave.
Before electronics, we had flags, smoke, drums for sending messages across distances. First came an optical telegraph using semaphores on towers to pass info visually, designed by Claude Chappe. This signalled the dawn of rapid long-distance communication. Francis Ronalds, not long after, built the first working electric telegraph using electrostatic means, a precursor to the printing telegraph.
But Bain’s telegraph could now use electricity to send messages down cheap wires. The printing telegraph improved greatly on these by encoding text into electric signals carried cheaply and swiftly by wire. In doing so, it enabled the global communications network the world enjoys today.
The printing telegraph integrated millennia of human innovation. From the first clay tablet to electrical telegraph, it took writing, math, materials science, physics, chemistry, engineering and a lot of human ingenuity to make it happen. Behind its wood, metal, and wire lay the gradual accretion of knowledge won through human ingenuity.
What’s more, the same could be said for anything and almost everything we invented, from superphosphates to the machine tools we talked about to computers or cement.
And I’ve long been wanting this, a way to see the lineage of an innovation. To see how the ingenuity of humanity spread through the ages to build the miracles we see.
But no such data actually exists, except in small bits and pieces. So I made one.
I created a list of every innovation that humanity came up with, across all fields, and tried to figure out what the DNA of innovation might look like. You have to look at every single one of the thousands of innovations, and then line up the various innovation areas to see if we can’t figure out what we’re researching.
It might be at too fine-grained a level if we go too deep, which is unhelpful (e.g., analytic philosophy), or at too coarse a level (energy). The difficulty is in finding the right level, one that helps us understand how to think about it but doesn’t constrain us so much that we end up overfitting. But it’s still more art than science… much more.
Published on September 15, 2023
“The notion that power should be limited so that no person or institution can enjoy unaccountable influence is at the very root of our democracy.”
—Tim Wu, Columbia University
Capitalism is the most powerful system devised to elevate the human condition. Its oxygen is innovation, which requires healthy markets. America has a proud legacy of knowing when a corporate organism has morphed into an invasive species suffocating an ecosystem via predatory pricing, bundling, or other actions that control the supply of products and/or services. Historically, we step in — a competitive marketplace takes precedence over an aggregation of individual or corporate power. Antitrust laws pierce the canopy, oxygenating the marketplace and preserving a core attribute of innovation and prosperity: churn.
In the 19th century a series of “trusts” were established, in the belief that a centralization of power and sectors, run by thoughtful men, would be good for the economy. Soon there was recognition that the resultant abuse and income inequality warranted an antitrust movement. When Teddy Roosevelt broke up Standard Oil, it was a signal to the nation that Americans were in charge, not American corporations. The government was the sheriff, protecting the little guy.
History is rhyming. This week in a federal court in Washington, the Department of Justice is attempting a similar Heimlich maneuver on the $180 billion search market.
Bill and Paul’s Excellent Adventure
Bill Gates and Paul Allen founded Microsoft in 1975, in the shadow of industry behemoth IBM. For decades, IBM was something akin to today’s Apple, Google, and Microsoft rolled into one dominant company. So dominant, it was sued by the U.S. government for antitrust violations, which triggered a major change to IBM’s business model: It “unbundled” software and hardware, meaning it stopped giving its software away for free to its hardware customers. This created, for the first time, a competitive market for software. A market that Gates and Allen would enter just six years later, developing software for the emerging category of personal computers.
Over the next decade, MSFT software would power the PC revolution: MS-DOS in 1981, Word in 1983, Windows and Excel in 1985, PowerPoint in 1987. Tellingly, PowerPoint was acquired from a nascent competitor, not developed in-house. Over the next decade, Microsoft became known more for entrenchment than innovation. “Embrace, extend, and extinguish” was the company’s strategy for suffocating would-be competitors. It worked — Microsoft supplanted IBM as the dominant force in computing. By 1998, Windows controlled over 90% of the PC operating system market, and Bill Gates was the wealthiest person in the world.
As with IBM before it, Microsoft’s success was recognized with the business world’s Lifetime Achievement Award: a DOJ antitrust suit. The crux of the government’s claim was similar to that made against IBM a quarter century earlier: Microsoft was abusing its commanding position to limit rivals’ ability to get traction with competing products. The headline product in 1998 was the browser: Netscape represented Microsoft’s first serious competitive threat in a decade; to stop it, the company bundled its Explorer browser for free with Windows and cut deals with PC manufacturers to make Explorer the default browser on computers. The DOJ believed this was anticompetitive, the court agreed, and the company signed a consent decree ensuring PC manufacturers greater flexibility regarding the software they bundled with Windows-powered computers.
The DOJ’s enforcement action oxygenated the marketplace in ways nobody could have foreseen. The same year the department sued Microsoft, the cycle was beginning again. Larry Page and Sergey Brin founded Google in 1998, and over the next decade their company rode a wave of innovation to global dominance. Adwords, the revenue-generating portion of the business, launched in 2000. Then Gmail in 2004, Maps in 2005, Docs in 2006, Android in 2007, and Chrome in 2008. All built on the success of the company’s core products, Google search and the Android operating system — just as Microsoft built its empire on the dominance of its Windows operating system.
Would Google exist today had the DOJ not sued Microsoft? Unlikely. Microsoft tried to compete with Google in search and mobile in the 2000s, but, unable to deploy its bundling and exclusivity strategies, it had to rely on its products — which were inferior.
Google doesn’t dominate computing today to the extent Microsoft did in 1998. Nobody does, as “computing” is a much broader space. But its control of search — the most common entry point to the internet — is a nearly pitch-perfect echo of Microsoft circa 2001. Similarly, a quarter century after its founding, Google has a more than 90% market share, a sclerotic artifact of market power vs. a function of innovation. Its market dominance creates a virtuous cycle of increasing power. An estimated 9 billion Google searches occur every day, vs. 400 million for Bing. The massive delta of data and reach makes for a better product: Click-through rates for ads on Google are 30% greater than on Bing. More usage = more data = more advertising, and so on. Today, Google’s parent Alphabet is worth $1.75 trillion and employs 175,000 people.
However, what was the last innovative Google product? Restructuring the brand’s architecture under Alphabet? Earnings growth has, mostly, been a function of finding new ways to extract profits from its monopoly: Google search results have become a billboard for Google-sponsored results interspersed with content harvested from other sites and links to Google’s own services. In 2020, The Markup found that Google-associated results (ads for or links to the company’s other services) constituted over 60% of the first screen of an average Google search result. And in 1 of 5 searches, the entire first screen is Google results. This is the meat of its business: Search ads generate 57% of the company’s revenue.
Despite turning search results into a carousel of ads and Google services, Google has racked up 90% market share in search queries — 95% on mobile. How? As Microsoft once did, it leverages its control over the most popular mobile operating system (Android) and spends unprecedented sums on deals assuring it is the default search engine on computers and phones — more than $10 billion per year. Google says it’s the leader because it has the best product, but if that’s so … why pay $10 billion a year to be the default? Dominance in search is also self-fulfilling, as it gives the company unrivaled data re what people search for and what results generate clicks. And Google’s ability to harvest additional data from adjacent products, including Mail, makes it increasingly difficult for competitors to get traction.
One difference? Google learned from the sins of the father and has tried to insulate itself from antitrust enforcement through lobbying and PR. Google spent over $10 million on lobbying in 2022 — in the late 1990s, Microsoft’s only presence in D.C. was an office in the suburbs focused on selling software to government agencies. In addition, today’s tech giants recognize CEO “likability” is key. Wojcicki, Pichai, and Sandberg made millions for their management skills, but billions as likability heat shields for their businesses’ abuses.
The New Gilded Age
The DOJ’s current lawsuit, one of several actions the federal government has taken against tech companies on antitrust and other grounds, reflects a much needed renewal of our free market instincts. Yes, government action is a component of a free market, despite what the techno-libertarian crowd would have you believe. Markets are not the product of divine creation coupled with a laissez-faire approach to regulation but a function of human effort that depends on rules and enforcement to work efficiently. We’ve lost our way with respect to this (see above: lobbying) and are paying the price with declining competition and innovation.
And not just in tech. Three companies control 95% of the U.S. beverage market. Four dominate the meat business, and rising meat prices are the largest contributor to food price inflation. Four airlines control over two-thirds of U.S. air travel, though they are substandard — the highest-ranked U.S. airline by quality of service is Delta … in 20th place, behind Air New Zealand. The next is United, in 49th place, trailing Azul Brazilian and Malaysia Airlines. Monopoly has its privileges, however: In 2014 the Economist calculated that U.S. airlines generated $22.40 in profits per passenger, while European airlines, subject to the rigors of a free market, earned just $7.84.
We see similar consolidation in banking, pharmaceuticals, health care, retail drug stores, publishing (where the DOJ recently had a big win, stopping the merger of Penguin Random House with Simon & Schuster), eyeglasses, and beer. Waves of consolidation are washing over nearly every sector.
Antitrust enforcement actions are perceived as punishments or moral judgments, but we should think of them as recognition. If a company is good enough for long enough, it can achieve market dominance and earn its profits from stifling competition vs. competing on products or services. It’s the logical, shareholder-driven thing to do. And when we stop them, the benefits accrue to almost everyone. When the U.S. broke up Standard Oil in 1911, its largest shareholder, John D. Rockefeller, became the wealthiest man in the world: The separated companies, free to compete and innovate in the market, were worth dramatically more than when bundled together. The breakup of AT&T unlocked enormous value in the telecommunications industry, leading to more patents, more profits, and eventually the fertile ground needed for the internet market explosion in the 1990s. Microsoft wasn’t broken up in 2001, but it flourished despite the limitations the DOJ put on it, becoming a more innovative company. The action also fired the starting gun for growth in a sector that’s created enormous stakeholder value.
This month’s trial concerning Google’s search dominance likely won’t lead to the breakup of Alphabet. However, I believe severing YouTube and Google would create significant value for shareholders, employees, and customers, who’d see their rents decline. Soon after the breakup, the Alphabet board would demand a strategy for competing in video, and the newly constituted YouTube board would ask how the company was going to challenge its former parent in text search. Even without a breakup, limitations on Google’s ability to perform infanticide on emerging competitors would be welcome. History suggests we are at the start of another 25-year cycle. Just as the web was driving innovation in 1998 (when Google was founded) and personal computers drove Microsoft’s early success, AI appears to be the emerging volcanic force. We need to ensure that the nascent challengers to Google (and to Meta and Apple and Amazon) have the light, air, and space needed to survive and create trillions in shareholder value and hundreds of thousands of jobs.
Walter Isaacson’s 670-page magnum opus is long on behind-the-scenes moments (plus just plain long), but it comes up short in the other category that matters.
Sept. 16, 2023 6:00 AM PDT
Walter Isaacson is the exotic bird of American letters, a charming and convivial bon vivant and raconteur, the life of many a dinner party, a studious biographer and a generous mentor. He blurbed both of my books, a kindness he’s bestowed on many authors, and he has been nothing but kind and gracious to me over the years.
Unfortunately, these admirable and lovely attributes go a long way to explaining what Isaacson has become with his 670-page groaner on the life and times of Elon Musk: an elegant stenographer.
“Elon Musk,” released this week to countless book reviews, podcast segments and an onstage appearance at New York’s 92nd Street Y (alongside another newly controversial scribe, Michael Lewis), has an as-told-to, cinéma vérité feel. Isaacson interviewed 129 people—he lists them on pages 619 to 622—but one person’s voice reigns supreme: Musk’s. Over and over, Isaacson faithfully records what the Tesla-SpaceX-Neuralink-Boring Company-X chief tells him, sprinkles in the observations of those around him and leaves it at that.
He provides little analysis, next to no investigative research and positively zero condemnation for his subject’s behavior, no matter how odious it may be. It’s as if Isaacson were saying to his readers: “You decide what to think of this guy based on the account of his life he told me. My work is done here.”
To look at it charitably, Isaacson found himself in a no-win situation: If he trashed his subject, who would trust him with their life story in the future? Less charitably, the accomplished author was seduced by Musk’s power, and whether out of fear or fealty, he chose not to question it.
The overall tone Isaacson takes to writing about this great if thin-skinned man makes “Elon Musk” an object lesson in the perils of access journalism. The sheer scale of Isaacson’s opportunity also showcases what he has missed. He was omnipresent in Musk’s life for two years. He was given the rare chance to speak and text at length with his subject, who smoothed the way for Isaacson to talk to employees, friends, family members, ex-wives, girlfriends and even a few adversaries. Indeed, the book is laden with details about Musk’s personal life, from the videogames he likes to play, to the texts and emails he sends friends, to the foods Musk’s brother Kimbal prepares for family feasts. (It has to be asked: Musk’s legions of fanboys may hang on every Musk tweet and frat-boy fart joke, but will they have the patience for 600 pages?)
All too often, though, Isaacson’s life of Musk is short on the details that matter. He devotes all of three pages, for instance, to a description of Musk’s tunneling startup, The Boring Company. The author dutifully notes that although Musk has announced projects to tunnel under numerous cities, “by 2023, none of them had gotten underway.” But if you’re wondering why, you’ll get crickets from Musk and from his biographer. Did Isaacson ask Musk why The Boring Company has been all talk and no tunnels? The author doesn’t say. Boring, indeed.
Similarly, two slim chapters on Musk’s brain implantation startup, Neuralink, reveal little beyond the marketing propaganda Neuralink itself already has published. Nor does Isaacson discuss the controversies around Neuralink, from the treatment of the animals it experiments on to its unwillingness to participate in the peer-reviewed studies that are critical in the burgeoning field of human-machine interface technology. This is despite Isaacson’s access to Musk as well as Shivon Zilis, the mysterious and secretive executive who oversees Neuralink, and who also recently became the mother of twins conceived with Musk’s sperm.
This seems like a good place to discuss what Isaacson’s “Elon Musk” is long on, which is details of the entrepreneur’s bizarre romantic and family life. Isaacson goes deep on Musk’s marriages to Justine Musk and Talulah Riley, as well as his on-again, off-again—and often overlapping—relationships with the likes of singer Claire Boucher (aka Grimes) and actors Amber Heard and Natasha Bassett.
Isaacson reveals that Musk’s paternity of Zilis’ children emanated from her maternal desires and his willingness to be her sperm donor. But readers are left guessing if the two are or were romantically involved, though Grimes apparently wasn’t too pleased when she learned the paternity of Zilis’ kids. The book also narrates, but rarely comments on, Musk’s unusual interactions with his many other children, including his toddler traveling companion, X, and Vivian Jenna Wilson, his transgender teenager, formerly known as Xavier.
Isaacson’s book is best understood as the journalism of passive observation rather than active investigation. This is owing mainly to the limits of Isaacson’s access. In describing the breakdown between Musk and Google co-founder Larry Page over their views on the dangers of artificial intelligence, for example, Isaacson portrays Page’s comments in a heated conversation between the two at a birthday party for Musk in Napa Valley in 2013. Yet according to his chapter notes and list of interview subjects, Isaacson didn’t interview Page, and he doesn’t say if he tried.
A word on the chapter notes, by the way: Isaacson doesn’t use footnotes, so it’s impossible to know which facts and assertions he is ascribing to what sources. Instead, each chapter lists the people interviewed as well as articles or books from which the author pulled information. There’s nothing horribly wrong with this approach, especially as the book isn’t purporting to be a work of scholarship. It just calls into question the author’s precision even as it highlights its reliance on the words of Musk and people close to him.
Isaacson’s fact-checking for the book already has been called into question regarding his account of Musk’s interactions with the Ukrainian government over access to the Starlink internet service. This reminded me of the time in 2011 when I interviewed Isaacson in front of an audience at the Commonwealth Club of California about his biography of Steve Jobs. I noted that the active community of Jobs obsessives was abuzz with assertions of multiple inaccuracies in Isaacson’s book, often involving their firsthand recollections of events that conflicted with the accounts Jobs, a noted shader of the truth, shared with Isaacson. I asked the author if he would correct such errors in later editions; Isaacson shrugged off the suggestion.
I found two tiny inaccuracies in the new book: The venture capital firms of Sand Hill Road are located in Menlo Park, Calif., not nearby Palo Alto; and the wife of media scion James Murdoch is Kathryn, not Elisabeth, who is Murdoch’s sister. But the ease with which these could have been checked, and Isaacson’s reliance on Muskworld for his narrative, made me wonder what other errors lurk in the text.
For all my criticisms of this book, it has plenty to redeem it, particularly for Musk’s legions of fans. There is no denying his accomplishments, and Isaacson reports lavishly on the nitty-gritty of making SpaceX and Tesla the successes they are today.
He provides important and troubling information on Musk’s state of mind and health, offering ample evidence that Musk suffers from periodic bouts of depression as well as chronic neck and gastrointestinal issues. Given Musk’s penchant for partying, working long hours and pulling all-nighters more befitting a college student than a pudgy 50-something-year-old, I was left worrying for his longevity.
Isaacson also sheds light on some of Musk’s worst moments. For example, he says Musk “privately” said it was one of his “dumbest mistakes” to have posted a link to and commented on a right-wing conspiracy-site article that said House Speaker Nancy Pelosi’s husband had been attacked by a male prostitute he had solicited. The book also details precisely how Musk plotted to deny fired Twitter executives severance payments they were owed, though Isaacson characteristically calls the actions “ruthless” rather than despicable.
On the book’s last two pages, Isaacson finally addresses the elephant in the room. “Do the audaciousness and hubris that drive him to attempt epic feats excuse his bad behavior, his callousness, his recklessness?” he asks. Isaacson’s answer is no, though he notes that Shakespeare taught us that “all heroes have flaws, some tragic, some conquered, and those we cast as villains can be complex.”
There have been multiple calls to regulate AI. It is too early to do so.
SEP 17, 2023
[While I was finalizing this post, Bill Gurley gave this great talk on incumbent capture and regulation].
ChatGPT has only been live for ~9 months and GPT-4 for 6 or so months. Yet there have already been strong calls to regulate AI due to misinformation, bias, existential risk, the threat of biological or chemical attack, potential AI-fueled cyberattacks, etc., without any tangible example of these things actually having happened with any real frequency compared to existing versions without AI. Many, like chemical attacks, are truly theoretical, without an ordered logic chain of how they would happen or any explanation as to why existing safeguards or laws are insufficient.
Sometimes, regulation of an industry can be positive for consumers or businesses. For example, FDA regulation of food can protect people from disease outbreaks, chemical manipulation of food, or other issues.
In most cases, regulation can be very negative for an industry and its evolution. It may force an industry to be government-centric versus user-centric, prevent competition and lock in incumbents, move production or economic benefits overseas, or distort the economics and capabilities of an entire industry.
Given the many positive potentials of AI, and the many negatives of regulation, calls for AI regulation are likely premature, and in some cases clearly self-serving for the parties asking for it (it is not surprising the main incumbents say regulation is good for AI, as it will lock in their incumbency). Some notable counterexamples exist where we should likely regulate things related to AI, but these are few and far between (e.g., export of advanced chip technology to foreign adversaries).
In general, we should not push to regulate most aspects of AI now and let the technology advance and mature further for positive uses before revisiting this area.
First, what is at stake? Global health & educational equity + other areas
Too little of the dialogue today focuses on the positive potential of AI (I cover the risks of AI in another post.) AI is an incredibly powerful tool to impact global equity for some of the biggest issues facing humanity. On the healthcare front, models such as Med-PaLM2 from Google now outperform medical experts to the point where training the model using physician experts may make the model worse.
Imagine having a medical expert available via any phone or device anywhere in the world – one to which you can upload images and symptoms, follow up, and get ongoing diagnosis and care. This technology is available today and just needs to be packaged and delivered in a thoughtful way.
Similarly, AI can provide significant educational resources globally today. Even something as simple as auto-translating and dubbing all the educational text, video, or voice content in the world is a straightforward task given today's language and voice models. Adding a chat-like interface that can personalize and pace the learning of the student on the other end is coming shortly based on existing technologies. Significantly increasing global equity of education is a goal we can achieve if we allow ourselves to do so.
Additionally, AI can play a role in other areas, including economic productivity and national defense (covered well here).
AI is likely the single strongest motive force toward global equity in health and education in decades. Regulation is likely to slow down and confound progress toward these and other goals and use cases.
Regulation tends to prevent competition – it favors incumbents and kills startups
In most industries, regulation prevents competition. This famous chart of prices over time reflects how highly regulated industries (healthcare, education, energy) have their costs driven up over time, while less regulated industries (clothing, software, toys) drop costs dramatically over time. (Please note I do not believe these are inflation adjusted – so 60-70% may be “break even” pricing inflation adjusted).
Regulation favors incumbents in two ways. First, it increases the cost of entering a market, in some cases dramatically. The high cost of clinical trials and the extra hurdles put in place to launch a drug are good examples of this. A must-watch video is this one with Paul Janssen, one of the giants of pharma, in which he states that the vast majority of drug development budgets are wasted on tests imposed by regulators, which "has little to do with actual research or actual development". This is a partial explanation for why (outside of Moderna, an accident of COVID) no $40B+ market cap new biopharma company has been launched in almost 40 years (despite healthcare being 20% of US GDP).
Secondly, regulation favors incumbents via something known as "regulatory capture". In regulatory capture, regulators become beholden to a specific industry lobby or group – for example, by receiving jobs in the industry after working as a regulator, or via specific forms of lobbying. Regulators thus have a strong incentive to "play nice" with incumbents and bias regulations their way, in order to get favors later in life.
Regulation often blocks industry progress: Nuclear as an example.
Many of the calls to regulate AI suggest some analog to nuclear. For example, a registry of anyone building models and then a new body to oversee them. Nuclear is a good example of how in some cases regulators will block the entire industry they are supposed to watch over. For example, the Nuclear Regulatory Commission (NRC), established in 1975, has not approved a new nuclear reactor design for decades (indeed, not since the 1970s). This has prevented use of nuclear in the USA, despite actual data showing high safety profiles. France meanwhile has continued to have 70% of its power generated via nuclear, Japan is heading back to 30% with plans to grow to 50%, and the US has been declining down to 18%.
This is despite nuclear being both extremely safe (if one looks at data) and clean from a carbon perspective.
Indeed, most deaths from nuclear in the modern era have been from medical accidents or Russian sub accidents – something the actual regulator of nuclear power in the USA seems oddly unaware of.
Nuclear (and therefore Western energy policy) is ultimately a victim of bad PR, a strong eco-big oil lobby against it, and of regulatory constraints.
Regulation can drive an industry overseas
I am a short-term AI optimist and a long-term AI doomer. In other words, I think the short-term benefits of AI are immense, and most arguments made about tech-level risks of AI are overstated. For anyone who has read history, humans are perfectly capable of creating their own disasters. However, I do think that in the long run (i.e., decades) AI is an existential risk for people. That said, at this point regulating AI will only send it overseas, federating and fragmenting the cutting edge outside US jurisdiction. Just as crypto is increasingly offshoring – with even regulatory-compliant companies like Coinbase considering leaving the US due to government crackdowns – regulating AI now in the USA will do the same.
The genie is out of the bottle and this technology is clearly incredibly powerful and important. Over-regulation in the USA has the potential to drive it elsewhere. This would be bad for not only US interests, but also potentially the moral and ethical frameworks in terms of how the most cutting edge versions of AI may get adopted. The European Union may show us an early form of this.
Do you know the parable of the Blind Men and the Elephant? The lessons of one’s subjective truth being espoused as an absolute one based on their own experiences carries beyond zoology. So when I tell you what I’m seeing in venture financing these days if you disagree with me, it might just be that we’re touching different parts of the elephant.
Like parenting a toddler coming off a sugar high, the last 18 months of startup activity have been marked largely by tears, shrieks, and the occasional throwing of toys. And while I'm quite optimistic about the coming years, we're not yet through the pain for the many existing companies navigating the transition from a hypergrowth market to one that rewards a different style of operating. Haystack's Semil Shah wrote up his POV on what this has all meant for the seed market, and one point in particular caught my eye. Semil asserts,
Seed-stage valuations have generally been left-unchanged, and I could argue even they’ve gone up since the beginning of 2022. Looking back now, it makes sense – VC firms have lots of dry powder, and while they may have slowed down relative to 2021, they’re still making investments. Early-stage is perhaps a more attractive stage to deploy smaller dollars these days – a friend remarked everyone wants to gamble, but no one wants to sit at the whale tables just yet.
I think he and I are touching the same region, but different parts, of the elephant, so here’s where we differ (and all of this is “AI Startups excepted” obviously).
A. Valuations for the Top Decile of Seed Startups Have Fallen Less YoY, While the Second Decile Has Been Hit Harder. I'm defining Top 10% and Second 10% as "degree to which their founders, markets, and milestones pattern-match for the average seed investor." This is obviously imperfect, and to truly segment quality would take 10+ years. But think of this as equivalent to the average salary of Top 10 picks in the NBA draft vs. picks 11-20. I'm saying that picks 11-20 were hit harder by the downturn, whereas before they were often evaluated similarly by the venture community and rewarded commensurately: at the peak of the boom, picks 1-20 were often raising the same (or substantially similar) rounds.
Why are the Top 10% less impacted? The obvious reason is that they look like better risk/reward opportunities, but I think it's also because the better brand-name firms are generally doing the Top 10% deals. They have stable capital bases, care less about the difference of a few hundred thousand dollars in entry price, and so on. So to continue my NBA example: if you basically only had Big Market Teams making the top draft picks, salaries would be higher, because those teams could pay more (no player salary cap in venture 🙂 ).
Reminder: I’m not saying the Top 10% of seed startups are, startup for startup, better than the Next 10% – that gets figured out later.
B. It's Changing Venture Portfolio Models Towards Concentration, Not Just Dry Powder/Gambling. Gotta own enough of your winners. Nothing is more true in venture, but this math got a bit perverted during ZIRP. When $20B outcomes occur, everyone on the cap table eats well. When it's $2B, you'd better have gotten your ownership. It's just math. Funds, especially new ones, who believed otherwise are now preaching greater 'concentration,' and at seed, this creates a floor on valuations. Why? Because you start to care more about basis points than the cost to get those basis points. In order to hit your 5%, 10%, or 15% target, you're willing to increase round size and valuation a bit to make the math work for the founders and any other investors they want to include.
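The ownership math behind this can be sketched in a few lines. This is a hedged illustration: the fund size, ownership stakes, and outcome values below are assumptions chosen for the arithmetic, not figures from the post.

```python
# Illustrative only: fund size, ownership stakes, and outcomes are assumptions.
def fund_return_multiple(fund_size: float, ownership_at_exit: float, outcome: float) -> float:
    """Fraction of the fund returned by a single exit."""
    return (ownership_at_exit * outcome) / fund_size

FUND = 100e6  # a hypothetical $100M seed fund

# During ZIRP: a $20B outcome returns ~2x the fund even at 1% ownership.
print(fund_return_multiple(FUND, 0.01, 20e9))
# Post-ZIRP: a $2B outcome at 1% ownership returns only ~a fifth of the fund...
print(fund_return_multiple(FUND, 0.01, 2e9))
# ...but with concentrated 10% ownership it still returns the fund ~twice over.
print(fund_return_multiple(FUND, 0.10, 2e9))
```

The same exit that "feeds everyone on the cap table" at $20B only works at $2B if the fund bought meaningful basis points, which is why concentration puts a floor under seed valuations.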
Sunday, June 19, 2022
We compared venture capital returns to public markets returns over a period of 20+ years, both through simulation, discussed later, and directly from our data set – with the objective of providing insight as to the correlation between public market returns and venture capital returns.
As we process venture capital investment data, we merge historical, time-matched, Public Market Equivalent ("PME") return values. We use an index, such as the Nasdaq-100 or S&P 500, as a proxy for diversified public market returns over the corresponding time period. If a venture investment does not have a definitive end-point, such as a liquidation or out-of-business signal, we assume the PME is liquidated three years after the last known signal, or at the end of our data timeline, whichever comes first. Venture returns are calculated subject to the same timelines and effects.
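The end-point rule described here can be sketched as follows. This is a hedged reconstruction, assuming a data-timeline cutoff date; the actual implementation and field names in Hatcher's pipeline are not specified in the post.

```python
from datetime import date
from typing import Optional

DATA_END = date(2022, 6, 1)  # assumed end of the data timeline

def pme_liquidation_date(last_signal: date, exit_date: Optional[date]) -> date:
    """End-point of the matched public-market (PME) position.

    A definitive exit (liquidation or out-of-business signal) uses that date;
    otherwise the PME is assumed liquidated three years after the last known
    signal, or at the end of the data timeline, whichever comes first.
    (Ignores the Feb-29 edge case for brevity.)"""
    if exit_date is not None:
        return min(exit_date, DATA_END)
    three_years_on = last_signal.replace(year=last_signal.year + 3)
    return min(three_years_on, DATA_END)

print(pme_liquidation_date(date(2018, 3, 15), None))            # 2021-03-15
print(pme_liquidation_date(date(2021, 1, 10), None))            # capped at 2022-06-01
print(pme_liquidation_date(date(2019, 5, 1), date(2020, 2, 1))) # actual exit wins
```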
The two key aspects are the historical distribution of returns as well as the correlation between the asset types. Let’s discuss.
To evaluate the distributions and correlation of venture returns to market returns for a matched set of investments, we grouped all investments in our data by month and year. We then compare the mean venture returns for each month-year cohort to the mean PME returns calculated to correspond to those investments. For our evaluation, the holding time for venture investments is assumed to be three years after the last financial event, or the actual date of an exit or out-of-business signal. The values and holding time for PME investments are based on the corresponding venture investment date within the cohort with dispersed liquidation dates based on the corresponding venture investments. This reflects the relative behavior of these markets in aggregate.
The distribution of public market returns loosely follows a normal distribution when bucketed by time in this manner. Although not precisely normal, it is far more so than the venture distribution. In this distribution, the chances of returns appearing far to the right (many standard deviations from the mean) become vanishingly small regardless of how much data you add. The histogram below shows the number of monthly aggregate return occurrences vs. roughly equal cash-on-cash return multiple buckets.
The distribution of venture investment return multiples, even when aggregated into monthly cohort buckets as we have done here, is roughly distributed according to a power-law or exponential decay. In this distribution, progressively larger values are consistent, though at ever-decreasing frequencies. The histogram below shows this power curve distribution for the corresponding venture investments. Note that the X-axis is logarithmic to make the distribution apparent. The distribution is steep enough that on a linear scale (not shown), there is a spike for the buckets at the very low end of the X-Axis and dispersed instances of return multiples over an extensive range.
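The qualitative difference between the two distributions can be shown with simulated draws. This is purely illustrative, assuming a Gaussian proxy for public-market multiples and a Pareto tail for venture multiples; the parameters are invented, not fitted to Hatcher's data.

```python
import random

random.seed(0)
N = 100_000

# Public-market proxy: roughly normal return multiples (assumed parameters).
normal_like = [max(0.0, random.gauss(1.5, 0.5)) for _ in range(N)]
# Venture proxy: heavy Pareto tail with alpha ~1.2 (assumed, not fitted).
heavy_tail = [random.paretovariate(1.2) for _ in range(N)]

def top_k_share(xs, k=100):
    """Share of the total return contributed by the top-k draws."""
    return sum(sorted(xs)[-k:]) / sum(xs)

# In the normal-like sample the best 0.1% of draws barely move the total;
# in the heavy-tailed sample that same handful dominates aggregate returns.
print(top_k_share(normal_like), top_k_share(heavy_tail))
```

This is the practical meaning of the power law: no matter how much data you add, a few outliers keep carrying most of the venture return, which never happens under the near-normal public-market distribution.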
We also examined the correlation between the mean venture return multiple by month and the corresponding PME return multiple. With direct correlation, we get very low values (under a 0.02 R-value). This is due to the linear distribution of PME returns being compared to the power curve characteristics apparent in venture returns.
When the Spearman rank correlation is calculated by converting both venture returns and PME returns into a percentile rank value for the period and using rank correlation, we get increased correlation values. This method converts each value to a 0 to 1 rank score, depending on where its returns ranked in its set. In the comparison of venture return ranks to PME return ranks by year-month, we get a correlation of 0.33.
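The rank-conversion step can be sketched directly. The cohort values below are hypothetical; the point is that converting both series to 0-1 percentile ranks recovers a monotone relationship that the raw power-law multiples obscure.

```python
def percentile_ranks(xs):
    """Map each value to a 0-1 rank score within its set (ties ignored for brevity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    for rank, i in enumerate(order):
        out[i] = rank / (len(xs) - 1)
    return out

def pearson(a, b):
    """Plain linear correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Hypothetical month-cohort means: venture multiples rise with the PME but
# explode in the tail, which depresses the direct (linear) correlation.
pme     = [0.9, 1.0, 1.1, 1.2, 1.4, 1.5]
venture = [0.5, 0.7, 1.0, 2.0, 8.0, 40.0]

direct   = pearson(pme, venture)
spearman = pearson(percentile_ranks(pme), percentile_ranks(venture))
print(direct, spearman)  # rank correlation is 1.0 here; the direct value is lower
```

Spearman's rank correlation is exactly this: Pearson correlation computed on ranks instead of raw values, which is why it handles the power-curve shape of venture returns better.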
The chart above shows the return rank for both venture and the PME for each month from 1995 through 2019. Clusters of performance by period are evident for each type of investment, and there are also many periods where return ranks appear unrelated. This chart presents the data in the form used to assess rank correlation, which was the highest correlation we measured. It includes all venture transactions and uses both realized and unrealized outcomes. When the analysis was performed with only realized venture outcomes, the rank correlation was still the highest measure but dropped to 0.21.
There are interactions between these asset types that play a role. Venture transactions – whether initial investments, follow-on investments, or liquidation events – likely occur less frequently during adverse or highly volatile public market conditions, which could mute the measured correlation between the related asset classes. One indicator of this relationship is that venture deal count is negatively correlated with rank returns for the same period, at -0.44. This suggests venture transaction returns are impaired in periods of less activity, likely due to a combination of valuation anomalies, real market conditions, and other factors.
The specific data points and aggregated values in the Hatcher data certainly do not precisely reflect the relevant venture capital history. However, we believe they capture the behavior of the market in material respects. Our isolation of each event into a contextually detailed discrete outcome, as well as part of an investee company’s path, provides for flexible analysis and simulation. We continually improve the data, in quality and depth, adding options for general analysis as well as simulation.
We analyze growth in unicorn market cap by region. We also compare # of Barry’s Bootcamps to # of unicorns for key cities.
SEP 20, 2023
[This post co-authored with Shreyan Jain]
We’ve previously explored the geographic distribution of global unicorn market cap in 2019, 2020, and 2021. In the past, we’ve used this analysis to highlight trends like the concentration of tech startups in specific industry-towns, a slowdown in new unicorn creation in China, and the prevalence of post-Covid remote work policies in the Bay Area.
A lot has happened since early 2021, including all-time highs in public tech markets and multiples, ZIRP (Zero Interest Rate Policy), inflation, interest rate hikes, and a massive public market stock valuation correction, so now is a good time to re-examine what unicorn market cap looks like.
Big Takeaways / TL;DR:
Despite the “you can build a startup anywhere” mantra during COVID, unicorn market cap continued to aggregate in the USA (53% of global market cap). Indeed, 40% of all the unicorn market cap in the world continues to be found in just 3 US cities: SF Bay Area (26%), New York (8%), and LA (6%).
The wide distribution of cities with just 1 unicorn (up from 37 to 75 in 4 years) likely reflects a rise in “ZIRPacorns”, to coin a term, rather than a true decentralization of important startups. ZIRPacorns are unicorns that would probably be worth dramatically less than $1 billion if not for the COVID/ZIRP-era multiple excess.
SF has emerged as the center of AI unicorn market cap with an 81% initial market share of generative AI companies. It is, of course, quite early days and this will likely evolve with time.
NY has grown global unicorn share from 5% to 8% since 2020. Of the top 10 unicorns in NY, 6 are crypto and 3 are fintech, suggesting that NY is a fintech/crypto cluster to date morphing towards a general purpose ecosystem. LA also grew its share during COVID – largely due to a mix of SpaceX and other defense or space companies like Anduril and Relativity. These 3 companies make up 67% of LA market cap.
Paris and London also gained share, while China has continued to lose share over the past few years.
A few cities or regions are highly dependent on single companies for much of their market cap. For instance, Bytedance makes up 58% of Beijing’s total market cap of $387B, SpaceX is 62% of LA’s $222B, and Shein is 64% of Shenzhen’s $157B.
Miami, Austin, and others have many Barry's Bootcamps but fewer unicorns than popular perception might suggest (see chart). We look at the Barry's/unicorn ratio of key cities. The SF Bay Area is dominant, with a unicorn-to-Barry's ratio of 48. This is of course a lagging indicator, and we think Miami may in the long run benefit from talent turnover out of NY, much as LA benefited from the Bay Area's. Alternatively, the Bay Area may become more fit and reduce this ratio.
All raw data was taken from CB Insights and can be found here. Some caveats:
The data for the 2023 batch of unicorns was downloaded on July 20, 2023 and may not include all recent unicorns.
The listed market caps are based on last-round valuations, many of which are inflated 2021 numbers and may not reflect current fair-market value.
Unicorn market caps are inherently backwards-looking and may not adequately capture trends that we’re still in the early stages of, such as the market reset for growth-stage companies or the proliferation of generative AI.
There are undoubtedly errors in market cap, mismapping of a company to a city, etc across multiple years of data. That said, major trends should hold.
The number of unicorns has exploded from 44 in 2012 to 1,215 today – a nearly 30x increase in roughly 10 years. Going back to 2019, the number of unicorns has roughly doubled every 2 years, with new unicorns in any given period outpacing existing unicorns that "graduate" via an IPO, acquisition, or downround. Since June 2019, 6 times as many private companies became unicorns (1,094) as unicorns became publicly traded companies (172).
In both 2020 and 2021, new unicorns accounted for roughly 40% of all unicorns; that rate has shot up to 52% today. This trend becomes even more apparent when you look at decacorns: companies that crossed the $10B mark within the previous years account for 60% of all decacorns today, up from 35% in 2021.
New unicorn creation in the past 3 years spiked in 2021 and early 2022 before falling back to pre-COVID norms in the last year (it’s likely the data here is off by a few months, since there’s a lag between when fundraising rounds close and when they get publicly announced and recorded in datasets like CBI).
In 2020-2021 interest rates dropped, stimulus was pumped into the economy, and both public and private valuations soared. Assuming private market valuations followed public ones, many private companies were overvalued by 3-4X at their peaks. This suggests a threshold of $3 to $4 billion market cap at time of funding in 2020-2021 equates to a “normal” $1 billion company in non-ZIRP times. Indeed, when we compared 2019 (at $1B) and 2021-2023 (at $3B) vintages in terms of new unicorn distribution, we saw nigh-identical patterns.
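The implied arithmetic is simple enough to write down. A hedged sketch, assuming the post's 3-4x overvaluation estimate:

```python
# Assumption from the post: 2020-2021 peak valuations ran ~3-4x "normal" levels.
def zirp_adjusted(peak_valuation: float, overvaluation: float = 3.0) -> float:
    """Translate a ZIRP-era peak valuation into a normal-times equivalent."""
    return peak_valuation / overvaluation

# A $3B valuation at the 2021 peak maps to roughly a $1B company in normal
# times, which is why $3B serves as the threshold for the 2021-2023 vintage.
print(zirp_adjusted(3e9))        # 1000000000.0
# A company that raised at exactly $1B in 2021 may really be a ~$250-333M one.
print(zirp_adjusted(1e9, 4.0))   # 250000000.0
```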
Since COVID there has been a large reset in public market valuations with many technology companies losing 50-90% of their market caps. Private markets adjust slowly, with markdowns, firesales, and shut downs often going unreported. We would anticipate many of the unicorns from the last few years will lose their status as $1B+ companies in the upcoming years.
Video of the Week
AI of the Week
Last week at Saastr 2023, I had the privilege of hosting a panel with Maggie Hott, GTM leader at OpenAI, Sharon Zhou, cofounder & CEO of Lamini, & Jordan Tigani, founder & CEO of MotherDuck talking about the implications of AI for the software industry broadly.
Four themes resonated throughout the session.
First, large language models like GPT-3 are making AI accessible to the masses. These advanced models allow people to interact conversationally with technology. For example, instead of writing complex SQL queries, users can get database results by describing what they want in plain English. The upshot is that AI can now be utilized by domain experts, not just computer scientists, which opens up the markets for previously technical products to upwards of 10x the user population.
Second, education represents a huge opportunity for AI. Universities are eager to incorporate large language models into curricula and instruction. Students have started to use AI to tailor lessons and quizzes for each student’s needs and learning style. The prospects of such infinitely patient and personalized tutors are driving major investment in AI both for education within schools & ultimately learning within the enterprise.
Third, AI-generated images saw breakout success this year. Leveraging models like DALL-E 2 and Midjourney, our panelists noted a significant improvement in content performance when more time is spent on the image than on the text itself.
Finally, there’s debate around who will tune AI software. Will it be third-party SaaS vendors or the customers buying the software? The answer is it’s much more likely to be the subject matter experts within the enterprises who know enough to drive the model performance. But the software vendors will have to offer the tools to do it.
It's hard to believe this wave is only about 9 months old. From the conversation we shared at the conference, the ripple effects of this technology will be felt for a very long time. We are still figuring out exactly the right ways to deploy it and where it will have maximum leverage.
In fact, during the quick fire round, we discussed how a CEO should navigate this and it’s hard. The reality is the industry is changing so fast that it’s difficult to project the implications of AI even a few quarters out on any individual company or industry.
Thank you to Maggie, Sharon, & Jordan for joining me.
Photo Credit: Christiano Boria
By Jon Victor
Sept. 18, 2023 7:00 AM PDT
As fall approaches, Google and OpenAI are locked in a good ol' fashioned software race, aiming to launch the next generation of large-language models: multimodal. These models can work with images and text alike, producing code for a website just by seeing a sketch of what a user wants the site to look like, for instance, or spitting out a text analysis of visual charts so you don't have to ask your engineer friend what they mean.
Google’s getting close. It has shared its upcoming Gemini multimodal LLM with a small group of outside companies (as I scooped last week), but OpenAI wants to beat Google to the punch. The Microsoft-backed startup is racing to integrate GPT-4, its most advanced LLM, with multimodal features akin to what Gemini will offer, according to a person with knowledge of the situation. OpenAI previewed those features when it launched GPT-4 in March but didn’t make them available except to one company, Be My Eyes, that created technology for people who were blind or had low vision. Six months later, the company is preparing to roll out the features, known as GPT-Vision, more broadly.
What took OpenAI so long? Mostly concerns about how the new vision features could be used by bad actors, such as impersonating humans by solving captchas automatically or perhaps tracking people through facial recognition. But OpenAI’s engineers seem close to satisfying legal concerns around the new technology. Asked about steps Google is taking to prevent misuse of Gemini, a Google spokesperson pointed to a series of commitments the company made in July to ensure responsible AI development across all its products.
OpenAI might follow up GPT-Vision with an even more powerful multimodal model, codenamed Gobi. Unlike GPT-4, Gobi is being designed as multimodal from the start. It doesn’t sound like OpenAI has started training the model yet, so it’s too soon to know if Gobi could eventually become GPT-5.
The industry’s push into multimodal models might play to Google’s strengths, however, given its cache of proprietary data related to text, images, video and audio—including data from its consumer products like search and YouTube. Already, Gemini appears to generate fewer incorrect answers, known as hallucinations, compared with existing models, said a person who has used an early version.
In any event, this race is AI’s version of iPhone versus Android. We are waiting with bated breath for Gemini’s arrival, which will reveal exactly how big the gap is between Google and OpenAI.
Can you stop chatbots from making stuff up using search?
SEP 19, 2023
Today let’s talk about an advance in Bard, Google’s answer to ChatGPT, and how it addresses one of the most pressing problems with today’s chatbots: their tendency to make things up.
From the day that the chatbots arrived last year, their makers warned us not to trust them. The text generated by tools like ChatGPT does not draw on a database of established facts. Instead, chatbots are predictive — making probabilistic guesses about which words seem right based on the massive corpus of text that their underlying large language models were trained on.
As a result, chatbots are often “confidently wrong,” to use the industry’s term. And this can fool even highly educated people, as we saw this year with the case of the lawyer who submitted citations generated by ChatGPT — not realizing that every single case had been fabricated out of whole cloth.
This state of affairs explains why I find chatbots mostly useless as research assistants. They’ll tell you anything you want, often within seconds, but in most cases without citing their work. As a result, you wind up spending a lot of time researching their answers to see whether they’re true — often defeating the purpose of using them at all.
When it launched earlier this year, Google’s Bard came with a “Google It” button that submitted your query to the company’s search engine. This made it slightly faster to get a second opinion about the chatbot’s output, but still placed the burden for determining what is true and false squarely on you.
Starting today, though, Bard will do a bit more work on your behalf. After the chatbot answers one of your queries, hitting the Google button will “double check” your response. Here’s how the company explained it in a blog post:
When you click on the “G” icon, Bard will read the response and evaluate whether there is content across the web to substantiate it. When a statement can be evaluated, you can click the highlighted phrases and learn more about supporting or contradicting information found by Search.
Double-checking a query will turn many of the sentences within the response green or brown. Green-highlighted responses are linked to cited web pages; hover over one and Bard will show you the source of the information. Brown-highlighted responses indicate that Bard doesn’t know where the information came from, highlighting a likely mistake.
When I double-checked Bard’s answer to my question about the history of the band Radiohead, for example, it gave me lots of green-highlighted sentences that squared with my own knowledge. But it also turned this sentence brown: “They have won numerous awards, including six Grammy Awards and nine Brit Awards.” Hovering over the words showed that Google’s search had shown contradictory information; indeed, Radiohead has (criminally) never won a single Brit Award, much less nine of them.
“I’m going to tell you about a tragedy that happened in my life,” Jack Krawczyk, a senior director of product at Google, told me in an interview last week.
Krawczyk had cooked swordfish at home, and the resulting smell seemed to permeate the entire house. He used Bard to look up ways to get rid of it, and then double-checked the results to separate fact from fiction. It turns out that cleaning the kitchen thoroughly would not fix the problem, as the chatbot had originally stated. But placing bowls of baking soda around the house might help.
If you’re wondering why Google doesn’t double-check answers like this before showing them to you, so did I. Krawczyk told me that, given the wide variety of ways people use Bard, double-checking is frequently unnecessary. (You wouldn’t typically ask it to double-check a poem you wrote, or an email it drafted, and so on.)
And while double-checking represents a clear step forward, it does still often require you to pull up all those citations and make sure Bard is interpreting those search results correctly. At least when it comes to research, human beings are still holding the AI’s hand as much as it is holding ours.
Still, it’s a welcome development.
Investors haven’t tired of generative AI startups yet — particularly those with clear enterprise applications.
Case in point, Writer, which is developing what it describes as a “full-stack” generative AI platform for businesses, today announced it raised $100 million in a Series B funding round led by ICONIQ Growth with participation from WndrCo, Balderton Capital, Insight Partners, Aspect Ventures and Writer customers Accenture and Vanguard.
Bringing Writer’s total raised to $126 million and valuing the company at between $500 million and $750 million post-money, the new tranche will fund the development of Writer’s “industry-specific” text-generating AI models, co-founder and CEO May Habib tells TechCrunch.
“Many enterprises are still just scratching the surface on generative AI, mostly building internal ‘CompanyX-GPT’-type of applications,” Habib said via email. “The harder, more impactful use cases require a lot more know-how on retrieval augmented generation, data gathering and cleaning and workflow construction, and they’re realizing that that’s 90% of the work. That’s the part that Writer makes much easier — and all of the data plus the large language model (LLM) can be hosted in an enterprise virtual private cloud, which makes it workable for enterprises.”
Writer’s models can be connected to a company’s “knowledge base,” supplying them with additional context.
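The “knowledge base” connection Habib describes is retrieval-augmented generation in miniature: fetch the most relevant internal documents, then fold them into the model’s prompt so it answers from company facts rather than from memory. A toy sketch of that loop (the keyword scoring, the prompt format, and the sample documents are all illustrative assumptions, not Writer’s actual API):

```python
# Toy retrieval-augmented generation: rank documents by keyword
# overlap with the query, then build a context-stuffed prompt.
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q_words = set(query.lower().split())
    # Rank documents by how many query words they share (a stand-in
    # for real embedding-based similarity search).
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge-base entries, for illustration only.
kb = [
    "Refund requests are handled within 14 days.",
    "The brand voice is friendly and concise.",
    "Offices close on public holidays.",
]
print(build_prompt("How long do refund requests take?", kb))
```

The prompt, not the model, carries the company knowledge, which is why Habib can say the data and the LLM can both live inside an enterprise’s own virtual private cloud.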
Writer competes in a crowded field that includes not only OpenAI and its generative text AI rivals, like Anthropic, AI21 Labs and Mistral AI, but enterprise-focused generative platforms such as Jasper, Cohere and Typeface. All offer AI-powered tools to complete — or generate entirely from scratch — documents ranging from ads to copy for email campaigns, blog posts, flyers and websites.
So what sets Writer apart? Well, for one, it claims to have trained its fine-tunable models on business writing that isn’t copyrighted, a key point at a time when the copyright status of AI-generated works in the U.S. remains somewhat nebulous. Writer also asserts that its models are “smaller” than average and thus more “cost-effective”; transparent in the sense that customers can inspect the models’ code, features and data; and never trained on customer data.
Like several of its competitors, Writer lets customers connect its models to business data sources to improve their ability to research, fact-check and answer questions. In addition, Writer allows companies to enforce regulatory, legal and brand rules across the models on its platform.
The main news yesterday here on LinkedIn were the results of Boston Consulting Group (BCG) testing the impact of Generative AI on its consultants 👩💻
The results were striking: consultants using GPT-4 finished 12.2% more tasks, completed tasks 25.1% faster, and produced 40% higher-quality results.
A few weeks back, McKinsey introduced “Lilli,” its latest in-house generative AI tool. Lilli is a chat application that has undergone training using over 100,000 confidential internal documents and interview transcripts. Even in its beta phase, Lilli has evidently proven to be a success within the organization, demonstrating significant impact and widespread adoption. ➡️
The tool has already dramatically cut down the time spent on research and planning, in some cases from weeks to mere hours, in others from hours to just minutes ➡️ An astounding 66% of employees now use the app multiple times a week ➡️ In the first 2 weeks of August alone, Lilli answered a whopping 50,000 questions
McKinsey concluded that a combination of cost, reliability, and security justified an in-house solution – despite the higher “tech lift”. McKinsey chose to build its own app on top of other LLM technologies:
🔵 Cohere (most likely using their Embeddings and Semantic Search products); and
🔵 OpenAI via the Azure OpenAI Service (most likely for the “enterprise-friendly” secure GPT-4 instance)
McKinsey’s declaration of being “LLM agnostic” is telling. To them, the LLM is a mere means to a desired end: achieving business goals. With Lilli’s in-house build, they have the flexibility to change underlying technologies as deemed fit.
In many cases, startups will have to offer similar flexibility in the underlying technology to win over enterprise customers (alongside meeting requirements on regulation, confidential customer data, security, PII, etc.). That will likely require more engineering effort, plus continuous testing and QA across the multiple evolving LLMs on the market. Something that could be overwhelming for smaller startups with smaller technical teams.
☝ However, the McKinseys and BCGs of the world have one big advantage: decades of data and knowledge gathered from projects, probably stored and quite well-indexed.
Data, documentation and knowledge bases will be the key moat and competitive advantage in this new era where models will be increasingly commoditized. Knowledge bases are as important to AI progress as Foundation Models and LLMs.
To compete, make sure your documentation and knowledge bases are the best on the planet. When it comes to knowledge, you want to be able to store a lot of it, and you want to be able to find the right piece at the right time. For LLMs, this is typically done with a vector database (for now).
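The “right piece of knowledge at the right time” step is nearest-neighbor search over embeddings, which is what a vector database does at scale. Stripped of any real embedding model or database, the core operation is just cosine similarity (the three-dimensional vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec, index):
    # index maps a text chunk to its (precomputed) embedding vector;
    # return the chunk whose vector is most similar to the query.
    return max(index, key=lambda text: cosine(query_vec, index[text]))

# Toy "embeddings"; a real system would get these from an embedding model.
index = {
    "refund policy": [0.9, 0.1, 0.0],
    "brand guidelines": [0.1, 0.8, 0.2],
    "holiday schedule": [0.0, 0.2, 0.9],
}
print(nearest([0.85, 0.15, 0.05], index))  # closest to "refund policy"
```

Vector databases add indexing structures so this lookup stays fast over millions of chunks, but the similarity math is the same.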
News Of the Week
X owner Elon Musk today floated the idea that the social network formerly known as Twitter may no longer be a free site. In a live-streamed conversation with Israeli Prime Minister Benjamin Netanyahu on Monday, Musk said the company was “moving to a small monthly payment” for the use of the X system. He suggested that such a change would be necessary to deal with the problem of bots on the platform.
“It’s the only way I can think of to combat vast armies of bots,” explained Musk. “Because a bot costs a fraction of a penny — call it a tenth of a penny — but even if it has to pay…a few dollars or something, the effective cost of bots is very high,” he said. Plus, every time a bot creator wanted to make another bot, they would need another new payment method.
Musk didn’t say what the new subscription payment would cost, but described it as a “small amount of money.”
During the conversation, Musk also shared new metrics for X, noting the site now has 550 million monthly users, who generate 100 to 200 million posts every day. However, it wasn’t clear if Musk is counting automated accounts — that is, either good bots like news feeds or bad bots like spammers — among those numbers.
This figure also didn’t allow for a direct comparison with Twitter’s user base pre-Musk, which was calculated using a metric Twitter had invented called the “average monetizable daily active user,” or mDAU. This older metric counted the users on Twitter who could be monetized by viewing its ads. In its last public earnings report, for Q1 2022, Twitter had 229 million mDAUs.
Musk didn’t expand on his plan to charge for X or say when such a change would come about. But since Musk took over the platform last year, the company has been pushing its users to subscribe to its paid subscription product, X Premium (previously Twitter Blue). This $8 per month or $84 per year subscription service offers a variety of features like the ability to edit posts, half the ad load, prioritized rankings in search, and more.
The number of startups in the region that have received early-stage funding in 2023 so far tops every other US metro by a wide margin.
Silicon Valley also accounts for 31% of US early-stage funding vs. New York’s 15% in 2023 YTD.
It’s consistently captured around one-quarter of early-stage deals in the US for the past 5 years.
Tales of Silicon Valley’s decline as a venture market are way off.
Get more data on where funding is going in the US, Asia, Europe, and other regions in our State of Venture report.
September 15, 2023
After filing to go public, most companies either go through with an offering within a few months or formally withdraw it.
But some do neither. Rather, having mistimed the market or misjudged their own IPO readiness, some companies continue to lie in wait, prepared to restart the process when conditions improve.
This past week, we saw one such example from Turo, the peer-to-peer car rental platform. The San Francisco-based company — which originally filed to go public in January 2022 — submitted an updated registration that included earnings up to the first half of this year.
To date, 14-year-old Turo has submitted six updated filings, posting regular growth in revenue. For the first half of 2023, revenue totaled $408 million — up from $333 million in the same period a year ago — while net loss widened to $23 million.
Could this latest filing signify that Turo is ready to finally take the IPO plunge? Certainly the window has been opening some, led by Arm’s massive offering this week, as well as planned IPOs from Instacart and Klaviyo.
If so, Turo likely won’t be alone. Should the window stay open, we can expect to see many more unicorns resuscitate IPO plans first initiated toward the tail end of the market boom a couple of years ago.
Three of the top candidates are companies that filed confidentially for a public offering but have not yet done so publicly.
Navan: The corporate travel and expense management software provider formerly known as TripActions reportedly submitted a confidential filing for a public offering roughly a year ago. Buzz around a potential IPO has been mounting for some time for the 8-year-old Palo Alto-based company, which has raised about $1 billion in equity funding and $1.2 billion in debt financing to date. While Navan has yet to submit a public filing, there’s good reason to think this could be imminent as the IPO market heats up.
Reddit: The online discussion forum announced in December 2021 that it had confidentially submitted a draft registration statement to the Securities and Exchange Commission for a planned IPO. Since then, market conditions changed for the worse, while Reddit itself faced criticism following a policy change that prompted platform moderators to strike and shutter forums. We still haven’t seen a public filing, but that could be coming as the IPO window opens.
Cohesity: Data management software provider Cohesity reportedly submitted a confidential filing for an initial public offering in December 2021. Now, it looks like the San Jose-based unicorn may be resuscitating those plans. In August, Cohesity announced several new hires, including a new CFO, Eric Brown, who has prior experience heading finance for public companies including Electronic Arts, McAfee and Polycom.
In addition, there are companies that filed to go public in late 2021 or early 2022 but later withdrew their offerings. These include energy-as-a-service provider Redaptive, business software company Justworks, and workflow automation provider Basis Technologies. While none have formally refiled, it is noteworthy that they were ready to go a couple years ago, before conditions turned. We’ll see if any attempt a restart.
For those considering another shot at an IPO, Thursday’s well-received offering from chip designer Arm Holdings offers an encouraging sign. While valuations of growth technology companies have fallen sharply in the past couple years, there’s still plentiful investor appetite for industry leaders with solid financials.
Sequoia and Andreessen to take a huge hit on their 2021 Instacart investment, after a 75% plunge in valuation
Venture firms Sequoia and Andreessen Horowitz invested $50 million each in Instacart at the tech market’s peak in 2021.
Based on Instacart’s latest IPO prospectus, the value of those investments has sunk by more than three-quarters.
Instacart is trying to crack open an IPO market that’s been closed for venture-backed companies for 21 months, but it won’t be easy.
Sequoia Capital and Andreessen Horowitz, two of Silicon Valley’s most high-profile venture firms, are poised to take a massive hit on their last investment in grocery delivery company Instacart, a deal that closed in 2021 as tech stocks were soaring.
Instacart’s expected IPO price is more than 75% below where Sequoia and Andreessen invested in early 2021. At that time, Instacart sold shares at $125 a piece for a $39 billion valuation. The delivery economy was booming because of Covid shutdowns, and Instacart’s services were seeing record demand.
“This past year ushered in a new normal, changing the way people shop for groceries and goods,” Instacart finance chief Nick Giovanni said in a press release at the time.
In the more than two years since then, Instacart and its investors have learned that growth during that period was anything but normal. At the time, Instacart was closing out a quarter in which revenue surged 200%. In the quarter before that, sales had jumped almost sevenfold. Instacart said it was preparing to increase head count by 50% and bolster investment in advertising.
Sequoia’s Mike Moritz, who led his firm’s investment and recently announced his departure after 38 years, said in the same press release that Instacart was “fulfilling its role as a vital service for consumers, a reliable partner for retailers and an effective platform for advertisers.” Fidelity, T. Rowe Price and D1 Capital Partners also participated in that financing round.
Then the economy reopened, inflation spiked and the Federal Reserve started boosting interest rates, which hovered near zero throughout Covid. Consumers started shopping again in person on tightened budgets, and with capital costs jumping, investors began demanding that cash-burning companies find a path to profitability. Last year, the Nasdaq suffered its steepest drop since the 2008 financial crisis.
It’s also true that venture firms haven’t seen any real returns from IPOs since before the 2022 market collapse. The dearth of exits is particularly stark because VCs invested record amounts of capital in 2020 and 2021, including deals at high valuations in areas such as crypto and fintech.
Even with the changing market conditions, Instacart has continued to grow but at a dramatically slower pace. Revenue increased 15% in the latest quarter from the year prior, and operating expenses have come down over that time, allowing the company to turn profitable.
From a valuation perspective, the bigger issue is that Instacart raised the $39 billion round during a record stretch of tech IPOs, and just a couple of months after fellow sharing-economy companies Airbnb and DoorDash had blockbuster offerings.
There hasn’t been a notable venture-backed tech IPO in the U.S. since late 2021, and Instacart and Klaviyo are the only two that have publicly filed recently. Car-sharing service Turo is also on file, but its initial prospectus came out in early 2022.
Fortunately for Sequoia and Andreessen, they began investing in Instacart when the company was in its early days and the stock price was much lower than it is today. Assuming the stock price holds up, there’s still considerable money to be made for limited partners. Because of the lock-up period, the firms can’t begin selling shares until 180 days after the offering.
Sequoia is the largest investor in Instacart, with a 15% stake on a fully diluted basis. The 400,000 shares it purchased in 2021 are a small sliver of the 51.2 million shares it owns. In total, the firm has invested about $300 million for a stake that would be worth over $1.5 billion at the top of the range.
Sequoia led Instacart’s $8.5 million Series A round in 2013, when the price was just 24 cents a share, according to the prospectus. Andreessen led the next round at $2.98, and Sequoia participated. Both firms were in the Series C at $13.31 a share and the Series D at $18.52.
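Those per-share prices make the early investors’ cushion easy to quantify. At an assumed IPO price of about $30 (roughly 75% below the 2021 round’s $125, per the article; the exact price is not given here), each round’s multiple works out as follows:

```python
# Per-share prices from Instacart's prospectus, as cited in the article.
rounds = {
    "Series A (2013)": 0.24,
    "Series B": 2.98,
    "Series C": 13.31,
    "Series D": 18.52,
    "2021 round": 125.00,
}

ipo_price = 30.00  # assumption: ~75% below the 2021 round's $125/share

for name, price in rounds.items():
    # Multiple of invested capital for shares bought in each round.
    print(f"{name}: {ipo_price / price:.1f}x")
```

At that assumed price, the Series A shares return roughly 125x while the 2021 shares return about 0.24x, which is why the firms can absorb a massive loss on the last round and still deliver gains to limited partners overall.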
Because Andreessen’s total ownership is below 5%, its full stake isn’t disclosed in the prospectus.
BY JOE MULLIN
SEPTEMBER 19, 2023
The U.K. Parliament has passed the Online Safety Bill (OSB), which says it will make the U.K. “the safest place” in the world to be online. In reality, the OSB will lead to a much more censored, locked-down internet for British users. The bill could empower the government to undermine not just the privacy and security of U.K. residents, but internet users worldwide.
A Backdoor That Undermines Encryption
A clause of the bill allows Ofcom, the British telecom regulator, to serve a notice requiring tech companies to scan their users (all of them) for child abuse content. This would affect even messages and files that are end-to-end encrypted to protect user privacy. As enacted, the OSB allows the government to force companies to build technology that can scan regardless of encryption; in other words, build a backdoor.
These types of client-side scanning systems amount to “Bugs in Our Pockets,” and a group of leading computer security experts has reached the same conclusion as EFF: they undermine privacy and security for everyone. That’s why EFF has strongly opposed the OSB for years.
It’s a basic human right to have a private conversation. This right is even more important for the most vulnerable people. If the U.K. uses its new powers to scan people’s data, lawmakers will damage the security people need to protect themselves from harassers, data thieves, authoritarian governments, and others. Paradoxically, U.K. lawmakers have created these new risks in the name of online safety.
The U.K. government has made some recent statements indicating that it actually realizes that getting around end-to-end encryption isn’t compatible with protecting user privacy. But given the text of the law, neither the government’s private statements to tech companies, nor its weak public assurances, are enough to protect the human rights of British people or internet users around the world.
Censorship and Age-Gating
Online platforms will be expected to remove content that the U.K. government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is, in the U.K. as in the U.S., people do not agree about what type of content is harmful for kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions.
The OSB will also lead to harmful age-verification systems. This violates fundamental principles of anonymous and simple access that have existed since the beginning of the internet. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably lead to adults losing their rights to private speech and to anonymous speech, which is sometimes necessary.
In the coming months, we’ll be watching what type of regulations the U.K. government publishes describing how it will use these new powers to regulate the internet. If the regulators claim their right to require the creation of dangerous backdoors in encrypted services, we expect encrypted messaging services to keep their promises and withdraw from the U.K. if that nation’s government compromises their ability to protect other users.
SEP 21, 2023
Last year, Getty rejected AI generated content and announced that it would not accept any submissions created with AI models. The company is reiterating that in an email sent to creators that now specifically calls out Adobe.
In an email sent to Getty contributors and obtained by PetaPixel, the stock photography agency reaffirms its stance against AI-generated content and takes it one step further by noting that the AI features recently built into Adobe’s tools fall under that ban. Adobe Firefly exited beta and became available to all Creative Cloud subscribers last week.
“Getty Images does not accept files created using AI generative models. This includes Adobe’s recently announced Creative Cloud tools, which are now available with its Firefly‑powered generative AI tools built in,” the email reads.
“We’ll update you if our submission policy changes.”
This message has been sent out to a list of creators in what appears to be a warning of sorts and is not a direct response to a particular upload. It is not clear what method Getty is using to identify AI-generated images, but it may be tied to file EXIF data or through an internal algorithm that looks for the telltale signs of AI’s use.
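The article can only speculate about Getty’s detection method, but one real metadata signal does exist: the IPTC “digital source type” value trainedAlgorithmicMedia, which generative tools can embed in a file’s XMP metadata to declare the image AI-generated. A crude, hypothetical check for that marker might look like this (an uploader can simply strip the metadata, so its absence proves nothing):

```python
# Hypothetical sketch: flag a file whose embedded XMP/IPTC metadata
# declares it AI-generated via the IPTC digital source type value.
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    # Scan the raw bytes for the marker rather than parsing the
    # XMP packet properly; good enough for an illustration.
    with open(path, "rb") as f:
        return AI_MARKER in f.read()
```

A production pipeline would parse the XMP packet properly, and increasingly would check C2PA Content Credentials, rather than grep raw bytes, and would still need the “internal algorithm” the article mentions for images with no metadata at all.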
The statement seems to indicate that Getty considers all Adobe Firefly-based tools as against its rules, including Generative Fill and Generative Expand despite the fact that these tools could be used to augment existing images instead of making them entirely from scratch. Getty doesn’t appear to see a distinction and seems to be banning the submission of both.
“These changes do not prevent the submission of 3D renders and do not impact the use of digital editing tools (e.g., Photoshop, Illustrator, etc.) with respect to modifying and creating imagery,” the company said last year. That policy doesn’t appear to be changing.
Getty, a publicly traded company, seems intent on keeping its library stocked — pun intended — with only real photos. Its stance — and therefore the stance of all of its sub-brands such as iStock — against AI stands in contrast to its competitor Shutterstock, which not only allows the submission of AI-generated content but also includes a built-in system on its platform that is powered by Dall-E. The stock image company has even paid online influencers to promote the AI generator, granted legal protection to users of the system, and has paid out over $4 million to photographers from its AI contributor fund in order to compensate them for providing the images used to train the AI.
Startup of the Week
Allocate, a company enabling clients of wealth advisers and family offices to invest in venture funds, raised $10 million in new funding.
Why it matters: The current fundraising market is an opportunity for venture firms to diversify their limited partner base, and for eager smaller investors to get access to top funds.
How it works: In an average year, the company assesses between 500 and 700 fund managers, and selects between 20 and 50 across established large firms and smaller newer ones, according to co-founder and CEO Samir Kaji.
Though he declined to name any firms, SEC filings indicate they include Andreessen Horowitz, Lightspeed Venture Partners, Khosla Ventures, Craft Ventures, Altimeter, and Uncork, among others. Allocate also provides co-investment opportunities alongside some fund managers.
Allocate charges an investment fee, and will begin to charge subscription fees next year as it expands its portfolio management software’s capabilities. So far, customers have invested almost $500 million.
Of note: While wealth advisers and private banks have gravitated toward more blue chip funds, family offices tend to be more interested in smaller firms, says Kaji.
What they’re saying: “iCapital and CAIS allowed these [registered investment adviser] firms to offer these types of opportunities to clients… the one area that hasn’t been as democratized is top tier venture capital,” says Kaji, who previously worked at First Republic Bank and Silicon Valley Bank for two decades.
While institutional investors are currently split on investing in venture, Kaji says that Allocate’s customers are enthusiastic about the asset class and eager to get more access.
Between the lines: An obvious question is why Allocate’s customers wouldn’t simply invest in funds-of-funds in order to access venture firms.
Kaji argues that Allocate lets clients invest into funds that are more tailored to their preferences, avoids the double charging of carry, and avoids the higher minimum investments some funds-of-funds require.
Details: Allocate’s new funding came from Gopher Asset US (an affiliate of Noah Holdings International), Intera Investments, M13 Ventures, and family offices.
Prior to this round, the company had raised $23.5 million in total, per PitchBook.