Hands Off Google

A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I select the articles because they are of interest. The selections often include things I disagree with. The articles are only snippets. Click on the headline to go to the original. I express my point of view in the editorial and the weekly video below.

There is no video or podcast this week. Andrew is in Vilnius and unable to record. Back next week.

Content this week from: @rogerwaters, @JonPorty, @jjvincent, @whoisnnamdi, @CalPERS, @amandaumpz, @hunterwalk, @Norm_Lewis, @spikedonline, @pmarca, @NathanLands, @Noahpinion, @dwarkesh_sp, @chrija, @ChrisMetinko, @KateClarkTweets, @nmasc_, @ingridlunden, @levie


Editorial: Hands Off Google

Essays of the Week

EU suggests breaking up Google’s ad business in preliminary antitrust ruling

EU regulator orders Google to sell part of ad-tech business

The Shadow Price of Venture Capital

Capital Calls are Falling off a Cliff

Calpers Private Equity Annual Program Review

CalPERS Set to Heavily Boost Venture Capital Investments

Why a Seed VC Should Care More About Ownership Than Valuation on Each New Investment, But Understand the Fund Impact of Paying Too Much Too Often

Video of the Week

Pink Floyd’s Roger Waters EXPLOSIVE Interview Sets Record Straight

AI of the Week

The problem with AI apocalypticism

Lore Issue #35: The Case for AI Optimism

Why trying to “shape” AI innovation to protect workers is a bad idea

Contra Marc Andreessen on AI

Savoring the AI Meal Without Being Eaten: A Rejoinder to Marc Andreessen

AI-Generated Video Platform Synthesia Receives Nvidia Backing At Unicorn Valuation

News Of the Week

The End of Megafunds

TCV Raised 50% to 75% Less Than Planned For New Venture Fund

Toyota Seeks to Compete With Tesla Through Powerful New EV Technology

Mercedes tries putting ChatGPT in your car

10-Year Annualized Forecasts for Major Asset Classes

Startup of the Week


Tweet of the Week

Aaron @levie on AI apocalypse


This week the AI of the Week section has a lot of great content. Norman Lewis leads with an essay against apocalyptic views of the dangers of AI and is joined by Nathan Lands and Noah Smith. Meanwhile, Dwarkesh Patel and Christoph Janz take the opposite stance, quoting Albert Wenger of Union Square Ventures in diatribes against Marc Andreessen.

Lewis asks:

… why such intelligent people can only imagine the worst when it comes to this remarkable technology. Why would a superhuman intelligence, if that indeed is what is being created, seek to destroy us? Wouldn’t an AI system aim to surpass, rather than destroy, the achievements of human civilisation?

Indeed. Regular readers know where my views lie. Take risks, but be smart. And trust the innovators to do so.

But for me, the news of the week was an EU regulator’s threat to break Google’s advertising business apart from its other interests, accusing Google of favoring its own interests in determining the inputs and outputs of algorithmic advertising.

After a two-year investigation into the company’s ad-tech business, the regulator concluded that Google had abused its monopoly in online advertising by favouring its own ad exchange, AdX, in the auctions held by its own ad server, DFP, and in the way its ad-buying tools, branded as Google Ads and DV360, place bids on such exchanges.

Google has only one business: advertising. Everything else it does is a loss-making service designed to grow the advertising business. You cannot break Google up into an advertising business and the rest; only one would survive. Then all of the benefits we get from Android, Android TV, YouTube, Google Workspace, DeepMind, Google Cloud, and the rest would disappear for lack of revenue.

The idea itself is inane. And the EU, no stranger to inanity, has yet again demonstrated a staggering lack of understanding of how technology is funded and supported.

The state really has no business telling Google how to run its business. It asserts the right by claiming Google has a monopoly, but fails to make a convincing case. Google itself points out that its share of advertising has been declining due to competition from Amazon, TikTok, Facebook, and others. The prices it receives from “cost per click” programs have been declining for years.

And so this week’s cover is “Hands Off Google,” an AI image generated by Adobe Express from a prompt. I, for one, have no faith that government can be the friend of humanity when it addresses innovation and technology. The EU is even less qualified than others.

Essays of the Week

EU suggests breaking up Google’s ad business in preliminary antitrust ruling

In a formal statement of objections, the European Commission has taken aim at Google’s core advertising business. A hearing will now follow to decide the course of action.

By Jon Porter and James Vincent

Jun 14, 2023, 4:00 AM PDT


The European Commission has made a formal antitrust complaint against Google and its ad business. In a preliminary opinion, the regulator says Google has abused its dominant position in the digital advertising market. It says that forcing Google to sell off parts of its business may be the only remedy if the company is found guilty of the charges.

This would be a significant move targeting the main source of the search giant’s revenue and a rare example of the EU recommending divestiture at this stage in an investigation. The Commission has already fined Google over three prior antitrust cases but has only previously imposed “behavioral” remedies — changes to its business practices.

“Our preliminary concern is that Google may have used its market position to favor its own intermediation services,” the Commission’s executive vice-president in charge of competition policy Margrethe Vestager said in a statement. In its preliminary findings, the Commission says Google has “abused its dominant positions” since at least 2014 to favor its own ad exchange. As the Commission’s press release explains:

The Commission is concerned that Google’s allegedly intentional conducts aimed at giving [Google’s ad exchange] AdX a competitive advantage may have foreclosed rival ad exchanges. This would have reinforced Google’s AdX central role in the adtech supply chain and Google’s ability to charge a high fee for its service.

The statement of objections issued today is an important step in the EU’s investigation, but does not prejudge its outcome. Google will now have the opportunity to reply in writing and request a hearing, after which the Commission will decide whether Google has broken antitrust law in the bloc. If found guilty, the EU’s competition regulator can also fine Google up to 10 percent of its global sales and impose various changes to its business.

In a statement, Google’s VP of global ads, Dan Taylor, said the company disagrees with the Commission’s position, and called digital advertising a “highly competitive sector.”

EU regulator orders Google to sell part of ad-tech business

Competition commission accuses firm of favouring its own services to detriment of rivals

Alex Hern and Lisa O’Carroll

Wed 14 Jun 2023 07.58 EDT

The EU has ordered Google to sell part of its advertising business, as the bloc’s competition regulator steps up its enforcement of big tech’s monopolies.

The competition commission said it had taken issue “with Google favouring its own online display advertising technology services to the detriment of competing providers of advertising technology services, advertisers and online publishers”.

Its view was therefore that “only the mandatory divestment by Google of part of its services would address its competition concerns”.

After a two-year investigation into the company’s ad-tech business, the regulator concluded that Google had abused its monopoly in online advertising by favouring its own ad exchange, AdX, in the auctions held by its own ad server, DFP, and in the way its ad-buying tools, branded as Google Ads and DV360, place bids on such exchanges.

Speaking shortly before the ruling, Margrethe Vestager, the competition commissioner, told reporters of the complexity of the investigation. “This market is a highly technical market. It is very dynamic. The detection of these behaviours can be very challenging.

“Each time a practice was detected … Google simply modified its behaviour so as to make it more difficult to detect but with the same objectives [and] with the same effects.”

She added that Google would be given a chance to respond to the EU’s concerns.

The Shadow Price of Venture Capital

Valuations are 60% too high relative to the volume of venture funding


JUN 14, 2023

Venture valuations have fallen off a cliff, but they are still too high.

Valuations rise and fall with the volume of dollars invested in startups according to a stable ratio: for every one percent change in funding, valuations move two-thirds of a percent.

But since peaking in late 2021, valuations have only fallen 0.4% for every 1% drop in funding. This pricing “error” has accumulated: today’s valuations are 60% higher than you’d expect for the amount of capital invested.

In other words, the “shadow price” of venture capital is a lot lower than what we’re seeing in announced transactions these days. Never before have valuations and funding diverged so meaningfully from their long-run equilibrium, for so long.

What does it all mean?

A 3-for-2 special

Valuations and capital invested in venture have both grown substantially since 2010:

Interestingly, they trend together. As one rises, the other does too. Same in the other direction.

It’s not a perfect relationship, but there’s clearly something tying them together.

In fact, you can predict average valuations from the capital invested each quarter. The below chart plots actual valuations over time against what we’d predict based on capital deployment – a simple regression of valuations on funding:

Capital invested predicts the average valuation of deals done each quarter.

Most of the time, the “errors” of this capital-based prediction are small. Further, these deviations typically correct themselves within a quarter or two.

In this sense, venture valuations are a function of capital flows. More capital drives valuations ever higher, in a fairly predictable fashion. There’s an equilibrium, a balance between the two….(contd)
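The essay’s core claim — a stable two-thirds elasticity of valuations to funding, and a 60% gap between observed and predicted valuations — can be sketched with a simple log-log regression. This is an illustrative toy with synthetic numbers, not the author’s actual data or model:

```python
import numpy as np

# Hypothetical sketch of the essay's elasticity claim: regress
# log(valuation) on log(funding); the slope is the elasticity
# (~2/3 per the essay). An observed valuation above the regression's
# prediction is the pricing "error" the essay describes.

rng = np.random.default_rng(0)
funding = rng.uniform(20, 100, size=40)              # $B invested per quarter (made up)
elasticity = 2 / 3                                   # the essay's stable ratio
valuation = 30 * funding ** elasticity * rng.lognormal(0, 0.05, 40)

# polyfit on logs recovers the elasticity as the slope
slope, intercept = np.polyfit(np.log(funding), np.log(valuation), 1)

predicted = np.exp(intercept + slope * np.log(60))   # predicted valuation at $60B funding
observed = predicted * 1.6                           # observed 60% above prediction
gap = observed / predicted - 1

print(f"estimated elasticity: {slope:.2f}")
print(f"pricing gap: {gap:.0%}")                     # prints "pricing gap: 60%"
```

The point of the log-log form is that a constant elasticity shows up as a constant slope, so a persistent gap between observed and fitted values (rather than noise that mean-reverts in a quarter or two) is exactly the divergence the essay highlights.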

💥 Capital Calls are Falling off a Cliff 💥

Marc Penkala General Partner @ āltitude

From Carta

This is no surprise per se: the latest decline in capital calls goes hand in hand with the decline in startup funding, even though VC funds are just one part of the startup funding environment, alongside business angels, family offices, corporates, and governments.

Capital calls serve two primary purposes. Firstly, they allow GPs to deploy capital efficiently when investment opportunities arise. By making periodic capital calls, GPs can allocate funds strategically and mitigate the risk of idle capital, optimizing IRRs. Secondly, capital calls enable LPs to manage their cash flow by spreading their investment obligations over a specified period, rather than providing the entire committed capital upfront.

The recent steep decline of -45% in capital calls can have significant implications for both LPs and VC funds. For LPs, a decrease in capital calls means a reduced need to provide immediate cash to VC funds. This could give LPs more flexibility in managing their liquidity, allowing them to allocate funds to other investment opportunities or financial obligations. Part of the decline is probably also driven by defaulting LPs unable to pay in their capital calls – I have heard numbers ranging from 5% to 25%, depending on the respective LP structure.

On the other hand, VC funds may face challenges due to the decline in capital calls; it may lead to uncalled commitments – especially after the switching date, once management fees are calculated based on deployed capital instead of committed capital.

A decrease in capital calls is an indicator that GPs may be finding fewer attractive investment opportunities or are simply hesitant to deploy capital – though it might be temporary. It could indicate a slowdown in the startup ecosystem, reduced investor confidence, or other market factors that impact dealflow. The decline could lead to lower fund performance, reduced management fees, and potential difficulties in meeting investment targets.

Calpers Private Equity Annual Program Review

Item06d 01 A (PDF, 734 KB)



CalPERS Set to Heavily Boost Venture Capital Investments

By Amanda Umpierrez

June 14, 2023

The large public pension fund is moving its venture capital allocation from $800 million to $5 billion.

The California Public Employees’ Retirement System, or CalPERS, is mounting a multibillion-dollar push into international venture capital in an effort to remedy what it calls “a lost decade.”

The country’s leading public pension fund now expects to grow its allocation from nearly $800 million to $5 billion, the Financial Times reported, a sharp increase from the roughly 1% of its $55 billion private equity program previously allocated to venture. Over the past year, CalPERS’ private equity portfolio returned -4.7%, while its venture investment performance finished at -24.8%.

In a board meeting presentation set for this month, Anton Orlich, managing investment director at CalPERS, argues that the institutional investor has had an “inconsistent” private equity investment strategy in recent years, attributing the inactivity to a “lost decade” during which the venture capital market saw big wins and outperformance.

Why a Seed VC Should Care More About Ownership Than Valuation on Each New Investment, But Understand the Fund Impact of Paying Too Much Too Often

Posted on June 9, 2023 by hunterwalk

My friends at Weekend Fund recently put out a round-up newsletter of some investor responses to the question “Do Valuations Matter?” It’s all worth reading but I’ll excerpt my thoughts here since it’s a discussion Satya and I have often with new VCs.


Like NEA, Homebrew takes an ownership-driven approach to investing, viewing valuation as an important guardrail in evaluating an investment opportunity. Hunter also breaks down their framework for deciding what to do when achieving their target ownership exceeds their maximum check size, and the “opportunity cost” of doing so in the form of less diversification.

More from Hunter: 

“In our historically concentrated approach to seed stage investing, hitting our ownership target mattered more than valuation *but* valuation was an incredibly important guardrail in evaluating an opportunity, for it has great impact on the company and our portfolio management overall. 

We set a ‘max check size’ for our initial investments which was meant to get us, on average, 10-15% ownership and if held to, would overall guide us to an investment period that provided both time and company diversification for the fund. It also drove our reserves strategy. So in any negotiation, whether we wrote our ‘max check’ to get the target ownership was a factor of round size, company stage, and so forth. But we would rarely walk away from an opportunity based on valuation if it fits within that target ownership and check-size box. 

In situations where targeting the 10-15% ownership would have required a commitment larger than our ‘max check size’ we had to decide whether (a) the opportunity here was worth 1.5 or 2 slots – ie are we going to make one fewer investment out of the fund in order to do this one or (b) would we stick with our check size but take lower ownership as a result or (c) walk away. Of these three, (c) was the most common decision for a variety of reasons that were about being consistent in our strategy and product offering.”
— Hunter Walk (Homebrew) 
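The tradeoff Hunter describes can be made concrete with a little arithmetic: the check required for a target ownership stake is simply ownership times post-money valuation, and the decision hinges on whether that exceeds the fund’s max check. A toy sketch — the 10-15% target and “max check” concept come from the quote, but the function name and all dollar figures are hypothetical:

```python
# Toy model of the ownership-vs-check-size tradeoff from the quote above.
# seed_decision and its numbers are illustrative, not Homebrew's actual process.

def seed_decision(post_money: float, max_check: float,
                  target_ownership: float = 0.125) -> dict:
    """Compare the check needed for target ownership against the fund's max check."""
    required_check = target_ownership * post_money
    if required_check <= max_check:
        return {"check": required_check, "ownership": target_ownership,
                "within_max": True}
    # Over budget: per the quote, the fund must either pay up (using more
    # than one "slot"), keep the check size and accept lower ownership,
    # or walk away -- with walking away the most common outcome.
    return {"check": max_check, "ownership": max_check / post_money,
            "within_max": False}

d = seed_decision(post_money=16e6, max_check=2.5e6)
print(d)   # a $16M post-money needs a $2.0M check for 12.5%, within a $2.5M max
```

Run with a $30M post-money instead and the required check ($3.75M) blows through the max, which is exactly the (a)/(b)/(c) fork in the quote.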

Video of the Week

AI of the Week

The problem with AI apocalypticism

The elites’ fear of artificial intelligence betrays a degraded view of humanity.


4th June 2023

Artificial intelligence (AI) was once a discrete topic earnestly discussed among a narrow band of computer scientists. But no more. Now it’s the subject of near daily public debate and a whole heap of doom-mongering.

Just this week, the Center for AI Safety, a San Francisco-based NGO, published a statement on its webpage asserting that: ‘Mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war.’ This apocalyptic statement has been supported by some of the biggest names working in AI, including Sam Altman, the chief executive of OpenAI (the firm behind ChatGPT), and Demis Hassabis, chief executive of Google DeepMind.

There’s no doubt that AI is a major technological breakthrough that really could have a significant impact on our lives. But the predominant narrative that is emerging around AI goes much further than that. It massively overestimates its current potential and invariably draws absurdly dystopian conclusions. As a result, non-conscious AI assistants, from ChatGPT to Google Bard, are now regarded as potentially capable of sentience and of developing an intelligence far superior to our own. They are supposedly just a short step from developing independent agency, and of exercising their own judgement and will.

Some AI enthusiasts have allowed themselves to indulge in utopian flights of fancy. They claim that AI could be about to cure cancer or solve climate change. But many more, as the Centre for AI Safety statement attests, have started to speculate about its catastrophic potential. They claim that AI will turn on us, its creators, and become a real and mortal threat to the future of human civilisation.

By believing that AI will develop its own sense of agency, these AI experts, alongside assorted pundits and politicians, are indulging in an extreme form of technological determinism. That is, they assume that technological development alone determines the development of society.

Lore Issue #35: The Case for AI Optimism

Nathan Lands


Lore.com Weekly AI Newsletter | Posts about Generative AI for work.

Published Jun 14, 2023

Originally published at Lore.com

Good morning.

Just when we thought AI news would slow down, we had one of the craziest weeks since GPT-4 was released!

Okay, today we’ve got exciting news from OpenAI, Adobe, AMD, and an RPG created with AI in a day.

Let’s get started!


Last week, Marc Andreessen, the legendary entrepreneur and co-founder of Netscape who played a pivotal role in developing the early internet, wrote an epic essay titled Why AI Will Save The World. He wrote it to counter what he calls a “full-blown moral panic about AI.”

If you haven’t read it yet, we suggest you do. It’s a must-read for all AI entrepreneurs, executives, and innovators.

Here are some of the key insights:

AI has the potential to solve some of society’s most pressing challenges including inequality, environmental issues, and healthcare, provided that regulatory capture doesn’t hinder its progress.

AI is not human. There is no current scientific evidence to support the notion that AI will become sentient and pose a threat to humanity.

AI can foster economic growth and job creation by enabling individuals to engage in creative work that was previously impossible.

The geopolitical risk of a nation without democratic values taking the lead in AI development is a more realistic and pressing concern than dystopian sci-fi scenarios.

Companies of all kinds should be building and implementing AI rapidly without regulation.

We generally agree with the points made in the article. Companies must be mindful of how they implement AI; however, this is also a time for boldness and innovation.

Democracy has been a dominant force since the mid-20th century, largely because democratic nations were at the forefront of two major technological revolutions: nuclear technology and the internet. There’s a strong parallel here – it’s highly conceivable that democracy will only continue flourishing if a democratic nation leads in this next major technological revolution, AI.

Why trying to “shape” AI innovation to protect workers is a bad idea

Instead, we should empower workers and create mechanisms for redistribution.



I’ve been to a number of meetings and panels recently where intellectuals from academia, industry, media, and think tanks gather to discuss technology policy and the economics of AI. Chatham House Rules prevent me from saying who said what (and even without those rules, I don’t like to name names), but one perspective I’ve encountered increasingly often is the idea that we should try to “shape” or “steer” the direction of AI innovation in order to make sure it augments workers instead of replacing them. And the economist Daron Acemoglu has been going around advocating very similar things recently:

According to Acemoglu and [his coauthor] Johnson, the absence of new tasks created by technologies designed solely to automate human work will…simply dislocate the human workforce and redirect value from labour to capital. On the other hand, technologies that not only enhance efficiency but also generate new tasks for human workers have a dual advantage of increasing marginal productivity and yielding more positive effects on society as a whole…

[One of Acemoglu and Johnson’s main suggestions is to r]edirect technological change to enhance human capabilities: New technologies, particularly AI, should not focus solely on automating tasks previously done by humans and instead seek to empower and enable workforces to be more productive.

So I thought I’d write a post explaining why I think this is generally a bad idea — and what we should do instead.

In theory, new technologies can have several different effects on human workers. They can:

reduce demand for human workers by replacing their labor,

increase wages by making human workers more productive at their jobs,

create new demand for new kinds of jobs, and

increase overall labor demand through economic growth.

In addition, new technology can affect inequality by favoring low-skilled workers or high-skilled workers.

It’s understandable that we might want to steer or shape the development of AI technology so that it maximizes the benefits for workers, and avoids the “replacement” part. But there is a big problem with the idea — namely that no one, including Daron Acemoglu or any economist, has any idea how to predict which technologies will augment humans and which will simply replace their labor.

Contra Marc Andreessen on AI

“The claim that you will completely control any system you build is obviously false, and a hacker like Marc should know that”


Marc Andreessen published a new essay about why AI will save the world. I had Marc on my podcast a few months ago (YouTube, Audio), and he was, as he usually is, very thoughtful and interesting. But in the case of AI, he fails to engage with the worries about AI misalignment. Instead, he substitutes aphorisms for arguments. He calls safety worriers cultists, questions their motives, and conflates their concerns with those of the woke “trust and safety” people.

I agree with his essay on a lot:

People grossly overstate the risks AI poses via misinformation and inequality.

Regulation is often counterproductive, and naively “regulating AI” is more likely to cause harm than good. 

It would be really bad if China outpaces America in AI.

Technological progress throughout history has dramatically improved our quality of life. If we solve alignment, we can look forward to material and cultural abundance.

But Marc dismisses the concern that we may fail to control models, especially as they reach human level and beyond. And that’s where I disagree.

It’s just code

Marc writes:

My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.

The claim that you will completely control any system you build is obviously false, and a hacker like Marc should know that. The Soviet nuclear scientists who built the Chernobyl Nuclear Power Plant did not want it to melt down, the biologists at the Wuhan Institute of Virology didn’t want to release a deadly pandemic, and Robert Morris didn’t want to take down the entire internet.

The difference this time is that the system in question is an intelligence capable (according to Marc’s own blog post) of advising CEOs and government officials, helping military commanders make “better strategic and tactical decisions”, and solving technical and scientific problems beyond our current grasp. What could go wrong?

I just want to take a step back and ask Marc, or those who agree with him, what do you think happens as artificial neural networks get smarter and smarter? In the blog post, Marc says that these models will soon become loving tutors and coaches, frontier scientists and creative artists, that they will “take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.”
How does it do all this without developing something like a mind?


Why do you think something so smart that it can solve problems beyond the grasp of human civilization will somehow totally be in your control? Why do you think creating a general intelligence just goes well by default?

Saying that AI can’t be dangerous because it’s just math and code is like saying tigers can’t hurt you because they’re just a clump of biochemical reactions. Of course it’s just math! What else could it be but math? Marc complains later that AI worriers don’t have a falsifiable hypothesis, and I’ll address that directly in a second, but what is his falsifiable hypothesis? What would convince him AI can be dangerous? Would some wizard have to create a shape shifting intelligent goo in a cauldron? Because short of magic, any intelligence we build will be made of math and code….

Savoring the AI Meal Without Being Eaten: A Rejoinder to Marc Andreessen

Christoph Janz

Last week, Marc Andreessen published a blog post titled “Why AI Will Save the World”. He begins by enumerating the ways AI will improve all aspects of life. I 100% agree with everything here. I can’t wait until every child has an “infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful” AI tutor, as Marc puts it. Similar transformations in healthcare, science, and various other work and life domains are equally promising, and at Point Nine, we’re incredibly excited to be investing in startups across these fields.

However, what followed in Marc’s post was unexpected, especially coming from someone of his standing and intellectual caliber. He dismisses AI safety risks and categorizes everyone voicing concerns about AI’s potential dangers as either unscientific fearmongers or self-interested opportunists.

Besides Eliezer Yudkowsky, the people Marc (presumably) brushes into this corner include figures like Nick Bostrom, Stuart Russell, Geoffrey Hinton, Max Tegmark, and the late Stephen Hawking. These great minds have all cogently articulated potential scenarios should AI greatly surpass human intelligence. For those unfamiliar with this discourse, it’s known as the AI alignment problem. This term refers to the challenge of ensuring that an AI’s goals align with our own. If a powerful AI is not perfectly aligned with human values and goals, it could lead to disastrous outcomes. A famous example to illustrate this is an AI programmed with a seemingly harmless goal, like “maximize the production of paperclips.” Without proper safeguards, this instruction, if taken to an extreme, could lead the AI to convert all available matter (including humans and the Earth) into paperclips.

If Marc has a compelling counterargument to these concerns, I would genuinely LOVE to hear it. However, his article is riddled with ad hominem arguments. Instead of addressing the issue at hand, he resorts to psychoanalyzing those who voice concerns and attempts to undermine their credibility.

The only thing he says about the issue itself is this:

“In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive.”

If there was a prize for compressing the largest number of questionable assumptions and illogical conclusions into a short sentence, this one would have a high chance of winning.

I liked Albert Wenger’s response on Twitter:

AI-Generated Video Platform Synthesia Receives Nvidia Backing At Unicorn Valuation

June 13, 2023

Chris Metinko

Synthesia is the latest AI startup to raise a big round and hit a $1 billion valuation after a $90 million Series C.

The London-based startup allows companies to easily create instructional videos from text featuring stock or custom AI avatars without the need for actors, cameras or studios.

The new round was led by Accel with investments from NVentures, Nvidia’s venture capital arm, and existing investors Kleiner Perkins, GV, Firstmark Capital, Alex Wang, Olivier Pomel and Amjad Masad.

“While we weren’t actively looking for new investment, Accel and NVIDIA share our vision for transforming traditional video production into a digital workflow that will enable creators to bring their ideas, from training videos to Hollywood films, to life with only a Synthesia account,” said Synthesia co-founder and CEO Victor Riparbelli in a release. “I’m delighted to have them on board as we accelerate our AI research efforts.”

Synthesia says more than 12 million videos have been generated on its platform since its founding in 2017. The startup works with more than 50,000 companies worldwide.

The company has now raised more than $156 million, per Crunchbase.

News Of the Week

The End of Megafunds

By Kate Clark

June 15, 2023 10:45 AM PDT ·

Megafunds may be a relic of another era in venture capital. That’s not a bad thing. 

Insight Partners has reduced the target of its next fund by 25% to $15 billion, the Financial Times first reported this week. Other firms, like Tiger Global Management and TCV, aren’t on track to meet their ambitious initial targets either.

There aren’t enough opportunities to justify more pandemic-era monster funds anyway, and many limited partners have realized those funds must land massive exits—a rarity in the industry right now—to generate satisfactory returns. Plus, the frigid fundraising environment makes raising such funds next to impossible. 

It’s a necessary correction that should help smaller VC firms compete.

During the pandemic-fueled rush into private tech, VC funds grew extremely large. Firms like Coatue Management, which raised $16 billion for private investments between 2018 and 2021, sucked all the air out of the room. These funds made winning deals far more challenging for traditional, smaller venture firms, whose ability to generate consistent returns is better proven, according to investors. 

A coming correction to fund sizes should also finally temper overheated valuations, a relief for the VC industry, particularly the smaller firms that couldn’t pay the high prices. 

And until there are more good deals to get done—outside artificial intelligence—there’s really no reason for firms to raise such big pools of capital. 

That may be one reason Insight’s partners aren’t upset that it’s downsizing. A person close to Insight said the firm still has a whopping $10 billion sitting in its last fund and there just aren’t enough good investment opportunities to justify initial plans to raise $20 billion. It helps that Insight isn’t going through this alone. 

TCV, a 28-year-old crossover fund, has raised less than half of the $5.5 billion fund target it set last year, The Information reported earlier this week. (It’s unclear if the firm has reduced its target size.) And Tiger Global Management has cut its fund target twice, most recently landing on $5 billion, down from around $12 billion.

TCV Raised 50% to 75% Less Than Planned For New Venture Fund

By Natasha Mascarenhas and Kate Clark

June 12, 2023 6:23 PM PDT

TCV, a 28-year-old mainstay of the venture capital industry, has raised 50% to 75% less capital for its next flagship fund for private investments than the $5.5 billion target it set last year, according to new securities filings and a document compiled by one of TCV’s limited partners.

The filing doesn’t clarify whether TCV is raising more capital for the fund or has finished the process, which started last summer. Either way, the smaller-than-planned fundraising by one of the biggest VC firms in the world could indicate lower demand among institutional investors to back private technology funds as shares of many smaller public tech companies continue to struggle. Tiger Global Management and Insight Partners, two other major startup backers, also have indicated they will raise less capital for their next funds than they originally planned.

• TCV, Insight Partners, Tiger Global plan to raise less funding than planned
• Funding struggles could be related to downturn in startup funding, valuations
• TCV recently withdrew plans to launch a special purpose acquisition company

In the new filings, TCV said it raised $1.4 billion for its next flagship fund and raised an additional $1.35 billion across four new funds, bringing its total haul to about $2.75 billion. It isn’t clear whether those additional funds are related to the flagship fund. If they are related, the fund’s current size would be 50% less than the original target.
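The "50% to 75% less" headline follows directly from these figures; a quick sketch using the amounts reported above shows where each percentage comes from, depending on whether the four additional funds count toward the flagship target:

```python
# Figures reported in the filings (in billions of dollars)
flagship_target = 5.5    # target TCV set last year
flagship_raised = 1.4    # raised for the flagship fund
additional_funds = 1.35  # raised across four new funds

# If only the flagship counts toward the target: about 75% less than planned
flagship_shortfall = 1 - flagship_raised / flagship_target
print(f"{flagship_shortfall:.0%}")  # 75%

# If the additional funds are related to the flagship: 50% less than planned
combined = flagship_raised + additional_funds  # 2.75
combined_shortfall = 1 - combined / flagship_target
print(f"{combined_shortfall:.0%}")  # 50%
```

That ambiguity about whether the additional funds are "related" is exactly why the article gives a range rather than a single number.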

A spokesperson for TCV didn’t have a comment.

Toyota Seeks to Compete With Tesla Through Powerful New EV Technology

Watch out, Elon.


In competing with Tesla, the recognized leader in the EV field, Toyota (TM) is shooting high, looking to “change the future” of the automotive industry through powerful new technological innovations.

The Japanese carmaker unveiled a broad plan June 13 that features a redesign of its factories in addition to a detailed explanation of how the company is planning to approach EV batteries. 

The announcement comes just a day prior to Toyota’s annual shareholder meeting. 

Seeking to become a “world leader in battery EV energy consumption,” Toyota broke down plans for its next-generation batteries which, it said, will “achieve a vehicle cruising range of 1,000 km” (621 miles). 

Tesla’s Model Y, in comparison, has a range of around 530 km.

These next-generation batteries will launch in 2026. 

Mercedes tries putting ChatGPT in your car

A three-month beta program kicks off this week.

Will Shanklin | June 15, 2023 12:28 PM

Mercedes-Benz is putting ChatGPT on the road. The automaker is using Microsoft’s Azure OpenAI Service to bring the viral natural-language model to its in-car voice assistant. It will initially be available in a three-month beta program for US customers in select vehicles, but Mercedes says it will consider a broader and more permanent rollout in the future.

ChatGPT integration could put the automaker’s “Hey Mercedes” voice assistant on steroids. Rather than merely answering simple, pre-programmed commands like “Turn up the heat” or “What’s the forecast?”, it can carry natural conversations about virtually any topic, including contextual follow-up questions. (Children of the 1980s can finally live out their Knight Rider fantasies.) In addition, Mercedes says it’s “exploring” ChatGPT plugins to enable tasks like making restaurant reservations or booking movie tickets through natural language.

Although holding a lengthy chat on the road could lead to distracted driving, the fact that it’s voice-only should lessen concerns about reckless driving. Perhaps it could even help by answering questions you’d otherwise be tempted to look up on your phone while behind the wheel.

Mercedes and Microsoft tout Azure’s “enterprise-grade security, privacy and reliability” for data protection. Still, the companies clarify that your conversations will be “stored in the Mercedes-Benz Intelligent Cloud, where it is anonymised and analysed.” In other words, assume people will listen to your recordings for training and data analysis, so be wary about uttering anything that’s private or could identify you personally.

10-Year Annualized Forecasts for Major Asset Classes

By Marcus Lu

While there’s no way to predict the future, quantitative models can give us a general idea of how different asset classes may perform over time.

One example is Vanguard’s Capital Markets Model (VCMM), which has produced a set of 10-year annualized return forecasts for both equity and fixed income markets.

Visualized above, these projections were published on May 17, 2023, and are based on the March 31, 2023 running of the VCMM.

Startup of the Week

France’s Mistral AI blows in with a $113M seed round at a $260M valuation to take on OpenAI

Ingrid Lunden (@ingridlunden) / 9:02 AM PDT • June 13, 2023

Image Credits: John Seaton Callahan / Getty Images

AI is well and truly off to the races: a startup that is only four weeks old has picked up a $113 million round of seed funding to compete against OpenAI in the building, training and application of large language models and generative AI.

Mistral AI, based out of Paris, was co-founded by alums from Google’s DeepMind and Meta. It will focus on open source solutions and enterprise customers, tackling what CEO Arthur Mensch believes is currently the biggest challenge in the field: “to make AI useful.” It plans to release its first models for text-based generative AI in 2024.

Lightspeed Venture Partners is leading this round, with Redpoint, Index Ventures, Xavier Niel, JCDecaux Holding, Rodolphe Saadé and Motier Ventures in France, La Famiglia and Headline in Germany, Exor Ventures in Italy, Sofina in Belgium, and First Minute Capital and LocalGlobe in the UK all also participating.

Tweet of the Week
