Can We Stop A More Brutal World?

A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I selected the articles because they are of interest. The selections often include things I entirely disagree with. But they express common opinions, or they provoke me to think. The articles are only snippets. Click on the headline to go to the original. I express my point of view in the editorial and the weekly video below.

This Week’s Video and Podcast:

Content this week from @kteare, @ajkeen, @noahpinion, @broderick, @kantrowitz, @aagave, @blaiseaguera, @NorvigPeter, @amir, @mikeloukides, @fchollet, @jasonlk, @mslopatto, @mvpeers, @jeffjohnroberts, @ron_miller, @frederici, @DavidSacks, @RayDalio



Essays of the Week

You’re not going to like what comes after Pax Americana

This is what an unmoderated internet looks like

Lina Khan’s FTC Is Totally Outmatched Vs. Amazon

Why So Many Streaming Services Are Struggling

Video of the Week

Graham Allison at the All-In Summit

AI of the Week

Artificial General Intelligence Is Already Here

OpenAI’s Revenue Crossed $1.3 Billion Annualized Rate, CEO Tells Staff

Automated Mentoring with ChatGPT

How I think about LLM prompt engineering

News of the Week

CB Insights, Q3 Deep Dive

Pitchbook: VC Returns Are at a 10+ Year Low

Greylock Closes New $1B Fund To Invest In Earliest-Stage Startups

At The Sam Bankman-Fried Trial, It’s Clear FTX’s Collapse Was No Accident 

How is it still getting worse for Sam Bankman-Fried?

SBF Didn’t Understand Risk-Reward

From FTX to Coinbase ventures, plenty of VCs have made big crypto bets that could actually pan out

X Tests New Live-Stream and Spaces Buttons in the Post Composer

Social media platforms swamped with fake news on the Israel-Hamas war

EU Officials Launch Investigation into X Over the Distribution of Misinformation Around the Israel-Hamas War

Atlassian to acquire former unicorn Loom for $975M

Startup of the Week

Adobe Firefly’s generative AI models can now create vector graphics in Illustrator

X of the Week

David Sacks and Ray Dalio

Another Step Toward International War


This was a week when it was almost impossible to escape politics, particularly world politics. The barbaric, medieval images coming out of Israel on Shabbat Saturday morning, at the start of the Jewish New Year, were both sickening and, at the same time, sadly, not surprising.

This was also a week in which my media consumption included the interview Graham Allison gave at the All-In Summit and a similar pair of conversations with Ray Dalio and Larry Summers. All three focus on the current and future world order. The Allison interview is this week’s Video of the Week.

So, it is time for me to stand back and ask some big questions. It is also time to begin to try to answer them. It all starts with Graham Allison’s book “Destined For War” and Ray Dalio’s essay this week – “Another Step Toward International War.” These are two super-intelligent big thinkers focusing on the coming end of US global supremacy, leading to a possible or even likely global conflict. Then there is Noah Smith’s “You’re not going to like what comes after Pax Americana,” one of this week’s Essays of the Week.

Allison refers to the Thucydides Trap: an apparent tendency toward war when an emerging power threatens to displace an existing great power as a regional or international hegemon.

The weakening of great powers allows opponents to take advantage of the new situation. The fall of the Berlin Wall and the collapse of the Soviet Union are historical examples. But broader global conflict becomes possible when a sole great power declines. The decline of Great Britain from the late 19th century, alongside the rise of Germany and the USA, is an example: it led to the removal of the pound as the world’s reserve currency, two world wars, and the bankruptcy of Great Britain. Much of the period since 1945 has been about the US replacing Britain worldwide.

Dalio sees these trends as inevitable repeating cycles.

“Based on the perspective I have gained from studying history and from my over 50 years of experiences betting on what’s likely to happen, it seems to me that the Israel-Hamas war is another classic, unfortunate step toward a more violent and encompassing international war. In other words, it’s part of a larger war dynamic. Anyone who has studied history and is watching what is going on should be concerned about 1) these conflicts moving from being contained to being all-out brutal wars that continue until the other side is clearly defeated, and 2) these conflicts spreading to involve more countries. In order to gain perspective on how these pre-war stages in the Big Cycle unfold, I suggest that you study other pre-war periods, such as those in the two years prior to World War I and II in both Europe and Asia. What is happening now sure looks a lot like that.”

“…these two hot wars (the Israel-Hamas war and the Russia-Ukraine war) are not just between the parties directly involved in them—these wars are part of the bigger great power conflicts to shape the new world order—and they will have big effects on the countries who are allies and enemies of the four sides in these two seemingly irreconcilable wars.”

In his interview, Allison also paints the world as a USA-China-driven set of conflicts (listen to it; it’s short and powerful).

He and Dalio both believe that a global war is not inevitable:

”To push the point home, I want to make clear that I believe that we are in that brief part of the Big Cycle when the conflicts are heating up and the leading powers still have the ability to choose between crossing that line into brutal war or pulling back from the brink.”

After documenting the wars between states that are beginning to break out, Noah Smith weighs in with this:

”These are just a few signs of an unraveling global order. Pax Americana is in an advanced state of decay, if not already fully dead. A fully multipolar world has emerged, and people are belatedly realizing that multipolarity involves quite a bit of chaos.”

Smith states that the relative peace of the post-war years was due to US supremacy:

”But the simplest and most parsimonious explanation for the Long Peace is that American power kept the peace. If countries sent their armies into other countries, there was always the looming possibility that America and its allies could intervene to stop them — as they did in the Korean War in 1950, the Gulf War of 1991, Bosnia in 1992, Kosovo in 1999, and so on. Soviet power occasionally helped as well, as when the USSR helped India intervene to end the Bangladeshi genocide in 1971. But overall the Soviet Union was a revisionist power that was more likely to start wars than end them, while the U.S. and its allies, being the most powerful bloc, preferred to keep the status quo.”

The decline of the policeman is enabling the rise of mischief makers.

”Like tigers eyeing their prey. The world is starting to revert into a jungle, where the strong prey upon the weak, and where there is a concomitant requirement that every country build up its own strength; if your neighbor is a tiger, you should probably grow some claws of your own. Old scores that had to wait can now be settled. Disputed bits of territory can now be retaken. Natural resources can now be seized. There are many reasons for countries to fight each other, and now one of the biggest reasons not to fight has been removed.”

These three writers all make compelling, if depressing, reading.

My point of view starts with human agency. History is not already written and is not being played out like a script. It is all to play for. The key factor today is the US, the global power with the most compelling reason for peace. Leadership includes managing decline, and US leadership over the next decades will be about managing the transition to a new world order. That can be done in many ways. One of them is to resist change. All of the others involve embracing change and managing it. The relationship with China, India, and the rising world manifested in the BRICS will be central to that.

I do not want a world war. I certainly do not want to fight China. I believe national self-interest and global peace are aligned. I also think, along with Allison, that China’s economy is unstoppable.

But if today’s rhetoric, positing the rest of the world as either enemies or weak allies, survives, then we are talking ourselves into mass slaughter based on inhuman values.

When Hamas slaughtered innocents last Shabbat, it was an act of extreme barbarism. If Allison, Dalio, and Noah Smith are to be believed, this behavior is simply the opening scene of the same thing on a mass scale.

In 2023 we can say no, as most of us have this week. And we can say it to our governments when they scapegoat “others” (immigrants, China) to explain their failings. Pointing the finger at others and wanting to fight them is, at the root, a barbaric thought leading to barbaric actions.

Wanting to police content is part of the same set of thoughts. In a piece announcing the failure of content moderation – This is what an unmoderated internet looks like – Ryan Broderick writes:

“And so I think I’m ready to finally face the facts: Community moderation, in almost every form, should be considered a failed project. Our public digital spaces, as they currently exist, cannot be fixed and the companies that control them cannot, or, more likely, will not ensure their safety or quality at a scale that matters anymore. And the main tactic for putting pressure on these companies — reporters and researchers highlighting bad moderation and trust and safety failures and the occasional worthless congressional hearing playing whack-a-mole with offensive content — has amounted to little more than public policy LARPing. We are right back where we started in 2012, but in a much more online world. And the companies that built that world have abandoned us to go play with AI.”

This was triggered by X’s “failure” to moderate some of the more horrific content and opinions following last Saturday’s events. I do think Ryan is right about the inevitable failure of content moderation. But he regrets it; I see it differently. I want to see even the most disgusting views so that I can know them and rebut them. There really is no such thing as neutral content moderation: it is one side wanting to erase the other.

Twitter has, for me, justified its existence this week. Without it, the wonderful writing of Isaac Saul would not have reached the 16 million people who read it. Seeing other views, many horrific, is a price worth paying for the good stuff. I do not want content moderation. I want open conversation. He ended with this human reaction to events. And, in the face of some inhuman acts, a human reaction is what we all need this week:

“Am I pro-Israel or pro-Palestine? I have no idea.
I’m pro-not-killing-civilians.
I’m pro-not-trapping-millions-of-people-in-open-air-prisons.
I’m pro-not-shooting-grandmas-in-the-back-of-the-head.
I’m pro-not-flattening-apartment-complexes.
I’m pro-not-raping-women-and-taking-hostages.
I’m pro-not-unjustly-imprisoning-people-without-due-process.
I’m pro-freedom and pro-peace
and pro-all the things we never see in this conflict anymore.
Whatever this is, I want none of it.”

Essays of the Week

You’re not going to like what comes after Pax Americana

Welcome to the jungle.


“I may be right and I may be wrong/ But you’re gonna miss me when I’m gone” — Taj Mahal

Yesterday, as you may have heard, Hamas launched a massive surprise attack on Israel, crossing the border from Gaza and seizing or assaulting towns nearby after a huge rocket bombardment, killing hundreds. Scenes of Hamas soldiers taking Israeli captives into Gaza have proliferated across the internet. Israel has responded by declaring a state of war, and the fighting between the two sides promises to be more destructive and vicious than anything in recent memory.

As many have pointed out already, this attack is probably an attempt to disrupt the possibility of an Israel-Saudi peace deal, which the U.S. has been trying to facilitate. Such a deal — which would be a continuation of the “Abraham Accords” process initiated under Trump —would make it more difficult for Hamas to obtain money from Saudi benefactors; it would also mean that every major Sunni Arab power recognizes the state of Israel, meaning that Hamas’ image as anything other than a client of Shiite Iran would be shattered.

If Hamas succeeds in scuttling an Israel-Saudi deal, it will be a blow to U.S. prestige and to U.S. claims to be a stabilizing, peacemaking influence. But even if an Israel-Saudi deal eventually goes through, this attack is a demonstration of America’s decreasing ability to deter conflict throughout the world.

Nor is this the only recent outbreak of interstate conflict. In recent weeks, Azerbaijan has moved to fully reclaim the territory of Nagorno-Karabakh, sending 120,000 ethnic Armenians fleeing for their lives — a massive episode of ethnic cleansing. The main reason for this was Russia’s preoccupation with its Ukraine invasion; Azerbaijan defeated Armenia in a war in 2020 and took formal control of Nagorno-Karabakh, but Russia stepped in and prevented further violence. With Russian power waning, Armenia has tried to rapidly pivot to the U.S., but this was not sufficient to prevent Azerbaijan’s ethnic cleansing.

Meanwhile, Serbia is building up troops on its border with Kosovo, whose independence has been in dispute since the U.S. intervened against Serbia in the 1990s. The U.S. and some of its allies recognize Kosovo as independent from Serbia, but Serbia, Russia, China, and a few other European countries don’t.

These are just a few signs of an unraveling global order. Pax Americana is in an advanced state of decay, if not already fully dead. A fully multipolar world has emerged, and people are belatedly realizing that multipolarity involves quite a bit of chaos.

What was Pax Americana? After the end of the Cold War, deaths from interstate conflicts — countries going to war with each other, imperial conquest, and countries intervening in civil wars — declined dramatically.

Source: OWID

Civil wars without substantial foreign intervention are very common, but except for the occasional monster civil war in China or Russia, they don’t tend to kill many people; it’s when countries send their armies to fight beyond their borders that the big waves of destruction usually happen. And for almost 70 years after the end of World War 2, this happened less and less. Historians call this the Long Peace. The lowest level of interstate conflict came from 1989 through 2011, after the collapse of the USSR, when the U.S. became the world’s sole superpower.

Political scientists and historians have many theories for why the Long Peace happened (and there are even a few who think it was just a statistical illusion). Democratic peace theory says that countries fought less because their people brought their leaders under tighter control. Capitalist peace theory says that the spread of global trade and financial links made war less attractive economically; it’s also possible that rich countries are more materially satisfied and thus less likely to fight. The UN and other international organizations may have also tamped down conflict.

But the simplest and most parsimonious explanation for the Long Peace is that American power kept the peace. If countries sent their armies into other countries, there was always the looming possibility that America and its allies could intervene to stop them — as they did in the Korean War in 1950, the Gulf War of 1991, Bosnia in 1992, Kosovo in 1999, and so on. Soviet power occasionally helped as well, as when the USSR helped India intervene to end the Bangladeshi genocide in 1971. But overall the Soviet Union was a revisionist power that was more likely to start wars than end them, while the U.S. and its allies, being the most powerful bloc, preferred to keep the status quo.

This is what an unmoderated internet looks like


OCT 11, 2023

Content Moderation Is A Failed Project

In February 2022, when Russia invaded Ukraine, I wrote a piece called “Everything will be all the time and everywhere,” where I essentially used social media, but mainly Twitter, to construct a ticking clock of the first hours of the invasion. And while I was able to make some sense of the noise online, I still concluded at the time that, “our feeds aren’t meant for content like this and are breaking.” And a year and a half later, those feeds are completely broken.

As an exercise, I tried to keep track of what I was seeing online this weekend from Israel and Palestine. And it has been, of course, impossible to follow anything. My understanding of what’s going on has not just been muddled by platforms like X, but warped entirely. I know more about adult film star Mia Khalifa’s cancelation for tweeting that Hamas should shoot their videos horizontally, right-wing influencers Ben Shapiro and Andrew Tate arguing with each other about who’s tough enough to go fight in Gaza, and unfathomably racist posts from verified losers than I do about anything material that’s happening on the ground. I’ve seen so much content reported, debunked, and rebunked(?) that I think I’ve reached the limits of my mind’s ability to understand reality. To say nothing of the endless cascade of horrifying violence X is serving up via the autoplaying videos it bricks my phone’s battery with, posted by verified accounts who are actively monetizing them, whether they’re genuine or not. Surrounded by ads for hentai mobile games, of course.

And this dogshit content swirling inside of X is also still guiding what’s being posted everywhere else. Big subreddits and popular Instagram accounts (and legitimate digital publishers) are full of screenshots of the same stuff I’m seeing on X. If Twitter was the cultural engine of the English-speaking internet in the 2010s, it’s now spewing oil into every other part of the internet and there are no mechanisms in place to contain it. As Mashable’s Matt Binder posted today, “Nearly every thing that’s gone viral on Twitter over the past few days has been wrong.”

The main framework for how large social platforms have been moderated for the last decade started getting cobbled together around 2014, after 4chan’s massive leak of non-consensual sexual material, dubbed “The Fappening,” and was really formalized in 2015, with the rise of ISIS, and, in 2016, when Facebook launched a factchecking division and acknowledged that Russia’s disinfo factory, the Internet Research Agency, was using the platform to meddle in foreign elections. One by one, major corporate platforms began to accept that they had a responsibility and a duty to protect users from spam, scams, misinformation and disinformation, harassment, abuse, illegal and malicious material, and extremism. And they, of course, failed to uphold that responsibility time and time again. But these companies did hire a bunch of former Obama staffers and made them non-apologize to reporters every time one of their products caused a genocide. Which is nice.

We also now know that the “moderation” these companies kept pledging to increase via sophisticated AI tools was actually just being outsourced to literal sweatshops in countries like Kenya and South Africa where workers make dollars a day viewing the worst content imaginable until they psychologically can’t take it anymore. An experience that Musk, with his infinite business acumen, is now providing to any X user that accidentally clicks on the app’s For You tab. Or worse, he’s outsourced misinformation to the wannabe Wikipedia editors running his Community Notes feature, which completely broke this weekend. Though, it did manage to fire off this incredible debunk before it got clogged up with Arma 3 gameplay videos.

It’s clear now that these companies were only ever going to clean up their platforms to a point and, as we’re now learning, it was only temporary. Musk recently shut down an internal misinformation tool called Smyte and liquidated X’s election integrity team. Meta’s Threads doesn’t even have one. And YouTube and Meta have largely given up on moderating conspiracy theories. The only institution still in the moderation game, it seems, is the European Union. EU commissioner Thierry Breton served Musk with a letter yesterday giving him 24 hours to comply with a request for information on how Twitter is upholding the EU’s Digital Services Act, which has strict rules for how platforms handle misinfo and extremist material. Musk is currently trying to bait the EU into publicly posting out a list of violations it thinks X has allowed, likely because he knows his supporters will dogpile EU officials. Breton responded by posting his Bluesky username lol.

This is where you say, “So what? It’s the internet. Read a news site if you want a clear understanding of the world.” To which The Atlantic Council’s Emerson T. Brooking replied, “Boo-hoo, Twitter’s dead, whatever — except that X remains the preferred platform for policymakers. And what they believe affects millions.” When Musk bought Twitter a year ago, I naively believed that users, especially irl important ones, would react to the increasing noise on their feeds by simply leaving the platform. And, if my own following tab is an indication, many have. But what has actually happened is much more dangerous. Instead of X dissolving into a digital backwater for divorced guys with NFT debt, it has, instead, continued to remain at the top of the digital funnel while also being 4chan-levels of rotten. It is still being used to process current events in “real time” even though it does not have the tools, nor the leadership necessary to handle that responsibility. The inmates are running the asylum and there is nothing on the horizon to convince me that that will get better.

And so I think I’m ready to finally face the facts: Community moderation, in almost every form, should be considered a failed project. Our public digital spaces, as they currently exist, cannot be fixed and the companies that control them cannot, or, more likely, will not ensure their safety or quality at a scale that matters anymore. And the main tactic for putting pressure on these companies — reporters and researchers highlighting bad moderation and trust and safety failures and the occasional worthless congressional hearing playing whack-a-mole with offensive content — has amounted to little more than public policy LARPing. We are right back where we started in 2012, but in a much more online world. And the companies that built that world have abandoned us to go play with AI.

Lina Khan’s FTC Is Totally Outmatched Vs. Amazon

Thin resources, shaky case law, and a stable of lawyers switching sides set up an impossible battle.


OCT 6, 2023

Bill Kovacic ran the U.S. Federal Trade Commission from 2008 to 2009 and has been watching the agency’s landmark antitrust case against Amazon. Speaking frankly of its chances, he’s not bullish.  

“If Amazon were the only case, the FTC could staff it to match the other side punch for punch  — it’s not the only case,” he told Ranjan Roy and me in a recording of Big Technology Podcast last Friday. “It’s not that the FTC doesn’t have capable people. The real rub is that you don’t have enough experienced people to run cases.”


With limited funding and staff spread thin across multiple tech antitrust pushes, the FTC’s resource mismatch may be the determining factor in its Amazon case. The U.S. government is on track to spend more than $6 trillion this year, but it’s allocated only $430 million to the FTC, which is suing both Amazon and Meta. In just a few hours, those tech giants make what the FTC spends in a year, and they’re shelling it out to ensure their businesses continue unimpeded. 
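The “few hours” claim holds up to rough arithmetic. A back-of-the-envelope sketch: the FTC budget figure is from the article, but the annual revenue figures are my approximations for 2022, not numbers the article provides.

```python
# Rough check of "in just a few hours, those tech giants make what
# the FTC spends in a year." Revenue figures are approximations.
FTC_BUDGET = 430e6       # FTC annual budget, per the article
AMAZON_REVENUE = 514e9   # approx. Amazon 2022 revenue (assumption)
META_REVENUE = 117e9     # approx. Meta 2022 revenue (assumption)

hourly = (AMAZON_REVENUE + META_REVENUE) / (365 * 24)
hours_to_match = FTC_BUDGET / hourly
print(f"Combined hourly revenue: ${hourly / 1e6:.0f}M")
print(f"Hours to equal the FTC's annual budget: {hours_to_match:.1f}")
```

On those assumptions, the two companies together take in roughly $70M an hour, so they pass the FTC’s entire annual budget in about six hours.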

Amazon, most prominently, has spent its riches hiring hallways of FTC lawyers. Former FTC associate general counsel Brian Huseman is now Amazon’s VP of public policy. Former FTC attorney Amy Posner joined Amazon in 2020 after 13 years at the agency and is now senior corporate counsel. Former FTC Bureau of Competition attorney Elisa Kantor Perlman joined Amazon a few months later and has the same title. Former FTC attorney Sean Pugh also left for Amazon.

“To me, it’s just the brain drain element,” one ex-FTC lawyer told me. “These aren’t just random attorneys — they are the cream of the cream of the crop.”

The lawyers remaining at the agency are spread extremely thin, especially with FTC Chair Lina Khan bringing multiple lawsuits at once. Going after Facebook, Amazon, and others in tandem means the FTC’s experienced lawyers must fan out, leaving junior staff to assume prominent roles. 

“They’re running the Meta case, which is another bet your agency case, they are running very complicated and difficult merger cases, which also demand top talent,” said Kovacic. “You come to a point — we have to put up a larger number of people who are very bright, great intellectual gifts, but they’re rookies. They’re relatively inexperienced. That’s a mismatch.” 

Even under ideal circumstances, the FTC’s challenge would be significant, but throw in the legal lift required to win and its case looks even more daunting. The FTC is trying to show that Amazon illegally used its monopoly power to maintain its dominance, but upstart rivals like Chinese shopping apps Shein and Temu are making that case harder to prove. 40% of Amazon users now use Temu, according to data I obtained from Apptopia. And that usage is accelerating. 

Kovacic said modern judges may be willing to consider a case where Amazon has a monopoly in a commerce niche vs. all U.S. commerce, but during a multiyear case, the dynamic could change. “This is a real difficulty for the case,” Kovacic said. “As these other developments take place. The judge starts to wonder, what are we focusing on here? What’s the point of this?”

Without a victory, the FTC could still influence Amazon and its fellow tech giants to alter some practices. Amazon, for instance, made it easier to cancel Prime after the FTC threatened to take action, and eventually did. But still, Kovacic said it’s understandable that the vibe within Amazon is fairly relaxed today. “They can take some comfort from the fact that defendants generally come through this process relatively unscathed,” he said. “With the caution that antitrust trials are awkward and it does open the curtain.” 

Why So Many Streaming Services Are Struggling

Legacy media brands built their streaming offerings solely around their content. They should focus more on the consumer experience.

By Andrew A. Rosen

Rosen is the author of Medium Shift, a column about the post-streaming wars media convergence.

Oct. 12, 2023 9:01 AM PDT


Desperate to stem losses from streaming and pay down billions in debt, media companies including Warner Bros. Discovery are raising prices. That’s in part because they’ve been focused almost exclusively on their content libraries. Instead, they must rethink what consumers need in the streaming era.

That’s the lesson of Netflix and Hulu. Both benefit from a broad mix of licensed and original content, plus innovative user interfaces that help users discover TV shows and movies, in part through personalized recommendations.


Both are growing and profitable. Netflix now reaches 240 million subscribers globally, including 75.6 million in the U.S. and Canada. It reported $4.3 billion in free cash flow in the 12 months ended in June. Hulu reported 48.3 million U.S. subscribers as of July 1. Majority owner Disney does not break out the service’s financial results, but Hulu was reported to be profitable on $10.8 billion in revenues in 2022.

Legacy media streaming services have yet to find a similarly successful formula. Their apps present as off-the-shelf technologies pieced together with simple, similar-looking interfaces and limited personalization.

Subscribers are defecting, and these services are suffering for it. Disney+ has lost subscribers in the U.S. and Canada for two consecutive quarters; it recently projected it will fall tens of millions of subscribers short of its September 2024 target of 215 million to 245 million subscribers. Warner Bros. Discovery’s Max reported losing 2 million subscribers after relaunching HBO Max in May as a new app combining content from HBO and Discovery. Smaller services such as AMC Networks and Starz are losing subscribers in 2023 despite owning popular shows including “The Walking Dead” and “Power,” respectively.

Raising fees will only compound the problem. At a Bank of America conference last month, Warner Bros. Discovery Chief Financial Officer Gunnar Wiedenfels tried to convince investors that price hikes would help: “For a decade, in streaming, an enormously valuable amount of quality content has been given away well below fair market value, and I think that’s in the process of being corrected.” Last week, the company said it would raise the monthly price of Discovery+ by $2, to $8.99. Disney has nearly doubled the price of the ad-free tier for Disney+ over the past 10 months, to $13.99, from $7.99.

Wiedenfels said too many viewers sign up for a streaming service for a low monthly fee, binge content for two or three weeks and then cancel. Streaming is supposed to provide predictable monthly revenue. Following Netflix’s success, legacy media companies assumed demand for streaming would only grow as cord-cutting accelerated. Consumers paid a monthly fee for cable TV; why wouldn’t they pay a monthly fee to stream content? So these companies jumped into streaming over the past decade—and were rewarded for it, for a time.

Now, they are learning that customers may leave and never return. In the pre-cord-cutting era, monthly churn at cable services was often around 1%. By contrast, the monthly churn rate at Warner Bros. Discovery’s Max in June was 7%, according to research firm Antenna, and at Discovery+ more than 8%, compared with 3.5% at Netflix. Wiedenfels believes monthly fees have created perverse consumer incentives. His perverse solution: higher prices.
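Those churn rates compound quickly. A minimal sketch of the arithmetic, assuming the monthly churn rate stays constant (real cohorts rarely behave so neatly):

```python
def retained(monthly_churn: float, months: int = 12) -> float:
    """Fraction of a subscriber cohort still paying after `months`,
    given a constant monthly churn rate."""
    return (1 - monthly_churn) ** months

# One year out, at the rates cited above:
for label, churn in [("cable-era (~1%)", 0.01), ("Netflix (3.5%)", 0.035),
                     ("Max (7%)", 0.07), ("Discovery+ (>8%)", 0.08)]:
    print(f"{label}: {retained(churn):.0%} of a cohort remains")
```

At cable-era churn, roughly 89% of a cohort is still paying after a year; at Max’s 7%, only about 42% is. That compounding gap is what makes streaming revenue far less predictable than the cable revenue it replaced.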

A Tale of Two Moats

The disparity between Netflix and Hulu and the others reflects two different moats in streaming. The first is the storytelling expertise and libraries of studios such as Disney, Warner Bros., Universal Studios, Sony Pictures Entertainment and Lionsgate. Let’s call it “The Storytelling Moat.”

Video of the Week

AI of the Week

Artificial General Intelligence Is Already Here

Today’s most advanced AI models have many flaws, but decades from now, they will be recognized as the first true examples of artificial general intelligence.



Blaise Agüera y Arcas is a vice president and fellow at Google Research, where he leads an organization working on basic research, product development and infrastructure for AI.

Peter Norvig is a computer scientist and Distinguished Education Fellow at the Stanford Institute for Human-Centered AI.

Artificial General Intelligence (AGI) means many different things to different people, but the most important parts of it have already been achieved by the current generation of advanced AI large language models such as ChatGPT, Bard, LLaMA and Claude. These “frontier models” have many flaws: They hallucinate scholarly citations and court cases, perpetuate biases from their training data and make simple arithmetic mistakes. Fixing every flaw (including those often exhibited by humans) would involve building an artificial superintelligence, which is a whole other project.

Nevertheless, today’s frontier models perform competently even on novel tasks they were not trained for, crossing a threshold that previous generations of AI and supervised deep learning systems never managed. Decades from now, they will be recognized as the first true examples of AGI, just as the 1945 ENIAC is now recognized as the first true general-purpose electronic computer.

The ENIAC could be programmed with sequential, looping and conditional instructions, giving it a general-purpose applicability that its predecessors, such as the Differential Analyzer, lacked. Today’s computers far exceed ENIAC’s speed, memory, reliability and ease of use, and in the same way, tomorrow’s frontier AI will improve on today’s.

But the key property of generality? It has already been achieved.

What Is General Intelligence?

Early AI systems exhibited artificial narrow intelligence, concentrating on a single task and sometimes performing it at near or above human level. MYCIN, a program developed by Ted Shortliffe at Stanford in the 1970s, only diagnosed and recommended treatment for bacterial infections. SYSTRAN only did machine translation. IBM’s Deep Blue only played chess.

Later deep neural network models trained with supervised learning such as AlexNet and AlphaGo successfully took on a number of tasks in machine perception and judgment that had long eluded earlier heuristic, rule-based or knowledge-based systems.

Most recently, we have seen frontier models that can perform a wide variety of tasks without being explicitly trained on each one. These models have achieved artificial general intelligence in five important ways:

Topics: Frontier models are trained on hundreds of gigabytes of text from a wide variety of internet sources, covering any topic that has been written about online. Some are also trained on large and varied collections of audio, video and other media.

Tasks: These models can perform a variety of tasks, including answering questions, generating stories, summarizing, transcribing speech, translating language, explaining, making decisions, doing customer support, calling out to other services to take actions, and combining words and images.

Modalities: The most popular models operate on images and text, but some systems also process audio and video, and some are connected to robotic sensors and actuators. By using modality-specific tokenizers or processing raw data streams, frontier models can, in principle, handle any known sensory or motor modality.

Languages: English is over-represented in the training data of most systems, but large models can converse in dozens of languages and translate between them, even for language pairs that have no example translations in the training data. If code is included in the training data, they even support increasingly effective “translation” between natural languages and computer languages (i.e., general programming and reverse engineering).

Instructability: These models are capable of “in-context learning,” where they learn from a prompt rather than from the training data. In “few-shot learning,” a new task is demonstrated with several example input/output pairs, and the system then gives outputs for novel inputs. In “zero-shot learning,” a novel task is described but no examples are given (for instance, “Write a poem about cats in the style of Hemingway” or “’Equiantonyms’ are pairs of words that are opposite of each other and have the same number of letters. What are some ‘equiantonyms’?”).
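Few-shot prompting as described above amounts to assembling demonstration pairs followed by a novel input. A minimal sketch (the task and example pairs here are invented for illustration):

```python
# Build a "few-shot" prompt: show the task with example input/output
# pairs, then pose a novel input for the model to complete.
def build_few_shot_prompt(task, examples, query):
    """Assemble an instruction, demonstration pairs, and the novel input."""
    lines = [task]
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [("hot", "cold"), ("fast", "slow")]
prompt = build_few_shot_prompt("Give the opposite of each word.", examples, "tall")
print(prompt)
```

Dropping the `examples` list and keeping only the instruction turns the same scaffold into a zero-shot prompt.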

“The most important parts of AGI have already been achieved by the current generation of advanced AI large language models.”

“General intelligence” must be thought of in terms of a multidimensional scorecard, not a single yes/no proposition. Nonetheless, there is a meaningful discontinuity between narrow and general intelligence: Narrowly intelligent systems typically perform a single or predetermined set of tasks, for which they are explicitly trained. Even multitask learning yields only narrow intelligence because the models still operate within the confines of tasks envisioned by the engineers. Indeed, much of the hard engineering work involved in developing narrow AI amounts to curating and labeling task-specific datasets.

By contrast, frontier language models can perform competently at pretty much any information task that can be done by humans, can be posed and answered using natural language, and has quantifiable performance.

The ability to do in-context learning is an especially meaningful meta-task for general AI. In-context learning extends the range of tasks from anything observed in the training corpus to anything that can be described, which is a big upgrade. A general AI model can perform tasks the designers never envisioned.

So: Why the reluctance to acknowledge AGI?

OpenAI’s Revenue Crossed $1.3 Billion Annualized Rate, CEO Tells Staff

By Amir Efrati

Oct. 12, 2023 8:09 AM PDT

ChatGPT maker OpenAI is generating revenue at a pace of $1.3 billion a year, CEO Sam Altman told staff this week, according to several people with knowledge of the matter. Altman’s remark implies the company is generating more than $100 million per month, up 30% from this summer, when the Microsoft-backed startup generated revenue at a $1 billion-a-year pace.
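The figures in Altman’s remark are easy to sanity-check (values taken from the article):

```python
# Sanity-checking the reported figures.
annualized = 1.3e9               # $1.3B annualized revenue pace
monthly = annualized / 12
print(round(monthly / 1e6, 1))   # ~108.3, i.e. "more than $100 million per month"

summer_annualized = 1.0e9        # $1B-a-year pace this summer
growth = monthly / (summer_annualized / 12) - 1
print(round(growth * 100))       # 30, i.e. "up 30% from this summer"
```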

The revenue pace, largely from subscriptions to its conversational chatbot, represents remarkable growth since the company launched a paid version of ChatGPT in February. For all of last year, the company’s revenue was just $28 million. Since the release of ChatGPT, OpenAI has become a closely watched barometer of demand for artificial intelligence that can help software developers code faster and help business managers quickly summarize documents or generate blog posts and advertising materials.

• OpenAI’s revenue now running at $1.3 billion annual rate
• In the summer, annualized revenue was $1 billion
• Last year revenue was $28 million

OpenAI’s revenue increase could help boost the company’s private valuation as it organizes a tender offer in which outside investors will be able to buy shares from some employees—the second such sale it has held this year. OpenAI employees may soon try to sell stock at a price that implies the company is worth $80 billion or higher overall, according to the Wall Street Journal.

The company’s rapid growth also underlines the promise of large language models that underpin ChatGPT and others like it.  Although the technology is far from perfect, some LLM developers, including OpenAI’s Andrej Karpathy, have begun referring to it as the next software operating system because of its ability to write and run code, access the internet and retrieve and reference files.

The Information first reported in June that Altman wants to turn ChatGPT into a “supersmart personal assistant for work.” He has spoken to former Apple design chief Jony Ive and SoftBank CEO Masayoshi Son about developing an AI hardware device.

It’s not known how OpenAI’s development costs this year compare to its revenue, which also includes the sale of server capacity to power its AI. (That capacity runs on Microsoft’s cloud servers, and it isn’t clear what percentage of the revenue OpenAI gets to keep from those deals.) Last year, the company booked a $540 million loss as it developed ChatGPT and GPT-4, the most advanced LLM it sells. Zoom, Stripe, Ikea and Volvo all have said they use GPT-4, though some get access to it through OpenAI while others get it through Microsoft, which then splits that revenue with OpenAI.

To pay for OpenAI’s grand ambitions, Altman himself has privately suggested the company may try to raise as much as $100 billion in the coming years, The Information has previously reported. That would be on top of the more than $10 billion Microsoft has already committed to OpenAI to fund its operations and AI development in exchange for access to its technology. Investors expect OpenAI will raise another large round of funding as soon as this year, in addition to the ongoing tender offer for employees.

Automated Mentoring with ChatGPT

AI can help you learn—but it needs to do a better job

By Mike Loukides

October 10, 2023

Ethan and Lilach Mollick’s paper Assigning AI: Seven Approaches for Students with Prompts explores seven ways to use AI in teaching. (While this paper is eminently readable, there is a non-academic version in Ethan Mollick’s Substack.) The article describes seven roles that an AI bot like ChatGPT might play in the education process: Mentor, Tutor, Coach, Student, Teammate, Simulator, and Tool. For each role, it includes a detailed example of a prompt that can be used to implement that role, along with an example of a ChatGPT session using the prompt, risks of using the prompt, guidelines for teachers, instructions for students, and instructions to help teachers build their own prompts.

The Mentor role is particularly important to the work we do at O’Reilly in training people in new technical skills. Programming (like any other skill) isn’t just about learning the syntax and semantics of a programming language; it’s about learning to solve problems effectively. That requires a mentor; Tim O’Reilly has always said that our books should be like “someone wise and experienced looking over your shoulder and making recommendations.” So I decided to give the Mentor prompt a try on some short programs I’ve written. Here’s what I learned: not particularly about programming, but about ChatGPT and automated mentoring. I won’t reproduce the session (it was quite long). And I’ll say this now, and again at the end: what ChatGPT can do right now has limitations, but it will certainly get better, and it will probably get better quickly.

First, Ruby and Prime Numbers

I first tried a Ruby program I wrote about 10 years ago: a simple prime number sieve. Perhaps I’m obsessed with primes, but I chose this program because it’s relatively short, and because I haven’t touched it for years, so I was somewhat unfamiliar with how it worked. I started by pasting in the complete prompt from the article (it is long), answering ChatGPT’s preliminary questions about what I wanted to accomplish and my background, and pasting in the Ruby script.
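The article doesn’t reproduce the Ruby script, but a short prime number sieve of the kind described might look like this (a hypothetical reconstruction, in Python rather than Ruby):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Mark multiples of p, starting at p*p, as composite.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```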

ChatGPT responded with some fairly basic advice about following common Ruby naming conventions and avoiding inline comments (Rubyists used to think that code should be self-documenting. Unfortunately). It also made a point about a puts() method call within the program’s main loop. That’s interesting: the puts() was there for debugging, and I evidently forgot to take it out. It also made a useful point about security: while a prime number sieve raises few security issues, reading command line arguments directly from ARGV rather than using a library for parsing options could leave the program open to attack.

It also gave me a new version of the program with these changes made. Rewriting the program wasn’t appropriate: a mentor should comment and provide advice, but shouldn’t rewrite your work. That should be up to the learner. However, it isn’t a serious problem. Preventing this rewrite is as simple as just adding “Do not rewrite the program” to the prompt.

Second Try: Python and Data in Spreadsheets

My next experiment was with a short Python program that used the Pandas library to analyze survey data stored in an Excel spreadsheet. This program had a few problems, as we’ll see.

ChatGPT’s Python mentoring didn’t differ much from Ruby: it suggested some stylistic changes, such as using snake-case variable names, using f-strings (I don’t know why I didn’t; they’re one of my favorite features), encapsulating more of the program’s logic in functions, and adding some exception checking to catch possible errors in the Excel input file. It also objected to my use of “No Answer” to fill empty cells. (Pandas normally converts empty cells to NaN, “not a number,” and they’re frustratingly hard to deal with.) Useful feedback, though hardly earthshaking. It would be hard to argue against any of this advice, but at the same time, there’s nothing I would consider particularly insightful. If I were a student, I’d soon get frustrated after two or three programs yielded similar responses.

Of course, if my Python really was that good, maybe I only needed a few cursory comments about programming style—but my program wasn’t that good. So I decided to push ChatGPT a little harder. First, I told it that I suspected the program could be simplified by using the dataframe.groupby() function in the Pandas library. (I rarely use groupby(), for no good reason.) ChatGPT agreed, and while it’s nice to have a supercomputer agree with you, this is hardly a radical suggestion. It’s a suggestion I would have expected from a mentor who had used Python and Pandas to work with data. Instead, I had to make the suggestion myself.
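The groupby() simplification in question typically replaces a hand-rolled tallying loop. A minimal sketch (the column names and data are invented; the article doesn’t show the original spreadsheet):

```python
import pandas as pd

# Toy survey data standing in for the Excel spreadsheet in the article.
df = pd.DataFrame({
    "question": ["Q1", "Q1", "Q2", "Q2", "Q2"],
    "answer":   ["Yes", "No", "Yes", "Yes", "No Answer"],
})

# One groupby replaces a manual loop that tallies answers per question.
counts = df.groupby("question")["answer"].value_counts()
print(counts.loc[("Q2", "Yes")])  # 2
```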

How I think about LLM prompt engineering

Prompting as searching through a space of vector programs


OCT 9, 2023

Flashback: Word2Vec’s emergent word arithmetic

In 2013, at Google, Mikolov et al. noticed something remarkable.

They were building a model to embed words in a vector space — a problem that already had a long academic history at the time, starting in the 1980s. Their model used an optimization objective designed to turn correlation relationships between words into distance relationships in the embedding space: a vector was associated to each word in a vocabulary, and the vectors were optimized so that the dot-product (cosine proximity) between vectors representing frequently co-occurring words would be closer to 1, while the dot-product between vectors representing rarely co-occurring words would be closer to 0.

They found that the resulting embedding space did much more than capture semantic similarity. It featured some form of emergent learning — it was capable of performing “word arithmetic”, something that it had not been trained to do. There existed a vector in the space that could be added to any male noun to obtain a point that would land close to its female equivalent. As in: V(king) – V(man) + V(woman) = V(queen). A “gender vector”. Pretty cool! There seemed to be dozens of such magic vectors — a plural vector, a vector to go from wild animal names to their closest pet equivalent, etc.
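The word-arithmetic idea can be sketched with toy vectors (hand-picked 2D values, invented for illustration; real word2vec embeddings are learned and have hundreds of dimensions):

```python
import numpy as np

# Toy 2D embeddings, hand-picked so the "gender" offset is consistent.
V = {
    "king":  np.array([1.0, 1.0]),
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([0.2, 0.0]),
    "queen": np.array([0.2, 1.0]),
    "apple": np.array([0.5, -0.5]),
    "dog":   np.array([-1.0, 0.0]),
}

def cosine(a, b):
    """Cosine proximity: 1.0 for parallel vectors, 0.0 for orthogonal."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# V(king) - V(man) + V(woman) should land near V(queen).
target = V["king"] - V["man"] + V["woman"]
# As is standard for word2vec analogies, exclude the query words themselves.
candidates = [w for w in V if w not in ("king", "man", "woman")]
best = max(candidates, key=lambda w: cosine(V[w], target))
print(best)  # queen
```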

Illustration: a 2D embedding space such that the vector linking “wolf” to “dog” is the same as the vector linking “tiger” to “cat”.

Word2Vec and LLMs: the Hebbian learning analogy

Fast forward ten years — we are now in the age of LLMs. On the surface, modern LLMs couldn’t seem any further from the primitive word2vec model. They generate perfectly fluent language — a feat word2vec was entirely incapable of — and seem knowledgeable about any topic. And yet, they actually have a lot in common with good old word2vec.

Both are about embedding tokens (words or sub-words) in a vector space. Both rely on the same fundamental principle to learn this space: tokens that appear together end up close together in the embedding space. The distance function used to compare tokens is the same in both cases: cosine distance. Even the dimensionality of the embedding space is similar: on the order of 10^3 or 10^4.

You may ask — wait, I was told that LLMs were autoregressive models, trained to predict the next word conditioned on the previous word sequence. How is that related to word2vec’s objective of maximizing the dot product between co-occurring tokens?

In practice, LLMs do seem to encode correlated tokens in close locations, so there must be a connection. The answer is self-attention.

Self-attention is the single most important component in the Transformer architecture. It’s a mechanism for learning a new token embedding space by linearly recombining token embeddings from some prior space, in weighted combinations which give greater importance to tokens that are already “closer” to each other (i.e., that have a higher dot-product). It will tend to pull together the vectors of already-close tokens — resulting over time in a space where token correlation relationships turn into embedding proximity relationships (in terms of cosine distance). Transformers work by learning a series of incrementally refined embedding spaces, each based on recombining elements from the previous one.
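The recombination step described above can be sketched in a few lines of numpy (single head, no learned query/key/value projections, so this is purely illustrative of the mechanism, not a faithful Transformer layer):

```python
import numpy as np

def softmax(x):
    """Numerically stable row-wise softmax."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X):
    """X: (seq_len, d) token vectors -> (seq_len, d) recombined vectors.

    Each output row is a similarity-weighted average of all token
    vectors, so already-close tokens get pulled closer together.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)   # pairwise dot-product similarities
    weights = softmax(scores)       # each row sums to 1
    return weights @ X

X = np.array([[1.0, 0.0],   # one token
              [0.9, 0.1],   # a nearby token
              [0.0, 1.0]])  # an unrelated token
out = self_attention(X)
print(out.shape)  # (3, 2)
```

After the update, the two similar tokens end up even closer to each other than either is to the unrelated one, which is the “pull together” effect described in the text.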

How self-attention works: here attention scores are computed between “station” and every other word in the sequence, and they are then used to weight a sum of word vectors that becomes the new “station” vector.

Self-attention confers Transformers with two crucial properties:

The embedding spaces they learn are semantically continuous, i.e. moving a bit in an embedding space only changes the human-facing meaning of the corresponding tokens by a bit. The word2vec space also satisfied this property.

The embedding spaces they learn are semantically interpolative, i.e. taking the intermediate point in between two points in an embedding space produces a point that represents the “intermediate meaning” between the corresponding tokens. This comes from the fact that each new embedding space is built by interpolating between vectors from the previous space.

This is not entirely unlike the way the brain learns, mind you. The key learning principle in the brain is Hebbian learning — in short, “neurons that fire together, wire together”. Correlation relationships between neural firing events (which may represent actions or perceptual inputs) are turned into proximity relationships in the brain network, just like Transformers (and word2vec) turn correlation relationships into vector proximity relationships. Both are maps of a space of information.

From emergent word arithmetic to emergent vector programs

Of course, there are significant differences between word2vec and LLMs as well. Word2vec wasn’t designed for generative text sampling. LLMs are a lot bigger and can encode vastly more complex transformations. The thing is, word2vec is very much a toy model: it is to language modeling as a logistic regression on MNIST pixels is to state-of-the-art computer vision models. The fundamental principles are mostly the same, but the toy model lacks any meaningful representation power. Word2vec wasn’t even a deep neural network — it had a shallow, single-layer architecture. Meanwhile, LLMs have the highest representation power of any model anyone has ever trained — they feature dozens of Transformer layers, hundreds of layers in total, and their parameter count ranges in the billions.

Just like with word2vec, LLMs end up learning useful semantic functions as a by-product of organizing tokens into a vector space. But thanks to this increased representation power and a much more refined autoregressive optimization objective, we’re no longer confined to linear transformations like a “gender vector” or a “plural vector”. LLMs can store arbitrarily complex vector functions — so complex, in fact, that it would be more accurate to refer to them as vector programs rather than functions…. more

News Of the Week

CB Insights, Q3 Deep Dive

I got some good news. I got some bad news. Global venture funding increased quarter-over-quarter to hit $64.6B in Q3’23. But this increase wasn’t driven by a resurgence of deals. In fact, deal count dropped to its lowest level since 2016. Instead, Q3’23’s funding total was propped up by $100M+ mega-rounds, including 6 deals of $1B or more. Without these 6 billion-dollar deals, global funding would’ve actually ticked down QoQ in Q3’23. Dive deeper into our latest funding data (and download it for yourself) in our State of Venture Q3’23 Report.

Electrifying. Speaking of big-time rounds, half of Q3’23’s 10 largest deals went to the electric vehicle industry. Meanwhile, two other top deals were focused on clean energy and sustainable manufacturing. This prioritization parallels a major push by governments around the world to reduce emissions in industries like automotive, steelmaking, mining, and more. Explore top rounds across stages — from seed to Series E+ — by downloading the full State of Venture Q3’23 Report.

Giddy-oof. Twelve unicorns (private companies valued at $1B+) were born in Q3’23. This marked a 40% drop QoQ and the lowest birth count in over 6 years. The US maintained the lead in unicorn births (5), followed by Asia (4) and Europe (2). Three startups tied for top-valued new unicorn in Q3’23, each with a $1.8B valuation: BitGo (US), Helsing (Germany), and Quest Global (Singapore). Learn more about the unicorn club’s newest members by downloading the full report and underlying data.

Pitchbook: VC Returns Are at a 10+ Year Low

by Jason Lemkin | Blog Posts

So according to Pitchbook, which collects a massive amount of VC data, VC funds themselves hit a 10+ year low in paper returns this past quarter.   In fact, returns plummeted to -20% (!)

Yes, that’s as bad as it sounds.

And this is a big deal, because VCs’ own investors, “Limited Partners” or LPs, are under pressure here as a result. That means they are giving less money to VCs to invest further. It’s led in part to a big overall slowdown in venture. Not just because of the Unicorn Crash, or falling valuations, but simply because returns have fallen so far, so fast, that LPs are taking a big pause or way slowing things down in many cases.

Having said that, you have to be careful viewing venture in too short a timeframe, and in looking at paper returns.

As rough as 2023 is for LPs, 2021 was insane the other way.  Top LPs had 95%+ returns!!

The best startups are still growing revenue at strong rates, if not the crazy pace of 2021.  And IPOs have slowly restarted.  And the U.S. economy is still strong.

But the Unicorn Overhang is going to cloud late-stage funds especially, driving down paper returns and cash distributions.  For now, they are at a 10-year low.  2 years ago, they were at a 15+ year high.

Greylock Closes New $1B Fund To Invest In Earliest-Stage Startups

October 3, 2023

Chris Metinko

Venture capital may have slowed, but that doesn’t mean some VC firms aren’t still raising big.

Greylock — famous for bets on companies such as Airbnb, Coinbase and then-Facebook, now Meta — announced its 17th fund, a $1 billion vehicle focused on pre-seed, seed and Series A founders. 

The firm will look to fund AI-first companies across a variety of areas, including cybersecurity, fintech, consumer and more. In a blog post, the firm said more than 80% of the investments made in its previous fund were pre-seed, seed or Series A.

“We expect that every company will become an AI company,” the blog reads. “While it’s been exciting to watch as the venture community has embraced AI as a thesis area over the last year, Greylock has been committed to AI investing for a decade and this commitment continues into Fund 17.”

Simultaneously, Greylock also announced a new program called Greylock Edge, a three-month company-building program for pre-idea, pre-seed and seed founders that also will include some type of “flexible” funding.

Greylock announced its 16th fund in September 2020, which also totaled $1 billion. Some of its recently announced investments include Frec, PayJoy and Upwind Security.

Big money across the pond

That wasn’t the only news of a large new fund on Tuesday. London-based Atomico has raised $1.1 billion of new funding to invest in startups even as Europe also has seen a decline in venture capital funding, the Financial Times reported. The firm is raising the cash for its new venture and growth funds, with a goal of $1.35 billion for both funds, per the report.

At The Sam Bankman-Fried Trial, It’s Clear FTX’s Collapse Was No Accident 

SBF’s top lieutenant and ex-girlfriend testified that criminality, not carelessness, destroyed billions in value. So why is a different narrative circulating?


I’m just back at my desk after spending two days in court at the Sam Bankman-Fried trial. The downtown Manhattan courthouse is just a few stops away on the subway, and I figured I’d stop by to see Caroline Ellison, Alameda Research’s ex-CEO and Bankman-Fried’s ex-girlfriend, testify about his alleged crimes. It was more revelatory than expected.

Bankman-Fried’s empire — which spanned FTX and Alameda — lost billions of customer money without dispute, but there’s a narrative that his simple carelessness might’ve been at fault. It was a whoopsie, you could say, from a guy with good intentions. High-profile writers including Michael Lewis seemed to buy the theory, and Bankman-Fried’s lawyers played off it. Sam was a “math nerd who didn’t drink or party,” his legal team said. And well, “some things got overlooked.”

Shocked might not be the right word, but I was astonished to see the delta between that narrative and the reality in court. Speaking clearly, and with devastating precision, Ellison told jurors exactly how she and Bankman-Fried funneled FTX customer funds into Alameda Research, then spent the money even after FTX customers weren’t likely to get it back. 

The crimes were no mistake. With everything laid out on spreadsheets, Ellison revealed how she periodically updated Bankman-Fried on how much money Alameda had taken from FTX customers, using the cryptic “FTX Borrows” as the line item. She admitted labeling it such in case the company landed in legal trouble. And even as Alameda drew more than $10 billion from FTX, more than FTX’s total remaining assets, Ellison said Bankman-Fried directed her to pay back loans to crypto lenders, like Genesis, leaving FTX depositors out to dry…. More

How is it still getting worse for Sam Bankman-Fried?

The defense botched the cross examination of Caroline Ellison.

By Elizabeth Lopatto, a reporter who writes about tech, money, and human behavior. She joined The Verge in 2014 as science editor. Previously, she was a reporter at Bloomberg.

In the break after Caroline Ellison stepped down from the stand, Barbara Fried engaged defense lawyer Christian Everdell in an animated conversation. Fried, the defendant’s mother, was gesticulating, and clearly had a strong opinion about something. Everdell walked off, and Mark Cohen talked to her for a bit after that.

Fried seemed frustrated, and I couldn’t blame her. The defense absolutely biffed the cross-examination of Ellison and, to make matters worse, was unable to keep a recording of an all-hands meeting where Ellison confessed to taking customer funds from being played for the jury. Is this really the best the defense can do?

Before this case, I had been told that Everdell and Cohen were “workman-like,” which I took to mean that they were unshowy but competent. I now believe that comment was an insult. I have been waiting for a juicy cross-examination, as I live for chaos and drama. I am beginning to think I am not going to get one.

In Cohen’s disorganized cross-examination, he mostly bored the jury

Ellison had given, in her direct testimony, fairly damning evidence tying FTX CEO Sam Bankman-Fried to the conspiracy to take FTX customer funds. There were fake balance sheets, one of which was sent to crypto lender Genesis. After a Genesis representative received the balance sheet, he texted Ellison to tell her he’d spoken to Bankman-Fried — strongly suggesting that Bankman-Fried was aware of the contents of the fake balance sheet. Not great!

But a lot of testimony relied on Ellison recounting conversations she’d had in person, or on auto-deleting text messaging platforms. This gave the defense an opportunity to try to make her sound unreliable. After all, she had an incentive to flip on Bankman-Fried: the possibility of leniency in her sentencing. Given her fun tweets about speed, the fact that she was Bankman-Fried’s ex-girlfriend, and that she’d apparently written a bunch of stuff down, I was expecting fireworks. For the first time in this trial, maybe the defense had an opening.

Instead, I got a sad trombone. In Cohen’s disorganized cross-examination, he mostly bored the jury. At one point, two different jurors appeared to be asleep.

Midway through the morning I began wondering if there was a mercy rule for cross-examinations. Prosecutor Danielle Sassoon had run an effective direct examination, creating an easy-to-follow narrative. By contrast, Cohen appeared to be bumbling around, taking up one topic only to abruptly pivot. Sure, we’re still in the prosecution’s case, but Cohen had all night to prepare his lines of questioning. 

Apparently, Alameda had a problem with retaining accountants

We established that Bankman-Fried had a much larger appetite for risk than Ellison. I thought perhaps it might be building to something, but this line of questioning was quickly dropped. We established that Bankman-Fried and Ellison reacted differently to stress, and that they also had different approaches to media: namely, that Ellison avoided it while Bankman-Fried sought it out. Okay?

We discovered that there was one accountant at Alameda in 2021, and two more junior accountants were hired in 2022. Apparently, Alameda had a problem with retaining accountants, which didn’t surprise me much; CEOs generally don’t do the balance sheets for their companies. I was ready to hear this pursued further — but then it, too, was dropped.

I think the defense was also trying to suggest that the government had coerced Ellison’s testimony, by pointing out that she had pleaded guilty to a charge of defrauding investors that she couldn’t have been involved in. After all, she didn’t prepare materials for them. Unfortunately, she did say that she had conversations with investors as part of their due diligence — and, of course, Alameda was taking on losses from FTX to keep FTX’s balance sheet pristine. This line of questioning felt like a waste of time.

There were rather a lot of sidebars during the cross-examination, to the point that when one occurred, several jurors looked entertained. There were a few yesterday, too, including one in which the prosecution complained that Bankman-Fried was visibly scoffing at Ellison’s answers, according to the court transcript. (I did observe him occasionally shaking his head, and sometimes quivering at points during her testimony, but didn’t have a view of his face.) …more

SBF Didn’t Understand Risk-Reward

By Martin Peers

Oct. 10, 2023 5:01 PM PDT

Just when it seemed there was nothing new we could learn from the SBF saga, the FTX founder’s onetime deputy (and onetime romantic partner) Caroline Ellison testified in court on Tuesday. And her account of Sam Bankman-Fried’s attitudes toward risk, which helped set the stage for FTX’s bankruptcy, was truly mind-blowing. Whatever the outcome of the trial, anyone in the business of entrusting money to entrepreneurs should be sobered by this story.

Consider these statements, drawn from the court transcript of Ellison’s testimony: At one point in late 2021, Bankman-Fried directed Ellison and others at his hedge fund, Alameda Research, to “borrow as much money as we could from whatever sources we could find at whatever terms we could get.” Among the collateral: FTX’s token, FTT, which let’s not forget is a made-up piece of junk. Ellison was conscious of the risk: If all those loans were called at once, she said, Alameda “might have to default on our loans or go bankrupt.” Of course, what happened is that Alameda used FTX customer money to repay loans, Ellison testified—$10 billion in total, part of $14 billion that Alameda took from FTX. What a business plan. (For more on the trial, see here.)

And it’s not like Bankman-Fried was borrowing the money because he had somehow found a time machine that allowed him to buy Apple stock at its price in the year 2000. No, he wanted the money to make venture investments in early-stage companies, for political donations and to lend billions to employees, including himself. His idea of risk-reward is a little skewed. Investing in early-stage companies is extremely risky not just because of the unproven nature of what you’re investing in, but because of the difficulty of getting your money out in a hurry if you need to, as Ellison noted in her testimony. For those reasons, it’s particularly inappropriate to finance such investments with borrowed money, even more so when the borrower is leveraged to the hilt.

So there you have it. These are the clowns entrusted with $2 billion by venture capitalists and billions more by lenders and customers. You can blame customers for being stupid enough to listen to celebrities promoting FTX (why would Tom Brady know anything about crypto?). But the lenders and venture capitalists are even more blameworthy, although deciding which group is the more incompetent isn’t easy. Perhaps the lenders have the most egg on their face: After all, if they’d done an ounce of due diligence on even the value of Alameda’s collateral, they might have helped avert the FTX disaster.

From FTX to Coinbase ventures, plenty of VCs have made big crypto bets that could actually pan out

October 12, 2023

Good morning Term Sheet readers—it’s Fortune Crypto editor Jeff Roberts tagging in. In recent weeks, an interesting storyline has emerged concerning the intersection of venture capital and crypto. Discussions on the topic have mostly revolved around VC firms getting burned by ill-advised crypto bets—most notoriously, Sequoia vaporizing $200 million on FTX—but there is another aspect to the story. 

I’m referring to crypto companies’ forays into venture investing. The topic has been in the news this month as a result of a strange twist in the FTX debacle: It turns out that, while alleged fraudster Sam Bankman-Fried was squandering his customers’ money left and right—including massive loans to FTX executives and a promised $55 million be-my-friend deal with Tom Brady—he stumbled on a jackpot.

The bonanza in question comes from a $500 million bet by FTX on the buzzy AI company Anthropic. The startup, which was founded by former OpenAI employees, raised $300 million from Google to make the tech giant its “preferred cloud provider”—and then turned around and raised $4 billion from Amazon Web Services. Now, there are rumors Anthropic is raising another $2 billion at a $30 billion valuation. Ironically, this could lead to a handsome return and ease some of the pain for FTX’s fleeced customers and investors. 

The Anthropic situation is an outlier, but FTX is hardly the only crypto firm with a VC portfolio. There is also Celsius, another bankrupt crypto firm, whose founder liked to help himself to customer money and which ended up as roadkill during last year’s meltdown. The defunct firm was acquired by a group—appropriately named Fahrenheit—led by Silicon Valley veteran Michael Arrington, in part because of its venture capital portfolio.

The portfolio consists of startups focused on “DeFi infrastructure, trading tools, treasury management and Bitcoin mining innovation. We are very excited to help these companies grow and reach their full potential,” Arrington told Term Sheet. Fahrenheit does not for now hold any soon-to-be unicorns in its portfolio, but it’s early enough that some diamonds in the rough could emerge.

Meanwhile, the big dog when it comes to crypto firms making VC bets is Coinbase Ventures, which owns shares in hundreds of startups. In March of 2022, Oppenheimer said the portfolio had “hidden value” and estimated it was worth over $6 billion. That assessment is 18 months old and it’s unlikely that valuation has held up following the recent carnage in the crypto industry, but don’t be surprised if there are some thoroughbreds in that stable.

Things will get really interesting if crypto can turn the corner and embark on another bull run—an event that’s a matter of time if previous cycles are any indication. If that happens, the industry’s current reputation as a black hole for VCs could quickly change, and the biggest crypto firms will not be asking for funds but instead talking about exits for their portfolio companies.

Jeff John Roberts

X Tests New Live-Stream and Spaces Buttons in the Post Composer

Published Oct. 12, 2023

By Andrew Hutchinson, Content and Social Media Manager

X is looking to make it easier for users to live stream in the app, with new access buttons in testing for both its audio Spaces and video broadcasting options.

As you can see in this example, shared by X designer Andrea Conway, X is currently testing a new process that would see both the Spaces and video live-streaming options appear when you hold down the post composer button. Ideally, that will lead to more people streaming directly on X, by making the option more prominent and readily accessible in-stream.

The previous Twitter team tried something similar, with your various posting options expanding from the composer “+” icon.

X is now looking to revisit this approach, while it’s also looking to add more intuitive functionality for posts, like double-tap for Like and swipe to reply. That would provide more direct engagement options, potentially without needing explicit icons for each.

X is currently working to build out its live-stream functionality, with owner Elon Musk testing a couple of variations of direct streaming and gaming broadcasts in the app.

X sees live-streaming as another avenue of opportunity to get more content feeding direct into the platform, though it is also worth noting that X/Twitter has had live-streaming, and Spaces, available for some time, and neither has caught on as a major element of focus.

Social media platforms swamped with fake news on the Israel-Hamas war

Of all platforms, Elon Musk’s X appears to have had the worst outbreak of fake news related to the war.

Disinformation – fake news that is spread deliberately – about the Israel-Palestine conflict has spread across all social networks like X, Facebook, Instagram and TikTok [Yousef Masoud/AP]

By Pranav Dixit

Published On 10 Oct 2023

Hours after Hamas, the armed Palestinian group, attacked Israel on Saturday, X, the social network owned by the world’s richest man, Elon Musk, was awash with fake videos, photos and misleading information about the conflict.

“Imagine if this was happening in our neighbourhood, to your family,” posted Ian Miles Cheong, a far-right commentator whom Musk interacts with often, along with a video that he claimed showed Palestinian fighters killing Israeli citizens.

A Community Note, an X feature that lets users add context to posts, stated that the people in the clip were members of Israeli law enforcement, not Hamas.

But the video is still up and has racked up millions of impressions. And hundreds of other X accounts have shared the clip on the platform, some of them with verified check marks, an Al Jazeera search showed.

Disinformation – fake news that is spread deliberately – about the war and the Israel-Palestine conflict in general spread across other social networks like Facebook, Instagram and TikTok too. But thanks to Musk’s revamped policies that let anyone pay to be verified, as well as large-scale layoffs in X’s Trust and Safety teams, the platform appears to have seen the worst of it.

X, Meta, which owns Facebook, Instagram and Threads, TikTok, and BlueSky, did not respond to Al Jazeera’s request for comment.

On Monday, X declared there were more than 50 million posts on the platform over the weekend about the conflict.

In response, the company said it had removed newly-created accounts affiliated with Hamas, escalated “tens of thousands of posts” for sharing graphic media and hate speech, and updated its policies that define what the platform considers “newsworthy”.

“These massive companies are still stumped by the proliferation of disinformation, even as no one is still surprised by it,” said Irina Raicu, the director of the Internet Ethics Program at Santa Clara University.

“They put out numbers – how many posts they’ve taken down, how many accounts they’ve blocked, what settings you might want to change if you don’t want to see carnage. What they don’t put out are their metrics of their failures: how many distortions were not accompanied by ‘Community Notes’ or otherwise labelled, and for how long. It’s left to the journalists and researchers to document their failures after they happen.”

‘Old and recycled footage’

Over the last few years, bad actors have repeatedly used social media platforms to spread disinformation in response to real-world conflicts. In 2019, for instance, Twitter and Facebook were flooded with rumours and hoaxes after India and Pakistan, two nuclear powers, came to the brink of war following Pakistan’s shooting down of two Indian warplanes and its capture of an Indian pilot.

EU Officials Launch Investigation into X Over the Distribution of Misinformation Around the Israel-Hamas War

Published Oct. 12, 2023

By Andrew Hutchinson, Content and Social Media Manager

EU officials certainly seem keen to enforce the obligations of their new Digital Services Act, with new reports that the EU has launched an official investigation into X over how it’s facilitated the distribution of “graphic illegal content and disinformation” linked to Hamas’ attack on Israel over the weekend.

Various reports have indicated that X’s new, more streamlined, more tolerant approach to content moderation is failing to stop the spread of harmful content, and now, the EU is taking further action, which could eventually result in significant fines and other penalties for the app.

The EU’s Internal Market Commissioner Thierry Breton issued a warning to X owner Elon Musk earlier in the week, calling on Musk to personally ensure that the platform’s systems are effective in dealing with misinformation and hate speech in the app.

Musk responded by asking Breton to provide specific examples of violations, though X CEO Linda Yaccarino then followed up with a more detailed overview of the actions that X has taken to manage the rise in related discussion.

Though that may not be enough.

According to data published by The Wall Street Journal:

“X reported an average of about 8,900 moderation decisions a day in the three days before and after the attack, compared with 415,000 a day for Facebook”

At first blush that seems to make some sense, given the comparative variance in user numbers for each app (Facebook has 2.06 billion daily active users, versus X’s 253 million). But broken down more specifically, the numbers show that, on a per-user basis, Facebook is actioning almost six times more reports, on average, than X. So even with the audience variation in mind, Meta is taking a lot more action, which includes addressing misinformation around the Israel-Hamas war.
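The per-user comparison can be checked with a quick back-of-the-envelope calculation, using only the figures quoted in this piece (the Wall Street Journal’s moderation counts and the stated daily active user numbers):

```python
# Back-of-the-envelope check of the moderation figures quoted above.
facebook_decisions_per_day = 415_000
x_decisions_per_day = 8_900

facebook_dau = 2.06e9  # daily active users
x_dau = 253e6

# Moderation decisions per user per day on each platform
fb_rate = facebook_decisions_per_day / facebook_dau
x_rate = x_decisions_per_day / x_dau

print(f"Facebook: {fb_rate:.2e} decisions per user per day")
print(f"X:        {x_rate:.2e} decisions per user per day")
print(f"Ratio:    {fb_rate / x_rate:.1f}x")
```

The raw counts differ by a factor of roughly 47, but once normalized by audience size the gap is about 5.7x, which is where the “almost six times” figure comes from.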

So why such a big difference?

In part, this is likely due to X putting more reliance on its Community Notes crowd-sourced fact-checking feature, which enables the people who actually use the app to moderate the content that’s shown for themselves.

Yaccarino noted this in her letter to Breton, explaining that:

“More than 700 unique notes related to the attacks and unfolding events are showing on X. As a result of our new “notes on media” feature, these notes display on an additional 5000+ posts that contain matching images or videos.”

Yaccarino also said that Community Notes related to the attack have already been viewed “tens of millions of times”, and in combination, X is clearly hoping that Community Notes will make up for any shortfall in moderation resources as a result of its recent cost-cutting efforts.

But as many have explained, the Community Notes process is flawed, with the majority of notes that are submitted never actually being displayed to users, especially around divisive topics.

Because Community Notes require consensus from people of opposing political viewpoints in order to be approved, the contextual pointers are often left in review, never to see the light of day. That means for things that are in general agreement, like AI-generated images, Community Notes are helpful, but they’re not overly effective for topics that spark dispute.

In the case of the Israel-Hamas war, that could be an impediment, with the numbers suggesting that X is putting too much reliance on volunteer moderators for key concerns like terrorism-related content and organized manipulation.

Indeed, third-party analysis has also indicated that coordinated groups are already looking to seed partisan information about the war, while X’s new “freedom of speech, not reach” approach has also led to more offensive, disturbing content being left active in the app, despite it essentially promoting terrorist activity.

X’s view is that users can choose not to see such content, by updating their personal settings. But if posters fail to tag such content in their uploads, then that system is also seemingly falling short.

Atlassian to acquire former unicorn Loom for $975M

Ron Miller @ron_miller / 7:32 AM PDT•October 12, 2023

Image Credits: William West / Getty Images

Atlassian announced this morning that it is acquiring video messaging service Loom for $975 million, the same company that carried a $1.53 billion valuation in May 2021, when it announced a $130 million Series C. That was when companies were still thinking about all work being cloud-based and the future looked oh so bright.

As times have changed, so has the value of the company, but Atlassian still sees Loom and its 25 million customers, and more than 5 million video conversations per month, as a valuable asset. The company believes that it can be a useful collaboration tool for its platform, especially Jira and Confluence.

“Async video is the next evolution of team collaboration, and teaming up with Loom helps distributed teams communicate in deeply human ways,” Mike Cannon-Brookes, Atlassian co-founder and co-CEO said in a statement.

The company also sees the power of AI helping push this acquisition further with features like “video transcripts, summaries, documents, and the workflows developed from them, providing multiple ways for teams to connect and collaborate,” according to the company.

Joe Thomas, co-founder and CEO of Loom, tried to put a positive spin on the acquisition, saying in a statement, “Loom’s vision is to empower everyone at work to communicate more effectively wherever they are, and by joining Atlassian, we can accelerate their mission to unleash the potential of every team.” That is, of course, the argument of every acquired CEO, that the combined entities can do so much more, so much faster, than the company could do on its own.

But this was a company that launched in 2015, and raised over $200 million along the way. Its $30 million Series B in 2019 included industry luminaries like Figma CEO Dylan Field, Front CEO Mathilde Collin and Instagram co-founders Kevin Systrom and Mike Krieger, along with VC firms Sequoia and Kleiner Perkins.

The company’s customer list on its website reads like a who’s who of corporations across a variety of verticals, including companies like Ford, Tesla, Disney, Walmart, Goldman Sachs and Amazon, to name but a few.

Startup of the Week

Adobe Firefly’s generative AI models can now create vector graphics in Illustrator

Frederic Lardinois @fredericl / 9:00 AM PDT•October 10, 2023

Image Credits: David Paul Morris/Bloomberg / Getty Images

Illustrator is Adobe’s vector graphics tool for graphic artists and it’s about to join the generative AI era with the launch of the Firefly Vector Model at Adobe’s MAX conference today. Adobe describes the new model as “the world’s first generative AI model focused on producing vector graphics.” Like Firefly for creating images and photos, Firefly for Illustrator will be able to create entire vector graphics from scratch. And like the other Firefly models, the vector model, too, was trained on data from Adobe Stock.

In its beta, Illustrator will now let you create entire scenes through a text prompt. What’s nifty here is that those scenes can consist of multiple objects. So this isn’t just a jumble of vectors that make up the overall graphic; Illustrator will automatically generate these different objects, and you can manipulate them individually to your heart’s content, just like any other group or layer in Illustrator.


Alexandru Costin, Adobe’s VP for generative AI and Sensei, told me the company used tens of millions of vector images in Adobe Stock to train Firefly to enable this new capability. Costin described the process as “a journey” and since there hasn’t been as much work done on using generative AI to create vector drawings compared to the work on creating other images, this surely took a bit more work on the team’s part. He noted that the team focused on creating a model that could generate these images with the fewest possible points, too.


Another new feature that’s coming to Illustrator is called Mockup, which allows Illustrator users to take any 3D scene and then take any vector art and apply it to that 3D scene. That could be a design for a drink can, for example, or a mockup of a new logo on a t-shirt. “Mockup is really exciting to show your customers the art in context so they understand what they’re buying when they contract you as a freelancer,” Costin explained.

Also new is Retype, which converts static text in images to editable text (and it’ll find matching fonts, too). And Illustrator is now available on the web as well!

X of the Week

Another Step Toward International War

Ray Dalio

Founder, CIO Mentor, and Member of the Bridgewater Board

October 12, 2023

What happened and is happening between Israel and Hamas, like what happened and is happening between Ukraine and Russia, should raise revulsion and fear in everyone. That is true both because these conflicts reveal the unimaginably terrible and revolting ways people can and do treat other people—especially innocent civilians—and because no one anywhere can be sure that they won’t someday find themselves in some horrible war. While Israel, Hamas, Ukraine, and Russia are in hot wars, thankfully the major powers (the US and China) are not, though they remain at the brink of one. It appears that we are at a very critical juncture in which we will soon see if the Israel-Hamas war spreads and how far it spreads and, longer term, whether the great powers are forces for peace (and will back away from the brink of direct conflict) or get involved (and cross the brink). Hopefully the horrific and tragic images we are all seeing will encourage restraint. However, it is very likely the case that the images of civilian casualties we are seeing now and will see during an escalating war in Gaza will lead to new conflicts both between countries and within countries (e.g., repulsive violence against Jewish and Muslim populations in many countries). In my opinion, this war has a high risk of leading to several other conflicts of different types in a number of places, and it is likely to have harmful effects that will extend beyond those in Israel and Gaza. Primarily for those reasons, it appears to me that the odds of transitioning from the contained conflicts to a more uncontained hot world war that includes the major powers have risen from 35% to about 50% over the last two years since I wrote my book Principles for Dealing with the Changing World Order.

In today’s post, I describe how I see what is happening within the context of how these sorts of things have repeatedly played out in the past. I also will make another suggestion for what leaders can do to move toward a better world—i.e., a more broadly prosperous and peaceful world.