A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I selected the articles because they are of interest. The selections often include things I entirely disagree with. But they express common opinions, or they provoke me to think. The articles are only snippets. Click on the headline to go to the original. I express my point of view in the editorial and the weekly video below.
This Week’s Video and Podcast:
Content this week from @kteare, @ajkeen, @pmarca, @sama, @andreretterath, @Haje, @nmasc_, @Carta, @Om, @stratechery, @benthompson, @Beezer232, @jasongulya, @timadlerauthor, @FLandymore3, @KrishnanRohit, @JGlasner, @PebbleLife, @Auren
Contents
Editorial:
The VC industry needs to rip up the playbook and start again
The Biggest Challenge for Data-Driven Investors: Mirroring the Past Into the Future
With X, Musk is playing the bottoms-up game
Sequoia Capital Partner, Other Investors Boycott Web Summit Following CEO’s Israel Comments
19% of venture rounds are down rounds in 2023.
Social Internet Is Dead. Get Over It.
Social media is history — just ask the kids!
Beezer Clarkson on 20VC
Geoffrey Hinton realised mankind was history when AI got the joke
Sam Altman Warns That AI Is Gonna Destroy A Lot Of People’s Jobs
At The Sam Bankman-Fried Trial, It’s Clear FTX’s Collapse Was No Accident
Here comes another Netflix price hike
The Biggest IPOs Of 2021 Have Shed 60% Of Their Value
Auren Hoffman on Persistence and Success (Optimism?)
Editorial
Last Sunday we recorded a Gillmor Gang and the topic turned to “progress” and the future. The show has not been released yet but due to the wonders of our streaming media world you can see the live recording as it happened here –
https://twitter.com/gillmorgang/status/1713619139628351897?s=61&t=G32nn4vUSL2wQvFzsbr_JQ
You will see me arguing that progress is measured by how much human work is replaced by automation, freeing human time for other things. Central to that idea is the belief that technological innovation will reduce the need for human labor. This idea of progress requires optimism of both the mind and the spirit: not optimism in “technology” but in human ingenuity. And, after all, what is technology if not human ingenuity? The Gang had a lively discussion about it.
We were quite timely because 24 hours later Marc Andreessen released his mostly excellent Techno-Optimist Manifesto. It asks us all to:
“Become our allies in the pursuit of technology, abundance, and life.”
and declares
We believe technology is a lever on the world – the way to make more with less.
This aligns with my own view and that of many innovators. The product of human ingenuity is growth, and growth drives human progress. AI is certainly one of the key inventions capable of accelerating progress – making more with less.
In another of this week’s essays, we get an opposite point of view:
“Professor Hinton, who created waves when he quit his job at Google partly in order to sound the alarm over the existential threat AI poses to humanity, said: ‘PaLM [Google’s AI system] could actually explain why a joke was funny.’
Another Damascene moment came when Hinton realised how good neural networks are at sharing information with each other compared to humans. Humans, who share information at the sentence level, could be thought of as swapping bytes of information between each other compared to AI, which can instantly share gigabytes.
‘That was the moment I realised that we were history,’ concluded Hinton. ‘I’m pessimistic because pessimists are usually right.’”
“Pessimists are usually right” is itself a sentence that only a pessimist could utter. I can’t think of any period in human history when pessimism was right. From the Neanderthal age to today, humans have overcome limits to the development of the species while reducing individual working hours, improving health and longevity, globalizing access to knowledge, and more. But Hinton is not alone in that view.
Sam Altman is interviewed on the same topic this week. Focusing on AI and job loss he is reported as follows:
Altman thinks job loss is an inevitable casualty of any “technological revolution.” Every 100 to 150 years, Altman said at The Wall Street Journal Tech Live conference on Monday, half of people’s jobs end up getting phased out.
“I’m not afraid of that at all,” he said, as quoted by the paper. “In fact, I think that’s good. I think that’s the way of progress, and we’ll find new and better jobs.”
Altman is historically correct and logically correct also. Progress destroys jobs because it makes them unnecessary. If it is possible to improve teaching by introducing AI, that would be wonderful. The destruction of jobs is the very essence of progress.
The bit about finding new jobs has historically been true. But it would also be OK if technical innovation reached a point where AI and robots would even do the new jobs. Ultimately we will not need 7 billion or more people in “work” or paid labor.
That enables us to ask new and exciting questions. What motivates us to make an effort? How do we use non-work or leisure time to make our lives more fulfilling? How does the wealth created by automation get used and distributed? How can we lift living standards for everybody? Are work and effort the same thing? Is free time the real currency of progress?
Altman poses and answers some of these:
Since framing job loss as some sort of greater good is basically his whole narrative. In the past, he’s never sounded that upset about AI replacing the “median human,” either.
Still, he did reiterate that “we are really going to have to do something about this transition” — though what that “something” will actually be remains unclear.
“It is not enough just to give people a universal basic income,” Altman said. “People need to have agency, the ability to influence. We need to jointly be architects of the future.”
When land enclosures and automated farming transformed agriculture during the agricultural revolution, they led to the formation of cities and created the basis for the industrial revolution. The Spinning Jenny and the Steam Engine would have made no sense without free laborers leaving the land. AI will have a similar impact at greater scale, and faster. The human energy released for new endeavors will create things we cannot conceive of today.
Capitalism and the market are at the heart of Andreessen’s Manifesto as mechanisms for fueling progress. So far that is historically true. Capitalism beats any system that has come before it. Because of this focus on capitalism and the market, Andreessen has no narrative about the distribution of abundance or wealth. He is silent on ownership other than to confirm, through the silence, that he sees no need to discuss it. The implication is that wealth flows to a few private hands, progress does not serve humanity as a whole except for the removal of drudgery, and abundance stays highly centralized in the hands of elites. He may not believe this, but there is no evidence he does not.
However, at some point, the wealth created by autonomous agents will suggest the need to challenge its concentration in a few hands. Altman has universal basic income as a partial answer to this. But most likely, we will need a social tax on automated wealth, ensuring that society benefits from humanity’s gains and the resulting abundance.
In his wonderful novel, For Us the Living, written in 1939, Robert Heinlein called this a “Heritage” payment. That is a payment due to every human in order to enable all humans to benefit from the sum of all human progress.
The Heritage Check system grants every member of society a guaranteed minimum income – individuals may choose to work, but are not obligated to. Those who choose to work enjoy a remarkably short workday and pleasantly high wages. The result is a true leisure society – most people indulge themselves in artistic or artisanal pursuits.
Source: https://payingformyjd.wordpress.com
Altman is closest to asking and answering this question because he is closest to making jobs obsolete and seems to have an awareness of society as a real thing. (I always said he was smart.)
To quote Oscar Wilde:
A map of the world that does not include Utopia is not worth even glancing at, for it leaves out the one country at which Humanity is always landing. And when Humanity lands there, it looks out, and, seeing a better country, sets sail. Progress is the realisation of Utopias.
Now, I have said that the community by means of organisation of machinery will supply the useful things, and that the beautiful things will be made by the individual. This is not merely necessary, but it is the only possible way by which we can get either the one or the other.
Essays of the Week
The Techno-Optimist Manifesto
“You live in a deranged age — more deranged than usual, because despite great scientific and technological advances, man has not the faintest idea of who he is or what he is doing.” — Walker Percy
MARC ANDREESSEN, OCT 16, 2023
“Our species is 300,000 years old. For the first 290,000 years, we were foragers, subsisting in a way that’s still observable among the Bushmen of the Kalahari and the Sentinelese of the Andaman Islands. Even after Homo Sapiens embraced agriculture, progress was painfully slow. A person born in Sumer in 4,000BC would find the resources, work, and technology available in England at the time of the Norman Conquest or in the Aztec Empire at the time of Columbus quite familiar. Then, beginning in the 18th Century, many people’s standard of living skyrocketed. What brought about this dramatic improvement, and why?”
— Marian Tupy
“There’s a way to do it better. Find it.”
— Thomas Edison
Lies
We are being lied to.
We are told that technology takes our jobs, reduces our wages, increases inequality, threatens our health, ruins the environment, degrades our society, corrupts our children, impairs our humanity, threatens our future, and is ever on the verge of ruining everything.
We are told to be angry, bitter, and resentful about technology.
We are told to be pessimistic.
The myth of Prometheus – in various updated forms like Frankenstein, Oppenheimer, and Terminator – haunts our nightmares.
We are told to denounce our birthright – our intelligence, our control over nature, our ability to build a better world.
We are told to be miserable about the future.
Truth
Our civilization was built on technology.
Our civilization is built on technology.
Technology is the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential.
For hundreds of years, we properly glorified this – until recently.
I am here to bring the good news.
We can advance to a far superior way of living, and of being.
We have the tools, the systems, the ideas.
We have the will.
It is time, once again, to raise the technology flag.
It is time to be Techno-Optimists.
Technology
Techno-Optimists believe that societies, like sharks, grow or die.
We believe growth is progress – leading to vitality, expansion of life, increasing knowledge, higher well being.
We agree with Paul Collier when he says, “Economic growth is not a cure-all, but lack of growth is a kill-all.”
We believe everything good is downstream of growth.
We believe not growing is stagnation, which leads to zero-sum thinking, internal fighting, degradation, collapse, and ultimately death.
There are only three sources of growth: population growth, natural resource utilization, and technology.
Developed societies are depopulating all over the world, across cultures – the total human population may already be shrinking.
Natural resource utilization has sharp limits, both real and political.
And so the only perpetual source of growth is technology.
In fact, technology – new knowledge, new tools, what the Greeks called techne – has always been the main source of growth, and perhaps the only cause of growth, as technology made both population growth and natural resource utilization possible.
We believe technology is a lever on the world – the way to make more with less.
Economists measure technological progress as productivity growth: How much more we can produce each year with fewer inputs, fewer raw materials. Productivity growth, powered by technology, is the main driver of economic growth, wage growth, and the creation of new industries and new jobs, as people and capital are continuously freed to do more important, valuable things than in the past. Productivity growth causes prices to fall, supply to rise, and demand to expand, improving the material well being of the entire population.
We believe this is the story of the material development of our civilization; this is why we are not still living in mud huts, eking out a meager survival and waiting for nature to kill us.
We believe this is why our descendants will live in the stars.
We believe that there is no material problem – whether created by nature or by technology – that cannot be solved with more technology.
We had a problem of starvation, so we invented the Green Revolution.
We had a problem of darkness, so we invented electric lighting.
We had a problem of cold, so we invented indoor heating.
We had a problem of heat, so we invented air conditioning.
We had a problem of isolation, so we invented the Internet.
We had a problem of pandemics, so we invented vaccines.
We have a problem of poverty, so we invent technology to create abundance.
Give us a real world problem, and we can invent technology that will solve it.
The VC industry needs to rip up the playbook and start again
The boom is over but Silicon Valley’s optimism is masking problems with investor capital and returns
The writer is founder of Sifted, an FT-backed site about European start-ups
In these grim times, the world could do with a shot of hope. Right on cue, up pops the irrepressible venture capitalist Marc Andreessen to shout about his latest techno-optimist manifesto. “Give us a real world problem, and we can invent technology that will solve it,” the co-founder of Andreessen Horowitz wrote this week.
No matter the billions of dollars wasted on fruitless crypto and metaverse investments, nor the recent landslide in private market valuations, nor the still-chilly state of the public listings markets; the techno-capitalist machine that is Silicon Valley continues to hum with the conviction that it can build a better future. To the outside observer, it appears like “ideology as usual” in VC land.
Yet when the volume is turned down, many VCs have been quietly rethinking their financial game, recognising that the uniquely favourable conditions that benefited their industry over the past two decades are never going to occur again. Last year, some commentators even speculated about whether the industry had reached a “Minsky moment”, when asset values suddenly collapsed after a period of reckless speculation. (There has been nothing quite that dramatic so far.)
https://www.ft.com/content/25dfb910-b40c-4fa0-b7cc-74c23e336768
This year, others have questioned whether we might be nearing the end of the VC-driven entrepreneurial age. For an industry built on short-sighted enthusiasm and wild-eyed ambition, there is a lot of doubt around as many VC funds struggle to raise capital. The storytellers need a new story.
The VC industry certainly experienced a golden era over the first two decades of this century. The near-universal adoption of the internet and smartphones created the digital infrastructure for VC-backed ecommerce and social media companies to boom. With massive network effects and negligible marginal costs, consumer internet companies were catnip to VC investors, enabling them to turn relatively small early-stage investments into mega-sized exits.
Moreover, the extraordinarily loose monetary conditions after the global financial crisis of 2008 let VCs raise cheap money and hurl it at fast-scaling companies, such as Uber and Airbnb.
This capital-as-a-strategy model, prioritising revenue growth over cash or profit generation, is a lot harder to make work now that money costs something. When Brian Chesky, Airbnb’s co-founder, visited the FT recently, he acknowledged that his company would never have been able to follow its growth strategy of the 2010s today… Lots More
The Biggest Challenge for Data-Driven Investors: Mirroring the Past Into the Future
DDVC #57: Where venture capital and data intersect. Every week.
OCT 18, 2023
Today’s episode is all about AI and data biases and how they are prone to mirror the past into the future.
How Can Data-driven Approaches Help Investors Overcome Cognitive Biases?
Let’s look back at my very first episode “Why VC is Broken and Where to Start Fixing It” more than a year ago:
… the VC investment/decision-making process is manual, inefficient, non-inclusive, subjective and biased which leads not only to a huge waste of resources but more importantly to sub-optimal outcomes and missed opportunities.
Double-clicking on the “subjective and biased” part, we find that cognitive biases are the key driver leading investors to pattern matching and, oftentimes, suboptimal outcomes. Pattern matching on the basis of a limited sample size is dangerous, and unfortunately no investor has had the opportunity to experience all of the world’s success cases firsthand.
Data-driven approaches may balance these shortcomings if done right. By assembling a comprehensive time-series dataset about all companies out there, we can analyze how different features of successful vs unsuccessful companies have evolved over time, just as described in this paper.
Following the extraction of “success patterns” (check out this related piece on “Patterns of Successful Startups”), investors can not only translate them into an algorithmic selection of new investment opportunities but also leverage these findings to challenge their cognitive biases.
Firsthand experiences with a limited sample of successful vs unsuccessful companies shape subjective cognitive biases. Creating awareness for these biases and balancing them with more objective feature patterns identified through a significantly more comprehensive data sample merges the best of both worlds: Subjective/human + objective/data.
What Is Data Bias?
While humans are prone to cognitive biases, data-driven approaches and machine-learning models are prone to data bias. Data bias occurs when an information set is inaccurate and fails to represent the entire population.
For example, when looking at successful vs unsuccessful startups, one might over-index a specific industry or geography in the training data. As a result, extracted “success patterns” might only partially apply to the full universe of opportunities out there, limiting your ability to spot all success candidates.
Data bias is a significant concern as it can lead to biased responses and skewed outcomes, resulting in inequality and ineffectiveness in the screening/investment selection process.
How to Mitigate Data Bias?
As with cognitive biases, the first step is to create awareness of data biases. Only then can we leverage techniques like stratified sampling, oversampling and undersampling, or moderator variables. The latter has proven extremely valuable in the context of startup screening, so let’s dive into this topic in a bit more detail.
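To make one of these techniques concrete, here is a minimal sketch of random oversampling in plain Python. All data and names are hypothetical: the minority class (say, successful startups, which are rare in any venture dataset) is duplicated at random until the classes are balanced, so a model trained on the result no longer under-weights successes.

```python
import random

def oversample_minority(rows, label_key="success", seed=42):
    """Randomly duplicate minority-class rows until both classes are the same size."""
    rng = random.Random(seed)
    pos = [r for r in rows if r[label_key]]
    neg = [r for r in rows if not r[label_key]]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return rows + extra

# Toy venture sample: 3 successes vs 17 failures.
sample = [{"success": True}] * 3 + [{"success": False}] * 17
balanced = oversample_minority(sample)
# balanced now holds 17 successes and 17 failures.
```

Stratified sampling and undersampling follow the same idea from the other direction: either sample so that each stratum keeps its intended share, or discard majority-class rows instead of duplicating minority ones.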
Understanding Moderator Variables: In the context of research and statistics, a moderator variable affects the strength or direction of the relationship between an independent variable (=cause) and a dependent variable (=effect). In simpler terms, it can change how one factor affects another.
Addressing Data Bias with Moderators:
Clarifying Relationships: By examining moderator variables, we can better understand under which conditions certain relationships hold or don’t hold. For instance, if we’re studying the relationship between startup success (dependent variable) and investment received (independent variable), a moderator like “region of operation” might reveal that the relationship is stronger in urban areas compared to rural areas.
Identifying Hidden Biases: Sometimes, biases aren’t evident until you introduce a moderator. For example, a dataset might show that a tech bootcamp improves job placement rates for all participants. But when the moderator “gender” is introduced, it could reveal a significant discrepancy in placement rates between men and women, indicating a potential bias.
Limitations:
Doesn’t Eliminate Bias: Introducing moderator variables can help reveal and understand biases, but it doesn’t inherently eliminate them. This requires additional initiatives like the sampling techniques mentioned above.
Requires Thoughtful Selection: Not all variables serve effectively as moderators. Researchers must have a theoretical or empirical reason to believe that a certain variable can act as a moderator.
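The “Identifying Hidden Biases” example above can be sketched in a few lines of plain Python. The bootcamp numbers are invented for illustration: the aggregate placement rate looks healthy until the moderator “gender” splits it into two very different group rates.

```python
def placement_rate(rows):
    """Share of rows with a positive outcome."""
    return sum(r["placed"] for r in rows) / len(rows)

def rate_by_moderator(rows, moderator):
    """Recompute the rate separately for each value of the moderator variable."""
    groups = {}
    for r in rows:
        groups.setdefault(r[moderator], []).append(r)
    return {k: placement_rate(v) for k, v in groups.items()}

# Hypothetical bootcamp data: the aggregate rate looks fine...
rows = (
    [{"gender": "m", "placed": 1}] * 45 + [{"gender": "m", "placed": 0}] * 5 +
    [{"gender": "f", "placed": 1}] * 25 + [{"gender": "f", "placed": 0}] * 25
)
overall = placement_rate(rows)                 # 0.70 overall
by_gender = rate_by_moderator(rows, "gender")  # {"m": 0.90, "f": 0.50}
```

As the limitations above note, the split only reveals the discrepancy; correcting for it still requires a sampling or reweighting intervention.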
The Biggest Challenge of Data-driven Investing: Mirroring the Past Into the Future
Beyond the data biases mentioned above, one of the biggest concerns with purely data-driven investment selection is the question of how to deal with changing success patterns. Let’s imagine the following (a brief summary of my paper here):
You procure startup data from established providers like Crunchbase, Pitchbook, and Dealroom —> big problem is that they tend to update and over-write features; historic values get deleted which makes it difficult to reconstruct the full history of a company
You scrape additional data from LinkedIn, ProductHunt, GitHub, and diverse public registers —> important to repeat at consistent intervals, such as every week or month, to keep track of feature development over time
You merge all datasets and remove duplicates to obtain a single source of truth with comprehensive coverage of companies and maximum detail on time-series features
You encode your features (think for example One-Hot Encoding) and take a snapshot of all independent variables as of t1 (think 1st Jan 2015)
You classify the sample into success (for example IPO and M&A above 500m) and failure (all other cases) at a later point in time to represent required outcomes from an early-stage investor’s view; this is your dependent variable as of t2 (think 31st Dec 2020)
You train a classification model to identify patterns across the independent variables as of t1 (1st Jan 2015) that predict the success of the dependent variable as of t2 (31st Dec 2020); these are your success patterns
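The snapshot pipeline above can be sketched as a toy example. The companies and features are hypothetical, and a simple frequency-lift score stands in for a trained classifier: features are frozen as of t1, outcomes are labeled as of t2, and the extracted “success patterns” are just whichever features appeared more often among the successes of that particular window.

```python
# Toy snapshot: set-valued (one-hot-style) features as of t1, outcome as of t2.
companies = [
    {"features": {"saas", "us"},     "success": True},
    {"features": {"saas", "eu"},     "success": True},
    {"features": {"hardware", "us"}, "success": False},
    {"features": {"hardware", "eu"}, "success": False},
    {"features": {"saas", "us"},     "success": False},
]

def feature_lift(rows):
    """How much more often each feature appears among successes vs failures."""
    pos = [r for r in rows if r["success"]]
    neg = [r for r in rows if not r["success"]]
    feats = set().union(*(r["features"] for r in rows))
    def freq(group, f):
        return sum(f in r["features"] for r in group) / len(group)
    return {f: freq(pos, f) - freq(neg, f) for f in sorted(feats)}

patterns = feature_lift(companies)
# "saas" scores positive and "hardware" negative: the extracted pattern
# simply mirrors what happened to succeed between t1 and t2.
```

Shift the window and the scores shift with it, which is precisely the mirroring-the-past limitation: a feature that never co-occurred with success inside the training window can never score well, no matter how promising it is.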
Shifting t1 and t2 equally across time reveals that success patterns change, even when keeping the majority of features constant. This can be explained by the fact that business models and industries evolve, requiring new approaches to become successful. Said differently, what got us here won’t get us there.
In VC land, this means that sourcing and screening investment opportunities with algorithms that got trained on historical data bears the risk that novel, so far unseen business models or innovations might fall through the cracks.
Assume you’ve trained a model with input data / independent variables t1 = 2018 and output data / dependent variable t2 = 2022. Would the model know what a successful nuclear fusion company looks like? Of course not.
Why? Because there hasn’t yet been a successful nuclear fusion company. It will take years, potentially decades, to know what success for these kinds of companies looks like. Until then, we cannot rely purely on data-driven approaches to identify novel innovations; we need human intuition.
Augmented VC as the solution to all problems
While data-driven methods offer objectivity and counteract the cognitive biases inherent in human decision-making, humans possess the unique ability to rectify the limitations of these data-centric strategies. Not only can they prevent the mere replication of historical patterns, but they can also discern novel and previously unrecognized patterns.
In a nutshell, data-driven approaches excel at spotting established success patterns but struggle to identify novel innovations, whereas (some) humans excel at intuitively spotting previously unseen opportunities yet are prone to flawed cognitive biases formed from limited sample sizes. Combining the power and objectivity of computers with the intuition of humans therefore seems like the only way to improve efficiency, effectiveness, and inclusiveness 🤖🤝🤓
Stay driven,
Andre
With X, Musk is playing the bottoms-up game
Po-tee-weet?
Haje Jan Kamps @Haje / 9:30 AM PDT•October 19, 2023
Image Credits: Bryce Durbin / TechCrunch
Most marketers can’t dream so big as to imagine a world where the name of their product is mentioned on every newscast, written about on every flier, every advert, every sign and every email signature.
That’s the absurdly omnipresent power of the Twitter brand. At first, it seemed like Musk was playing a game of chicken with the site. But no. The gloves are off. The hawks are out. When X’s home icon changed from a birdhouse to a regular house, you knew things were getting real. Twitter isn’t dying: It’s dead.
Tweets are posts. Retweets are reposts. The cute little home icon? Burn it. Accessibility? Diversity? Eh. It doesn’t matter. That’s not who we’re building for over here. And in the process, the artist formerly known as Twitter is burning so much brand recognition it’ll make a junior advertising executive do unspeakable things.
None of these things makes sense if you are playing the wrong game.
If you think Musk is ruining Twitter, then yeah, he is. That’s because he’s not in it for Twitter. He’s in it for X, and the current lovers of the platform are collateral damage.
Founders love giving top-down takes. What’s the total addressable market, how can it be segmented in a way that makes money, and as the tide raises all boats, how hard can founders ride that CAC-to-LTV ratio mixed metaphor into the supernova of their IPO?
I’ve been doing this long enough that I know what works. The problem is that every category-defining company of our era didn’t have an addressable market: They created one.
Seen through that lens, after a bout of pneumonia, a couple shots of whiskey, and some squinting, perhaps X is starting to make a little bit of sense.
Ripping the personality out of X is what Elon has to do to make it fit his dream: to have been right all along. He wants to be right about PayPal, which ultimately failed to solve all the world’s banking problems. Then he got distracted by sending stuff into space, running a company literally into the ground (and was boring about it) and building an electric car or two.
Musk can be a raging sack of arrogance, and I dislike him for it, but he’s right about a few major things. In 1999, banking in the U.S. was a morass of misery, and globally, things were even worse. Some of the features of Scandinavian internet banks back then sounded like witchcraft to American ears. And today I still can’t transfer some money from my bank account to my neighbor’s without three panic attacks, two-thirds of a computer science degree, and some sort of digital intermediary.
Consumer banking was broken 25 years ago, and it still is today. It’s ludicrous that the banks couldn’t get together and solve this, so now we’re all stuck with Square, Stripe, Plaid, and the 10,000 PayPal/Venmo/Cash App clones. “Everyone is making money,” you might squeal short-sightedly. Yeah, but whose money? Why should it cost any money to move money from one place to another, ever? For consumers, that’s absurd. It means that by the time you give your dollar to someone, it becomes 97 cents just by virtue of having changed hands. If we were to send that one dollar back and forth enough times, it would disappear.
If we haven’t been able to fix banking in 25 years, we may never be able to. And so the only way to “fix” that system is to supplant it.
In that context, everything that’s happened starts to make sense. Musk isn’t trying to build a bank; he’s trying to replace the entire banking system altogether.
Sequoia Capital Partner, Other Investors Boycott Web Summit Following CEO’s Israel Comments
By Natasha Mascarenhas, Oct. 16, 2023 2:41 PM PDT ·
High-profile venture capitalists, including Y Combinator’s Garry Tan and Sequoia Capital partner Ravi Gupta, are withdrawing from a major gathering of tech leaders after conference CEO Paddy Cosgrave made comments that appear to refer to Israel’s retaliatory strikes in Hamas-controlled Gaza as “war crimes.”
At least five speakers said they would no longer participate at the November Web Summit conference, which draws tens of thousands of founders and executives to Lisbon every year. Those announcing their withdrawal include Ori Goshen, co-founder of AI21, an Israel-based rival to OpenAI, and Keith Peiris, co-founder of AI startup Tome. The boycott exposes the widening fallout in the business world from terrorist group Hamas’ Oct. 7 attacks in southern Israel, which have forced hundreds of Israeli tech workers to the front lines and prompted Wall Street financiers to publicly criticize academic institutions they back for their positions on the war.
THE TAKEAWAY
•Sequoia’s Gupta, YC’s Tan cancel speaking engagements
•Venture capitalists question Web Summit’s Qatar ties
•Web Summit’s Cosgrave had tweeted about Israel’s retaliation
Cosgrave on Friday expressed shock at the “rhetoric and actions of so many Western leaders & governments,” as Israel prepared to launch a ground invasion of Gaza. “War crimes are war crimes even when committed by allies, and should be called out for what they are,” he wrote on X. Cosgrave also liked posts on X, the app formerly known as Twitter, that referred to the terrorist group’s killing of Israelis a week ago as “self-defence” and that said Israel was committing genocide against Palestinians. Later Monday, Cosgrave removed the liked posts.
Backlash to his comments was swift. On Monday, Y Combinator CEO Tan said on X that he was canceling his appearance at the Web Summit. Gil Dibner, founder of Angular Ventures, a VC firm based in Tel Aviv and London, also canceled his speaking commitment.
Cosgrave also is facing questions about his company’s association with Qatar, a U.S. military ally.
19% of venture rounds are down rounds in 2023.
Peter Walker, Head of Insights @ Carta | Data Storyteller
That’s up from only 5% down rounds in 2021.
And these down rounds are seeing companies take significant haircuts vs the prior valuation (anywhere from 30-60% drops).
Nobody’s idea of a good time.
Of course, company valuations in the public market decline all the time so I believe the stigma of a down round in private tech is a bit unwarranted…but I digress.
First off – just how common are down rounds today?
If you focus on the black bars in the chart below, they illustrate the percentage of all rounds in a given stage and year that were down rounds.
For Series A, the percentage jumps from 8% to 16% year over year.
Series B: 8% –> 14%
Series C: 8% –> 19%
Series D: 13% –> 31%
Series E: 15% –> 24%
Youch.
But the frequency of these down rounds isn’t the only metric we care about. We’re also interested in how much the valuation declines for each company. So now focus on the salmon-colored bars.
The median post-money valuation decline in a down round for a Series A company is -34% this year.
The median decline in a Series E down round? -56%.
Obviously you’d expect the declines to increase as a company gets later stage (since they are closer to the public markets, which have not been kind to tech companies over the past year, and some recent IPOs have failed to take off) but damn. A 56% haircut is no joke.
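To make those medians concrete, here is a back-of-the-envelope sketch of what a haircut of that size means in dollar terms. The $500M starting valuation is hypothetical, not a Carta figure:

```python
# Illustrative sketch of what a median down-round haircut means in dollars.
# The starting valuation below is hypothetical, not drawn from Carta's data.

def post_down_round(prior_post_money: float, decline_pct: float) -> float:
    """Post-money valuation after a down round with the given % decline."""
    return prior_post_money * (1 - decline_pct / 100)

# A hypothetical Series E company last valued at $500M taking the median -56% haircut:
print(f"${post_down_round(500_000_000, 56):,.0f}")  # $220,000,000
```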
These are difficult moments in a company’s lifecycle. They require a ton of explaining – to new investors, to current investors, to the wider employee base. But there are many examples of startups achieving major exits after undertaking a down round – so hang in there 🙏
Social Internet Is Dead. Get Over It.
OCTOBER 15, 2023, Om Malik
The social-media Web as we knew it, a place where we consumed the posts of our fellow-humans and posted in return, appears to be over.…In large part, this is because a handful of giant social networks have taken over the open space of the Internet, centralizing and homogenizing our experiences through their own opaque and shifting content-sorting systems.
Algorithms optimized for engagement shape what we see on social media and can goad us into participation by showing us things that are likely to provoke strong emotional responses. But although we know that all of this is happening in aggregate, it’s hard to know specifically how large technology companies exert their influence over our lives.
The moment exposes the tension between how social networks wish people used their services and the reality … Asking users to unlearn the habit of relying on social media will take time and may not work at all.
I read these three articles and was reminded of something I have known for a while, though I had not synthesized it succinctly enough: the internet, as we have known it, has evolved from a quaint, quirky place to a social utopia, and then to an algorithmic reality. In this reality, the primary task of these platforms is not about idealism or even entertainment — it is about extracting as much revenue as possible from human vanity, avarice, and narcissism.
Frankly, none of this should be a surprise. Most of the social algorithms have been specifically designed and optimized to do just that. The Social Internet began as a place to forge “friendships” and engage in “social interactions.” It performed its role as intended until companies needed to generate profit. By then, we were all hooked on the likes, hearts, retweets, and followers and the boost they gave to our egos.
Looking back, the very idea that socially inept and maladjusted founders would define online social norms feels almost laughable. The notion of having 5,000 people as “friends” was as preposterous then as it is now. We were naive in our thinking, and happy to replace real-life friendships with an unlimited number of online friends. After all, digital friends are superior to real ones, right? In my column for Business 2.0 magazine, I wrote:
This new startup might seem like the bastard child of EdTV and Blogger, the latest in tech-enhanced West Coast narcissism. But it actually points the way to a future where we use technology to stay in close touch with our friends and families around the world. Companies that take advantage of this trend are poised to capture more than just our attention.
Whether in Parisian cafes, Bombay chai stalls, or Manhattan singles’ bars, humans have an overwhelming need to get together, talk, communicate, and interact. Our genes are coded that way. It’s no surprise that as we rush toward an always-on, ever more connected society, we want to mimic these offline interactions on the Net.
Back then, the internet was still seen as a utopian ideal — not a massive marketing machine. Our friendly chats and discussions weren’t enough for platforms to draw the advertising revenue required for giants like Facebook and Twitter to keep growing. However, sharing news and media links became an effective way for social platforms to keep users engaged. Discussing the latest news was often more straightforward than initiating a genuine online conversation. Hence, the social internet morphed into “social media.”
It was evident where this was all headed…. Lots More
Social media is history — just ask the kids!
OCTOBER 19, 2023, Om Malik
One of the biggest problems with old people is that they forget that they were young once, or that they were easily bored and did things that annoyed their parents, or that they had their own creative ways to deal with boredom and wasting time. And that is why I am a bit sanguine about how the young and the restless use the Internet and social media.
What triggered this chain reaction of thoughts were the results of a recent survey of 1,500 adolescents by The Gallup Group, which pointed out that, on average, they spend 4.8 hours on social media. But not all social media is created equal.
When you parse the numbers, these teenagers spend 39.6 percent of their time on YouTube and 31 percent on TikTok. They spend about 18 percent of their time on Instagram. However, when it comes to Facebook and Twitter, the numbers decline sharply — 6.25 percent and 4.2 percent.
Yes, teenagers are spending a lot more time on social media — just not as much on what we old-timers call social media. The big takeaway from this survey is that both Facebook and Twitter have aged out. They are as relevant to this generation as Fox and CBS are to the youth.
More importantly, both YouTube and TikTok have one thing in common: they are very visual mediums. Those numbers tell me that the teens are watching “videos” on these networks, not “social networking” in a classic sense. It is easy to conclude that young people are “social networking” when in reality they are using the Internet through two portals: YouTube and TikTok.
“In our studies, something like almost 40 percent of young people, when they’re looking for a place for lunch, they don’t go to Google Maps or Search. They go to TikTok or Instagram,” Prabhakar Raghavan, a Google executive, said at Fortune’s Brainstorm 2022 event. TikTok “is becoming a one-stop shop for content in a way that it wasn’t in its earlier days,” Lee Rainie, of Pew Research Center, told the New York Times.
YouTube is more than just a social networking site. It serves as a source for music videos and streams, a learning hub, a helper for homework, and a platform to pass the time. Much like TikTok, YouTube is where the new internet generation seeks the information they need in a format tailored to them. It’s not social networking in the classic sense that an older generation envisions. Perhaps this is just how the youth use the internet nowadays.
No one should be surprised…More
China Chips and Moore’s Law
Stratechery – Posted on Wednesday, October 18, 2023
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.
— Gordon Moore, Cramming More Components Onto Integrated Circuits
Moore’s law is dead.
– Jensen Huang
On Tuesday the Biden administration tightened export controls for advanced AI chips being sold to China; the primary target was Nvidia’s H800 and A800 chips, which were specifically designed to skirt controls put in place last year. The primary difference between the H800/A800 and H100/A100 is the bandwidth of their interconnects: the A100 had 600 GB/s interconnects (the H100 has 900 GB/s), which just so happened to be the limit prescribed by last year’s export controls; the A800 and H800 were limited to 400 GB/s interconnects.
The reason why interconnect speed matters is tied up with Nvidia CEO Jensen Huang’s thesis that Moore’s Law is dead. Moore’s Law, as originally stated in 1965, held that the number of transistors in an integrated circuit would double every year. Moore revised his prediction 10 years later to a doubling every two years, which held until the last decade or so, when it has slowed to a doubling about every three years.
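The gap between those doubling cadences compounds quickly. A toy sketch, using an arbitrary starting count of 1,000 transistors purely for illustration:

```python
# Toy illustration of how the doubling period changes transistor counts.
# The starting count of 1,000 is arbitrary; only the ratios matter.

def transistors(start_count: float, years_elapsed: float, doubling_period: float) -> float:
    """Exponential growth with one doubling every `doubling_period` years."""
    return start_count * 2 ** (years_elapsed / doubling_period)

# Over a decade: doubling every year vs. every two vs. every three years.
for period in (1, 2, 3):
    print(f"doubling every {period} yr(s): {transistors(1_000, 10, period):,.0f}")
```

Over ten years, an annual doubling yields 1,024× growth, while a three-year cadence yields only about 10×, which is why the slowdown matters so much.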
In practice, though, Moore’s Law has become something more akin to a fundamental precept underlying the tech industry: computing power will both increase and get cheaper over time. This precept — which I will call Moore’s Precept, for clarity — is connected to Moore’s technical prediction: smaller transistors can switch faster, and use less energy in the switching, even as more of them fit on a single wafer; this means that you can either get more chips per wafer or larger chips, either decreasing price or increasing power for the same price. In practice we got both.
What is critical is that the rest of the tech industry didn’t need to understand the technical or economic details of Moore’s Law: for 60 years it has been safe to simply assume that computers would get faster, which meant the optimal approach was always to build for the cutting edge or just beyond, and trust that processor speed would catch up to your use case. From an analyst perspective, it is Moore’s Precept that enables me to write speculative articles like AI, Hardware, and Virtual Reality: it is enough to see that a use case is possible, if not yet optimal; Moore’s Precept will provide the optimization.
The End of Moore’s Precept?
This distinction between Moore’s Law and Moore’s Precept is the key to understanding Nvidia CEO Jensen Huang’s repeated declarations that Moore’s Law is dead. From a technical perspective, it has certainly slowed, but density continues to increase; here is TSMC’s transistor density by node size, using the first (i.e. worse) iteration of each node size:
Remember, though, that cost matters; here is the same table with TSMC’s introductory price/wafer, and what that translates to in terms of price/billion transistors:
Notice that number on the bottom right: with TSMC’s 5 nm process the price per transistor increased — and it increased a lot (20%). The reason was obvious: 5 nm was the first process that required ASML’s extreme ultraviolet (EUV) lithography, and EUV machines were hugely expensive — around $150 million each.
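The arithmetic behind a price-per-transistor table is straightforward: divide the wafer price by the transistors that fit on it. A sketch with placeholder numbers (these are illustrative, not TSMC's actual wafer prices or densities):

```python
# Sketch of the price-per-transistor arithmetic. Wafer prices and densities
# below are hypothetical placeholders, not TSMC's actual figures.

import math

WAFER_DIAMETER_MM = 300  # standard wafer size

def price_per_billion_transistors(wafer_price_usd: float, density_mtr_per_mm2: float) -> float:
    """Wafer price divided by total transistors on the wafer (in billions)."""
    wafer_area_mm2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    total_transistors_bn = density_mtr_per_mm2 * wafer_area_mm2 / 1e3
    return wafer_price_usd / total_transistors_bn

# Hypothetical example: a denser node whose wafer cost rises faster than its
# density improves shows a *higher* price per transistor, as with 5 nm.
older = price_per_billion_transistors(wafer_price_usd=10_000, density_mtr_per_mm2=90)
newer = price_per_billion_transistors(wafer_price_usd=17_000, density_mtr_per_mm2=130)
print(f"older node: ${older:.3f}/Btr, newer node: ${newer:.3f}/Btr")
```

The point of the sketch: density gains only lower cost per transistor if wafer prices grow more slowly than density, and EUV pushed wafer prices up fast enough to break that pattern.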
In other words, it appeared that while the technical definition of Moore’s Law would continue, the precept that chips would always get both faster and cheaper would not.
GPUs and Embarrassing Parallelism
Huang’s argument, to be clear, does not simply rest on the cost of 5 nm chips; remember Moore’s Precept is about speed as well as cost, and the truth is that a lot of those density gains have primarily gone towards power efficiency as energy became a constraint in everything from mobile to PCs to data centers. Huang’s thesis for several years now is that Nvidia has the solution to making computing faster: use GPUs.
GPUs are much less complex than CPUs; that means they can execute instructions much more quickly, but those instructions have to be much simpler. At the same time, you can run a lot of them at the same time to achieve outsized results. Graphics is, unsurprisingly, the most obvious example: every “shader” — the primary processing component of a GPU — calculates what will be displayed on a single portion of the screen; the size of the portion is a function of how many shaders you have available. If you have 1,024 shaders, each shader draws 1/1,024 of the screen. Ergo, if you have 2,048 shaders, you can draw the screen twice as fast. Graphics performance is “embarrassingly parallel”, which is to say it scales with the number of processors you apply to the problem.
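The scaling claim above can be shown with a toy model: because every pixel is independent, per-shader workload falls linearly with shader count, so doubling the shaders halves the time to draw the screen. A minimal sketch, not real GPU code:

```python
# Toy model of embarrassingly parallel rendering: each shader draws an
# independent slice of the screen, so per-shader work is total / count.

def pixels_per_shader(total_pixels: int, num_shaders: int) -> float:
    return total_pixels / num_shaders

screen = 1920 * 1080  # total pixels on a 1080p display
work_1024 = pixels_per_shader(screen, 1024)
work_2048 = pixels_per_shader(screen, 2048)

# Doubling the shaders halves the work per shader, i.e. the screen draws twice as fast.
assert work_1024 == 2 * work_2048
print(work_1024, work_2048)
```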
This “embarrassing parallelism” is the key to GPUs’ outsized performance relative to CPUs, but the challenge is that not all software problems are easily parallelizable; Nvidia’s CUDA ecosystem is predicated on providing the tools to build software applications that can leverage GPU parallelism, and is one of the major moats undergirding Nvidia’s dominance, but most software applications still need the complexity of CPUs to run.
AI, though, is not most software. It turns out that AI, both in terms of training models and in leveraging them (i.e. inference) is an embarrassingly parallel application. Moreover, the optimum amount of scalability goes far beyond a computer monitor displaying graphics; this is why Nvidia AI chips feature the high-speed interconnects referenced by the chip ban: AI applications run across multiple AI chips at the same time, but the key to making sure those GPUs are busy is feeding them with data, and that requires those high speed interconnects.
That noted, I’m skeptical about the wholesale shift of traditional data center applications to GPUs; from Nvidia On the Mountaintop:
Humans — and companies — are lazy, and not only are CPU-based applications easier to develop, they are also mostly already built. I have a hard time seeing what companies are going to go through the time and effort to port things that already run on CPUs to GPUs; at the end of the day, the applications that run in a cloud are determined by customers who provide the demand for cloud resources, not cloud providers looking to optimize FLOP/rack.
There’s another reason to think that traditional CPUs still have some life in them as well: it turns out that Moore’s Precept may be back on track.
EUV and Moore’s Precept
The table I posted above only ran through 5 nm; the iPhone 15 Pro, though, has an N3 chip, and check out the price/transistor:
While I only included the first version of each node previously, the N3B process, which is used for the iPhone’s A17 Pro chip, is a dead-end; TSMC changed its approach with the N3E, which will be the basis of the N3 family going forward. It also makes the N3 leap even more impressive in terms of price/transistor: while N3B undid the 5 nm backslide, N3E is a marked improvement over 7 nm.
Video of the Week
AI of the Week
AI Colleges.
The AI Guy for Higher Ed / I help colleges create human-centered, AI-enhanced learning experiences / Keynote Speaker / AI Strategist and Coach
The biggest threat to traditional colleges isn’t Coursera, Youtube, or Google Certificates.
It’s AI Colleges.
These colleges won’t just use AI.
They’ll bake AI into their institutional DNA.
They will:
➢ Streamline their admissions process through AI
➢ Train faculty how to use AI
➢ Train students how to use AI
➢ Partner with companies to prepare students for an AI World
➢ Create their own knowledge base for AI training
➢ Adapt quickly
➢ Have an AI Operator running the show
These colleges will have an advantage because they’ll be created for the Age of AI.
They won’t have to contend with hundreds of years of inertia.
They’ll use AI to open up space for human-to-human interactions between faculty and students.
They’ll operate at a fraction of the cost.
It’s not a question of if these colleges emerge.
It’s a matter of when.
Geoffrey Hinton realised mankind was history when AI got the joke
Cassandra: Professor Geoffrey Hinton has repeatedly sounded the alarm about the existential threat posed by AI
by Tim Adler, 19 October 2023
So-called ‘godfather of AI’ Geoffrey Hinton says he realised that humanity had created its own successor when Google’s PaLM explained why a joke was funny
Geoffrey Hinton, who has been called “the godfather of AI”, realised that mankind’s days were numbered when artificial intelligence could explain a joke.
Professor Hinton, who created waves when he quit his job at Google partly in order to sound the alarm over the existential threat AI poses to humanity, said: “PaLM [Google’s AI system] could actually explain why a joke was funny.”
Another Damascene moment came when Hinton realised how good neural networks are at sharing information with each other compared to humans. Humans, who share information at the sentence level, could be thought of as swapping bytes of information between each other compared to AI, which can instantly share gigabytes.
“That was the moment I realised that we were history,” concluded Hinton. “I’m pessimistic because pessimists are usually right.”
Professor Hinton was in conversation with Doctor Fei-Fei Li, professor of computer science at Stanford University, at a seminar organised by the University of Toronto, where Professor Hinton teaches.
Li, who is also a vice-president of Google as chief scientist of AIML at Google Cloud, said that she first became anxious about AI in 2018 in the wake of the Facebook-Cambridge Analytica scandal, in which a tech company politically targeted Facebook users with campaign-ad messages.
Li said: “This is the moment that we really have to rise to the moment, not only as our passion as a technologist but also our responsibility as a humanist.”
Hinton confessed to an Eeyore-ish gloominess about humanity’s future now that AI, he believes, has understanding and intelligence and has passed the famous Turing Test, being indistinguishable from humanity.
Li though had a more upbeat view of the future, dependent on humanity acting now.
“I believe in humanity and the collective will,” said Li. “If we do the right thing, we have a fighting chance of doing things better. It’s fragile but if we all recognise the same thing, then there’s hope.”
Hinton said that people talk about the risks of AI but there is actually a plethora of risks, including the risk of creating an underclass of people who will never have a job.
Hinton said: “The rich people are going to get richer, and the poor people are going to get poorer, and even if we have universal basic income, that’s not going to solve the problem of human dignity. Many people have a job because they feel they’re doing something worthwhile.”
Other risks posed by AI include imminent threats such as fake news being used in next year’s US election, and every country developing its own warfare robots.
“But the existential risk is that humanity gets wiped out because we have invented a better form of intelligence that’s smarter than us,” he warned.
“If things that are more intelligent than us want to take control then we won’t be able to stop them,” Hinton continued. “We have to stop them wanting control.”
The irony, said Hinton, is that mankind has discovered the secret to immortality, only it’s not our everlasting life which we have created.
The best we can hope for, Hinton added, is that AI keeps humans around as a kind of amusing toy, the way in which Greek gods supposedly meddled with human affairs just for fun.
Li however was more positive than Hinton, calling his Cassandra-like warnings about extinction “an interesting thought experiment”, but the reality was more nuanced than that.
“I don’t want to paint the picture of robots creating our machine overlords,” she said.
Li believes that, rather than create permanent mass unemployment, AI could help us move away from a labour economy to what she calls a dignity economy, where people do work which is meaningful instead of just chasing a pay check.
The one silver lining though, according to Hinton, is that artificial intelligence is still terrible at telling jokes, especially when it comes to landing a punchline “but we’ll get there. Being a comedian is not a job for the future”.
SAM ALTMAN WARNS THAT AI IS GONNA DESTROY A LOT OF PEOPLE’S JOBS
“I’M NOT AFRAID OF THAT AT ALL. IN FACT, I THINK THAT’S GOOD.”
Rinse Cycle
OpenAI CEO Sam Altman, whose company makes that one chatbot your boss has probably considered replacing you with, warns that the advancement of AI means that a lot of people are going to lose their jobs. Spoiler: he doesn’t sound like he’s going to do anything about it.
Viewing the shift through an historical lens, Altman thinks job loss is an inevitable casualty of any “technological revolution.” Every 100 to 150 years, Altman said at The Wall Street Journal Tech Live conference on Monday, half of people’s jobs end up getting phased out.
“I’m not afraid of that at all,” he said, as quoted by the paper. “In fact, I think that’s good. I think that’s the way of progress, and we’ll find new and better jobs.”
Long Jobs
Telling people they’re going to be out of a job in the name of progress is not an easy sell. Or, as Altman puts it, “that’s not, uh, that’s not cool.”
Which is an interesting thing for him to admit, since framing job loss as some sort of greater good is basically his whole narrative. In the past, he’s never sounded that upset about AI replacing the “median human,” either.
Still, he did reiterate that “we are really going to have to do something about this transition” — though what that “something” will actually be remains unclear.
“It is not enough just to give people a universal basic income,” Altman said. “People need to have agency, the ability to influence. We need to jointly be architects of the future.”
It may well be that being joint architects pretty much means “getting billions of people to use ChatGPT.”
Back and Forth
If Altman really thinks we should do “something” about the transition to AI, he hasn’t been all that keen on attempts so far to regulate it.
Although in May he did call on Congress to be tougher on the industry, later that month he threatened to pull OpenAI out of the European Union when it actually did so.
These kinds of contradictions are fairly characteristic of Altman, who has rarely passed up the chance to both gloom and gloat about the tech’s future.
At times he’s “afraid” of what he’s making and loses sleep over it. But he’s also pretty confident that eventually, poor, “median” folks should be able to see an “AI medical advisor” instead of an actual doctor.
But he’s consistently right about at least one thing: people are already losing their jobs to AI.
The Dream AI Hardware
OCT 13, 2023
I wrote a book on AI development, exploring how it works, its history, and its future. Order it here, ideally in triplicate!
My #1 KPI in life is serendipity. In fact, I try to optimize my life for serendipity.
That’s why I pre-ordered Rewind’s Pendant this week, a piece of AI hardware you wear around your neck which transcribes all your conversations.
Some people think that wearing one of these AI devices is like wearing a wire and being a Fed, but I say you’re optimizing for serendipity. Let me explain.
There is nowhere filled with more serendipity than my home of New York City.
In the span of two hours, you can run into a high school friend you haven’t seen in a decade, see a billboard with copywriting so clever you’d think it was written by Ricky Gervais, and have a wonderful conversation about everything from artificial intelligence to pizza while walking around Central Park. There are so many opportunities for serendipity that it’s hard to keep track.
The recently released AI hardware—Rewind’s Pendant, Avi Schiffman‘s Tab, and Humane’s AI pin—promise to help you keep track of that serendipity. As a writer who makes his living on the internet, here’s how my purchasing thought process went: the more serendipity, the more ideas—the more ideas, the more publishing—the more publishing, the more views—the more views, the more money. $59 for more ideas? That’s a no-brainer investment. I already pay $12.99/month for Readwise to remember what I read and $10/month for Otter AI to transcribe my voice memos. What’s another $59? For a device to remember all my conversations? Am I being the midwit in this meme right now? Perhaps, but I don’t think so. I think I’m just a normal dude trying to capture the serendipity of life.
It’s my hope that whenever I have a cool conversation with a friend, I’ll be able to rewind word-for-word whatever was said. In writing, it’s a common idea you should write how you talk; so if I could get a recording of how I said an idea in a conversation to a friend, I can just slap that into a blog or a tweet. Or I can quote a friend in an article if they’re cool with it. Or maybe I’ll use it for those networking events where you meet 50 people with similar sounding jobs so I can remember who the hell each person is. A man can dream.
And dream I will—Here is my dream user experience for AI hardware. Rewind’s founder Dan Siroker and I follow each other on Twitter, so it’s my hope he sees this and is like “wow this is a good idea, let me implement it immediately (and pay Jason and Rohit in stock shares!)”
Dream UX of AI Hardware:
Exact Transcript: I want the ability to get an exact time-stamped transcript of the day’s conversations. This is where Avi’s Tab and the Rewind Pendant differ. Tab uses AI to summarize what you’re talking about but has no transcripts, whereas Rewind’s will have full transcripts. I want the full transcripts because ideas are often misinterpreted by an ultra-literal AI (think metaphors, nicknames, slang, etc.). Give me the exact transcripts or give me nothing.
Searchability: Ability to search through transcript with a simple command-F query and copy-paste text easily. I can imagine myself thinking “I remember we talked about X → search X”. I’d also love if the device could capture locations so I can do something like “I had a good conversation in Central Park but I forget the topic → search Central Park”.
Design: Both Tab and Rewind are going with the necklace route. I’m a fan of it. I think it looks slick and you forget about it quickly (if it’s lightweight). The recording/not-recording signal needs to be obvious like Snapchat’s Ray-Ban glasses which glow red when on—otherwise anyone wearing it turns into a Fed and we are officially bringing back sauna meetings.
Daily Wrap-Up Emails: Give me the day’s summary of my transcript with the option to view the whole thing in-browser or in-app. This is the Readwise daily email model.
AI-Suggestions: OK this would make it an absolute 11 out of 10 but what if AI hardware could give you further things to research based on your conversations? So let’s say I talked about JFK for 10 minutes in a conversation with a friend, the AI knows, and suggests books and documentaries for me to look into. Maybe there can even be a Creator Option where the AI suggests content ideas based on conversations. Imagine a notification “Turn this into a tweet”.
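The "searchability" wish above is easy to picture in code. Here is a minimal sketch of a time-stamped, location-tagged transcript you can query; all names and data are hypothetical, and this is not Rewind's or Tab's actual API:

```python
# Minimal sketch of searching a time-stamped, location-tagged transcript.
# Hypothetical data model; not the actual Rewind Pendant or Tab API.

from dataclasses import dataclass

@dataclass
class Utterance:
    timestamp: str   # e.g. "14:32"
    location: str    # e.g. "Central Park"
    text: str

transcript = [
    Utterance("14:32", "Central Park", "JFK's moon speech was really a budget speech."),
    Utterance("18:05", "Pizza place", "Thin crust beats deep dish, fight me."),
]

def search(entries: list[Utterance], query: str) -> list[Utterance]:
    """Case-insensitive match against either the text or the location tag."""
    q = query.lower()
    return [e for e in entries if q in e.text.lower() or q in e.location.lower()]

# "I had a good conversation in Central Park but I forget the topic":
for hit in search(transcript, "central park"):
    print(hit.timestamp, hit.location, hit.text)
```

The location field is what makes the "search Central Park" flow work: a plain text grep covers "I remember we talked about X," and the tag covers "I remember where we talked."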
News Of the Week
Telegram CEO, a criticised but cited source of Hamas videos, says app will continue to host ‘war-related content’
Ingrid Lunden @ingridlunden / 9:05 AM PDT•October 13, 2023
As social platforms like X (formerly Twitter), Meta and TikTok face off with regulators and the theater of public opinion over how they are handling incendiary and graphic content, disinformation, writing and other media related to Hamas and Israel, Pavel Durov, the CEO of Telegram, has controversially come out to defend his messaging app’s refusal to take down some of the more sensitive war-related coverage that can be found there, claiming that it can prove to be an important channel for information.
He also went on to distinguish it from social media, since users only see content to which they subscribe. (Yes, that does not take into account how content posted on Telegram gets shared.)
In his Telegram post today, Durov — borrowing some of the more “high-level” language that other social media executives have used — said that “Telegram’s moderators and AI tools remove millions of obviously harmful content from our public platform,” but he also swiftly moved on to defending the app continuing to allow sensitive content under the category of “war-related coverage.”
“Tackling war-related coverage is seldom obvious.” (He does not define what the line is between “obviously harmful” and “war-related coverage.”)
“While it would be easy for us to destroy this source of information, doing so risks exacerbating an already dire situation,” he continued, citing how, he said, Hamas used Telegram to warn civilians in Ashkelon to leave the area ahead of missile strikes. “Would shutting down their channel help save lives — or would it endanger more lives?” he asked in his post today.
The FCC is Expected to Propose the Return of Net Neutrality Protections Oct 19th – Let’s Hope They Get it Right!
OCTOBER 13, 2023
Network neutrality is the idea that internet service providers (ISPs) should treat all data that travels over their networks fairly, without discrimination in favor of particular apps, sites or services. It is a principle that must be upheld to protect the open internet. The idea that ISPs could prevent access to certain sites or slow down speeds for certain users isn’t just horrendous; it’s vastly unpopular. When ISPs charge tolls or put up roadblocks, it comes at the expense of all segments of society, and undermines internet access as a right.
The FCC will meet on October 19th to vote on proposing Title II reclassification that would support accompanying net neutrality protections. Based on a draft version of the Notice of Proposed Rulemaking, the FCC will propose to reestablish the Commission’s authority to issue net neutrality rules for broadband internet access service by classifying it as a “telecommunications service” under Title II of the Communications Act of 1934. If the FCC issues the notice as expected on October 19th, the next steps would be a public comment phase followed by issuance of a final rule. This process could result in a final rule restoring net neutrality requirements around spring of 2024.
We’re glad that the FCC is finally taking steps to bring back net neutrality. Title II provides a clear avenue for the FCC to exercise authority to enact net neutrality rules that will stand up to a challenge in court. For years, the FCC incorrectly classified broadband access as an “information service,” and when it tried to impose even a weak version of net neutrality protections under that classification, they were struck down in court. We’ve had victories in the past on this issue thanks to the overwhelming support of millions of Americans, and we need the FCC to do the right thing now.
The classification of broadband as a Title II “telecommunications service” is not only correct as a factual matter and proven to be legally defensible, it also provides the FCC the tools it needs to issue narrow regulations that address the proven need for net neutrality rules, while forbearing from any provisions of Title II that might be unnecessary.
The internet should live up to its history of fostering innovation, creativity, and freedom. When ISPs act as gatekeepers, making special deals with a few companies or privileging their own services, we all lose. Hopefully, on October 19th the FCC will show all Americans that it knows how the internet works, and will put people over ISPs once and for all.
At The Sam Bankman-Fried Trial, It’s Clear FTX’s Collapse Was No Accident
SBF’s top lieutenant and ex-girlfriend testified that criminality, not carelessness, destroyed billions in value. So why is a different narrative circulating?
OCT 13, 2023
I’m just back at my desk after spending two days in court at the Sam Bankman-Fried trial. The downtown Manhattan courthouse is just a few stops away on the subway, and I figured I’d stop by to see Caroline Ellison, Alameda Research’s ex-CEO and Bankman-Fried’s ex-girlfriend, testify about his alleged crimes. It was more revelatory than expected.
Bankman-Fried’s empire — which spanned FTX and Alameda — lost billions of customer money without dispute, but there’s a narrative that his simple carelessness might’ve been at fault. It was a whoopsie, you could say, from a guy with good intentions. High-profile writers including Michael Lewis seemed to buy the theory, and Bankman-Fried’s lawyers played off it. Sam was a “math nerd who didn’t drink or party,” his legal team said. And well, “some things got overlooked.”
Shocked might not be the right word, but I was astonished to see the delta between that narrative and the reality in court. Speaking clearly, and with devastating precision, Ellison told jurors exactly how she and Bankman-Fried funneled FTX customer funds into Alameda Research, then spent the money even after FTX customers weren’t likely to get it back.
The crimes were no mistake. With everything laid out on spreadsheets, Ellison revealed how she periodically updated Bankman-Fried on how much money Alameda had taken from FTX customers, using the cryptic “FTX Borrows” as the line item. She admitted labeling it such in case the company landed in legal trouble. And even as Alameda drew more than $10 billion from FTX, more than FTX’s total remaining assets, Ellison said Bankman-Fried directed her to pay back loans to crypto lenders, like Genesis, leaving FTX depositors out to dry.
Here comes another Netflix price hike
Subscribers to Netflix’s Basic and Premium plans will be paying more, with prices rising to $11.99 and $22.99 per month in the US.
By Emma Roth, a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.
Oct 18, 2023 at 1:12 PM PDT
Netflix is getting another price increase. As part of the streamer’s third quarter earnings results, Netflix announced that starting today, users on its $9.99 per month Basic plan will now have to pay $11.99, and those paying $19.99 per month for Premium will have to pay $22.99. Netflix’s $6.99 ad-supported plan and $15.49 Standard tier will stay the same price.
Netflix last raised its prices in January 2022, and in July it closed off access to the $9.99 ad-free Basic plan for new and rejoining users, forcing everyone else to pay more even if they only want to avoid ads.
Prices for the Basic and Premium plans in the UK and France are going up as well, with the ad-supported and Standard plans remaining unchanged. In the UK, the Basic and Premium plans will cost £7.99 and £17.99, respectively, while customers in France will see the Basic plan move up to 10.99€ and the Premium plan cost 19.99€.
“As we deliver more value to our members, we occasionally ask them to pay a bit more,” Netflix writes in its letter to shareholders. “Our starting price is extremely competitive with other streamers and at $6.99 per month in the US, for example, it’s much less than the average price of a single movie ticket.”
Earlier this month, The Wall Street Journal reported the streamer would raise the cost of its subscription a “few months” after the Hollywood actors strike ends, and now it’s happened even though the actors are still striking. Last month, the Writers Guild of America ended its strike after reaching a deal with services like Netflix to provide streaming data, higher minimum pay, and better residuals.
Over the past few months, Netflix says it added 8.76 million new subscribers, bringing the streamer’s global total to 247.15 million. In addition to its password-sharing crackdown “exceeding” expectations, Netflix also saw significant gains to its ad-supported plan, with membership increasing almost 70 percent quarter over quarter. Netflix says the cheapest tier now accounts for about 30 percent of all new sign-ups in the 12 countries where it’s offered.
And since removing the Basic plan in the US, UK, and Italy “boosted adoption” of Netflix’s ads and Standard plans, the company announced that it will be making the same change in Germany, Spain, Japan, Mexico, Australia, and Brazil next week. It’s also planning to roll out new features for its ad-supported tier, which will include a way to download content starting next month. Netflix added better resolution and the ability to watch two streams at once to its ad-supported tier earlier this year.
Netflix has a strong slate of content planned over the next month, with its first live sporting event, The Netflix Cup, airing on November 14th. Meanwhile, the service is also debuting its Squid Game reality show and Scott Pilgrim anime in November while also adding Across the Spider-Verse later this month.
The Biggest IPOs Of 2021 Have Shed 60% Of Their Value
October 18, 2023
In normal times, going public at a valuation of over $10 billion is both a rare occurrence and a very big deal.
In 2021, however, such enormous IPOs became rather common, even for money-losing startups.
More than 20 global companies made debuts on the Nasdaq and NYSE that year at initial valuations above $10 billion. The list included some of the buzziest names in tech and the app economy: ride-hailing platform Didi, EV maker Rivian, Southeast Asian “superapp” Grab, and crypto platform Coinbase.
Valuations reached nosebleed altitudes. Per Crunchbase data, the initial valuations of all companies that went public in 2021 on the two largest U.S. exchanges collectively exceeded a trillion dollars. Startups delivered some of the biggest debuts.
Since then, these buzzy names have also posted some of the largest post-peak declines. To illustrate, we used Crunchbase data to assemble a curated list of the largest offerings of 2021, comparing each company’s initial valuation to its present one:
Big decline
Overall, valuations of the selected largest startup offerings of 2021 are down 60% from their initial level. But within the group, post-debut performance has been quite mixed.
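That aggregate 60% figure is value-weighted, which is why a few big losers can dominate it even when individual performance is mixed. A minimal sketch of the calculation, using made-up valuations rather than Crunchbase data:

```python
# Hypothetical IPO valuations in $B — illustrative only, not Crunchbase figures.
ipos = [
    # (company, ipo_valuation_bn, current_valuation_bn)
    ("A", 70.0, 15.0),   # a large decliner dominates the total
    ("B", 40.0, 36.0),   # a mild, Nubank-like decline
    ("C", 12.0, 14.0),   # a Samsara-like gainer
]

initial_total = sum(initial for _, initial, _ in ipos)
current_total = sum(current for _, _, current in ipos)
decline_pct = (initial_total - current_total) / initial_total * 100

print(f"Aggregate decline: {decline_pct:.0f}%")  # → Aggregate decline: 47%
```

Even with one company up and one nearly flat, the single large decliner drags the weighted group figure down sharply.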
On the positive side, Samsara, an Internet of Things data platform provider, stands out as a star performer. Its shares are actually worth more today than they were when the company went public in December 2021.
Brazilian digital banking provider Nubank has also done comparatively well, being down only about 10% from its valuation post-offering.
Others are in much worse shape.
Startup of the Week
Pebble’s $100k+ EV travel trailer can live off the grid for 7 days
Kirsten Korosec @kirstenkorosec / 10:36 AM PDT•October 19, 2023
Image Credits: Pebble/screenshot
Pebble, the California-based EV startup founded by a veteran of Apple, Cruise and Zoox, unveiled Thursday a prototype of its flagship product: an all-electric travel trailer designed for off-grid-seeking digital nomads.
The so-called Pebble Flow, which was revealed virtually, is supposed to bring electrification, automation and the usability of an iPhone to the RV world. What that translates to is a 25-foot-long electric travel trailer that sleeps up to four people and comes standard with a 45 kWh lithium iron phosphate battery, 1 kW of rooftop solar, plug-and-play readiness for Starlink, AC and DC charging capability, and regenerative charging that sends electrons back into the battery while it’s being towed. It even has bidirectional charging to power appliances or your home, according to Pebble founder and CEO Bingrui Yang.
Yang emphasized the lightweight construction of the travel trailer, including the aluminum frame, which helps keep the gross vehicle weight to 6,200 pounds.
Image Credits: Pebble
The modern exterior includes wraparound windows that users can turn opaque for privacy, a kitchen with a 4-in-1 convection microwave, a full-size fridge, and a removable induction cooktop that can be brought outside for outdoor cooking. A flip-up window by the kitchen completes the look.