AI Can’t Be Stopped

A reminder for new readers. That Was The Week collects the best writing on key issues in tech, startups, and venture capital. I select the articles because they are interesting. The selections often include things I disagree with. The articles are only snippets. Click on the headline to go to the full original. My editorial and the weekly video are where I express my point of view.


Editorial: AI Can’t Be Stopped

Essays of the Week

Scale AI: Why Data Will Power the AI Revolution – Index Ventures

Generative AI: The Next Consumer Platform – Andreessen Horowitz

🧪 Using ChatGPT in the innovation process – Azeem Azhar

972 billion portfolios: How to design the optimal venture portfolio – Moonfire

Chance or deep observation: Our approach to thesis articulation – Moonfire

People always put their money in futures they predict – Rohit Krishnan

What will TV look like in three years? These industry insiders share their predictions – CNBC

Oops! How Google bombed, while doing pretty much exactly the same thing as Microsoft did, with similar results – Gary Marcus

Let Section 230 Stay – Karan Lala

Section 230 Has to Go – Bradley Tusk

Video of the Week

Marc Andreessen on how AI will revolutionize software, whether NFTs are useless, & whether he should be funding flying cars instead, a16z’s biggest vulnerabilities, the future of fusion, education, Twitter, venture, managerialism, & big tech

News of the Week

Lina Khan Chalks Up Another Defeat

Private Investment Diligence and Fraud Prevention: Will New Regulations Change the Game?

ChatGPT to Become Available in Microsoft Tools According to CEO Satya Nadella

ChatGPT Passes Another Test — Amazon’s Software Interview Questions

OpenAI’s ChatGPT Passes Wharton MBA Exam

Why did Google’s ChatGPT rival go wrong and are AI chatbots overhyped?

Sequoia reveals in filing how much is sitting in its Sequoia Capital Fund (and yes, it’s a lot)

Salesforce Ventures: Only 150 Private SaaS Companies Have Hit $100,000,000 in ARR

Beyond Deep Learning with Gary Marcus

The wisdom of juries at the Elon Musk trial

Startup of the Week


Tweet of the Week

Samir Kaji


AI Can’t Be Stopped. That much is clear. And AI is not perfect. That too is clear. But when you read (below) that ChatGPT passed Amazon’s software engineer interview questions and then passed a Wharton MBA exam, it becomes clear that it is already good for many things.

Azeem Azhar writes about his conversation focused on creating a board game to play with his children. Read it. Azeem used ChatGPT over several rounds of back and forth to fine-tune an exciting concept into a very good, novel, and challenging game. It learned from its failures, prompted by his questions and responses.

There is no doubt in my mind that this technology is already good enough to augment human endeavors. In some cases replace humans, freeing up time for other things.

Moonfire, a venture platform leveraging AI, has two posts this week showing how it ran almost a trillion simulated exercises to design an optimal portfolio of investments, aided by human investment theses.

At SignalRank we simulated ten years of Series B investments and produced 234 unicorns from just over 1,000 selections. SignalRank’s AI did the heavy lifting.

Andreessen Horowitz believes that consumers will use AI more and more. Index Ventures believes that data will be the power behind the AI revolution.

On the other hand, Google’s flawed effort to launch Bard – its ChatGPT competitor – was embarrassing and led to heavy stock losses. Gary Marcus discusses it below.

Where are we? We are very far from Kansas. This year will witness staggering technological advances, even without all of the apparent limitations being fixed. This is in many ways not AI yet. But it is data-driven, decision-capable, conversational, interactive, useful, and in some instances a game changer.

I’m all in on cheering its progress, and I will cover it as much as is interesting here.

Essays of the Week

Scale AI: Why Data Will Power the AI Revolution

by Index Ventures

Scale AI Founder & CEO Alex Wang

Scale AI emerged from a fairly small-stakes problem. Alexandr Wang, its 26-year-old founder and CEO, wanted to know when to restock his fridge.

Wang grew up in Los Alamos, New Mexico, the son of two physicists working at the national lab in town. That upbringing helped spark his interest in science and technology. In high school, he excelled in math, physics, and coding competitions, and worked for two tech companies. In 2015, he enrolled at MIT to study computer science, where he received perfect grades in his first year. But he had that grocery problem. He decided to code his way out of it.

Wang could see that artificial intelligence and machine learning were transforming the world. “First we built machines that could do arithmetic,” he says, “but the idea that you could have them do these more nuanced tasks that required what we view as humanlike understanding was this very exciting technological concept.” He looked around for a side project and decided to build a camera inside his fridge that would tell him if he was running low on milk.

After a few weeks, he realized there was no way he’d get enough data to train his system to properly quantify his fridge’s contents. A realization struck him. He envisioned AI over the next 20 years. “We’re going to build incredible things with AI,” he says. “What are the hurdles? It became really clear that data was going to be one of those.” That’s when he came up with Scale, what he calls “data infrastructure to power the AI revolution.”

Generative AI: The Next Consumer Platform

by Connie Chan and Justine Moore

We’ve entered the age of generative AI. The use cases are everywhere—from writing essays to creating comics to editing films—and adoption has outpaced every consumer tech trend of the past decade. Text generator ChatGPT surpassed 1 million users in just five days, and tens of millions of consumers have created AI avatars.

Whenever new technology captures consumer attention so quickly, it begs the question: is there real value here? We believe that the answer is undoubtedly yes. Generative AI will be the next major platform upon which founders build category-defining products. 

Much as the iPhone revolutionized our daily interaction with technology—spawning products like Uber, DoorDash, and Airbnb—generative AI will change everyday life. 

One of the most powerful things about AI is that it enables products to personalize the user experience. The early applications of this have been in edtech and search—if you’re explaining why it rains, you’ll use different language for an eight-year-old than for a high school student. We expect that this kind of customization will be a core value prop of many AI-enabled products.

Here, we explore the main consumer categories where we see opportunity. In subsequent posts, we’ll delve deeper into each of these areas and share the questions we’re asking as we evaluate consumer AI companies.

Language models have the potential to revolutionize one of the core functions of the internet: search. 

We’ve all experienced the struggle of typing a question into Google and getting overwhelmed by a flood of links, some of which have conflicting or inaccurate information. It’s quite literally an endless scroll. What if you could get a single, concise answer written in natural language, with links to read more if you’re interested? LLM-powered search engines make this possible. 

Companies like You.com and Neeva are doing this for general search queries. Others are taking a more verticalized approach: Consensus searches across research papers to provide evidence-backed answers, while Perplexity’s Bird SQL product targets the Twitter graph (e.g., “Most popular tweets about Golden Globes fashion”).
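The single-answer-with-sources pattern described above can be sketched in miniature. The following Python sketch illustrates the general retrieve-then-answer flow only, not any of these companies’ actual pipelines; the keyword-overlap scorer merely stands in for a real language model, and the documents and URLs are invented for the example.

```python
# Minimal retrieve-then-answer sketch of an LLM-powered search engine.
# A real system would embed documents and call a language model; here a
# naive keyword-overlap scorer stands in for both, to show the data flow.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query and keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query, documents):
    """Compose one concise answer with source links, LLM-search style."""
    top = retrieve(query, documents)
    summary = " ".join(d["text"] for d in top)  # stand-in for LLM synthesis
    return {"answer": summary, "sources": [d["url"] for d in top]}

docs = [
    {"url": "https://example.com/rain",
     "text": "Rain forms when water vapor condenses into droplets."},
    {"url": "https://example.com/snow",
     "text": "Snow forms when temperatures drop below freezing."},
    {"url": "https://example.com/sun",
     "text": "The sun is a star at the center of the solar system."},
]

result = answer("why does it rain", docs)
```

The point of the shape, rather than the scoring, is the product change: one composed answer plus links to read more, instead of a ranked page of links.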

🧪 Using ChatGPT in the innovation process

Creating a new board game with a large-language model

Azeem Azhar

Credit: Azeem Azhar / Midjourney

I’m continuing my experiments with ChatGPT. In particular, trying to figure out how it can synthesise ideas from different domains and help in the creative process.

I play board games with my kids; among them are Jaipur, Azul, Pandemic, Innovation, Forbidden Island, and others. We were on the hunt for a new one (half-term is coming) and I was struggling to filter via BoardGameGeek. Heading over to ChatGPT I figured I would see where a discussion would take me.

You will find our exploration below. This is a rare example of an essay that is largely written by a chatbot. But what I wanted to do was actually share the dialogue between me and the software, so you can see the chain of prompts to which it responded. I’ll insert my additional comments in grey code blocks.

Like this!

The image at the top is the proposed cover art for the game ChatGPT and I riffed on, as drawn by Midjourney.

972 billion portfolios: How to design the optimal venture portfolio

We ran nearly a trillion portfolio simulations to determine the optimal portfolio construction. Here’s what we found.

Francesco Farina

Credit: Mick Haupt

Determining the optimal size and strategy of an early-stage VC portfolio is an art, with as many answers as there are firms.

That said, there are two main camps: 1) a small, concentrated portfolio, betting on the best companies you can find, or 2) as large a portfolio as possible, acting like a sort of index of the market. And there are plenty of successful examples from both ends of the spectrum.

But it’s hard to find a general formula because the optimal portfolio is a function of many factors, and it largely depends on your goal. What do you want to prioritise? No losses? 2x returns? 10x returns?

We decided to dig into the question and math it out.

We ran nearly a trillion portfolio simulations to determine the optimal mathematical portfolio construction calibrated for real-world success, and found that portfolio performance is largely controlled by five variables:

Decision quality

Portfolio size

Ticket sizing

Whether, and how much, you follow on

Upper bound on ROI (of a single investment)

Let’s go through them one by one, picking out key findings and seeing how they affect portfolio performance – and later this week we’ll release our Portfolio Simulator so you can run some experiments yourself.

Decision quality

It all starts with the quality of your decision making.

If you’re better at picking winners than the average VC, the probability of returning a multiple of the fund is much higher. Doubling the fund is almost a certainty for portfolios larger than 200 companies, regardless of the expected maximum bound on return (we’ll come back to this in a minute), and the probability of losing money decreases more rapidly as the size of your portfolio increases.

If you’re bad at picking winners…well, other than considering another line of work, you could eliminate the effect of your decision quality by indexing the market and allocating capital randomly. But, again, it depends on your goal. If you want to just achieve 1x and avoid losses, create a large portfolio. If you want to achieve 2, 3, 5x, you either need to improve your decision quality or invest in fewer companies and hope for the best.
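The interaction between decision quality and fund outcomes can be illustrated with a toy Monte Carlo. This is not Moonfire’s calibrated model: the hit rates, the Pareto-tailed winner distribution, and the ROI cap below are illustrative assumptions chosen only to show the mechanism of heavy-tailed portfolio returns.

```python
# Toy Monte Carlo of early-stage portfolios, illustrating two of the five
# variables named above: decision quality (probability of picking a winner)
# and the upper bound on single-investment ROI. All parameters are
# assumptions for illustration, not a calibrated venture model.
import random

def simulate_fund(n_companies, hit_rate, max_roi=100, trials=10_000):
    """Estimate the probability that a fund returns at least 2x.

    Each of n_companies gets an equal-sized ticket. With probability
    hit_rate the company is a winner whose multiple is drawn from a
    heavy-tailed (Pareto) distribution, capped at max_roi; otherwise
    the ticket is written off entirely.
    """
    doubled = 0
    for _ in range(trials):
        total = 0.0
        for _ in range(n_companies):
            if random.random() < hit_rate:
                total += min(random.paretovariate(1.2), max_roi)
        if total / n_companies >= 2.0:
            doubled += 1
    return doubled / trials

random.seed(42)
p_double_skilled = simulate_fund(n_companies=25, hit_rate=0.30)
p_double_average = simulate_fund(n_companies=25, hit_rate=0.05)
```

Even in this crude setup, raising the hit rate sharply raises the chance of doubling the fund, which is the qualitative point the post makes about decision quality.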

Chance or deep observation: Our approach to thesis articulation

We start with ideas rather than companies, articulating a thesis for each investment area long before we consider writing any cheques.

Mattias Ljungman, Akshat Goenka

Credit: Greg Rakozy

When an investor decides to back a company, what are they really investing in? At the pre-seed and seed stages, when a start-up may be little more than a big idea, there is rarely the luxury of a robust financial record to guide decisions. Instead, investors must assess a range of considerations, from the capabilities of the founders to the breadth of the market opportunity, the nature of the competition, the scalability of the business plan and the degree of future optionality.

All of these must prove favourable for an embryonic business to succeed, and any one of them might result in its failure. So how does an investor really decide: what do they look for, and which factors do they judge most important? Do those vary according to the maturity of the business, the geography, and the sector it is operating in?

The concept of the prepared mind is not one that is new, dating back to Louis Pasteur’s belief that “chance favours the prepared mind”, but the way investors reach this state of knowledge (via deep observation and analysis) varies greatly. At Moonfire, we believe in the power of the investment thesis to guide that process. Additionally, we believe that during a downturn like the one we are facing right now, deep thesis research and articulation enable us to outperform and not get swayed. Every investor has a thesis of some description, whether unconscious or elaborated in detail. Some will prefer to back founders with a track record of success or who have overcome adversity. Others might be attracted to particular sectors, to outstanding technical founders, or start-ups operating in complete greenfield spaces.

Our bias is to be as systematic and proactive as possible with the dogma that new technology can be commercialised to unlock greater access, efficiency and/or product quality. We start with ideas rather than companies, articulating a thesis for each investment area long before we consider writing any cheques. Every week, on what we call Thesis Thursday, a member of the investment team will present their case for a sector, sub-sector or theme that we should be considering within our four investment categories: Work & Knowledge, Capital & Finance, Health & Wellbeing and Gaming, Community & Leisure. A thesis might, for example, cover the broad category of the modern data stack, a strand within that such as open source solutions, or a particular trend such as the rise of Reverse ETL or dbt. Our Thesis Thursday meetings are useful in a unique way, as we have both engineers and investors around the table at Moonfire, meaning we can deep dive into a thesis both technically and commercially.

People always put their money in futures they predict

A whistlestop history of investing theses

The life of money-making is one undertaken under compulsion, and wealth is evidently not the good we are seeking; for it is merely useful and for the sake of something else. And so one might rather take the aforenamed objects to be ends; for they are loved for themselves. But it is evident that not even these are ends;


I. The hard thing about this thing

I started with a pressing problem, which is how I should invest my money. It led me down a rabbit hole on the link between wealth and how people imagine the future, between beliefs and bets, and this is likely the first part of that exploration.

Sometimes I think about the world of investing as play-acting our beliefs in a particularly wonky universe. In the investing foxhole, there are no atheists. We create our own theories, act on our beliefs, cut the world into understandable slices, and then try and put our money to work.

Why do we do this? Because if we’re right, we make more money tomorrow than we have today. We win when the world moves in a way that we predicted or when we get lucky.


But it’s hard. Today, if you were to open up your favourite trading app, you’d see some 5,000 stocks and 8,000 ETFs, not to mention options and FX and all the other assorted ways to test your hunches. How do you even choose amongst these?

Well, you could listen to someone else. 60% stocks and 40% bonds used to be conventional wisdom, and it isn’t bad. Or you could add an equal measure of Gold and Cash, and get a Cockroach portfolio. Or you could use 90% stocks and 10% government bonds, for the Buffett portfolio. Are any of these “right”? Well, not in any objective sense, because the future’s unknown. But if you have any beliefs at all about the future, surely some of them are better!

If you think the future of corporate America looks a lot like the past of corporate America, the Buffett portfolio is great. If you think the future’s pretty unknown, the Cockroach is great. If you’re in the middle, choose conventional wisdom. Note that these aren’t optimal, they’re just the financial equivalent of choosing the right sport to play. To do anything more is often the work of a lifetime.
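To see how “beliefs about the future” cash out, here is a minimal sketch scoring the three allocations above under one set of assumed expected returns. The return figures are invented for illustration, not historical estimates; change them to encode different beliefs and the ranking changes with them.

```python
# Score three stock/bond/gold/cash allocations under assumed expected
# annual returns. The figures below are illustrative assumptions only,
# not historical data: the point is that the "best" portfolio is a
# function of the beliefs you plug in.
EXPECTED_RETURN = {"stocks": 0.07, "bonds": 0.03, "gold": 0.02, "cash": 0.01}

PORTFOLIOS = {
    "conventional 60/40": {"stocks": 0.60, "bonds": 0.40},
    "buffett 90/10": {"stocks": 0.90, "bonds": 0.10},
    "cockroach": {"stocks": 0.25, "bonds": 0.25, "gold": 0.25, "cash": 0.25},
}

def expected_return(weights):
    """Weighted average of the assumed per-asset expected returns."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * EXPECTED_RETURN[asset] for asset, w in weights.items())

results = {name: expected_return(w) for name, w in PORTFOLIOS.items()}
```

Under these particular assumptions the Buffett portfolio ranks first; assume a murkier future (lower stock returns, higher gold) and the Cockroach climbs the table, which is exactly the argument being made.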

What will TV look like in three years? These industry insiders share their predictions

Alex Sherman @SHERMAN4949

Lillian Rizzo @LILLIANNNN

Peter Chernin


CNBC asked media insiders, including Barry Diller, Bela Bajaria, Jeff Zucker and Bill Simmons, for their predictions about what TV will be like in three years.

They also weighed in on which companies will dominate streaming and how big a role sports and gambling will play.

“It will continue to be in decline. It will be crappier. Budgets will get cut,” former Fox executive Peter Chernin said of legacy TV.

Illustration by Elham Ataeiazar

The media industry is in the middle of change. There’s little doubt legacy cable TV will continue to bleed millions of subscribers each year as streaming takes over as the primary way the world watches television.

Still, the details of what’s about to happen to a transitioning industry are unclear. CNBC spoke with more than a dozen leaders who have been among the most influential decision-makers and thinkers in the TV industry over the past two decades to get a sense of what they think will happen in the next three years.

CNBC asked the same set of questions to each interviewee. The following is a sampling of their answers.

In three years, will legacy TV effectively die?

Peter Chernin, The North Road Company CEO: It will continue to be in decline. It will be crappier. Budgets will get cut. More scripted programming will migrate away to streaming. There will be more repeats. But it will continue to exist. One of the really interesting questions here – this will be fascinating – the core of linear TV is sports rights. The NFL deal starts next season and is double the price of the previous one. That will suck even more money out of programming budgets. Then you’ve got the NBA deal, those renewal talks will happen this year. That will probably double in price. So you’ve got increasing prices of the most high-profile sports and declining number of homes watching. That will eat away at everything else.

Oops! How Google bombed, while doing pretty much exactly the same thing as Microsoft did, with similar results

I will always remember today (my birthday no less), February 8, 2023, as the day in which a chatbot-induced hallucination cost Alphabet $100 billion.

But I will also remember it as the week in which Microsoft introduced an ostensibly similar technology, with ostensibly similar problems, to an entirely different response.

Kevin Roose, for example, wrote earlier today at The New York Times that he “felt a … sense of awe” at the new Microsoft product, Bing enhanced with GPT, even while recognizing that it suffered from now-familiar hallucinations and other types of errors (“If a dozen eggs cost $0.24, how many eggs can you buy for a dollar?” — Bing said 100, where the correct answer is 50).

Others, it should be noted, also encountered errors in relatively brief trials. CBS Mornings, for example, reported in their segment that they encountered errors of geography, and hallucinations of plausible-sounding but fictitious establishments.

Roose seemed reassured, though, by the corporate types at Microsoft and OpenAI, reporting optimistically that “Kevin Scott, the chief technology officer of Microsoft, and Sam Altman, the chief executive of OpenAI, said in a joint interview on Tuesday that they expected these issues to be ironed out over time”.

The Times’ Roose even went so far as to chide anyone who expressed concern about the errors, saying that “fixating on the areas where these tools fall short risks missing what’s so amazing about what they get right”.

Let Section 230 Stay

The U.S. Supreme Court is getting ready to hear arguments that will determine the limits of free speech on the internet.

Karan Lala is a fellow at the Integrity Institute.

Illustration by Shane Burke.

Gonzalez v. Google, which the Supreme Court will hear this month, is the culmination of years of litigation. The action—a consolidation of lawsuits filed against Google, Twitter and Facebook—attempts to hold these platforms liable for their automated recommendation of content to users.

Social media platforms are publishers, not originators, of content, and as such have long been considered immune from liability according to Section 230 of the Communications Decency Act. This case calls not just that immunity into question, but also the entire economy of the internet as we know it. Reinterpreting Section 230 to remove immunity for algorithmic recommendations would make it nearly impossible for social media platforms as we know them to function. That’s bad for the platforms, bad for social media users and bad for speech in general.

The nature of the internet makes moderating every bit of speech impossible. Holding platforms responsible for every dangerous statement that slips through their nets will have a ruinous chilling effect on modern discourse.

Algorithmic ranking often comes off as an opaque boogeyman. The Netflix documentary “The Social Dilemma,” for instance, portrays the ranking algorithm as an omniscient council in a futuristic chamber, turning emotional dials and toying with users’ deepest vulnerabilities to keep them hooked.

Section 230 Has to Go

The U.S. Supreme Court and Congress should seize their chance to change online discourse for the better.

Bradley Tusk is a venture capitalist, political strategist, philanthropist and writer.

Art by Clark Miller and Shane Burke.

On Monday, The Information published an opinion piece with the headline “Let Section 230 Stay.” Today, we present an opposing point of view.

If you’re reading The Information, you already know that the internet is a toxic waste dump. How could it not be? Negative content attracts more eyeballs than positive content. The platforms make their money on eyeballs, so they have no real incentive to change anything. This is why older Americans are constantly scammed into turning over their savings and why teenage girls can watch step-by-step instructions on Instagram on how to cut themselves. I don’t need to spend any more time telling you what you already know.

But here’s the interesting part: There’s a viable solution, and even in this highly polarized political environment in Washington, it could actually happen. In 1996, Congress passed the Communications Decency Act. Among its provisions was Section 230, which shields platforms from liability for the content posted by users—if I defame you on Twitter, you can sue me, but you can’t sue Elon Musk. But Acts of Congress are not etched in stone. They can be overturned, repealed or modified. And in the next few months, this could actually happen.

Video of the Week

My podcast with the brilliant Marc Andreessen is out!

We discuss:

how AI will revolutionize software

whether NFTs are useless, & whether he should be funding flying cars instead

a16z’s biggest vulnerabilities

the future of fusion, education, Twitter, venture, managerialism, & big tech

Dwarkesh Patel has a great interview with Marc Andreessen. This one is full of great riffs: the idea that VC exists to restore pockets of bourgeois capitalism in a mostly managerial capitalist system, what makes the difference between good startup founders and good mature company executives, how valuation works at the earliest stages, and more. Dwarkesh tends to ask the questions other interviewers don’t.

News of the Week

Lina Khan Chalks Up Another Defeat

A federal judge tosses the FTC’s Meta suit as lacking enough evidence.

Does Lina Khan ever win a case? In her latest embarrassment, a federal judge last week rebuked the Federal Trade Commission for not doing its homework and dismissed its attempt to block Meta’s acquisition of the virtual reality app developer Within Unlimited.

Meta CEO Mark Zuckerberg has sought to expand in the burgeoning virtual reality (VR) market as user growth at its Facebook subsidiary has slowed. Lacking the in-house expertise, Meta has bought apps to complement its VR headsets. This includes Within’s popular fitness app Supernatural, which offers guided workouts in exotic virtual locations.

Enter Ms. Khan, the ambitious FTC Chair who wants to stop big tech companies from growing via acquisitions whether or not the purchases threaten competition or harm consumers. Progressives believe antitrust regulators shouldn’t have allowed Facebook to buy WhatsApp and Instagram last decade. Ms. Khan wants to compensate for this putative mistake by stopping Meta now.

Private Investment Diligence and Fraud Prevention: Will New Regulations Change the Game?

Tuesday, February 7, 2023

Venture capital and other private funding sources continue to be an important pathway for financing early-stage companies. Unfortunately, some startups that raised money did so by misrepresentation and in certain cases fraud, only to fail when the truth was finally discovered – resulting in significant losses for their investors.

Given the high-profile nature of a few of these failed startups and the resulting losses, there has been a call for increased scrutiny. The U.S. Securities and Exchange Commission (SEC) in particular appears to have heard the call and has proposed new regulations that may result in requiring certain investors to exercise a higher level of scrutiny when considering an investment. In this article we will explore the implications of such proposals and how they may affect investment firms’ obligations to their investors, as well as the overall startup investment landscape.

Current Landscape and New Regulations on the Horizon

The three key players involved in VC (or other private market) investments are (1) the companies seeking funding, (2) the investment firms making the investments, and (3) the investors who provide the capital to the investment firms. Generally, investment firms and investors have a mutual understanding that both parties have the expertise, sophistication, and diligence capabilities to allocate capital across investments in a manner that outweighs the (perhaps greater) risk of such investments relative to public market investing. However, factors such as fraud, misrepresentation, and other misdeeds can be difficult to quickly identify during standard diligence, which oftentimes is focused on a high level or red flag review that may not get into enough depth, especially when it comes to independently validating a technology or product. Additionally, many earlier stage startups do not have the resources to have audited financials, thereby eliminating another important diligence process. As a result, the call for greater scrutiny may even become louder as more avenues for retail investors to access private markets open up.

ChatGPT to Become Available in Microsoft Tools According to CEO Satya Nadella

Can you imagine working on an Excel file with ChatGPT on hand, helping you lay out macros and formulas? Hours of mundane work could be done almost instantly thanks to the program. Well, this is a future that Microsoft CEO Satya Nadella is envisioning: a world where AI is helping you with emails, slideshows, spreadsheets, and more. Speaking on a panel with The Wall Street Journal, he laid out his vision for AI at the tech giant, saying in part, “Every product of Microsoft will have some of the same AI capabilities to completely transform the product.”

For those who’ve been following since last year, this comes as no surprise. Back in October, Microsoft announced AI integration in a few of its programs, including its search engine Bing, DALL-E 2 in the Azure OpenAI Service, and a new Designer app that allows the creation of images via text prompts. This all comes on the heels of Microsoft’s $10 billion investment in OpenAI, proving that the company is very serious about capturing the potential of AI in the marketplace.

But for many, there is a growing concern about AI’s increasing presence in the workforce. MIT even wrote a roadmap for handling AI’s coming disruptive effects. Much of the work focused on an adaptable workforce, minimizing employment losses, and labor stabilization. That’s because, as with the industrial revolution, new technology ushers in new jobs while out-of-date professions perish. Just look at the agricultural sector: once the primary employer of most workers, it now accounts for only around 10% of the workforce. With the advent of new technology, that number is expected to get smaller.

ChatGPT Passes Another Test — Amazon’s Software Interview Questions

It seems that OpenAI’s ChatGPT is on a roll when it comes to tests. Recently, the popular AI passed an MBA examination, surprising some. Now, according to Business Insider, ChatGPT has answered the technical interview questions for a software position at Amazon. This information comes from internal Slack messages within the company, in which employees discussed how the chatbot correctly answered some of Amazon’s interview questions for a software coding position. Though not flawlessly.

That’s because, according to employee discussions, the code provided by ChatGPT wasn’t efficient and had some “buggy” implementation. Still, according to a machine learning engineer, it was able to improve its code while providing correct solutions. Upon seeing the results, another employee stated that they were “honestly impressed.” They further went on to say, “I’m both scared and excited to see what impact this will have on the way that we conduct coding interviews.”

OpenAI’s ChatGPT Passes Wharton MBA Exam

OpenAI’s ChatGPT has been the talk of the internet since its debut in the public imagination in late November of last year. Since then, the possible ramifications of AI as a disruptor in higher education have caused a stir among educators, with reports of companies already attempting to uncover AI-written essays and other assignments. But it seems that AI has taken another step forward. In a research paper titled “Would Chat GPT3 Get a Wharton MBA?”, Professor Christian Terwiesch pitted the AI chatbot ChatGPT against the exam of a core Master of Business Administration (MBA) course.

The course chosen was “Operations Management.” The result, to put it simply, is that the AI passed. It not only passed but, according to Professor Terwiesch, “would have received a B to B- grade on the exam.” Terwiesch went into further detail about ChatGPT’s performance: ChatGPT “does an amazing job at basic operations management and process analysis questions including those that are based on case studies,” he wrote, but added that the AI fell short when it had to manage “more advanced process analysis questions.”

That’s not all. ChatGPT did so well on legal document preparation that he believes “this technology might even be able to pass the bar exam.” This is quite interesting, since in a few short weeks another AI program will be making its debut in court. But don’t worry yet: this doesn’t mean AI is ready to replace humans in the MBA tracks at universities. According to the same report, the AI made some mistakes when given simple 6th-grade calculations. As Professor Terwiesch said in his report, “These mistakes can be massive in magnitude.” Though when given a “human hint” on other mistakes, it was able to self-correct.

All in all, according to Professor Terwiesch, ChatGPT’s performance “has important implications for business school education, including the need for exam policies, curriculum design focusing on collaboration between human and AI, opportunities to simulate real-world decision-making processes, the need to teach creative problem solving, improved teaching productivity, and more.”

Why did Google’s ChatGPT rival go wrong and are AI chatbots overhyped?

Alphabet’s shares plummeted after Bard gave incorrect answer in a demo

‘ChatGPT needs a huge amount of editing’: users’ views mixed on AI chatbot


Dan Milmo Global technology editor

Thu 9 Feb 2023 12.09 GMT

Google’s unveiling of a rival to ChatGPT had an expensively embarrassing stumble on Wednesday when it emerged that promotional material showed the chatbot giving an incorrect response to a question.

A video demo of the program, Bard, contained a reply wrongly suggesting Nasa’s James Webb space telescope was used to take the very first pictures of a planet outside the Earth’s solar system, or exoplanet.

When experts pointed out the error, Google said it underlined the need for “rigorous testing” on the chatbot, which is yet to be released to the public and is still being scrutinised by specialist product testers before it is rolled out.

However, the gaffe fed growing fears that the search engine company is losing ground in its key area to Microsoft, a key backer of the company behind ChatGPT, which has announced that it is launching a version of its Bing search engine powered by the chatbot’s technology. Shares in Google’s parent, Alphabet, plummeted on Wednesday, wiping more than $100bn (£82bn) off its value.

So what went wrong with the Bard demo and what does it say about hopes for AI to revolutionise the internet search market?

Sequoia reveals in filing how much is sitting in its Sequoia Capital Fund (and yes, it’s a lot)

Connie Loizos@cookie / 11:50 PM PST•February 6, 2023



Almost a year ago to the day, the 50-year-old investing powerhouse Sequoia Capital announced that it had reorganized itself around a singular, permanent structure: The Sequoia Capital Fund.

Now, thanks to an SEC form filed on Friday, we know how much is sitting in the fund: $13.6 billion.

The number represents two things: the value of the stock that Sequoia has rolled into its permanent fund from its legacy funds — these are shares in now-public companies that Sequoia backed as startups, including Airbnb, DoorDash, Unity and Snowflake. Some of those shares are owned by Sequoia; some of them are owned by the firm’s limited partners, who have agreed to let Sequoia continue to manage the shares on their behalf.

Salesforce Ventures: Only 150 Private SaaS Companies Have Hit $100,000,000 in ARR

by Jason Lemkin | Blog Posts, Growth, Scale

In the latest SaaStr Workshop Wednesday (sign up for FREE here), Jessica Bartos of Salesforce Ventures did a great deep dive on the state of SaaS and venture in 2023.

The full session is below and it’s a great watch.

One metric stood out to me that I hadn’t seen presented before: just how many private SaaS companies (i.e., startups) have crossed $100,000,000 in ARR. It’s only 150.

While that’s still a lot, it also gives you a sense of the funnel and the odds in SaaS. And of some of the challenges, as we have far more than 150 unicorns. I’ve invested at the seed level in 6 that are past $100m ARR today, so I assumed it was a bit more than 150.

But it makes sense.  150 x $100m = $15B+ in spend, probably more like $30B in spend.  Getting to $100m ARR in SaaS is pretty doable if you can get to $10m ARR and are growing 60%.
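The closing growth claim can be sanity-checked with back-of-envelope compounding (a sketch of my own, not from Lemkin’s post): at a constant 60% annual growth rate, getting from $10m to $100m of ARR takes roughly five years.

```python
import math

# Back-of-envelope: years to compound from $10M to $100M ARR
# at a constant 60% year-over-year growth rate.
start_arr = 10e6    # $10M ARR
target_arr = 100e6  # $100M ARR
growth = 0.60       # 60% annual growth

# Solve start_arr * (1 + growth) ** years == target_arr for years
years = math.log(target_arr / start_arr) / math.log(1 + growth)
print(round(years, 1))  # roughly 4.9 years
```

In practice growth decelerates as companies scale, so the real journey usually takes longer, but the order of magnitude supports the “pretty doable” claim.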

But it’s a reminder if you want to Go Big in SaaS, your job is to be in The Top 150 of All SaaS Startups.  Aim high.

Azeem Azhar’s Exponential View / Season 4, Episode 5

Beyond Deep Learning with Gary Marcus

Gary Marcus is well known as a deep learning critic. A neuroscientist, entrepreneur, and the author of “Rebooting AI: Building Artificial Intelligence We Can Trust,” Marcus believes that researchers need to move past deep learning in order to make true advances in machine intelligence. Marcus and Azeem Azhar discuss why, how, and when we can expect progress.

In this podcast, they also discuss:

What newborn ibexes can teach us about intelligence and learning.

How deep learning widens the gap between big companies and startups.

What we need for a breakthrough in AI.

The wisdom of juries at the Elon Musk trial

February 4, 2023

by philg

A lot of folks, including journalists, love to concoct ex post facto explanations for why the stock market moved as it did on a particular day. The Elon Musk trial has introduced us to a guy who sounds a lot smarter than most pundits and financial reporters. “Jury Rules for Elon Musk and Tesla in Investor Lawsuit Over Tweets” (NYT):

The federal judge in the case, Edward M. Chen, had already ruled that “funding secured” and Mr. Musk’s second statement were untrue, and that Mr. Musk was reckless when posting them.

“I thought he was crazy to try his chances at trial, given the stakes involved,” said Adam C. Pritchard, a law professor at the University of Michigan, noting the judge’s pretrial rulings. “You’re fighting with one hand behind your back in that situation — and yet he won.”

If he had lost, Mr. Musk and Tesla might have had to pay billions of dollars in damages to investors who said they had lost money when the company’s stock surged after his statements on Twitter and then tumbled after his plan fizzled.

One male juror said the plaintiffs’ arguments were difficult to follow and sometimes seemed disorganized. “There was nothing there to give me an ‘aha’ moment,” he said, later adding, “Elon Musk is a guy who could sneeze and the stock market could react.”

Let’s check in with the superpundits to see how they did compared to this juror. Dow 36,000 was published in October 1999, when the DJIA was at 10,000. Its D.C.-insider authors predicted that the DJIA would reach 36,000 no later than 2004. They were proved correct… in November 2021.

Startup of the Week

The Mastodon Bump Is Now a Slump

Active users have fallen by more than 1 million since the exodus from Elon Musk’s Twitter, suggesting the decentralized platform is not a direct replacement.

ELON MUSK’S TWITTER takeover last October and the associated chaos drove millions of people into the arms of niche open source microblogging platform Mastodon. Overnight, a shaggy extinct mammal became associated with a buzzy network touted as the independent future of social media.

Companies and politicians joined up. Twitter users put Mastodon usernames in their handles and trumpeted their migration. The new traffic knocked many Mastodon instances, or servers, offline. In less than two months, Mastodon’s monthly active users climbed from 380,000 to more than 2.5 million. But not everyone stuck around. 

Mastodon’s active monthly user count dropped to 1.4 million by late January. It now has nearly half a million fewer total registered users than at the start of the year. Many newcomers have complained that Mastodon is hard to use. Some have returned to the devilish bird they knew: Twitter.

After a decade of Big Tech dominating social media, the idea of a small, alternative, and open source platform like Mastodon growing into a truly mainstream challenger was alluring to some. The decentralized platform operates very differently from services like Facebook, Instagram, and Twitter and demands volunteers take on the job of sustaining and moderating servers. That’s because Mastodon is part of the Fediverse, a network of servers running interoperable open source software.

Tweet of the Week
