Guilty

A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I selected the articles because they are of interest. The selections often include things I can’t entirely agree with. But they express common opinions, or they provoke me to think. The articles are only snippets. Click on the headline to go to the original. I express my point of view in the editorial and the weekly video below.

This Week’s Video and Podcast:

Content this week from @kteare, @ajkeen, @davidgura, @jake_k, @alexeheath, @kiranstacey, @AnnaSophieGross, @MsHannahMurphy, @AndreRetterath, @WhiteHouse, @stevesi, @MParekh, @danshipper, @KatLeighDoherty, @heathwblack, @cagiant60, @LawyerLiam, @breadfrom, @alexiskold, @HarryStebbings

Contents

Editorial: Guilty

Essays of the Week

Sam Bankman-Fried is found guilty of all charges and could face decades in prison

Elon Musk gives X employees one year to replace your bank

Sunak plays eager chat show host as Musk discusses AI and politics

Elon Musk tells Rishi Sunak AI will render all jobs obsolete

It’s Not What You Know, But Who You Know

Video of the Week

Using the iPhone 15 Pro Max to shoot the Apple Mac Launch Event

AI of the Week

President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

Regulating AI by Executive Order is the Real AI Risk

AI’s proxy war heats up as Google reportedly backs Anthropic with $2B

AI: Google’s strategy with Anthropic

ChatGPT Is the Best Journal I’ve Ever Used

Citi Used Generative AI to Read 1,089 Pages of New Capital Rules

How EQT Motherbrain uses LLMs to map companies to industry sectors

VC GPT: How LLMs are strengthening SignalFire’s in-house AI

News Of the Week

Motherbrain, i.e. how EQT managed to analyze 50 mln companies

Airbnb Preseed Deck Breakdown

Startup of the Week

Starlink

Shield AI

X of the Week

@alexiskold vs @HarryStebbings

Editorial: Guilty

Ethics matter. And using them in times of stress matters, too. SBF lost the plot in his defense this week, seeking to blame his lack of attention to detail and his colleagues for the failings at FTX and Alameda.

There was a lack of ethics in his decisions and a further lack of ethics in his trial strategy.

I have no idea how contrived the entire set of episodes was at FTX, but blaming colleagues seems both a low shot and a long shot simultaneously. It would have been better to tell the truth – I crossed lines due to a frothy market. I thought it would be fine. The market tanked, I panicked and then crossed more lines trying to keep the ship afloat. When I failed, I tried to cover for my mistakes. At least the jury would have heard some honesty and contrition.

So, the trial is over, and SBF is guilty. Justice will be done. Given all the evidence, it seems to be the right decision.

But this week, AI was also put on trial. Both the White House and 10 Downing Street held meetings that produced statements and documents. The competition to be the leader who reins in AI seems pervasive across many governments. The full White House press briefing is below, as is the gist of the UK meeting.

The two most interesting responses are Elon Musk’s statement that jobs will become unnecessary (he’s right) and Steven Sinofsky’s thoughtful essay on why AI regulation is premature.

The AI section is lengthy this week; Michael Parekh explains why Google is investing $2bn in Anthropic, and Dan Shipper shows why the Apple Journal app has nothing on ChatGPT as a Journal co-pilot.

There are four pieces on AI’s use in Venture Capital, focusing on EQT and SignalFire (not to be confused with SignalRank). And Andre Retterath of Data-Driven VC writes about the correlation between alumni networks and who gets funded (ominous).

Enjoy.

Essays of the Week

Sam Bankman-Fried is found guilty of all charges and could face decades in prison

Updated November 2, 2023, 9:13 PM ET

David Gura

Sam Bankman-Fried leaves a Manhattan federal court in New York City on Jan. 3, 2023.

Ed Jones/AFP via Getty Images

Sam Bankman-Fried, the former head of cryptocurrency exchange FTX, was found guilty of each of the seven criminal charges he was facing, marking a spectacular fall from grace for a “math nerd” who was once a shining star in finance.

Bankman-Fried now faces the prospect of spending decades in prison after being convicted on charges including securities fraud, wire fraud and money laundering. The jury deliberated for just a few hours before reaching its verdict.

Bankman-Fried is likely to appeal the decision.

“We respect the jury’s decision. But we are very disappointed with the result. Mr. Bankman Fried maintains his innocence and will continue to vigorously fight the charges against him,” Mark Cohen, Bankman-Fried’s lawyer, said in a statement.

During a trial that lasted more than four weeks, prosecutors sought to prove that Bankman-Fried had been a criminal mastermind who orchestrated a massive financial fraud.

In a courtroom that was frequently packed, prosecutors detailed how Bankman-Fried and some of his top lieutenants secretly funneled billions of dollars in customer assets from FTX to Alameda Research, a private trading firm he also controlled.

The U.S. government said the former billionaire treated Alameda like a personal piggybank, using FTX customer money to buy luxury real estate for friends and family, and to make political donations and risky investments.

“Sam Bankman-Fried perpetrated one of the biggest financial frauds in American history – a multibillion-dollar scheme designed to make him the King of Crypto – but while the cryptocurrency industry might be new and the players like Sam Bankman-Fried might be new, this kind of corruption is as old as time,” said Damian Williams, U.S. attorney for the Southern District of New York, in a statement.

“This case has always been about lying, cheating, and stealing, and we have no patience for it,” he added.

From a penthouse in The Bahamas to prison

The conviction marks a sharp reversal of fortune for a now-31-year-old M.I.T. graduate who just last year was living large in a $35 million penthouse with some of his co-workers, as he ran a crypto empire that was estimated to be worth tens of billions of dollars during its heyday.

As FTX grew, Bankman-Fried became a celebrity in his own right at a time when the popularity of cryptocurrencies surged. There was a wave of investments from amateur traders and established Wall Street firms alike, and Bankman-Fried capitalized on the craze.

Elon Musk gives X employees one year to replace your bank

‘You won’t need a bank account… it would blow my mind if we don’t have that rolled out by the end of next year.’

By Jacob Kastrenakes and Alex Heath

Oct 26, 2023 at 6:25 PM PDT

Elon Musk wants X to be the center of your financial world, handling anything in your life that deals with money. He expects those features to launch by the end of 2024, he told X employees during an all-hands call on Thursday, saying that people will be surprised with “just how powerful it is.”

“When I say payments, I actually mean someone’s entire financial life,” Musk said, according to audio of the meeting obtained by The Verge. “If it involves money. It’ll be on our platform. Money or securities or whatever. So, it’s not just like send $20 to my friend. I’m talking about, like, you won’t need a bank account.”

X CEO Linda Yaccarino said the company sees this becoming a “full opportunity” in 2024. “It would blow my mind if we don’t have that rolled out by the end of next year,” Musk said.

Musk wants to beat PayPal with the PayPal playbook he wrote two decades ago

The company is currently working on locking down money transmission licenses across the US so that it can offer financial services. Musk told employees Thursday that he hopes to get the other licenses X needs in “the next few months.”

Musk has discussed his plans to turn X into a financial hub before. He even renamed Twitter after his dot-com-boom-era online bank, X.com, which eventually became part of PayPal. He previously said the platform would offer high-yield money market accounts, debit cards, checks, and loan services, with the goal of letting users “send money anywhere in the world instantly and in real-time.”

The original plan for X.com is clearly on Musk’s mind. “The X/PayPal product roadmap was written by myself and David Sacks actually in July of 2000,” Musk said on Thursday’s internal X call. “And for some reason PayPal, once it became eBay, not only did they not implement the rest of the list, but they actually rolled back a bunch of key features, which is crazy. So PayPal is actually a less complete product than what we came up with in July of 2000, so 23 years ago.”

Turning X into a rich hub for financial services ties directly into Musk’s goal of making the platform into an “everything app,” akin to super apps like WeChat in China that offer access to shopping, transportation, and more. 

Musk faces major challenges to get there, though. Convincing people why they need such a platform is one. Getting them to trust X with their entire financial life is another.

Sunak plays eager chat show host as Musk discusses AI and politics

Kiran Stacey

The prime minister flattered the entrepreneur who in turn put aside his abrasive persona for their talk on AI

Earlier this week, Elon Musk was interviewed by the American podcast host Joe Rogan. On Wednesday he was grilled by reporters outside the AI safety summit in Bletchley Park. On Thursday, it was the turn of the British prime minister.

British officials have crowed for days about their success in getting the world’s richest man to attend the summit, which was a pet project for Sunak. So delighted were they at the UK’s pulling power that they decided to give the X owner a 40-minute in-person conversation with the prime minister in the glamorous surrounds of Lancaster House, previously used as a set for The Crown.

Except what had been billed as a “fireside chat” was not one. Instead, Sunak played the role of eager chat show host, keen to elicit drawled lessons on love, life and technology from the billionaire superstar sitting next to him.

“I’m very excited to have you here,” said Sunak, taking his jacket off and leaning forward in his chair. “Thanks for having me,” said Musk, relaxing back into his.

For 25 minutes, the prime minister quizzed Musk on his views on the summit and AI in general.

“What do we need to do to make sure we do enough [to regulate AI]?” asked Sunak. Later he suggested that technology was now developing even faster than “Moore’s law”, which suggests that computing power roughly doubles every two years, before checking himself. “Is that fair?” he asked Musk.

At one point Sunak even appeared to ask the controversial technology entrepreneur his views on international diplomacy. “Some people said I was wrong to invite China [to the summit],” he said. “Should we be engaging with them? Can we trust them? Was that the right thing to have done?”

…More

Elon Musk tells Rishi Sunak AI will render all jobs obsolete

UK prime minister and billionaire entrepreneur exchange views at conclusion of global summit on the technology

Anna Gross in London and Hannah Murphy in San Francisco

Elon Musk told UK prime minister Rishi Sunak there “will come a time where no job is needed” as the billionaire entrepreneur described artificial intelligence as the “most disruptive force in history” in a wide-ranging conversation.

Speaking in the lavish Lancaster House in London, the Tesla chief executive and owner of SpaceX and X said he believed there would come a time when “you can have a job if you want a job . . . but AI will be able to do everything”.

The discussion between the two global figures came as the UK government closed its two-day summit in Bletchley Park designed to shape global rules and examine the extreme risks around the development of AI, such as its possible scope to aid in the development of biological and chemical weapons by nefarious actors.

During the summit, several big tech companies and nations signed a “landmark” voluntary agreement to allow governments including the UK, US and Singapore to test their latest models for social and national security risks. Companies signing the pledge included Sam Altman’s OpenAI, Google DeepMind, Anthropic, Amazon, Mistral, Microsoft and Meta.

Musk co-founded OpenAI in 2015 but left several years later after clashes over AI safety. He launched his own AI company, xAI, in July. The new company snapped up high-profile hires in the field, with a mission to “understand the true nature of the universe” and a focus on building machines with human-level intelligence, known as artificial general intelligence, or AGI.

He also hopes to use the technology developed at the company to improve X, formerly Twitter, which he bought for $44bn last October, according to people familiar with the matter. 

Asked by Sunak what he thought of the initiative, Musk said jokingly that “it will be annoying, that’s true” but said he supported the plan because “we’ve learnt over the years that having a referee is a good thing”. 

Flanked by large portraits of aristocratic figures, Musk said he believed ensuring AI technologies had “some kind of off switch” was something we should be “quite concerned about”. “What if one day they get a software update and they’re not so friendly anymore,” he ruminated.

On balance, he said, he believed the technology would “be a force for good” and AIs could actually become “great friends” to their users, especially to people such as his son who struggled to make friends in the real world… More

It’s Not What You Know, But Who You Know

DDVC #59: Where venture capital and data intersect. Every week.

ANDRE RETTERATH

NOV 2, 2023

Welcome back to another episode of the “Insights From the Data” series where I summarize useful research for founders and VCs. Today, I’m excited to share a recent study called “Alumni Networks in Venture Capital Financing.”

Major Findings

Prominent Alumni Connections: Approximately 1/3 of VC deals involve a startup founder and a VC partner who graduated from the same institution.

Alma Mater Preference: VC investors tend to favor startups from their own academic institutions. This inclination is not influenced by factors like geographical proximity or the propensity of prestigious institutions to produce both entrepreneurs and VC investors.

Effective Deal Identification: The preference of investors for alumni-founded startups indicates an advantage in information access rather than simple bias. Such connections lead to better deal identification, with investments in connected startups being 33% more likely to result in an IPO after funding, highlighting the significance of this informational edge.

Consequences of VC Partner Exits: The departure of a partner from a VC firm may disrupt the existing connections with startups from the same university. The authors have noted a decline in deals following such departures, with a 23% decrease in the likelihood of funding for startups connected to the departing partner’s university.

Investment Size Correlation: The investment size is typically 18% larger when a VC investor and a startup founder have attended the same university. This suggests that the connection is not merely about favoritism but indicates a readiness to invest more substantial amounts compared to deals without such connections.

Data & Methodology

The authors reference my database benchmarking study from 2020 and thus use a comprehensive sample of deals between 2000 and 2020 listed in Pitchbook. They complement it with the Department of Education’s College Scorecard data, which includes information on the characteristics of U.S. institutions of higher education such as enrollment, location, and average SAT score of students admitted. In the absence of such data, the authors supplement the sample with LinkedIn data.

Via fuzzy matching, they connect the different sources and match 490 universities, which results in 90% coverage of all deals listed in Pitchbook. From this matched sample, the authors analyze 18,022 startups that received funding from 1,662 VC partners.
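As an illustration of the kind of fuzzy name-matching the authors describe, here is a minimal sketch using the rapidfuzz library; the scorer, threshold, and sample names are assumptions, not details from the paper.

```python
# Minimal sketch of fuzzy university-name matching (assumed details, not the
# paper's exact method): normalize both strings, score with token_sort_ratio,
# and accept only matches above a threshold.
from rapidfuzz import fuzz, process, utils

scorecard_names = [
    "Stanford University",
    "Massachusetts Institute of Technology",
    "University of California-Berkeley",
]

def match_university(raw_name, choices, threshold=85):
    result = process.extractOne(
        raw_name,
        choices,
        scorer=fuzz.token_sort_ratio,
        processor=utils.default_process,  # lowercase and strip punctuation
    )
    # extractOne returns (choice, score, index), or None if choices is empty
    if result and result[1] >= threshold:
        return result[0]
    return None

print(match_university("Massachusetts Inst of Technology", scorecard_names))
# -> "Massachusetts Institute of Technology"
```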

Conclusion

The paper suggests that connections based on academic affiliations in the VC sector enhance the exchange of valuable information, as opposed to directing funds to subpar startups. The old saying “It’s not what you know, but who you know” seems to be particularly true for VC investors. But what does that mean in practice?

For Founders: There are two ways you can leverage these insights. Firstly, focus your initial investor conversations on VCs from your own alma mater, as they’re more likely to convert and invest larger amounts. Secondly, check the backgrounds of the VC partners in your funnel, research potential common ground such as a shared alma mater, and don’t be shy to refer to it. While the first approach is surely in line with this study, the second leverages a similarity bias as described in “Whatever you do, be aware of these 21 biases”.

For Investors: If you’d like access to preferred deal flow from a specific institution, you would do well to hire an investor from its alumni pool. For example, my friend Judith is a General Partner at La Famiglia and an alumna of the famous CDTM (an elite program between TU Munich and LMU). Her firm invested in companies founded by other CDTM alumni such as Personio (10bn), Forto (2.1bn), Y42, Orbem, Luminovo, Languse, Stitch, Zavvy, and others. Call this preferred access 😉

Video of the Week

AI of the Week

President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

HOME

BRIEFING ROOM

STATEMENTS AND RELEASES

Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.

As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.

The Executive Order directs the following actions:

New Standards for AI Safety and Security

As AI’s capabilities grow, so do its implications for Americans’ safety and security. With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems:

Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public. 

Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.

Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.

Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.

Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.

Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.

Protecting Americans’ Privacy

Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems. To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids, and directs the following actions:

Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques—including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.  

Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.

Evaluate how agencies collect and use commercially available information—including information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data.

Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems. These guidelines will advance agency efforts to protect Americans’ data.

Advancing Equity and Civil Rights

Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing an Executive Order directing agencies to combat algorithmic discrimination, while enforcing existing authorities to protect people’s rights and safety. To ensure that AI advances equity and civil rights, the President directs the following additional actions:

Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.

Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.

Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.

Standing Up for Consumers, Patients, and Students

AI can bring real benefits to consumers—for example, by making products better, cheaper, and more widely available. But AI also raises the risk of injuring, misleading, or otherwise harming Americans. To protect consumers while ensuring that AI can make Americans better off, the President directs the following actions:

Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services will also establish a safety program to receive reports of—and act to remedy—harms or unsafe healthcare practices involving AI. 

Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools.

Supporting Workers

AI is changing America’s jobs and workplaces, offering both the promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement. To mitigate these risks, support workers’ ability to bargain collectively, and invest in workforce training and development that is accessible to all, the President directs the following actions:

Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.

Produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.

Promoting Innovation and Competition

America already leads in AI innovation—more AI startups raised first-time capital in the United States last year than in the next seven countries combined. The Executive Order ensures that we continue to lead the way in innovation and competition through the following actions:

Catalyze AI research across the United States through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas like healthcare and climate change.

Promote a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities.

Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.

Advancing American Leadership Abroad

AI’s challenges and opportunities are global. The Biden-Harris Administration will continue working with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide. To that end, the President directs the following actions:

Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits, managing its risks, and ensuring safety. In addition, this week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak.

Accelerate development and implementation of vital AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable.

Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, such as advancing sustainable development and mitigating dangers to critical infrastructure.

Ensuring Responsible and Effective Government Use of AI

AI can help government deliver better results for the American people. It can expand agencies’ capacity to regulate, govern, and disburse benefits, and it can cut costs and enhance the security of government systems. However, use of AI can pose risks, such as discrimination and unsafe decisions. To ensure the responsible government deployment of AI and modernize federal AI infrastructure, the President directs the following actions:

Issue guidance for agencies’ use of AI, including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment.  

Help agencies acquire specified AI products and services faster, more cheaply, and more effectively through more rapid and efficient contracting.

Accelerate the rapid hiring of AI professionals as part of a government-wide AI talent surge led by the Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and Presidential Innovation Fellowship. Agencies will provide AI training for employees at all levels in relevant fields.

As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI. The Administration has already consulted widely on AI governance frameworks over the past several months—engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The actions taken today support and complement Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.

The actions that President Biden directed today are vital steps forward in the U.S.’s approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.

For more on the Biden-Harris Administration’s work to advance AI, and for opportunities to join the Federal AI workforce, visit AI.gov.

211. Regulating AI by Executive Order is the Real AI Risk

The President’s Executive Order on Artificial Intelligence is a premature and pessimistic political solution to unknown technical problems and a clear case of regulatory capture.

STEVEN SINOFSKY

NOV 1, 2023

The President’s Executive Order on Artificial Intelligence is a premature and pessimistic political solution to unknown technical problems and a clear case of regulatory capture at a time when the world would be best served by optimism and innovation.

This week President Biden released the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” as widely anticipated.

White House announces executive order.

I wanted to offer some thoughts on this because, as a technologist, student of innovation, and executive who long experienced the impact of regulation on innovation, I feel there is much to consider when seeing such an order and approach to technology innovation.


Unlike past initiatives from the executive branch, the first thing I noticed is that this was in fact an Executive Order or EO. It was not a policy statement or aspirational document. This was not the work of a leader of science like Vannevar Bush working through an office like the “Office of Scientific Research and Development” writing “As We May Think”.

“As We May Think”, Life Magazine reprint.

Instead, this document is the work of aggregating policy inputs from an extended committee of interested constituencies while also navigating the law—literally what is it that can be done to throttle artificial intelligence legally without passing any new laws that might throttle artificial intelligence. There is no clear owner of this document. There is no leading science consensus or direction that we can discern. It is impossible to separate out the document from the process and approach used to “govern” AI innovation. Govern is quoted because it is the word used in the EO. This is so much less a document of what should be done with the potential of technology than it is a document pushing the limits of what can be done legally to slow innovation.

You have to read this document starting from the assumption that AI needs to be regulated immediately and forcefully and do so without the accountability of the democratic process. It doesn’t really matter what view you have of AI from accelerate to exterminate, but knowing the process one just has to be concerned. Is AI truly such an immediate existential risk that the way to deal with it is to circumvent the democratic process?

There are three elements of executive orders that are critical to understand:

They are not based on first principles and are politically expeditious approaches to getting the government to do something. They do not look at what should be regulated. They are focused on what can be regulated.

They are not accountable as laws are. A party does not challenge the executive order on the merits/constitutionality of the contents of the order, but rather on whether the order was an overreach of the specific authority granted to the executive branch in perhaps some hardly connected law on how an agency is run. If you are one to have distrust for putting the power of government in the hands of unelected bureaucracies, then executive orders should be a major red flag.

The creation of an executive order is not subject to the same levels of transparency and accountability as is the legislative process. There’s no easily discovered history of debate. There’s no clear view of inputs or sources of information to later challenge. There’s no mechanism for competing interests to weigh in on the effort before it is the “law of the land”. If you are one to argue against money and influence in politics, then on principle you should loathe executive orders.

Now of course if you like the results of an executive order then these are all great. Unfortunately, the history of executive orders is that of them becoming increasingly important as the legislature becomes less effective or writes bills that are increasingly vague. We saw President Trump reverse a bunch of President Obama executive orders on his first day. Many people cheered. Then we saw President Biden reverse a bunch of President Trump orders on his first day. Many different people cheered. This might be fun politically but when you consider innovation this is a disaster. Such an approach is akin to trying to build something amazing while working with a constant threat of a reorg or having resources pulled out from under the team. EOs are not a way to govern effectively in general, and they are specifically not a way to “govern” innovation.

The lens I used when reading the document was to imagine being around in the 1970s at the dawn of the microprocessor, the database, and the technologies that became the foundation of the internet we know today. At each of those there were people terribly concerned about what could be. Books were written. Science fiction resulted. Movements were created. At the time, however, the government and industry were focused on innovation. They were not focused on issues ancillary to innovation or turning science fiction or political fears into a less worrisome product roadmap.

Some pessimistic books from my own shelves describing fears of technology that were written right at the start of the upward curve of the technology.

Imagine where we would be if at the dawn of the microprocessor someone was deeply worried about what would happen if “electronic calculators” were capable of doing math faster than even the best humans. If that would eliminate the jobs of all math people. If the enemy were capable of doing math faster than we could. Or if the unequal distribution of super-fast math skills had an impact on the climate or job market. And the result of that was a new regulation that set limits for how fast a microprocessor could be clocked or how large a number it could compute?

AI’s proxy war heats up as Google reportedly backs Anthropic with $2B

By Devin Coldewey, TechCrunch / 1:55 PM PDT, October 27, 2023



With a massive $2 billion reported investment from Google, Anthropic joins OpenAI in reaping the benefits of leadership in the artificial intelligence space, receiving immense sums from the tech giants that couldn’t move fast enough themselves. A byword for the age: Those who can, build; those who can’t, invest.

The funding deal, according to sources familiar with the matter cited by The Wall Street Journal, reportedly involves $500 million now and up to $1.5 billion later, though what timing or conditions, if any, are attached is unclear. I’ve asked Anthropic for comment on the matter.

It recalls — though it does not quite match — Microsoft’s enormous investment in OpenAI early this year. But with Amazon committing to as much as $4 billion to Anthropic, the funding gap is probably more theoretical than practical.

The Google investment is just the latest in a developing proxy war between rival companies with a limited number of champions to back. Though all these companies are powerful and expert in many areas of technology, the simple fact is none of them would be able to stand up a credible competitor to either OpenAI or Anthropic in the area of large language models. And since everyone is also betting that LLMs are going to upend their business models and become crucial components of any future tech platform, they can’t afford to not have at least partial ownership of the leaders in the space.

They have more than money, as well: It would be similarly difficult for the AI startups (though one may well question that title now) to stand up the infrastructure needed to build and deploy these AI models at the scales required to operate profitably. So the deals also involve things like compute credits and mutual aid.

https://techcrunch.com/2023/04/06/anthropics-5b-4-year-plan-to-take-on-openai/

But it would be silly for them to all invest in the same one and all become its customers as well. Fortunately there are a few outfits worth investing in, with OpenAI and Anthropic the obvious contenders.

When I spoke with Anthropic CEO and co-founder Dario Amodei at Disrupt last month, he hinted (though it is only clear now in retrospect) at this coming cash infusion.

“We’ve only been around for a little over two and a half years… in that time, we’ve managed to raise $1.5 billion, which is a lot. Our team is much smaller, and yet we’ve managed to hold our own,” he said. “We have really, very radically been able to do more with less. And I think relatively soon, we’re going to be in a position to do more with more.”

It all fits into the plan outlined in internal documents obtained by TechCrunch in April: raise $5 billion (or more) to take on OpenAI directly.

AI: Google’s strategy with Anthropic

…upping the ante on their Google Cloud LLM AI options

MICHAEL PAREKH, OCT 29, 2023

The chips keep flowing in the AI Tech Wave gold rush, with billions in preemptive capex and partnership investments building out the infrastructure in these early days. It was only last month that I’d written in “Amazon bets on an LLM AI” about Amazon’s multi-billion dollar investment in and partnership with LLM AI company Anthropic.

This was AFTER Google, with multiple Foundation LLM AI models of its own like PaLM 2 and the upcoming multimodal Gemini, had already partnered with Anthropic with billions earlier. The logic of the moves was of course as follows, as I’d explained:

“Key to understand here is that it’s an AI Infrastructure chess game in these early days of the AI Tech Wave, with the companies in the first three boxes below trying to make sure the companies in boxes 5 and especially 6, are their customers and partners down the road.”

Well, it may be a poker game as well as a chess game, as Google seems to have upped the ante with Anthropic. As Reuters reports, “Google agrees to invest up to $2 billion in OpenAI rival Anthropic”:

“Google has agreed to invest up to $2 billion in the artificial intelligence company Anthropic, a spokesperson for the startup said on Friday.”

“The company has invested $500 million upfront into the OpenAI rival and agreed to add $1.5 billion more over time, the spokesperson said.”

“Google is already an investor in Anthropic, and the fresh investment would underscore a ramp-up in its efforts to better compete with Microsoft, a major backer of ChatGPT creator OpenAI, as Big Tech companies race to infuse AI into their applications.”

Anthropic, as I have discussed before, was founded not too long ago by former founders/employees of OpenAI, and has raised billions from financial and strategic investors to become the leading developer of Foundation LLM AI models after OpenAI. As Reuters goes on to explain:

“Amazon.com also said last month it would invest up to $4 billion in Anthropic to compete with growing cloud rivals on AI.”

“In Amazon’s quarterly report to the U.S. Securities and Exchange Commission this week, the online retailer detailed it had invested in a $1.25 billion note from Anthropic that can convert to equity, while its ability to invest up to $2.75 billion in a second note expires in the first quarter of 2024.”

“Anthropic, which was co-founded by former OpenAI executives and siblings Dario and Daniela Amodei, has shown efforts to secure the resources and deep-pocketed backers needed to compete with OpenAI and be leaders in the technology sector.”

The key motivation for Google to understand here is that Anthropic’s LLM AI models are an important EXTERNAL LLM AI offering for business customers of its Google Cloud data center business, which ranks No. 3 behind Microsoft Azure and No. 1 Amazon AWS.

As I outlined last week, Amazon, Microsoft and Google all reported their quarterly earnings, and public investors are very focused on the relative performance of each of the companies’ cloud business relative to enterprise and business customers ramping up AI infrastructure products and services. 

While Google has industry-leading Foundation LLM AI products of its own to go head to head with Microsoft and OpenAI’s GPT-4 LLM AI, with PaLM 2 and the upcoming Google Gemini, those products are more for Google’s OWN products and services, not necessarily to be offered to business and enterprise customers to build their own AI products and services.

ChatGPT Is the Best Journal I’ve Ever Used

My slow and steady progression to living out the plot of the movie ‘Her’

BY DAN SHIPPER, OCTOBER 27, 2023

This is a joke, but it’s not entirely wrong either.

For the past few weeks, I’ve been using GPT-3 to help me with personal development. I wanted to see if it could help me understand issues in my life better, pull out patterns in my thinking, help me bring more gratitude into my life, and clarify my values.


I’ve been journaling for 10 years, and I can attest that using AI is journaling on steroids. 

To understand what it’s like, think of a continuum plotting levels of support you might get from different interactions:

Talking to GPT-3 has a lot of the same benefits as journaling: it creates a written record, it never gets tired of listening to you talk, and it’s available day or night. 

If you know how to use it correctly and you want to use it for this purpose, GPT-3 is pretty close, in a lot of ways, to being at the level of an empathic friend:

If you know how to use it right, you can even push it toward some of the support you’d get from a coach or therapist. It’s not a replacement for those things, but given its rate of improvement, I could see it being a highly effective adjunct to them over the next few years. 
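To make that concrete, here is a minimal sketch of a journaling loop built on the 2023-era (pre-1.0) openai Python client; the system prompt is an illustrative assumption, not Shipper’s exact setup.

```python
# A toy journaling companion: keep the running conversation as context so the
# model can notice patterns across entries. Assumes the pre-1.0 openai client.
import openai

SYSTEM = ("You are an empathic journaling companion. Ask one reflective "
          "question at a time, point out recurring patterns in my entries, "
          "and gently prompt me toward gratitude.")

history = [{"role": "system", "content": SYSTEM}]

def journal(entry):
    history.append({"role": "user", "content": entry})
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(journal("I keep putting off the launch because I'm afraid of criticism."))
```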

People who have been using language models for much longer than I have seem to agree:

(Nick is a researcher at OpenAI. He’s also into meditation and is generally a great follow on Twitter.)

It sounds wild and weird, but I think language models can have a productive, supportive role in any personal development practice. Here’s why I think it works.

Why chatbots are great for journaling

Journaling is already an effective personal development practice. ….

Citi Used Generative AI to Read 1,089 Pages of New Capital Rules

Bank set up AI task forces, started pilot program for coders

Wall Street races to adopt technology to improve efficiencies

Citigroup Inc. headquarters in New York. Photographer: Daniel Acker/Bloomberg

By Katherine Doherty, October 27, 2023 at 2:30 AM PDT

Citigroup Inc. is planning to grant the majority of its over 40,000 coders access to generative artificial intelligence as Wall Street continues to embrace the burgeoning technology. 

As part of a small pilot program, the Wall Street giant has quietly allowed about 250 of its developers to experiment with generative AI, the technology popularized by ChatGPT. Now, it’s planning to expand that program to the majority of its coders next year. 

The bank and its rivals have slowly begun experimenting with the technology, which created waves last year when ChatGPT made its debut and showed how generative AI can produce sentences, essays or poetry based on a user’s simple questions or commands. The technology typically creates this new work after being trained on vast quantities of pre-existing material.

Increasingly, bank executives argue artificial intelligence will make their staffers more efficient. When federal regulators dropped 1,089 pages of new capital rules on the US banking sector, for instance, Citigroup combed through the document word by word using generative AI.

The bank’s risk and compliance team used the technology to assess the impact of the plans, which will determine how much capital the lender has to set aside to guard against future losses. Generative AI organized the proposal into pieces and composed key takeaways, which the team then presented to the outgoing treasurer Mike Verdeschi. 
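Bloomberg doesn’t describe Citi’s pipeline in detail, but the chunk-then-summarize pattern it gestures at looks roughly like this sketch; the prompts, model, and chunk size are assumptions (pre-1.0 openai client).

```python
# Map-reduce summarization sketch: split a long rule document into chunks,
# summarize each, then compose the partial summaries into key takeaways.
import openai

def ask(text, instruction):
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "system", "content": instruction},
                  {"role": "user", "content": text}],
    )
    return resp.choices[0].message.content

def key_takeaways(document, chunk_chars=12000):
    chunks = [document[i:i + chunk_chars]
              for i in range(0, len(document), chunk_chars)]
    # Map step: summarize each chunk independently.
    partials = [ask(c, "Summarize the capital-rule provisions in this excerpt.")
                for c in chunks]
    # Reduce step: merge the partial summaries into takeaways.
    return ask("\n\n".join(partials),
               "Combine these section summaries into key takeaways for a bank treasurer.")
```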

At Citigroup, the advent of ChatGPT sparked a concerted push to use artificial intelligence. In response, earlier this year the bank formed two task forces to explore potential uses for the technology, according to Stuart Riley, the bank’s co-chief information officer who is overseeing the effort. 

Stuart Riley. Source: Citigroup Inc.

“This is across every part of the bank,” Riley said in an interview. “Some of them are small, helping with daily routine, and others are complex bodies of work.”

Bank staffers have been increasingly worried that the technology might replace them. That’s not so, according to Riley. Whether AI or employees generate a line of code, it will still need human oversight, he said. 

“Humans still look at the code to make sure it’s doing what they expected it to do. They are still supervising, like a co-pilot,” he said. “The AI tool is given to the developer to enable them to produce code more quickly – it’s not replacing them. We are using AI to amplify the power of our employees.”

JPMorgan Chief Executive Officer Jamie Dimon expressed a similar sentiment this month when he said AI is likely to make dramatic improvements to workers’ quality of life, including cutting the work week down to three-and-a-half days for some. The technology is already being used by thousands at his bank, he said, and between February and April, the lender advertised for more than 3,500 related roles, according to data from consultancy Evident.

Citigroup is also exploring modernizing its systems using AI, a process that would ordinarily cost millions of dollars and require substantial manpower, according to Riley. To update legacy systems, the banking giant needs to change the coding language, and AI can help translate code from an older language like COBOL into a more modern one like Java.

The bank is examining ways to use generative AI to analyze documentation and generate content. To hasten the process of parsing reams of quarterly results, AI can analyze earnings and other public documents, freeing up staff to spend more time with clients rather than crunching numbers.

How EQT Motherbrain uses LLMs to map companies to industry sectors

Valentin Buchner

Oct 3

This post originated from my Master’s thesis project, supervised by Lele Cao (EQT) and Jan-Christoph Kalo (Vrije Universiteit Amsterdam, University of Amsterdam). Proofreaders: Lele Cao, Armin Catovic, and Finn McLaughlan.

One could say that the use cases of a large language model (LLM) are only limited by an engineer’s creativity. Using GPT for M&A sourcing is a great example of a valuable use case, and this blog post will introduce another productive use of LLMs: industry sector classification. To not get stuck in buzzwords, let’s start with the problem we are addressing:

A schematic example of EQT’s proprietary sector framework, taken from Cao et al. (2023). Credits go to Cecilia Henje for creating this visualization.

EQT follows a thematic investment strategy, meaning it invests behind long-term global trends, such as sustainability and digital transformation. To be able to make targeted investments falling into relevant macro-trends, EQT has created its own industry sector taxonomy, distinguishing between more than 300 different sectors organized in a four-level hierarchy. However, an investment professional needs to know which companies fall into their sector of interest. While we know the company-sector mapping for some companies based on past deals and manual annotations, it does not sound like a good idea to annotate all ~50M companies in our database manually.

This is where LLMs come into play. In machine learning terms, our problem can be defined as a multi-label text classification task. For each company, the most relevant information to infer its industry sector exists in unstructured text: we have a company’s name, a description that defines its core business activities, and a set of keywords. This information can be fed as input to a machine learning solution that predicts the industry sector(s) this company belongs to. It is important to note that a company can belong to multiple industries at the same time. For instance, a company making an educational platform about stock trading can belong to both the education and capital markets sectors.
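To see the framing concretely, here is a generic multi-label baseline on toy data, not EQT’s production approach: one binary classifier per sector over a shared text representation.

```python
# Toy multi-label sector classification: free-text company info in, a set of
# sector labels out. A TF-IDF + one-vs-rest baseline, purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

companies = [
    "Acme Learn - platform teaching retail investors how to trade stocks",
    "PipeFix - marketplace connecting homeowners with vetted plumbers",
]
labels = [["education", "capital markets"], ["marketplaces"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)  # one binary column per sector

clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(companies, y)

pred = clf.predict(["StockSchool - courses on equity trading"])
print(mlb.inverse_transform(pred))  # predicted sector tuples (toy data, so noisy)
```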

For the choice of machine learning model, there are clearly many options, and those who are curious to learn more about the baselines we compared our LLM approach with are welcome to give this paper a read. This blog post will focus on the method that performed best on our industry sector classification problem: Prompt-Tuned Embedding Classification (PTEC).

Wait… what’s that? Let’s break this down into its parts: Prompt Tuning and Embedding Classification.

Prompt Tuning

Prompt Tuning is a novel approach to adapt LLMs to a desired downstream task as efficiently as possible. As LLMs are reaching hundreds of billions of parameters in size, fine-tuning and updating all these parameters is computationally very expensive. The paradigm of parameter-efficient fine-tuning (PEFT) suggests that one should only update the subset of the parameters most crucial to the LLM’s performance. Different PEFT methods focus on different parameter groups, and Prompt Tuning is the method that updates the smallest number of parameters.

Let’s have a look at the figure below to see how this works. On the left, we see that the input text is split into separate tokens and the vector representations (embeddings) of these tokens are then used to create the token embedding matrix. This is the usual process when working with LLMs, and very well explained in this article by the Financial Times. Usually, these token embeddings would now be fed into the LLM, which would then generate an answer.

A schematic overview of Prompt Tuning

However, as we see in the figure, the secret sauce of Prompt Tuning is the soft prompt matrix, which is concatenated with the token embedding matrix before feeding both into the LLM. Similar to the instructions created by a prompt engineer, this soft prompt matrix serves as instructions to the LLM. It consists of a predefined number of virtual token embeddings. These virtual token embeddings exist in the same space as the real token embeddings (e.g. they have the same dimensionality), without having a defined meaning in natural language. The crucial difference from natural language instructions is that since the soft prompt is a matrix of parameters, we can train it with mathematical optimization methods such as gradient descent. This is the same method as used during LLM pretraining, and commonly used to train deep learning algorithms.

Since the soft prompt usually contains less than 0.1% of the LLM’s parameters, it is much more efficient to tune. Nevertheless, it can achieve similar, and in some cases even better, performance than full model fine-tuning! How can this be?

Fine-tuning the complete LLM on a given downstream task often results in forgetting what the LLM learned during pretraining, referred to in the literature as “catastrophic forgetting”. This hurts its ability to generalize. For instance, while an LLM currently understands many different languages, if we fine-tune it on only English data, it may forget other languages. Prompt Tuning avoids this by only fine-tuning the soft prompt and keeping the LLM parameters frozen. This retains the generalizing abilities the LLM learned during pretraining.
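To make the mechanics concrete: with 4,096-dimensional token embeddings, 20 virtual tokens add only 20 × 4,096 ≈ 82,000 trainable parameters, a vanishing fraction of a multi-billion-parameter model. Below is a minimal PyTorch sketch of the idea, assuming a Hugging Face-style model that accepts `inputs_embeds`; it is a simplified illustration, not EQT’s PTEC implementation.

```python
import torch
import torch.nn as nn

class PromptTunedLM(nn.Module):
    """Wraps a frozen LLM and learns only a soft prompt matrix."""
    def __init__(self, llm, n_virtual_tokens=20):
        super().__init__()
        self.llm = llm
        for p in self.llm.parameters():
            p.requires_grad = False  # keep the pretrained weights frozen
        dim = llm.get_input_embeddings().embedding_dim
        # The soft prompt: trainable virtual token embeddings.
        self.soft_prompt = nn.Parameter(torch.randn(n_virtual_tokens, dim) * 0.02)

    def forward(self, input_ids, attention_mask):
        tok_emb = self.llm.get_input_embeddings()(input_ids)
        b = tok_emb.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(b, -1, -1)
        # Concatenate the soft prompt with the real token embeddings.
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(b, self.soft_prompt.size(0),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.llm(inputs_embeds=inputs_embeds, attention_mask=attention_mask)

# Only the soft prompt receives gradient updates:
# optimizer = torch.optim.AdamW([model.soft_prompt], lr=1e-3)
```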

Apart from efficiently achieving great downstream task performance, there is one more benefit: We can batch multiple tasks in the same model. Let’s elaborate on this:

….. More

VC GPT: How LLMs are strengthening SignalFire’s in-house AI

October 12, 2023

At SignalFire, we’re not only investing in AI companies, but we’re also building an AI-driven venture firm day-to-day. Artificial intelligence and machine learning have been at the core of our operations for more than ten years.

When we started in 2013, the firm set out to create a machine learning–powered talent platform to help our investment team spot high-potential founders and companies, and to help our portfolio companies improve their recruiting efforts. Since then, we’ve woven data into the fabric of how we operate as a firm, giving rise to our in-house platform, Beacon AI. Alongside daily fine-tuning—via reinforcement learning from human feedback by our investment, talent, and data teams—we’ve expanded Beacon to incorporate LLMs over the last year.

Highlights

SignalFire has been building AI internally to help our investment and talent teams for a decade. Now we’re integrating LLMs into our Beacon AI data platform to speed up workflows and surface more accurate and comprehensive results for company and people searches.

Applying GPT lets us surface concise summaries of companies we’re evaluating and similar companies that might be competitive, and it allows us to route high-potential companies to the right investment team members.

GPTs let us unify different job titles that mean the same thing, expanding results for recruiting searches; programmatically spot gaps in teams; and provide portfolio companies with more sales leads, complete with contact info (see the sketch after these highlights).

Experimentation is critical to learn where LLMs perform best, when to use off-the-shelf models vs. in-house models, and how to minimize costs as we continue to pioneer new approaches to using AI in venture.
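As a concrete illustration of the job-title unification highlight above, here is a hypothetical sketch using OpenAI’s chat completions API; the canonical title list, model choice, and prompt are my assumptions, not SignalFire’s actual Beacon pipeline.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical canonical titles -- not SignalFire's real taxonomy.
CANONICAL_TITLES = ["Software Engineer", "Product Manager", "Data Scientist", "Recruiter"]

def normalize_title(raw_title: str) -> str:
    """Map a free-form job title onto one canonical title via an LLM."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Map the given job title to exactly one of: "
                        + ", ".join(CANONICAL_TITLES)
                        + ". Reply with the canonical title only."},
            {"role": "user", "content": raw_title},
        ],
    )
    return response.choices[0].message.content.strip()

print(normalize_title("SWE II, Infrastructure"))  # -> "Software Engineer"
```

A static lookup table breaks down on long-tail titles (“Growth Ninja”, “Member of Technical Staff”), which is exactly where an LLM’s world knowledge earns its keep.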

Beacon: SignalFire’s homebrewed AI

The Beacon AI platform does quite a few things for the firm. Before we dive into how we’re utilizing LLMs, let’s cover what Beacon AI is in the first place, and why it’s transformative for VC.

At its core, Beacon AI is a massive data platform that tracks and ranks more than 80 million companies, 600 million people, and millions of open-source projects. To power it, we rely on 40 different datasets to help us paint a holistic picture of each company, person, and project. We then rely on a handful of proprietary machine learning algorithms developed by our team over the past decade for ranking.

AI for investment sourcing

But what does a platform that ranks people, companies, and open-source talent do for the firm? First, it helps us research every deal that we do. From understanding a startup’s talent ranking to the market the company serves, we can apply a wide swath of quantitative learnings to every company and founding team we consider. This helps us ask more relevant questions and make more principled decisions.

In addition to research, we use the platform to proactively recommend founders, companies, and open-source projects that our investors should contact. By tracking people’s mobility and quantifying some of the essential aspects of teams, companies, and projects, we can identify opportunities for investment before they become available on the market, giving our investment team a sourcing superpower. If a company is hiring high-quality engineers or gaining momentum on GitHub, Beacon will surface it to make sure we’ve taken a look.
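To illustrate the kind of signal-based surfacing described above, here is a toy sketch; the fields, thresholds, and company names are hypothetical, not Beacon’s actual features or weights.

```python
from dataclasses import dataclass

@dataclass
class CompanySignals:
    name: str
    strong_hires_last_quarter: int  # e.g. engineers joining from highly ranked companies
    github_star_growth_pct: float   # repo star growth over the trailing 90 days

def should_surface(c: CompanySignals) -> bool:
    """Flag a company for investor review if either hypothetical signal crosses a threshold."""
    return c.strong_hires_last_quarter >= 3 or c.github_star_growth_pct >= 50.0

pipeline = [
    CompanySignals("Acme Infra", strong_hires_last_quarter=4, github_star_growth_pct=12.0),
    CompanySignals("Quietware", strong_hires_last_quarter=0, github_star_growth_pct=3.5),
]
for company in pipeline:
    if should_surface(company):
        print(f"Surface for review: {company.name}")  # -> Acme Infra
```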

AI for portfolio success

We also use the platform to support our portfolio companies. Most venture capital firms try to assist their portfolios through human-centric services alone, which doesn’t scale as the portfolio grows. The alternative, letting portfolio support wane in efficacy, isn’t optimal either. So, while SignalFire does provide human-centric Portfolio Success as a Service, we also believe in augmenting and scaling our ability to serve people via software.

Beacon AI helps us power a talent search tool that our portfolio companies can use for free to find the best talent as they build their teams, including people who aren’t officially looking for work but who our AI suggests might be open to new opportunities. If someone has worked at the best companies in their industry or is a prolific GitHub contributor, but their current company is bleeding talent, we know to recommend them to our portfolio companies. For example, our portfolio company EvenUp has used Beacon extensively for recruiting as it rapidly scales: it recently sourced 100 top candidates for its first product designer role, leading to two finalists and a great hire.

Finally, Beacon helps our portfolio companies find prospective customers. It can cross-reference their ideal customer profile with our talent search engine to provide them with lists of sales leads, complete with contact info.

Rather than ceasing support for earlier investments or companies still finding product-market fit, building scalable AI lets us help our entire portfolio as it grows. Now, we’re adding in LLMs to make our Beacon AI even more powerful.

News Of the Week

Motherbrain, i.e. how EQT managed to analyze 50 mln companies

by Giuliano Castagneto

October 3, 2023

Lele Cao

Swedish EQT, born in 1994 as the Wallenberg industrial group’s investment arm, has gradually evolved into one of the world’s leading private capital managers, with €224 billion under management spread across a wide spectrum of asset classes, from venture capital and private equity to infrastructure and real estate. In 2016 EQT stepped into the AI world, investing in the sector but, first and foremost, launching Motherbrain, an innovative artificial intelligence-based approach to investment targeting that was first applied to venture capital and later to almost all the other asset classes EQT deals with. Motherbrain’s research head Lele Cao, due to present his creation at the 0100 Mediterranean Conference 2023, which focuses on private capital investments in the Mediterranean Basin and will be held in Rome on October 18, explained the basic principles behind Motherbrain to BeBeez International (the conference’s media partner).

BeBeez International: How is Motherbrain organized?
Cao: It is organized into three divisions, Labs, Platform, and Research, employing around 30 people. Labs performs the day-to-day operations, i.e. the tasks requested by EQT’s investment teams. Platform is essentially an engineering unit, developing new analytic tools, again based on investment managers’ input. Finally, Research, which employs just three people, studies how to improve knowledge acquisition, including by adopting emerging technologies to enlarge and effectively manage our knowledge base. Of course, development also stems from the continuous interaction between Motherbrain’s technicians and EQT’s investors.

BeBeez International: What does Motherbrain actually do?
Cao: Following instructions from the investment teams, we scan a huge database, constantly fed by a wide range of data sources, to provide them with several types of information that help them select potential investments with a high likelihood of success. But not only that: we can provide information that is relevant not only to investment managers but also to EQT’s portfolio companies.

BeBeez International: For instance?
Cao: A typical example is the search for talent or M&A targets that a portfolio company needs to boost sales or introduce a new technology. We may also run sensitivity analyses to test, say, the resilience of a portfolio or prospective company to external shocks.

BeBeez International: Which data do you use to run those evaluations?
Cao: All sorts, from market and sector data to press coverage and even social media posts. Knowledge acquisition basically consists of linking different sets of data relating to concepts and entities.

BeBeez International: What is Motherbrain’s track record?
Cao: Since its inception, it has regularly scanned some 50 million companies against a long list of topics.

BeBeez International: ESG has become one of the hottest topics in investment strategy. How can Motherbrain help EQT’s portfolio managers?
Cao: The central issue in ESG investing is the availability of relevant data on suitable companies. Of course, the knowledge base keeps growing as data sources and past experience accumulate. The same holds true for all the other functionalities, which evolve fast, steadily improving the system’s performance.

BeBeez International: In 2023 EQT also started to employ ChatGPT-like technologies to further strengthen Motherbrain.
Cao: We are developing our own application using large language models, but there is still a lot of work to do, particularly on the interaction with our knowledge graph-based information and technology.

AirBnb Preseed Deck Breakdown

Hey Persuaders!

Liam Gill, October 29, 2023

This is a short 14-slide deck. A few things to pay attention to: 

How they pitch a two-sided marketplace in a simple, easy-to-understand way. Many founders who pitch marketplaces end up with really complex pitches as they try to show the value for the buyer and seller simultaneously. 

The final slide. The way they present their offer instead of asking for money is exactly what I’ve been teaching you. I’m confident their script would have used some form of trend stacking to accompany the slide. 

Let’s get into it. 

Slide 1: Introduction

Note: I pitched to VCs that Brian Chesky pitched to with this deck, so I have a few insights from them into their reactions and what did and didn’t work, and I’ll mix those into my own commentary.

One of the things investors loved about this deck was that the first slide told them exactly what AirBnB did. I’d still recommend that your first slide communicate your vision, so if I were rewriting this I’d suggest, “Allowing travellers to book a room with locals instead of being isolated in hotels”. That change would add an emotional aspect; if they were writing the deck today, it would also need to become more general, as AirBnB now has a much grander vision.

Startup of the Week

Musk: SpaceX’s Starlink ‘has achieved breakeven cash flow’

Aria Alamalhodaei (@breadfrom) / 12:30 PM PDT • November 2, 2023


SpaceX’s Starlink has “achieved breakeven cash flow,” CEO Elon Musk said Thursday, a milestone achievement for the rocket company’s four-year-old satellite internet business unit.

He announced the news in a post on X, the social media platform that he also owns. He added that Starlink “is also now a majority of all active satellites and will have launched a majority of all satellites cumulatively from Earth by next year.”

It is unclear whether the breakeven milestone refers to the most recent quarter or a longer period. Regardless, the achievement leaves open the question of whether SpaceX may be nearing taking Starlink public via an IPO — a move that Musk once said the company would execute once the cash flow became predictable.

SpaceX president and chief operating officer Gwynne Shotwell gave a hint that the economics were becoming more predictable earlier this year, when she said that Starlink had already achieved a cashflow positive quarter in 2022.

Starlink’s mega-constellation currently sits at around 5,000 satellites, all of these launched by SpaceX rockets. That vertical integration has been key to the company expanding the Starlink network at such aggressive rates — SpaceX launched the first operational Starlink birds in 2019 — and being able to grow at a pace that far outstrips competitors.

Since then, the network has grown to around 2 million subscribers and spans verticals from consumer to maritime and aviation. More recently, Starlink made the news for its role in international conflict — including the war in Ukraine and more recently the war between Israel and Hamas. SpaceX recently established a defense-focused version of Starlink called Starshield, a sign of the Pentagon’s interest in procuring satellite internet capabilities.

SpaceX is valued at around $150 billion. The company posted revenues last year of $1.4 billion, The Wall Street Journal reported in September.

Defense Tech Startup Shield AI Raises $200M At $2.7B Valuation

Chris Metinko

October 31, 2023

Shield AI, the defense and aerospace startup creating AI pilots, raised a $200 million Series F at a $2.7 billion valuation.

The round was co-led by Riot Ventures and Thomas Tull’s US Innovative Technology Fund, with participation from Ark Investment and returning investors Disruptive and Snowpoint Ventures.

The round comes less than a year after the San Diego-based company was valued at $2.2 billion after raising $60 million in December.

Shield AI’s software, Hivemind, enables aircraft to operate autonomously in high-threat environments.

“We’re building the world’s best AI pilot to ensure air superiority and deter conflict because we believe the greatest victory requires no war,” said co-founder and President Brandon Tseng, in a release. “This funding accelerates the scaling of Shield AI’s products, enabling the deployment of intelligent, affordable mass — the most important non-nuclear deterrent for the next 30 years.”

Founded in 2015, Shield AI has raised approximately $773 million, according to Crunchbase.

Defense tech cash

The Shield AI round is the biggest this year for any defense tech startup.

Defense tech historically has not drawn the venture capital associated with many other tech sectors. Crunchbase data shows that last year, U.S.-based defense tech startups saw only $2.1 billion invested in 58 total deals — and that includes Anduril’s $1.5 billion Series E in December.

This year such startups have raised less than $600 million — including Shield AI’s massive round.

That is not unusual for defense technologies. Despite Silicon Valley-developed tech being used by the military for decades, some investors have shied away from the space for moral reasons, and tech companies have been wary of letting their tech be used for military purposes.

X of the Week
