AI-first technology companies serving the defense industry, such as Palantir, Primer and Anduril, are doing well. Anduril, for its part, reached a valuation of over $4 billion in less than four years. Many other companies that develop general-purpose, AI-driven technologies, such as image tagging, receive a large (undisclosed) portion of their revenue from the defense industry.
Investors in AI-driven tech companies that aren't even meant to serve the defense industry often find that these companies end up helping, sometimes inadvertently, other powerful institutions, such as police forces, municipal agencies and media companies, carry out their functions.
Most do a lot of good work, like DataRobot helping agencies understand the spread of COVID, HASH running vaccine delivery simulations, or Lilt making school communications available to immigrant parents in a US school district.
The first step in taking responsibility is knowing what is actually going on on the ground. That is easy for startup investors to ignore when AI-powered models are involved.
However, there are also less positive examples: technology made by the Israeli cyber-espionage company NSO was used to hack 37 smartphones belonging to journalists, human rights activists, business executives and the fiancée of murdered Saudi journalist Jamal Khashoggi, according to a report by The Washington Post and 16 media partners. The report claims the phones were on a list of more than 50,000 numbers based in countries that surveil their citizens and are known to have hired the Israeli company's services.
Investors in these companies may now face tough questions from other founders, backers and governments about whether the technology is too powerful, permits too much, or is applied too broadly. These are questions of degree, but sometimes they are not asked at all before an investment is made.
I've had the privilege of speaking to many people with many perspectives – CEOs of big companies, founders of (for now) small companies, and politicians – since publishing "The AI-First Company" and over nearly a decade of investing in such ventures. One important question keeps coming back to me: How do investors ensure that the startups they invest in apply AI responsibly?
Let's be frank: it's easy for startup investors to dismiss such an important question with something like, "It's so hard to tell when we're investing." Startups are nascent forms of something yet to come. However, AI-driven startups work with something powerful from day one: tools that provide leverage far beyond our physical, intellectual and temporal reach.
AI not only gives people the ability to lift heavier objects (robots) or take in more data (analytics), it also gives them the ability to look ahead in time (predictions). When people can make predictions and learn from the outcomes, they can learn quickly. When people can learn quickly, they can act quickly.
Like any tool, these tools can be used for better or for worse. You can use a stone to build a house or throw it at someone. You can use gunpowder for beautiful fireworks or to shoot bullets.
Essentially similar AI-based computer vision models can be used to understand the movements of a dance troupe or of a terrorist group. AI-powered drones can point a camera at us, but they can also point a gun at us.
This article covers the principles, metrics and policies of responsible investment in AI-driven companies.
Investors in and board members of AI companies should take at least partial responsibility for the decisions of the companies in which they invest.
Investors influence founders whether they like it or not. Founders are constantly asking investors which products to build, which customers to approach, and which transactions to execute. They do this to learn and improve their chances of winning. They also do this, in part, to keep investors engaged and informed, as they can be a valuable source of capital.