Investors don’t really need to invest in net new companies to get exposure to AI’s potential halo effect; if all your portfolio companies start to integrate with the right existing tools on the market, they could bloom too. It’s the promise of horizontal tech.
Natasha Mascarenhas, TechCrunch
In a previous post I used the games industry as an example to make the case that AI probably isn’t going to be disruptive to any major verticals. Instead, it’s another tool in the creative process which can empower better, more efficient outcomes.
The quote from Natasha helped cement that even further: AI should be viewed as a utility, similar to electricity or water. We’ve already witnessed the commoditization of computing power, and now it’s the turn of generative AI platforms.
Viewed through this lens, we can avoid a lot of the hyperbole about AI threatening categories of employment. It will certainly have a significant impact on the way many industries operate, but it won’t be fundamentally transformative. More evolution than disruption.
What does this mean, right now? Not a great deal. The more forward-thinking you are, the more likely you are to be exploring whether LLMs can help with your ‘jobs to be done’. In the most basic cases, can they ease your workload by eliminating some menial tasks, or provide a source of inspiration?
If there’s one area where LLMs have the potential to diverge from the ‘AI as a Utility’ idea, it’s in ‘whitelabelled’ solutions. In essence, if OpenAI could sell me a version of ChatGPT that is stripped of all reference to news, politics, current events, media – something with the conversational skills of a human but none of the knowledge – that could be tremendously useful. It could then be independently trained on very specific data sets to build a model that is practical and reliable in niche applications.1
This would open up LLM-driven applications across a range of industries, from customer service chatbots to NPCs in video games. Imagine a company-wide virtual assistant that is aware of every invoice, file, public Slack conversation or email. The possibilities are clearly wide-ranging and powerful.
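To make the ‘independently trained on very specific data sets’ idea concrete, here is a minimal sketch of preparing domain data for fine-tuning. Everything here is hypothetical – the example records and file name are invented – but the one-JSON-object-per-line chat format follows the convention used by several fine-tuning APIs.

```python
import json

# Hypothetical domain-specific Q&A pairs. In practice these would come
# from your own invoices, support tickets, internal docs, and so on.
examples = [
    {"prompt": "What is our refund window?", "completion": "30 days from delivery."},
    {"prompt": "Which invoice covers March?", "completion": "Invoice INV-0312."},
]

def to_chat_format(example):
    """Convert a Q&A pair into the chat-style message format
    commonly used when fine-tuning conversational models."""
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful company assistant."},
            {"role": "user", "content": example["prompt"]},
            {"role": "assistant", "content": example["completion"]},
        ]
    }

def write_jsonl(examples, path):
    """Write one JSON object per line -- the usual training-file layout."""
    with open(path, "w") as f:
        for ex in examples:
            f.write(json.dumps(to_chat_format(ex)) + "\n")

write_jsonl(examples, "training_data.jsonl")
```

The resulting file would then be uploaded to whichever fine-tuning service you use; the narrow, curated data is what makes the resulting model ‘practical and reliable’ in its niche.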
In fact, to expand on the AI as a utility metaphor, the closest parallel we can draw today – in terms of potential, risks, and regulation – is probably nuclear power.
I’m pro-nuclear, and understand the argument that regulation has slowed progress in that field. But what has really slowed the industry down is the fear sown by catastrophic accidents linked to poor oversight. Three Mile Island and Chernobyl wrecked public perception and political will, and engendered much of the over-regulation that followed.
For example, this 1978 article in The Washington Post covers the Soviet Union’s ‘optimistic’ approach to nuclear plant safety in the years prior to Chernobyl.
This paper from the IAEA, published as a response to Chernobyl but also mentioning Three Mile Island, describes an environment of broken trust, fear and disgust.
The dichotomy of ‘accelerationists’ and ‘doomers’ is childish. Neither is helpful. There has to be room for both progress and an appropriate level of caution. That doesn’t mean letting incumbents dictate the direction and severity of regulation, in the same way that it would have been insane to let the coal industry regulate nuclear. Regulatory capture is a concern, but too many are reducing this to a narrative in which any drive towards regulation is a threat to progress.
The world stands to gain significantly from AI, as it has from nuclear power. The dangers, while less obvious, may be no less threatening. Nobody really knows.2
As Elon Musk himself said: hope for the best, plan for the worst.
- Of course, a behemoth like Bloomberg can afford to build their own model to accommodate a specific focus. [↩]
- To be clear, what I’m referring to here is true AI. General intelligence. Today’s ‘generative AI’ is more of a distraction than a threat. Its primary role seems to be a lever which VCs and technologists can pull to raise capital. [↩]