Tag: AI

  • It’s not a bubble until we don’t know it’s a bubble

    The influence of productivity shocks, peer effects and cost of capital on AI IPO ambitions, and what happens next.

    While leaders at xAI, Anthropic and Mistral have been silent on their plans to go public, OpenAI is starting to open up.

    Back in May it was reported that negotiations with Microsoft included provisions that allowed OpenAI to file for an IPO. The transition to a Public Benefit Corporation (PBC) the following month made that technically possible. Both Altman (CEO) and Friar (CFO) have made statements alluding to the process since. Indeed, simply the fact that Friar is being put in front of the media more often as a leadership figure is a significant signal.

OpenAI’s most recent release may underline this direction. By prioritising model economics (focusing on the “router” capability) over raw model performance, OpenAI invited a poor reception for GPT-5. That may reflect a deliberate shift in posture toward public market metrics, and a willingness to take the PR hit.

    The prompt router also lays the groundwork for OpenAI to provide selective access to more expensive models. The next generation of LLMs will be powered by NVIDIA’s Blackwell chips, offering ~30x faster real-time inference. The chips are rolling out at the moment, with an impact on model releases expected next year.

    Assuming these models will be a major step-up in competence, this could be the tipping point for a wave of AI IPOs in 2026.


    Factor 1: Productivity Shocks

    In our model, two firms, with differing productivity levels, compete in an industry with a significant probability of a positive productivity shock. Going public, though costly, not only allows a firm to raise external capital cheaply, but also enables it to grab market share from its private competitors.

    IPO Waves, Product Market Competition, And the Going Public Decision: Theory and Evidence

    When an industry experiences (or anticipates) a significant positive productivity shock (an inflection point in their ability to generate value), this may trigger an “IPO wave”.

    Essentially, if there’s a significant step-up across the industry then there’s a real incentive to be the first (or at least be early) to tap public markets for capital to drive market-share expansion.

    LLMs have continued to improve over the last few years, with a number of hyped releases and growing experimentation amongst enterprise users, but there has yet to be a truly significant “productivity shock” moment.


    Factor 2: Peer Effects

    We find that observing a peer go public within the previous 12 months raises the propensity to undertake an IPO from a baseline rate of 0.31 percent per quarter to 0.44 percent per quarter, amounting to a 40 percent increase in IPO propensity. This result is robust to accounting for hot market effects and other common shocks that may affect competing firms’ IPO decisions.

    IPO Peer Effects

    The first to market has an advantage in that they may capture the demand for that industry amongst public market investors. They also bear all of the cost and the risk of blazing that trail, primarily that they might have greatly overestimated demand.

    There is some benefit to being a “fast follower” in these circumstances, which is often what triggers the “IPO wave” dynamic seen in the culmination of tech cycles. However, the later you are in that wave, the less of the benefit you capture.

These periods, often characterised as “IPO windows”, have been referred to in the literature as “windows of misopportunity” for investors due to the increased failure rate. However, it’s also true that the few survivors tend to appear more innovative (in patent quality and quantity) than IPOs issued in other periods.

    Overall, this chapter tend to conclude that “windows of opportunity” provides real opportunity to the most inventive private firms and allow them to raise public capitals to further their innovations.

Essays on IPO Cycles and Windows of Opportunity

    Factor 3: Cost of Capital

    We find that less profitable companies with higher investment needs are more likely to IPO. After going public, these firms increase their investments in both tangible and intangible assets relative to comparable firms that remain private. Importantly, they finance this increased investment not just through equity but also by raising more debt capital and expanding the number of banks they borrow from, suggesting the IPO facilitates their overall ability to raise funds.

    Access to Capital and the IPO Decision: An Analysis of US Private Firms

    There’s a common perception that going public is for mature companies who are past the period of aggressive growth, looking for more stable access to capital. This does not appear to be true.

In fact, companies that IPO are often doing so in order to increase their investment in growth, including intangible assets such as R&D spend. This is particularly true in heavily competitive markets and capital-intensive businesses.

    This has obvious relevance to LLM providers, who check a lot of these boxes. Certainly, in the phase of investing in infrastructure to support scale, lowering the cost of capital is a major priority.


    Factor 4: Beyond Hedging

    Similar to going public, hedging mitigates the effect of risk on a firm’s product market strategy, and, thus, results in greater product market aggressiveness. Therefore, in the presence of product market competition, hedging has a strategic benefit similar to that of an IPO. Importantly, we show that the availability of hedging reduces, but does not eliminate, the incentives to go public.

    Strategic IPOs and Product Market Competition

Not necessarily a reason why LLM providers may look to IPO, but rather why they haven’t until now: “hedging” in this context effectively reflects the relationships that many model providers have with large public companies like Microsoft, Amazon or Apple.

    Rather than going public themselves, they can rely on these partnerships to fund investment and distribute risk, offering some of the benefits of going public without any of the costs.

    However, the example of OpenAI’s relationship with Microsoft illustrates that it’s possible to outgrow these arrangements.


    IPO Windows

    Generally speaking, “IPO windows” are a mirage chased by liquidity-starved venture capitalists. A truly great company, like Figma, can IPO more-or-less whenever it wants to.

    However, that dynamic changes when you have a group of peer-companies in a fiercely competitive (and capital intensive) industry. At that point, it is likely that there will be some strategic clustering of IPO ambitions.

    A true “IPO window”, 1999/2000 or 2021, involves ~1,000 companies listing in the space of about six months. Diligence collapses, the quality of companies goes in the toilet, and public markets are torched for years afterwards. Sarbanes-Oxley killed this behavior in 2002, and it didn’t appear again until the low-interest-rate pandemic briefly drove public markets insane in 2021.

    Why Wait

    Assume that OpenAI is likely to be the first out. As today’s AI leader, with the widest consumer adoption and biggest brand, it seems the best positioned.1

    What are they waiting for?

    Primarily, they’ll be waiting to see if Blackwell unlocks the kind of productivity shock they are looking for. To clear their recent $500B valuation they’ll need to go public with undeniable momentum and a great story to tell new investors about future potential.

    Secondly, going public is just a huge amount of work. Both in a technical sense, preparing the company’s books for intense scrutiny, and in a brand and PR sense. Prospective investors may need educating about the product, or the image of leadership may need some rehabilitation.2

    IPOs are remarkably intense, and represent the most thorough inspection that a company will endure in its lifetime. This is why companies and their board of directors agonize over whether or not they are “ready” to go public. Auditors, bankers, three different sets of lawyers, and let us not forget the S.E.C., spend months and months making sure that every single number is correct, important risks are identified, the accounting is all buttoned up, and the proper controls are in place.

    Investors Beware: Today’s $100M+ Late-stage Private Rounds Are Very Different from an IPO

    Where Bubbles Emerge

    There has been endless talk of a bubble of AI investment, and certainly there seems to be a disconnect between price and value.

This is true in private markets, reflected in transaction data, and in public markets, reflected by the delta between the MAG7 (all somewhat AI-connected) and the other 493 companies in the S&P 500.

    However, the truth is that bubbles only really happen in public markets. They require liquidity, enabling the wild sentiment-driven swings in price that characterise a bubble.

    In illiquid markets, like venture capital, you have what William Janeway called “speculative episodes”, which may be derived from a bubble playing out in public markets (via comps) but do not behave in the manner of a bubble.

    It’s almost as if wherever there is a liquid trading secondary market in assets, there you will find a bubble.

    The Upside of Wasteful Speculative Bubbles and the Downside of Efficiency

    Indeed, the concept of bubbles has been used in VC to disguise what is better described as simple greed and myopia. Investors behaving like traders — “riding the momentum train, and being ridden over by it, when it turns” — to quote Damodaran.

    It’s illogical to describe what is happening today as a bubble if all of the current participants (including Altman himself) acknowledge that it looks like a bubble. A key feature of speculative bubbles is surely that the participants do not realise it’s a bubble? A more honest characterisation is simply that VCs are choosing to gamble on AI because their LPs believe they should.

    This all changes when AI companies hit public markets, and the pool of investors (and capital) grows dramatically.

Consider the environment: post productivity shock, with an IPO wave led by the largest model providers but cascading into related industries and any company that can crest the narrative.

This is precisely, and classically, when we’d see a bubble emerge; in the volatile public markets, rather than the sluggish and opaque private markets.3

    Until then, call it what it is: degenerate trading behavior.


    (top image: Allegory on Tulipmania by Jan Brueghel the Younger)

    1. Arguably, xAI is the second in-line. []
    2. As a minor footnote here, I believe Altman’s recent comments about AI as a bubble, his uncertainty at leading a public company, and an AI CEO in 3 years, are all deliberate narrative prompts. []
3. This also fits neatly with Howard Marks’s comments about the markets feeling ‘expensive’, and the potential for a major correction in the not-so-distant future. []
  • The Rot of Short-Termism in VC

    Venture capital is a seriously long-term game, with investments taking somewhere between 8 and 16 years to return liquidity.

    The distance to that horizon creates a lot of eccentricity.

For example, VC does not reward following patterns or navigating market movements, neither of which is relevant to decade-long cycles. Consensus of pretty much any kind is toxic, as the more people agree with something the less profitable it becomes. Investment experience is like comfortable entropy, slowly eating away at your ability to remain objective.

    In a sense, success itself is antimemetic: the better the outcome of an investment, the more likely you are to try and repeat it through pattern matching — destroying the calibration which allowed you to find it in the first place.

    Can you imagine how maddening that is?

    This is why the best GPs are oddballs. They live with the paradox that being a ‘good investor’ is a process of constant discovery, and the more lost you feel the better you are probably doing.

    It takes a certain madness to do well, and that is not something you can pick up on the job. You cannot be taught how to think in a contrarian manner. Nobody can give you the confidence required to wait a decade to see if you have good judgement. You have it, or you don’t.

    This is why great VCs earn a lot of respect. The role they play in financing entrepreneurial dreams is critical. From the semiconductor origins of Silicon Valley to SpaceX and our future on other planets, someone had to be there to write the check.

    If the incentives were well aligned, that’s where this story would end — as a fan-letter to weirdos. VC would remain a cottage industry investing in wacky stuff, offering strong returns for LPs.

    Unfortunately, that is not the case.

    Over the past decade we’ve seen the emergence of a new type of VC: one who moves between trends with the swagger of a heat-seeking missile, investing as if their money might go bad. This behavior is contrary to pretty much everything that we know about venture capital, and yet the trend has only accelerated.

    To understand why, we have to look at VC compensation:

    The ‘2 and 20’ structure of VC compensation is pretty well understood and has remained unchanged for a long time: You get 2% of the fund per year in ‘management fees’ to pay your bills and support portfolio companies, and 20% of ‘carried interest’ as a share in any profits made.1

    For people passionate about the outliers, carried interest is the hook. Secure enough big wins and you can make a vast amount of money, in contrast to management fees which aren’t exactly lucrative for a small fund. It’s also nice that carried interest aligns success of the firm with success of the founders.

    However, as capital flooded into private markets over the last couple of decades, and exits took longer to materialise, some cunning individuals recognised an opportunity: the 2% is guaranteed, independent of performance, and it is possible to ‘hack’ venture to maximise that income.

    You can do things the old fashioned way, raising (for example) two $100,000,000 funds in a ten year period, with the implied annual income of $4,000,000. Alternatively, you can squeeze three funds into that period, at double the size, and scale your income to a mighty $12,000,000. All without really needing to worry about underlying performance.
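The arithmetic above can be sketched as a quick calculation. This is a deliberately simplified model, assuming a flat 2% annual fee on committed capital across all concurrently active funds (as the footnote notes, real fee schedules usually step down after the investment period):

```python
def annual_fee_income(fund_sizes, fee_rate=0.02):
    """Yearly management fees across all concurrently active funds.

    Simplification: a flat fee on committed capital, ignoring the
    step-downs that typically apply after the investment period.
    """
    return fee_rate * sum(fund_sizes)

# The old fashioned way: two $100M funds active over a ten-year period.
slow = annual_fee_income([100_000_000, 100_000_000])  # $4,000,000 per year

# The velocity play: three funds at double the size in the same window.
fast = annual_fee_income([200_000_000] * 3)           # $12,000,000 per year
```

Tripling income this way requires no improvement in returns at all, only raising more, bigger funds faster.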

    To build that second scenario, you need to do three things:

    • Invest in the most overheated, capital-intensive industries, which allow you to justify raising and deploying larger funds ever more quickly. These industries are also an easy-sell for LPs, who want something to talk about at dinner parties.
    • Systematically undermine the understanding of valuation by promoting crude and illogical practices, and calling people nerds if they say things like “free cash flow”. Venture is a craft, not a science — which basically gives you carte blanche, right?
    • Pour capital into brand and status building for your firm, which LPs love. Celebrities, political figures, impressive offices, big events… Anything that shows them you’re a serious institution (with the perks that entails) and not some garage-band firm.

    Instead of looking 8 to 16 years in the future with your portfolio, you want to focus on the next 2 to 3 years in order to align with your fundraising cycles. You want companies that are likely to grow in value rapidly in the near future, so hype and consensus are powerful allies.

The aim is to invest in a company at Seed and propel it to a Series A within 2 years at a 4-5x markup, which — if you can repeat it often enough — will look great to LPs. If they ask about DPI just talk about how the IPO markets should open next quarter, or next year.2

It doesn’t even matter if you don’t think your portfolio companies are attractive investments at that markup, as there’s no obligation to participate. You have the growth on your books to help you raise the next fund, and some compliant downstream bag-holders; that’s all that matters.

You can even build this strategy into how you price deals. Rather than try to objectively value the business, just tell founders to think about what a reasonable Series A price would look like for them, and then divide it by 3 for the Seed. That way, you’ve got the expectation of at least a 3x markup already built in to the investment!
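That backward pricing reduces to a one-line rule. The function name and the figures below are illustrative, not any real firm’s model:

```python
def backward_priced_seed(target_series_a_valuation, built_in_markup=3):
    """Price a Seed round by working back from a hoped-for Series A.

    No attempt is made to value the underlying business: the markup
    expectation is the only input, which is exactly the criticism.
    """
    return target_series_a_valuation / built_in_markup

# A founder 'thinks' a $60M Series A sounds reasonable, so the Seed
# is priced at $20M: a 3x paper markup is baked in from day one.
seed_valuation = backward_priced_seed(60_000_000)
```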

    You want to make sure the heat persists, to ensure prices at later stages remain frothy and your markups get better and better. So consider a bit of thought leadership to keep interest on your chosen sector. As long as LPs believe the hype, and keep investing in other funds on that theme, capital will keep piling in. Amplify that market momentum as much as possible. Volatility is your friend, and over time it can even help you wash out smaller managers that offer an unfavourable comparison on performance.

    Obviously the actual investment returns from this strategy are likely to be terrible, unless you’ve somehow timed another ZIRP/2021 exit phenomenon and can unload all of your crap on the public markets just before the music stops. It doesn’t really matter, because the median return in VC is so poor that you might just luck your way into top quartile anyway. Keep the paper marks strong, keep bullshitting LPs about the market conditions and the insane potential of whatever it is you are investing in, and you can probably keep buying back in with a new fund.

    It’s going to be toxic to founders, as they watch huge piles of capital being incinerated chasing hype instead of genuine innovation.

    It’s going to be toxic to innovation, as founders increasingly choose to pursue ideas that they think VCs will back, rather than real passion projects.

    It’s going to be toxic to VCs, as good practices around markups, pricing and portfolio management are ditched in favour of capital velocity and short-term incentives. It’s already frighteningly clear how much basic investing knowledge washed out of VC during ZIRP.

    It’s going to be toxic to LPs as already pretty shitty performance metrics for venture capital get even worse.

    1. The management fee is often frontloaded and scales back after the investment period. The 20% carried interest may also have a hurdle rate (e.g. 8%) which guarantees some return on investment for LPs before they split profits with the VC. []
    2. It might not matter though, as many institutional LP allocators collect their bonuses on markups, so their incentives are totally aligned with yours. They’ll probably have moved on to a new job in a few years anyway. []
  • AI as a Utility

Investors don’t really need to invest in net new companies to get exposure to AI’s potential halo effect; if all your portfolio companies start to integrate with the right existing tools on the market, they could bloom too. It’s the promise of horizontal tech.

    Natasha Mascarenhas, TechCrunch

    In a previous post I used the games industry as an example to make the case that AI probably isn’t going to be disruptive to any major verticals. Instead, it’s another tool in the creative process which can empower better, more efficient outcomes.

    The quote from Natasha helped cement that even further: AI should be viewed as a utility, similar to electricity or water. We’ve already witnessed the commoditization of computing power, and now it’s the turn of generative AI platforms.

Viewed through this lens, we can avoid a lot of the hyperbole about AI threatening categories of employment. It will certainly have a significant impact on the way many industries operate, but it won’t be fundamentally transformative. More evolution than disruption.

    What does this mean, right now? Not a great deal. The more forward-thinking you are, the more likely you are to be exploring whether there are ways that LLMs can help with your ‘jobs to be done’. In the most basic cases, can it ease your workload by eliminating some menial tasks, or provide a source of some inspiration?

If there’s one area where LLMs have the potential to diverge from the ‘AI as a Utility’ idea, it’s in ‘whitelabelled’ solutions. In essence, if OpenAI could sell me a version of ChatGPT that is stripped of all reference to news, politics, current events, media – something with the conversational skills of a human but none of the knowledge – that could be tremendously useful. It could then be independently trained on very specific data sets to build a model that is practical and reliable in niche applications.1

This would open up LLM-driven applications across a range of industries, from customer service chatbots to NPCs in video games. Imagine a company-wide virtual assistant that is aware of every invoice, file, public Slack conversation or email. The possibilities are clearly wide-ranging and powerful.

    In fact, to expand on the AI as a utility metaphor, the closest parallel we can draw today – in terms of potential, risks, and regulation – is probably nuclear power.

I’m pro-nuclear, and understand the argument that regulation has slowed progress in that field, but what has really slowed the industry down is the fear sown by catastrophic accidents linked to poor oversight. Three Mile Island and Chernobyl wrecked public perception and political will, and engendered much of the over-regulation which followed.

    For example, this 1978 article in The Washington Post covers the Soviet Union’s ‘optimistic’ approach to nuclear plant safety in the years prior to Chernobyl.

    This paper from the IAEA, published as a response to Chernobyl but also mentioning Three Mile Island, describes an environment of broken trust, fear and disgust.

    The dichotomy of ‘accelerationists’ and ‘doomers’ is childish. Neither is helpful. There has to be room for both progress and an appropriate level of caution. That doesn’t mean letting incumbents dictate the direction and severity of regulation, in the same way that it would have been insane to let the coal industry regulate nuclear. Regulatory capture is a concern, but too many are reducing this down to a narrative that any drive towards regulation is a threat to progress.

    The world stands to gain significantly from AI, as it has from nuclear power. The dangers, while less obvious, may be no less threatening. Nobody really knows.2

    As Elon Musk said himself: hope for the best, plan for the worst.

1. Of course, a behemoth like Bloomberg can afford to build their own model to accommodate a specific focus. []
2. To be clear, what I’m referring to here is true AI. General intelligence. Today’s ‘generative AI’ is more of a distraction than a threat. Its primary role seems to be a lever which VCs and technologists can pull to raise capital. []
  • Generative AI and the Games Industry

    This post looks at applications of generative AI in the context of the games industry, but much of the same logic can be applied elsewhere.

    Adapting to technological evolution

With every new technology revolution – web3 most recently, and now AI – there follows a large herd of true believers. It can do all things, solve all ills, and life will never be the same again. Enamoured by possibility, they follow with a true sense of opportunity.

Loudest amongst this herd (and most critical of nay-sayers) are the wolves in sheep’s clothing. The rent-seeking charlatans.

This was explicit in the get-rich-quick era of web3, and much of the same problem has transferred over to AI as techno-pilgrims flee one sinking ship to pile into another.

On the other side of the coin are the cynics. People who were raised on 56k modems and bulletin boards, who feel a deep discomfort as technology moves beyond their grasp. They felt like the rational resistance to web3, and so have little hesitation about weighing in on AI.

    We have to be conscious of both groups, and our own place on that spectrum.

    Why the games industry?

    There are three main reasons I’m keen to address the games industry as the case-study for this post:

    1. As with web3, AI is being shoved down people’s throats without due concern for why.
    2. It is largely focused on a young audience who are absent from these conversations.
    3. It connects with my personal experience in the games industry.

    If you want to read about the potential use cases for AI in banking, you’ll find a thousand thought-leader think-pieces. It was well-covered ground without much original thought even before ChatGPT came along.

    If you want to talk about the potential use cases of AI in the games industry, you’ll find some ex-crypto VCs and technologists trying desperately to pivot their brief experience. Insubstantial waffle.

    Perfection is the enemy of good

Dealing with the more excitable technophiles, you’ll probably notice they don’t show a lot of interest in the complex applications. Their interest is in the most extreme examples of movies, games or books being entirely generated by AI (or entirely decentralized, yada yada).

    Their point is simple: if AI can do these things crudely today, then tomorrow it will be able to do them well – and at that point we’ll be forced to embrace the bold new future. Right?

This fallacy can be observed in every parent watching their child smear paint on paper for the first time: something inside them says ‘they could be a great artist’. It’s true: the ability to manifest art can be that simple, and the child has huge potential for improvement… Yet it’s still not going to happen for all but a minuscule few.

    In both cases, the AI model and the child, there cannot merely be push, there must also be pull. There must be a need being met. An appetite being satisfied. And 99% of the time, there isn’t. Once the novelty has worn off, nobody has any interest in watching an AI-generated movie, reading an AI-generated novel, playing an AI-generated game, or looking at your child’s paintings. There just isn’t a call for it.

    Instead of putting AI on the pedestal of a godlike creator, we should look at where it can be a tool to solve a problem.

    Merchants of fun

You can get side-tracked talking about experiences, socialising, adventuring, exploration, curiosity, challenge, status… Ultimately, games are vehicles for fun. That’s bedrock.

    Is an AI-generated game likely to be more fun than the alternative? No, of course not, and if you suspect otherwise then you’ve not spent enough time with the wonderful and wacky people who make games. They are true creatives.1

Any application of generative AI to the games industry must either enhance fun, or enhance the developer’s ability to deliver it.

    Exploration

If you look at games like Minecraft or 7 Days to Die, where you can explore a procedurally generated world, it’s easy to see how generative AI might be able to supercharge that environment building.

It’s worth considering, though, that this is a specific approach for a specific type of game. As good as these engines have become, most games will require a more ‘designed’ world, with geography or features which play into gameplay mechanics, story elements or IP. Generative AI may offer tools to make this more efficient (as many procedural tools already do), but is unlikely to replace it entirely.

    Socialization

    Imagine walking around a Skyrim or Cyberpunk style sandbox-world, full of NPC characters with their own unique look, voice, and personality. Each able to hold a conversation with you, flavoured with their own specific personality and knowledge. Not merely giving canned responses to pre-defined prompts, but able to interact fluidly with you and amongst themselves.

Again, this is unlikely to ever be all a game needs. Stories still require specifically designed characters with particular roles, shaped by the intention of writers and a design team, but it is still a tremendous opportunity to solve the social component of virtual worlds.

These are two quickly-sketched examples of how generative AI could enable a leap forward in the experience provided by games developers – and I am sure there are many more to be found.2

    Tapping into the market

    I wanted to do this in a more subtle manner, but it’s just more practical to break down Andrew Chen’s Twitter thread:

    Games can take 3+ years to build, and technology adoption happens at specific windows of time

    If your generative AI tool is a plugin (for the Unreal Engine, for example) then a studio can pick it up at any time and add it to their development stack.3

You shouldn’t be limited to thinking in terms of ideas that are ‘disruptive’ to how games are made; indeed, most of the opportunity may be in ideas which are complementary.

    indie games make little $. There’s only a few scaled players, who will always push on pricing

    If you were going to target indie developers it would have to be with a very specific value proposition and business model (e.g. Unity in 2004). There’s no reason to worry about this otherwise; there are enough larger studios.

the games ecosystem is insular, with its own conferences, luminaries, and networks / “networking” in the games industry often involves, well, gaming. Are you good at Valorant? 🙂

    Can you tell me an industry which doesn’t have its own conferences, luminaries and networks?

    The games industry is not insular, and it is comical to characterize it as a bunch of nerds playing games together. It’s a wonderfully open, social and diverse community.4

    a large % of game cos have artists and creative people. Many are threatened by, and oppose, AI tech

    I don’t know of anyone in the games industry, artist or designer, who isn’t starry-eyed at the possibilities of what AI can enable.

    They are also familiar enough with how games work to recognise that human input is always going to be required to shape and polish the human experience which emerges on the other side.

    you need to generate editable, riggable, high-quality art assets. Right now assets are idiosyncratic and hard to edit

    Generative AI has not yet proven that it can generate useable assets, never mind well-optimised thematic assets. That problem can probably be solved, but to what end?

    Will a world created by a generative AI ever truly feel interesting, coherent, beautiful? Maybe there are better things for it to do?

    large publishers often provide tech to their internal studios. They’ll partner to learn about AI, but will try to build in-house. Is your tech defensible?

    That might have been the case 15 years ago, but the vast improvement in game engines and tools has meant that developers are much more likely to build on existing platforms.

    If a publisher believes that a tool would make development cheaper and faster then they’ll support it without blinking.

    large gaming cos care a lot about their models and data not being shared across the industry. How do you guarantee that? / they also care that their models are trained on data that’s safe from a copyright perspective. There’s lots of hoops to jump through

    Stretching a bit here, but: You train your tools on an open set of data to the point where they are useable, and allow developers to provide additional training based on data from their own IP. In that scenario there is no reason for crossover between studios.

    It’s unlikely that training from one game would ever be useful to the application of the AI in another. It is probably more likely to produce undesirable results.

    Conclusion

    Some years ago an associate of mine went to interview for a job at a games company in Seattle. The interviewer had previously been the lead designer on Starcraft, and naturally expected the candidate to play a match against him while fielding questions about the role.

The games industry is full of these amusing anecdotes of quirky behaviour, and there is a pronounced culture associated with that. However, it is condescending to think that culture stands in the way of progress, or that games studios can’t engage with business and technology partners in a perfectly competent manner.

If you make a useful tool which solves a problem for the games industry, you will be able to access the right people to make a sale. I’d go so far as to say it’s probably easier and faster-moving than many other industries.

If that is your aim, make sure you are spending enough time talking to games developers, learning about how games are made, understanding the player mentality, and the problems that you might be able to address. As always, finding product-market fit can require a lot of learning and iteration.

    Most of all, ignore the false prophets who were reading from the web3 gospel just a few months ago. They will just ride this trend until something else comes along.

1. Yes, throughout this article I am drawing a deliberate and passive-aggressive distinction between ‘creating’ and ‘generating’. []
    2. It bothers me that I covered Explorers and Socializers, but didn’t have the time to identify anything for Achievers and Killers. []
    3. And in most mid-large studios there are usually multiple teams running in parallel focused on different projects at different stages of development. []
    4. The irony of a venture capitalist calling the games industry ‘insular’ is not lost on me. []
  • The Negligible Cultural Impact of AI

    The Negligible Cultural Impact of AI

    Good art (including novels, games, movies) is defined by the humanity involved. Emotion, humour, tension. Even when AI attempts to mimic those attributes, we’ll still prefer human experiences over synthetic ones.

    We’re inclined to believe each new innovation is the ‘best’, and that the technology-driven approach is always superior. To overlook almost anything in pursuit of speed or efficiency.

    We must have the latest invention, and we’ll use it proudly until the novelty – and associated status – wears off. Then those gains in speed and efficiency can work their way to the market they’re intended for: people who are price or time sensitive.

    Microwaves were billed as kitchen gadgets for the wealthy, revolutionising home cooking. It turns out that we’d rather bake artisanal sourdough bread in a wood-fired oven, when we have the luxury of time and choice.

    We consistently overestimate the cultural impact of ‘technology for technology’s sake’. Popular visions of the future in science fiction show the wealthy living in hyper-minimal grey boxes with robots for every function. Utterly dull.

Avatar was supposed to push the envelope for the movie industry with stereoscopic technology and CGI, offering a vivid and immersive experience like never before. It remains the highest-grossing movie of all time but the cultural impact, relative to that, is minuscule. Few really cared about the story, or the characters involved.

    The protagonist of that franchise? James Cameron, with a 3D camera over his shoulder.

    AI only threatens the bottom-of-the-barrel stuff.

    Free apps, stock images, SEO-driven content.

It is not a threat to broad swathes of industry and the arts in which humanity plays a major role. Genuine empathy and emotion are only going to become more valuable, as the rest of our lives become more technology oriented.

That’s not to say that AI won’t be powerful and practical. It is already shaping whole industries. We just need to have a realistic perspective on where that importance lies.

    Consider another parallel: artificial meat.

It caused a brief stir when it was new and exciting, popping up in all kinds of fancy gastropubs. And then interest fell off. The ultimate customer for that product, once it meets the promise of being cheaper and greener than real meat, is not fine-dining restaurants. It is McDonald’s. It is MREs. It is the boxes of frozen chicken nuggets in your local discount market.

    Nobody will love it.