• “Why don’t VCs set marks with 409a valuations?”

    “Why don’t VCs set marks with 409a valuations?”

    This is a question I saw on Reddit’s often-comical /r/venturecapital, which I thought was interesting enough to write out a decent response to. It hits at the root of a few major problems in the asset class which are always worth addressing.

    A 409A valuation, named after Section 409A of the United States Internal Revenue Code (IRC), refers to the process of determining the fair market value of a privately held company’s common stock. It is often conducted to comply with the tax rules governing non-qualified deferred compensation plans, such as stock options, stock appreciation rights (SARs), and other equity-based compensation arrangements.

    A lovely summary from ChatGPT

    First things first: Generally speaking, VCs don’t care about valuation, and especially not ‘fair value’.

    …and that’s quite reasonable. VCs have their own investment strategy, their own approach to calculating risk vs potential, and if it works for them (and their LPs) then great. More power to them.

    What’s important here is that while we often use the word valuation in reference to deal terms and portfolio performance, what we really should say 99% of the time is price.

    In venture capital, price factors in a number of things, including the advantages of preferred stock over common stock, but most significantly it is geared toward reflecting what the market would be likely to pay for that startup at the time. This is why VCs focus so much on comparable deals when pricing rounds, even if it ends up being a bit circular, with everyone copying everyone else’s homework.1

    So that is the status quo. But why are VCs interested in preferred stock in the first place?

    Venture capital is all about the power law, right? The idea is to invest in many startups, expect to lose money on 80%, and make a varying amount of money back from the remaining 20%.

    So why do they care so much about downside protection, rather than maximising that upside?

    When you add a liquidation preference to a deal, the implied value of the equity increases, meaning you get a smaller % for your capital. Lower returns at exit. That kind of trade-off flies in the face of the power law, so why is it of interest?

    There’s a clue in this great article from William Rice:

    Liquidation preferences insulate VC firms from losses, so they can delay markdowns until after they raise another fund. VC returns follow a J-curve, therefore losses come much earlier than returns. Liquidation preferences can serve as valid reasons to not mark-down investments as companies begin to miss milestones or don’t receive an exciting Series A valuation bump.

    William Rice, “Slugging Percentage vs. Batting Average: How Loss Aversion Hurts Seed Investors”
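
    To make those mechanics concrete, here is a minimal sketch in Python, assuming a 1x non-participating preference and an entirely hypothetical deal of $2M for 20% (a $10M post-money price):

    ```python
    def preferred_payout(invested: float, ownership: float, exit_value: float) -> float:
        """1x non-participating preference: take your money back, or convert
        to your pro-rata share of the exit, whichever is greater."""
        return max(invested, ownership * exit_value)

    def common_payout(invested: float, ownership: float, exit_value: float) -> float:
        """Common stock simply receives its pro-rata share of the exit."""
        return ownership * exit_value

    # Hypothetical deal: $2M invested for 20% ownership.
    for exit_value in (5e6, 10e6, 50e6):
        pref = preferred_payout(2e6, 0.20, exit_value)
        comm = common_payout(2e6, 0.20, exit_value)
        print(f"${exit_value / 1e6:>4.0f}M exit -> preferred ${pref / 1e6:.1f}M, common ${comm / 1e6:.1f}M")
    ```

    The preference only changes the outcome at the bottom of the distribution – exactly the part of the curve that the power law says you should be writing off anyway.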

    Liquidation preferences are mostly an irrational response from loss-averse VCs, some of whom may be trying to shield themselves against reporting poor performance to LPs. Maybe that’s overly cynical; I’m all ears if anyone has a better explanation.

    The core assertion here is that in a more rational and healthy market, liquidation preferences probably wouldn’t exist and VCs would just buy common stock.

    An industry with few standards

    Now that we have some understanding of how equity is priced and why preferences exist, let’s return to the original proposition: that VC investments could be marked up or down based on 409A valuations.

    In some cases, VCs do set marks with 409A valuations, but not all. Unfortunately – as with much of VC – there are no real standards.2

    Some VCs will only set marks based on fundraising activity, some will also consider 409A updates, and some will factor in new SAFE caps. Some will ignore down rounds, some won’t.

    The way VCs price rounds is subjective and non-standardised, and therefore the way they track the value of those investments is also subjective and non-standardised.

    It might be going too far to say that this was all designed to obscure performance and protect charlatans, but this is probably how I would design VC if that was my intention.

    Performance > relationships

    Putting aside deal pricing for now: a VC firm could use any framework that provides a systematic read on fair value, such as this one from Equidam, and apply that to tracking portfolio company performance.

    This would represent a huge shift in how VCs operate, and how they manage relationships with LPs. It’s also something that I’ve written about at some length before.

    The horizon for useful feedback could be annual (or even quarterly) rather than 5-10 years.

    LPs could hold fund managers accountable for performance, and we may see that many household names (which attract the lion’s share of capital and startup attention) are actually dramatically underperforming. They could more confidently back emerging managers, who could provide more meaningful metrics of success.

    VCs would be able to follow portco growth more precisely and learn much more quickly about what works and what doesn’t. Good managers would be able to fundraise much more easily.

    Crucially, it would make VC an industry based on performance rather than relationships and hype chasing as it is today. It would make VC better at backing innovation, which is what the whole asset class was built around.

    Startups are volatile, but capital should be stable

    Finally, if VCs were also to price deals based at least partially on fair value, we’d avoid momentum-driven valuation rollercoasters like we’ve seen in the last two years. Much less risk of valuation bubbles and crashes, more stability for LPs and VC funds, more consistent access to capital for founders, and – again – an asset class that could better serve innovation.3

    A more objective and independent perspective on startup potential, better suited to investment in the innovative outliers that venture capital was created to support.

    I’ve spoken to a number of people – LPs, VCs and founders – about this topic. There’s a near universal acceptance that a standard for private company valuation would be of huge benefit to the whole venture ecosystem.

    Unfortunately, none of them are particularly incentivised to make a stand on it.

    Some may benefit from the status quo, and the rest are keen to maintain their relationships by not making waves.

    1. Thus bubbles, etc… more on that later.
    2. If anyone has any data on this, I’d love to see it.
    3. There are also huge second-order effects, like how it would make venture capital fairer by removing more of the bias found in less structured approaches to valuation – but that’s for another post.
  • It’s all about identifying outliers

    It’s all about identifying outliers

    What startup investors can learn from sports betting

    Early stage investing is a complex and relatively new practice, which makes it fertile ground for analogies that can help explain the more abstract concepts to newcomers and veterans alike.

    In this particular case, grappling with the intrinsic value of pre-revenue startups, there’s an interesting parallel to sports betting. Fundamentally, both involve looking at the strength of a team and the competitive landscape and making a judgement on future potential.

    What we’re considering here is the idea that a startup – even a pre-revenue startup – has a determinable value even before that value has been tested in the form of a market transaction. This is also what you might call a ‘fair market’ valuation, which is what we aim for at Equidam.

    MOIC vs. betting odds

    In early stage investing, investors will look to benchmark potential returns using a metric called the multiple of invested capital (MOIC). MOIC is calculated by taking the total potential return on an investment and dividing it by the amount of money invested. For example, with an investment of $100,000 in a company with an expected MOIC of 10, the company should have the potential to return $1M.

    In sports betting, participants measure their potential returns using the odds of winning, which represents the probability of success. For example, if the odds of a team winning a basketball game are 9:1, it means the team is assumed to have a 10% chance of winning and the return would be a multiple of 10. 
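
    The two are near-interchangeable pieces of arithmetic. A minimal sketch in Python, using the numbers from the examples above:

    ```python
    def moic(total_return: float, invested: float) -> float:
        """Multiple on invested capital: total proceeds divided by capital in."""
        return total_return / invested

    def implied_probability(odds_against: float) -> float:
        """Odds of N:1 against imply a 1/(N+1) chance of winning,
        with a total payout multiple of N+1 (stake plus winnings)."""
        return 1 / (odds_against + 1)

    print(moic(1_000_000, 100_000))  # 10.0 -> the $100k-to-$1M example
    print(implied_probability(9))    # 0.1  -> 9:1 odds imply a 10% chance
    ```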

    Rewarding the earliest participants

    In both examples, the earliest (successful) participants receive the most lucrative returns. In investing, this is because early investors are able to get a lower share price than later investors. In sports betting, this is because early participants are able to secure better odds.

    In both cases, this is for the same reason: At the very beginning there is the least available evidence to indicate an assumed outcome, thus a greater level of perceived risk associated with the choice. This is true both in terms of signals from other participants (other bets or investments made) as well as actual progress in terms of milestones achieved, such as games won or revenue secured. 

    Qualitative and quantitative measures

    In early stage investing, investors use a variety of qualitative and quantitative measures to judge the potential of a company. Qualitative measures might include the management team, strategic relationships, and the competitive environment. Quantitative measures include things like the company’s projected financial performance, market growth and associated risk. This is reflected in the form of the valuation, which ultimately informs the potential return on investment.

    In sports betting, participants use similar measures to judge the potential of a team. That might include the team’s roster, their experience together, track record of the coach, and the threat posed by other teams. This is reflected in the form of a perspective on what the betting odds should be to provide appropriate upside for that level of risk.  

    What this means for early stage investors

    According to some, a startup does not have a valuation until it has been priced in an equity transaction. To an extent (in a strict and formal sense) that is correct. It does not itself have a valuation, because value is not an objective concept. Like beauty, it lies in the eye of the beholder. However, we shouldn’t pretend that an equity transaction represents an ‘objective’ read on value either; it’s also just the opinion of an investor.

    What this analogy illustrates is that you, as an early stage investor, should have your own personal read on valuation as a reflection of future potential. You need to understand the qualitative and quantitative factors involved, and determine a practical framework to run your own analysis. It’s the best way to sharpen your judgement on future potential, take an informed perspective on risk vs return, and put your money to better use.

    If you are investing your own money, it’s not crucial that your valuation framework be seen as objective or fair. Many investors look at valuation primarily in terms of market context and what other investors are doing. Others use simple heuristics like national averages adjusted with a few qualitative measures, even if that screens out some deals. Whatever approach you use, if it allows you to reach your desired level of returns then it is clearly working. 

    When being objective is important 

    Imagine it’s January 2023, and a friend is looking to place a bet on the upcoming NBA championship. You – being wise and well informed – recommend putting money on the Denver Nuggets. This is a team which hasn’t won a championship in its 47-year history, and a year ago they lost in the first round, so the odds are great (in terms of MOIC) but your friend will take some convincing.

    It’s in explaining this opportunity to another person that objectivity becomes important; your rationale must survive without the support of your own biases and perceptions. What are the data points which suggest that the Denver Nuggets have been overlooked or undervalued by the market? What is it about their 2023 roster and the wider competitive environment which points to success? How do you piece that into a compelling story for your friend?

    This is, again, mirrored in the world of early stage investment. If you are looking for input on the potential of a startup – which has not yet been rubber-stamped in a market transaction – you will want to see it in a transparent, objective format which covers all of the key indicators. This is applicable in a range of cases, whether that is determining a valuation for the first round of a company, proposing a valuation to a group of angel investors, or reporting updated valuations to your LPs.

    This is where we finally arrive closer to assigning a ‘fair valuation’ to a startup, rather than the individual perspectives on valuation. Not a number determined by the combination of gut-feel and Excel-gymnastics designed to pattern-match past success, but something scrutable, explainable and repeatable. 

    Crucially, valuation can be incredibly useful even when it’s not associated with a market transaction. In fact, the single perspective of a lead investor on the value of a company is potentially less valid, and less reliable, than a more objective framework.1

    Backing outliers is the whole ballgame

    Early stage investment pivots around uncertainty, and valuation is always a tricky exercise in assessing the tangible and the intangible. Reaching 100% efficiency in the risk/reward trade-off is never going to happen.

    Similarly, your friend doesn’t have to buy every data point in your recommendation, they just have to understand what you are looking at, the conclusions drawn, and appreciate that it was a rational process with an outcome they can challenge or disregard as they wish. 

    Had they made their bet based on the odds at the start of the season, following the favourite as indicated by the market, they’d have lost their money. Had they waited a few months to see how each team performed in order to inform their judgement, they would have increased their chance to pick the right team, but with much lower potential returns.

    And that’s the argument in a nutshell. In order to understand an opportunity while the terms are most favourable, or to explain that opportunity to others, you need to think about practical and objective measures of future potential. Early stage investing is all about identifying outliers, like the Nuggets, which is precisely why we approach valuation from this perspective. 

    1. Especially given the extreme proclivity of investors to pass the buck, and base their pricing on other market transactions.
  • Startups are the clients of Venture Capital

    Startups are the clients of Venture Capital

    As a founder learning the ropes of venture capital, you might see VCs as asset managers, with LPs as their customers and your equity as the asset being managed.

    This is heavily implied by the chain of responsibility: you are required to report your progress to your VC investors who want to see milestones crossed and targets met. Similarly, VCs then have to report on the fund’s investments to their LPs. 

    It would be an odd relationship if customers were accountable to service providers, right? 

    Understanding the relationships

    When an LP commits to investing in a VC fund, they are typically locking themselves in for a ten year relationship. That’s three to four years over which they expect their capital to be invested, and six to seven years in which they hope they’ll start to realize those returns. This mirrors the kind of relationship you will have with a VC, which lasts a similar period of time from investment to exit. 

    In this context you might understand that LPs don’t really resemble a customer, and neither do VCs. Instead, they are the shareholders and operators of a specialized financing instrument for early stage companies. The relationship matters, updates are intended to prompt feedback, and success is shared. Crucially, both parties rely on the firm building a reputation for offering a good service, fair terms, and accelerating success stories. 

    Fred Wilson of Union Square Ventures shared more of the VC perspective on this in his 2005 blog post:

    The entrepreneur creates the value, they are the ‘raw material’ in the venture capital business.  If there were no entrepreneurs, there would be no venture capital business.

    Fred Wilson of Union Square Ventures, in “The VC’s Customer”

    How this shapes fundraising

    It can be difficult to view this from the perspective of a founder, as it assumes you are in a position of control – an odd fit with the usual perception of the venture capital process. Isn’t fundraising all about struggling with an endless string of rejections? Again, the bigger picture allows us to see how the relationships really function.

    Healthy markets rely on consumers having freedom of choice, and this is where venture capital suffers from an image problem: When you’re hammered with messaging about how slim the odds are for success, it can seem like raising money from a top-tier firm is the most important signal for success. The moth-like attraction to the top of the market means those firms are swamped with pitches and thus issue even more rejections.1

    However, if you look at venture capital as a marketplace of firms looking to service startups, you might be more inclined to think in terms of practical comparison. Ignoring the logos, who can best serve your particular needs? Where are the hidden gems and less obvious bargains?

    Consider consumer brands, where bigger companies tend to be worse at serving more specific (more technical, higher performance) consumer needs. A larger target market implies more mainstream use cases, and your brand often becomes more important than the performance of your products. At that point, there are likely to be smaller brands that outperform in a particular niche where their expertise makes a difference.

    This is a reasonable metaphor for venture capital, illustrating the benefit of approaching fundraising as a customer looking for a solution rather than an entrepreneur with their hat in their hand. As with any transaction, you are looking for the best bang for your buck, and smaller specialist funds are likely to deliver exactly that – for all sides of the transaction. 

    Highly specialist portfolios from young firms have a top-quartile hit rate of 61%, representing a 2x increase from the most generalist portfolios.

    Liam Shalon of Level Ventures in “Outperformance in Early-Stage Specialist Firms: A Data-driven Analysis”

    1. And often lose focus on the fundamental role of VCs: financing innovation, not shaping the future.
  • LPs should encourage VC evolution

    LPs should encourage VC evolution

    In a previous article I wrote about the threat of consensus in venture capital.

    A few days later, Eric Tarczynski shared a fascinating thread about the journey with Contrary, his VC firm. He addressed this point about consensus with admirable candour, summarised here in two points:

    1. Raising from LPs is easier if you have recognisable logos attached to your previous funds. Success is measured by which big names in VC co-invested with you.
    2. Raising from LPs is easier if they get good references from their existing VCs. So you send deals to them, network with them, and co-invest with them. Success is measured by relationships.

    It’s unusual to get such an unvarnished look at the inside workings of venture capital, and the thread elicited a number of reactions. Most agreed it was a tough pill to swallow:

    Eric’s awesome but boy is that thread a pretty damning look into the inside-baseball-nepotism that starts from the top (LPs) and infects the whole VC ecosystem.

    Luke Thompson [source]

    ‘We thought that being good investors with a unique thesis that actually makes money would be the best strategy, turns out, following the herd, piling onto garbage, and being unquestioning vassals to incumbent investor power gets you a larger fund’ – My interpretation

    Del Johnson [source]

    There’s an elephant in the room in all of this. Or perhaps it’s a bull in a china shop. Either way, everyone seems to be ignoring it and it’s doing a lot of damage.

    Weak signals

    From pre-seed to IPO, there is no consistent, transparent measure of success. That’s a long time for a GP to deploy capital without any concrete performance metrics. How does an LP ascertain if their money is being put to good use?

    Samir Kaji of Allocate (and former SVB MD) shared his take on the problem that LPs face:

    LPs are programmed to use past track record as the primary driver in making a decision on whether to invest in a new fund (A recent study showed historical persistence of VC is that 70% chance a fund performs above median if prior fund is 1st Q). However, more than ever, track record can be a very weak indicator if the fund is within <5-7 years.

    • Spread of how VCs are valuing the same companies is large.
    • Current TVPI to final DPI delta will be large for many funds, and some funds have resilient companies; others are filled w/companies that were pure momentum (but still marked up).

    Samir Kaji, Allocate
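
    For anyone outside the LP world, here is a minimal sketch of the two fund metrics Kaji contrasts (the figures below are hypothetical):

    ```python
    def tvpi(distributions: float, residual_value: float, paid_in: float) -> float:
        """Total value to paid-in: realized cash plus unrealized marks,
        per dollar of capital contributed."""
        return (distributions + residual_value) / paid_in

    def dpi(distributions: float, paid_in: float) -> float:
        """Distributions to paid-in: only cash actually returned counts."""
        return distributions / paid_in

    # A fund can look great on paper while having returned almost nothing:
    print(tvpi(5e6, 95e6, 50e6))  # 2.0x TVPI, dominated by unrealized marks
    print(dpi(5e6, 50e6))         # 0.1x DPI
    ```

    The delta between those two numbers is exactly where momentum-driven markups hide.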

    There is an obvious desire from both sides to find something to show. As Luke put it, “we can pretend it’s all about independent thinking, non consensus and right, etc, but when you’re going out for Fund 2 and on a stack of unrealized, LPs want other signals.”

    This is why we end up focusing on ‘logo hunting’ and co-investment culture. If we’re all a gang, and we back each other up, then we’ll maintain the confidence of LPs. Meanwhile, the LPs are probably feeling a degree of comfort from investing in a few different funds, without realizing how intermingled and codependent they are.

    As Chamath Palihapitiya wrote in Advice to Startup Founders and Employees: Strength Doesn’t Always Come in Numbers:

    As it turns out, what VCs of the past decade assumed to be market alpha may have actually been market beta (i.e. fellow venture funds bidding up the same cohort of companies over several funding rounds).

    Chamath Palihapitiya, Social Capital

    This is clearly an undesirable outcome for LPs: the data for measuring venture capital fund performance is flimsy and creates huge perverse incentives for GPs. That’s not good enough when so much capital is at stake, especially when it involves pension funds and university endowments. It’s a bad look for everyone.

    The final nail in the coffin here is how current practices can create a reality-distortion field around actual performance: in effect, a company’s ‘public’ valuation only changes when they want it to. This was outlined at length in a thread from Anand Sanwal of CBInsights, which included a slide from SVB.

    This is on the mind of every LP at the moment. What do their ‘paper’ returns from 2021/22 actually mean anymore? What will happen when the companies they are invested in via VC are forced to come to terms with reality?

    Meaningful benchmarks

    When you start talking about standardising anything in venture capital, there’s a reliably cold response. Everybody likes to believe they have their secret sauce, their intuition, their process, their edge over others… despite all signs pointing towards none of that changing the outcome.

    When you talk about measuring the performance of early stage companies, that’s when the real pushback begins. There’s too much uncertainty. It’s too unreliable. Projections are always a pipe-dream.

    There’s one simple response to these concerns: “Perfect is the enemy of good”.

    If you open yourself to new ways of looking at valuation (it’s not just about “market pricing”), and new ways of performing valuation, you will find that there are practical, systematic frameworks to measure and report the development of private companies.

    Don’t get twisted up about producing an “accurate” result for an early stage company; that is foolishness – and not the point. The goal is to provide solid, useful benchmarks which can be calibrated against the market in a transparent manner.

    For an example of how this might be achieved, I will always recommend a read through Equidam’s methodology. It combines perspectives on verifiable characteristics via the qualitative methods, the exit potential via the VC method, and the vision for growth via the DCF methods. All packaged up into a nice, comprehensive report.
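
    In spirit, the blending step can be as simple as a weighted average across methods. A minimal sketch – the method names mirror the five Equidam describes, but the outputs and the equal weighting are purely illustrative, not Equidam’s actual parameters:

    ```python
    def blended_valuation(estimates: dict, weights: dict) -> float:
        """Combine per-method valuations into one figure via a
        weighted average (weights must sum to 1)."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9
        return sum(estimates[m] * weights[m] for m in estimates)

    # Hypothetical per-method outputs, in USD:
    estimates = {
        "scorecard": 3.2e6,     # qualitative
        "checklist": 2.8e6,     # qualitative
        "vc_method": 4.0e6,     # exit potential
        "dcf_growth": 3.6e6,    # DCF, long-term growth
        "dcf_multiple": 3.4e6,  # DCF, exit multiple
    }
    weights = {m: 0.2 for m in estimates}  # equal weighting, for illustration
    print(f"${blended_valuation(estimates, weights) / 1e6:.2f}M")  # $3.40M
    ```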

    What standardized reporting does for the LP/VC relationship

    If you can imagine a world where VCs produce quarterly reports on fund performance using a standardised framework, there are a number of profound benefits:

    1. LPs could better assess the performance of their existing VCs, creating more of a meritocracy.
    2. VCs would have an easier time raising, in addition to shortening their own internal feedback-loops to improve decision making.
    3. Moving away from current lazy valuation practices (ARR multiples) would help avoid extreme fluctuations in valuation, as we’ve experienced since 2021.
    4. It will (slowly) kill the dinosaurs, the giant firms which played a part in the development of this ecosystem and all of its flaws.
    5. A move towards transparency – especially around valuation – would be timely, as the SEC’s gaze falls on venture capital.
    6. There are also interesting considerations for liquidity in secondary markets serving private company equity, but that’s a whole post of its own.

    Conclusion

    It seems clear to me that this change will not come easily to venture capitalists, who are either comfortable with the status quo or simply find it convenient. However, it might be possible for LPs to set new terms, as market dynamics have shifted power in their direction.

    Still, this is a difficult argument to make. I’m suggesting no less than upending how much of venture capital operates, and I’m doing so from the position of a relative outsider.

    But I guess that’s the point? Venture capital has been a closed ecosystem for too long, full of esoteric practices shaped by a relatively tiny group of individuals. There is plenty of room for improvement, especially if we stop getting hung up on the need for ‘perfect’, when the current status is ‘poor’.

    Finally, a bigger point than any of the six I mentioned previously: if this makes us better at allocating capital to innovative ideas, and innovative people, then it’s got to be worthwhile.

  • AI as a Utility

    AI as a Utility

    Investors don’t really need to invest in net new companies to get exposure to AI’s potential halo effect; If all your portfolio companies start to integrate with the right existing tools on the market, they could bloom too. It’s the promise of horizontal tech.

    Natasha Mascarenhas, TechCrunch

    In a previous post I used the games industry as an example to make the case that AI probably isn’t going to be disruptive to any major verticals. Instead, it’s another tool in the creative process which can empower better, more efficient outcomes.

    The quote from Natasha helped cement that even further: AI should be viewed as a utility, similar to electricity or water. We’ve already witnessed the commoditization of computing power, and now it’s the turn of generative AI platforms.

    Viewed through this lens, we can avoid a lot of the hyperbole about AI threatening categories of employment. It will certainly have a significant impact on the way many industries operate, but it won’t be fundamentally transformative. More evolution than disruption.

    What does this mean, right now? Not a great deal. The more forward-thinking you are, the more likely you are to be exploring whether there are ways that LLMs can help with your ‘jobs to be done’. In the most basic cases, can it ease your workload by eliminating some menial tasks, or provide a source of some inspiration?

    If there’s one area where LLMs have the potential to diverge from the ‘AI as a Utility’ idea, it’s in ‘whitelabelled’ solutions. In essence, if OpenAI could sell me a version of ChatGPT that is stripped of all reference to news, politics, current events, media – something with the conversational skills of a human but none of the knowledge – that could be tremendously useful. It could then be independently trained on very specific data sets to build a model that is practical and reliable in niche applications.1

    This would open up LLM-driven applications across a range of industries, from customer service chatbots to NPCs in video games. Imagine a company-wide virtual assistant that is aware of every invoice, file, public Slack conversation or email. The possibilities are clearly wide-ranging and powerful.

    In fact, to expand on the AI as a utility metaphor, the closest parallel we can draw today – in terms of potential, risks, and regulation – is probably nuclear power.

    I’m pro-nuclear, and understand the argument that regulation has slowed progress in that field… but what has really slowed that industry down is the fear sown by catastrophic accidents linked to poor oversight. Three Mile Island and Chernobyl wrecked public perception and political will, and engendered much of the over-regulation which followed.

    For example, this 1978 article in The Washington Post covers the Soviet Union’s ‘optimistic’ approach to nuclear plant safety in the years prior to Chernobyl.

    This paper from the IAEA, published as a response to Chernobyl but also mentioning Three Mile Island, describes an environment of broken trust, fear and disgust.

    The dichotomy of ‘accelerationists’ and ‘doomers’ is childish. Neither is helpful. There has to be room for both progress and an appropriate level of caution. That doesn’t mean letting incumbents dictate the direction and severity of regulation, in the same way that it would have been insane to let the coal industry regulate nuclear. Regulatory capture is a concern, but too many are reducing this down to a narrative that any drive towards regulation is a threat to progress.

    The world stands to gain significantly from AI, as it has from nuclear power. The dangers, while less obvious, may be no less threatening. Nobody really knows.2

    As Elon Musk said himself: hope for the best, plan for the worst.

    1. Of course, a behemoth like Bloomberg can afford to build their own model to accommodate a specific focus.
    2. To be clear, what I’m referring to here is true AI: general intelligence. Today’s ‘generative AI’ is more of a distraction than a threat. Its primary role seems to be as a lever which VCs and technologists can pull to raise capital.
  • Generative AI and the Games Industry

    Generative AI and the Games Industry

    This post looks at applications of generative AI in the context of the games industry, but much of the same logic can be applied elsewhere.

    Adapting to technological evolution

    With every new technology revolution – web3 most recently, and now AI – there follows a large herd of true believers. It can do all things, solve all ills, and life will never be the same again. Enamoured by possibility, they follow with a true sense of opportunity.

    Loudest amongst this herd (and most critical of nay-sayers) are the wolves in sheep’s clothing. The rent-seeking charlatans.

    This was explicit in the get-rich-quick era of web3, and much of the same problem has transferred over to AI as techno-pilgrims flee one sinking ship to pile into another.

    On the other side of the coin are the cynics. People who were raised on 56k modems and bulletin boards, who feel a deep discomfort as technology moves beyond their grasp. They felt like the rational resistance to web3, and so have little hesitation about weighing in on AI.

    We have to be conscious of both groups, and our own place on that spectrum.

    Why the games industry?

    There are three main reasons I’m keen to address the games industry as the case-study for this post:

    1. As with web3, AI is being shoved down people’s throats without due concern for why.
    2. It is largely focused on a young audience who are absent from these conversations.
    3. It connects with my personal experience in the games industry.

    If you want to read about the potential use cases for AI in banking, you’ll find a thousand thought-leader think-pieces. It was well-covered ground without much original thought even before ChatGPT came along.

    If you want to talk about the potential use cases of AI in the games industry, you’ll find some ex-crypto VCs and technologists trying desperately to pivot their brief experience. Insubstantial waffle.

    Perfection is the enemy of good

    Dealing with the more excitable technophiles, you’ll probably notice they don’t show a lot of interest in the complex applications. Their interest is in the most extreme examples of movies, games or books being entirely generated by AI (or entirely decentralized, yada yada).

    Their point is simple: if AI can do these things crudely today, then tomorrow it will be able to do them well – and at that point we’ll be forced to embrace the bold new future. Right?

    This fallacy can be observed in every parent watching their child smear paint on paper for the first time: something inside them says ‘they could be a great artist’. It’s true: the ability to manifest art can be that simple, and the child has huge potential for improvement… Yet it’s still not going to happen for all but a minuscule few.

    In both cases, the AI model and the child, there cannot merely be push, there must also be pull. There must be a need being met. An appetite being satisfied. And 99% of the time, there isn’t. Once the novelty has worn off, nobody has any interest in watching an AI-generated movie, reading an AI-generated novel, playing an AI-generated game, or looking at your child’s paintings. There just isn’t a call for it.

    Instead of putting AI on the pedestal of a godlike creator, we should look at where it can be a tool to solve a problem.

    Merchants of fun

    You can get side-tracked in talking about experiences, socialising, adventuring, exploration, curiosity, challenge, status… Ultimately, games are vehicles for fun. That’s bedrock.

    Is an AI-generated game likely to be more fun than the alternative? No, of course not, and if you suspect otherwise then you’ve not spent enough time with the wonderful and wacky people who make games. They are true creatives.1

    Any application of generative AI to the games industry must either enhance fun, or enhance the developer’s ability to deliver it.

    Exploration

    If you look at games like Minecraft or 7 Days to Die, where you can explore a procedurally generated world, it’s easy to see how generative AI might be able to supercharge that environment building.

    It’s worth considering, though, that this is a specific approach for a specific type of game. As good as these engines have gotten, most of the time games will require a more ‘designed’ world, with geography or features which play into gameplay mechanics, story elements or IP. Generative AI may offer tools to make this more efficient (as many procedural tools already do), but is unlikely to replace it entirely.

    Socialization

    Imagine walking around a Skyrim or Cyberpunk style sandbox-world, full of NPC characters with their own unique look, voice, and personality. Each able to hold a conversation with you, flavoured with their own specific personality and knowledge. Not merely giving canned responses to pre-defined prompts, but able to interact fluidly with you and amongst themselves.

    Again, this is unlikely to ever be all a game needs. Stories still require specifically designed characters with particular roles which need to be shaped by the intention of writers and a design team, but it is still a tremendous opportunity to solve the social component of virtual worlds.

    These are two quickly-sketched examples of how generative AI could enable a leap forward in the experience provided by games developers – and I am sure there are many more to be found.2

    Tapping into the market

    I wanted to do this in a more subtle manner, but it’s just more practical to break down Andrew Chen’s Twitter thread:

    Games can take 3+ years to build, and technology adoption happens at specific windows of time

    If your generative AI tool is a plugin (for the Unreal Engine, for example) then a studio can pick it up at any time and add it to their development stack.3

    You shouldn’t be limited to thinking in terms of ideas that are ‘disruptive’ to how games are made, and indeed most of the opportunity may be in ideas which are complementary.

    indie games make little $. There’s only a few scaled players, who will always push on pricing

    If you were going to target indie developers it would have to be with a very specific value proposition and business model (e.g. Unity in 2004). There’s no reason to worry about this otherwise; there are enough larger studios.

    the games ecosystem is insular, with its own conferences, luminaries, and networks / networking in the games industry often involves, well, gaming. Are you good at Valorant? 🙂

    Can you tell me an industry which doesn’t have its own conferences, luminaries and networks?

    The games industry is not insular, and it is comical to characterize it as a bunch of nerds playing games together. It’s a wonderfully open, social and diverse community.4

    a large % of game cos have artists and creative people. Many are threatened by, and oppose, AI tech

    I don’t know of anyone in the games industry, artist or designer, who isn’t starry-eyed at the possibilities of what AI can enable.

    They are also familiar enough with how games work to recognise that human input is always going to be required to shape and polish the human experience which emerges on the other side.

    you need to generate editable, riggable, high-quality art assets. Right now assets are idiosyncratic and hard to edit

    Generative AI has not yet proven that it can generate useable assets, never mind well-optimised thematic assets. That problem can probably be solved, but to what end?

    Will a world created by a generative AI ever truly feel interesting, coherent, beautiful? Maybe there are better things for it to do?

    large publishers often provide tech to their internal studios. They’ll partner to learn about AI, but will try to build in-house. Is your tech defensible?

    That might have been the case 15 years ago, but the vast improvement in game engines and tools has meant that developers are much more likely to build on existing platforms.

    If a publisher believes that a tool would make development cheaper and faster then they’ll support it without blinking.

    large gaming cos care a lot about their models and data not being shared across the industry. How do you guarantee that? / they also care that their models are trained on data that’s safe from a copyright perspective. There’s lots of hoops to jump through

    Stretching a bit here, but: You train your tools on an open set of data to the point where they are useable, and allow developers to provide additional training based on data from their own IP. In that scenario there is no reason for crossover between studios.

    It’s unlikely that training from one game would ever be useful to the application of the AI in another. It is probably more likely to produce undesirable results.

    Conclusion

    Some years ago an associate of mine went to interview for a job at a games company in Seattle. The interviewer had previously been the lead designer on Starcraft, and naturally expected the candidate to play a match against him while fielding questions about the role.

    The games industry is full of these amusing anecdotes of quirky behavior, and there is a pronounced culture associated with that. However, it is condescending to think that culture stands in the way of progress, or that games studios can’t engage with business and technology partners in a perfectly competent manner.

    If you make a useful tool which solves a problem for the games industry, you will be able to access the right people to make a sale. I’d go so far as to say it’s probably easier and faster moving than many other industries.

    If that is your aim, make sure you are spending enough time talking to games developers, learning about how games are made, understanding the player mentality, and the problems that you might be able to address. As always, finding product-market fit can require a lot of learning and iteration.

    Most of all, ignore the false prophets who were reading from the web3 gospel just a few months ago. They will just ride this trend until something else comes along.

    1. Yes, throughout this article I am drawing a deliberate and passive-aggressive distinction between ‘creating’ and ‘generating’.
    2. It bothers me that I covered Explorers and Socializers, but didn’t have the time to identify anything for Achievers and Killers.
    3. And in most mid-large studios there are usually multiple teams running in parallel focused on different projects at different stages of development.
    4. The irony of a venture capitalist calling the games industry ‘insular’ is not lost on me.
  • Why venture capital should be consensus-averse

    Why venture capital should be consensus-averse

    In The General Theory of Employment, Interest and Money, Keynes wrote about investment through the metaphor of a newspaper contest to select the six best looking people from a group of photos, with the prize being awarded to the contestant whose choice most closely corresponded to the average of all contestants.

    Keynes’ point was that, despite the clear and simple instruction, contestants are actually not inclined to consider which of the photographed people are the best looking. Rather, they now consider a third-degree perspective of ‘what would the average person imagine that the average opinion is?’

    We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practise the fourth, fifth and higher degrees.

    John Maynard Keynes, Economist

    In A Simple Model of Herd Behavior, Abhijit V. Banerjee examined the inefficiencies created when decision making becomes reliant on signals from others. We become inclined to abandon our own data, in favor of prioritizing signals which themselves may also be based on nothing more than another prior signal. 

    If decreasing returns (average payoffs decline as the number of people who choose it increases) tends to reduce herding, one would expect increasing returns, which rewards doing what a lot of others are doing, to increase the tendency to herd. This is indeed what we find.

    Abhijit V. Banerjee, Ford Foundation International Professor of Economics at Massachusetts Institute of Technology

    There are a number of social psychological drivers of this behavior, but the most obvious are our desire to associate with popular choices, and the greater dispersion of responsibility if that choice proves wrong. 

    Consensus threatens innovation

    Generally, herd behavior is problematic in how it undermines sound judgment and rational choice, though by nature it tends to be low-stakes and risk-controlled. For venture capital, this innately human behavior should be viewed as an existential threat, running contrary to the needs of effectively identifying and funding innovation.

    If no great book or symphony was ever written by committee, no great portfolio has ever been selected by one, either.

    Peter Lynch, former manager of the Magellan Fund at Fidelity Investments

    The root of the name venture capital, as Evan Armstrong reminds us in Venture Capital is Ripe for Disruption, is adventure capital. It’s only really an adventure if you’re not sure of the destination, and backing innovation is exactly that: you are straying into the unknown; high risk, large potential reward. 

    The classic archetype of a venture capitalist, fitting with this concept, is a highly perceptive and analytical individual who can evaluate all kinds of oddball, out-of-the-box startups and identify the ones with potential. Someone who sees opportunities where others do not, who does not care about (or actively avoids) pattern-matching with past successes, and who ignores the noise of signals from their peers.  

    There is an old saying in enterprise software, “No one is fired for buying IBM”—people mitigate risk for their decisions by choosing the consensus option.

    This occurs even in the supposedly risky world of venture capital.

    Evan Armstrong, ‘Reformed’ Venture Capitalist

    Hunger drives herd behavior

    In recent years, as the appetite for cheap capital grew to unsustainable heights, venture capitalists became preoccupied with following external signals to ascertain whether the market would agree to provide capital to their portfolio. Would their peers validate their investment choices? Would prospective LPs recognise the value of earlier investments if they weren’t shared with other respected names? Herd behavior crept in with pernicious effect; the seductive comfort of piling into seemingly safe deals with other investors. Manufacturing winners. 

    As long as downstream investors continued participating in the game of artificial value growth (and why wouldn’t they) it was still a good model, right?

    As long as the (paper) returns were good, it was still venture capital, right?

    We know how that ended. We also broadly know why it ended (crude valuation practices, interest rates making capital more expensive, exit markets rejecting inflated prices… etc). The question we should ask now is what can be done to stop it happening again? 

    Learning from mistakes

    Anyone involved in investment of any kind should be aware of the way signals should be handled (with oven gloves). It is valuable input that can shape an investment decision but shouldn’t drive it. For venture capital, that might mean reevaluating everything from deal flow management to valuation practices. 

    • Are the majority of your deals sourced through referrals from other investors?
    • When evaluating potential investments, how dependent is your conviction on recent similar deals? 
    • How much analytical rigor are you applying to the individual nature of each opportunity?
    • When setting valuation, how much do you rely on crude ARR multiples?
    • How much does the VC Twitter echo-chamber shape your approach to early stage investment, generally? 

    These might seem like basic questions, but there is clear cause to begin a first-principles reevaluation of how capital is allocated to ideas and founders. The responsibility is to effectively fund technological progress, not to exploit an uncertain market for short-term gains.

    A new approach, with a more analytical focus on individual businesses, may seem unrealistic: too much time involved, too much uncertainty. To that, I’ll close on three points:

    • Startups in 2023 are running leaner. The great hunger for capital is over, for now. That opens the opportunity to strike out and make fund returning deals without needing to drag other investors along with you. Your ability to identify winners (not simply agree on them) matters more than ever.
    • There are tools and frameworks which make analysing startups in detail much more practical (Equidam is an obvious example). Build a process which lets you collect data about opportunities and decisions, allowing you to develop and codify your experience.
    • Reconsider industry dogma about practices and perceptions (for example: about financial projections at early stage). More data = better decisions; you just need to pick the right lens to derive the right value.

    As many have said, the 2023 vintage has great promise. Particularly for investors who best adapt to the new conditions.

    [EDIT 26/03/2023: Adding a link to Chamath Palihapitiya’s article about herd behavior in venture funds and the risks involved. It’s a much more analytical perspective, which you can read here.]

    [EDIT 22/06/2025: Adding an overdue link to Geri Kirilova’s article about enmeshment in venture capital, providing another perspective on this problem, which you can read here.]

  • The Negligible Cultural Impact of AI

    The Negligible Cultural Impact of AI

    Good art (including novels, games, movies) is defined by the humanity involved. Emotion, humour, tension. Even when AI attempts to mimic those attributes, we’ll still prefer human experiences over synthetic ones.

    We’re inclined to believe each new innovation is the ‘best’, and that the technology-driven approach is always superior. To overlook almost anything in pursuit of speed or efficiency.

    We must have the latest invention, and we’ll use it proudly until the novelty – and associated status – wears off. Then those gains in speed and efficiency can work their way to the market they’re intended for: people who are price or time sensitive.

    Microwaves were billed as kitchen gadgets for the wealthy, revolutionising home cooking. It turns out that we’d rather bake artisanal sourdough bread in a wood-fired oven, when we have the luxury of time and choice.

    We consistently overestimate the cultural impact of ‘technology for technology’s sake’. Popular visions of the future in science fiction show the wealthy living in hyper-minimal grey boxes with robots for every function. Utterly dull.

    Avatar was supposed to push the envelope for the movie industry with stereoscopic technology and CGI, offering a vivid and immersive experience like never before. It remains the highest-grossing movie of all time, but the cultural impact, relative to that, is minuscule. Few really cared about the story, or the characters involved.

    The protagonist of that franchise? James Cameron, with a 3D camera over his shoulder.

    AI only threatens the bottom-of-the-barrel stuff.

    Free apps, stock images, SEO-driven content.

    It is not a threat to broad swathes of industry and the arts in which humanity plays a major role. Genuine empathy and emotion are only going to become more valuable as the rest of our lives become more technology oriented.

    That’s not to say that AI won’t be powerful and practical. It is already shaping whole industries. We just need to have a realistic perspective on where that importance lies.

    Consider another parallel: artificial meat.

    It caused a brief stir when it was new and exciting, popping up in all kinds of fancy gastropubs. And then interest fell off. The ultimate customer for that product, once it meets the promise of being cheaper and greener than real meat, is not fine-dining restaurants. It is McDonald’s. It is MREs. It is the boxes of frozen chicken nuggets in your local discount market.

    Nobody will love it.

  • Growth incentives – web3’s failure

    Growth incentives – web3’s failure

    Web3 has largely failed, and we should talk about it

    There’s an elephant in the room.

    In the space of just a few months, NFT PFPs have vanished from Twitter, .eth usernames have fallen out of vogue, and a whole category of social media celebrities has vanished.

    The tech world went from frothing at the mouth about the future of the internet, how life would be different in the metaverse, to “oh hey, is that AI I see over there?” and wandering off.

    I’m not surprised. I’ve spent a good amount of time writing about how web3 products have ignored consumer interests, and perhaps even more writing about how web3 has had to ignore the past in order to fake progress in the present.

    I don’t mind that we’ve moved on. But we should talk about why. There should be some accountability and humility from those who were the most bullish.

    I asked on Twitter whether anyone had dared write a web3 post-mortem:

    The comparison is apt, and I suggest reading the linked article to better understand why. To summarise: the technology was cool but awkward to use, and ultimately consumers didn’t care that much.1

    So what does any of this have to do with referral programs?2

    The above explains fairly well, I think, why web3 failed to cross the chasm. There was technology, and there was money, but it was not being used to solve real problems. And yet, for a period of time, it had us all captivated – if not actually invested. Why?

    Web3 had a monumental referral program

    One curiosity to look back upon, in all of this, is that hype for NFTs was front-running interest in ‘web3’ or ‘metaverse’.

    In Feb 2021 we were keen to learn more about these magical jpgs, but it wasn’t until April that metaverse reared its head, and only by December was interest in web3 picking up steam.

    But… weren’t web3 and metaverse concepts the use case for NFTs? How could the interest precede the use case?

    In the beginning, people were hoodwinked into thinking this was a ‘digital art’ revolution and – thanks to a few exceptional examples – a lucrative one at that.

    ‘Digital art’ seems quaint in comparison to the grand promises of an internet revolution which came later. It doesn’t matter; it was enough. Our interest was captured, and money started to flow into the ecosystem. Consider, at this point, the old gold rush analogy about selling picks and shovels.

    NFTs provided a sufficient level of interest and capital for creative (and ethically questionable) people to invent new ways to sell more NFTs. Most metaverse ideas were borne out of this NFT gold rush, as well as much of what drives ‘web3’.

    The more ambitious these ideas became, the more we talked about it, the more celebrities and brands got involved, the more certain it all seemed. We’d share interesting projects as ‘alpha’ in exclusive chat groups, and we’d proudly represent our NFT project of choice on social media.

    The noise created was incredible, and the message was clear: join us in getting rich, or miss the train.

    This fundamentally optimised web3 adoption for those who wanted to get rich, not those who were interested in building the next iteration of the internet.

    Trust, privacy and decentralisation? Nowhere to be seen.

    Much like crypto, and for similar reasons, it became cannibalistic. People backing one project would lash out at others. All competition was a threat. There was no spirit of collaboration. All motivation was pointed toward increasing the (perceived) value of a project.

    That’s a fine motivation if you are an investor, but it’s fatal when your investors are also your ‘users’. Much like a startup focusing efforts on increasing valuation rather than increasing value to users, it’s going to end with a bang.

    In conclusion…

    The collapse of web3 can be attributed entirely to the perversion of its growth.

    The ecosystem created was built around a bubble, without any incentives for long term growth. No reason to spend time identifying and solving real problems.

    It’s a shame, because buried deep in there were some people genuinely trying to build a better future, but it is incredibly difficult to maintain that focus if ‘financialization’ happens too early.

    Additional reading:

    Why you should rethink referral programs

    About a month ago, Mobolaji Olorisade and Grillo Adebiyi, of African Fintech giant Cowrywise, released a retrospective on their experimentation with referral programs for customer acquisition.

    It’s a supremely interesting read, and I recommend checking it out, but I’ll provide a brief summary below.

    In short: referral programs are a perverse sign-up incentive, which leads to all kinds of unintended consequences. Rather than calibrating your focus on your ideal customer profile, it drags you in other directions – towards those that see an opportunity to exploit the program.

    Of all of the users of your product, it is the ones that found you organically, because you’re a perfect fit for their needs, which will sign up most readily and have the greatest loyalty. In practical terms: the strongest LTV/CAC.
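
    As a back-of-envelope illustration of that last point – the formula is a common simplification, and every number below is invented, not Cowrywise’s data:

    ```python
    def ltv_cac(monthly_revenue: float, gross_margin: float,
                monthly_churn: float, acquisition_cost: float) -> float:
        """Simple LTV/CAC: margin-adjusted monthly revenue over the expected
        customer lifetime (1 / churn), divided by the cost to acquire."""
        ltv = (monthly_revenue * gross_margin) / monthly_churn
        return ltv / acquisition_cost

    # Organic users: cheap to acquire, slow to churn.
    print(round(ltv_cac(20, 0.8, 0.03, 100), 1))  # ~5.3
    # Referral-incentivised users: paid to join, quick to leave.
    print(round(ltv_cac(20, 0.8, 0.10, 150), 1))  # ~1.1
    ```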

    1. If you imagine that 3D TVs had developed a similar rabidly absolutist mentality to web3 enthusiasts, demanding 3D content be exclusive to 3D TVs – and 3D TVs ONLY support 3D content, the parallels are perhaps even more vivid.
    2. Paid referral programs are a common growth strategy in the Fintech world, particularly in the ‘growth at all costs’ era. Startups would spend VC money on paying new users to onboard, depositing $10 or $25 in their new digital wallet, because all that mattered was rate of acquisition.
  • Metaverse – Reinventing the wheel

    Metaverse – Reinventing the wheel

    Earlier this week, web3 Studios released their ‘Digital Identities Report’, sharing a variety of opinions and predictions on the future of identity and social interaction in a ‘metaverse’ environment.

    There is more than fifteen years worth of fascinating sociological research on virtual worlds and digital identity. You would not know that from reading this report.

    It simultaneously presents web3 worlds as an entirely new concept that is being shaped by a new generation of ‘web3 thinkers’, while also positioning Roblox as an example of a metaverse.1

    I’ve written about this before. Specifically in regard to web3 enthusiasts ignoring the incredible groundwork done in science fiction and games, and more recently on how metaverses are fundamentally a non-technical social proposition.

    Mostly those arguments have addressed the general web3 discourse on Twitter, wishing it was better informed about the existing groundwork in this field.

    It’s a deeper issue when companies (selling web3 products) collaborate with web3 influencers (mostly NFT shills) to produce a report that is essentially a sales catalogue – but frame it as some insightful look at the social aspects of virtual worlds.

    We’re all supposed to rub our chins, and ponder this brave new world of identity in a digital environment. Once we buy one of their avatars, of course.

    So, here (and in the corresponding Twitter thread) I wanted to share a few genuinely good papers on the sociology of virtual worlds and digital identity.

    If you are genuinely interested in building the future of social interaction online, there is an absolute wealth of information available to you. It is well covered ground – thanks to genuine experts, who often spent years immersed in virtual worlds as a part of their research.

    Stretch your legs, take a wander outside of the web3 bubble.

    1. Roblox is an online game released in 2006, enjoyed by an audience that is mostly under 12 years old. Did you know that Gucci have a ‘metaverse’ installation there?