• LPs should encourage VC evolution

    LPs should encourage VC evolution

    In a previous article I wrote about the threat of consensus in venture capital.

    A few days later, Eric Tarczynski shared a fascinating thread about the journey with Contrary, his VC firm. He addressed this point about consensus with admirable candour, summarised here in two points:

    1. Raising from LPs is easier if you have recognisable logos attached to your previous funds. Success is measured by which big names in VC co-invested with you.
    2. Raising from LPs is easier if they get good references from their existing VCs. So you send deals to them, network with them, and co-invest with them. Success is measured by relationships.

    It’s unusual to get such an unvarnished look at the inside workings of venture capital, and the thread elicited a number of reactions. Most agreed it was a tough pill to swallow:

    Eric’s awesome but boy is that thread a pretty damning look into the inside-baseball-nepotism that starts from the top (LPs) and infects the whole VC ecosystem.

    Luke Thompson [source]

    ‘We thought that being good investors with a unique thesis that actually makes money would be the best strategy, turns out, following the herd, piling onto garbage, and being unquestioning vassals to incumbent investor power gets you a larger fund’ – My interpretation

    Del Johnson [source]

    There’s an elephant in the room in all of this. Or perhaps it’s a bull in a china shop. Either way, everyone seems to be ignoring it and it’s doing a lot of damage.

    Weak signals

    From pre-seed to IPO, there is no consistent, transparent measure of success. That’s a long time for a GP to deploy capital without any concrete metrics for success. How does an LP ascertain if their money is being put to good use?

    Samir Kaji of Allocate (and former SVB MD) shared his take on the problem that LPs face:

    LPs are programmed to use past track record as the primary driver in making a decision on whether to invest in a new fund (A recent study showed historical persistence of VC is that 70% chance a fund performs above median if prior fund is 1st Q). However, more than ever, track record can be a very weak indicator if the fund is within <5-7 years.

    • Spread of how VCs are valuing the same companies is large.
    • Current TVPI to final DPI delta will be large for many funds, and some funds have resilient companies; others are filled w/companies that were pure momentum (but still marked up).
    Samir Kaji, Allocate
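    For readers less familiar with fund metrics, the TVPI/DPI delta Samir mentions can be made concrete with a quick sketch. The numbers below are purely illustrative, not drawn from any real fund:

```python
def tvpi(distributions, nav, paid_in):
    """Total Value to Paid-In: realized + unrealized value per dollar invested."""
    return (distributions + nav) / paid_in

def dpi(distributions, paid_in):
    """Distributions to Paid-In: cash actually returned per dollar invested."""
    return distributions / paid_in

# A hypothetical young fund: $100m called, $10m distributed,
# portfolio marked at $250m on paper.
paid_in, distributions, nav = 100, 10, 250

print(tvpi(distributions, nav, paid_in))  # 2.6 -- looks great on paper
print(dpi(distributions, paid_in))        # 0.1 -- little cash returned yet
```

    The gap between those two numbers is exactly the "current TVPI to final DPI delta": a 2.6x paper multiple that may or may not survive contact with an exit market.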

    There is an obvious desire from both sides to find something to show. As Luke put it, “we can pretend it’s all about independent thinking, non consensus and right, etc, but when you’re going out for Fund 2 and on a stack of unrealized, LPs want other signals.”

    This is why we end up focusing on ‘logo hunting’ and co-investment culture. If we’re all a gang, and we back each other up, then we’ll maintain the confidence of LPs. Meanwhile, the LPs are probably feeling a degree of comfort from investing in a few different funds, without realizing how intermingled and codependent they are.

    As Chamath Palihapitiya wrote in Advice to Startup Founders and Employees: Strength Doesn’t Always Come in Numbers:

    As it turns out, what VCs of the past decade assumed to be market alpha may have actually been market beta (i.e. fellow venture funds bidding up the same cohort of companies over several funding rounds).

    Chamath Palihapitiya, Social Capital

    This is clearly an undesirable outcome for LPs: the data for measuring venture capital fund performance is flimsy, and it creates huge perverse incentives for GPs. That is not good enough when so much capital is at stake, especially when it involves pension funds and university endowments. It’s a bad look for everyone.

    The final nail in the coffin here is how current practices can create a reality-distortion field around actual performance: in effect, a company’s ‘public’ valuation only changes when they want it to. This was outlined at length in a thread from Anand Sanwal of CBInsights, which included this slide from SVB:

    This is on the mind of every LP at the moment. What do their ‘paper’ returns from 2021/22 actually mean anymore? What will happen when the companies they are invested in via VC are forced to come to terms with reality?

    Meaningful benchmarks

    When you start talking about standardising anything in venture capital, there’s a reliably cold response. Everybody likes to believe they have their secret sauce, their intuition, their process, their edge over others… despite all signs pointing towards none of that changing the outcome.

    When you talk about measuring the performance of early stage companies, that’s when the real pushback begins. There’s too much uncertainty. It’s too unreliable. Projections are always a pipe-dream.

    There’s one simple response to these concerns: “Perfect is the enemy of good”.

    If you open yourself to new ways of looking at valuation (it’s not just about “market pricing”), and new ways of performing valuation, you will find that there are practical, systematic frameworks to measure and report the development of private companies.

    Don’t get twisted up about producing an “accurate” result for an early-stage company; that is foolishness, and not the point. The goal is to provide solid, useful benchmarks which can be calibrated against the market in a transparent manner.

    For an example of how this might be achieved, I will always recommend a read through Equidam’s methodology. It combines perspectives on verifiable characteristics via qualitative methods, exit potential via the VC method, and the vision for growth via DCF methods. All packaged up into a nice, comprehensive report.
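    To illustrate just one of those components, here is a minimal sketch of the classic VC method: discount an assumed exit value back at the investor’s target rate of return. The simplifications and numbers are mine, not Equidam’s:

```python
def vc_method_valuation(projected_exit_value, years_to_exit,
                        target_annual_return, dilution=0.0):
    """Back into a present valuation from an assumed exit.

    Discount the projected exit value at the investor's target
    annual return, then haircut for expected future dilution.
    """
    discounted = projected_exit_value / (1 + target_annual_return) ** years_to_exit
    return discounted * (1 - dilution)

# Hypothetical: $100m exit in 6 years, 40% target IRR, 20% future dilution.
print(vc_method_valuation(100_000_000, 6, 0.40, dilution=0.20))  # ~ $10.6m
```

    The point is not the specific output, but that every input is explicit and auditable, which is precisely what a transparent benchmark requires.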

    What standardized reporting does for the LP/VC relationship

    If you can imagine a world where VCs produce quarterly reports on fund performance using a standardised framework, there are a number of profound benefits:

    1. LPs could better assess the performance of their existing VCs, creating more of a meritocracy.
    2. VCs would have an easier time raising, in addition to shortening their own internal feedback-loops to improve decision making.
    3. Moving away from current lazy valuation practices (ARR multiples) would help avoid extreme fluctuations in valuation, as we’ve experienced since 2021.
    4. It would (slowly) kill the dinosaurs, the giant firms which played a part in the development of this ecosystem and all of its flaws.
    5. A move towards transparency – especially around valuation – would be timely, as the SEC’s gaze falls on venture capital.
    6. There are also interesting considerations for liquidity in secondary markets serving private company equity, but that’s a whole post of its own.

    Conclusion

    It seems clear to me that this change will not come easily to venture capitalists, who are either comfortable with the status quo or simply find it convenient. However, it might be possible for LPs to set new terms, as market dynamics have shifted power in their direction.

    Still, this is a difficult argument to make. I’m suggesting no less than upending how much of venture capital operates, and I’m doing so from the position of a relative outsider.

    But I guess that’s the point? Venture capital has been a closed ecosystem for too long, full of esoteric practices shaped by a relatively tiny group of individuals. There is plenty of room for improvement, especially if we stop getting hung up on the need for ‘perfect’, when the current status is ‘poor’.

    Finally, a bigger point than any of the six I mentioned previously: if this makes us better at allocating capital to innovative ideas, and innovative people, then it’s got to be worthwhile.

  • AI as a Utility

    AI as a Utility

    Investors don’t really need to invest in net new companies to get exposure to AI’s potential halo effect; If all your portfolio companies start to integrate with the right existing tools on the market, they could bloom too. It’s the promise of horizontal tech.

    Natasha Mascarenhas, TechCrunch

    In a previous post I used the games industry as an example to make the case that AI probably isn’t going to be disruptive to any major verticals. Instead, it’s another tool in the creative process which can empower better, more efficient outcomes.

    The quote from Natasha helped cement that even further: AI should be viewed as a utility, similar to electricity or water. We’ve already witnessed the commoditization of computing power, and now it’s the turn of generative AI platforms.

    Viewed through this lens, we can avoid a lot of the hyperbole about AI threatening categories of employment. It will certainly have a significant impact on the way many industries operate, but it will not be fundamentally transformative. More evolution than disruption.

    What does this mean, right now? Not a great deal. The more forward-thinking you are, the more likely you are to be exploring whether LLMs can help with your ‘jobs to be done’. In the most basic cases, can they ease your workload by eliminating some menial tasks, or provide some inspiration?

    If there’s one area where LLMs have the potential to diverge from the ‘AI as a Utility’ idea, it’s in ‘whitelabelled’ solutions. In essence, if OpenAI could sell me a version of ChatGPT that is stripped of all reference to news, politics, current events, media – something with the conversational skills of a human but none of the knowledge – that could be tremendously useful. It could then be independently trained on very specific data sets to build a model that is practical and reliable in niche applications.1

    This would open up LLM-driven applications across a range of industries, from customer service chatbots to NPCs in video games. Imagine a company-wide virtual assistant that is aware of every invoice, file, public Slack conversation or email. The possibilities are clearly wide-ranging and powerful.

    In fact, to expand on the AI as a utility metaphor, the closest parallel we can draw today – in terms of potential, risks, and regulation – is probably nuclear power.

    I’m pro-nuclear, and I understand the argument that regulation has slowed progress in that field. But what has really slowed the industry down is the fear sown by catastrophic accidents linked to poor oversight. Three Mile Island and Chernobyl wrecked public perception and political will, and engendered much of the over-regulation which followed.

    For example, this 1978 article in The Washington Post covers the Soviet Union’s ‘optimistic’ approach to nuclear plant safety in the years prior to Chernobyl.

    This paper from the IAEA, published as a response to Chernobyl but also mentioning Three Mile Island, describes an environment of broken trust, fear and disgust.

    The dichotomy of ‘accelerationists’ and ‘doomers’ is childish. Neither is helpful. There has to be room for both progress and an appropriate level of caution. That doesn’t mean letting incumbents dictate the direction and severity of regulation, in the same way that it would have been insane to let the coal industry regulate nuclear. Regulatory capture is a concern, but too many are reducing this down to a narrative that any drive towards regulation is a threat to progress.

    The world stands to gain significantly from AI, as it has from nuclear power. The dangers, while less obvious, may be no less threatening. Nobody really knows.2

    As Elon Musk said himself: hope for the best, plan for the worst.

    1. Of course, a behemoth like Bloomberg can afford to build their own model to accommodate a specific focus. []
    2. To be clear, what I’m referring to here is true AI. General intelligence. Today’s ‘generative AI’ is more of a distraction than a threat. Its primary role seems to be a lever which VCs and technologists can pull to raise capital. []
  • Generative AI and the Games Industry

    Generative AI and the Games Industry

    This post looks at applications of generative AI in the context of the games industry, but much of the same logic can be applied elsewhere.

    Adapting to technological evolution

    With every new technology revolution – web3 most recently, and now AI – there follows a large herd of true believers. It can do all things, solve all ills, and life will never be the same again. Enamoured by possibility, they follow with a true sense of opportunity.

    Loudest amongst this herd (and most critical of nay-sayers) are the wolves in sheep’s clothing. The rent-seeking charlatans.

    This was explicit in the get-rich-quick era of web3, and much of the same problem has transferred over to AI as techno-pilgrims flee one sinking ship to pile into another.

    On the other side of the coin are the cynics. People who were raised on 56k modems and bulletin boards, who feel a deep discomfort as technology moves beyond their grasp. They felt like the rational resistance to web3, and so have little hesitation about weighing in on AI.

    We have to be conscious of both groups, and our own place on that spectrum.

    Why the games industry?

    There are three main reasons I’m keen to address the games industry as the case-study for this post:

    1. As with web3, AI is being shoved down people’s throats without due concern for why.
    2. It is largely focused on a young audience who are absent from these conversations.
    3. It connects with my personal experience in the games industry.

    If you want to read about the potential use cases for AI in banking, you’ll find a thousand thought-leader think-pieces. It was well-covered ground without much original thought even before ChatGPT came along.

    If you want to talk about the potential use cases of AI in the games industry, you’ll find some ex-crypto VCs and technologists trying desperately to pivot their brief experience. Insubstantial waffle.

    Perfection is the enemy of good

    Dealing with the more excitable technophiles, you’ll probably notice they don’t show a lot of interest in the complex applications. Their interest is in the most extreme examples of movies, games or books being entirely generated by AI (or entirely decentralized, yada yada).

    Their point is simple: if AI can do these things crudely today, then tomorrow it will be able to do them well – and at that point we’ll be forced to embrace the bold new future. Right?

    This fallacy can be observed in every parent watching their child smear paint on paper for the first time: something inside them says ‘they could be a great artist’. It’s true: the ability to manifest art can be that simple, and the child has huge potential for improvement… Yet it’s still not going to happen for all but a minuscule few.

    In both cases, the AI model and the child, there cannot merely be push, there must also be pull. There must be a need being met. An appetite being satisfied. And 99% of the time, there isn’t. Once the novelty has worn off, nobody has any interest in watching an AI-generated movie, reading an AI-generated novel, playing an AI-generated game, or looking at your child’s paintings. There just isn’t a call for it.

    Instead of putting AI on the pedestal of a godlike creator, we should look at where it can be a tool to solve a problem.

    Merchants of fun

    You can get side-tracked in talking about experiences, socialising, adventuring, exploration, curiosity, challenge, status… Ultimately, games are vehicles for fun. That’s bedrock.

    Is an AI-generated game likely to be more fun than the alternative? No, of course not, and if you suspect otherwise then you’ve not spent enough time with the wonderful and wacky people who make games. They are true creatives.1

    Any application of generative AI to the games industry must either enhance fun, or enhance the developer’s ability to deliver it.

    Exploration

    If you look at games like Minecraft or 7 Days to Die, where you can explore a procedurally generated world, it’s easy to see how generative AI might supercharge that environment building.

    It’s worth considering, though, that this is a specific approach for a specific type of game. As good as these engines have gotten, most games will require a more ‘designed’ world, with geography or features which play into gameplay mechanics, story elements or IP. Generative AI may offer tools to make this more efficient (as many procedural tools already do), but is unlikely to replace it entirely.

    Socialization

    Imagine walking around a Skyrim or Cyberpunk style sandbox-world, full of NPC characters with their own unique look, voice, and personality. Each able to hold a conversation with you, flavoured with their own specific personality and knowledge. Not merely giving canned responses to pre-defined prompts, but able to interact fluidly with you and amongst themselves.

    Again, this is unlikely to ever be all a game needs. Stories still require specifically designed characters with particular roles, shaped by the intention of writers and a design team, but it is still a tremendous opportunity to solve the social component of virtual worlds.

    These are two quickly-sketched examples of how generative AI could enable a leap forward in the experience provided by games developers – and I am sure there are many more to be found.2

    Tapping into the market

    I wanted to do this in a more subtle manner, but it’s just more practical to break down Andrew Chen’s Twitter thread:

    Games can take 3+ years to build, and technology adoption happens at specific windows of time

    If your generative AI tool is a plugin (for the Unreal Engine, for example) then a studio can pick it up at any time and add it to their development stack.3

    You shouldn’t be limited to thinking in terms of ideas that are ‘disruptive’ to how games are made; indeed, most of the opportunity may be in ideas which are complementary.

    indie games make little $. There’s only a few scaled players, who will always push on pricing

    If you were going to target indie developers it would have to be with a very specific value proposition and business model (e.g. Unity in 2004). There’s no reason to worry about this otherwise; there are enough larger studios.

    the games ecosystem is insular, with its own conferences, luminaries, and networks / “networking” in the games industry often involves, well, gaming. Are you good at Valorant? 🙂

    Can you tell me an industry which doesn’t have its own conferences, luminaries and networks?

    The games industry is not insular, and it is comical to characterize it as a bunch of nerds playing games together. It’s a wonderfully open, social and diverse community.4

    a large % of game cos have artists and creative people. Many are threatened by, and oppose, AI tech

    I don’t know of anyone in the games industry, artist or designer, who isn’t starry-eyed at the possibilities of what AI can enable.

    They are also familiar enough with how games work to recognise that human input is always going to be required to shape and polish the human experience which emerges on the other side.

    you need to generate editable, riggable, high-quality art assets. Right now assets are idiosyncratic and hard to edit

    Generative AI has not yet proven that it can generate useable assets, never mind well-optimised thematic assets. That problem can probably be solved, but to what end?

    Will a world created by a generative AI ever truly feel interesting, coherent, beautiful? Maybe there are better things for it to do?

    large publishers often provide tech to their internal studios. They’ll partner to learn about AI, but will try to build in-house. Is your tech defensible?

    That might have been the case 15 years ago, but the vast improvement in game engines and tools has meant that developers are much more likely to build on existing platforms.

    If a publisher believes that a tool would make development cheaper and faster then they’ll support it without blinking.

    large gaming cos care a lot about their models and data not being shared across the industry. How do you guarantee that? / they also care that their models are trained on data that’s safe from a copyright perspective. There’s lots of hoops to jump through

    Stretching a bit here, but: You train your tools on an open set of data to the point where they are useable, and allow developers to provide additional training based on data from their own IP. In that scenario there is no reason for crossover between studios.

    It’s unlikely that training data from one game would ever be useful when applying the AI in another; it is probably more likely to produce undesirable results.

    Conclusion

    Some years ago an associate of mine went to interview for a job at a games company in Seattle. The interviewer had previously been the lead designer on Starcraft, and naturally expected the candidate to play a match against him while fielding questions about the role.

    The games industry is full of these amusing anecdotes of quirky behavior, and there is a pronounced culture associated with that. However, it is condescending to think that culture stands in the way of progress, or that games studios can’t engage with business and technology partners in a perfectly competent manner.

    If you make a useful tool which solves a problem for the games industry, you will be able to access the right people to make a sale. I’d go so far as to say it’s probably easier and faster moving than many other industries.

    If that is your aim, make sure you are spending enough time talking to games developers, learning about how games are made, understanding the player mentality, and the problems that you might be able to address. As always, finding product-market fit can require a lot of learning and iteration.

    Most of all, ignore the false prophets who were reading from the web3 gospel just a few months ago. They will just ride this trend until something else comes along.

    1. Yes, throughout this article I am drawing a deliberate and passive-aggressive distinction between ‘creating’ and ‘generating’. []
    2. It bothers me that I covered Explorers and Socializers, but didn’t have the time to identify anything for Achievers and Killers. []
    3. And in most mid-large studios there are usually multiple teams running in parallel focused on different projects at different stages of development. []
    4. The irony of a venture capitalist calling the games industry ‘insular’ is not lost on me. []
  • Why venture capital should be consensus-averse

    Why venture capital should be consensus-averse

    In The General Theory of Employment, Interest and Money, Keynes wrote about investment through the metaphor of a newspaper contest to select the six best looking people from a group of photos, with the prize being awarded to the contestant whose choice most closely corresponded to the average of all contestants.

    Keynes’ point was that, despite the clear and simple instruction, contestants are not actually inclined to consider which of the photographed people are the best looking. Rather, they consider a third-degree perspective: ‘what would the average person imagine the average opinion to be?’

    “We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practise the fourth, fifth and higher degrees.”

    John Maynard Keynes, Economist

    In A Simple Model of Herd Behavior, Abhijit V. Banerjee examined the inefficiencies created when decision making becomes reliant on signals from others. We become inclined to abandon our own data, in favor of prioritizing signals which themselves may also be based on nothing more than another prior signal. 

    “This suggests that the very act of trying to use the information contained in the decisions made by others makes each person’s decision less responsive to her own information and hence less informative to others. Indeed, we find that in equilibrium the reduction of informativeness may be so severe that in an ex ante welfare sense society may actually be better off by constraining some of the people to use only their own information.”

    Abhijit V. Banerjee, Ford Foundation International Professor of Economics at Massachusetts Institute of Technology

    There are a number of social psychological drivers of this behavior, but the most obvious are our desire to associate with popular choices, and the greater dispersion of responsibility if that choice proves wrong. 
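    Banerjee’s result can be sketched as a toy information cascade. This is my own simplified illustration, not his formal model: each agent holds a noisy private signal but also observes earlier choices, and once the running majority of prior choices outweighs a single signal, imitation becomes the rational move – and choices stop conveying new information.

```python
import random

def herd_simulation(n_agents, true_state, signal_accuracy, seed=0):
    """Toy information cascade in the spirit of Banerjee's model.

    Each agent gets a private binary signal matching the true state
    with probability `signal_accuracy`, and sees all prior choices.
    Once the lead of earlier choices reaches two, agents rationally
    ignore their own signal and imitate.
    """
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < signal_accuracy else 1 - true_state
        lead = sum(1 if c == 1 else -1 for c in choices)
        if lead >= 2:        # cascade locked on choice 1
            choices.append(1)
        elif lead <= -2:     # cascade locked on choice 0
            choices.append(0)
        else:
            choices.append(signal)
    return choices

# With 70%-accurate signals, an unlucky early run can lock the
# entire population into the wrong choice forever.
print(herd_simulation(20, true_state=1, signal_accuracy=0.7, seed=42))
```

    Run it across different seeds and you will see the uncomfortable property Banerjee describes: the crowd’s behaviour is driven far more by who happened to choose first than by the information the group collectively holds.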

    Consensus threatens innovation

    Generally, herd behavior is problematic in how it undermines sound judgment and rational choice, though by nature it tends to be low-stakes and risk-controlled. For venture capital, this innately human behavior should be viewed as an existential threat, running contrary to the task of effectively identifying and funding innovation.

    “If no great book or symphony was ever written by committee, no great portfolio has ever been selected by one, either.”

    Peter Lynch, former manager of the Magellan Fund at Fidelity Investments

    The root of the name venture capital, as Evan Armstrong reminds us in Venture Capital is Ripe for Disruption, is adventure capital. It’s only really an adventure if you’re not sure of the destination, and backing innovation is exactly that: you are straying into the unknown; high risk, large potential reward. 

    The classic archetype of a venture capitalist, fitting with this concept, is a highly perceptive and analytical individual who can evaluate all kinds of oddball, out-of-the-box startups and identify the ones with potential. Someone who sees opportunities where others do not, who does not care about (or actively avoids) pattern-matching with past successes, and who ignores the noise of signals from their peers.  

    “There is an old saying in enterprise software, “No one is fired for buying IBM”—people mitigate risk for their decisions by choosing the consensus option.

    This occurs even in the supposedly risky world of venture capital.”

    Evan Armstrong, ‘Reformed’ Venture Capitalist

    Hunger drives herd behavior

    In recent years, as the appetite for cheap capital grew to unsustainable heights, venture capitalists became preoccupied with following external signals to ascertain whether the market would agree to provide capital to their portfolio. Would their peers validate their investment choices? Would prospective LPs recognise the value of earlier investments if they weren’t shared with other respected names? Herd behavior crept in with pernicious effect; the seductive comfort of piling into seemingly safe deals with other investors. Manufacturing winners. 

    As long as downstream investors continued participating in the game of artificial value growth (and why wouldn’t they) it was still a good model, right?

    As long as the (paper) returns were good, it was still venture capital, right?

    We know how that ended. We also broadly know why it ended (crude valuation practices, interest rates making capital more expensive, exit markets rejecting inflated prices… etc). The question we should ask now is what can be done to stop it happening again? 

    Learning from mistakes

    Anyone involved in investment of any kind should be aware of the way signals should be handled (with oven gloves). It is valuable input that can shape an investment decision but shouldn’t drive it. For venture capital, that might mean reevaluating everything from deal flow management to valuation practices. 

    • Are the majority of your deals sourced through referrals from other investors?
    • When evaluating potential investments, how dependent is your conviction on recent similar deals? 
    • How much analytical rigor are you applying to the individual nature of each opportunity?
    • When setting valuation, how much do you rely on crude ARR multiples?
    • How much does the VC Twitter echo-chamber shape your approach to early stage investment, generally? 

    These might seem like basic questions, but there is clear cause to begin a first-principles reevaluation of how capital is allocated to ideas and founders. The responsibility is to effectively fund technological progress, not to exploit an uncertain market for short-term gains.

    A new approach, with a more analytical focus on individual businesses, may seem unrealistic: too much time involved, too much uncertainty. To that, I’ll close on three points:

    • Startups in 2023 are running leaner. The great hunger for capital is over, for now. That opens the opportunity to strike out and make fund-returning deals without needing to drag other investors along with you. Your ability to identify winners (not simply agree on them) matters more than ever.
    • There are tools and frameworks which make analysing startups in detail much more practical (Equidam is an obvious example). Build a process which lets you collect data about opportunities and decisions, allowing you to develop and codify your experience.
    • Reconsider industry dogma about practices and perceptions (for example: about financial projections at early stage). More data = better decisions, you just need to pick the right lens to derive the right value.

    As many have said, the 2023 vintage has great promise. Particularly for investors who best adapt to the new conditions.

    [EDIT 26/03/2023: Adding a link to Chamath Palihapitiya’s article about herd behavior in venture funds and the risks involved. It’s a much more analytical perspective, which you can read here.]

    [EDIT 22/06/2025: Adding an overdue link to Geri Kirilova’s article about enmeshment in venture capital, providing another perspective on this problem, which you can read here.]

  • The Negligible Cultural Impact of AI

    The Negligible Cultural Impact of AI

    Good art (including novels, games, movies) is defined by the humanity involved. Emotion, humour, tension. Even when AI attempts to mimic those attributes, we’ll still prefer human experiences over synthetic ones.

    We’re inclined to believe each new innovation is the ‘best’, and that the technology-driven approach is always superior. To overlook almost anything in pursuit of speed or efficiency.

    We must have the latest invention, and we’ll use it proudly until the novelty – and associated status – wears off. Then those gains in speed and efficiency can work their way to the market they’re intended for: people who are price or time sensitive.

    Microwaves were billed as kitchen gadgets for the wealthy, revolutionising home cooking. It turns out that we’d rather bake artisanal sourdough bread in a wood-fired oven, when we have the luxury of time and choice.

    We consistently overestimate the cultural impact of ‘technology for technology’s sake’. Popular visions of the future in science fiction show the wealthy living in hyper-minimal grey boxes with robots for every function. Utterly dull.

    Avatar was supposed to push the envelope for the movie industry with stereoscopic technology and CGI, offering a vivid and immersive experience like never before. It remains the highest-grossing movie of all time, but the cultural impact, relative to that, is minuscule. Few really cared about the story, or the characters involved.

    The protagonist of that franchise? James Cameron, with a 3D camera over his shoulder.

    AI only threatens the bottom-of-the-barrel stuff.

    Free apps, stock images, SEO-driven content.

    It is not a threat to broad swathes of industry and the arts in which humanity plays a major role. Genuine empathy and emotion is only going to become more valuable, as the rest of our lives become more technology oriented.

    That’s not to say that AI won’t be powerful and practical. It is already shaping whole industries. We just need to have a realistic perspective on where that importance lies.

    Consider another parallel: artificial meat.

    It caused a brief stir when it was new and exciting, popping up in all kinds of fancy gastropubs. And then interest fell off. The ultimate customer for that product, once it meets the promise of being cheaper and greener than real meat, is not fine-dining restaurants. It is McDonald’s. It is MREs. It is the boxes of frozen chicken nuggets in your local discount market.

    Nobody will love it.

  • Growth incentives – web3’s failure

    Growth incentives – web3’s failure

    Web3 has largely failed, and we should talk about it

    There’s an elephant in the room.

    In the space of just a few months, NFT PFPs have vanished from Twitter, .eth usernames have fallen out of vogue, and a whole category of social media celebrities has disappeared.

    The tech world went from frothing at the mouth about the future of the internet, how life would be different in the metaverse, to “oh hey, is that AI I see over there?” and wandering off.

    I’m not surprised. I’ve spent a good amount of time writing about how web3 products have ignored consumer interests, and perhaps even more writing about how web3 has had to ignore the past in order to fake progress in the present.

    I don’t mind that we’ve moved on. But we should talk about why. There should be some accountability and humility from those who were the most bullish.

    I asked on Twitter whether anyone had dared write a web3 post-mortem:

    The comparison is apt, and I suggest reading the linked article to better understand why. To summarise: the technology was cool but awkward to use, and ultimately consumers didn’t care that much.1

    So what does any of this have to do with referral programs?2

    The above explains fairly well, I think, why web3 failed to cross the chasm. There was technology, and there was money, but it was not being used to solve real problems. And yet, for a period of time, it had us all captivated – if not actually invested. Why?

    Web3 had a monumental referral program

    One curiosity to look back upon, in all of this, is that hype for NFTs was front-running interest in ‘web3’ or ‘metaverse’.

    In Feb 2021 we were keen to learn more about these magical jpgs, but it wasn’t until April that metaverse reared its head, and only by December was interest in web3 picking up steam.

    But… weren’t web3 and metaverse concepts the use case for NFTs? How could the interest precede the use case?

    In the beginning, people were hoodwinked into thinking this was a ‘digital art’ revolution and – thanks to a few exceptional examples – a lucrative one at that.

    ‘Digital art’ seems quaint in comparison to the grand promises of an internet revolution which came later. It doesn’t matter; it was enough. Our interest was captured, and money started to flow into the ecosystem. Consider, at this point, the old gold rush analogy about selling picks and shovels.

    NFTs provided a sufficient level of interest and capital for creative (and ethically questionable) people to invent new ways to sell more NFTs. Most metaverse ideas were borne out of this NFT gold rush, as well as much of what drives ‘web3’.

    The more ambitious these ideas became, the more we talked about it, the more celebrities and brands got involved, the more certain it all seemed. We’d share interesting projects as ‘alpha’ in exclusive chat groups, and we’d proudly represent our NFT project of choice on social media.

    The noise created was incredible, and the message was clear: join us in getting rich, or miss the train.

    This fundamentally optimised web3 adoption for those who wanted to get rich, not those who were interested in building the next iteration of the internet.

    Trust, privacy and decentralisation? Nowhere to be seen.

    Much like crypto, and for similar reasons, it became cannibalistic. People backing one project would lash out at others. All competition was a threat. There was no spirit of collaboration. All motivation was pointed toward increasing the (perceived) value of a project.

    That’s a fine motivation if you are an investor, but it’s fatal when your investors are also your ‘users’. Much like a startup focusing efforts on increasing valuation rather than increasing value to users, it’s going to end with a bang.

    In conclusion…

    The collapse of web3 can be attributed entirely to the perversion of its growth.

    The ecosystem created was built around a bubble, without any incentives for long term growth. No reason to spend time identifying and solving real problems.

    It’s a shame, because buried deep in there were some people genuinely trying to build a better future, but it is incredibly difficult to maintain that focus if ‘financialization’ happens too early.

    Additional reading:

    Why you should rethink referral programs

    About a month ago, Mobolaji Olorisade and Grillo Adebiyi, of African Fintech giant Cowrywise, released a retrospective on their experimentation with referral programs for customer acquisition.

    It’s a supremely interesting read, and I recommend checking it out, but I’ll provide a brief summary below.

    In short: referral programs are a perverse sign-up incentive, which leads to all kinds of unintended consequences. Rather than calibrating your focus on your ideal customer profile, it drags you in other directions – towards those who see an opportunity to exploit the program.

    Of all of the users of your product, it is the ones who found you organically, because you’re a perfect fit for their needs, who will sign up most readily and have the greatest loyalty. In practical terms: the strongest LTV/CAC.

    1. If you imagine that 3D TVs had developed a similarly rabid, absolutist mentality to web3 enthusiasts – demanding that 3D content be exclusive to 3D TVs, and that 3D TVs support ONLY 3D content – the parallels are perhaps even more vivid. []
    2. Paid referral programs are a common growth strategy in the Fintech world, particularly in the ‘growth at all costs’ era. Startups would spend VC money on paying new users to onboard, depositing $10 or $25 in their new digital wallet, because all that mattered was rate of acquisition. []
  • Metaverse – Reinventing the wheel

    Metaverse – Reinventing the wheel

    Earlier this week, web3 Studios released their ‘Digital Identities Report‘, sharing a variety of opinions and predictions on the future of identity and social interaction in a ‘metaverse’ environment.

    There is more than fifteen years worth of fascinating sociological research on virtual worlds and digital identity. You would not know that from reading this report.

    It simultaneously presents web3 worlds as an entirely new concept that is being shaped by a new generation of ‘web3 thinkers’, while also positioning Roblox as an example of a metaverse.1

    I’ve written about this before. Specifically in regard to web3 enthusiasts ignoring the incredible groundwork done in science fiction and games, and more recently on how metaverses are fundamentally a non-technical social proposition.

    Mostly those arguments have addressed the general web3 discourse on Twitter, wishing it was better informed about the existing groundwork in this field.

    It’s a deeper issue when companies (selling web3 products) collaborate with web3 influencers (mostly NFT shills) to produce a report that is essentially a sales catalogue – but frame it as some insightful look at the social aspects of virtual worlds.

    We’re all supposed to rub our chins, and ponder this brave new world of identity in a digital environment. Once we buy one of their avatars, of course.

    So, here (and in the corresponding Twitter thread) I wanted to share a few genuinely good papers on the sociology of virtual worlds and digital identity:

    If you are genuinely interested in building the future of social interaction online, there is an absolute wealth of information available to you. It is well covered ground – thanks to genuine experts, who often spent years immersed in virtual worlds as a part of their research.

    Stretch your legs, take a wander outside of the web3 bubble.

    1. Roblox is an online game released in 2006, enjoyed by an audience that is mostly under 12 years old. Did you know that Gucci have a ‘metaverse’ installation there? []
  • Ticketing – the model for consumer tokens

    Ticketing – the model for consumer tokens

    I’ve been labelled a ‘Web3 skeptic’. If you’ve read any of my other content here, you’re probably just confused about how I feel. So, let me clarify:

    Much of the capital that has been poured into Web3, to date, has been wasted. Too many get-rich-quick schemes and half-baked ideas. We need to do better. Specifically in demonstrating the practical, tangible benefits of the technology.

    It’s difficult (as Marc Andreessen will attest) to pin down a solid Web3 use-case. That doesn’t mean we’re wrong, it just means we are early.

    In the 90s, it was so fun to play with the internet. The online chat rooms and messaging boards. Email. Browsing the internet. Audio files. Much of this tech was half-baked back then, so it was hard to see what they’d be used for. This is where I think we are with Web3.

    Elizabeth Yin

    If a person believes that Web3 really is the future of the internet, then they should be able to articulate the (theoretical) gains, right?

    That hasn’t really been the case up to now. So let’s try, beginning with a use-case I discussed with someone in the thread linked to Elizabeth Yin’s quote:

    The Ticketing Use-Case

    You can’t suggest reinventing an industry without looking at the pros and cons of how it operates today. Too many suggested Web3 use-cases fly in the face of reality because the fundamental research hasn’t been done.

    And let’s face it, for this example that means one thing:

    Ticketing in Web2 with Ticketmaster

    There are a number of well written articles1 which cover why Ticketmaster is a behemoth. They are worth reading, particularly for perspective on how many competitors Ticketmaster has crushed over the years.

    I’ll attempt to summarise the key points here. First the strengths:

    Venues love it

    Ticketmaster was the first ticketing platform to revenue-share with venues that adopted their solution, offering a percentage of their service charge. They also make sure that venues are paid reliably and promptly, if not in advance.

    Artists love it

    When you are the de facto platform for ticket sales, you build up an incredible database of customers, and a wealth of data about their preferences and demographics. If you want to make sure an event is sold out, Ticketmaster is the way to get that done.

    Consumers tolerate it

    Tickets going on sale for a major performance are an IT nightmare: a huge number of users, all at once, trying to complete a relatively complex transaction. The ability to scale capacity to accommodate demand is key, and Ticketmaster has proven it does that well.

    Now the weaknesses:

    Venues suspect they could do better

    If you are reliably selling out your venue, because it’s the best in the area and you’ve built something great, you might start to wonder if you really need Ticketmaster.

    You may get a fraction of their service charge, but if you ticketed your own events you would secure a bigger percentage and build up your own customer database.

    Artists could absolutely do better

    Let’s say you are a tremendously popular musical artist with a global fanbase.

    Wouldn’t you LOVE to be able to own all of the data related to your fans? Wouldn’t you like to own all of that traffic? Maybe build a ticketing system that made sure die-hard fans were looked after, and scalpers had a harder time? Offer them a fairer price, from which you extract a bigger percentage?

    Customers deserve better

    We could focus on the massive legacy tech stack, and how slowly Ticketmaster moves in updating their platform to provide a better experience, but the obvious choice here is the service charge. As a consumer you are quite frequently paying double the actual ticket price for a sub-par service.

    So, with all of that in mind, how can Web3 offer a clear improvement on the ticketing experience – for all three stakeholders in the process?

    Ticketing in Web3

    It seems clear to me that no Web3 solution can compete with Ticketmaster on its own terms. It is too well embedded in the industry, and offers the lowest-risk outcome for the major stakeholders: venues and artists.

    The only way forward is to present an entirely new model for ticketing, focused on the key values offered by Web3 technology: decentralisation, privacy, security.

    An open protocol for all stakeholders

    It seems reasonable to start by assuming that neither the venue nor the artist should ‘own’ ticketing, never mind an external corporation. There is too much value being controlled by just one party in a multi-party transaction.

    We can imagine that a Web3 implementation of ticketing would begin in a fairly standard manner: a user signs up by connecting with (or setting up) their wallet, adding as much additional information (name, email, etc) in that flow as they are comfortable sharing.

    This registration could happen at the point of sale when buying a ticket, or earlier, when joining a band’s official fan club or a local venue’s online community – with the usual membership incentives.

    The created/connected wallet would then serve as that individual’s identity for any band, venue or ticket seller which was built on this technology.

    I want to go see Metallica

    To play out a scenario, let’s say this user wants to attend the 2024 Metallica tour at the Birmingham NEC Arena.

    A) They could visit the Metallica website and connect their wallet on the tour page, which would highlight the tour dates in their area, and allow them to purchase directly.

    B) They could visit the Birmingham NEC website and connect their wallet on the event calendar page, which would highlight events which fit their preferences and history.

    C) They could visit any number of other websites that have this platform integrated – be they local event pages, Metallica fan groups, niche metal communities, or big national event agencies.

    In each scenario, a small percentage of the ticket price is awarded to the originator of the sale.

    The next part of the transaction will be based on whatever terms have been agreed between Metallica and the NEC.

    • The NEC gets 100% of ticket revenue until the venue rental is paid off.
    • The NEC gets a percentage of revenue until their costs are covered.
    • The NEC gets a fixed percentage of total revenue.
    • The NEC rental is financed in advance, against the future revenue of ticket sales, based on historical sales performance.2

    The remaining ticket revenue is sent to Metallica, with the option of doing a further division to secondary stakeholders such as logistics companies, catering agencies, supporting acts, charities etc.
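    To make the flow of a single transaction concrete, here is a minimal sketch of the settlement described above. Every percentage, party name and amount is purely illustrative – the actual rates would be whatever terms the stakeholders agree:

    ```python
    def settle_ticket(price, originator_pct, venue_pct, secondary_pcts):
        """Split one ticket sale between the sale originator, the venue,
        any secondary stakeholders (support acts, charities, etc.), and
        the artist, who receives the remainder. All rates illustrative."""
        payout = {
            "originator": price * originator_pct,  # whoever surfaced the sale
            "venue": price * venue_pct,            # e.g. fixed-percentage deal
        }
        for name, pct in secondary_pcts.items():
            payout[name] = price * pct
        # Artist takes whatever is left after all agreed splits
        payout["artist"] = price - sum(payout.values())
        return payout

    # A £100 ticket under a fixed-percentage venue deal (hypothetical figures)
    payout = settle_ticket(
        100.0,
        originator_pct=0.02,
        venue_pct=0.20,
        secondary_pcts={"support_act": 0.05, "charity": 0.01},
    )
    ```

    Because every split is computed at the moment of sale, each stakeholder sees their share (and the event’s sales velocity) in real time, rather than waiting for a platform’s settlement cycle.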

    Crucially, both the NEC and Metallica are able to capture data from the transactions, monitor how well an event is selling in real-time, own their own part of that promotion and the revenue it yields, have optimal cashflow, and build a better understanding of their audience.

    Long term, the NEC could reward some of their most regular event attendees with early access to ticket sales, discounted tickets, or perks in the venue like free drinks or VIP section access.

    Metallica could build a database of its fans worldwide. Which fans always come to see them when they are in town? Which fans have travelled the most to see them around the world? Which fans have been following them for the longest? Again, there are obvious opportunities here for Metallica to reward true fans with early access to sales, discounts, exclusive merchandise, meet and greets… etc.

    From the perspective of the user, they build a closer relationship with the NEC, being rewarded for their patronage, and they enjoy a sense of recognition from Metallica for their loyalty. They also get much more fairly and transparently priced tickets, and a super experience as a buyer/consumer.

    Conclusion

    A system like this would need to build significant momentum to compete with Ticketmaster’s entrenched position, but there are enough network effects (and existing incentives to provide a better, more fairly priced alternative) that it certainly has the potential.

    There are also much deeper and more thought-provoking possibilities when you start to consider how else this platform could be integrated into other platforms and services, or where else it could usefully be applied.

    1. https://slate.com/human-interest/2015/05/ticketmaster-why-do-so-many-music-venues-use-it-when-everyone-hates-it.html, https://www.wired.com/2010/11/mf-ticketmaster/ []
    2. Indeed, the entire cost of the event, band, venue, logistics, support, could be financed on historical data. []
  • Virtual Worlds – a social, not technological, phenomenon

    Virtual Worlds – a social, not technological, phenomenon

    To begin, a quote from Tom Boellstorff, Professor of Anthropology at UC Irvine:

    The metaverse’s history indicates that social immersion is the metaverse’s foundation.

    Tom Boellstorff

    Tom is a bona fide expert in virtual worlds, and I recommend reading the whole article.1

    Through his work, he has spent a tremendous amount of time in Second Life, the most notorious of the non-game virtual worlds, including two years doing field-work for the book ‘Coming of Age in Second Life‘.

    The bottom line in his article for Fast Company is that meaningful immersion is achieved socially, rather than technologically. It is not about VR headsets, it is about networks of relationships.

    Put another way:

    Humans are at the centre of it. Not technology.

    Web3 – learning from science fiction

    If you have spent any time at all in environments like Second Life, or close equivalents in the MMO genre like Ultima Online, World of Warcraft or EVE, that sentiment should ring true.

    None of those games have cutting-edge graphics or VR capability, but they do have immensely strong social dynamics. They are compelling, immersive experiences because you are in a living world, populated by real people. That they act like real people is important, whether they look like real people is not.

    Put yet another way:

    A cyberspace is defined more by the interactions among the actors within it than by the technology with which it is implemented.

    The Lessons of Lucasfilm’s Habitat

    The social environment of virtual worlds stands in stark contrast to platforms like Facebook. Virtual worlds enable authentic relationships, whereas what we refer to as ‘social media’ today largely trivialises relationships by reducing them to basic forms of engagement.2

    That social element is the mainstay of retention for games like World of Warcraft. It is not uncommon for an individual to maintain their subscription because of the community (their circle of friends, their guild or clan mates) and the strength of the identity (recognition, reputation, notoriety) they have built in relation to the story of the world.3

    Three pillars for virtual worlds

    If you examine the social dynamics which drive virtual worlds, there are three critical factors:

    • Identity – Individual expression.
    • Community – Relationships.
    • Story – Context and purpose.

    That trio encompasses absolutely everything required in a successful virtual world. Not graphical prowess. Not VR. Not financial incentives. Not elaborate mechanics. Importantly, while all three factors are critical, both community and story are dependent on identity.

    It is difficult to overstate just how important that concept of identity has become in the relatively narrow context of virtual worlds. How crucial it is to their success, and how much it contributes to a rewarding player experience.

    There’s no reason for it to stop there. If you were able to design your personal identity from the ground up, in a more expressive (or more discreet) format, ignoring norms and conventions, why wouldn’t you? This concept is destined to spread further, as our online life diverges from our offline life and we gravitate towards the format which best fits the context.4

    What we’re circling back to is the question of identity in ‘the metaverse’; how these principles for individual virtual worlds apply to our entire virtual existence: what will identity look like in a Web3 world?

    Put another way: what do the concepts of Identity, Community and Story look like at a meta level which spans multiple projects, platforms and mediums?

    How can a ‘metaidentity’ enable a new model for the web?

    What form does a ‘metacommunity’ take for Web3?

    Could there be a ‘metanarrative’ for this new context?

    1. There’s an interesting semantic argument here, about whether metaverses are virtual worlds. I’m personally of the opinion that ‘the metaverse’ is a more nebulous description for whole digital aspect to our existence, centered around our digital identity. Virtual worlds are more specific sandbox environments for exploration and socialising in a digital environment. []
    2. Anecdotally, I know at least 3 couples who married after meeting in an MMO. I’ve never heard of anyone getting married after meeting on Facebook. []
    3. It is also not uncommon for that virtual identity to persist outside of the game, both across other mediums and for many years after the game’s servers close. []
    4. For people whose status, credibility or legacy is rooted in their real identity, they may choose to continue using their real identity as their digital avatar. For others, there is an increasing trend to invest in a cross-platform virtual identity. It was evident in the gaming communities of the early 2000s, and it is perhaps even more common today amongst Web3 enthusiasts. []
  • Screening Pitches

    Screening Pitches

    There are five straightforward questions with which you can quickly evaluate a startup pitch, combining the strength of a proposition with its delivery.

    These questions bear some resemblance to the Scorecard Method of startup valuation, which focuses on qualitative measures for early-stage companies, but with an additional focus on quantifying the market need.

    I have applied this approach to screening accelerator applications, but it can be used as the first step of evaluation in any pitch process.

    For the sake of simplicity we can score each of these on a scale of 1 to 5.

    1) Severity of Problem

    This is a question that can vary significantly based on the market you are looking at. Startups in emerging economies tend to focus more on the (high-scoring) primary problems, which is why they’ve proven better able to resist economic downturns.

    1 – Micromobility, dating apps, rapid delivery (esp. red ocean)

    5 – Access to water, energy, core financial services (esp. blue ocean)

    2) Strength of Solution

    Simply, are you providing a way for people to better cope with a particular pain, or have you managed to cure it in a complete and lasting manner?

    1 – Solution alleviates the problem

    5 – Solution eliminates the problem

    3) Scalability

    There’s almost always a focus on the size of the market. TAM, SAM and SOM will feature in virtually every startup pitch deck. What’s often overlooked is how easy it is to scale into that market. Regulatory barriers, poor infrastructure, or corporate customers who move slowly are always a threat.

    1 – Infrastructure or regulatory requirements, long sales cycles and onboarding (esp. in small markets)

    5 – Web or mobile based product that is available on-demand to the entire target market (esp. in large markets)

    4) Profitability

    In many markets a poor product will win if it is just slightly cheaper than a better product. This kind of price suppression can be a killer for otherwise solid businesses. Similarly, some problems require costly solutions like agent networks, physical touchpoints, or a highly involved sales and customer service capability.

    1 – Low margins (high CAC/CRC/COGS, low LTV)

    5 – High margins (low CAC/CRC/COGS, high LTV)

    5) Team

    This is the hardest part of a pitch deck to quickly evaluate, and requires the most additional research. LinkedIn, interviews, papers, Glassdoor… the number of potential resources extends as far as your willingness to do the research.

    1 – No obvious fit for the problem being solved, by education, experience, or personal background.

    5 – Exactly who you imagine should be tackling this problem, with a combination of both motivation and ability.

    Conclusion

    At the end of this fairly rudimentary process you have a score out of 25 which should give you a very broad overview of the potential of this business. It is intended to quickly take a list of some hundreds of pitches down to the 20-30 you think are worth a closer look.

    At that point you can then start looking at some of the more granular data:

    1. Existing partners, strategic relationships, etc
    2. Industry and regional context
    3. Traction and development of competitors
    4. Revenue forecasts and unit economics
    5. IP considerations
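    The rubric above is simple enough to sketch in code. The criterion keys and the shortlist size here are illustrative stand-ins, not a prescribed schema:

    ```python
    CRITERIA = ("problem_severity", "solution_strength",
                "scalability", "profitability", "team")

    def score_pitch(scores):
        """Sum five 1-5 criterion scores into a total out of 25."""
        assert set(scores) == set(CRITERIA), "score every criterion exactly once"
        assert all(1 <= s <= 5 for s in scores.values()), "each score is 1 to 5"
        return sum(scores.values())

    def shortlist(pitches, keep=30):
        """Rank a long list of scored pitches and keep the top few
        for the closer, more granular evaluation stage."""
        return sorted(pitches, key=lambda p: score_pitch(p["scores"]),
                      reverse=True)[:keep]
    ```

    The point is not the arithmetic, of course – it is that forcing each pitch through the same five questions makes a pile of hundreds of applications comparable at a glance.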