The Hedge Bet on Humanity
Happy new year! To kick off 2024, I’d love people to subscribe to my newsletter and listen to my podcast, The Ned Ludd Radio Hour. I’ve recorded an audio version of this blog, which is embedded below, so you can listen instead of reading, listen while reading, or ignore it altogether and just read.
Sometimes, when you encounter an idea for the first time, you forever remember it in the terms of that meeting. For me, when I hear the financial term “hedging” (or its offshoots, “hedge bet” or “hedge fund”) I will always think of it the way it was described in Lucy Prebble’s play, Enron. Back in 2009, Prebble dramatised the collapse of the Texan energy giant, and had the corporation’s CFO, Andy Fastow, describe hedging. Here’s what he says:
“If you got a lot of money in airlines, for example, you might think, hey, this is all going really well, lots of people fly, my investment is safe and going up. But what happens if there’s a huge airplane crash, maybe people die, oh no, folk get scared of flying and your stocks plunge. Well, the smart guy hedges his airline investment with — maybe — an investment in a car rental company. When air travel frightens people, they want to feel in control, they’ll drive interstate… When I write down everything that can possibly go wrong, as a formula. A formula I control. Nothing seems scary any more.”
Airlines and rental cars. I will forever see these as the base units of hedging. On the one hand, the innovation, the moonshot. A product that improves upon the consumer experience of a generation, which has the potential to cause a radical gear shift in the economy. And, on the other hand, a bet based on innovation failing to innovate. What if your game-changer doesn’t change the game? What then?
The Artificial Intelligence revolution was the technology story of 2023. It was also the business story of 2023. And it has the potential to be the political story of 2024. In short, it is unavoidable. All around the world, major companies are pivoting their technological efforts to make sure that they don’t slack in the arms race that is AI. At present, the race is being dominated by Microsoft-backed OpenAI and Google-owned DeepMind, but everyone, from Meta to Elon Musk, either has a dog in this fight or is down the puppy farm, eyeing up a nice preggo bitch.
And these are the base tools, the architecture on which our AI future will be built. In tandem with these blueprints for our digital future are a thousand companies unlocking their potential. Potential to do everything from writing client pitches (and blogs) to creating images that render the recesses of the imagination in stark technicolor. Transport, medicine, law, media, entertainment: everyone is grappling with the way that infrastructure can be improved and streamlined by breakthroughs in Artificial Intelligence. If you’re running a tech-focused VC firm right now (or even one not usually interested in early-stage tech companies) you’re likely only taking a handful of meetings with companies that aren’t, in some way, engaging with our AI future. And so the question this raises for me is: how do we hedge against it?
Firstly, I think it’s important to reiterate why a hedge is necessary. Foremost amongst those reasons is the very nature of Artificial Intelligence. Personally, I think AI is a bad business bet. Coded into its very essence is the ability to make itself redundant. What, after all, is a technology business other than the sum total of the intelligence — either acquired over time and turned into intellectual property, or the bods currently contracted to work in its labs and offices — in the company? Nobody values Meta because of the real estate they own in Menlo Park, or Amazon on the back of its skyscraper on Terry Avenue. These are digital infrastructure companies, and AI is a digital infrastructure tool.
How many of the Nasdaq-100 companies do not have the potential, at the very least, to lose value off the back of insurgent AI technologies? And how many investors in America do not have at least some of their capital tied up in these companies? To some extent, investors in AI are collaborating in their own, eventual, redundancy.
Once the genie is out of the bottle — an ambition that some, but not all, AI companies are working towards — there is no capitalistic solution for re-corking. It is all well and good whilst the technology is safely ensconced within the purview of Microsoft or Google, but what happens once there are open source options for developers the world over? What happens once state actors, who are less conspicuously business-minded, get their mitts on the technology? AI has the potential to be self-replicating. Once you’ve created a sufficiently powerful AI (and this is before we get to anything like Artificial General Intelligence (AGI)) it will have the potential to build product. Perhaps if you’re Mr Microsoft or Señor Googlé you’re ok with that — but what about the rest of the investment community?
Then there’s the, at present still distant, question of quantum computing. Should that become a reality, it will not only change the entirety of our infrastructure as we know it, but accelerate the capabilities of AI to an unthinkable level. With an amorphous digital blob capable of executing any task in a fraction of an instant, what space is there for capitalism? How does a system built on service provision survive the total eradication of the need for services to be provided? The only solace is that, like most of the hardware questions, the advent of workable quantum computing seems many years off… (but, wait a moment, didn’t we say that about AI not too long ago?).
And then there’s the pace of progression with the technology, which makes it self-gazumping. Make a billion-dollar bet on a company today, and that could look silly tomorrow. This is the reason why, in 2023, we’ve seen so much money being expended on building product. There is a real, genuine fear that if you miss out now, you’ll spend the next decade ruing your mistake, like failing to buy Apple stock when Steve Jobs was launching the iMac. This is leading to a lot of speculative bets on products that — to me, at least — look a lot like they could be absorbed, replicated or beaten within the next technological cycle.
So how do you properly hedge against Artificial Intelligence?
The first thing is to analyse the weaknesses of the new technologies, so that you can understand how the light gets in. For me, the most obvious weakness is take-up. At present, AI is infiltrating our digital existence more and more each day. It is being stealthily introduced to lots of products that are intended for mass-market, general consumption, in the same way that autocorrect became ubiquitous in email and texting, or those irritating squiggly lines appeared under my typoz in my word processor. The market for integrated AI apps is vast. Whether it’s GPS or smart speakers, chat clients or video transcription or remote triaging, the potential to slip AI into our lives is huge.
But the big AI models are built on a more complete vision for AI adoption than the sharpening of existing tools. Looking at the two most advanced existing programmes — ChatGPT and Dall.E — exposes a weakness. Most people don’t ever need to write documents or create images. The way that the internet’s vociferous userbase works means that the noise on adoption flows outwards from the most native participants in a new technology. And so ChatGPT’s arrival was loudly trumpeted by journalists — people like me — whose basic day job is to type and type and type and all work and no play makes Nick a dull boy. They/we were the signal boosters for ChatGPT because it exists as a potential journalist replacer.
And then, down the line a bit, were all the students and people still in education, who make up a large percentage of the participatory internet because they have a) the means to communicate, and b) time on their hands. And so all these people started talking about ChatGPT because it’s an obviously important, dangerous and useful tool in academia. It’s not a potential student-replacer, but it is something that could have a huge impact on that industry. Then there are another group of digital natives — client service professionals — who make up another of the internet’s major clans. Again, it was obvious how ChatGPT might impact or influence their work, albeit in more minor ways, and so they kept the conversational flame burning. And so on.
But it is not an endless cycle (and with tools like Dall.E and Midjourney, it’s even shorter). Neither of my parents — who are both Boomers, stokers of the economic engine that drives our world — are ever likely to need to use an AI product to, say, build a pitch deck, or create a visualisation for clients. They both have/had normal jobs; jobs that would’ve been largely unimpacted by these technologies. And these are middle class, professional people: there are also a lot of people who sit entirely outside digital fluency or adoption. Why would 99% of the world’s population ever need to prompt engineer an animated video? Why would 99% of the world’s (non-student) population ever need to write an essay?
In its potential scale, the AI revolution is often likened to the Internet revolution. That seems eminently possible. But there’s another potential future where the impact is more akin to the way that computer generated visual effects developments impacted cinema. The dawn of CGI had two major impacts: it damaged the practical effects industry, and it enhanced the visual effects industry. But for 99% of the world’s population, whose only interaction with cinema is as consumers, it changed nothing. One day they were watching Nosferatu, the next they were watching Twilight. Behind the scenes this may have been a sea change, but, for most people, it was a relatively cosmetic alteration to a product.
There’s also an ideological question that I have about AI and its purpose. There is a tendency amongst the technology’s evangelists to take what I would call the Alexandrian approach: i.e. they believe that the purpose of knowledge is to be acquired and stored, ready for access. It is a commoditisation of knowledge, which conflates “access” to information with “understanding” of that information.
I was speaking a few months ago with a friend who works in software development. He’s spent decades in the industry and is very well respected, even winning an Academy Award for his work. I mentioned that I felt irked when people described themselves as “AI artists” because they successfully typed a few words into Dall.E and exported bizarrely eroticised hyperrealistic portraits of men and women. He took greater umbrage at those who called themselves “prompt engineers”. But the point was the same. AI’s zealots, those who proselytise its eventual ubiquity, tend to be people who equate “access” with “understanding”. They believe that they know about history, simply because ChatGPT is waiting to tell them. They believe they can create art, just by inputting a few basic commands in some pre-existing software. And they believe that they can engineer the next wave of digital architecture off the complex, minute building blocks of the real minds, the real geniuses, who are building these AI technologies.
The hedge bet against AI is to bet on humanity.
It’s to bet on humanity and the world, and the need for people to interact with their physical space, to reassert itself. The covid-19 pandemic ought to have been a stark warning to the business world that increasing technological alienation creates unforeseen and unhappy consequences. Generation Z (the so-called Zoomers, because they’ve spent much of their formative years on Zoom) have undergone a well-documented mental health crisis. I suspect, in coming years, we will also bear witness to an educational crisis. Knowledge, language, culture: all these things have shifted down the order of priorities. The education system — which is the primary social structure for almost anyone under the age of 18 — is scrabbling to keep pace with technological changes. To be a smartarse in class, even 10 years ago, you needed to be actually smart; now you can just be a smartarse with a smartphone. You don’t need to actually be smart. The dumbing down of smartarsery is symptomatic of a system which is leaving us with just endless arses. Arse after arse after arse.
How do you raise a happy child in the 21st century? You make them do sport and socialise with their friends. You make them mow the neighbours’ lawns to raise money to buy pot off the grubby kid who loiters by the gas station. You teach them how to drive and then send them off to college in a beat-up station wagon, where they bunk with some kid they’ve never met before who farts and snores all night. You let them meet people, let them hate people, let them love people. You let them have sex, get married, have kids, get divorced, have a mid-life crisis, move to Thailand, move home, get hired, get fired, retrain, remarry, build a pension pot, retire, regress, move in with the aforementioned kids, grow old, die, get buried, become part of the carbon cycle we learned about in school. That corpse is a happy kid.
The hedge against AI is to believe that humans will stay interested and engaged in their surroundings. Some will live new lives with new possibilities born of new technologies — but many, most even, will live old lives. Will walk the dog, get coffee, feel oddly sentimental about the smell of sea water. And that means you can bet on curiosity, bet that people will still want to feel some sense of independence and autonomy in their creation. It means you can bet on competition, bet that the indomitable spirit of humans will make them strive to be a better human than the next human (and certainly a better human than the next computer). It means you can bet on people staying in this too too solid flesh, bet that they will stay frail and mutable and fickle. Bet that they will want to read books and drive their own cars and speak to a doctor in person and travel from one shitty country to the next. Climb mountains, dive in seas, eat in restaurants, dance under the moonlight — this is getting a bit soppy, sorry — meet people who are like them, meet people who are unlike them, and grow old. Grow old because, at the end of the day, all other options are closed.
This is the hedge bet on humanity, the rental car to the airplane of technological accelerationism. It doesn’t mean that it’s a good bet — it might be a bet that only pays off if some other great disaster befalls us — but it’s a bet that’s worth taking. Because even if it doesn’t pay out (though it is, really, a bet on capitalism prevailing) you’ll still be able to feel the breeze in your hair. And can you really put a price on that?
Thanks for reading. Do follow me on Twitter/X and drop me a line at nick@podotpods.com if you ever want to discuss technology, the media or whatever’s bothering you at that particular moment.