I Will Never Use AI to Code (or write)

These days, my stance on AI pisses a lot of people right off. I decided to write down exactly why I'm perfectly happy doing so, and why I'm unlikely to change my mind. Strap in, this is going to be one hell of a ride 😅

I fucking enjoy coding

I actually like writing code. Why would I want to give up something I enjoy? I was explaining AI coding to my Dad and asked him:

"Would you write a book if writing was prompting an AI to generate a novel and then giving it feedback, like 'flesh out this character more'?"

"No, because it wouldn't be my writing, it wouldn't be fun?"

"That's exactly how I feel about using AI to code. Maybe the only useful thing about AI coding is making it easy to identify engineers that don't enjoy writing code."

I also fucking enjoy writing. I enjoy editing my writing, I enjoy making it as succinct as possible. I especially enjoy, on occasion, intentionally breaking the rules for effect. Sometimes, I particularly enjoy finding another place to squeeze in yet another superfluous fucking swearword! 😁

I wish I could leave it here, but this isn't reason enough for a lot of people. I guess I'll provide some logical arguments or something, whatever.

Skill Development and the Illusion of Learning

A lot of people mistake recognition for recollection, and comprehension for understanding. These are not the same thing. You don't develop skills by reading about them. You have to use them: process the information, integrate what you've learnt into your existing mental schema, try to apply it, make some mistakes, identify the contradictions between what you've learnt and your existing mental model, and resolve them. You have to do the work for any of this to stick.

The fact is that most of the time, AI isn't helping you do this at all. It's feeding you information that you recognize, not things that you can recall for yourself. Reading about a topic gives you only superficial knowledge of it; you won't be able to manipulate the ideas or work with them yourself, not in any meaningful way.

A friend of mine made an excellent point recently:

There’s some super mundane tasks in every profession… but if you don’t do them and understand the reasons why and how they’re done…. You’re missing out on very critical information. They cannot be skipped.

I saw something another designer posted, saying she was hiring and she was really sad. Every portfolio she saw was the same mix and mash up of templates and design assets available online. Or done in one of the many web builders that are basically just a drag and drop interface. She was saying how designers aren’t learning web constraints of any kind, or how to do icons or graphics from scratch. And everything looks the same because nobody knows the constraints to be able to push them, or be creative with them.

On the other hand, if the AI is "better" than you at a particular topic, then you're incapable of identifying its confident hallucinations and incapable of correctly judging the quality of its output. This is much like a company started by founders who lack a particular skill — they have a gaping blind spot in that area and, lacking the requisite skills to judge competence in that field, struggle to hire someone competent. It's an externalised Dunning-Kruger effect.

Skill Decay and the Outsourcing of Cognitive Effort

If you are more skilled than the AI in a particular field, then yes, you can confidently judge the quality of its work and guide it towards better outcomes. But unless you continue exercising those skills yourself, this coaching role will decay your own skills and judgment. Your skills will trend downwards over time, and as they decay you will experience greater frustration the next time you try to use them. That frustration makes outsourcing tasks to AI more tempting, which further accelerates the decay of your hard-earned skills. Using AI for your profession forms a feedback loop: your skills decay while its skills improve, helping the AI provider make you ever more dependent on it.
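To make the loop concrete, here's a toy simulation of it. Every rate in it is a number I made up for illustration; the point is the shape of the curve, not the figures.

```python
# A toy model of the skill-decay feedback loop described above.
# Every rate here is invented for illustration; only the shape matters.

def remaining_skill(weeks: int, outsourcing_habit: float) -> float:
    """Simulate skill over time when frustration pushes you to delegate."""
    skill = 1.0        # 1.0 = fully sharp
    frustration = 0.0  # grows as skill decays
    for _ in range(weeks):
        # The rustier you are, the more tempting it is to hand work to AI.
        if outsourcing_habit + frustration > 0.5:
            skill *= 0.99                    # unused skills decay ~1%/week
        else:
            skill = min(1.0, skill * 1.005)  # practice keeps you sharp
        frustration = 1.0 - skill
    return skill

for habit in (0.2, 0.4, 0.6):
    print(f"habit={habit}: skill after 3 years = {remaining_skill(156, habit):.2f}")
```

Below some threshold of delegation your skill holds; above it, the frustration term keeps you over the threshold forever and the decay never stops. That ratchet is the whole problem.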

Yes, teaching people helps you develop your own skills, but not without continued practice; teaching in isolation is not enough. And wrangling an AI isn't really teaching anyway, and it's certainly not teaching people.

Skill Collapse and the End of Capability

If AI requires experts in a field to create the training data that gives it its capability, but using AI causes people's skills to decay and robs that field of the economics that created those experts, then eventually progress in that discipline stops. AI destroys the resources required for its own creation and improvement. If we stop hiring junior software engineers because we're outsourcing their work to AI, we will never get new senior software engineers. Who's going to write all of the open source code for AI to train on? Won't someone think of the AI children?!

Of course, nothing about this is sustainable! Imagine we keep using AI for coding for the next 10, 15, 20 years; imagine we get rid of all the software engineering jobs. With continued training, these models will increasingly be trained on AI output, despite efforts to filter it out of the training data. Then the models collapse: they hallucinate more and more, and we become less and less able to detect it, because we lack the expertise to spot it. Now we have no models and no software engineers. AI might cause not just model collapse, but a total skill collapse.
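Model collapse is easy to demonstrate in miniature. Here's a sketch of the mechanism (my own toy example, not anyone's production pipeline): fit a Gaussian to some data, sample from the fit, refit on those samples, and repeat. The variance, standing in for the model's diversity, collapses towards zero.

```python
# Toy demonstration of model collapse: each "generation" is a Gaussian
# fitted to samples drawn from the previous generation's model.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0                # generation 0: fitted to "human" data
for generation in range(1, 101):
    synthetic = [random.gauss(mu, sigma) for _ in range(10)]  # AI output
    mu = statistics.fmean(synthetic)      # refit on purely synthetic data
    sigma = statistics.stdev(synthetic)
    if generation % 20 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.4f}")
```

Real training pipelines are vastly more complicated, but the direction of travel is the same: train on your own output for long enough and the tails of the distribution, the rare and interesting stuff, disappear first.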

But the really worrying thing is that we don't actually need model collapse for AI to cause a total skill collapse. AI could cause a global skill shortage in just a few years if we continue using it like we are now: pulling up the ladder behind us, robbing industries of new talent, and shrinking the pool of active experts in every field. Then the AI industry collapses under its utterly shoddy economics anyway. Great time to be one of the few remaining active experts; probably a terrible time to be anyone else in society.

Software Engineering Is a Team Sport

Software doesn't exist in a vacuum. It can't be divorced from the people actively working on it. The software and the people form a symbiotic, emergent system: the code changes the behaviour of the software engineers, and the software engineers change the code and its behaviour. Every line of code is a liability: a potential bug, a potential source of confusion, potentially just wrong. The asset is the understanding that forms around each line of code, the conversations between teammates, the time spent clarifying what the business or the user really wanted. Any code that nobody on the team wrote is legacy code. Usually, the best thing a software engineer can do is delete code.

AI is a tool that can only produce software liabilities; it is incapable of producing the asset itself. People who haven't written software on a team don't understand this at all. Not all people who have written software on a team understand it either (especially the rockstar-arsehole engineers). The rest of the business thinks that the code itself is the asset, and often that more code is better. Heck, I've even had consultants ask me to report the number of lines of code in a software system as part of investor due diligence, who then acted like they'd discovered some kind of "gotcha" when I reported a number lower than they were expecting. The code is not the asset, typing was never the bottleneck, and AI makes codebases worse.

AI and the Case of the Missing Gross Profit, Net Profit, or Any Fucking Profit at All for That Matter

These things don't fucking make money. Everyone using AI right now is relying on it being heavily subsidized by Anthropic, OpenAI, VCs, and an enormous, utterly gargantuan amount of debt. At this point I've read nearly 100,000 words analyzing the financials behind the AI bubble. I strongly recommend The AI Data Centre Financial Crisis by Ed Zitron, who explains this much better than I can.

Let's start by meeting the cast of this god-awful tragedy:

  • The landlords: They purchased the land upon which these AI data centres will be built.
  • The builders: They are contracted to build the facilities and power stations for the colocation companies.
  • The colocation companies: They lease the land, build on it, and provide what's called a "powered shell": a facility with power, physical security, and internet connections. Some of these companies are failed crypto startups clinging to life.
  • The AI compute providers: They rent out GPU access to AI labs and AI-curious hedge funds.
  • The AI labs: They train and provide the models.
  • The AI service providers: They built their businesses on using and repackaging the AI labs' APIs.
    • Played by: Cursor, Windsurf, CodeRabbit, and literally hundreds more

These companies are building these AI data centres with close to $1 trillion of debt. Each of them has raised enormous and unprecedented amounts of debt in addition to raising capital through equity. Each poses a serious risk to the companies above and below it in the AI supply chain, and all of them together have taken on this $1 trillion of debt for an industry that has made, allegedly, $37 billion in revenue and $0 in profit.
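To put that mismatch in perspective, here's the back-of-envelope interest arithmetic. The 6% blended rate is my assumption (terms in this private credit market are opaque, and probably worse):

```python
# Back-of-envelope: can the industry's revenue even cover the interest?
# Debt and revenue figures are from above; the 6% rate is an assumption.
debt = 1_000_000_000_000       # ~$1 trillion of AI data centre debt
revenue = 37_000_000_000       # alleged industry-wide revenue ($37 billion)
interest_rate = 0.06           # assumed blended annual rate

annual_interest = debt * interest_rate
print(f"Annual interest alone: ${annual_interest / 1e9:.0f} billion")
print(f"Interest as a share of ALL revenue: {annual_interest / revenue:.0%}")
# ~$60 billion of interest against $37 billion of revenue, before a single
# operating cost is paid, in an industry with $0 of profit anywhere.
```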

Threats to this enormous tower of debt include:

  • Fluctuations in energy prices, e.g. from geopolitical conflict
  • Insurance: data centres this big have literally never been insured before
  • Failures in energy storage and transmission
  • Bad maths: they've taken on debt assuming facilities will be built faster than is physically possible, and every construction delay pushes revenue further out and risks insolvency
  • Bad business: many of these companies somehow ignore the fact that NVIDIA keeps releasing new chips that require new infrastructure, like the 1 MW Kyber racks. These new racks will obsolete all of the AI data centres currently being built around the Vera Rubin racks before they even go online. Which is every AI data centre under construction.

I don't know if anyone will be doing much AI coding by 2030.

AI and the Void of Accountability

If you can't judge the quality of AI output, how can you be held accountable for its results? Are we going to use Claude to code flight control systems? Who's accountable if it causes a plane crash? Anthropic? The Product Builder/Prompt Engineer using it to generate a flight control system they were incapable of producing themselves?

The fact is that the companies behind these models are already doing everything they can to avoid accountability for the impact of their AI systems. With chatbots helping children commit suicide, playing a role in or even encouraging multiple deaths, fueling AI psychosis, amplifying disinformation campaigns, and being used to sexually harass people including children, these companies will happily continue training their systems on our data while avoiding accountability for the resultant harms.

Environmental Devastation

I'm not sure I can even be bothered writing this section. AI companies are planning/hoping/naively expecting to bring online 10 gigawatts of new load each year. 10 GW is enough to power every home in London. Adding another London's worth of power generation every single year is not going to be good for the environment. This should be really fucking obvious!
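For the sceptics, here's the back-of-envelope behind that comparison, using rough assumed figures: about 3,600 kWh per UK household per year, and about 3.5 million households in Greater London.

```python
# Sanity check: how many Londons' worth of homes is 10 GW of load?
# Household consumption and count are rough assumed figures.
KWH_PER_HOUSEHOLD_PER_YEAR = 3_600   # typical UK household (assumed)
LONDON_HOUSEHOLDS = 3_500_000        # Greater London (rough figure)
HOURS_PER_YEAR = 8_766

avg_household_kw = KWH_PER_HOUSEHOLD_PER_YEAR / HOURS_PER_YEAR  # ~0.41 kW
london_gw = LONDON_HOUSEHOLDS * avg_household_kw / 1_000_000    # ~1.4 GW
print(f"Every home in London draws ~{london_gw:.1f} GW on average")
print(f"10 GW covers that {10 / london_gw:.0f} times over")
```

If anything, "another London every year" is an understatement: at average household draw, 10 GW is several Londons' worth of homes.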

An Irrational Economy

  • The AI labs rely on AI service companies spending money on their APIs
  • The AI service companies rely on other companies purchasing their services
  • Those other companies purchase AI services to reduce their headcount
  • And those same companies rely on consumers purchasing their products

If AI revenue optimism/delusion is predicated on using it to broadly reduce headcount across industries, how would consumers be able to afford their products? Does everyone really think the layoffs will only hit other companies' customers, and that their own customers will somehow be spared? In a sense, it's almost poetic: extracting all of the wealth and consolidating it to the point that there is no liquid money and no further economic exchange really is the ultimate neo-conservative own-goal. Good thing there's no way AI could really put everyone out of a job, except by causing a global financial crisis greater than any we've ever seen.
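Here's that own-goal as a toy circular-flow model. All of the numbers are invented; the only point is the direction of travel.

```python
# Toy circular-flow model: firms' revenue is consumer spending, and
# consumer spending comes from the wages those same firms pay out.
# All parameters are invented; only the direction of travel matters.
wages = 100.0                   # total wages paid across the economy
for year in range(1, 6):
    spending = wages * 0.9      # consumers spend most of what they earn
    revenue = spending          # firms' revenue IS consumer spending
    wages = revenue * 0.8       # "AI savings": cut payroll 20% a year
    print(f"year {year}: economy-wide revenue = {revenue:.1f}")
```

Every firm's cut looks rational in isolation; collectively they're sawing off the branch the whole economy sits on.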

Let's do some maths 🤓 There are 86 billion neurons in the human brain forming 1 quadrillion (10**15) synapses. At 64 bytes per parameter (synapse), that's 64 * (10**15) = 64,000,000,000,000,000 bytes of memory to run a GPT with an equivalent number of parameters. That is 59,604,645 GB, or 58,208 TB, of RAM for a single instance of a human-brain equivalent! Serving that from NVIDIA Blackwell B200s (192 GB of memory each) would take around 310,000 GPUs, which is roughly 4,300 GB200 NVL72 racks at about $3 million each: call it $13 billion of GPUs drawing over half a gigawatt, per concurrent human-brain equivalent. And this is supposed to replace someone earning six figures?!
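And here's the same maths as runnable code, so you don't have to trust my arithmetic. The hardware figures are rough spec-sheet and list-price numbers, not quotes.

```python
# The brain-scale GPT back-of-envelope, as runnable arithmetic.
# Hardware figures are rough spec-sheet / list-price numbers.
SYNAPSES = 10**15            # ~1 quadrillion synapses (parameters)
BYTES_PER_PARAM = 64         # generous per-parameter memory budget
B200_MEMORY_GB = 192         # HBM per NVIDIA Blackwell B200
GPUS_PER_NVL72 = 72          # GPUs in one GB200 NVL72 rack
RACK_PRICE_USD = 3_000_000   # ~$3M per NVL72 (rough list price)
RACK_POWER_KW = 120          # ~120 kW per NVL72 (spec-sheet figure)

total_gb = SYNAPSES * BYTES_PER_PARAM / 2**30   # ~59,604,645 GB
gpus = total_gb / B200_MEMORY_GB                # ~310,441 B200s
racks = gpus / GPUS_PER_NVL72                   # ~4,312 NVL72 racks
print(f"{total_gb:,.0f} GB of model -> {gpus:,.0f} B200s -> {racks:,.0f} racks")
print(f"~${racks * RACK_PRICE_USD / 1e9:.1f}B of GPUs, "
      f"~{racks * RACK_POWER_KW / 1e6:.2f} GW of rack power")
```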

If only someone had thought about this from first principles 🤦‍♂️

Conclusion: I Don't Want to Use AI

The thing is that even if I was wrong (I'm not) and AI was somehow helpful for software engineering (it isn't), I still wouldn't want to use it.

Might I be wrong? I don't know, maybe? Might I be left behind in my industry? I guess, though I don't think it's likely. But honestly, I would rather be wrong and left behind than spend my days prompting an AI to do my chosen craft, something I love.
