The Use of Intuition in Software Engineering

or How I Learned to Stop Worrying and Love Generative AI

by Benjamin Harnett

A photo of Slim Pickens playing Major Kong as he rides the atom bomb down at the finale of Dr. Strangelove with his head replaced by the Open AI logo.

Yee Haw!

WOW, is the human mind (the only one with which I am intimately acquainted) ever a marvel. Embarrassingly, I have only just, at the age of 43, received my first driver’s license. This, despite, over the preceding 27 some-odd years, having had about ten scattered hours total of practice behind the wheel. In the week leading up to my driving test, my youngest brother, who had just gone through the same experience, decided I needed some focused practice, and we did three hour-long sessions on successive nights turning, parking, and driving around town.

That Friday, my wife, Toni, drove me to the DMV test location at the fairgrounds, and about 20 minutes later, after making a sort of figure-eight loop through the heart of weekday mid-morning traffic, parallel parking, and doing a k-turn, I had passed. (You don’t find out until the end of the day online now, and I spent the following hours worried—but Toni knew just from looking at the tester as we exited the car that I had nothing to worry about.)

I am now a licensed driver in the state of New York. This blog is not about my being a late bloomer. This blog is about what allowed me to go from anxious novice to confident, safe driver in a fairly short span of time.

Well, Ben, you’re exceptional, you might say. No! I am not.

Look around you. That old man drives. That freckle-faced kid. Your dipshit cousin Steve. It’s okay, everyone agrees he is truly stupid. But even Steve drives. And he doesn’t just drive: he, like most other people, can drive just about any car after a rudimentary check of the controls. Years ago, these were all people driving stick shifts, responding to subtle changes in conditions by shifting up or down a gear while guiding a multi-ton metal contraption on wheels at high speed over pitted roads, around blind turns, among numerous other humans of all sorts, and all their distractions, doing the same thing.

Meanwhile, we are decades into the quest for driverless cars, where even the most sophisticated require constant human intervention while tootling around at low speeds through carefully mapped city streets with the aid of radar technology, and yet still wind up blocking traffic or trapped in a parking lot, endlessly circling each other and honking in the middle of the night.

And despite immense investment, waived or ignored safety regulations, and continually increasing compute power, we find ourselves stuck, here, at the end of the line, always a couple years out, always on the cusp of the final breakthrough, never making it. Every day, we click into the captcha window “this is a bicycle, this is a traffic light, this is a crossing”—maybe that last click is the click that will do it, unlock the final pattern match, be the thing that sets us free.

IMAGINE your hands and feet were wheels, imagine you turned the same way your body does, without a thought, simply fulfilling your will—you want the glass of water on the table, you get the glass of water. You don’t think, okay, now I stand, fourteen steps, move the right leg, bend a little at the knee, fall forward into your step and then swing the left—no, you just get it. The driverless car has this advantage, it embodies its task, but even with the task so simplified, augmented with radar and computer vision, nothing to do but command and the car acts, it can’t fairly beat a fifteen-year-old kid.

How embarrassing.

Meanwhile, you are moving your body, you are rehearsing a conversation you had with your boss, trying to figure out how you could have arranged it better, you’re also thinking about the video you watched, the one about doing your own crown molding, and trying to imagine what it will do to your dining room, and you are making a mental list of the things you need to get, all while feeling the way your body will feel on the ladder and wondering if you need a taller one, and your wife asks you to run to the store so you grab the keys and you back out into the street and, turning, you hit the brakes as your neighbor’s cat dashes out behind you, and you think about running for the town council. And when you get back it’s time to help Johnny with his fractions, and you imagine clever ways to explain them and dream about doing better than your parents because they were hopeless at conveying what a third of a half is, but you see it’s so simple, and you fish for coins in your pocket to try to show him, because he’s at the age that is very money motivated (but isn’t every age?).

And so that’s general intelligence.

It’s the same thing the cat that slinks from yard to yard has, and even, on some debatable sliding scale, a mouse has, a fly, and so on down.

It is not something a computer, or even a network of computers, has. Computers are programmed, by dint of our general intelligence, to solve problems. Perform actions. We may build out an algorithm instruction by instruction, do this, do that, in this condition do the other thing, keep track of this value, and these instructions may get more and more complex. We may program the computer to review many different kinds of inputs and adjust its algorithm accordingly, until, encountering an input like but not the same as the other inputs, the computer spits out the correct answer. This is called machine learning.
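To make that contrast concrete, here is a toy sketch in Python; the spam example, the numbers, and the halfway-point rule are all invented for illustration and stand in for no real system:

```python
# A hand-built algorithm versus a "learned" one, in miniature.
# Everything here is invented for illustration.

# Hand-built: the programmer supplies the rule explicitly.
def is_spam_by_rule(exclamation_count: int) -> bool:
    # "In this condition do the other thing": a fixed, human-chosen threshold.
    return exclamation_count > 3

# Machine learning, in miniature: the rule is derived from labeled examples.
def learn_threshold(examples: list[tuple[int, bool]]) -> float:
    spam = [count for count, label in examples if label]
    ham = [count for count, label in examples if not label]
    # Place the boundary halfway between the two groups' averages.
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

examples = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]
threshold = learn_threshold(examples)

# An input like, but not the same as, the training examples still gets an answer.
print(is_spam_by_rule(4))  # True: the fixed rule fires above 3
print(4 > threshold)       # False: the learned boundary landed at 4.0
```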

Machine learning has achieved some incredible feats of human-like intelligence, and on some very specific tasks it has surpassed human capabilities, while the AI technologies gathered under the banner of Generative AI have caught the attention of the world. The technology is so impressive that its proponents are crowing that achieving the holy grail of artificial general intelligence (AGI) is but a few tantalizing and dangerous years away.

But is it?

My intuition says no. In fact, it says that no computational method will ever achieve the kind of general all-purpose intelligence that characterizes the living spirit that has allowed us to achieve so much.

There isn’t going to be a robot that can serve as your butler and driver, then teach your kids fractions, walk the dog afterwards, and put a new coat of paint on your house. I can back this up with reason, too. But it’s this thing, this intuition, that is the crux of my assertion.

Intuition.

FOR the past thirteen years I’ve worked as a professional software developer. Where I work, I am the highest ranking individual contributor in Data Engineering, its sole Principal Engineer—people look to me to help guide our technical strategy and solve difficult problems. Of course I have a fair amount of practical knowledge and experience. I’ve studied the theory and practice of computing and the writing of software, and been trained in various development and project-management methodologies.

But it’s the intuition about my role and about the building and maintenance of software systems and the proper modeling and monitoring of data that has allowed me to be successful.

And in truth, all of my colleagues have this intuition about their various focuses, to a greater or lesser degree.

What is intuition but the knowing of something without consciously understanding the why of it? In software engineering it is a feel for the right approach to a problem. In construction it might be reaching instantly for the appropriate tool, or, when climbing, finding the best hold.

There are two books by Tracy Kidder which have always struck me as some of the truest descriptions of what it means to be, and how one is, a builder of things, both individually and as a team. They might not demonstrate the latest, or even healthy, management techniques. One is directly related to computer technology, and the other tangentially so: his House is the story of the building of a house, from the client to the architect to the carpentry crew. At the end, the client, a lawyer, exercises an option buried in the contract to deny some payment that amounts to part of the builders’ profit, despite a satisfactory job. How could you do this when we did the best anyone could? the builder asks. The lawyer’s response: you are builders, that’s what you do; I’m a lawyer, this is what I do. (It’s important to remember what role people occupy in an endeavor in order to understand their actions.)

Instructive. But in The Soul of a New Machine Kidder expertly tells the story of an engineer, Tom West, leading a team at Data General in a race to build a new 32-bit minicomputer. The companies involved, the technologies, even the product-line have all vanished, but there is a moment that for me portrays the role of intuition in engineering more clearly than I’ve seen anywhere.

West instructs the team to build the 32-bit capability without using a mode bit—the details are unimportant, but by setting this limitation on the implementation, he achieves a series of goals at the same time: navigating internal political tensions around different teams working on 32-bit computers at the same company, giving the marketing team the ability to sell it as easily supporting existing applications, and, most importantly, motivating the engineering team with a novel challenge. The decision goes against the most elegant and practical approaches, but it is successful—it’s never explained as anything more than a gut feeling, and, having had the same myself, I find it easy to believe it was one.

It’s fashionable in tech circles to refer to this ability as “pattern matching” and psychologists, in talking about intuition, often explain it in that fashion. But as regards the tech folks, it’s a gross misapplication of the process of machine learning, which itself is only a computational guess at the mind’s function, so it is a circular definition, while for psychologists, it’s an analogy that is more or less a handwavy application of the epistemological understanding of the Ancient Greeks. Yes—intuition could be unconscious inductive reasoning, as Aristotle might have argued.

You can’t teach intuition. We know it comes from experience. Although we often try to model certain approaches and behaviors, ultimately the best intuition comes from doing the thing yourself. It’s easy to see why pattern matching or recognition is an appealing explanation: you see enough examples, and your brain, through some unexplained process (maybe it’s connections between neurons; something like that seems to happen, at least), builds an abstract model, and then the model is fed new input and quickly pops out the answer—no conscious step 1, step 2, step 3, just bing.

But there are problems. Consider language: babies don’t hear enough examples to generate the complex grammar of the speech they quickly pick up once they get to talking. It was enough to drive linguists like Chomsky to seek, somewhat fruitlessly, a kind of language organ prebuilt into the brain. Others endeavor to prove that maybe babies do have enough examples. But surely they don’t have, as examples, even the minutest fraction of a fraction of what Large Language Models are fed (the entire literary, technical, and shitposting output of all recorded human history), from which emerges the plausible if somewhat irritatingly monotonous and obsequious speech-product being hyped as a heartbeat away from superhuman intellect.

INTERROGATE your own experience of intuition. Are you really just extrapolating from something you’ve seen before? Even a parade of things you’ve seen before? When you write a line of poetry, if you are a poet, or when you put a bleb of white paint there on the canvas, and suddenly the vase becomes a vase. When you can see exactly what rug will tie the room together. You know the rightness of a thing, but not the why. So, yes, a moment of recognition of the form of the good. Making Plato happy. Except there is no algorithmic route to the good. Pattern matching, meanwhile, is a clearly mathematical, algorithmic exercise. It can be explained. Creation, of art, for example, or of a great mathematical discovery (you can hear Roger Penrose describe his moments of mathematical intuition in The Emperor’s New Mind, or theoretical physicists talking about their greatest leaps, Feynman’s wobbly plate, for example) is something different from recognizing a pattern. It so far defies explanation.

If you were always conceiving from the boundaries of an existing pattern, how could intuition guide the creation of something new?

There is a bit more rigor behind this, the idea of the non-computability of human (or other) intelligence, and it has an elegant connection to one of the fathers of computer science: Alan Turing (of Turing-test fame and the Enigma machine). Turing theorized a universal computer, called a Turing machine. It takes input from a tape, and has a set of instructions which tell it what to do when it encounters any symbol on the tape—move left, move right, write to the tape, and so on. It’s important to know that absolutely anything you can do with a computer can be represented as a Turing machine. So you can have a calculator, or represent a game of Mario, or the actions of a chess computer: all of this can be reproduced by encoding the right program into a Turing machine, and so can even the most sophisticated Generative AI model. Parallel processing, randomized algorithms, all of it. ChatGPT is a Turing machine.
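To make the abstraction concrete, here is a toy Turing machine simulator in a few lines of Python. The machine and its rule table are invented for illustration (it adds one to a binary number written on the tape); it is not drawn from Turing’s paper or from any actual AI system.

```python
# A minimal Turing machine: a tape, a head, a state, and a table of rules.
def run(tape, rules, state="carry", blank="_", halt="done", max_steps=1000):
    tape = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = len(tape) - 1              # start on the rightmost digit
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]  # look up the instruction
        tape[head] = write                           # write to the tape
        head += 1 if move == "R" else -1             # move right or left
    return "".join(sym for _, sym in sorted(tape.items())).strip(blank)

# Rule table: (state, symbol read) -> (symbol to write, move, next state).
# This particular machine adds 1 to a binary number.
rules = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 plus a carry is 0, keep carrying left
    ("carry", "0"): ("1", "L", "done"),   # 0 plus a carry is 1, finished
    ("carry", "_"): ("1", "L", "done"),   # ran off the left edge: a new digit
}

print(run("1011", rules))  # "1100": binary 11 + 1 = 12
```

Everything a modern computer does, ChatGPT included, reduces to some vastly larger version of that rule table and that loop.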

Let’s pause here. Executives of AI companies, their scientists, and friendly theorists have been talking about the inevitable arrival of superhuman artificial intelligence capabilities. These technologies are both a danger and a panacea—the intelligence could decide to eliminate mankind (as in countless science fiction stories), and at the same time, they argue, as Bill Gates does, that this superintelligence will be so smart as to instantly solve our most critical problems, such as global warming and disease. The fact that we are so close to this attainment—on the very verge—means we should redirect all our resources into hardware and software development, at the cost of exacerbating all the other problems, in the hope that AI will solve them. Companies like OpenAI and Anthropic claim their systems will deliver this, AGI, which will start at human scale and then grow exponentially beyond our understanding.

Around the corner are artificial intelligence agents, far superior to the effectively toy-like “I found some answers on the internet for you” of Siri and Echo, which will serve as virtual personal assistants. Robots will care for the elderly, and Microsoft Copilot will write all your code. Yes, jobs will be destroyed, but human productivity will be multiplied beyond all reckoning. Given the rapid developments in information technology in the past decades, the relentless increase in computing power, cost efficiencies, and so on, it’s hard not to believe them.

Hard not to believe, but it can be done. There are doubters. And don’t we have to admit that Elon Musk’s earth-shattering robot was a man in a rubber suit; that the latest and greatest virtual agents are janky, barely functioning above the level of a confused, hard-of-hearing boomer trying to navigate the latest Windows update; that all of the amazing emergent capacities and test-passing abilities of LLMs are just reproductions of training data; that driverless cars, the immersive metaverse, massive open online courses, cryptocurrency, the Segway (a cheap dig, I know), Theranos, and Juicero have all failed to deliver the earth-shattering revolutionary change, the utopian dreams, they were sold as?

You’ll see these kinds of arguments alongside some pretty dire financial outlooks, bubble-like behavior, misleading statements, moving goalposts, cooked metrics (pivot to video anyone?) and invalid benchmarks in AI-skeptical works like Ed Zitron’s relentless Substack.

But I don’t need those arguments. I don’t even need the argument of Iris van Rooij and her colleagues, which I find very convincing, that even if human cognition is “computable,” achieving AGI is an intractable problem: that is, with all the computing resources in the world, and all that are conceivable, it will still be unsolved. I don’t need these arguments because every technology we currently have or can even conceive of—even quantum computing—that could deliver artificial intelligence is a Turing machine.

SO it comes back to the simple question: what is the theory by which a Turing machine can achieve human intelligence, let alone “superintelligence”—and the answer is not much of one. As best I can understand it, the idea is that the human mind is a pattern matcher, and that problem solving is pattern matching. That artificial neural networks in various configurations, with clever translations and wiring and so on (all of it still falling within the domain of Turing computing), can be fed all the world’s known knowledge, and that this will create models in the net that represent the fundamental units of the mind, which suddenly, and magically, gain not just reasoning but the kind of intuitive knowing we consider to be the hallmark of intelligence.
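To be concrete about what an artificial neural network is doing at the bottom, here is a toy sketch of a single forward pass in Python. The layer sizes, weights, and input are invented for illustration; the only point is that it all reduces to ordinary arithmetic, squarely inside what a Turing machine can compute.

```python
import math

# One neuron: a weighted sum of its inputs, squashed by a sigmoid nonlinearity.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A tiny two-layer network with made-up weights. Real models differ in scale
# and wiring, not in the kind of operation being performed.
def forward(x):
    hidden = [
        neuron(x, [0.5, -1.2], 0.1),
        neuron(x, [1.4, 0.3], -0.4),
    ]
    return neuron(hidden, [2.0, -1.5], 0.2)

print(forward([0.8, 0.2]))  # a number between 0 and 1: the network's "answer"
```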

(Notice we are not even touching the third-rail question of intelligence: sentience, consciousness, the being-for-itself of Sartre, which is assumed in all these arguments to be an emergent property of scale or complexity, with no justification.)

Unfortunately I think there’s a pretty fundamental problem. It’s impossible for a Turing machine to perfectly mirror intuition without invalidating something called Gödel’s incompleteness theorem. It is a very subtle, revolutionary theorem, and the nuances are very hard to get right. In fact I’m sure I won’t quite nail it here.

Still, I’ll give a rough sketch of how you get here from there. First, you have to imagine that one characteristic of artificial general intelligence is that it should be able to mirror humans’ achievements in mathematics—that is, discovering (or creating) a consistent set of mathematical laws. And you would also have to say that, in order to generate true and consistent laws, an intelligence would need to know that each one is true. Otherwise it couldn’t guarantee they were consistent.

The only way for the computer to know whether something is true is to prove that it is true from a set of axioms. That is, to run a Turing machine, loaded with some assumptions, on the question. But Gödel proved that a consistent set of mathematical laws (one rich enough to express arithmetic) cannot be proved consistent using only those laws, and the very laws that govern a Turing machine are such a set, hence there is at least one piece of mathematics that Turing machines cannot produce. This argument generalizes to any formal system, and the end result is a solid argument that something is going on in the human mind that cannot be computed.
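For readers who want the theorem itself, here is the piece of Gödel being leaned on, stated loosely; the careful hypotheses, that the system be effectively axiomatized and strong enough to express basic arithmetic, are where all the subtlety lives.

```latex
% A loose statement of Gödel's second incompleteness theorem.
% F is any consistent, effectively axiomatized formal system strong enough
% to express basic arithmetic; Con(F) is the arithmetic sentence "F is consistent."
\[
  \text{if } F \text{ is consistent, then } F \nvdash \mathrm{Con}(F).
\]
% That is: F itself cannot prove Con(F), even though, on the assumption
% that F is consistent, Con(F) is true.
```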

But intuitively, this makes a lot of sense. Why can we still quite easily identify artificially generated text despite all the immense resources and computing power thrown at it? I think the limitations of machine learning became visually apparent when images from generative models like DALL-E, and the generative adversarial networks (GANs) before them, began circulating: before, with computer vision, it was pretty impressive to see models identifying cats and stop signs and people, but when we turned around and asked the models to reveal what they had encoded to make those determinations, we saw the freakish, unreal, and upsetting truth: people with far too many teeth, hands with fifteen or twenty sausage fingers, or all knuckles, or a bouquet of thumbs. Obviously the results were impressive, but they also demonstrated that we were a lot farther from a true understanding of the incredible variety of concepts even some of the simplest living brains hold.

Meanwhile, LLM-generated texts and code continue to deliver the literary and software equivalents of the GANs’ teethful mouths or nightmare eyes, rendered less unheimlich (eerie) and more apparently serviceable by the models’ adoption of the tricks of a carnival psychic: weasel words, impressive-looking citations, a general willingness to accept reduced quality in exchange, maybe, for more quantity (of what, exactly?), and people’s desperate need to believe. And by the fact that there are sufficient cases where, well, likely slop is good enough to do the job.

WHERE does this all leave us?

First and foremost: it’s plain that artificial general intelligence is impossible under all current and future computer technologies, absent the discovery of some new physics or metaphysics. But if AGI is impossible, then what is possible? Certainly step-wise improvement in various narrow tasks, for which Generative AI joins a crowded field of AI that is contributing to improvements in our ability to extend our own intelligence. So it’s a tool, like the proper hammer for a task, to be used with full understanding of its limitations. But the limitations of LLMs are big and un-remediable: an LLM cannot generate true statements, only statements that look true. It cannot be creative outside the realm of what it has been trained upon.

Back to my world of software engineering, then—studies show that coders using Generative AI show declines in critical thinking. By not doing the work themselves, people who rely on it will fail to train their intuitions, or will spend their time tinkering in a formal system that is neither consistent nor new.

So here’s a warning for everyone who thinks they should be all in: Generative AI isn’t the parent of innovation, but the death of it.




If you liked this impassioned rant, you might enjoy my novel The Happy Valley, or my short-story collection Gigantic: Stories from the End of the World, or my poetry collection Animal, Vegetable, Mineral. Disclosure: I will make money if you order through these links or purchase books that I link to earlier on!

© 2025 Benjamin Harnett etc. etc.