AI Can’t Teach AI New Tricks

A lack of human prose leads to what’s called ‘model collapse’ and pure gibberish.

WSJ, Andy Kessler, Oct. 20, 2024 

Artificial intelligence is our economy’s next productivity power pack. It better be—OpenAI just raised $6.6 billion, the largest venture-capital funding of all time, at a $157 billion valuation. Oh, and the company will lose $5 billion this year and is projecting $44 billion in losses through 2029.

We are bombarded with breathless press releases. Anthropic CEO Dario Amodei predicts that “powerful AI” will surpass human intelligence in many areas by 2026. OpenAI claims its latest models are “designed to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems.” Thinking? Reasoning? Will AI become humanlike? Conscious?

Not so fast. Large language models are impressive, but they are still statistical models mimicking human thinking. You can’t just throw cheaper chips at the problem and expect growth. More than Moore’s Law (chips doubling in density every 18 months) is in play. I hate to be the one to throw it, but here’s some cold water on the AI hype cycle:

Moravec’s paradox: Babies are smarter than AI. In 1988 robotics researcher Hans Moravec noted that “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Most innate skills are built into our DNA, and many of them are unconscious.

Mr. Moravec went on: “Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it.” DNA is the carrier of life’s success signals. Yes, since we were fish.

AI has a long way to go. Last week, Apple AI researchers seemed to agree, noting that “current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data.”

Summing up the paradox, Harvard psychology professor Steven Pinker told me last week, “Things that are easy for us, like manipulating the 3-D world, are hard for AI. We’re not going to get an AI plumber anytime soon. Things that are hard for us, like diagnosing disease or writing code, may be easy for AI.”

Linguistic apocalypse paradox: As I’ve noted before, AI smarts come from human logic embedded between words and sentences. Large language models need human words as input to become more advanced. But some researchers believe we’ll run out of written words to train models sometime between 2026 and 2032.

Remember, you can’t train AI models on AI-generated prose. That leads to what’s known as model collapse. Output becomes gibberish. Think of it as inbreeding—AI needs new human input to provide fresh perspective. One study suggests that even 1% synthetic data is enough to spoil model training. More humans are needed—which is precisely why OpenAI cut a content deal with Dow Jones, and another last week with Hearst for more than 60 magazines and newspapers.
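
For the technically inclined, here is a minimal sketch of that inbreeding dynamic (a toy simulation, not how real models are trained): each generation "learns" only from samples of the previous generation's output, and variety can only shrink.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data -- 1,000 distinct values from a rich distribution.
data = rng.normal(size=1_000)

for gen in range(1, 21):
    # Each generation "trains" only on samples of the previous generation's
    # output (bootstrap resampling stands in for fit-then-sample).
    data = rng.choice(data, size=1_000, replace=True)
    if gen % 5 == 0:
        print(f"gen {gen:2d}: {np.unique(data).size} distinct values remain")
```

Run it and the count of distinct values falls generation after generation; variety that is lost is never regained, which is the statistical core of model collapse.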

Current models train on 30 trillion human words. To be Moore’s Law-like, does this scale 1000 times over a decade to 30 quadrillion tokens? Are there even that many words written? Writers, you better get crackin’.
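
The back-of-envelope math, using the column's own figures: 1,000-fold growth in a decade means roughly doubling every year, an even faster clip than the 18-month cadence of Moore's Law.

```python
# What annual growth turns 30 trillion tokens into 30 quadrillion
# (a 1,000x increase) in a decade?
current_tokens = 30e12   # 30 trillion (the column's figure)
target_tokens = 30e15    # 30 quadrillion
years = 10

growth = (target_tokens / current_tokens) ** (1 / years)
print(f"required growth: {growth:.2f}x per year")  # ~2.00x: doubling yearly

# Compare with Moore's Law-style doubling every 18 months:
print(f"18-month doubling over a decade: {2 ** (years / 1.5):.0f}x")  # ~101x
```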

Scaling paradox: Early indications suggest large language models may follow so-called power-law curves. Google researcher Dagang Wei thinks that “increasing model size, dataset size, or computation can lead to significant performance boosts, but with diminishing returns as you scale up.” Yes, large language models could hit a wall.
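
To see why diminishing returns bite, consider a toy power-law curve of the form loss = a × scale^(-alpha); the constants below are invented for illustration, not fitted to any real model.

```python
# Illustrative power law: loss = a * scale**(-alpha). Each 10x of scale
# buys a smaller absolute improvement than the last. Constants are
# invented for illustration only.
a, alpha = 10.0, 0.1

def loss(scale: float) -> float:
    return a * scale ** -alpha

prev = loss(1e6)
for scale in (1e7, 1e8, 1e9, 1e10):
    cur = loss(scale)
    print(f"scale {scale:.0e}: loss {cur:.3f}, gain {prev - cur:.3f}")
    prev = cur
```

Each tenfold increase in scale buys a smaller absolute gain than the one before it. That is the wall.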

Spending paradox: Data centers currently have an almost insatiable demand for graphics processing units to power AI training. Nvidia generated $30 billion in revenue last quarter, and expectations are for $177 billion in 2025 and $207 billion in 2026. But venture capitalist David Cahn of Sequoia Capital wonders whether this is sustainable. He thinks the AI industry needs to generate $600 billion in revenue to pay back all the AI infrastructure spending so far. Industry leader OpenAI expects $3.7 billion in revenue this year and $12 billion next year, and forecasts $100 billion, but not until 2029. It could take a decade of growth to justify today’s spending on GPU chips.
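
A quick sanity check on those figures, using only the numbers quoted above:

```python
# Implied growth rates from the revenue figures quoted above.
rev_2024, rev_2025, rev_2029 = 3.7e9, 12e9, 100e9  # dollars

print(f"2024 to 2025: {rev_2025 / rev_2024:.1f}x")        # ~3.2x
cagr = (rev_2029 / rev_2024) ** (1 / 5) - 1               # 2029 is 5 years out
print(f"implied annual growth through 2029: {cagr:.0%}")  # ~93% a year
```

Even at that blistering pace, $100 billion in 2029 would cover only a sixth of Mr. Cahn's $600 billion threshold.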

Goldman Sachs’s head of research wrote a report, “GenAI: Too much spend, too little benefit?” He was being nice with the question mark. Nobel laureate and Massachusetts Institute of Technology economist Daron Acemoglu thinks AI can perform only 5% of jobs and tells Bloomberg, “A lot of money is going to get wasted.” Add to that the cost of power—a ChatGPT query uses nearly 10 times the electricity of a Google search. Microsoft is firing up one of the nuclear reactors at Three Mile Island to accommodate rising power needs. Yikes.

I’m convinced AI will transform our lives for the better, but it isn’t a straight line up.