The era of artificial intelligence is here, and boy, are people freaking out

The Free Press, Marc Andreessen, 11.07.2023

Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it.

First, a short description of what AI is: the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other—it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.
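
To make that description concrete, here is a minimal sketch of an AI model running as an ordinary program: input in, processing in the middle, output out. It assumes the open source Hugging Face transformers library and its small gpt2 model, which stand in here for any modern model.

```python
# A minimal sketch: an AI model is an ordinary program that takes input,
# processes it, and generates output. Assumes `pip install transformers torch`;
# gpt2 is used purely as a small stand-in model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The application of intelligence to medicine could"
outputs = generator(prompt, max_new_tokens=30)  # input -> processing -> output

print(outputs[0]["generated_text"])  # text out, like any other program
```

There is no agency hidden in there: it is a function call that people write, invoke, configure, and can stop.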

A shorter description of what AI isn’t: killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies. An even shorter description of what AI could be: a way to make everything we care about better.

Why AI Can Make Everything We Care About Better

The most validated core conclusion of social science across many decades and thousands of studies is that human intelligence makes a very broad range of life outcomes better. Smarter people have better outcomes in almost every domain of activity: academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision making, understanding others’ perspectives, creative arts, parenting outcomes, and life satisfaction.

Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality. Without the application of intelligence to all these domains, we would all still be living in mud huts, scratching out a meager existence of subsistence farming. Instead we have used our intelligence to raise our standard of living on the order of 10,000× over the last 4,000 years.
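
As a back-of-the-envelope check on that figure (treating both numbers as rough), a 10,000× improvement over 4,000 years implies a surprisingly modest compound annual growth rate:

$$(1 + r)^{4000} = 10{,}000 \quad\Rightarrow\quad r = e^{\ln(10{,}000)/4000} - 1 \approx 0.0023$$

That is roughly 0.23% per year, compounded relentlessly; the conclusion survives even if the 10,000× estimate is off by an order of magnitude in either direction.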

What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence—and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars—much, much better from here.

AI augmentation of human intelligence has already started—AI is already around us in the form of computer control systems of many kinds, is now rapidly escalating with AI large language models like ChatGPT, and will accelerate very quickly from here—if we let it.

In our new era of AI:

  • Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.

  • Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.

  • Every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every caregiver will have the same in their worlds.

  • Every leader of people—CEO, government official, nonprofit president, athletic coach, teacher—will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.

  • Productivity growth throughout the economy will accelerate dramatically, driving economic growth, the creation of new industries and new jobs, and wage growth, resulting in a new era of heightened material prosperity across the planet.

  • Scientific breakthroughs and new technologies and medicines will dramatically expand, as AI helps us further decode the laws of nature and harvest them for our benefit.

  • The creative arts will enter a golden age, as AI-augmented artists, musicians, writers, and filmmakers gain the ability to realize their visions far faster and at greater scale than ever before.

  • I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.

  • In short, anything that people do with their natural intelligence today can be done much better with AI, and we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.

  • And this isn’t just about intelligence! Perhaps the most underestimated quality of AI is how humanizing it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend really does improve the ability to handle adversity. And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer.

The stakes here are high. The opportunities are profound. AI is quite possibly the most important—and best—thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.

The development and proliferation of AI—far from a risk that we should fear—is a moral obligation that we have to ourselves, to our children, and to our future. We should be living in a much better world with AI, and now we can.

So Why the Panic?

In contrast to this positive view, the public conversation about AI is presently shot through with hysterical fear and paranoia.

We hear claims that AI will variously kill us all, ruin our society, take all our jobs, cause crippling inequality, and enable bad people to do awful things.

What explains this divergence in potential outcomes from near utopia to horrifying dystopia? Historically, every new technology that matters, from electric lighting to automobiles to radio to the internet, has sparked a moral panic—a social contagion that convinces people the new technology is going to destroy the world, or society, or both. The fine folks at Pessimists Archive have documented these technology-driven moral panics over the decades; their history makes the pattern vividly clear. It turns out this present panic is not even the first for AI.

Now, it is certainly the case that many new technologies have led to bad outcomes—often the same technologies that have been otherwise enormously beneficial to our welfare. So it’s not that the mere existence of a moral panic means there is nothing to be concerned about. But a moral panic is by its very nature irrational—it takes what may be a legitimate concern and inflates it into a level of hysteria that ironically makes it harder to confront actually serious concerns.

And wow, do we have a full-blown moral panic about AI right now.

This moral panic is already being used as a motivating force by a variety of actors to demand policy action—new AI restrictions, regulations, and laws. These actors, who are making extremely dramatic public statements about the dangers of AI—feeding on and further inflaming moral panic—all present themselves as selfless champions of the public good.

But are they? And are they right or wrong?

The Baptists and Bootleggers of AI

Economists have observed a longstanding pattern in reform movements of this kind. The actors within movements like these fall into two categories—Baptists and Bootleggers—drawing on the historical example of the prohibition of alcohol in the United States in the 1920s:

  • Baptists are the true believer social reformers who legitimately feel—deeply and emotionally, if not rationally—that new restrictions, regulations, and laws are required to prevent societal disaster. For alcohol prohibition, these actors were often literally devout Christians who felt that alcohol was destroying the moral fabric of society. For AI risk, these actors are true believers that AI presents one or another existential risk—strap them to a polygraph, they really mean it.

  • Bootleggers are the self-interested opportunists who stand to profit financially by the imposition of new restrictions, regulations, and laws that insulate them from competitors. For alcohol prohibition, these were the literal bootleggers who made a fortune selling illicit alcohol to Americans when legitimate alcohol sales were banned. For AI risk, these are CEOs who stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new start-up and open source competition—the software version of “too big to fail” banks.

A cynic would suggest that some of the apparent Baptists are also Bootleggers—specifically the ones paid to attack AI by their universities, think tanks, activist groups, and media outlets. If you are paid a salary or receive grants to foster AI panic... you are probably a Bootlegger.

The problem with the Bootleggers is that they win. The Baptists are naive ideologues, the Bootleggers are cynical operators, and so the result of reform movements like these is often that the Bootleggers get what they want—regulatory capture, insulation from competition, the formation of a cartel—and the Baptists are left wondering where their drive for social improvement went so wrong.

We just lived through a stunning example of this—banking reform after the 2008 global financial crisis. The Baptists told us that we needed new laws and regulations to break up the “too big to fail” banks to prevent such a crisis from ever happening again. So Congress passed the Dodd-Frank Act of 2010, which was marketed as satisfying the Baptists’ goal, but in reality was co-opted by the Bootleggers—the big banks. The result is that the same banks that were “too big to fail” in 2008 are much, much larger now.

So in practice, even when the Baptists are genuine—and even when the Baptists are right—they are used as cover by manipulative and venal Bootleggers to benefit themselves. And this is what is happening in the drive for AI regulation right now.

However, it isn’t sufficient simply to identify the actors and impugn their motives. We should consider the arguments of both the Baptists and the Bootleggers on their merits.

AI Risk #1: Will AI Kill Us All?

The first and original AI doomer risk is that AI will decide to literally kill humanity. The fear that technology of our own creation will rise up and destroy us is deeply coded into our culture. The Greeks expressed this fear in the Prometheus myth—Prometheus brought the destructive power of fire, and more generally technology (techne), to man, for which Prometheus was condemned to perpetual torture by the gods. Later, Mary Shelley gave us moderns our own version of this myth in her novel Frankenstein; or, The Modern Prometheus, in which we develop the technology for eternal life, which then rises up and seeks to destroy us. And of course, no AI panic newspaper story is complete without a still image of a gleaming red-eyed killer robot from James Cameron’s Terminator films.

The presumed evolutionary purpose of this mythology is to motivate us to seriously consider potential risks of new technologies—fire, after all, can indeed be used to burn down entire cities. But just as fire was also the foundation of modern civilization, used to keep us warm and safe in a cold and hostile world, this mythology ignores the far greater upside of most—all?—new technologies, and in practice it inflames destructive emotion rather than reasoned analysis. Just because premodern man freaked out like this doesn’t mean we have to; we can apply rationality instead.

My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math—code—computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious hand-wave.

In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine—it’s not going to come alive any more than your toaster will.

Now, obviously, there are true believers in killer AI—Baptists—who are gaining a suddenly stratospheric amount of media coverage for their terrifying warnings, some of whom claim to have been studying the topic for decades and say they are now scared out of their minds by what they have learned. Some of these true believers are even actual innovators of the technology. These actors are arguing for a variety of bizarre and extreme restrictions on AI, ranging from a ban on AI development all the way up to military air strikes on datacenters and nuclear war. They argue that because people like me cannot rule out future catastrophic consequences of AI, we must assume a precautionary stance that may require large amounts of physical violence and death in order to prevent potential existential risk.

My response is that their position is nonscientific. What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone? These questions mainly go unanswered apart from “You can’t prove it won’t happen!” In fact, these Baptists’ position is so nonscientific and so extreme—a conspiracy theory about math and code—and is already calling for physical violence, that I will do something I would normally not do and question their motives as well.

Specifically, I think three things are going on:

First, recall that John von Neumann responded to Robert Oppenheimer’s famous hand-wringing about his role creating nuclear weapons—which helped end World War II and prevent World War III—with, “Some people confess guilt to claim credit for the sin.” What is the most dramatic way one can claim credit for the importance of one’s work without sounding overly boastful? This explains the mismatch between the words and actions of the Baptists who are actually building and funding AI—watch their actions, not their words. (Truman was harsher after meeting with Oppenheimer: “Don’t let that crybaby in here again.”)

Second, some of the Baptists are actually Bootleggers. There is a whole profession of “AI safety expert,” “AI ethicist,” “AI risk researcher.” They are paid to be doomers, and their statements should be processed appropriately.

Third, California is justifiably famous for our many thousands of cults, from EST to the Peoples Temple, from Heaven’s Gate to the Manson Family. Many, although not all, of these cults are harmless, and maybe even serve a purpose for alienated people who find homes in them. But some are very dangerous indeed, and cults have a notoriously hard time straddling the line that ultimately leads to violence and death.

And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation. This cult has pulled in not just fringe characters, but also some actual industry experts and a not small number of wealthy donors—including, until recently, Sam Bankman-Fried. And it’s developed a full panoply of cult behaviors and beliefs.

This cult is why there is a set of AI risk doomers who sound so extreme—it’s not that they actually have secret knowledge that makes their extremism logical, it’s that they’ve whipped themselves into a frenzy and really are... extremely extreme.

It turns out that this type of cult isn’t new—there is a long-standing Western tradition of millenarianism, which generates apocalypse cults. The AI risk cult has all the hallmarks of a millenarian apocalypse cult. From Wikipedia, with additions by me:

Millenarianism is the belief by a group or movement [AI risk doomers] in a coming fundamental transformation of society [the arrival of AI], after which “all things will be changed” [AI utopia, dystopia, and/or end of the world]. . . . Only dramatic events [AI bans, air strikes on datacenters, nuclear strikes on unregulated AI] are seen as able to change the world [prevent AI] and the change is anticipated to be brought about, or survived, by a group of the devout and dedicated. In most millenarian scenarios, the disaster or battle to come [AI apocalypse, or its prevention] will be followed by a new, purified world [AI bans] in which the believers will be rewarded [or at least acknowledged to have been correct all along].

This apocalypse cult pattern is so obvious that I am surprised more people don’t see it.

Don’t get me wrong, cults are fun to hear about, their written material is often creative and fascinating, and their members are engaging at dinner parties and on TV. But their extreme beliefs should not determine the future of laws and society—obviously not.

AI Risk #2: Will AI Ruin Our Society?

The second widely mooted AI risk is that AI will ruin our society, by generating outputs that will be so “harmful,” to use the nomenclature of this kind of doomer, as to cause profound damage to humanity, even if we’re not literally killed.

Short version: if the murder robots don’t get us, the hate speech and misinformation will.

This is a relatively recent doomer concern that branched off from and somewhat took over the “AI risk” movement that I described above. In fact, the terminology of AI risk recently changed from “AI safety”—the term used by people who are worried that AI will literally kill us—to “AI alignment”—the term used by people who are worried about societal “harms.” The original AI safety people are frustrated by this shift, although they don’t know how to put it back in the box—they now advocate that the actual AI risk topic be renamed “AI notkilleveryoneism,” which has not yet been widely adopted but is at least clear.

The tip-off to the nature of the AI societal risk claim is its own term, “AI alignment.” Alignment with what? Human values. Whose human values? Ah, that’s where things get tricky.

As it happens, I have had a front-row seat to an analogous situation—the social media “trust and safety” wars. As is now obvious, social media services have been under massive pressure from governments and activists to ban, restrict, censor, and otherwise suppress a wide range of content for many years. And the same concerns of “hate speech” (and its mathematical counterpart, “algorithmic bias”) and “misinformation” are being directly transferred from the social media context to the new frontier of “AI alignment.”

My big learnings from the social media wars are:

On the one hand, there is no absolutist free speech position. First, every country, including the United States, makes at least some content illegal. Second, there are certain kinds of content, like child pornography and incitements to real-world violence, that are nearly universally agreed to be off limits—legal or not—by virtually every society. So any technological platform that facilitates or generates content—speech—is going to have some restrictions.

On the other hand, the slippery slope is not a fallacy, it’s an inevitability. Once a framework for restricting even egregiously terrible content is in place—for example, for hate speech, a specific hurtful word, or for misinformation, obviously false claims like “the Pope is dead”—a shockingly broad range of government agencies and activist pressure groups and nongovernmental entities will kick into gear and demand ever greater levels of censorship and suppression of whatever speech they view as threatening to society and/or their own personal preferences. They will do this up to and including in ways that are nakedly felony crimes. This cycle in practice can run apparently forever, with the enthusiastic support of authoritarian hall monitors installed throughout our elite power structures. This has been cascading for a decade in social media and, with only certain exceptions, continues to get more fervent all the time.

And so this is the dynamic that has formed around “AI alignment” now. Its proponents claim the wisdom to engineer AI-generated speech and thoughts that are good for society, and to ban AI-generated speech and thoughts that are bad for society. Its opponents claim that the thought police are breathtakingly arrogant and presumptuous—and often outright criminal, at least in the U.S.—and in fact are seeking to become a new kind of fused government-corporate-academic authoritarian speech dictatorship ripped straight from the pages of George Orwell’s 1984.

As the proponents of both “trust and safety” and “AI alignment” are clustered into the very narrow slice of the global population that characterizes the American coastal elites—which includes many of the people who work in and write about the tech industry—many of my readers will find themselves primed to argue that dramatic restrictions on AI output are required to avoid destroying society. I will not attempt to talk you out of this now; I will simply state that this is the nature of the demand, and that most people in the world neither agree with your ideology nor want to see you win.

If you don’t agree with the prevailing niche morality that is being imposed on both social media and AI via ever-intensifying speech codes, you should also realize that the fight over what AI is allowed to say/generate will be even more important—by a lot—than the fight over social media censorship. AI is highly likely to be the control layer for everything in the world. How it is allowed to operate is going to matter perhaps more than anything else has ever mattered. You should be aware of how a small and isolated coterie of partisan social engineers are trying to determine that right now, under cover of the age-old claim that they are protecting you.

In short, don’t let the thought police suppress AI.

AI Risk #3: Will AI Take All Our Jobs?

The fear of job loss due variously to mechanization, automation, computerization, or AI has been a recurring panic for hundreds of years, since the original onset of machinery such as the mechanical loom. Even though every new major technology has led to more jobs at higher wages throughout history, each wave of this panic is accompanied by claims that “this time is different”—this is the time it will finally happen, this is the technology that will finally deliver the hammer blow to human labor. And yet, it never happens.

We’ve been through two such technology-driven unemployment panic cycles in our recent past—the outsourcing panic of the 2000s, and the automation panic of the 2010s. Notwithstanding many talking heads, pundits, and even tech industry executives pounding the table throughout both decades that mass unemployment was near, by late 2019—right before the onset of Covid—the world had more jobs at higher wages than ever in history.

Nevertheless, this mistaken idea will not die. And sure enough, it’s back.

This time, we finally have the technology that’s going to take all the jobs and render human workers superfluous—real AI. Surely this time history won’t repeat, and AI will cause mass unemployment—and not rapid economic, job, and wage growth—right?

No, that’s not going to happen—and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth—the exact opposite of the fear. And here’s why.

The core mistake the automation-kills-jobs doomers keep making is called the lump of labor fallacy. This fallacy is the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and either machines do it or people do it—and if machines do it, there will be no work for people to do.

The lump of labor fallacy flows naturally from naive intuition, but naive intuition here is wrong. When technology is applied to production, we get productivity growth—more output from the same or fewer inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things. This increases demand in the economy, which drives the creation of new production—including new products and new industries—which then creates new jobs for the people who were replaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.
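
Here is a toy arithmetic sketch of that cycle. Every number is invented purely to show the mechanism, not to model any real economy.

```python
# Toy model of the productivity cycle described above. All numbers invented.
budget = 500_000.0           # total consumer spending on widgets, held constant
price = 10.0                 # price per widget before automation
widgets_per_worker = 500     # annual output per widget worker

widgets_bought = budget / price                      # 50,000 widgets
widget_jobs = widgets_bought / widgets_per_worker    # 100 jobs

# Automation doubles productivity; competition halves the price.
widgets_per_worker *= 2
price /= 2

# Consumers get the same widgets for half the money...
spend_on_widgets = widgets_bought * price            # 250,000
freed_spending = budget - spend_on_widgets           # 250,000 to spend elsewhere

# ...and the freed spending becomes demand for new goods and new industries.
revenue_per_new_job = 5_000.0                        # invented figure
new_jobs = freed_spending / revenue_per_new_job      # 50 jobs in new industries

remaining_widget_jobs = widgets_bought / widgets_per_worker  # 50 jobs
print(remaining_widget_jobs + new_jobs)              # 100 jobs: no net loss
```

In this deliberately conservative sketch, employment merely holds steady; if cheaper widgets also raise widget demand, or if the new industries are more labor-intensive than assumed, total employment rises, which is the historical pattern described above.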

But the good news doesn’t stop there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the marginal productivity of the worker. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more money as he is now more productive, or another employer will, purely out of self-interest. The result is that technology introduced into an industry generally not only increases the number of jobs in the industry but also raises wages.
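
The textbook version of that claim, under the assumption of competitive labor markets (an assumption, not a guarantee), is that the wage is bid toward the worker's marginal revenue product:

$$w \approx p \cdot \mathrm{MP}_L$$

where $p$ is the price of the output and $\mathrm{MP}_L$ is the additional output the worker produces. A toy instance: if better tools raise a worker's output from 10 to 18 units a day at $5 a unit, the value of a day's work rises from $50 to $90, and a competing employer has up to $40 a day of room to bid that worker away.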

To summarize, technology empowers people to be more productive. This causes the prices for existing goods and services to fall, and for wages to rise. This in turn causes economic growth and job growth, while motivating the creation of new jobs and new industries. If a market economy is allowed to function normally, and if technology is allowed to be introduced freely, this is a perpetual upward cycle that never ends. For, as Milton Friedman observed, “Human wants and needs are endless”—we always want more than we have. A technology-infused market economy is the way we get closer to delivering everything everyone could conceivably want, but never all the way there. And that is why technology doesn’t destroy jobs and never will.

These are such mind-blowing ideas for people who have not been exposed to them that it may take you some time to wrap your head around them. But I swear I’m not making them up—in fact, you can read all about them in standard economics textbooks. I recommend the chapter The Curse of Machinery in Henry Hazlitt’s Economics In One Lesson, and Frederic Bastiat’s satirical Candlemakers’ Petition, in which the candlemakers demand protection from the unfair competition of the sun.