Opinion

Why we need to pause AI research and consider the real dangers

I have spent most of my life as a techno-optimist.

I believe the world has gotten better on average over the centuries — I would rather be a random person in 2023 than a random person at any other time in history.

This progress is largely thanks to advances in science.

I started working in the philosophy and ethics of artificial intelligence in large part because I was enthralled by its potential as the most transformative technology of my lifetime.

Google’s DeepMind says its mission is to “solve intelligence” and from there use the enhanced intelligence to solve all our other problems — global poverty, climate change, cancer, you name it. I find this vision compelling, and I still believe AI has that potential.

But I have reluctantly come to believe that the path there is much narrower and much more dangerous than I once hoped.


This is why I signed the Future of Life Institute’s open letter calling for a moratorium on the further development of the most powerful modern AI (notably “large language models” like OpenAI’s GPT).

First, there are dangers of AI already present but quickly amplifying in both power and prevalence: misinformation, algorithmic bias, surveillance, and intellectual stultification, to name a few. I think these worries are already sufficient to call for more reflection before we proceed.

My colleague Phil Woodward has pointed out that though OpenAI has good intentions, it has released to the public a perfect cheating machine that has already done to higher education what publishing an easy recipe for an undetectable athletic enhancement would do to professional sports.

No doubt ChatGPT and its successors will have many positive effects on education, too — but the disruption in the meantime is undeniable and not obviously for the best.


OpenAI is arguably one of the best-intentioned AI outfits out there, but intentions have a funny way of being warped when profit motives are also on the line. That one of OpenAI’s most notable founders (Elon Musk) also signed FLI’s letter is telling.

These near-term AI concerns are not the whole story, though.

Like many, I am convinced the long-term “existential risk” is very real.


About a decade ago I read the preliminary papers behind Nick Bostrom’s book “Superintelligence,” which basically argues that 1) AI is likely to self-amplify until it reaches a level of scientific and technological sophistication far beyond what humans have, 2) once it reaches that state, humans will basically have no say in what happens next, and 3) it is very hard to make sure such an AI will have our true interests at heart.

Since reading that work I have gone through some of the classic stages of grief.

First, I was in denial; I wrote an academic response to Bostrom in which I thought I could show his arguments were misguided.

The more directly I engaged his book, though, the more I realized he had already considered my objections and refuted them.


In the years since, it has been a mix of bargaining, depression, and acceptance of the fact that advances in my much-beloved AI pose a serious risk to human existence.

These arguments are not always portrayed well in the media. Like many subtle issues, arguments for the position don’t fit well into a soundbite or 280 characters, but apparent takedowns (of oversimplified, strawman versions) do.

Popular science fiction, especially, can mislead our imagination in two opposing directions. First, it can give us the impression artificial general intelligence (AGI) is in the far future or pure fantasy.

But we have already started to see early signs of it right now.

And even if real AGI is still far off — say 50 years or more — the hurdles are at least as hard as those of climate change, and the stakes at least as high.

Second, because science fiction is ultimately about human concerns (and its AI must be portrayed by human actors), we are used to the idea that AI will be like us in most ways. But, as Yuval Harari, Tristan Harris, and Aza Raskin recently put it, we are in the process of summoning a truly alien intelligence.

AI will not share our biological history and so will not have our contingent wants. It will not necessarily be pro-social, for example, any more than it will love sugar and fat.

In this space, I can only urge the curious and concerned to engage in the nuances. Myself, I now devote most of my research time to the “alignment problem”: roughly, the problem of trying to make sure the goals of a superintelligent system are sufficiently aligned with ours to enable human flourishing.

This is a truly interdisciplinary field; it needs computer scientists, ethicists, psychologists, formal epistemologists, governance experts, neuroscientists, mathematicians, public-relations experts, engineers, economists . . . and it needs many more of each.


If you find yourself wanting to know more, you might start with Brian Christian’s excellent overview “The Alignment Problem.”

For those who want to help but aren’t sure how, you might start at the website .

As a philosopher, I am often haunted by a phrase from “Superintelligence,” that AI alignment is “philosophy with a deadline.”

Lately, as we’ve all noticed, that deadline has shortened dramatically.

Steve Petersen is a professor of philosophy at Niagara University.