Politics

Biden’s AI order doesn’t recognize how ghoulish the tech really is

It’s natural to seek reassurance against scary monsters this time of year.

I’d like to imagine that’s why President Biden chose Monday to sign a new executive order on artificial intelligence.

Though the order does comfort me about some of the lesser AI imps and sprites, I fear it does not address the true AI demon.

AI is no imaginary phantom.

It is here, it is already causing trouble, and it has the potential to create vastly more.

And like the scariest movie villains, it is quickly growing in power.

But, like the most nuanced movie villains, it might yet be turned into a force for good.

Let me be clear: Though I’m leaning on horror-movie tropes, I am not saying current AI is any literal kind of villain intending to do humans harm.

Right now, AI is just a powerful tool that — like most powerful tools — is “dual use”: It can help bring about good or evil ends, deliberately or accidentally.

From what I gathered from the White House fact sheet, the executive order is comforting when it comes to regulating tool AI.

I’m delighted to see, for example, it aims to address AI’s impact on misinformation, algorithmic bias, education and wealth inequality with reasonable first steps.

(And I can’t remember the last time the executive branch encouraged citizen privacy and encryption!)

What haunts me, though, is the day AI stops being a tool and starts becoming an agent: roughly, something capable of acting in the world toward its own ends.

In the robot movies, such agentic AI comes to hate or resent us, and we then beat it in a fight.

Both tropes are dangerously misleading.

First, AI doesn’t have to hate us; it just has to have goals slightly misaligned with our own — and given its fundamentally different nature, this is almost inevitable.

As Stuart Russell points out, we don’t hate gorillas, but when we want something the gorillas don’t want, it’s the gorillas who inevitably lose.

Second, it is much more probable AI would quickly reach “superintelligence” rather than stopping at exactly the human level.

And we would have no more chance of winning a war against a superintelligence than gorillas would have in a “war” against us.

It is tragically hard to summarize the arguments that AI is a real existential risk and tragically easy to dismiss mere strawman versions of them, as I’ve written in these pages.

Suffice it to say that many smart people (do you count as smart?) have been deeply concerned.

And sadly, I think the executive order does not go far enough to address existential-level risk from AI.

The relevant provisions require AI labs to share their safety testing while the National Institute of Standards and Technology and others develop new safety tests.

This is probably a good start, but many of us who research this think we have no great idea, even in theory, how to test for agentic AI with misaligned goals.

Here’s a taste of the problem: Suppose intelligent aliens are heading our way from another solar system, and we get a chance to interview them first.

How could we check whether they’re dangerous?

If they are considerably smarter than we are, they’ll be able to anticipate any test we can think of and rig the results.

Even if we were allowed a molecular-level view of everything and everyone on their ships, we should not be sure we could spot trouble.

As in the classic horror films, we could confidently waltz into our own worst nightmare.

It’s worth adding that some measures look positively counterproductive through the lens of existential risk.

You can imagine why training AI to look for exploitable software bugs would unnerve me.

And blowing more resources on AI development just fans the fire — especially when combined with talk of “American leadership,” which can just taunt other countries into an AI arms race that’s more likely to lead everyone into reckless development.

I flaked on my Halloween costume this year, but it occurs to me it’s not too late to do one of those phoned-in, conceptual costumes: If I go as “misaligned agentic AI,” I can be very scary just by looking like a totally ordinary, harmless fellow who passes all safety tests.

Steve Petersen is a professor of philosophy at Niagara University.