With AI as with everything else, stories matter. Myths matter. AI companies love the narrative of AI as an arms race: it justifies their rush to market, and it leaves people thinking that maybe we need to race ahead on developing advanced AI even if doing so could drive up the odds of human extinction.
Katja Grace doesn’t buy that myth.
As the lead researcher at AI Impacts, an AI safety project at the nonprofit Machine Intelligence Research Institute, Grace has made the case that AI is not actually an arms race. In May, she wrote an article in Time arguing that “the AI situation is different in crucial ways. Notably, in the classic arms race, a party could always theoretically get ahead and win. But with AI, the winner may be advanced AI itself. This can make rushing the losing move.”
Besides, she notes, if one lab or nation takes the time to iron out some AI safety issues instead of racing ahead, other labs or nations may take those improvements on board, which would benefit everyone. She writes:
A better analogy for AI than an arms race might be a crowd standing on thin ice, with abundant riches on the far shore. They could all reach them if they step carefully, but one person thinks: “If I sprint then the ice may break and we’d all fall in, but I bet I can sprint more carefully than Bob, and he might go for it.”
On AI, we could be in the exact opposite of a race. The best individual action could be to move slowly and cautiously. And collectively, we shouldn’t let people throw the world away in a perverse race to destruction — especially when routes to coordinating our escape have scarcely been explored.
What Grace is offering here is a counternarrative: less arms race, more tragedy of the commons — a classic type of coordination problem that we know, at least in principle, how to solve.
Grace has also gathered data that’s shaped the debate around the risks of AI: In an oft-cited summer 2022 survey of machine learning researchers, 48 percent of respondents said they thought there was a 10 percent or greater chance that the effects of AI would be “extremely bad (e.g., human extinction).”
And since late 2022, Grace has been a persuasive voice calling on the powers that be to seriously consider slowing down AI progress. Contrary to the myth that technological progress is inevitable and trying to slow it down is futile, she noted on her blog that there are lots of technologies that we’ve decided not to build, or that we’ve built but placed very tight restrictions on — like human cloning and human germline modification.
Not long ago, “slow down” was a taboo idea in the AI world. It took guts to say it. Since then, Grace’s early calls have been echoed in open letter after open letter signed by worried technologists. Now, in late 2023, “slow down” is a fairly common position.
Grace has shown not only that she’s got guts, but also that there’s power in telling ourselves a different story.