Innovation in artificial intelligence is advancing at an accelerating pace. Whether AI has the patience for a presidential election cycle, however, is another question.
Each day, machine learning and other forms of AI, along with the content they produce, outpace headline writers’ ability to comprehend the technology. In 2023 alone, AI-powered programs have surpassed human capabilities in diagnosing diseases from medical scans, including X-rays, MRIs and CT scans. They can also quickly assess students’ learning styles and recommend lesson plans tailored to their strengths and weaknesses. They can mimic famous artists, compose music and even replicate the sound of a person’s voice.
But in the U.S., these technologies remain largely unregulated and are evolving rapidly while policymakers race to understand and govern them. They have the power to save lives and tap the limitless potential of the information age, along with the potential to become a job-killing scourge that wreaks havoc on the global economy.
The direction of America’s AI policy is already emerging as a question in the 2024 presidential election, with everyone from Republicans Will Hurd and Vivek Ramaswamy to President Joe Biden increasingly focused on AI’s awesome and terrifying possibilities.
“Sixty-five percent of Americans are concerned that robots are going to take their jobs,” Hurd said in a recent interview with ABC News.
Whoever wins the 2024 election could have a profound impact on the direction of AI and the future it will come to define. But at this point, few of the candidates—even the president—have been able to articulate the specifics of what that future might look like.
Presidents and Machines
Washington, D.C., has already been thinking about machine learning and AI for years.
As early as February 2019, President Donald Trump signed an executive order creating the American AI Initiative. The order set up a federal office committed to doubling AI research investment, established a series of national AI research institutes, created plans for AI technical standards and set regulatory guidance standards for the then-burgeoning industry.
Earlier this week, Vice President Kamala Harris met with labor leaders to discuss their concerns and hopes for the future of the technology. And on Capitol Hill, Senate Majority Leader Chuck Schumer has reportedly been drafting a sweeping regulatory framework for the AI industry that, while murky, seeks to strike a middle ground between Big Tech and organized labor.
Ultimately, it is the president’s views on AI that will likely guide the direction America goes in. But while both political parties have largely mirrored each other’s policies, from the Trump administration to the current one, Republicans and Democrats emphasize different priorities in how they would execute those policies.
President Joe Biden delivers a nationally televised address from the Oval Office on June 2. The president has stated broad principles that he thinks should govern future AI regulations but without getting specific.
Win McNamee/Getty Images
“There aren’t as many partisan differences as you might expect on broad themes for AI policy,” said Matthew O’Shaughnessy, a visiting fellow in technology and international affairs at the Carnegie Endowment for International Peace.
He told Newsweek, “No matter the outcome of 2024, the White House and Congress will be focused on things like developing guidelines for responsible use of AI, boosting U.S. leadership in AI development and encouraging AI use throughout government.”
Specific emphases, O’Shaughnessy added, might differ.
“The Biden administration has emphasized civil rights and equity in its AI work much more than the Trump administration did,” he said. “Both administrations prioritize innovation, but the Trump administration was more hesitant to create AI rules it thought might limit growth.”
One presidential candidate has indicated where he might go. Ramaswamy, a biopharmaceutical executive, has said he will introduce a comprehensive plan for AI in his campaign platform. He points out that there are serious risks both in overregulating AI and in ignoring the risks it poses.
“Outright bans aren’t the answer,” he wrote on social media. “The right approach is to set clear rules for who bears liability for unforeseen consequences of AI protocols, and we should be *very* skeptical of proposed regulations from large companies currently trying to commercialize AI.”
Regulation Templates in the EU, China
At this point, it’s still not known precisely what U.S. governance of AI technology will look like. But AI is not a uniquely American phenomenon, and some countries have already offered templates of what a potential regulatory scheme could look like.
The European Union, for instance, has embraced a broad-based approach, grouping AI applications into four risk categories ranging from “minimal” (the most basic forms of AI) to “limited-risk” applications like chatbots and mood detectors, “high-risk” applications like law enforcement or hiring procedures, and “unacceptable” applications such as social scoring or certain types of biometrics, all of which are subject to different regulations.
China, meanwhile, has rolled out a national framework that regulates the deployment of specific algorithm applications in certain contexts, such as consent requirements for deepfake content or guiding principles for algorithms guiding workplace productivity.
While the U.S. has not established its own formal frameworks yet, Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation, told Newsweek that some in the private sector, including Microsoft, have begun to chart their own policy recommendations. These include licensing requirements for AI companies and third-party audits of how the technology is being used.
But at the lightning pace at which the technology is moving, candidates—President Joe Biden in particular—will likely need to make their positions clear, and soon. More than two-thirds of Americans said they were concerned about the negative effects of AI in a May Reuters/Ipsos poll. And 61 percent said they believed it could threaten the fate of civilization.
Articulating specifics of his vision for the technology, West said about Biden, will be critical both for the companies that use the technology and a public that fears its implications.
“Biden has put out broad principles that he thinks should govern the future regulations, but he actually hasn’t specified what the regulations should be and how far they should go,” West said.
“Of course, that’s what everybody wants to know. That’s what companies want to know. That’s what advocates want to know too. We’re kind of in that stage where we have to figure out how to convert the broad principles to actual regulations,” he said.
An Articulated Vision
An articulated vision for AI could help soothe an anxious public—particularly a generation that, for nearly two decades, has been manipulated in myriad ways by algorithms defining what music people listen to or the products they buy.
That’s the landscape politicians are inheriting and need to work to regulate, Vince Lynch, CEO of software company IV.AI, told Newsweek.
“It’s not going away,” he said. “It’s not going to just stop, and even if America decides to stop, it doesn’t mean it stops. It’s a globally open thing that can be used. We really need people to be thinking about it and focused on how to use it to our advantage versus not paying attention to what’s happening.”
Lynch, whose clients have included companies ranging from Netflix to the federal government, said the conversations around AI have long been siloed within the individual companies that use it. This leaves federal officials flat-footed, often reacting to issues involving AI, particularly the algorithms that control the most mundane facets of everyday life.
After the 2016 presidential election, for example, the data firm Cambridge Analytica faced international scrutiny after harvesting personal data from platforms like Facebook to feed users inflammatory political commentary. Its actions raised questions about whether the election result was affected.
More recently, OpenAI, the company behind the popular chatbot ChatGPT, is facing investigations by European and Canadian data protection authorities over its data collection practices. ChatGPT has already been temporarily banned in Italy.
While the U.S. should be cautious not to stifle innovation in AI, Lynch said, it should be keenly aware of the technology’s risks. It should focus on creating people-first regulations that mitigate possible harm from AI and also find ways the technology can work to the benefit of humanity, he added.
“We have to really be thoughtful about this technology,” Lynch said. “It is incredibly powerful. It is incredibly helpful. It helps us distill human nature, helps us understand real need and, from a political point of view, can help us really understand what the people want, regardless of the political cycle.”
Copyright for syndicated content belongs to the linked Source : Newsweek – https://www.newsweek.com/nobody-running-president-has-plan-ai-1813099