More than a dozen US states have passed laws covering AI use, and more regulations are on the way as additional state legislatures debate AI bills. Here’s how emerging laws and mandates could impact CIOs’ AI strategies.
As artificial intelligence adoption has surged in the past year, many voices have called for regulation to protect people from adverse machine decisions — and regulatory bodies are responding with a complex patchwork of emerging statutes and mandates that CIOs will need to navigate to ensure their AI strategies are compliant wherever their organizations do business.
For example, the White House has released a blueprint for an AI bill of rights, and the European Parliament passed the wide-ranging AI Act in March, regulating AI systems used in the European Union.
Meanwhile, leaders from Microsoft, Google, and OpenAI have all called for AI regulations in the US, and the US Chamber of Commerce, often opposed to business regulation, has called on Congress to protect human rights and national security as AI use expands.
However, Congress, mired in partisan infighting, seems unlikely to move forward on serious AI legislation anytime soon. The 118th session of Congress, covering 2023 and 2024, may end up as the least productive session in US history, with only 47 bills passed and signed into law between the beginning of 2023 and April 1 of this year.
But Congress’ inaction doesn’t mean AI regulation isn’t evolving apace in the US. Sixteen states had already enacted AI-related legislation as of late January, and state legislatures have introduced more than 400 AI bills across the US this year, six times the number introduced in 2023.
Many of the bills target both the developers of AI technologies and the organizations putting AI tools to use, says Goli Mahdavi, a lawyer with global law firm BCLP, which has established an AI working group. And with populous states such as California, New York, Texas, and Florida either passing or considering AI legislation, companies doing business across the US won’t be able to avoid the regulations.
Enterprises developing and using AI should be ready to answer questions about how their AI tools work, even when deploying automated tools as simple as spam filtering, Mahdavi says. “Those questions will come from consumers, and they will come from regulators,” she adds. “There’s obviously going to be heightened scrutiny here across the board.”
California and Connecticut lead the pack
One state to watch is California, partly because its large population interacts with businesses across the US, and partly because its legislature tends to be ahead of the pack on consumer protection issues.
“These laws tend to have extra-territorial reach,” Mahdavi says. “Because of the size of California’s economy, most businesses in the US are going to be targeting California, are going to be doing business in California.”
Senate Bill 1047, introduced in the California State Legislature in February, would require safety testing of AI products before they’re released, and would require AI developers to prevent others from creating derivative models of their products that are used to cause critical harms.
Another state to keep an eye on is Connecticut, where the legislature has shown strong interest in AI regulation, Mahdavi adds. AI legislation proposed in Connecticut has also served as a model for lawmakers in other states.
Last year, the Connecticut General Assembly passed Senate Bill 1103, which regulates state procurement of AI tools. This year, lawmakers in the state are considering Senate Bill 2, which would require organizations deploying AI for consequential, high-risk decisions to develop risk management policies. Developers would need to disclose how their AI systems could be used to discriminate against people.
The comprehensive SB 2 would also require organizations deploying AI to take reasonable care to protect state residents against algorithmic discrimination, and it would require companies using AI to inform the people affected when AI tools make consequential decisions about them.
The bill would also prohibit the distribution of deceptive AI-generated election materials and require companies using synthetic media to disclose that the content was generated or manipulated by AI. The Connecticut bill is “super ambitious, and they try to tackle a variety of issues,” Mahdavi says.
Three types of AI bills
Most state bills targeting AI fall into three categories, according to Mahdavi.
The first category includes pure transparency bills, generally covering both the development of AI and the output of its use. These bills, introduced in California, New York, Florida, and other states, often require organizations using AI to tell the public when they’re interacting with AI models, and require AI developers and users to disclose the data sets used to train large AI models. Other transparency bills regulate the use of AI by political campaigns.
The second category focuses on specific sectors, particularly high-risk uses of AI to determine or assist with decisions related to employment, housing, healthcare, and other major life issues. For example, New York City Local Law 144, passed in 2021, prohibits employers and employment agencies from using an automated tool for employment decisions unless the tool has undergone a bias audit within the previous year. A handful of states, including New York, New Jersey, and Vermont, appear to have modeled legislation after the New York City law, Mahdavi says.
The third category comprises broad, comprehensive AI bills, often focused on transparency, preventing bias, requiring impact assessments, providing for consumer opt-outs, and other issues. These bills tend to impose regulations on both AI developers and deployers, Mahdavi says.
Addressing the impact
The proliferation of state laws regulating AI may cause organizations to rethink their deployment strategies, with an eye on compliance, says Reade Taylor, founder of IT solutions provider Cyber Command.
“These laws often emphasize the ethical use and transparency of AI systems, especially concerning data privacy,” he says. “The requirement to disclose how AI influences decision-making processes can lead companies to rethink their deployment strategies, ensuring they align with both ethical considerations and legal requirements.”
But a patchwork of state laws across the US also creates a challenging environment for businesses, particularly small to midsize companies that may not have the resources to monitor multiple laws, he adds.
A growing number of state laws “can either discourage the use of AI due to the perceived burden of compliance or encourage a more thoughtful, responsible approach to AI implementation,” Taylor says. “In our journey, prioritizing compliance and ethical considerations has not only helped mitigate risks but also positioned us as a trusted partner in the cybersecurity domain.”
The growing number of state laws focused on AI has some positive and some potentially negative effects, adds Adrienne Fischer, a lawyer with Basecamp Legal, a Denver law firm monitoring state AI bills. On the plus side, many of the state bills promote best practices in privacy and data security, she says.
“On the other hand, the diversity of regulations across states presents a challenge, potentially discouraging businesses due to the complexity and cost of compliance,” Fischer adds. “This fragmented regulatory environment underscores the call for national standards or laws to provide a coherent framework for AI usage.”
Organizations that proactively monitor and comply with the evolving legal requirements can gain a strategic advantage. “Staying ahead of the legislative curve not only minimizes risk but can also foster trust with consumers and partners by demonstrating a commitment to ethical AI practices,” Fischer says.
Mahdavi also recommends that organizations not wait until the regulatory landscape settles. Companies should first take an inventory of the AI products they’re using, then rate the risk of each tool, focusing on products that make outcome-based decisions in employment, credit, healthcare, insurance, and other high-impact areas. From there, companies should establish an AI governance plan.
“You really can’t understand your risk posture if you don’t understand what AI tools you’re using,” she says.
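For teams taking that first inventory step, here is a minimal sketch of what a risk-rated AI inventory could look like in Python. It is purely illustrative: the AITool structure, the risk tiers, and the HIGH_IMPACT_DOMAINS set are hypothetical stand-ins, not requirements drawn from any of the statutes or guidance discussed above.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    # Hypothetical tiers for illustration; a real program should use
    # criteria reviewed by counsel against the applicable statutes.
    LOW = 1
    MEDIUM = 2
    HIGH = 3


# Illustrative high-impact domains, echoing the areas named above.
HIGH_IMPACT_DOMAINS = {"employment", "credit", "healthcare", "insurance", "housing"}


@dataclass
class AITool:
    name: str
    vendor: str
    domain: str                    # business area where the tool is used
    makes_outcome_decisions: bool  # decides, or materially assists, decisions about people
    states_deployed: list = field(default_factory=list)

    def risk_tier(self) -> RiskTier:
        # Outcome-based decisions in high-impact areas get the highest rating.
        if self.makes_outcome_decisions and self.domain in HIGH_IMPACT_DOMAINS:
            return RiskTier.HIGH
        if self.makes_outcome_decisions:
            return RiskTier.MEDIUM
        return RiskTier.LOW


if __name__ == "__main__":
    inventory = [
        AITool("resume-screener", "ExampleVendor", "employment", True, ["NY", "CA"]),
        AITool("spam-filter", "ExampleVendor", "email", False, ["CT"]),
    ]
    # Review the highest-risk tools first when drafting the governance plan.
    for tool in sorted(inventory, key=lambda t: t.risk_tier().value, reverse=True):
        print(f"{tool.name}: {tool.risk_tier().name} in {tool.states_deployed}")
```

Even a toy structure like this forces the questions regulators are likely to ask: what each tool does, whom its decisions affect, and in which states it is deployed.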