
US Lawmakers Warned That AI Needs a 'Safety Brake'

Legislative 'Blueprint' Provides Regulatory Road Map for AI

Artificial intelligence without human supervision runs the risk of catastrophe, two tech executives warned a panel of U.S. senators who intend to introduce regulatory legislation later this year.


Tech companies have rushed to incorporate AI into their products, stoking worries that an unregulated race to deploy machine intelligence could trigger unintended consequences.

Microsoft President Brad Smith - whose company has partnered with ChatGPT maker OpenAI to embed AI in a slew of its products, including its Bing search engine - told senators that AI needs a "safety brake" before it can be deployed without concern.

"Maybe it's one of the most important things we need to do so that we ensure that the threats that many people worry about remain part of science fiction and don’t become a new reality. Let's keep AI under the control of people. It needs to be safe,” Smith said during a Senate Judiciary subcommittee on privacy hearing on AI regulation. "We need a safety brake, just like we have a circuit breaker in every building and home in this country to stop the flow of electricity if that's needed,” he added.

William Dally, chief scientist and senior vice president of chip designer Nvidia, sounded a similar warning: "The way we make sure we have control over all sorts of AI is keeping a human in the loop," he said, with a person standing between AI output and any consequential action. "Keeping humans in the loop is critical to keeping airplanes from falling out of the sky."

Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo. - respectively, the chair and ranking member of the subcommittee - earlier released what Blumenthal calls "a comprehensive, legislative blueprint for real enforceable AI protections."

The bipartisan AI framework proposes establishing a licensing regime for AI models at or above GPT-4's level of sophistication, or for models used in high-risk applications such as facial recognition. The senators also say companies should bear legal liability when their models breach privacy, violate civil rights "or otherwise cause cognizable harms."

During the hearing, Blumenthal urged witnesses to share their industry perspective and promised to hold more legislative hearings. "We won't be offended by criticism," he said, adding that the aim is for Congress to move forward "later this year" with AI legislation.

The United States is far from enacting comprehensive AI regulation, although the Biden administration has spent months collecting tech company signatories to a slew of voluntary policy commitments, such as investing in AI model cybersecurity, red-teaming models for misuse and national security risks, and accepting vulnerability reports from third parties. Companies that sign the White House pledge say they will watermark AI-generated audio and visual material that is otherwise indistinguishable from organic content and will develop tools to identify content created within their own systems (see: IBM, Nvidia, Others Commit to Develop 'Trustworthy' AI).

Many tech executives privately worry that the European Union is poised to effectively set Western standards for AI deployment as the trading bloc finalizes a regulation first proposed by the European Commission in April 2021. Among the issues being discussed in final negotiations is the extent to which generative AI models such as ChatGPT should be subject to testing and transparency requirements.

Woodrow Hartzog, a Boston University law professor who specializes in privacy issues, told the committee that any legislation should be robust. "Half measures will not protect us. A checklist is no match for those who exploit our data," he said.

AI-enabled services, especially those in high-risk sectors, should be subject to licensing requirements, while applications that pose less risk of harm may warrant less stringent licensing or regulation, Nvidia's Dally testified.


About the Author

Marianne Kolbasuk McGee

Executive Editor, HealthcareInfoSecurity, ISMG

McGee is executive editor of Information Security Media Group's HealthcareInfoSecurity.com media site. She has about 30 years of IT journalism experience, with a focus on healthcare information technology issues for more than 15 years. Before joining ISMG in 2012, she was a reporter at InformationWeek magazine and its news site, where she played a lead role in the launch of InformationWeek's healthcare IT media site.
