
Balancing innovation and regulation in AI: A fine scalpel, not a heavy club


FILE - President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, Monday, Oct. 30, 2023, in Washington. Vice President Kamala Harris looks on at right. (AP Photo/Evan Vucci)

Artificial intelligence must be regulated, an expert said Friday. But he said it's vital to consider the technological and economic feasibility of any such effort.

“I think the regulation needs to be like a fine scalpel so that you can carve off the undesirable pieces, and you can sort of accentuate the positive outcomes of this,” said Saurabh Bagchi, a professor of electrical and computer engineering at Purdue University.

AI is a “fast-moving train” that’s proven difficult for policymakers to keep up with, he said. And he warned against taking a “heavy club” approach that could tamp down innovation.

“And we certainly don't want that here,” he said. “This is, I think, a gold rush, and we want the U.S. to be the first and to be able to leverage that gold rush better. And therefore, we have to tread this regulation landscape very carefully.”

Some AI guardrails might be great in concept, but the tools to implement them don't exist yet. That's not to say they never will, Bagchi said. They just don't yet.

Then there are regulations that, though perhaps well-meaning, might put financial burdens on companies that dissuade them from the research and investment that could give us the next AI breakthrough, he said.

The “sweet spot” is a rule that the industry can both technologically and financially abide by, Bagchi said.

An example of a pressing area of AI regulation that hits the sweet spot is “equity,” he said.

AI has been used to approve or decline home loan requests, to make parole decisions, and to inform predictive policing. But those AI-generated decisions aren't always transparent.

Bagchi said regulations could ensure AI systems provide details for their outputs in such cases.

“So, if I know that my home loan request is being denied, I can see that it's being denied because my credit rating is at this level,” he described. “If I can get my credit rating here, then I have a higher chance of it getting accepted.”
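The kind of transparency Bagchi describes resembles what machine-learning practitioners call a counterfactual explanation: the system states not only the decision, but what change would flip it. Here is a minimal sketch of that idea; the feature, threshold, and scoring rule are invented for illustration and are not drawn from any real lending system.

```python
# Illustrative counterfactual explanation for a loan decision.
# The threshold and the single-feature rule are hypothetical.

APPROVAL_THRESHOLD = 680  # invented minimum credit score for this example

def decide_loan(credit_score: int) -> dict:
    """Return an approve/deny decision plus a human-readable reason,
    including what change would alter the outcome."""
    approved = credit_score >= APPROVAL_THRESHOLD
    if approved:
        reason = (f"credit score {credit_score} meets the "
                  f"{APPROVAL_THRESHOLD} minimum")
    else:
        shortfall = APPROVAL_THRESHOLD - credit_score
        reason = (f"credit score {credit_score} is {shortfall} points below "
                  f"the {APPROVAL_THRESHOLD} minimum; raising it to "
                  f"{APPROVAL_THRESHOLD} would change the outcome")
    return {"approved": approved, "reason": reason}

print(decide_loan(650)["reason"])
print(decide_loan(700)["reason"])
```

Real credit models weigh many features at once, so production explanations are harder to generate, which is part of why Bagchi frames explainability rules as a question of technological feasibility.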

Some companies are establishing their own parameters for responsible AI development, such as OpenAI’s recently released "Preparedness Framework."

But most Americans don’t trust businesses to use AI responsibly, according to a Gallup survey.

Bagchi said the federal government should handle the bulk of AI regulations, though there could be a place at the state level for some “fine-tuning.”

He said the White House’s executive order on AI was “a very credible step” and helped set the tone for how we should govern AI development.

The White House said the order established new standards for AI safety and security. The order addressed privacy concerns, equity, consumers, workers, and U.S. competitiveness in this fast-emerging field.

President Joe Biden called AI the “most consequential technology of our time.”

But Bagchi said Congress needs to catch up and put some legislative teeth behind these rules of engagement.

Bagchi said he’s confident Congress will meet the moment, given the amount of attention being paid to AI.

AI and the internet know no boundaries, so Bagchi also said transnational organizations, such as the United Nations and European Union, need to play a role in reining in the technology.

Bagchi offered a couple of aspirational ways of controlling AI that aren’t yet technologically feasible.

One example would be providing credit and “micropayments” to writers, artists and other content creators so they benefit when their work is used in AI-generated output.

“It brings together a much larger portion of society who are creating outputs and who right now are very cagey about the use of AI or letting AI use any of their content,” he said. “And that, I think, slows down progress and sort of blunts the positive effects of AI. But we've got to solve this important problem.”

It’s also important to regulate how human-generated content and AI-generated content can be identified. The technology to do that effectively isn’t there yet, though Bagchi believes progress is quickly being made.
