
Artificial intelligence is here, and it’s impacting our lives in real ways — whether it’s the Alexa smart speaker on our nightstand, online customer service chatbots, or the smart replies Google drafts for our emails.

But so far, the tech’s development has outpaced regulation. Now, government agencies are increasingly encountering AI-based tools, and they must figure out how to evaluate them. Take the Food and Drug Administration, which greenlights new medical products: It needs to review and approve new health care products that boast AI capabilities — like this one that promises to detect eye problems related to diabetes — before they’re sold to us. Or consider the Equal Employment Opportunity Commission, which investigates employment discrimination. Today, the agency must also make decisions about AI-based hiring algorithms, like those that screen job candidates’ resumes and decide whether or not you deserve an interview.

On Wednesday at CES, the prominent Las Vegas-based technology trade show, White House officials formally announced how the Office of Science and Technology Policy wants federal agencies to approach regulating new artificial intelligence-based tools and the industries that develop the tech.

The White House’s proposed AI guidance discusses some of the biggest concerns technologists, AI ethicists, and even some government officials have about the technology, but the guidelines are centered mostly on encouraging innovation in artificial intelligence and making sure regulations don’t “needlessly” get in the way.

That reflects an ongoing problem for AI, one that’s already played out in other tech sectors, where a rush to innovate without much oversight has only come back to haunt us.

While encouraging innovation in AI is certainly a consideration, critics of the technology have said that regulators must scrutinize artificial intelligence more closely as it continues to be rolled out in the real world. They argue that artificial intelligence can replicate, and even amplify, human biases. These tools often function in black boxes — meaning that they’re proprietary and operated by the companies that sell them — which makes it difficult for us to know when or how they might be harming real people (or if they even work as intended). And new AI-based tools can also raise concerns about privacy and surveillance.

For now, these new guidelines are just that — guidelines — which means that today’s memo won’t have an immediate effect on the artificial intelligence tech you might encounter in your daily life. But the memo shows how the government is thinking about AI and its potential impact on Americans. “People should care that the White House is trying to bring a framework for assessing and justifying the deployment of AI tools, because what we’re finding as these tools develop and emerge is that there are some applications that have deeper consequences than others,” said Nicol Turner-Lee, a fellow at the Brookings Institution who researches technology and equity.

The Trump administration wants a national AI effort

Trump and his administration want the US to dominate the AI industry — and they definitely want the US to be better at AI than China. Early last year, President Donald Trump signed an executive order establishing the “American A.I. Initiative,” which is meant to jumpstart AI research and help build an AI-competent US workforce, among other goals (though he didn’t give the effort any new funding).

Outlining 10 primary principles, today’s memo to federal departments and agencies echoes the goals of that executive order. It urges regulators to be mindful of innovation and to “consider ways to reduce barriers to the development and adoption” of AI when weighing how existing laws and potential new rules apply to the emerging technology.

“Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth,” says the memo. “Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.” At the same time, the guidance also urges regulators to be conscientious of values like transparency, risk management, fairness, and nondiscrimination.

These are all fair points. By encouraging these federal departments and agencies to take action, the Trump administration also hopes to avoid a future in which American AI companies might face a patchwork of local and state regulation, or possibly overreaching federal regulation, that could impede the technology’s expansion.

AI experts told Recode that the AI guidelines are a starting point. “It will take time to assess how effective these principles are in practice, and we will be watching closely,” said Rashida Richardson, the director of policy research at the AI Now Institute. “Establishing boundaries for the federal government and the private sector around AI technology will offer greater insight to those of us working in the accountability space.”

Aaron Rieke, the managing director of the technology rights nonprofit Upturn, said in an email to Recode that, for now, he doesn’t think the memo will have much influence: “I do not think these principles will have much of an impact on the average person, especially in the short term. I think regulators will be able to justify their decisions, good or bad, without much effort.”

Importantly, the memo doesn’t actually apply to artificial intelligence that the US government itself uses (of which there’s plenty). For instance, a search of a US federal contracts database shows that the Centers for Disease Control has purchased facial recognition products (an AI-based technology), while the Department of Commerce appears to be using AI to improve its patent search system.

One of the reasons AI needs regulations: It comes with risks

AI systems are not inherently objective. Humans build these tools, and AI is often developed using flawed or biased data, which means the technology can inherit or even magnify human biases like sexism and racism. For instance, when in 2017 scientists taught a computer program to learn the English language by mining the internet, it ultimately became prejudiced against women and black people.

Critics say that risk means the government should aggressively regulate, and even ban, certain applications of artificial intelligence. And some AI tools, like facial recognition, that rely on collecting sensitive information, have also spurred concerns about how this tech could potentially create privacy and surveillance nightmares.

This all matters because AI already has the potential to have a real impact on your life, even if you haven’t realized it yet. Some landlords have floated requiring tenants to use facial recognition to enter their homes, even though the technology is known to be less accurate on people of color and women (and especially women with dark skin), among other groups. Another example: Though never used, a resume-screening algorithm produced by Amazon inadvertently discriminated against female applicants because it was trained on resumes the company had previously collected, which mostly came from men. Imagine losing out on your dream job because of a biased algorithm.

“AI systems have a potential to discriminate against the American public on the basis of race, sex, gender — every sort of criteria imaginable,” Albert Fox Cahn, an attorney who leads the Surveillance Technology Oversight Project at New York University, told Recode. “This could impact everything from whether you get a job offer, whether you get approved for an apartment or a mortgage, whether you get the good interest rate or the bad interest rate. It could impact college admissions and school placement.”

That’s left him disappointed with the new proposed guidelines. “Rather than provide a framework for regulators to actually address discrimination head-on, instead the White House is urging a hands-off approach which will allow AI to simply target historically marginalized communities without the interventions we need,” said Cahn. He said the memo’s references to values of nondiscrimination and transparency don’t have much force behind them.

“When you think of where most consumers are more AI-vulnerable, it’s in those areas like housing, health care, and employment — the areas that primarily make the front page of the newspaper,” said Turner-Lee. She said it’s not clear what the memo will mean for agencies like the Department of Labor and Consumer Financial Protection Bureau as compared to, say, the Department of Agriculture.

She added that it’s also not clear whether agencies are actually prepared to identify the risks AI tech poses, or whether they’re up to the job of ensuring their regulations keep pace with innovation. “There’s a lot more of the devil in the details that I’d like to see, but I think they’re just trying to give us a general framework for some kind of ethical and fair deployment.”

Now the White House wants feedback, including yours

The draft guidance isn’t set in stone. For the next several months, it will be subject to public feedback, including yours (we’ll update this piece with how to do that as soon as the information becomes available). Once the guidance is formally approved, the White House expects that agencies will report back on how they plan to meet its new AI expectations.


Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.
