Federal Agencies to Adopt New AI Technology Protocols Announced by VP Harris

Fairer use of AI at the government level under new rules announced by VP Kamala Harris
U.S. Vice President Kamala Harris speaks about AI at a press conference at the Safety Summit 2023 in London, England.

Travelers should be able to say no to face recognition scans at airport security checks by the end of the year without worrying that it might cause delays or put their plans in jeopardy.

That’s one of the specific rules the Biden administration says it will enforce across the entire US government to safeguard how AI is used. It is an important first step toward preventing the government from abusing AI. And given the government’s enormous purchasing power, the move could also indirectly regulate the AI industry.

On Thursday, Vice President Kamala Harris announced that US agencies will have to follow a new set of rules designed to prevent AI from being used in unfair ways. The rules are meant to cover everything from TSA screenings to decisions by other government agencies that affect people’s health care, jobs, and housing.

Starting December 1, agencies using AI tools will be required to verify that those tools do not endanger the rights and safety of Americans. In addition, each agency will be required to post online a full list of the AI systems it uses, along with a description of the risks associated with each.

The Office of Management and Budget’s (OMB) new policy directs every government agency to name a chief AI officer who will oversee how that agency uses AI.

Harris told reporters on a press call Wednesday that leaders from government, civil society, and the private sector have a moral, ethical, and social duty to make sure that AI is developed and used in a way that keeps people safe and lets everyone enjoy all of its benefits. She said that the policies are meant to be a model for governments around the world under the Biden administration.

The announcements made on Thursday come at a time when the federal government is quickly adopting AI tools. Currently, US government agencies are using machine learning to keep an eye on volcanoes around the world, follow wildfires, and count animals captured on camera by drones. A lot of other use cases are being planned. Last week, the Department of Homeland Security said it would be using AI more to train immigration officers, keep important equipment safe, and look into cases of drug abuse and child exploitation.

OMB Director Shalanda Young said that limits on how the US government uses AI can help improve public services. She also said that the government is launching a national talent surge to hire “at least” 100 AI professionals by this summer.

Young highlighted the agency reporting requirements, saying, “These new requirements will be supported by greater transparency.” She added, “AI comes with risks, but it also offers huge chances to make public services better and make progress on big problems like climate change, public health, and access to fair economic opportunities.”

Experts say the technology could help discover new disease treatments or make trains safer, but it could also be misused to target minorities or develop biological weapons. The Biden administration has moved quickly to address these risks.

Biden signed a sweeping executive order on AI last autumn. As part of the order, the Commerce Department was directed to help combat AI-generated deepfakes by creating guidelines for watermarking material made by AI. Earlier, the White House said that top AI companies had agreed to let outside safety testers evaluate their models.

The policies announced Thursday took the federal government years to deliver. In 2020, Congress passed a law directing OMB to publish its rules for agencies by the following year. But according to a recent report from the Government Accountability Office, OMB missed that 2021 deadline and did not release a draft of its policies until November 2023, two years late and shortly after the Biden executive order.

Even so, the new OMB policy is the Biden administration’s latest step to shape the AI industry. And because the government buys so much commercial technology, its AI policies are likely to have a significant impact on the private sector. On Thursday, US officials said OMB would take further steps to regulate government contracts that use AI, and the agency is now asking the public for input on how to do so.

There are limits to what the US government can accomplish through executive action, though. Policy experts have called on Congress to pass new laws setting baseline rules for the AI industry, but leaders in both chambers have taken a slower, more deliberate approach, and few expect results this year.

Also this month, the European Union passed the first law of its kind on artificial intelligence, putting the EU once again ahead of the US in regulating an important and disruptive technology.

Nathan Enzo
A professional writer since 2014 with a Bachelor of Arts in Journalism and Mass Communication, Nathan Enzo ran the creative writing department for major news channels until 2018. He then worked as a senior content writer for outlets including national newspapers, magazines, and online publications. He specializes in media studies and social communications.
