Release Date: October 30, 2023
On October 30, 2023, President Biden issued a landmark Executive Order to promote the safe, secure, and trustworthy development and use of artificial intelligence (AI). The Biden-Harris Administration places the highest urgency on governing the development and use of AI safely and responsibly and is advancing a coordinated, federal government-wide approach to doing so. The Department of Homeland Security (DHS) will play a critical part in ensuring that AI use is safe and secure nationwide. DHS will use AI responsibly in its own operations, advancing equity while appropriately safeguarding privacy, civil rights, and civil liberties.
The direction provided in the EO is consistent with DHS's innovative work to ensure the safe, secure, and responsible use and development of AI. DHS will manage AI in critical infrastructure and cyberspace, promote the adoption of AI safety standards globally, reduce the risk that AI is used to create weapons of mass destruction (WMD), combat AI-related intellectual property theft, and help the United States attract and retain skilled talent. The EO builds on DHS's work deploying AI responsibly to advance its missions for the benefit of the American people.
To learn more about DHS’s work in AI, visit www.dhs.gov/ai.
Managing AI in Critical Infrastructure and Cyberspace
Advances in AI will revolutionize the operation of critical infrastructure and, ultimately, the delivery of services upon which Americans rely daily. But these advances also present novel risks. To protect U.S. networks and critical infrastructure, the President has directed DHS to take several steps to help govern the safe and responsible development and use of AI.
First, the President has directed Secretary of Homeland Security Alejandro N. Mayorkas to establish and chair an AI Safety and Security Advisory Board (AISSB) to support the responsible development of AI. The Board will bring together preeminent industry experts from AI hardware and software companies, leading research labs, critical infrastructure entities, and the U.S. government. The AISSB will issue recommendations and best practices for an array of AI use cases to ensure AI deployments are secure and resilient.
Second, DHS will work with stakeholders inside and outside of government to develop AI safety and security guidance for use by critical infrastructure owners and operators. The Cybersecurity and Infrastructure Security Agency (CISA) is assessing potential risks related to the use of AI in critical infrastructure sectors, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyberattacks. DHS will also take a global, harmonized approach by working with international partners on these guidelines.
Finally, DHS will capitalize on AI’s potential to improve U.S. cyber defense. CISA is actively leveraging AI and machine learning (ML) tools for threat detection, prevention, and vulnerability assessments. Furthermore, DHS will conduct an operational test to evaluate AI-enabled vulnerability discovery and remediation techniques for federal civilian government systems.
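To make the threat-detection idea above concrete, the following is a minimal, illustrative sketch of anomaly-based flagging over network flow features. It is not CISA's tooling: the feature set, the contamination rate, and the data are all hypothetical assumptions chosen to keep the example self-contained, and a real deployment would involve far richer telemetry and analyst review.

```python
# Illustrative only: a generic anomaly-detection sketch, NOT CISA tooling.
# Hypothetical features summarizing network flow records are scored with an
# Isolation Forest; flows that score as outliers are flagged for review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical baseline traffic: [bytes_sent, packets, duration_s, distinct_ports]
baseline_flows = rng.normal(
    loc=[50_000, 400, 30, 3],
    scale=[10_000, 80, 8, 1],
    size=(5_000, 4),
)

# Fit on traffic assumed to be mostly benign.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline_flows)

# Score new flows; a -1 label marks a candidate for analyst review.
new_flows = np.array([
    [52_000, 410, 28, 3],      # resembles the baseline
    [900_000, 9_000, 2, 150],  # unusually large, fast, touches many ports
])
labels = model.predict(new_flows)
for flow, label in zip(new_flows, labels):
    status = "flag for review" if label == -1 else "normal"
    print(flow, "->", status)
```

In a sketch like this, the model only surfaces candidates; deciding whether a flagged flow is actually malicious remains a human analyst's call.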
Reducing Risks at the Intersection of AI and Chemical, Biological, Radiological, and Nuclear Threats
The advent of AI may make it easier for malicious actors to develop WMD. Of particular concern is the risk that AI could enable the misuse of synthetic nucleic acids to create biological weapons. To mitigate these risks, DHS will work with the White House Office of Science and Technology Policy and other relevant U.S. government agencies to evaluate the potential for AI to lower the barriers to entry for developing WMD. DHS will also develop a framework to evaluate and stress test synthetic nucleic acid screening, creating a standardized set of expectations for third parties that audit AI systems for misuse and helping to prevent abuse and proliferation by malicious actors.
Combatting AI-related Intellectual Property Theft
Protecting AI intellectual property (IP) is critical to U.S. global competitiveness. IP theft threatens U.S. businesses, impacts American jobs, and negatively affects our national security. To address this challenge, DHS, through the National Intellectual Property Rights Coordination Center, will create a program to help AI developers mitigate AI-related IP risks, leveraging Homeland Security Investigations (HSI), law enforcement, and industry partnerships. DHS will also contribute to the Intellectual Property Enforcement Coordinator Joint Strategic Plan on Intellectual Property Enforcement.
Attracting and Retaining Talent in AI and other Critical Emerging Technologies
Cultivating talent in AI and other emerging technologies is critical to U.S. global competitiveness. To ensure that the United States can attract and retain this top talent, DHS will streamline the processing of petitions and applications for noncitizens who seek to travel to the United States to work on, study, or conduct research in AI or other critical and emerging technologies. DHS will also clarify and modernize immigration pathways for such experts, including those for O-1A and EB-1 noncitizens of extraordinary ability; EB-2 advanced-degree holders and noncitizens of exceptional ability; and startup founders using the International Entrepreneur Rule.
DHS has already advanced policy consistent with direction in the EO:
On October 20, 2023, U.S. Citizenship and Immigration Services (USCIS) published a Notice of Proposed Rulemaking to modernize the H-1B specialty occupation worker program and enhance its integrity and usage; USCIS continues to work on rulemaking to enhance the process for noncitizens, including experts in AI and other critical and emerging technologies and their spouses, dependents, and children, to adjust their status to lawful permanent resident.
On September 12, 2023, USCIS clarified guidance on evidence for EB-1 individuals of extraordinary ability or outstanding professors or researchers.
DHS Leads in the Responsible Use of AI
AI is already delivering significant value across DHS, and it will only become more significant to every part of our operations in the years to come.
Concrete examples of where DHS is already seeing benefits from AI include the following:
Fentanyl Interdiction: U.S. Customs and Border Protection (CBP) uses an ML model to identify potentially suspicious patterns in vehicle-crossing history. Recently, CBP used the model to flag a car for secondary review, which yielded the discovery of over 75 kg of drugs hidden in the automobile.
Combatting Online Child Sex Abuse: Recently, HSI Operation Renewed Hope identified 311 previously unknown victims of sexual exploitation thanks in part to an ML model that enhanced older images to provide investigators with new leads.
Assessing Disaster Damage: The Federal Emergency Management Agency (FEMA) uses AI to assess damage to homes, buildings, and other property after a disaster more efficiently. Using ML, data from past incidents, and pre-disaster imagery, FEMA can classify different levels of damage. During disasters, FEMA uses the output from the ML model to significantly reduce the number of impacted structures that must be reviewed in person for damage. This allows FEMA's analysts to process images in days, as opposed to weeks, and gets disaster assistance to survivors that much faster; an illustrative sketch of this kind of imagery-based classification workflow appears below.
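The FEMA example describes classifying levels of structural damage from pre- and post-disaster imagery. The sketch below is illustrative only and is not FEMA's model: the tiny network architecture, the four damage classes, the 6-channel pre/post tile format, and the random input data are all assumptions chosen to keep the example self-contained.

```python
# Illustrative only: a toy damage-classification sketch, NOT FEMA's model.
# Assumes paired pre- and post-disaster image tiles stacked into 6 channels
# and four hypothetical damage classes.
import torch
import torch.nn as nn

DAMAGE_CLASSES = ["none", "minor", "major", "destroyed"]

class DamageClassifier(nn.Module):
    def __init__(self, num_classes: int = len(DAMAGE_CLASSES)):
        super().__init__()
        # Small convolutional feature extractor over the stacked tiles.
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Linear head maps pooled features to a damage-level score per class.
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# A batch of 8 hypothetical 64x64 tiles (3 pre-disaster + 3 post-disaster channels each).
tiles = torch.randn(8, 6, 64, 64)
model = DamageClassifier()
predictions = model(tiles).argmax(dim=1)
print([DAMAGE_CLASSES[i] for i in predictions.tolist()])
```

In practice, a model along these lines would be trained on labeled tiles from past incidents, and its predictions used to triage which impacted structures still need an in-person review rather than to replace that review.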
While these examples focus on border security, investigations, and disaster response, every DHS Agency and Office is working to responsibly integrate AI, harnessing its potential to further improve DHS operations for the benefit of the American people.
Protecting Civil Rights, Civil Liberties, and Privacy
DHS maintains a clear set of principles and robust governance that prioritizes the protection of civil rights, civil liberties, and privacy. The Department's approach is the foundation for its work to ensure AI is used responsibly across DHS's unique missions. DHS policy outlines the Department's commitment to lean forward in deploying AI tools to enhance operations and lead the government in the responsible and ethical use of AI, ensuring the acquisition and use of AI in a manner that is consistent with the U.S. Constitution and all other applicable laws and policies. Among other commitments, DHS will not collect, use, or disseminate data used in AI activities or establish AI-enabled systems that make, or support, decisions based on the inappropriate consideration of race, ethnicity, gender, religion, sexual orientation, gender identity, age, medical condition, or disability.
The Department’s governance and oversight for the responsible use of AI is a closely coordinated, highly collaborative effort that unites operational and business-process stakeholders from across the Department around the common goal of ensuring responsible use. In April 2023, Secretary Mayorkas established the Department’s first Artificial Intelligence Task Force to drive specific applications of AI to advance critical homeland security missions.
The DHS AI Task Force includes a Responsible Use Group, led by the Officer for Civil Rights and Civil Liberties, which is developing tailored approaches to provide guidance, risk assessment, mitigation strategies, and oversight for the protection of individual rights in projects championed by the DHS AI Task Force. An AI Policy Working Group coordinates work to effect Departmental policy change and provide oversight of all DHS AI activities through collaboration among the Office of the Chief Information Officer, the Science and Technology Directorate, the Office of the Chief Procurement Officer, the Office for Civil Rights and Civil Liberties, the Privacy Office, and the Office of Strategy, Policy, and Plans.
As DHS deploys AI responsibly, it will work under the President’s direction to harness the opportunities and reduce the potential harms of this revolutionary technology.
###