
DHS Will Use AI to Investigate Sex Traffickers, Drug Dealers

New AI Road Map to Focus on Investigation, Immigration Services, Disaster Relief

The U.S. Department of Homeland Security plans to embed artificial intelligence in its operations and use large language models to comb through massive amounts of data to track down child sex traffickers and drug smugglers. While pledging to use AI responsibly, DHS said it hopes to move quickly and plans to target other areas such as immigration services and disaster management.


In DHS' new AI road map, the agency said its approach to protecting U.S. cyber networks and critical infrastructure with AI will be "rigorously tested" to avoid bias, while ensuring the "safe and responsible development" of emerging technologies such as generative AI.

Several AI pilot projects will "guide our efforts this year to strengthen our national security, improve our operations, and provide more efficient services to the American people, while upholding our commitment to protect civil rights, civil liberties and privacy," said Secretary of Homeland Security Alejandro N. Mayorkas in Monday's announcement, which he made jointly with Eric Hysen, DHS chief information officer and chief artificial intelligence officer.

The announcement comes five months after President Joe Biden's AI executive order spurred federal agencies to implement AI in their operations. The order directed DHS to promote the adoption of AI safety standards globally, protect U.S. networks and critical infrastructure, reduce the risks that AI can be used to create weapons of mass destruction, combat AI-related intellectual property theft and retain skilled talent (see: Biden's AI Executive Order, 90 Days On).

The department plans to partner and share information with stakeholders including the private sector, academia and international entities to help accelerate the development and deployment of AI solutions. Among the first pilot projects is a plan by Homeland Security Investigations to use a large language model system to make its child sexual exploitation investigation processes more efficient. DHS said AI can help identify perpetrators and victims at a time when sex crimes reportedly occur every nine minutes in the U.S.

The LLM will use open-source technologies to speed up the summarization of investigative reports and search for contextually relevant information. The unit will also use the technology to detect fentanyl-related networks and map patterns and crime trends. Fentanyl, a drug that is 100 times more potent than morphine and caused more than 112,000 overdose deaths by 2023, has become a major priority for law enforcement agencies across the country.

Another pilot project will enable U.S. Citizenship and Immigration Services to use generative AI to train immigration officers with an interactive application that will personalize training to each candidate's needs and job requirements, ensuring knowledge retention and limiting the need for retraining over time.

The Federal Emergency Management Agency will use AI to streamline the disaster relief grant application process and ease hazard mitigation planning for local governments, focusing on customized plans that identify risks and mitigation strategies.

Each of the three pilot teams will partner with privacy, cybersecurity, civil rights and civil liberties experts throughout the development and evaluation process, the announcement says.

Lessons learned will help the agency understand ways to "effectively and responsibly" use the technology, DHS said.

DHS has previously put out calls to hire 50 AI experts to build teams that will help it use the technology responsibly, set up a task force under Hysen to identify areas where AI can be effective, and solicited synthetic data generators that can produce fake data for real-world scenarios, such as identifying cybersecurity threats, to boost the accuracy of machine learning models or to test systems.


About the Author

Rashmi Ramesh


Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.

