One of the biggest pieces of AI policy news this year is the publication of the Department of Homeland Security's (DHS) AI playbook. The playbook outlines best practices and lessons DHS learned through a series of generative AI (GAI) pilot programs.
Let's explore what the AI playbook covers, why it's raising concerns, and how an AI-friendly approach to immigration case processing could affect your practice.
The DHS AI playbook, published in January 2025, signals the agency's willingness to use artificial intelligence to improve immigration case handling. The playbook also communicates a broader goal: establishing a framework government officials can use to implement GAI technology safely and effectively.
The framework highlights seven steps for the safe, productive implementation of GAI technologies. They include identifying lower-risk AI programs that align with the organization's priorities, building on existing agency tools and infrastructure, and developing responsible use guidelines.
Deploying artificial intelligence in government programs comes with risks. GAI tools can enable measurable efficiency gains, but those benefits must not come at the expense of privacy, civil rights, and civil liberties. The playbook guidelines emphasize the safe and ethical use of AI technology.
This theme—responsible technology adoption—is common across AI policy news. The DHS framework addresses safety concerns in part by limiting the extent of AI usage. The technology should be used to enhance human decision-making. AI is not a replacement for officers or a tool for automating mission-critical processes.
At least three DHS AI pilot projects contributed to the recommendations described in the playbook. The DHS AI task force, established in 2023, handpicked these programs: a USCIS pilot using GAI to create interactive training for asylum and refugee officers, a Homeland Security Investigations pilot that summarizes investigative reports, and a FEMA pilot that helps communities draft hazard mitigation plans.
The DHS AI task force chose these pilots because their findings could apply to other agencies. Each program was structured to enhance human work with minimal disruption and risk.
From the DHS playbook, we can extrapolate three goals for using immigration AI: efficiency, training, and ethics.
DHS's proactive approach to adopting AI technology has raised concerns. Advocacy groups fear that using AI in the adjudication of immigration cases will create ethical and oversight challenges. It may also infringe on civil liberties.
Bias is a known problem within AI systems. Human prejudice can become embedded in GAI applications through low-quality training data, biased human feedback during model training, and flawed algorithms.
In 2024, more than 140 advocacy groups asked DHS to suspend select immigration AI pilots. The argument focused on the challenges of monitoring AI outcomes and combatting AI bias. Ensuring fair, unbiased decision-making is difficult without full transparency into the algorithms that power tools like the ICE Hurricane Score and Risk Classification Assessment (RCA).
Hurricane Score predicts the likelihood that a noncitizen released from detention will comply with required ICE check-ins. RCA estimates a detainee's flight risk and public safety risk. Risk assessments are used to recommend detention decisions. Note that DHS does not consider RCA to be AI since it automates analysis previously done manually.
DHS says only humans make decisions about detention, deportation, and eligibility. The AI tools play a supporting role only. However, some argue that leaving the final decisions to humans may not be enough.
Bias and discrimination built into AI technology can be difficult to detect. And, because these tools are designed to be used at scale, even subtle discrimination patterns can have far-reaching effects on civil liberties.
Former President Biden signed an executive order in 2023 that placed guardrails on the federal government's use of artificial intelligence to ensure fairness, safety, and security. That order led to the appointment of the Justice Department's first Chief AI Officer in 2024.
Advocacy groups have argued that DHS AI tools violated Biden's executive order. However, this argument is no longer relevant since President Trump revoked the order earlier this year.
The DHS guidelines emphasize deploying new AI technologies for mission-enhancing—not mission-critical—processes. One high-level goal is to improve and expedite human work without replacing it. Another is to adopt AI without risking privacy, security, or civil rights and liberties.
As DHS continues pursuing these goals, immigration lawyers should expect AI to play a larger role in case processing, bringing expedited timelines, standardized decision-making, greater emphasis on complex cases, and evolving security measures:
GAI tools can expedite the analysis and categorization of case materials. As a result, straightforward immigration applications should be processed more efficiently.
As with DHS's RCA automation, new technology can standardize processes previously completed manually—this should reduce inconsistencies and ensure fairer outcomes across different service centers.
Offloading routine tasks to AI allows case officers to dedicate more time and energy to complicated situations. Ultimately, this could improve the quality of human judgment in nuanced immigration scenarios.
AI data privacy concerns relate to the collection, use, and handling of personal information. This is generally an easier problem to address than AI bias: developers can build privacy safeguards directly into AI applications, and those safeguards should become more sophisticated and effective over time.
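To make that idea concrete, here is a minimal, purely illustrative sketch of one common safeguard: redacting identifiers from text before it is ever sent to an external AI service. The patterns and placeholder labels are hypothetical examples, not a description of how DHS or any particular vendor handles privacy.

```python
import re

# Purely illustrative: redact common identifiers before text is sent to an
# external AI service. The patterns and labels below are hypothetical examples,
# not a complete or production-grade privacy control.
REDACTION_PATTERNS = {
    "A-Number": re.compile(r"\bA[- ]?\d{8,9}\b"),    # alien registration number
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # social security number
    "Email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    note = "Client A-123456789 (SSN 123-45-6789) emailed from maria@example.com."
    print(redact_pii(note))
    # -> Client [A-Number REDACTED] (SSN [SSN REDACTED]) emailed from [Email REDACTED].
```

The design choice is simple: the AI model only ever sees placeholders, so even a logged or retained prompt exposes no personal data.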
Immigration lawyers can follow the DHS's lead in adopting AI to improve efficiency and consistency while protecting their clients' privacy and data. As AI tools get more sophisticated, lawyers may have to use them to remain competitive.
There are two logical AI adoption starting points for immigration lawyers. One is using AI to streamline writing. The other is AI-enabled data capture for faster intakes.
GAI applications are known for their language skills. AI writing assistants for immigration lawyers can proofread, edit, rewrite, and simplify documents, notes, meeting invites, and more. These tools can also help break through language barriers with clear, accurate English-to-Spanish translations written in plain language.
Importantly, these AI writing features can be integrated into the practice's primary case management system. Eliminating the need to cut and paste across applications maximizes efficiency and minimizes errors.
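For readers curious about the mechanics, the sketch below shows roughly how a plain-language English-to-Spanish translation request to a general-purpose LLM API might look. It is an illustration only, not Docketwise's implementation; the model name, prompt, and helper function are assumptions.

```python
# Illustrative only: a translation request to a general-purpose LLM API.
# The model name and prompt are assumptions; a product would wrap this
# inside its own case management workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_to_spanish(text: str) -> str:
    """Ask the model for a clear, plain-language Spanish translation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever your vendor provides
        messages=[
            {"role": "system",
             "content": "Translate the user's text into clear, simple Spanish "
                        "suitable for client communication. Preserve names and dates."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(translate_to_spanish("Your biometrics appointment is scheduled for May 12."))
```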
Data capture is another area of AI opportunity for immigration lawyers. Automating the collection of names, birthdates, and document numbers improves both efficiency and data accuracy. Image-to-text AI tools let immigration lawyers upload physical documents, such as passports or green cards, and have that information populated across all necessary client documents. This significantly shortens the intake process, and faster intakes with fewer errors mean happier clients, less busywork, and greater case capacity.
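As a rough illustration of how image-to-text data capture works under the hood, the sketch below runs OCR on a scanned document and pulls a couple of fields with simple patterns. Production tools rely on far more robust document models; the file name and field patterns here are assumptions for demonstration only.

```python
# Illustrative sketch of AI-assisted data capture: OCR a scanned document and
# pull a few fields with simple patterns. Real products use far more robust
# document models; the field patterns below are hypothetical examples.
import re
from PIL import Image
import pytesseract

def extract_intake_fields(image_path: str) -> dict:
    """OCR an uploaded document and capture basic intake fields."""
    text = pytesseract.image_to_string(Image.open(image_path))
    fields = {
        # Machine-readable passport numbers are typically 9 characters.
        "passport_number": re.search(r"\b[A-Z0-9]{9}\b", text),
        # Dates printed as DD MMM YYYY, e.g. "04 JUL 1990".
        "date_of_birth": re.search(r"\b\d{2} [A-Z]{3} \d{4}\b", text),
    }
    return {key: (m.group(0) if m else None) for key, m in fields.items()}

# Usage (assumed file name): populate a client record from a scanned passport.
print(extract_intake_fields("passport_scan.png"))
```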
AI is changing how immigration cases are handled by DHS and ICE. Processing times on straightforward cases will shorten as service centers automate routine tasks and standardize decision-making. Complex cases should receive more attention as resources shift to focus on higher-level issues.
In this changing world, immigration lawyers must evolve to maintain high service levels. Efficiency will be a key theme in that evolution.
Docketwise, the top-ranked all-in-one immigration software, supports your practice through this technology transition. You can use Docketwise to complete immigration forms quickly and accurately, track and convert client prospects, communicate privately with clients, and track case status in real time.
Docketwise is also currently developing two AI-powered features that can set your firm apart from the competition: Docketwise Writing Assistant and Docketwise IQ Data Capture. Both are purpose-built for lawyers.
Learn how Docketwise can create cutting-edge efficiencies in your practice by scheduling a demo today.