Study Calls for AI Transparency in Migration Management to Protect Human Rights and Enhance Accountability
July 17, 2025
The study warns that overreliance on AI can perpetuate biases and errors, potentially undermining trust in migration decision-making processes.
A recent study by Professor Ana Beduschi of the University of Exeter underscores the importance of transparency in the use of AI for migration management, emphasizing that governments should openly disclose which AI systems they use and for what purposes, in ways that do not compromise national security or personal data.
The study highlights that compliance with international human rights law, including the rights to privacy and non-discrimination, is essential when deploying AI in migration processes.
Increased transparency is believed to foster public acceptance of AI in public services and enhance accountability, especially given that many countries have yet to disclose their AI use, creating accountability gaps.
Responsible AI implementation can streamline migration casework, allowing caseworkers to focus on critical areas, provided that potential risks are properly identified and mitigated.
Robust cybersecurity measures are needed to protect the sensitive data of vulnerable migrants whenever AI technologies are used.
Professor Beduschi developed a risk matrix framework to help countries identify, prioritize, and mitigate risks associated with AI in migration management.
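The study does not publish the framework as code. Purely as an illustration of how a likelihood-by-severity risk matrix can support the identify-prioritize-mitigate steps described above, here is a minimal sketch; all names, scales, and example entries are hypothetical assumptions, not Professor Beduschi's actual framework.

```python
# Illustrative sketch only: a generic likelihood x severity risk matrix,
# NOT the framework from the study. Names, scales, and entries are assumed.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # assumed scale: 1 (rare) to 5 (almost certain)
    severity: int    # assumed scale: 1 (minor) to 5 (critical)
    mitigation: str

    @property
    def score(self) -> int:
        # Common prioritisation heuristic: likelihood multiplied by severity.
        return self.likelihood * self.severity


def prioritise(risks: list[Risk]) -> list[Risk]:
    """Return risks ordered from highest to lowest priority score."""
    return sorted(risks, key=lambda r: r.score, reverse=True)


if __name__ == "__main__":
    # Hypothetical example entries for demonstration.
    risks = [
        Risk("Biased training data", 4, 5,
             "Audit datasets; require human review of decisions"),
        Risk("Data breach of migrant records", 2, 5,
             "Encryption, access controls, breach response plan"),
        Risk("Opaque automated decisions", 4, 4,
             "Publish system purpose; provide appeal routes"),
    ]
    for r in prioritise(risks):
        print(f"{r.score:>2}  {r.name}: {r.mitigation}")
```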
The research points out that the lack of disclosure by some countries about their AI use in migration management creates significant accountability issues.
Summary based on 3 sources
Sources

Phys.org • Jul 17, 2025
Increased transparency about how countries use AI to manage migration needed, experts urge
Mirage News • Jul 17, 2025
Research Calls for Transparent AI Use in Migration Policies