In recent years we have seen Artificial Intelligence (AI) technologies become embedded in society and our daily lives.
AI technologies also have huge potential to support the work of law enforcement agencies. Areas where AI systems are already used successfully include automatic patrol systems, the identification of vulnerable and exploited children, and police emergency call centres.
At the same time, current AI systems have limitations and risks that require awareness and careful consideration by the law enforcement community, in order to avoid or sufficiently mitigate the issues that can arise from their use in police work.
With the recent leaps in AI capabilities, particularly in Generative AI, public debate about the legal and ethical implications of AI systems, and about the negative effects they could have on society and humanity, has intensified. It is important that these concerns are addressed in a timely fashion, particularly in a law enforcement context.
The Toolkit for Responsible AI Innovation in Law Enforcement (AI Toolkit), published in June 2023, will help law enforcement agencies address the most pressing challenges in the use of AI.
As a global police organization, INTERPOL embraces and encourages the use of AI in law enforcement work. At the same time, we are equally committed to doing so responsibly, in line with policing principles, human rights and ethical standards.
The AI Toolkit was developed by INTERPOL together with its longstanding partner, the United Nations Interregional Crime and Justice Research Institute (UNICRI), with financial support from the European Union. Based on needs expressed by member countries, it fills gaps in guidance on the development, procurement and use of responsible AI in law enforcement agencies.
The development process was highly inclusive, consultative, and iterative, specifically designed to ensure that the AI Toolkit is technically and practically sound, to promote close interdisciplinary coordination with industry, academia, criminal justice practitioners, civil society organizations and the public, and to foster transparency and broader acceptance of law enforcement's use of AI.
The AI Toolkit
In short, the AI Toolkit aims to offer support to law enforcement agencies to navigate the complex task of institutionalizing responsible AI. It consists of seven individual resources and a comprehensive user guide.
This AI Toolkit provides law enforcement agencies with a theoretical responsible AI foundation based on human rights law, ethics, and policing principles, as well as several practical tools to support them with putting responsible AI innovation theory into practice, at every stage of their AI journey.
The intended primary users of the AI Toolkit are personnel in law enforcement agencies, whether local, regional, or national. It is crucial to note, however, that many non-law enforcement stakeholders also play a key role in implementing responsible AI innovation in law enforcement. These include technology developers in the private sector or academia, civil society, the public, and other criminal justice actors such as the judiciary, prosecutors, and lawyers. For these secondary stakeholders, the AI Toolkit seeks to facilitate more informed discussions among all parties involved.
The AI Toolkit, together with the INTERPOL Responsible AI Lab (I-RAIL), sets a base framework for all of INTERPOL's future AI-related activities. The planned implementation phase will consist of awareness raising, training and selected cooperative AI-related projects with member countries, in close cooperation with other internal INTERPOL units.