
In a quiet suburb of a big city, a couple walks their dog under the light of the street lamps. Nobody is around, there is no noise; it is a calm late evening in this neighborhood, and they barely notice the flashing blue lights approaching fast from the other end of the street.
The police car arrives rapidly, passes the couple, and stops a few dozen meters behind them. Before they realize what is happening, the patrol officers get out of the vehicle and stop a person, arresting them before placing them in the back of the car.
One of the police officers approaches the couple, explaining that their crime prevention system alerted them that a robbery was about to happen, and that the couple were about to become the suspect’s next victims.
Does this look like a science fiction scene?
Something like Tom Cruise’s 2002 movie “Minority Report”?
Or would you prefer the TV series “Person of Interest”, with Michael Emerson and Jim Caviezel?
From Fiction to Reality
Even if the previous scenario and these fictional creations appear improbable at first, we should think twice.
Indeed, in the middle of last summer, the UK government announced a new initiative to leverage artificial intelligence and data analytics for preventing crime before it occurs. This project, termed the “Concentrations of Crime Data Challenge,” aims to develop advanced mapping technology to enhance public safety across England and Wales.
This initiative calls for collaboration between businesses and universities to deliver a real-time, interactive crime map that can predict where knife crime will happen before it happens.
Through an investment of £500, the UK government expects a prototype by April 2026 and an operational solution by 2030, with a stated objective of halving knife crime within the next decade. It is a cornerstone of the UK government’s Safer Streets Mission.
How will this work?
This system will rely upon various data sources, including local councils, the police themselves, and social services. These data will encompass criminal records, previous incident locations, and the behavioral patterns of known offenders.
As introduced in the previous section, the primary goal is to establish a crime map: to identify where crime is most likely to concentrate, and to detect, track, and predict specific offenses such as knife crime and anti-social behavior before they escalate. Ideally, this should spare further victims of such crimes.
In fact, past projects have focused on two types of prediction: predicting the areas where crimes (e.g., burglaries) will happen, and preventing recidivists from acting again.
The first type relies on statistical data within delimited areas to produce geographic and temporal predictions, identifying hot spots where patrols should be deployed more regularly.
The second type relies on behavioral data about people with past police records and sentences in order to anticipate recidivism.
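To make the first type of prediction concrete, here is a minimal sketch of hotspot mapping in Python. This is purely illustrative and not the UK project’s actual method: past incident locations are binned into grid cells, and the cells with the most incidents are flagged as hot spots for extra patrols. The coordinates, cell size, and data are all hypothetical.

```python
from collections import Counter

CELL = 100  # grid cell size in metres; value chosen for illustration

def hotspots(incidents, top_k=3):
    """Bin incident coordinates into grid cells and return the
    top_k cells with the highest incident counts."""
    counts = Counter((x // CELL, y // CELL) for x, y in incidents)
    return counts.most_common(top_k)

# Hypothetical incident coordinates in metres: four incidents cluster
# in one cell, a fifth sits elsewhere.
incidents = [(120, 450), (130, 460), (110, 440), (900, 900), (125, 455)]
print(hotspots(incidents, top_k=1))  # → [((1, 4), 4)]
```

Real systems add a temporal dimension (time of day, day of week) and smoothing across neighboring cells, but the core idea remains ranking areas by historical incident density.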
Well, not so fast!
On paper, such a solution looks promising. However, because such solutions ingest sensitive information, they must be subject to specific scrutiny before implementation.
As with any other AI system, the two main factors are the historical data and the resulting trained model; the quality of the latter depends on the quality of the former.
When it comes to policing data, a closer look must be taken to ensure that it does not contain biased information that could lead to false positives and to discriminatory, unfair outcomes.
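One simple way to surface such bias, sketched below with entirely made-up data, is to compare a system’s false-positive rate across areas or groups: if alerts in one neighborhood are wrong far more often than in another, the underlying data or model is likely skewed.

```python
def false_positive_rate(records):
    """records: list of (flagged, offense_occurred) booleans.
    Returns the share of non-offenses that were wrongly flagged."""
    fp = sum(1 for flagged, offense in records if flagged and not offense)
    negatives = sum(1 for _, offense in records if not offense)
    return fp / negatives if negatives else 0.0

# Hypothetical alert outcomes per area: (system flagged?, crime occurred?)
area_a = [(True, False), (True, True), (False, False), (False, False)]
area_b = [(True, False), (True, False), (True, False), (False, False)]

print(false_positive_rate(area_a))  # 1 wrong alert out of 3 non-offenses
print(false_positive_rate(area_b))  # 3 wrong alerts out of 4 non-offenses
```

A persistent gap between the two rates, like the one in this toy example, is exactly the kind of disparate impact that led to the dismantling of tools discussed later in this article.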
These models must also be fully transparent about how they produce a result: rather than behaving like black boxes, they must be able to explain their decisions.
Another concern and challenge is balancing security needs with the protection of individual privacy rights; think about facial recognition, for example.
Consequently, the EU AI Act’s risk-based model places AI-assisted law enforcement under either “High Risk” or “Unacceptable Risk”. At the latter level, using AI is prohibited (e.g., social scoring, facial recognition), which would ban AI-based recidivism prevention systems.
“High Risk” systems, for their part, face strict mandatory requirements and must undergo conformity assessments.
Proven Results
Even though such technology seems to chart a path to more secure streets, and to a more secure life in general, confronting it with the facts brings new perspectives.
Outside the UK, several AI-assisted policing initiatives have failed to deliver significant results or, when positive results did occur, no link to the AI system could be established.
In the US, at least three systems can be mentioned where the results were lower than expected or, worse, unfair. In Los Angeles (CA) and Chicago (IL), the policing tools were dismantled after accusations of racial bias, to say nothing of their limited results.
In another instance, in New Jersey, police officers who knew the city, its areas, and its habits were better at anticipating incivilities and burglaries than the AI prediction tool, to the point that they stopped using it. Moreover, its predictions were less than 1% accurate.
In Switzerland, several cantons deployed a solution to predict burglaries. While the number of burglaries decreased by a third in three years, it was never proven that the software was at the root of the decrease. Moreover, cantons not using the tool observed a similar, if not larger, decrease in burglaries.
A side effect was also highlighted in the canton of Aargau: the decreasing number of burglaries no longer provided enough data to feed the AI system, reducing its efficacy.
Towards a Promising Future
Despite the many failures and struggles of AI-assisted policing, one should not discard the idea of a solution supporting law enforcement organizations.
These past projects illustrate the importance of data quality, in terms of accuracy and fairness, as well as the transparency these systems must adhere to.
And rather than arresting someone before a crime happens, simply increasing patrol presence in the areas where predictions show an event is likely would already be an important step, one that prevents crime while carrying far fewer risks of individual and societal harm.
But the time of “Minority Report” may not come tomorrow.
One last question remains: from a justice point of view, how solid would the prosecution of someone arrested before the crime happens be? How guilty is a person if the crime has never been committed?
This will probably have to be answered by lawyers before reliable AI crime-prediction systems are deployed at a large scale.