OK, let's start by being realistic. New, cutting-edge technology simply does not work straight away. There is an evolutionary process of gradual, incremental improvement. Things, as they say, can only get better.

But from a privacy point of view, the current position of the Met Police's AI facial recognition trials is, even so, pretty parlous. An independent study by the University of Essex confirmed that 81% of the positive matches made by the systems during the trials it reviewed were inaccurate. In other words, even allowing for a pretty small sample size, 34 people were stopped and questioned on suspicion of being wanted criminals who were in fact innocent.

34 people. Compared to 8 who were correctly identified. Imagine walking down a street in London with family, colleagues or customers, and being stopped and questioned by the police because their AI has flagged you as a wanted criminal. Not an appealing prospect.

Small wonder, then, that some people would prefer to avoid the scan rather than run that risk. But at least one individual was stopped, questioned and fined for seeking to conceal their face in the pilot zone. The researchers picked this up as particularly problematic, because the lawful basis on which the police were relying for processing the facial ID data was informed consent. Even those not expert in the niceties of the law around consent in a data protection context will appreciate that consent given on pain of a fine for non-compliance is scarcely likely to be validly obtained.

This issue, among others, informed a general conclusion that it was "highly possible" that identification evidence obtained in this way would be successfully challenged in Court.

The news will add more weight to demands for a proper, measured and principled engagement with these new, privacy-invasive technologies. Recent draft guidance published by the European Commission's High-Level Expert Group on AI points to the need for AI to be not only legally compliant, but also robust and ethical. Only then, they say, will it start to be trustworthy.

When questions of liberty and criminal prosecution are involved, it shouldn't be too much to hope that we can trust the devices involved in making those determinations. But for now, trustworthy AI in the criminal justice arena seems a long way off.

(PS - the article quoted below says that the Met Police's methodology for its own 1 in 1,000 error rate is unknown. In fact, it seems that this rate is derived by reference to the total number of faces scanned, regardless of whether they were identified or not, which seems (at best) a dubious metric. By the same metric, if my maths is correct, the trials achieved an accuracy level of roughly 1 in 4,000.)
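(PPS - for anyone who wants to check that last bit of maths, here is a rough sketch of my working. The total number of faces scanned is not a published figure; it is simply inferred from the quoted 1 in 1,000 rate, so treat it as an assumption rather than a fact.)

```python
# Back-of-envelope reconstruction of the figures above. This is my own
# working, not the Met's published methodology (which remains unclear).

false_positives = 34   # innocent people flagged in the reviewed trials
true_positives = 8     # correct identifications

# Error rate as the Essex researchers measure it: wrong matches / all matches
match_error_rate = false_positives / (false_positives + true_positives)
print(f"Error rate per match: {match_error_rate:.0%}")  # ~81%

# The Met's claimed "1 in 1,000" rate appears to divide false positives by
# every face scanned. Working backwards from that rate gives roughly:
implied_faces_scanned = false_positives * 1000  # ~34,000 (an inference, not a published number)

# Applying the same denominator to the *correct* matches:
accuracy_by_same_metric = true_positives / implied_faces_scanned
print(f"Correct matches per face scanned: 1 in {1 / accuracy_by_same_metric:,.0f}")  # ~1 in 4,250
```

Which comes out at roughly 1 in 4,250 on those assumptions: close enough to the 1 in 4,000 figure above, and in any event a long way short of anything reassuring.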