An interesting report from yesterday's Evening Standard, picking up on a new trial of facial recognition technology. I have written previously about some of the attempts to utilise that technology in a law enforcement setting, but this time the context is rather different.

Some might feel that applying cutting-edge technology to ensuring that punters at the bar are served in the order of their arrival is a solution in search of a problem. It is certainly an illustration of how all-pervasive this technology is becoming. But several of the issues that have beset (and ultimately led to the halting of) trials of facial recognition and other evolving technologies in the public sector are equally applicable to this trial. Those challenges will need to be understood and addressed before such technology is widely adopted by businesses, let alone accepted by the public at large.

1. People are much more comfortable with machine-aided decision making than with decisions that are machine-led. A problematic trial of AI as a tool to identify potential benefit fraudsters in several London boroughs resulted in a significant number of innocent people being wrongly accused of fraud. This, it seems, was at least partly a consequence of letters alleging fraud being sent on the basis that the AI had identified an increased likelihood of it taking place. In the same way, individuals who are over 25 but blessed with youthful features could be prejudiced by being subjected to ID checks because the computer "knows" that they are younger.

2. Racial bias. In a range of facial recognition trials around the world, the same issue has been encountered on numerous occasions. Partly as a result of the comparative availability of training data sets, and partly due to limitations in the image-capturing technology, there is a marked discrepancy in the rate of successful identifications between subjects of different ethnic origins. In particular, darker-skinned individuals are more likely to be misidentified. Although the consequences here may be milder (a delay in being served, versus deprivation of liberty), a solution which does not address that potential for bias is discriminatory and will cause genuine harm to those affected.

3. Inability to opt out. Although this report speaks to a particular concern of privacy campaigners (namely that data will be wiped at closing time and not transferred off-site), there are many who will feel uncomfortable being subjected to facial recognition at all. Biometric processing involves special categories of personal data, the lawful handling of which frequently depends on the explicit consent of the data subject. If the answer is that customers are deemed to consent to this processing simply by being present in the pub/bar, that is unlikely to be sufficient.

4. Accuracy. I have mentioned the specific issue of racial bias above, but even setting that aside, facial recognition technology tends not to be 100% accurate. Such inaccuracies will magnify with scale, and in circumstances where the technology is intended to run bar tabs without a credit card being provided, they create a real risk that businesses and their customers will be defrauded.
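The scale effect is easy to illustrate with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not data from the trial: even a seemingly small per-identification error rate produces a steady stream of errors once a venue is processing thousands of faces a month.

```python
# Back-of-the-envelope sketch of how small error rates magnify with scale.
# All figures are illustrative assumptions, not data from the trial.

error_rate = 0.01            # assume 1 in 100 identifications is wrong
customers_per_night = 500    # assumed footfall for a busy venue
nights_per_month = 30

identifications = customers_per_night * nights_per_month
expected_errors = identifications * error_rate

print(f"{identifications} identifications, ~{expected_errors:.0f} expected errors per month")
```

On those assumed figures, a 99%-accurate system would still be expected to misidentify around 150 customers a month at a single venue — and where each misidentification can attach a drink to the wrong tab, the cost falls on the business or the customer.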

None of this, of course, is to say that facial recognition doesn't have a place in a commercial setting. The ICO itself recognises that this technology can be positively deployed to improve customer experience and reduce workforce costs (a project currently being considered in the ICO's regulatory sandbox is using facial recognition and other technology at Heathrow in a bid to improve passenger flow). But for the technology to succeed in this space, a good deal more work will need to be done to address challenges like those identified above before widespread acceptance can follow.