In 2018, the European Commission appointed a High-Level Expert Group on Artificial Intelligence. One of the first outputs of this group of around 50 experts from industry, civil society and academia was a draft set of guidelines proposing seven key requirements that AI ought to fulfil in order to be trustworthy.
Trust, which is an increasingly precious commodity in many spheres, is essential when it comes to AI. The range of advancements that can be made through adopting AI-based solutions is growing at a dizzying rate, and there are real benefits to be achieved for businesses and society at large. But for all of the promise that it contains, AI is also beset with legitimate concerns about the potential for inherent bias, its use to support totalitarianism, and the scope for increasing existing divisions in society.
This healthcare-focused piece from Nature, well worth reading in full, illustrates very clearly the risks inherent in AI solutions within any field. But it also presents the outline of a possible solution, in its focus on the increasing drive among developers to build within a defined ethical framework from the ground up. This idea of fairness by design reflects one of the core tenets of the AI HLEG's draft guidelines: that, to be trustworthy, AI must comply with the law, fulfil ethical principles, and be robust.
Almost inevitably, it brings to mind parallels from my own field of data protection, where the concepts of data protection by design and by default have now been formally incorporated into law via the GDPR. Just as in that field, incorporating ethical considerations as an afterthought, or to burnish an existing solution, simply will not cut it. If the objective is to build a trustworthy solution, the best approach is to engage an expert multi-disciplinary advisory team from the outset. That team can help to define the necessary legal and ethical framework, identifying opportunities rather than imposing barriers, so that compliance is built into the solution as it evolves rather than bolted on at the end.
Such an approach is not easy. It involves identifying the right expertise for the project, and it means an investment of time and effort at the outset which sits uneasily with the ethos of moving fast and breaking things. But as developers working with personal data are starting to discover, compliance is rapidly becoming a commercially valuable differentiator, while non-compliance may become a barrier to entry. When you put it like that, the upfront investment is a no-brainer. Making money from innovation by doing the right thing? That's a future that everyone can get behind.
One way to ensure that AI tools don’t worsen health inequalities is to incorporate equity into the design of AI tools. The University of Chicago Medicine data team ... now partners closely with the university’s diversity, inclusion and equity department. This means addressing equity in AI is not an afterthought “but rather, a core of how we implement AI in our health system,” says John Fahrenbach, a data scientist with the university’s Center for Healthcare Delivery Science and Innovation. Fahrenbach worries that not enough attention is being paid to equity in the design of most machine-learning models. “There are so many machine-learning models in health care being developed, deployed and pitched, and I rarely hear them even mention these concerns. This really has to change and formalized regulation is likely the best way for this to happen,” he says.