I've been involved in a few online conversations over the last couple of weeks that came to mind as I was reading this article. There is a growing concern that the pace at which new technology (particularly AI) is being applied is accelerating, without any proper opportunity to consider the privacy and other implications of such innovation.

Separately, I am seeing an increasing desire from those in the technology world to improve dialogue with the humanities and social sciences, to ensure that those undertaking hard science are able to frame it within a real-world context. Equally, it is essential that those outside the scientific community can properly understand the opportunities, as well as the limitations, presented by new technology. Most frequently this topic arises in the context of a need to embed ethical values into decision-making or decision-supporting technologies like AI.

But the importance of having an ethical framework for tech innovation, and the need for invention not to outpace the constraints of its society, are not new. Nuclear science was one of the great scientific breakthroughs of the last hundred years or so. It was viewed with enthusiasm in its early years, with the possibilities of nuclear power capturing the imagination and promising a more optimistic future. But there are many who feel that the rate of innovation went too far, too fast and with too little constraint. Even some of those involved in the research regretted their participation after the construction and use of atomic bombs at the end of World War II.

If AI contains within it the same capability to disrupt and transform humanity's future, we need to learn the lessons of the past. Every member of society, from scientist to artist, voter to politician, big business to consumer, has a stake in the world that new technologies like AI can create. They all need to be part of the conversation about what such technology can and should do, and where the limits should be drawn.