An insight from Dr Seán Costello – Director, Innopharma Technology
As the old adage goes, "With great power comes great responsibility." What follows is an account by our own Dr Seán Costello, who recently attended an event examining ethical and responsible business practices in the use of Artificial Intelligence.
I had a really interesting start to the week, attending as a panel speaker at a symposium hosted by Intel and Article One in association with the School of Law at Trinity College Dublin. The symposium was a day of debate and discussion on responsible business conduct and innovation with respect to the ethical use of Artificial Intelligence.
In an age when we are experiencing a recession in human rights and privacy, and increasing authoritarian efforts to exert control, the context and gravity of the issues discussed at the event, concerning human rights, privacy, and personal and societal freedoms, are important to consider. That gravity was established early by keynote speaker Eamon Gilmore, the EU Special Representative for Human Rights, who expressed the commitment of the EU to do its part to ensure the responsible use of such technology.
The intention of the symposium was to delve into issues around the responsible development and deployment of Artificial Intelligence (AI). Discussion covered the emergence of new international standards and regulations on responsible business conduct, framed in a way that respects the expectations of all stakeholders while ensuring that the human rights framework is reflected in efforts to define best practices, standards and regulations for the ethical deployment of AI.
These were weighty topics, but ever more relevant as AI pervades all aspects of our lives. I was part of a panel alongside Lama Nachman of Intel and Matt Moran of BioPharmaChem Ireland, chaired by Erik O’Donovan, Head of Digital Economy Policy at IBEC, in which we shared our experience of the practical issues surrounding the responsible deployment of AI in manufacturing.
During the day's debate I couldn't help but draw parallels between the challenges facing policymakers and other stakeholders in setting standards and regulations for the responsible and ethical use of AI today, and the raison d'être for the emergence of regulators such as the FDA and the EMA in the manufacture of medicines. The birth of the FDA, and more recently the EMA, was driven by historical ethical challenges such as the mislabelling, misbranding, adulteration and contamination (both deliberate and unintended) of medicinal products by corporations and individuals. These failures demanded measures for the protection of the public, and they became the drivers of the culture of responsible governance that we have today in pharmaceutical, food and medical device manufacturing.
Surely lessons can be learned from this analogy that are equally applicable today to the responsible use of AI.
Listening to this discussion on the campus that hosts the oldest school of medicine in Ireland, I could not help but link much of what was being said to the maxim 'primum non nocere' (first, do no harm), attributed to the ancient Greek physician Hippocrates, whose oath contains the principal precepts of bioethics that all students in healthcare abide by even today.
Perhaps a similar oath or code of conduct is a starting point for us all in the ethical use of AI.
Well done to Article One, Intel and the School of Law at TCD for a superb event, and to Paula Williams, Global Programme Owner for Human Rights at Intel, for the invitation to participate.