Global Alliance: In all the hype around Artificial Intelligence systems, communicators, as guardians of organisational reputation, need to keep a laser focus on the implications for publics.

The issues 

The potential for long-term reputational damage is high when implementing AI systems in governmental services such as health, policing, social care, mental health, housing and benefits – all services critical to citizens.

An AI system implemented without sufficient testing to ensure it performs correctly and without bias could have catastrophic consequences for some of the most vulnerable in society. It could also lead to a major breakdown in trust between citizens and government, and ultimately to civic unrest.

Minority Report might be sci-fi, but an AI which could predict who might commit a crime is not. Consider the biases we know are built into our policing currently – how confident would you feel, especially if you came from an ethnic minority?

What about an AI health system which screened for an illness? Surely a good thing. But what if it was trained on data with a significant bias, so it only spotted the illness indicators in those from higher socio-economic groups?

How about a health and social care system which predicts who will need to go into a care home and when? It could be a positive response to ageing populations, but what if it wasn't voluntary? How would you feel about being forced out of your home and having your liberty curtailed?

Bias

We need to consider biases in AI systems at a variety of levels:

  • Who codes the AI system? We know the coding industry suffers from a diversity deficit. If we want systems that work in an inclusive way, improving the diversity of backgrounds and experiences of those doing the coding is essential.
  • Who collects the data we train the AI system on? We know that governmental institutions which collect large datasets can suffer from institutional bias. What oversight do we put in place before we use the data to train the model? How can we clean data to remove bias before it is used?
  • How do we test the model and monitor for bias? We need to pilot models and correct any biases rather than rush to implement them en masse (see the illustrative sketch after this list).
  • How do we ensure AI systems are culturally specific and appropriate? Using US modelling and data in an AI system and then implementing it in a European country would be likely to fail. Equally, a Western-designed system may be inappropriate for Asia, and vice versa.
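
To make the testing point a little more concrete, here is a minimal illustrative sketch in Python – not drawn from the article – of one common check used during a pilot: comparing a model's positive-decision rates across demographic groups. The data, group labels and threshold below are entirely hypothetical and simply stand in for whatever a real pilot would measure.

```python
# Minimal illustrative sketch (hypothetical data): comparing an AI model's
# positive-decision rates across demographic groups to flag potential bias.
from collections import defaultdict

# Each record: (group label, model decision), where 1 = "flagged"/"approved".
decisions = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

# Positive-decision rate for each group.
rates = {group: positives[group] / totals[group] for group in totals}
print("Positive-decision rate by group:", rates)

# A common rule of thumb (the "four-fifths rule") treats a ratio below 0.8
# between the lowest and highest group rates as a warning sign worth review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparity ratio: {ratio:.2f}",
      "- review needed" if ratio < 0.8 else "- within threshold")
```

A check like this does not prove a system is fair, but run regularly during a pilot it gives oversight and ethics boards a simple, auditable signal that a deeper review is needed before wider rollout.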

What to do

Firstly, I would advocate that, during the development phases, lay members of the public who will be impacted by the systems are engaged to allow transparency and challenge.

Second, ethics boards should scrutinise the development of models. These could work in a similar way to those in UK health which oversee research.

Third, oversight boards for the implementation of models, where complaints can be heard, need to exist. They need to be open and transparent, staffed by people with enough technical knowledge who are willing to challenge. We've just seen with the Post Office Horizon scandal what happens when there is no challenge to the computer system. Scale that up to a critical AI system making thousands of decisions about citizens.

Fourth, I would propose that PR practitioners be involved from the outset. We need to know what the model does, what data it was fed and how it performs if we are to advocate for it, engage publics and create trust.

Fifth, senior leadership and board directors have to be upskilled in AI to provide appropriate challenge.

 

Mandy Pearse, FCIPR, Chart PR, MBA

Director at Seashell Communications

PR strategist, consultant, speaker and trainer. Former CIPR President.

Article published as part of Global Alliance Ethics Month 2024.

Any thoughts or opinions expressed are those of the author and not of Global Alliance.

Source: https://www.globalalliancepr.org/thoughts/2024/2/27/artificial-intelligence-and-ethics