Ethics in Technology – AI and Bias

From the AiM Next Generation Technologies team

It sounds simple: collect some base data, feed it into a deep learning algorithm, let it learn, and reap the benefits. This can apply to anything, for example creditworthiness, facial recognition, or reducing plastic waste. However, hold your horses: it may not be quite that simple. “Why?”, you ask…
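In code, that naive recipe really can be just a handful of lines. Here is a rough sketch, not a recommendation: the file name, column name and choice of model are purely illustrative assumptions.

```python
# The "collect data, let it learn, reap the benefits" recipe, at its most naive.
# "applicant_data.csv", the "approved" column and the model choice are
# illustrative assumptions; features are assumed to be numeric already.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

data = pd.read_csv("applicant_data.csv")          # 1. collect some base data
X = data.drop(columns=["approved"])               # inputs
y = data["approved"]                              # outcome we want to predict

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)  # 2. feed it into a learning algorithm
model.fit(X_train, y_train)                       # 3. let it learn
print("Accuracy:", model.score(X_test, y_test))   # 4. reap the benefits... or do we?
```

Nothing in those lines asks where the data came from or who is missing from it, and that is exactly where the trouble starts.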

Let’s take a step back. If I give you a lot of data and ask you to read up on a subject, you’ll make some notes, collate the facts and form some opinions (well, that’s what I’d do). I can then give you some similar scenarios, and you can draw conclusions. But what if the data I provided only tells part of the picture? Suppose, for example, I want to look at the percentage of children developing asthma in schools situated in areas with high traffic volumes. If I use a base data set containing only asthma rates amongst children in schools with different traffic conditions, covering both rural and city areas, it seems I can draw realistic conclusions about the effect of traffic on asthma. However, if I do not include information on relative wealth, medical provision, home environment, genetics and so on, I may miss other contributory factors that could have a significant influence on the final outcome.
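To make that concrete, here is a toy sketch built entirely on made-up synthetic numbers (no real asthma data): a hidden “wealth” factor influences both traffic exposure and asthma rates, so a model shown only the traffic figures credits traffic with far more effect than it actually has.

```python
# Synthetic illustration of a missing contributory factor (a confounder).
# All numbers are invented; the point is the pattern, not the values.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 5000

wealth = rng.normal(size=n)                       # the factor we forgot to collect
traffic = -0.8 * wealth + rng.normal(size=n)      # poorer areas tend to see more traffic
asthma = 0.2 * traffic - 0.6 * wealth + rng.normal(size=n)  # both genuinely matter

traffic_only = LinearRegression().fit(traffic.reshape(-1, 1), asthma)
with_wealth = LinearRegression().fit(np.column_stack([traffic, wealth]), asthma)

print("Traffic effect, wealth ignored: ", round(traffic_only.coef_[0], 2))  # ~0.49, inflated
print("Traffic effect, wealth included:", round(with_wealth.coef_[0], 2))   # ~0.20, the value used above
```

The conclusions from the first model look perfectly plausible, which is what makes this kind of bias so easy to miss.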

Similarly, AI can produce biased results because its original data set carried an implicit bias. This has already occurred with facial recognition, where data sets made up of mostly white faces were used. It has potential implications for companies that use AI to make recruitment decisions, or for police forces that may use AI software to identify suspects.
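One partial safeguard is to audit the data before trusting the model: how well is each group represented, and how does the model perform for each of them? A minimal sketch, assuming a labelled pandas DataFrame with hypothetical “group” and “label” columns and predictions already in hand:

```python
# Hypothetical audit: per-group representation and per-group accuracy.
# The "group" and "label" column names are assumptions for illustration.
import pandas as pd

def audit(df: pd.DataFrame, predictions: pd.Series) -> pd.DataFrame:
    work = df.assign(correct=(predictions == df["label"]))
    summary = work.groupby("group").agg(
        records=("label", "size"),     # how many examples per group
        accuracy=("correct", "mean"),  # how often the model is right for that group
    )
    summary["share_of_data"] = summary["records"] / len(work)
    return summary
```

A group that makes up only a sliver of the data and gets noticeably lower accuracy is exactly the imbalance behind the facial recognition problems described above.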

Bias is not a new phenomenon; it occurs whenever we process data and make decisions. The danger with AI is that we may become so dependent on the decisions it makes that we forget they may be biased by the material it used to learn. Once again, we see that there are ethical questions to consider, regarding both the type of learning data and possible manipulation by unscrupulous groups, even when we are using AI in a positive, socially beneficial way.

What are your views on AI bias? What can we do to minimise it? Let us know what you think!