Ben Taylor, CTO at Rainbird, discusses why the government must adopt a transparent approach towards AI technologies
In recent months, we’ve seen increased interest in, and concern about, government and private sector use of AI technologies. Numerous media stories have highlighted contentious applications of AI, including the implementation of a visa application system by the Home Office and a facial recognition system by the developers of the King’s Cross Central precinct in London. Part of the issue is that many AI technologies are still nascent, and immature systems can produce flawed decisions. As the march towards an AI future continues, governments and public sector organisations need to consider, and agree on, which applications are ethical, useful and acceptable in a democratic society.
Gain public trust
To gain the public’s trust, the primary consideration for governments planning, developing and implementing public-facing AI solutions needs to be transparency. At present, the way AI is developed and deployed by governments and the private sector is generally shrouded in mystery, creating an atmosphere of distrust. To address this, and give the public more confidence in the technology, transparency needs to be adopted at every stage of project planning and implementation, and outcomes need to be made clear.
One of the main criticisms of AI is that, as the technology stands, it can incorporate bias, resulting in discriminatory decisions. This was the main concern with the Home Office visa application system, which experts cautioned could be discriminating against some applicants on the basis of nationality and age. To minimise the potential for bias, best practice for governments should be to ensure that humans remain in the loop in any AI decision-making process. Black-box AI solutions and processes need to be transformed into a glasshouse that operates according to human logic. Human-centric, rules-based AIs allow every decision they make to be audited, without the need for external scrutiny.
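To illustrate the principle, the sketch below (in Python, with rule names and routing that are purely hypothetical, not drawn from any real government system) shows how a rules-based decision can carry a human-readable audit trail, and how protected attributes such as nationality and age can simply be excluded from the inputs:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str
    audit_trail: list = field(default_factory=list)

# Each rule is a (description, test) pair written in plain human logic.
# Protected attributes such as nationality and age are deliberately not
# part of the application record, so no rule can act on them.
RULES = [
    ("Application form is complete",  lambda app: app["form_complete"]),
    ("Supporting documents provided", lambda app: app["documents_provided"]),
    ("No unresolved flags on file",   lambda app: not app["unresolved_flags"]),
]

def decide(application: dict) -> Decision:
    trail = []
    for description, test in RULES:
        passed = test(application)
        trail.append(f"{description}: {'PASS' if passed else 'FAIL'}")
        if not passed:
            # A failed rule routes the case to a person rather than to an
            # automatic refusal, keeping a human in the loop.
            return Decision("refer to human case worker", trail)
    return Decision("approve", trail)

result = decide({"form_complete": True,
                 "documents_provided": True,
                 "unresolved_flags": False})
print(result.outcome)                  # approve
print("\n".join(result.audit_trail))   # one PASS/FAIL line per rule
```

Because every step is a plainly worded rule, a case worker can read the trail and see exactly why an application was approved or referred.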
Neural networks
Governments need to be particularly cautious where planned AI implementations incorporate neural networks. Neural networks are a set of algorithms, modelled loosely on the human brain, designed to recognise patterns. They are increasingly being used in AI systems because they offer real promise in extracting value and greater insight from data. However, they cannot be relied upon to make correct, unbiased decisions without human oversight.
The main issue with neural networks is that ‘deep learning’ systems are difficult to create or audit, and they require the oversight of data scientists. Because they can’t think outside the context of their ‘learning environment’, they are only as good as the data they were trained on, which increases the risk of inherent bias.
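The contrast with the rules-based sketch above can be seen in even a toy network. The example below (plain numpy; the architecture, learning rate and iteration count are arbitrary illustrative choices) trains a small network to recognise the XOR pattern. Its entire behaviour ends up encoded in real-valued weight matrices that cannot be read as logic, and it can only ever reproduce patterns present in its training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the network must learn the XOR pattern.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units; weights start random and are shaped
# entirely by the training data above.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagation of the squared-error gradient
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # should approximate [0, 1, 1, 0] after training
# The 'knowledge' now lives in W1 and W2: matrices of real numbers with no
# human-readable meaning. Auditing why the network gives a particular answer
# means reverse-engineering these weights, which is the core of the problem.
```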
“Gold standard”
While automated decision-making capabilities are improving, the results they produce are still often inconsistent and erratic, and often fail to draw upon up-to-date data. The National Institute of Standards and Technology (NIST) – a branch of the U.S. Commerce Department – has been working to develop a “gold standard” for facial recognition technologies, which holds great promise for governments and should help build public trust in the technology. However, until such a standard exists and is enshrined in regulation, humans need to remain firmly in the loop, and organisations should only implement AI solutions that put people at the heart of decision-making. The key to creating an AI culture is ensuring organisations use forms of AI that can be built and customised by ordinary humans, and that provide human-readable explanations of their decisions.
The UK Government is placing great emphasis on AI as a driver of economic growth and has even stated its aim of implementing AI across the public sector. For this to be successful, however, it will need to ensure human workers remain at the heart of the decision-making process. At this stage, it is vital that public sector bodies invest in training the workforce to work with and understand AI decisions, including identifying algorithmic biases. Only by placing humans in the loop can organisations ensure successful, ethical AI outcomes that gain public trust.