In this opinion article, Chris Rees, President of BCS, The Chartered Institute for IT, explains why he believes that technology, including social media, must be ethical
The reaction to the sudden and continued news coverage of Cambridge Analytica’s use of Facebook’s user data has been telling. Much of the press, the public and our own MPs expressed shock, confusion and outrage at what emerged. The reaction was one of horror: a company had been collecting information about us (and our friends) from the web and using it to profile and target each of us with specific advertising that an algorithm had decided was most likely to appeal to us? This was done not only to convince us to buy products and services but also to sway our political opinions? And none of it was illegal? Disgusting. Terrifying. Immoral.
The other prominent reaction to the allegations, often from columnists, was one of frustration that any of this was ‘news’ at all. Of course, your data was being collected and sold to the highest bidder, they said. Did you think Facebook and the various other free online platforms you were using were not-for-profit organisations, supplying the world with communications networks out of the goodness of their hearts? It’s not Cambridge Analytica you have a problem with; it’s Facebook’s entire business model and those of many other digital companies whose services you use daily.
This is what we’ve been trying to tell you, they said; yes, the situation is troubling, but it should hardly come as a surprise. Whatever your viewpoint, these developments exposed the gap between the public’s use of new technologies and their awareness or understanding of how those technologies work. Questions of who designed these technologies, how they were created and whether they were implemented with the interests of the user or society in mind all seem to be gaining public traction in a way we have not previously seen. Some people even closed their Facebook accounts in disgust! And this issue is only going to grow in prominence.
Rapid developments in machine learning and artificial intelligence (AI), whether in the context of job-stealing automation or the benefits and dangers of self-driving cars, are pushing the subject of ethical behaviours in IT to the front of both social and political debate.
Surely, we need to reach a situation where the IT professionals creating these products and services are confident that what they are doing is entirely ethical, at every stage from inception to implementation. Yet, unlike medical professionals, who can draw on a long-standing and well-supported ethical tradition, IT professionals may find this challenging without the backing of an established ethical doctrine.
In our dynamic tech world, there are likely to be frequent instances when an ethical approach appears slower or costlier than one that bends the rules or pushes the boundaries. That is without mentioning those instances where an individual’s career may be at risk if they refuse to cut corners or to make decisions they know to be ethically dubious.
No one wants to slow the pace of positive innovation. The challenge, therefore, is to create a culture in which tech companies that abide by ethical standards gain a competitive advantage. This should demonstrate how tech companies, consumers and society as a whole can benefit from the IT industry taking responsibility for its actions under a new ‘social contract’.
Bias in AI decision making, transparency around how AI systems reach their conclusions, liability when things go wrong… there are some big topics to address when considering how to embed ethical considerations in the IT sector. The task of ensuring that ethics is firmly embedded in every stage and aspect of the tech industry is an ambitious one, but in many ways it is also obvious and long overdue. A lack of basic understanding of IT and of how tech companies operate, both among the public and among those in positions of authority, is no small barrier to initiating a full public debate on the subject. However, with continued coverage of the issue and many commentators suggesting we are at a ‘crunch point’ where public awareness of, and attitudes towards, tech companies are shifting, perhaps we are now approaching the moment when we can do more to ensure that IT is good for society.
A new cross-party Parliamentary Commission on Technology Ethics is being launched by two MPs (Labour’s Darren Jones and Conservative Lee Rowley), with the help of Oxford University’s Professor Luciano Floridi and BCS, The Chartered Institute for IT.
Over the next 12 months, the Commission will examine a range of key areas in tech ethics, aiming to recommend substantive policy changes that can have a practical impact.
The Commission will be inviting stakeholder evidence throughout its investigation, and I would encourage anyone with opinions on this most timely of issues to follow and feed into its progress by emailing us at policyhub@bcs.uk.
Chris Rees
President
BCS, The Chartered Institute for IT
Tel: +44 (0)1793 417 417