Philipp Pointner, Jumio’s Chief Product Officer, explores the ongoing difficulties age-restricted websites face in verifying their users’ ages, and how the government can offer support
More than 850 million children and youth — half of the world’s student population — were out of school earlier this year due to the COVID-19 pandemic, according to UNESCO. Over the past few months, these children have presumably spent all of their time at home, but only a portion of their day focusing on school work. They’ve had more free time on their hands, which, in turn, means more time spent on the internet. This presents a very real threat, leaving minors vulnerable to online harms.
This vulnerability won’t magically disappear once UK students return to school, which nearly all are expected to do in September.
Thankfully, the UK Government has been working hard to tackle this issue. It is due to publish its full response to the Online Harms White Paper consultation imminently, and the ICO’s Age Appropriate Design Code is due to pass through Parliament before the end of the year. More recently, the Home Affairs Committee sought evidence on online harms arising from the COVID-19 lockdown period and on the adequacy of the government’s proposals to counter them.
Overall, the government is aiming to deliver a higher level of protection for children and expects companies to “use a proportionate range of tools, including age-assurance and age-verification technologies, to prevent children accessing inappropriate content, whether that be via a website or social media.” But what does this actually mean for businesses, and how can the government set a realistic precedent when it comes to age verification?
Vanity age verification vs. true age verification
Not all age verification processes are equal, and clearly some don’t work as well as they should. Recent research from Jumio found that 54% of UK age-restricted websites have been unable to prevent minors from accessing their products or services despite over two-thirds (67%) believing it is their responsibility to prevent this from happening.
Organisations operating in age-restricted spaces use a range of methods to prohibit minors from accessing sites and products. Those that operate in highly regulated spaces like online gaming and financial services are required by know-your-customer (KYC) regulations to perform more thorough age and identity verification checks. However, organisations operating in less-regulated spaces are held to a much lower standard and often simply ask users to self-report their age when they access the website.
Taking a risk-based approach
Regulated businesses such as financial services and payments companies, which also fall under the age-restricted banner, must comply with KYC and anti-money laundering (AML) regulations, which govern how they verify the identity of new customers. The premise is that knowing your customers — performing identity verification, reviewing their financial activities and assessing their risk factors — keeps money laundering, terrorism financing and other illicit financial activity in check.
The UK Government needs to champion a similar risk-based approach to age verification. The greater the likelihood of social harm, the greater the need for robust, non-anonymous methods of age verification. According to the Protecting Minors Report, businesses selling products, such as alcohol or fireworks, are less likely (50%) to depend on weak age-verification methods than those offering a service, like pornography (71%). This is somewhat intuitive, since any requirement to divulge a user’s actual identity is likely to result in significant customer abandonment on pornographic sites.
Overall, 95% of those surveyed say it’s important to ensure minors do not access age-restricted services, which shows that businesses want to do the right thing. Nevertheless, harms need to be addressed and thought needs to go into how minors can truly be protected.
Face-based biometrics is the most thorough method of truly determining an identity and, by extension, age. Most organisations know that there are better, stronger methods of age verification than having the user self-report that information, but it is generally not in their own self-interest to adopt these technologies. Almost half (46%) of tech decision-makers said they would not implement a more robust form of age verification for fear that it would negatively impact conversion rates for valid customers, while 38% felt such measures would be too time-intensive and 36% thought they would create a disjointed customer experience.
Protecting anonymity is another reason for using weaker forms of age verification, but this can be dangerous. Operators of porn (and even dating) sites know that many of their members do not want to divulge their real identities. Sometimes this is out of embarrassment, but some users wish to remain anonymous because they intend to inflict harm or perpetrate fraud. Maintaining a balance between anonymity and the right amount of identity verification can be tricky, but in cases where clear harm can occur to a minor, age and identity verification should be compulsory.
Improving credibility, security and efficiency with one method
When done right, robust age verification can protect minors from online harms without having a negative impact on the customer experience or conversion rates.
What does the process look like?
It starts by requiring a user to capture a photo of their government-issued ID. Identity verification solutions can extract personal information, such as date of birth, from ID documents, which can be used to calculate the current age of the person creating the account, and can also determine whether the document has been manipulated. Next, the user takes a corroborating selfie, which is compared to the ID to confirm that the person holding the ID is who they claim to be, while certified liveness detection ensures that the person is physically present. Once the age and identity of a user have been verified online, biometric-based authentication can ensure that all future logins and transactions are made by the original account owner.
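To make the date-of-birth step concrete, here is a minimal sketch in Python of how an extracted date of birth might be turned into an age check. The threshold and the hard-coded date of birth are purely illustrative assumptions; the document extraction, face match and liveness detection themselves are provider-specific and are not shown.

```python
from datetime import date

MINIMUM_AGE = 18  # illustrative threshold; the real cut-off depends on the product or service


def age_in_years(date_of_birth: date, today: date) -> int:
    """Whole years elapsed between the date of birth and `today`."""
    # Subtract one year if this year's birthday has not yet occurred.
    before_birthday = (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    return today.year - date_of_birth.year - int(before_birthday)


# A date of birth as it might be extracted from the ID document by the
# verification provider (hard-coded here purely for illustration).
extracted_dob = date(2006, 3, 14)

if age_in_years(extracted_dob, today=date.today()) < MINIMUM_AGE:
    print("Under the minimum age: reject the account request")
else:
    print("Age requirement met: proceed to face match and liveness checks")
```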
Striking a balance
One strategy being put forward by leading dating sites is to offer two options. The first is to let users who want to preserve their anonymity create accounts with a limited number of identity checks. The second is for users who want to earn a certification badge to voluntarily undergo identity and age verification checks. With this approach, members of the dating site can self-select whether they want to date only those who have earned the verification badge. But even with this “free market” approach, safeguards need to be in place to protect members from catfishing, fraudulent schemes and physical harm, regardless of whether a user has been verified or opts to remain anonymous.
After all, it is entirely appropriate to hold any organisation that profits from selling age-restricted products and services accountable for the potential harms caused by its platform, with the degree of accountability depending on the industry and the likely harm of onboarding a bad actor. The UK Government should absolutely champion this approach if it is to truly protect minors from online harms.