The importance of designing adaptable cybersecurity regulatory frameworks as AI threats advance


Ghazi Ben Amor, VP of Corporate Development at Zama, offers advice for policymakers and regulatory authorities on adaptable cybersecurity regulation and on developing nuanced, effective privacy policies that can evolve alongside technological advancements

Given that the cost of cybercrime is projected to reach $13.82 trillion by 2028, and could escalate even further as cybercriminals gain access to increasingly sophisticated AI, public trust in technology is understandably waning.

In response, the UK government introduced new regulatory measures to enhance cybersecurity in AI models and software at CYBERUK, its flagship cybersecurity event, this May.

The measures comprise two comprehensive codes of practice, created with industry experts, which require developers to design and build their AI products to resist unauthorised access, alteration, and damage. More secure and reliable products will, in turn, help build and maintain trust among users and stakeholders across the many industries that rely on AI technology.

These new measures are certainly a step in the right direction, but I have concerns about the future adaptability and efficacy of regulatory frameworks; concerns that are shared across the developer community.

A recent survey of developers found that 72% think privacy regulations are not built for the future, 56% fear that dynamic regulatory structures could pose new threats, and 30% feel regulators lack the skills needed to fully understand the technology they are supposed to oversee.

One of the major concerns I have is the security risk associated with training AI systems that require vast datasets, which often contain sensitive personal information. Inconsistent or evolving regulations could create vulnerabilities here, increasing the likelihood of data breaches or misuse. And as regulations evolve, ensuring the security and privacy of the personal information used in AI training will only become more difficult – a situation that could prove very problematic for both individuals and organisations.

What should we consider when creating regulatory frameworks?

So what can we do to future-proof regulations and proactively protect our digital infrastructure to avoid constantly playing catch-up with cybercriminals?

Improve knowledge of PETs

With almost one-third of developers believing regulators lack the skills to comprehend the technology they're tasked with regulating, improving knowledge of privacy-enhancing technologies (PETs) is a good place to start. Understanding the strengths and limitations of each PET allows for a flexible, tailored approach rather than a one-size-fits-all policy. The following are a few of the main ones, but there are many more both available and in the pipeline – the key is to keep up to date with developments (brief illustrative sketches of each follow the list):

  • Authentication Technologies:
    • Multi-factor authentication (MFA) adds an extra security layer and is widely used in online banking and enterprise software. Biometric authentication, utilising unique physical traits such as fingerprints or facial recognition, is another advanced method. Looking ahead, authentication standards such as FIDO (Fast Identity Online) and federated identity protocols such as OpenID Connect promise enhanced security and streamlined user authentication across platforms.
  • End-to-End Encryption (E2EE):
    • This technology ensures data is encrypted from sender to recipient, preventing unauthorised access. However, implementing E2EE can be complex and resource-intensive, requiring significant computational power and sophisticated key management. Additionally, since E2EE restricts service providers from accessing data, it can complicate data recovery and legal compliance.
  • Fully Homomorphic Encryption (FHE):
    • FHE allows data to be processed without ever being decrypted, keeping it secure even while AI models work on it. The technology is maturing fast, and many use cases are now practical. For instance, financial institutions can use FHE to train fraud detection models without exposing personal data, and healthcare providers can perform predictive diagnostics without compromising patient privacy.
  • Multi-Party Computation (MPC):
    • Complementing FHE, MPC enables a quorum of entities to decrypt data collaboratively, ensuring only authorised access. Each entity holds only a part of the decryption key, preventing unilateral access to the data. The clear data remains accessible only to the end-user.
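
To make these technologies concrete, here are a few minimal, illustrative sketches in Python. First, a time-based one-time password (TOTP), the mechanism behind most authenticator-app MFA. This assumes the open-source pyotp package, chosen purely as an example:

```python
import pyotp

# Enrolment: generate a shared secret, store it server-side, and
# provision it to the user's authenticator app (e.g. via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the user submits the 6-digit code from their app as a second
# factor alongside their password. (Generated here, as this sketch has
# no real user.)
submitted_code = totp.now()

# valid_window=1 tolerates one 30-second step of clock skew.
assert totp.verify(submitted_code, valid_window=1)
```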
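
Next, end-to-end encryption: a minimal sketch of authenticated public-key encryption using the open-source PyNaCl library (again an illustrative choice, not one mandated by the codes of practice). Only the recipient can decrypt, so the service relaying the ciphertext never sees the plaintext:

```python
from nacl.public import Box, PrivateKey

# Each party generates a keypair; only public keys are ever exchanged.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts for the recipient (Curve25519 + XSalsa20-Poly1305).
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"meet at noon")

# Only the recipient's private key can open the message; any service
# carrying it handles ciphertext only.
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

A production E2EE system layers key verification and forward secrecy on top of this primitive.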
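
Third, fully homomorphic encryption: a toy sketch using Zama's open-source concrete-python compiler (one FHE implementation among several; the arithmetic is a stand-in, not a real fraud model). The function is evaluated on encrypted data, so the party running it never sees the input in the clear:

```python
from concrete import fhe  # pip install concrete-python

# Compile a function to run on encrypted inputs; the arithmetic here
# stands in for, say, a single fraud-scoring feature.
@fhe.compiler({"x": "encrypted"})
def score(x):
    return x * 3 + 7

# The inputset tells the compiler the range of values to expect.
circuit = score.compile(range(100))

# Encrypt, evaluate homomorphically, decrypt: the evaluating server
# never sees the value 12 in the clear.
assert circuit.encrypt_run_decrypt(12) == 12 * 3 + 7
```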
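
Finally, the key-splitting idea behind threshold decryption in MPC. This sketch implements Shamir secret sharing in plain Python; full MPC protocols go further and compute on the shares, so treat it only as an illustration of "each entity holds only a part of the decryption key":

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic is in this field

def split(secret, n=3, k=2):
    """Split `secret` into n shares such that any k reconstruct it."""
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789                        # stand-in for a decryption key
shares = split(key)                    # one share per entity
assert reconstruct(shares[:2]) == key  # any two of the three suffice
assert reconstruct(shares[1:]) == key  # no single share reveals anything
```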

Foster continuous learning

Alongside an in-depth understanding of PETs, regulators must prioritise investment in wider, ongoing education and skills development to stay updated on advancements and threats. It's the only way to keep pace with a world in which technology moves so fast; in fields like AI and cybersecurity, rapid innovation can quickly render existing knowledge obsolete. Where possible, I believe staff should attend industry events and conferences, which offer opportunities to stay updated on the latest developments and trends. These events also enable staff to network with experts, fostering connections that can lead to valuable insights and collaborations.

Collaborate with technology creators

I do not believe policymakers should be expected to be solely responsible for crafting nuanced and effective privacy policies that evolve alongside technological advancements. By collaborating directly with the developers and creators of new technologies, policymakers can ensure those products are designed with existing frameworks in mind, rather than expecting new regulations to adapt to every technological advancement. It's equally important that policies are designed to be innovation-friendly, and establishing partnerships with tech companies is invaluable for knowledge sharing. Regulators willing to invite representatives from tech firms to conduct internal seminars or demonstrations will benefit hugely from the expertise and practical advice of those at the cutting edge of technology.

Adopt a dynamic and adaptive regulatory framework

Technology doesn’t sit still, which means policy can’t either. By designing regulations that are flexible and capable of evolving alongside technological advancements, we can better address the continuous emergence of new challenges in cybersecurity and data privacy. Putting this into action might include regular policy reviews and updates using dedicated committees, collaboration with stakeholders across tech, academia, cybersecurity, and end-users to ensure comprehensive and adaptable regulations, or even implementing feedback mechanisms for organisations and individuals to report challenges and successes.

As increasingly complex systems like AI, IoT, and advanced data analytics become integral parts of everyday life, now is the time to develop more robust and forward-thinking privacy policies that are not just comprehensive for today, but are resilient against evolving and future cyber threats.
