Ali Farooqui, Head of Cyber Security at BJSS, now part of CGI, explores how companies can balance the benefits of artificial intelligence (AI) with robust cybersecurity practices
“Can AI do it?”
It’s a question I hear regularly. Whether it’s in boardrooms, project planning meetings or strategy sessions, the phrase repeatedly surfaces. It is now essential to any productivity, innovation, and problem-solving discussion. And with good reason: AI is transforming industries by enhancing efficiency, streamlining operations and improving customer experiences. Businesses across various sectors use AI for various tasks, from supporting medical diagnoses to inventory optimisation and automation.
While these technological advancements offer many business benefits, building reliable solutions that customers can rely on requires a significant focus on cyber security. The organisations that will thrive in this AI-dominated world and maintain high levels of customer trust will strike the right balance between innovation and security.
AI: A double-edged sword in cyber security
On one hand, AI can be used to protect organisations by predicting, identifying and responding to threats. Using historical data from past attack patterns, AI can pre-empt future breaches, allowing businesses to put protections in place and be prepared should a cyber attack occur. AI can also monitor various data sets in real-time, including network traffic, user behaviour and system activities, flagging potential threats for further investigation.
Many organisations also use AI-assisted threat modelling to create shorter, more effective feedback loops within their application development lifecycles. These innovations reduce the reliance on manual monitoring, allowing security teams to redirect their focus towards organisational challenges and their business’ security strategy.
AI is now often baked into cyber security observability tooling, security deployment pipelines and Security Orchestration Automation and Response (SOAR) platforms. This helps operation teams monitor the vast amounts of malicious traffic noise that some new-age attack vectors generate.
The advantages of AI are so vast that many large-scale security platforms now include agentic integrations, with hyperscalers like Microsoft announcing a variety of security AI agents that can help triage phishing attacks. These AI agents can also perform data loss prevention tasks, create vulnerability management frameworks with real-time prioritisation and provide insight into context-driven, organisation-relevant threat intelligence.
On the other hand, while businesses can use AI to reinforce their security measures, cybercriminals are also deploying AI devices to breach these same defences. AI has enabled attackers to build vectors more quickly, allowing their tools to adjust more efficiently to new security measures. Any organisations that do not deploy AI-enabled defences will find themselves particularly vulnerable to this new breed of attack.
AI has also been used to generate convincing phishing campaigns, making legacy detection mechanisms less effective. Voice emulation and deepfakes add another layer to this challenge. The engineering firm Arup experienced this firsthand when an employee was deceived by an AI-generated video call, transferring £20 million to criminals.
This new age of AI-driven tooling also introduces a fear that an organisation’s security team could experience the “black box effect”. This is where an organisation’s front-line engineers and operations specialists have limited exposure to their devices, making it difficult for them to understand how they function or how to optimise them effectively. This can ultimately create vulnerabilities to cyber-attacks.
With cyber criminals using AI to push the boundaries of cyber security, organisations must continuously adapt and upgrade their security processes. This is especially important as we are in danger of reaching a tipping point where offensive AI tools are advancing so rapidly that they could begin to outpace defensive measures.
Balancing innovation with security
Robust AI safeguards are essential for responsible AI development and deployment, and these must encompass ethical, technical and governance considerations. Ethically, that means the safeguards must address fairness, transparency and privacy. Technically, it involves ensuring data quality, model robustness, security and bias mitigation. Effective governance then adds a final layer to create a more holistic approach. This relates to clear policies, risk assessments and regulatory compliance. Implementing these multifaceted safeguards will ensure that AI systems align with human values, minimise harm and foster trust.
An organisation’s AI guardrails should also include “harm-bench” evaluations. This is important for developing and consuming large language models (LLM). These evaluations reliably predict the resilience of AI cyber security systems against real-world attempts to elicit harmful behaviours or bypass implemented safety guardrails.
Beyond this, organisations should consider broader benchmarks evaluating AI systems’ safety and security. These might include metrics for robustness against adversarial attacks, resilience to data poisoning and the effectiveness of different defence mechanisms. AI Security Posture Management platforms provide this capability and are useful tools to help identify security gaps.
To build responsible security feedback loops, it’s worth asking a few key questions that can help clarify your organisation’s position in this evolving space. These include:
- What is different about your organisation, and how have your key risk indicators changed due to AI?
- Has your security posture improved in the last year or five years?
- Is your security leadership content with your organisation’s security posture?
Most importantly, businesses need proactive security strategies that leverage AI-driven tooling to give security operations and engineering teams the space to perform and provide effective solutions.
Security solutions
All organisations must remember that security is not optional, and it should be considered a ‘day-zero’ job. Without the right approach, businesses risk creating unreliable service applications and solutions. Organisations must start strong with four key steps to ensure they don’t fall into this trap.
Firstly, as security requires flexibility, any developmental and organisational processes must allow room to pivot. Creating shorter feedback loops will enable agility, allowing businesses to adapt quickly. This will not only improve an organisation’s security measures, but it will also allow businesses to maintain a competitive edge.
The next step is to implement cloud technology, which is designed with ‘secure by default’ and ‘secure by design’ philosophies in mind. Using cloud platforms will support rapid experimentation and secure deployments, enabling organisations to innovate responsibly.
Thirdly, observability considerations are a necessity. After all, you cannot protect what you cannot see. This is true of all development, and utilising AI for observability will help organisations to continuously monitor and proactively assess their security posture.
Above all else, businesses need to build a security-focused culture. Organisations must foster an environment where security and innovation are seen as complementary forces. This requires encouraging collaboration between security teams and innovators to create secure and cutting-edge solutions.
While AI and intelligent agents’ transformative potential offers compelling business innovation opportunities, a pragmatic, phased adoption strategy is paramount. Organisations must prioritise rigorous testing and pilot deployments before full-scale implementation to effectively manage cyber security risks and ensure a secure and sustainable integration of these advanced technologies.