Andrea Hoddell, Senior Operational Intelligence Tutor at Intelligencia Training Limited, examines the role of artificial intelligence in intelligence analysis: is it a solution or a threat?
Artificial intelligence (AI) is everywhere:
- In the media, which debates its relative advantages and disadvantages.
- In our personal lives, with facial recognition on digital devices and streaming services recommending programmes based on our previous viewing.
- In our working world, where a recent civil service Microsoft Copilot trial is one small example.
So, why should the intelligence analysis profession be any different?
Intelligence analysts working within the intelligence cycle use direction to collect data, process and then disseminate intelligence products to inform decision-making in strategic, operational and tactical settings. Volumes of data are increasing as our lives become more digital. How can intelligence analysts manage this increase in collection potential, and what role does technology play in the solution?
To consider this issue, it is helpful to explore it through some of the fundamental analytical principles taught in our Level 4 Intelligence Analyst Apprenticeship:
Objectivity – AI removes direct human bias by applying an algorithm or model to data, with outputs remaining constant regardless of time of day, mood or environment. AI doesn’t get tired, ‘have a bad day’ or take annual leave, as human analysts do. However, AI models are ‘trained’ on data, and the relevance of that data to the required task is vital if outputs are to be as intended. Any bias in the data or the training process will negatively affect outcomes.
As analysts, we can use AI to help focus collection efforts and to make predictions based on data – but when we do, we must understand the model being applied and the data on which it has been trained, and be able to explain both to our customers. A recent survey of law enforcement professionals in a range of public sector roles found that all would welcome the opportunity to increase their understanding of ‘machine learning’, yet only 17% had received any formal training. (1)
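The point about biased training data can be illustrated with a toy sketch. The data and labels below are entirely hypothetical, and the ‘model’ is deliberately trivial, but the failure mode is real: a model trained on an imbalanced sample inherits that imbalance in every subsequent output.

```python
from collections import Counter

def train(examples):
    """'Train' a trivial model: record how often each label appears.
    With no other signal, it will predict the most common training label."""
    return Counter(label for _, label in examples)

def predict(model, _item):
    # Always returns the majority label from training - a stand-in for
    # how skew in the training data skews every later prediction.
    return model.most_common(1)[0][0]

# Hypothetical, deliberately imbalanced training sample:
# nine 'benign' records for every one 'suspicious' record.
training = [("record", "benign")] * 9 + [("record", "suspicious")]
model = train(training)

# Every new item is now judged 'benign', regardless of its content.
print(predict(model, "a genuinely suspicious record"))  # benign
```

The sketch exaggerates for clarity, but it shows why an analyst must be able to describe the training data to a customer before relying on a model’s output.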
Timely – legally acquired digital device downloads are increasingly available to intelligence analysts across the public sector, and the data they contain offers valuable insights. From personal experience, however, I can attest to the challenge of identifying relevant material in a proportionate way, considering necessity and minimising collateral intrusion. AI can assist with this process; one example is image identification, which enables photos related to particular themes – such as those depicting counterfeit trainers or weapons – to be readily identified.
Humans conducting the same analysis could not produce similar outputs quickly enough to support investigations. Analysts must, however, remain mindful of the human/technology interface and consider how training data can become out of date – if, for example, a new form of illegal drug wrapping or an alternative counterfeit product were to become prevalent, detection rates could fall until the training data is refreshed.
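That decay in detection rates can be sketched in a few lines. Here the ‘model’ is simply a fixed list of known indicators (the indicator terms are invented for illustration); real systems are far richer, but the pattern is the same: perfect performance on the material it was built for, and silence on new variants.

```python
# A minimal sketch of detection-rate decay when training data goes stale.
# The indicator terms below are hypothetical examples, not real signatures.
KNOWN_INDICATORS = {"vacuum-sealed wrap", "branded counterfeit trainer"}

def detected(description: str) -> bool:
    """Flag a description if it matches any known indicator."""
    return any(indicator in description for indicator in KNOWN_INDICATORS)

old_cases = ["photo of vacuum-sealed wrap",
             "branded counterfeit trainer listing"]
new_cases = ["photo of foil-lined pouch wrap",      # new wrapping style
             "unbranded replica trainer listing"]   # new counterfeit product

old_rate = sum(detected(c) for c in old_cases) / len(old_cases)
new_rate = sum(detected(c) for c in new_cases) / len(new_cases)
print(old_rate, new_rate)  # 1.0 0.0 - every new variant is missed
```

Monitoring the gap between expected and observed detection rates is one simple prompt for a training-data refresh.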
Relevant – AI is not a ‘magic wand’: bespoke solutions are required for different problems. Although often spoken about in generic terms, it is not ‘one-size-fits-all’ and, like all analytical tools, needs to be employed thoughtfully.
Accurate – AI gives predictions, not definitive outputs, much as intelligence analysts form assessments. There have been proposals (2) for probability language to be used in relation to machine learning outputs, in the same way that intelligence analysts lean on the probability yardstick and confidence ratings in their work. In both cases, certainty (or, perhaps more accurately, uncertainty) is conveyed to the reader.
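Mapping a model’s probability onto assessment language could look something like the sketch below. The band edges are illustrative approximations in the general shape of a probability yardstick – they are assumptions for this example, not the official standard.

```python
def yardstick(p: float) -> str:
    """Map a model probability (0-1) to hedged assessment language.
    Band edges are illustrative assumptions, not an official definition."""
    bands = [
        (0.05, "remote chance"),
        (0.20, "highly unlikely"),
        (0.35, "unlikely"),
        (0.50, "realistic possibility"),
        (0.75, "likely"),
        (0.90, "highly likely"),
    ]
    for upper, phrase in bands:
        if p <= upper:
            return phrase
    return "almost certain"

print(yardstick(0.62))  # likely
```

A shared mapping like this lets a machine learning output and a human assessment convey uncertainty to the customer in the same vocabulary.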
It is also important to remember that technology is not confined to those using it for altruistic, business or government purposes; it is increasingly part of the threat landscape. Knowledge and understanding of how AI is used against civil service and public sector organisations, and in the commission of serious and organised crime, is developing. Examples can be seen in assessed reporting, such as the reference to ‘malicious technical content being included in the company name on incorporation and sequential numbering of mass incorporations’ in Companies House’s recent inaugural strategic intelligence assessment. (3)
As intelligence analysts, we must remain mindful of advanced technology’s impact on our data sources, especially when considering open-source materials. Our Intelligence Analyst programme covers in some depth the potential impact of bias and strategies to ensure our work remains objective. Consideration of the impact of artificially generated open-source content is an important part of this.
Intelligence analysis is a constantly evolving discipline, and the role that technological advancement plays will continue to shift over time – it makes a positive contribution provided its limitations are also considered. Human analysts will always have a role in developing key findings and assessments, and human decision-makers will act on analytical recommendations. Artificial intelligence, however, can help us to fill more of our intelligence gaps and improve the quality and richness of our outputs.
Andrea Hoddell, Senior Intelligence Operational Trainer, is currently delivering the Level 4 Intelligence Analyst Apprenticeship programme on behalf of Intelligencia Training. She recently completed an MSc in Computer Science (Big Data Analytics) at Wrexham University and has held several intelligence analyst roles in the civil service.
For more information about our focussed Apprenticeship offerings, including our Intelligence Analyst programme, please contact Nick Atkinson, Strategic Relationship Director – Intelligencia Training. www.intelligenciatraining.com
References
1. ‘The objective benefits of machine learning in law enforcement decision-making outweigh the inherent risks of its use’, August 2024.
2. M. Hughes et al., “AI and Strategic Decision-Making: Communicating trust and uncertainty in AI-enriched intelligence”, CETaS/Alan Turing Institute, April 2024. Available online: AI and Strategic Decision-Making | Centre for Emerging Technology and Security. Accessed 06/02/2025.
3. Companies House Strategic Intelligence Assessment: Public version. Available online: Companies House strategic intelligence assessment – GOV.UK. Accessed 06/02/2025.

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International.