Brian Sims
Editor

Intelligent Thinking

The term ‘Artificial Intelligence’ in the professional security domain is controversial and views on it are polarised. Some hardware manufacturers dismiss it as a technology for the back office or deny its potential altogether. On the other hand, cutting-edge developers are creating better and faster ways of helping humans to solve security risk problems by harnessing the latest technological innovations. Pauline Norstrom offers some thought-provoking comment on this hugely topical subject

ARTIFICIAL INTELLIGENCE analyses the data generated from security systems and operations. Whether AI leads to better decisions depends on the type of AI used, the quality and quantity of the source data and the human process defined to manage the outputs. Denying this branch of computer science its rightful place in the security technology glossary can confuse the customer and lead to a poor technology fit.

AI originated with the early computer science pioneers in the UK over half a century ago, who believed it would only be a matter of time before computers could masquerade as humans. Alan Turing’s famous Turing Test captures the idea: for a computer to pass, it must be impossible to determine whether a human or a computer has answered a series of questions. However, the Turing Test challenges only one type of human intelligence. Intelligence is a far more complex beast.

The type of AI which attempts to emulate human intelligence is known as general AI and is often discussed in connection with the singularity. In short, this is AI which thinks for itself. It’s the idea that the computer or robot will possess a facsimile of human intelligence in all its forms, from IQ through emotional intelligence and Gardner’s theory of multiple intelligences to human consciousness itself.

For now, let’s dismiss the notion that robots will take over the world any time soon, or that ‘HAL 9000’ (the super computer in the celebrated film ‘2001: A Space Odyssey’) will start making decisions about the fate of its human masters. The reality of the matter is that we don’t fully understand human intelligence, and even less so human consciousness.

The academic and scientific communities are divided on this subject. Some say never, others suggest it will be another 50 years before we come even close. We need to plant our feet firmly back on the ground and take a look at the real-world problems being solved by this controversial set of technologies.

Narrow AI prevails

In the security domain, it’s very much the case that narrow AI prevails. Essentially, this is data-driven AI designed to solve tightly defined problems. All AI technologies used in the security industry have to be carefully set up and trained. Set-up and training times are falling significantly, though, as these technologies become more sophisticated thanks to ongoing development and exposure to larger and more relevant training data sets.

AI has been evolving over the last couple of decades, and forms of supervised learning embodied in simple conditional alarm configurations have been in use for some time now. The industry is inclined to forget this fact, dismissing it as ‘old tech’. However, the problems being solved are not old.
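A simple conditional alarm configuration of the kind described above can be sketched in a few lines. This is a purely illustrative example; the sensor names, fields and threshold are hypothetical, not drawn from any particular product.

```python
from dataclasses import dataclass


@dataclass
class SensorEvent:
    sensor_id: str    # hypothetical device identifier, e.g. "PIR-07"
    zone_armed: bool  # whether the zone is currently armed
    confidence: float # detector confidence score, 0.0 to 1.0


def should_raise_alarm(event: SensorEvent, threshold: float = 0.8) -> bool:
    """Conditional alarm rule: fire only when the zone is armed
    and the detector's confidence clears a configured threshold."""
    return event.zone_armed and event.confidence >= threshold


# A confident detection in an armed zone raises the alarm;
# the same detection in a disarmed zone is suppressed.
armed_hit = should_raise_alarm(SensorEvent("PIR-07", True, 0.92))
disarmed_hit = should_raise_alarm(SensorEvent("PIR-07", False, 0.92))
```

The conditions are hand-written here; in a supervised learning setting, the threshold and conditions are instead tuned from labelled examples of true and false alarms.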

Rather, they’re growing and becoming more and more sophisticated by the day. AI has evolved and, put simply, the Boards of Directors, specifiers, manufacturers, integrators and end users who make decisions about the ethical implementation of this technology need to catch up.

Who would have thought that the professional security domain would be at the forefront of pioneering technology? Historically, the security world has attracted a low-tech image, having been associated with the physical work that accompanies the human intervention stage of security and safety management. Myriad technologies are now in use to enable human judgements about what’s happening. As a result, determined action can be taken with precision.

The professional security industry provides an excellent example of people and technology working together to solve problems and achieve the best outcomes.

The development of object detection and both automatic and live facial recognition solutions for security purposes has recently been valued at over £5.35 billion. That’s nearly 10% of the global AI investment budget. This is undoubtedly a hot space which has grabbed the attention of innovators and investors alike.

Security and video surveillance technologies have matured, while networks, cloud processing and storage have increased in tandem. What were previously siloed operations due to discrete data sources are now emerging as securely accessible and capable of being converged. Now, a bigger picture can be built from multiple disparate data sources and, by dint of that, a multitude of perspectives may be provided.

Masses of data

Specialised autonomous hardware devices are generating masses of data every day. Those devices include building technology sensors, video surveillance cameras, alarm and PIR sensors, physical contacts and access control events, lone worker alarms, people counters, heat mapping devices, GIS and GPS data, health check and diagnostics data… The list goes on. The data generated from the management of people, buildings and security systems is almost unlimited.

This vast ocean of data tells a story about what’s happening at a given point in time, but the means by which the human operator can correlate and interpret that data in context is limited by the AI tools provided by the developers of the systems themselves. Currently, these are fairly primitive in nature.

The most sophisticated form of AI in use is the deep learning neural network applied to video data sources. In the security industry, convolutional neural networks for image processing are used extensively in automatic and live facial recognition and in object detection. Automatic and live facial recognition algorithms solve very specific problems: authenticating authorised persons or, indeed, identifying unauthorised ones. Object detection determines the presence of objects and people with defined characteristics, such as car type, whether an individual is an adult or a child and the type of clothing they’re wearing. These technologies analyse the video sources in isolation.

A combination of AI technologies could correlate the outputs of a number of siloed AIs and paint a bigger picture. This is happening gradually, but in most cases it’s the human who makes the choice to link the relevance of events. These choices can be displayed in the same interface, which is a big step forward when it comes to putting AI to its best use.

The quality and format of the data sources are the Achilles’ heel of achieving highly reliable outputs. Another subject attracting much controversy in recent times is the prevalence of biased data due to the developers of global AI technologies operating within an echo chamber.

Unrepresentative training data sets can lead to a biased algorithm, thereby rendering the results untrustworthy when applied to a wider real-world scenario. However, high-end providers have invested heavily in balancing these issues. In the National Institute of Standards and Technology’s reports, the better algorithms are cited as being over 99% accurate across a varied race and gender demographic. This is encouraging.
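One way to surface the kind of demographic skew described above is simply to break accuracy out per group rather than quoting a single headline figure. The sketch below is a minimal, hypothetical illustration of that bookkeeping, not a reproduction of any NIST methodology; the group labels and records are invented.

```python
from collections import defaultdict


def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.

    Returns per-group accuracy, making skews across demographic
    groups visible where an overall average would hide them."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}


# Invented example: the overall accuracy is 75%, but the breakdown
# shows the algorithm performs far worse on group "A".
records = [
    ("A", "match", "match"),
    ("A", "match", "no_match"),
    ("B", "match", "match"),
    ("B", "no_match", "no_match"),
]
per_group = accuracy_by_group(records)
```

A buyer evaluating a facial recognition product could ask for exactly this kind of breakdown across race and gender before trusting a single aggregate accuracy claim.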

Risk situations

In a risk situation, there’s no training data if the AI is looking across multiple data sources and correlating what may appear to be unconnected events. This is precisely why many AIs need a bedding-in period to learn the environment. This is normal. Only bleeding-edge AI technologies can start producing reliable results with minimal labelled data, but their descriptive language is at times beyond comprehension, in turn making it very difficult for the buyer to understand what the AI actually does.

The resulting ensemble of AI technologies could add more value than the individual data source queries alone. This is probably the most challenging sector for AI but, when successfully implemented, it also brings the most rewarding results.

The best use case scenario may be the prevention of a terrorist incident through the correlation of relevant data across multiple data sources. For example: a blue car parks in an unusual place on Bond Street at 12.00 pm, its type and licence plate are recognised by the system and an event is created. Aggregated social media feed analysis picks up boastful and threatening hate language and an event is created. A known terrorist wearing a red jumper walks down Bond Street at the same time and a further event is created. Meanwhile, an organised protest results in a congregation of hundreds of people in the same area at the same time, requiring increased surveillance.
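The correlation described in this scenario can be sketched as a crude rule: flag a location when events from several distinct sources cluster in the same place within a short time window. This is a minimal illustration only; the source names, field layout and thresholds are assumptions for the sake of the example, not a description of any real system.

```python
from datetime import datetime, timedelta


def correlate(events, window=timedelta(minutes=15), min_sources=3):
    """events: list of dicts with 'source', 'time' and 'place' keys.

    Flags a place whenever events from at least `min_sources`
    distinct sources fall within the same time window there —
    a stand-in for the multi-source correlation described above."""
    alerts = []
    for anchor in events:
        nearby_sources = {
            e["source"]
            for e in events
            if e["place"] == anchor["place"]
            and abs(e["time"] - anchor["time"]) <= window
        }
        if len(nearby_sources) >= min_sources:
            alerts.append((anchor["place"], anchor["time"]))
    return alerts


# Hypothetical events echoing the Bond Street scenario: ANPR, social
# media analysis and facial recognition all report within minutes.
noon = datetime(2021, 6, 1, 12, 0)
events = [
    {"source": "anpr", "time": noon, "place": "Bond Street"},
    {"source": "social_media", "time": noon + timedelta(minutes=5),
     "place": "Bond Street"},
    {"source": "facial_recognition", "time": noon + timedelta(minutes=2),
     "place": "Bond Street"},
    {"source": "people_counter", "time": noon, "place": "Oxford Street"},
]
alerts = correlate(events)  # flags Bond Street, not Oxford Street
```

A real deployment would weight sources differently and suppress duplicate alerts, but even this toy rule shows why a machine can link in seconds what a human operator would take far longer to piece together.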

Threats could come from anywhere. Through the combined use of several types of AI, an attack at a predicted location and time could be averted by deploying security officers to disperse the crowd. With police co-ordination, the suspect can be detained for questioning and a potential attack thwarted.

Without AI, correlating these unconnected events would be a slow and manual task. With AI, however, the decision-makers can act faster and more precisely.

In conclusion, AI in the security setting is not a mystery. It’s not designed to replace the human presence. Rather, it’s designed to augment the human’s ability to make accurate decisions more quickly in order to avert disaster.

In today’s world, there’s simply too much data available for a human alone to analyse. It’s time to embrace this technology set. In doing so, new opportunities open up as problems which are difficult to solve – such as making the COVID-19 world safer – can be considered, thereby contributing to the safe recovery of the economy without a high human cost.

Pauline Norstrom is CEO and Founder of Anekanta Consulting (www.anekanta.co.uk)

Company Info

FSM Editor

Dorset House
64 High Street
East Grinstead
RH19 3DE
UNITED KINGDOM

01342 314300