Brian Sims
Editor
IN RECENT years, the development and adoption of Artificial Intelligence (AI) technology have accelerated at an unprecedented pace, in turn impacting various industries. The spark of innovation provided by AI is already a feature of the video surveillance sector. At Hanwha Vision, notes John Lutz Boorman, we predict that 2026 will prove a pivotal year for AI.
We foresee AI moving beyond simple adoption towards becoming the essential foundation of the entire industry: the emergence of so-called ‘Autonomous AI Agents’ will reshape the structure and operations of video surveillance systems.
To meet this wave of change, Hanwha Vision has identified five key trends upon which the industry needs to focus. These trends signal a future wherein AI serves as the core engine, elevating video surveillance from simple monitoring to a central pillar of operational efficiency and sustainability.
Trustworthy AI: Data quality and responsible use
As AI analysis becomes ubiquitous, the principle of ‘Garbage In, Garbage Out’ will be critical in video surveillance. Visual noise and distortion caused by challenging environments (such as low light, backlighting or fog) are primary causes of AI-derived false alarms. In 2026, establishing a ‘Trusted Data Environment’ to solve these issues will become the industry’s top priority.
With the performance of AI analysis engines levelling up across the board, the focus of investment is shifting towards securing high-quality video data that AI can interpret without error.
An example of this is minimising noise and distortion in extreme environments through AI-based high-performance Image Signal Processing (ISP) technology and the use of larger sensors. AI-based ISP employs deep learning to differentiate between objects and noise, effectively eliminating that noise while optimising object detail to provide real-time data that’s most conducive to AI analysis. Larger image sensors capture more light, which suppresses video noise at source, particularly in low-light conditions.
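By way of illustration only (this is not Hanwha Vision’s ISP, which runs deep learning on the camera itself), the short sketch below uses a classical OpenCV denoiser as a stand-in to show the general idea of cleaning a frame before it is handed to an analytics engine. The file names are hypothetical.

```python
# Minimal sketch: denoise a frame before AI analysis so the analytics
# engine works on cleaner input. A classical OpenCV denoiser stands in
# for the camera-side, deep-learning ISP described above.
import cv2

def preprocess_frame(frame):
    # Non-local means denoising; the strength values are illustrative only.
    return cv2.fastNlMeansDenoisingColored(frame, None, 7, 7, 7, 21)

frame = cv2.imread("lowlight_frame.jpg")  # hypothetical input frame
if frame is not None:
    clean = preprocess_frame(frame)
    cv2.imwrite("lowlight_frame_denoised.jpg", clean)
```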
In parallel, as the ethical use of AI becomes a major concern, the mandatory adoption of AI governance systems is approaching. The European Union’s AI Act applies a risk-based classification to AI systems deployed in public spaces and imposes a legal obligation on manufacturers to ensure transparency in AI from the design phase. This can only accelerate the industry’s push to build genuinely trustworthy AI.
The AI Agent Partnership: From tool to teammate
As AI evolves from straightforward detection to an agent capable of analysing complex scenes and proposing initial responses, the role of the operator will change fundamentally. Humans will delegate repetitive surveillance tasks to AI Agents, freeing themselves for more critical and high-level activity.
While previous AI systems in video surveillance merely reduced the operator’s workload by automating repetitive tasks like object search, tracking and alarm generation, the AI Agent will be able to take this a step further. It will autonomously conduct complex situational analysis, automatically execute an initial response and then recommend the most effective follow-up actions to the monitoring operator.
For example, an AI Agent can independently assess an intrusion, initiate preliminary steps such as sounding an alarm and then propose the final decision options (for example, whether to call the police) to the operator. Simultaneously, it can automatically generate a comprehensive report detailing real-time video of the intrusion area, access records, a log of the AI’s initial actions and suggested optimal response strategies.
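A loose sketch of that flow is shown below. The class and method names are hypothetical (this is not a Hanwha Vision API): the agent executes the initial response automatically, logs its actions for the report and leaves the escalation decision to the operator.

```python
# Minimal sketch of the agent flow described above: automatic initial
# response, a human decision on escalation and an auditable action log.
# All names and thresholds are hypothetical, not a vendor API.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IntrusionEvent:
    camera_id: str
    zone: str
    confidence: float

@dataclass
class AgentReport:
    event: IntrusionEvent
    actions: list = field(default_factory=list)
    recommendation: str = ""

class SurveillanceAgent:
    def handle(self, event: IntrusionEvent) -> AgentReport:
        report = AgentReport(event=event)
        if event.confidence >= 0.8:               # illustrative threshold
            self.sound_alarm(event.zone)          # automatic first response
            report.actions.append(
                f"{datetime.now().isoformat()} alarm sounded in {event.zone}")
            report.recommendation = "Escalate to police? (operator decision required)"
        else:
            report.recommendation = "Low confidence: review footage before acting."
        return report

    def sound_alarm(self, zone: str) -> None:
        print(f"[ALARM] zone={zone}")             # stand-in for a real output relay

# Usage: the operator ('Commander') receives the report and makes the final call.
report = SurveillanceAgent().handle(IntrusionEvent("cam-12", "server-room", 0.93))
print(report.actions, report.recommendation)
```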
Operators will become more like ‘Commanders’, making final decisions that require nuanced judgement, complex analysis and consideration of legal and contextual implications. They will also take on the role of AI governance manager, transparently tracking and supervising all autonomous actions and reasoning processes executed by the AI Agent. This essential function, which prevents system misuse, demands a significant elevation of the monitoring operator’s skill set.
Driving sustainable security
The explosive growth of generative AI is driving demand for energy. According to the International Energy Agency, power consumption by Data Centres will more than double by 2030 under its base case scenario due to the demand for AI.
The video surveillance industry can no longer prioritise performance without limit as it faces the dual challenge of surging high-resolution video data and the computational burden of AI at the edge. As such, ‘sustainable security’ (which prioritises operational longevity and minimising environmental impact) is set to become a core competency for achieving Total Cost of Ownership reductions and meeting Environmental, Social and Governance goals.
In order to realise sustainable security, the industry is moving towards developing low-power AI chipsets that drastically reduce power consumption, while preserving high-quality imaging and AI processing power. It’s also prioritising technologies that ensure data efficiency directly on the edge device (ie the camera).
Smart spaces powered by video intelligence
As AI is integrated into cameras and advances are made in cloud technology for large-scale data processing, the concept of a ‘Sentient Space’ (ie a space that can sense and understand) is becoming a reality.
This sees video surveillance expanding beyond simple monitoring to become a core data source for ‘digital twin’ technology, which reflects the physical environment in real-time. A ‘digital twin’ is a virtual replica of a real-world physical asset created in a computer-based virtual environment.
Currently, the AI information (metadata) extracted by AI cameras is already used as business intelligence to optimise operations in sectors such as smart cities, retail and advanced manufacturing. Moving forward, this metadata will be fused with diverse information from access control devices, Internet of Things sensors and environmental sensors to complete a unified and intelligent digital twin environment.
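As a rough sketch of that fusion step (all field names are hypothetical), the snippet below merges camera metadata, access control events and sensor readings into a single time-ordered stream that a digital twin could consume.

```python
# Minimal sketch: fuse camera metadata, access control events and sensor
# readings into one time-ordered stream for a digital twin to replay.
# Field names and values are hypothetical.
from datetime import datetime

camera_metadata = [
    {"time": datetime(2026, 1, 10, 22, 14), "source": "camera", "zone": "lobby", "object": "person"},
]
access_events = [
    {"time": datetime(2026, 1, 10, 22, 15), "source": "access", "zone": "lobby", "badge": "A-1041"},
]
sensor_readings = [
    {"time": datetime(2026, 1, 10, 22, 15), "source": "iot", "zone": "lobby", "temperature_c": 21.4},
]

# The unified view is simply the union of the streams, ordered by time,
# so the twin can reconstruct what happened in each zone.
unified = sorted(camera_metadata + access_events + sensor_readings, key=lambda e: e["time"])
for event in unified:
    print(event)
```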
This digital twin environment will revolutionise the monitoring experience. Instead of complex and fragmented screens, operators will gain an holistic view of event relationships on a map-based interface that integrates the video management system and access control systems. Within this perfectly mirrored digital space, the video system will eventually evolve into an ‘Autonomous Intelligent Space’ that deeply understands situations and manages and resolves issues independently.
Adding the latest AI technology could provide security managers or operators with greater control over system operations. For example, AI can instantly comprehend natural language prompts like: ‘Find a person who entered the server room after 10.00 pm last night’ and automatically analyse access and video records to report the results. This signifies true situational awareness that moves far beyond rigid, parameter-based searches.
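A toy sketch of the idea follows. A production system would use a language model to interpret the prompt; here, a hand-built structured query stands in for that step and runs over hypothetical access records.

```python
# Toy sketch of natural-language search over access records. A real
# system would translate the prompt with a language model; this stand-in
# runs the structured query the prompt implies. Records are hypothetical.
from datetime import datetime, time

access_records = [
    {"person": "J. Smith", "door": "server room", "time": datetime(2026, 1, 9, 22, 41)},
    {"person": "A. Patel", "door": "lobby", "time": datetime(2026, 1, 9, 23, 5)},
]

def find_entries(door: str, after: time):
    # Structured query derived from the example prompt:
    # "Find a person who entered the server room after 10.00 pm last night"
    return [r for r in access_records
            if r["door"] == door and r["time"].time() >= after]

for record in find_entries("server room", time(22, 0)):
    print(f'{record["person"]} entered the {record["door"]} at {record["time"]:%H:%M}')
```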
Hybrid architecture: distributed power
The rising cost of transmitting HD video data, coupled with data sovereignty and regulatory concerns, poses challenges for purely cloud-based systems. As such, hybrid architecture (which preserves the benefits of the cloud, while mitigating operational strain) is rapidly establishing itself as the optimal solution for the video surveillance sector.
Hybrid architecture grants end users the ultimate control and flexibility over system operations. It allows system functions to be deployed to the most efficient location based on an organisation’s business needs, budget and legal/regulatory environment. It will become a key strategy for reducing Total Cost of Ownership.
From a video surveillance standpoint, hybrid architecture maximises efficiency by flexibly distributing functions between the on-premises and cloud environments. The on-premises environment can host real-time monitoring functions and critical functions that must comply with regulations governing short-term video storage and retention. Functions involving the local processing and control of highly sensitive data are also kept on-premises to bolster data security control and ensure immediate response capabilities at the site.
Meanwhile, the cloud environment is leveraged for functions such as remote centralised management, large-scale data analysis, deep learning for AI models and long-term archiving. Using the cloud this way ensures system scalability and operational ease.
Beyond simple infrastructure separation, this architecture also supports the optimal distributed computing structure necessary for the successful operation of AI analysis-based video surveillance systems.
In this structure, edge devices (cameras and NVRs) handle the first layer of computation, performing real-time detection and transmitting only the necessary data to the cloud. This reduces network bandwidth strain and maximises speed and storage efficiency.
Following on from this, the cloud (central server) environment conducts the second layer of deep analysis and large-scale machine learning based on the filtered data from the edge, significantly enhancing the accuracy and sophistication of AI functions.
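A highly simplified sketch of that two-layer split appears below. The thresholds, field names and payloads are illustrative only, not a product specification: the edge filters detections locally and forwards only qualifying events, which the cloud then aggregates for deeper analysis.

```python
# Minimal sketch of the two-layer split described above: the edge filters
# detections locally and forwards only qualifying events to the cloud,
# which aggregates them for deeper analysis. All values are illustrative.
EDGE_CONFIDENCE_THRESHOLD = 0.75

def edge_layer(detections):
    # First layer (camera/NVR): keep only detections worth transmitting,
    # saving bandwidth and central storage.
    return [d for d in detections if d["confidence"] >= EDGE_CONFIDENCE_THRESHOLD]

def cloud_layer(filtered_events):
    # Second layer (central server): aggregate filtered events per object
    # class as a stand-in for large-scale analysis and model training.
    summary = {}
    for event in filtered_events:
        summary[event["label"]] = summary.get(event["label"], 0) + 1
    return summary

detections = [
    {"label": "person", "confidence": 0.91},
    {"label": "person", "confidence": 0.42},   # filtered out at the edge
    {"label": "vehicle", "confidence": 0.88},
]
print(cloud_layer(edge_layer(detections)))     # {'person': 1, 'vehicle': 1}
```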
New standard
In 2026, I believe that AI will be firmly established as a new standard for security infrastructure. To meet this, Hanwha Vision will deliver trustworthy data and sustainable security value for end users by providing solutions based on a hybrid architecture optimised for AI analysis and processing. 2026 looks set to be an exciting year.
John Lutz Boorman is Head of Product and Marketing at Hanwha Vision Europe (www.hanwhavision.eu)