Email is expected to remain the dominant entry point for cyberattacks, potentially accounting for as much as 90% of breaches. Phishing already accounts for 77% of attacks, and the rise of AI-generated content is accelerating this trend. Messages are becoming more personalised, fluent and context-aware, making them harder to detect even for trained employees.
As collaboration platforms tighten access controls and monitoring, more everyday work is shifting back to email, increasing exposure. Attackers are moving away from broad, high-volume campaigns and instead targeting specific individuals, often impersonating executives or finance staff. The growing use of deepfake content adds further pressure, creating scenarios where a single convincing message can bypass multiple layers of defence.
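The executive-impersonation pattern described above often hinges on a mismatch between a trusted display name and the actual sending address. A minimal sketch of that check, assuming a hypothetical `EXEC_DIRECTORY` lookup (real systems would pull this from an identity provider and combine it with SPF/DKIM/DMARC results):

```python
# Hypothetical directory of executives and their legitimate addresses;
# in practice this would come from an identity provider, not a constant.
EXEC_DIRECTORY = {
    "jane doe": "jane.doe@example.com",
}

def impersonation_flags(display_name: str, from_addr: str) -> list[str]:
    """Flag simple signs of executive impersonation in a From header."""
    flags = []
    name = display_name.strip().lower()
    addr = from_addr.strip().lower()
    legit = EXEC_DIRECTORY.get(name)
    if legit and addr != legit:
        # Display name matches a known executive, address does not.
        flags.append("display-name-mismatch")
    if legit and addr.split("@")[-1] != legit.split("@")[-1]:
        # Lookalike or external domain, e.g. examp1e.com vs example.com.
        flags.append("lookalike-or-external-domain")
    return flags
```

A header like `"Jane Doe" <jane.doe@examp1e.com>` would raise both flags, while mail from the directory-listed address raises none; such heuristics are one layer among many, not a complete defence.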
Rising productivity demands and reduced headcount are pushing employees to their limits. In response, many are turning to unauthorised AI tools to keep up, often without understanding the security implications. Proprietary data is being shared with consumer platforms or used to train personal models that sit outside organisational oversight.
This behaviour is expanding the attack surface rapidly. By mid-2026, organisations may be dealing with far more rogue AI agents than unauthorised cloud applications. At the same time, attackers are actively exploiting insider access and probing outsourced operations in regions where controls may be weaker. Human behaviour, AI usage and access management are increasingly inseparable risks that must be addressed together.
Security operations centres have long been overwhelmed by alert volume. Analysts spend significant time triaging and closing false positives, only for queues to refill. This cycle contributes to burnout and increases the likelihood that real threats are missed.
In 2026, many organisations are turning to AI-driven systems to handle alert enrichment, correlation and even resolution of routine incidents before a human becomes involved. These tools can pull context from multiple sources, adapt to emerging threats and reduce response times from days to minutes. When paired with clear oversight and accountability, they allow security teams to shift focus from constant firefighting to higher-value risk management.
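The enrich-correlate-triage flow described above can be sketched in a few lines. This is an illustrative toy, not a product design: `KNOWN_BAD` and `ASSET_CRITICALITY` stand in for threat-intelligence feeds and asset inventories, and the triage thresholds are arbitrary assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    alert_id: str
    source: str          # e.g. "edr", "email-gateway"
    indicator: str       # e.g. a source IP or file hash
    severity: int        # 1 (low) .. 5 (critical)
    context: dict = field(default_factory=dict)

# Hypothetical enrichment sources; real deployments query
# threat-intel feeds, asset inventories and identity providers.
KNOWN_BAD = {"198.51.100.7"}
ASSET_CRITICALITY = {"payroll-db": 5, "dev-laptop-42": 2}

def enrich(alert: Alert) -> Alert:
    """Attach context from multiple sources before any human triage."""
    alert.context["known_bad"] = alert.indicator in KNOWN_BAD
    alert.context["asset_criticality"] = ASSET_CRITICALITY.get(
        alert.context.get("asset", ""), 1)
    return alert

def correlate(alerts: list[Alert]) -> dict[str, list[Alert]]:
    """Group alerts sharing an indicator: one incident, not N tickets."""
    incidents: dict[str, list[Alert]] = {}
    for a in alerts:
        incidents.setdefault(a.indicator, []).append(a)
    return incidents

def triage(alert: Alert) -> str:
    """Auto-resolve routine noise; escalate only what needs a human."""
    if alert.context["known_bad"] or alert.context["asset_criticality"] >= 4:
        return "escalate"
    if alert.severity <= 2:
        return "auto-close"
    return "queue"
```

Here an alert on a known-bad indicator is escalated while low-severity noise is closed automatically; the oversight and accountability the text mentions would sit around the `triage` decision, with auto-closures logged and sampled for review.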
The challenges ahead are significant but not insurmountable. Organisations that succeed will be those that recognise the growing interdependence between people and AI, govern new technologies with the same rigour as human access, and use automation to reduce the operational burden on security teams. In a threat landscape defined by speed and scale, adaptability will matter as much as technology itself.