Author: Jim Richberg, Head of Cyber Policy & Global Field CISO, Fortinet

Artificial Intelligence is rapidly reshaping how organizations operate, make decisions, and use data. Less visible, however, is how quickly AI is reshaping risk, particularly for public institutions charged with protecting citizens’ data, privacy, and trust.

AI has reached a tipping point. Its rapid adoption is delivering gains in productivity and innovation while simultaneously altering the balance of power between attackers and defenders in cyberspace. AI is increasing the variety, volume, and velocity of digital threats facing governments, critical infrastructure, and individual users.

While there are two main types of AI, recent attention has focused almost exclusively on the rise of GenAI, enabled by advances in computing and deep learning models that have made it both powerful and widely accessible. The other form, predictive or discriminative modeling that classifies data and predicts outcomes, continues to power benign uses such as automation and cyber defence. But GenAI is proving to be a rising tide that lifts all boats, friend and foe alike.

GenAI can generate entirely new content, including software code. This is poised to democratize software development, enabling non-programmers to write software (a practice called “vibe coding”). This creates risk when done by employees in the workplace, and even more so when done by cyber criminals. Most malicious cyber actors have relied on code created by and rented from a relative handful of criminal programmers. Vibe coding will let less technically savvy criminals save money by writing their own code, increasing both the number and diversity of cyber attacks hitting Canadian networks. And because GenAI can generate exploit code faster than human programmers, there will be less time between the discovery of a new vulnerability and its use against network defenders.

GenAI is also amplifying social engineering attacks. More convincing phishing emails, cloned voices, and even deepfake videos blur the line between authentic and artificial interactions, making trust harder to establish and easier to exploit. For governments that rely on digital public engagement, this erosion of trust poses a significant risk not just to security but also to institutional credibility.

AI is also being deployed defensively to identify vulnerabilities, prioritize remediation, and help security teams manage the growing volume of data generated by complex digital environments. AI-powered tools can correlate signals across systems and respond to multi-vector attacks at a scale that traditional, stove-piped security solutions struggle to match. 

Yet a more insidious problem remains. “N-day” vulnerabilities, flaws for which fixes already exist, continue to be widely exploited because organizations fail to apply patches in a timely manner. Attackers do not need AI to find new exploits or write new code when known vulnerabilities remain unpatched. While AI can help with vulnerability management and automated patching, AI alone cannot compensate for gaps in priorities or resourcing.

The AI genie is out of the bottle. Harnessing its benefits while managing its risks will require adaptive cybersecurity and collaboration between governments, industry, and public institutions. Balancing innovation with security and privacy demands a clear-eyed assessment of the capabilities and limits of AI governance. Legacy assumptions and lagging regulations will not be enough in a world where AI is already shaping outcomes.

For those seeking practical insights into how organizations are addressing these challenges today, the upcoming webinar “Securing AI Usage in Practice: Identity, Visibility, and Guardrails in 2026” examines emerging approaches to AI oversight, risk management, and responsible deployment in real-world environments.

Learn more and register here: https://bit.ly/Securing-AI-Usage-in-Practice