
How Agentic AI will Transform the Role of Cybersecurity Professionals

Not long ago, cybersecurity professionals spent their days deep in the trenches. They combed through logs, chased false alarms, patched vulnerabilities, and responded to incidents as they emerged. The job was largely reactive and hands-on in nature.

Today, the landscape is shifting.

Agentic AI systems, capable of autonomous decision-making and action, are taking on the tasks once performed by humans. From monitoring networks and spotting intrusions to automatically fixing vulnerabilities, agentic AI operates at a scale and speed no human team could match.

Unlike traditional AI, which provides analysis and recommendations, agentic AI can take action in the real world - such as interacting with systems or operating robotics. An agentic AI system can also adjust its behaviour based on new information and complete tasks autonomously, at a level that’s not possible in traditional AI systems.

In cybersecurity, this unlocks powerful ways for defenders to strengthen their security posture, but it also introduces new types of risk. Autonomous agentic AI systems may access tools, generate outputs that trigger downstream effects, or interact with sensitive data in ways that are difficult to predict or control.

As agentic AI takes on frontline responsibilities in cybersecurity, how will it reshape risks, opportunities, and the very role of the professionals who safeguard organisations?

One thing is clear: the path ahead demands a dual focus - learning to defend both with and against agentic AI.


Reporting to work: agentic AI in cybersecurity


What does agentic AI look like as the newest hire in the security operations centre? Picture an analyst that never sleeps, never tires, and can process more data in a minute than a human team could in a month.

Already, cybersecurity companies are deploying these systems - not just to detect threats, but to connect the dots and act. Agentic AI can investigate an alert, correlate signals across multiple systems, determine whether an attack is underway, and isolate compromised devices, all without waiting for human intervention.
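The investigate-correlate-act loop described above can be sketched in a few lines. This is a hypothetical illustration, not a real product's API: the `Alert` structure, signal names, and response actions are all assumptions made for the example.

```python
# Hypothetical sketch of an agentic triage loop: investigate an alert,
# correlate its signals against a known attack pattern, then act or
# stand down without waiting for human intervention.
from dataclasses import dataclass

@dataclass
class Alert:
    device_id: str
    signals: list  # e.g. ["failed_logins", "outbound_beacon"]

# Signals that, seen together, suggest an active intrusion (illustrative)
ATTACK_PATTERN = {"failed_logins", "outbound_beacon"}

def triage(alert: Alert) -> str:
    """Correlate the alert's signals and decide on a response."""
    correlated = set(alert.signals) & ATTACK_PATTERN
    if correlated == ATTACK_PATTERN:
        return f"isolate:{alert.device_id}"   # full match: act autonomously
    elif correlated:
        return f"monitor:{alert.device_id}"   # partial match: keep watching
    return "dismiss"                          # benign noise

print(triage(Alert("ws-042", ["failed_logins", "outbound_beacon"])))
```

In a real deployment the correlation step would draw on many data sources and learned models rather than a fixed set, but the shape of the decision is the same: the agent moves from flagging to fixing on its own.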

This marks a big shift: AI-powered cyber defence is moving beyond “flagging” to “fixing”.

Some of the tasks agentic AI now handles include:

Real-time monitoring of network traffic

Autonomous intrusion detection and response

Vulnerability scanning and patching before attackers exploit gaps

Predictive threat intelligence that spots emerging risks

Offensive testing, simulating attacks to uncover weaknesses

Case management, such as logging and categorising incidents while suggesting next steps

What sets agentic AI apart is its adaptability. Unlike traditional tools that rely on fixed rules, these systems learn from new attack techniques and refine their defences over time. They can even hunt for hidden threats lurking inside an organisation’s network, proactively looking for anomalies rather than waiting for something to break.

The benefits of agentic AI in cybersecurity are clear: faster response times, fewer false positives, and defences that scale as attacks grow more complex. Organisations will be able to take on a more proactive stance, predicting and preventing threats before they cause damage.

A double-edged sword

A simple text prompt, and your agentic AI executes it - be it booking flights, gathering data and insights, or even controlling physical devices through robotics.

Tech giants like OpenAI, Google, and Meta are already building agentic AI that can take real-world actions with minimal human input. These systems have the potential to enable game-changing productivity for businesses; for example, by automating customer onboarding or streamlining compliance checks.

But for attackers, it lowers the barrier to entry like never before. The same capability could launch phishing campaigns, probe networks for vulnerabilities, or deploy malware – at machine speed and scale, without requiring an expert hacker behind the keyboard. Research by TechRadar found that agentic AI systems are already capable of executing a broad spectrum of malicious activities. While current capabilities are still in their early stages, the potential is there for automated attacks at scale in the not-so-distant future.

The risks of agentic AI systems are manifold:

Exploitation: Attackers could manipulate AI decision-making to redirect outcomes

False positives or negatives at scale: One misstep could paralyse entire networks or leave threats undetected

Opacity and explainability of decisions: AI agents often operate as “black boxes”, making it hard to audit or understand their choices

The balance between autonomy and control: Too much freedom risks chaos; too much restraint undercuts their speed advantage

Regulatory and ethical questions: The responsibility carried when an AI agent causes harm

The future of AI-powered cybersecurity is human-centered 

Cybersecurity leaders need to be clear-eyed about the limitations of agentic AI. The truth is that cybersecurity cannot be left entirely to machines – especially when human judgement, accountability, or nuance is required.

Traditionally, organisations have attempted to address shortfalls in cybersecurity by building defences around systems. For example, cybersecurity and compliance training often serves as the default response to mitigating “human error”. But this approach is outdated: it places blame on employees when agentic AI is now the one taking action.

What’s needed is a shift toward human-centric, real-time protection. That means two things:

User-focused controls: By strengthening authentication, monitoring behaviour, and deploying phishing-resistant technologies, cybersecurity teams can catch risky user behaviours before they escalate

Threat mapping: Treat human risk with the same rigour as software vulnerabilities – by tracking, prioritising, and mitigating risky user behaviours systematically to enable more targeted interventions

As organisations adopt agentic AI, the human-in-the-loop process becomes critical in ensuring agents perform as intended. These humans take on significant responsibilities – such as approving exceptions and requests from agentic AI. Their input will also influence the future behaviour of these self-learning systems.

Here is how the division of labour between agentic AI and humans should ideally look: agentic AI handles routine detection and mitigation, while escalating complex or high-stakes decisions – like terminating accounts or approving access to sensitive data – to qualified professionals.
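That division of labour amounts to a simple routing rule, sketched below. The action names and the set of high-stakes actions are illustrative assumptions, not drawn from any particular product.

```python
# Hypothetical sketch of the human-in-the-loop routing described above:
# routine actions run autonomously; high-stakes actions are escalated
# to a qualified professional for approval.
HIGH_STAKES = {"terminate_account", "grant_sensitive_access"}

def route(action: str) -> str:
    """Return who handles a proposed action: the agent or a human."""
    if action in HIGH_STAKES:
        return "escalate_to_human"   # human approval required
    return "auto_execute"            # routine detection and mitigation

print(route("quarantine_file"))      # routine, handled by the agent
print(route("terminate_account"))    # high-stakes, escalated
```

The design choice here is that the boundary is declared explicitly up front, rather than left to the agent's own judgement – which is exactly the kind of guardrail governance frameworks are meant to enforce.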

The key to unlocking the power of collaborative intelligence is to strike the right balance. At the end of the day, the aim of deploying agentic AI is to amplify, not replace, human defenders.

The evolving role of cybersecurity professionals

In the past, cybersecurity was a game of “cat and mouse”, where hackers launched attacks and defenders blocked intrusions. Professionals were often seen as “operators” who patched vulnerabilities, traced malicious code, and responded to alerts.

But today, with agentic AI reshaping incident response and compliance at scale, cybersecurity leaders can now act as orchestrators. Their main role: to guide, govern, and oversee AI systems that do much of the operational heavy lifting. This frees human defenders from repetitive tasks, allowing them to redirect their time and resources to higher-impact work.

What would the evolved mandate of cybersecurity professionals look like? A lot less like firefighting, and a lot more like leadership, as they can now focus on:

Governance and oversight: Ensuring AI systems remain aligned with business goals, ethical standards, and compliance frameworks

Risk management: Navigating an environment where regulations struggle to keep pace with technology

AI literacy: Understanding how models work, their limitations, and potential biases

Leadership and communication: Bridging the gap between technical and non-technical teams within an organisation to build greater resilience

Enabling collaborative intelligence

Agentic AI promises greater speed, scale, and the ability to act autonomously in ways humans cannot match.

In cybersecurity, it provides a significant leap in enabling organisations to proactively detect and respond to threats. According to Mr Khoong Chan Meng, Chief Executive Officer of NUS-ISS, it will empower IT teams to better keep pace with rapidly evolving attack vectors.

But unlocking its full potential in cybersecurity requires trust, and “that trust must be built on safeguards”, adds Mr Khoong.

“Strong governance frameworks for agentic AI are the foundation. Organisations need clear policies defining how agentic AI systems are deployed, when humans must step in, and what boundaries must never be crossed.”

Paired with ethics and compliance frameworks, these guardrails ensure accountability at both the team and organisational level. “Cybersecurity leaders also need to ensure that these frameworks are embedded by design, and not as an afterthought,” he says.

Because one thing is for sure: the future of cybersecurity is human and agentic AI working closely together, each amplifying the other’s strengths, Mr Khoong concludes.

As agentic AI continues to redefine the cybersecurity landscape, professionals must not only adapt but also lead with foresight, governance, and resilience. The need for continuous learning has never been more urgent. NUS-ISS offers a comprehensive suite of cybersecurity programmes, from professional certifications to cyber risk awareness courses, designed to equip learners with the knowledge and tools to defend against today’s complex threats. To learn more, visit our Executive Education Programme on Cybersecurity.
