Not long ago, cybersecurity professionals spent their days deep in the trenches. They combed through logs, chased false alarms, patched vulnerabilities, and responded to incidents as they emerged. The job was largely reactive and hands-on.
Today, the landscape is shifting.
Agentic AI systems, capable of autonomous decision-making and action, are taking on the tasks once performed by humans. From monitoring networks and spotting intrusions to automatically fixing vulnerabilities, agentic AI operates at a scale and speed no human team could match.
Unlike traditional AI, which provides analysis and recommendations, agentic AI can take action in the real world - such as interacting with systems or operating robotics. An agentic AI system can also adjust its behaviour based on new information and complete tasks autonomously, at a level that’s not possible in traditional AI systems.
In cybersecurity, this unlocks powerful ways for defenders to strengthen their security posture, but it also introduces new types of risk. Autonomous agentic AI systems may access tools, generate outputs that trigger downstream effects, or interact with sensitive data in ways that are difficult to predict or control.
As agentic AI takes on frontline responsibilities in cybersecurity, how will it reshape risks, opportunities, and the very role of the professionals who safeguard organisations?
One thing is clear: the path ahead demands a dual focus - learning to defend both with and against agentic AI.
Reporting to work: agentic AI in cybersecurity
What does agentic AI look like as the newest hire in the security operations centre? Picture an analyst that never sleeps, never tires, and can process more data in a minute than a human team could in a month.
Already, cybersecurity companies are deploying these systems - not just to detect threats, but to connect the dots and act. Agentic AI can investigate an alert, correlate signals across multiple systems, determine whether an attack is underway, and isolate compromised devices, all without waiting for human intervention.

This marks a big shift: AI-powered cyber defence is moving beyond “flagging” to “fixing”.
Some of the tasks agentic AI now handles include:
Real-time monitoring of network traffic
Autonomous intrusion detection and response (see the triage sketch after this list)
Vulnerability scanning and patching before attackers exploit gaps
Predictive threat intelligence that spots emerging risks
Offensive testing, simulating attacks to uncover weaknesses
Case management, such as logging and categorising incidents while suggesting next steps
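To make the shift from flagging to fixing concrete, here is a minimal sketch, in Python, of how an autonomous triage loop might work. Everything in it is illustrative: the helper functions, thresholds, and data sources are hypothetical stand-ins for real SIEM, EDR, and firewall integrations, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    rule: str
    severity: int  # 1 (low) to 10 (critical)

# The helpers below are hypothetical stand-ins for real integrations
# (SIEM queries, EDR telemetry, firewall APIs).
def correlate_signals(alert: Alert) -> list[str]:
    """Pull related evidence for the host from other systems."""
    evidence = []
    # e.g. query a SIEM for recent logins, an EDR for process trees...
    if alert.rule == "beaconing":
        evidence.append("periodic outbound traffic to rare domain")
    return evidence

def confidence_score(alert: Alert, evidence: list[str]) -> float:
    """Crude evidence-weighted score; real agents use learned models."""
    return min(1.0, alert.severity / 10 + 0.2 * len(evidence))

def isolate_host(host: str) -> None:
    print(f"[action] isolating {host} from the network")

def open_ticket(alert: Alert, score: float) -> None:
    print(f"[action] ticket opened for {alert.host} (confidence {score:.2f})")

def triage(alert: Alert, act_threshold: float = 0.8) -> None:
    evidence = correlate_signals(alert)
    score = confidence_score(alert, evidence)
    if score >= act_threshold:
        isolate_host(alert.host)   # autonomous containment
    open_ticket(alert, score)      # always leave an audit trail

triage(Alert(host="ws-042", rule="beaconing", severity=7))
```

In practice, the scoring step is where most of the engineering effort lives; the value of the loop is that containment and documentation happen in seconds rather than hours.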
What sets agentic AI apart is its adaptability. Unlike traditional tools that rely on fixed rules, these systems learn from new attack techniques and refine their defences over time. They can even hunt for hidden threats lurking inside an organisation’s network, proactively looking for anomalies rather than waiting for something to break.
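That proactive anomaly hunting typically rests on some form of unsupervised anomaly detection. As a hedged illustration (not any particular product's method), the sketch below trains scikit-learn's IsolationForest on invented baseline network-flow features and flags new flows that deviate from that baseline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Invented feature matrix: one row per network flow,
# columns = [bytes sent, duration (s), distinct ports contacted].
baseline = rng.normal(loc=[5_000, 30, 3], scale=[1_000, 10, 1], size=(500, 3))

# Train on the baseline; flows far outside it will score as anomalies.
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

new_flows = np.array([
    [5_200, 28, 3],      # looks like normal traffic
    [90_000, 2, 450],    # possible exfiltration or port scan
])
for flow, label in zip(new_flows, detector.predict(new_flows)):
    status = "ANOMALY - investigate" if label == -1 else "ok"
    print(flow, "->", status)
```

The contamination rate is the key tuning choice: set it too high and analysts drown in false positives; too low and slow-moving intrusions slip through.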
The benefits of agentic AI in cybersecurity are clear: faster response times, fewer false positives, and defences that scale as attacks grow more complex. Organisations will be able to take a more proactive stance, predicting and preventing threats before they cause damage.
A double-edged sword
Give it a simple text prompt, and an agentic AI will carry it out - whether that means booking flights, gathering data and insights, or even controlling physical devices through robotics.
Tech giants like OpenAI, Google, and Meta are already building agentic AI that can take real-world actions with minimal human input. These systems have the potential to enable game-changing productivity for businesses; for example, by automating customer onboarding or streamlining compliance checks.
But for attackers, this same technology lowers the barrier to entry like never before. It could be used to launch phishing campaigns, probe networks for vulnerabilities, or deploy malware – at machine speed and scale, without an expert hacker behind the keyboard. Research reported by TechRadar found that agentic AI systems are already capable of executing a broad spectrum of malicious activities. While current capabilities are still in their early stages, the potential is there for automated attacks at scale in the not-so-distant future.
The risks of agentic AI systems are manifold:
Exploitation: Attackers could manipulate AI decision-making, for instance through prompt injection, to redirect outcomes
False positives or negatives at scale: One misstep could paralyse entire networks or leave threats undetected
Opacity and explainability of decisions: AI agents often operate as “black boxes”, making it hard to audit or understand their choices
The balance between autonomy and control: Too much freedom risks chaos; too much restraint undercuts their speed advantage (a guardrail sketch follows this list)
Regulatory and ethical questions: Who carries responsibility when an AI agent causes harm?
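One common way to strike the autonomy-control balance flagged in the list above is an action allowlist paired with a human-approval gate: low-risk actions run autonomously, while high-impact ones queue for analyst sign-off. The policy below is a minimal, hypothetical sketch, assuming invented action names and a stub approval workflow rather than any standard framework.

```python
# Hypothetical guardrail policy: route each action an agent proposes through
# a check that decides whether it may run autonomously or needs human sign-off.
AUTONOMOUS_ACTIONS = {"open_ticket", "enrich_alert", "quarantine_file"}
APPROVAL_REQUIRED = {"isolate_host", "disable_account", "block_ip_range"}

def page_analyst(action: str) -> bool:
    """Stand-in for a real approval workflow (chat prompt, ticket, pager)."""
    print(f"[approval needed] '{action}' queued for analyst sign-off")
    return False  # deny until a human explicitly approves

def authorize(action: str) -> bool:
    if action in AUTONOMOUS_ACTIONS:
        return True                  # inside the agent's standing mandate
    if action in APPROVAL_REQUIRED:
        return page_analyst(action)  # human-in-the-loop gate
    return False                     # default-deny anything unrecognised

print(authorize("open_ticket"))   # True: runs autonomously
print(authorize("isolate_host"))  # False: waits for human approval
```

The default-deny fallback is the point of the design: it keeps an agent's blast radius bounded even when its internal reasoning is opaque.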
The future of AI-powered cybersecurity is human-centered
With agentic AI reshaping incident response and compliance at scale, cybersecurity leaders can now act as “orchestrators” rather than “operators”.
Their main role is to guide, govern and oversee AI systems that do much of the operational heavy lifting. This frees human defenders from repetitive tasks, so they can redirect time and resources to focus on:
Governance and oversight: Ensuring AI systems remain aligned with business goals, ethical standards, and compliance frameworks
Risk management: Navigating an environment where regulations struggle to keep pace with technology
AI literacy: Understanding how models work, their limitations, and potential biases
Leadership and communication: Bridging the gap between technical and non-technical teams within an organisation to build greater resilience
Enabling collaborative intelligence
Agentic AI promises greater speed, scale, and the ability to act autonomously in ways humans cannot match.
“In cybersecurity, agentic AI provides a significant leap in enabling organisations to proactively detect and respond to threats,” says Mr Khoong Chan Meng, Chief Executive Officer of NUS-ISS. “It will empower IT teams to better keep pace with rapidly evolving attack vectors – but unlocking its full potential in cybersecurity requires trust, and that trust must be built on safeguards.”
He adds: “Strong governance frameworks for agentic AI are the foundation. Organisations need clear policies defining how agentic AI systems are deployed, when humans must step in, and what boundaries must never be crossed. Paired with ethics and compliance frameworks, these guardrails ensure accountability at both the team and organisational level.”
One thing is for sure: The future of cybersecurity is collaborative intelligence – where humans and agentic AI work closely together, each amplifying the other’s strengths. To prepare for this future, leaders and practitioners must develop the expertise to design, deploy, and manage these systems responsibly, Mr Khoong says.
To help professionals bridge this emerging skills gap and lead in this field, NUS-ISS offers a comprehensive suite of cybersecurity programmes. Each is designed to equip learners with the knowledge and tools to defend against today’s complex threats. To learn more, visit our Executive Education Programme on Cybersecurity.