Aaron Momin
Contributor

Defense at scale: How agentic AI secures without extra headcount

Opinion
Sep 16, 2025 | 9 mins

In the AI cyber arms race, big security teams won’t save you — autonomous agents just might.


As artificial intelligence (AI) rapidly gains momentum, financial services companies are racing to scale operations while facing a major challenge: their cybersecurity needs grow exponentially while they struggle to hire the skilled security professionals required to meet them. This staffing crisis isn’t just about unfilled positions; it’s about survival in an environment where AI-powered attacks are becoming more sophisticated and the cost of breaches continues to skyrocket.

While hiring cybersecurity talent will remain important, deploying agentic AI to scale defensive capabilities could ease headcount constraints without requiring significant staffing increases. Having spent over 30 years in cybersecurity, I’ve witnessed firsthand how speed has become the new currency of defense. The organizations that thrive are those that can rapidly identify threats and respond autonomously, not those with the largest security teams.

The scale-up security paradox

Financial services firms face a perfect storm of cybersecurity challenges as they grow. While these firms double their transaction volumes and customer bases annually, the dearth of available cybersecurity talent means qualified professionals remain nearly impossible to hire. Meanwhile, bad actors are using AI to rapidly create better phishing campaigns, deepfakes and customized malware. Consider the AI-powered deepfakes that can fool finance workers into transferring millions, like the attack in Hong Kong where criminals used video deepfakes to trick an employee into transferring $25 million.

Despite these attacks, firms are not steering away from AI investment. The 2024 NVIDIA State of AI in Financial Services report revealed that more than 90% of respondents saw a positive impact on revenue from AI. Cybersecurity, meanwhile, was the fastest-growing area of concern, with over a third of organizations now assessing or investing in AI for security purposes.

What firms may not recognize is that traditional cybersecurity approaches are failing because they largely depend on human analysts to process alerts, investigate threats and coordinate responses. When attack volumes surge exponentially but security teams remain unchanged, the result is slower responses, more undetected threats and ultimately, successful breaches.

The autonomous advantage: Speed as currency

The key lies in understanding that speed is the defining competitive advantage in cybersecurity. Google’s recent success with its “Big Sleep” AI agent is a good example. In January 2025, Big Sleep became the first AI agent to prevent a cyberattack, detecting an SQLite vulnerability (SQLite is a widely used database engine embedded in countless applications, mobile devices and web services) and disrupting its exploitation. The flaw had gone undetected by human reviewers and could have been swiftly exploited had the agent not caught it.

This underscores a shift in strategy from reactive to proactive defense. Rather than waiting for attacks to occur and then scrambling to respond, agentic AI can potentially identify vulnerabilities, predict attack patterns and implement countermeasures before threats can be exploited.

The speed advantage becomes increasingly important in cases like distributed denial-of-service (DDoS) attacks, which attempt to disrupt the normal traffic of a server, service or network by overwhelming the target or its surrounding infrastructure with a flood of internet traffic. When nation-state actors launch DDoS attacks to bring down networks, an agentic AI deployed at the network perimeter can potentially identify attack patterns, analyze traffic anomalies and modify firewall rules autonomously, thereby eliminating the response delays that typically allow attacks to succeed.
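
To make the idea concrete, here is a deliberately simplified Python sketch of that perimeter logic. The class names, thresholds and the idea of returning a rule object are illustrative assumptions, not any vendor’s implementation; a real deployment would learn baselines from live traffic and push rules through the firewall’s own API rather than printing them.

```python
from __future__ import annotations

import time
from collections import defaultdict
from dataclasses import dataclass

# Illustrative thresholds only; a production agent would tune these against
# observed baseline traffic and coordinate with upstream DDoS mitigation.
REQUESTS_PER_WINDOW = 1_000   # max requests per source IP per window
WINDOW_SECONDS = 10

@dataclass
class FirewallRule:
    source_ip: str
    action: str      # e.g. "drop"
    reason: str
    created_at: float

class PerimeterAgent:
    """Toy perimeter agent: counts requests per source and drafts block rules."""

    def __init__(self) -> None:
        self.request_counts: dict[str, int] = defaultdict(int)
        self.blocked: dict[str, FirewallRule] = {}

    def observe(self, source_ip: str) -> FirewallRule | None:
        """Record one request; return a block rule if the source looks abusive."""
        self.request_counts[source_ip] += 1
        if (source_ip not in self.blocked
                and self.request_counts[source_ip] > REQUESTS_PER_WINDOW):
            rule = FirewallRule(
                source_ip, "drop",
                f"exceeded {REQUESTS_PER_WINDOW} requests in {WINDOW_SECONDS}s",
                time.time())
            self.blocked[source_ip] = rule
            return rule   # in practice: push to the firewall API, not just return
        return None

    def reset_window(self) -> None:
        """Call every WINDOW_SECONDS to start a fresh counting window."""
        self.request_counts.clear()

# A flood from one source triggers an autonomous block on the 1,001st request.
agent = PerimeterAgent()
rule = None
for _ in range(1_001):
    rule = agent.observe("203.0.113.7") or rule
print(rule)
```

The point is not this particular heuristic but the loop: detect, decide and act within the same automated cycle, without waiting on a human to approve each rule.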

Strategic implementation: Building your agent defense network

Successfully deploying agentic AI means moving beyond a traditional “one-size-fits-all” approach. Instead, organizations should build teams of specialized agents, each designed for a specific function, working within coordinated defense networks; a brief Python sketch of this coordination pattern follows the list. Such teams could include:

  • Third-party risk assessment automation. Agentic AI can assess suppliers, compile risk data from multiple sources and conduct automated evaluations that would take weeks if performed manually. Given that breaches continue to originate from third- and fourth-party suppliers, automating this due diligence helps maintain visibility into supply chain vulnerabilities.
  • Data protection intelligence. Agentic AI can determine where sensitive data resides and who has access to it, and can identify unauthorized access patterns using predictive analytics. These agents can process terabytes of access logs and behavioral data to spot unusual patterns that humans might miss entirely or take days to discover.
  • Security operations enhancement. Agentic AI can analyze security event data in real time, connecting information from multiple sources to identify real threats while reducing false positives. Industry research predicted that in 2025, AI in cybersecurity would move quickly from chatbots to agent-driven approaches, with organizations leveraging automation for threat detection and autonomous response.
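
As a rough illustration of that coordination pattern, the Python sketch below routes events to specialist “agents.” Everything here is a stand-in: the specialists are plain functions rather than LLM-backed agents, and the category names and the coordinator class are hypothetical.

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Callable

@dataclass
class SecurityEvent:
    category: str   # e.g. "third_party", "data_access", "soc_alert"
    payload: dict

# Each specialist is a plain function here; in practice it would be an
# LLM-backed agent with its own tools, data sources and guardrails.
def assess_supplier(event: SecurityEvent) -> str:
    return f"supplier risk review queued for {event.payload.get('vendor', 'unknown')}"

def review_data_access(event: SecurityEvent) -> str:
    return f"access anomaly scored for user {event.payload.get('user', 'unknown')}"

def triage_soc_alert(event: SecurityEvent) -> str:
    return f"alert {event.payload.get('id', '?')} correlated and prioritized"

class DefenseCoordinator:
    """Routes each event to the right specialist and collects its finding."""

    def __init__(self) -> None:
        self.agents: dict[str, Callable[[SecurityEvent], str]] = {
            "third_party": assess_supplier,
            "data_access": review_data_access,
            "soc_alert": triage_soc_alert,
        }

    def handle(self, event: SecurityEvent) -> str:
        agent = self.agents.get(event.category)
        if agent is None:
            return "escalated to human analyst"  # unknown categories stay with people
        return agent(event)

coordinator = DefenseCoordinator()
print(coordinator.handle(SecurityEvent("soc_alert", {"id": "EVT-1042"})))
```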

Assessment framework: Choosing the right solutions

Not all agentic AI solutions are created equal. Financial organizations should evaluate potential platforms against three critical criteria:

  • Accuracy and transparency. The most sophisticated AI agent is worthless if it generates false positives that overwhelm security teams or, worse, false negatives that allow real threats through. Look for solutions that clearly demonstrate their reasoning, their planned steps and an audit trail of the systems consulted to identify the threat. Transparency builds stronger human-AI partnerships in which humans maintain oversight while agents handle structured, repetitive tasks at scale.
  • Integration capabilities. AI agents must seamlessly integrate within existing security infrastructure. The best solutions can ingest data from multiple security tools, compare information across platforms and execute responses through established frameworks. Avoid solutions that require a complete overhaul of existing tools.
  • Governance and control. As regulatory bodies have noted, AI agents need to operate within well-defined parameters and under proper human oversight. Evaluate how solutions handle model governance, provide audit trails and maintain data lineage and accountability for autonomous decisions; a minimal decision-record sketch follows this list.
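
The sketch below shows, under the same hedged assumptions, what such an auditable decision record could look like: the agent’s conclusion, its stated reasoning, the systems it consulted and whether the action is human-gated. The field names and the “quarantine_host” example are hypothetical, not any particular vendor’s schema.

```python
from __future__ import annotations

import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentDecision:
    """One auditable record: what the agent concluded, why, which systems it
    consulted, and whether the governance policy requires human approval."""
    action: str                # e.g. "quarantine_host"
    reasoning: str             # plain-language explanation of the conclusion
    sources: list[str]         # systems consulted (SIEM, EDR, threat intel, ...)
    requires_approval: bool    # governance boundary: autonomous vs. human-gated
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_audit_log(self) -> str:
        return json.dumps(asdict(self))

decision = AgentDecision(
    action="quarantine_host",
    reasoning="EDR flagged credential dumping; SIEM shows lateral movement attempts",
    sources=["edr", "siem", "threat-intel-feed"],
    requires_approval=True,    # high-impact actions stay human-gated
)
print(decision.to_audit_log())
```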

Real-world success stories

The effectiveness of autonomous security isn’t theoretical; it’s being proven daily across financial services. Recent research shows that the share of chief operating officers who had implemented AI-powered automated cybersecurity management systems jumped from 17% in May 2024 to 55% in August 2024, driven largely by recognition of the technology’s ability to identify fraudulent activities, detect anomalies and provide real-time threat assessments.

Financial institutions are using AI agents for fraud detection, where AI systems monitor transaction patterns in real-time, learn from new fraud types and take immediate action by alerting compliance teams or freezing suspicious accounts. Teams of AI agents work with other systems to retrieve additional data, simulate potential fraud scenarios and investigate irregularities much faster than human analysts can.
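
A deliberately simplified sketch of that loop is below. The static rules and thresholds stand in for a trained anomaly model, and the account and country values are placeholders; the point is the pattern of scoring a transaction, acting autonomously on clear cases and escalating ambiguous ones.

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float
    country: str

# Hypothetical rules standing in for a learned model; a real agent would combine
# a trained anomaly detector with account history and peer-group behavior.
HIGH_AMOUNT = 10_000.0
HIGH_RISK_COUNTRIES = {"XX"}   # placeholder code

def score_transaction(tx: Transaction, usual_countries: set[str]) -> float:
    score = 0.0
    if tx.amount > HIGH_AMOUNT:
        score += 0.5
    if tx.country not in usual_countries:
        score += 0.3
    if tx.country in HIGH_RISK_COUNTRIES:
        score += 0.3
    return min(score, 1.0)

def act_on_score(tx: Transaction, score: float) -> str:
    """Autonomous action for clear cases, human escalation for ambiguous ones."""
    if score >= 0.8:
        return f"freeze account {tx.account} and notify compliance"
    if score >= 0.5:
        return f"hold transaction on {tx.account} for analyst review"
    return "allow"

tx = Transaction(account="ACCT-881", amount=25_000.0, country="XX")
print(act_on_score(tx, score_transaction(tx, usual_countries={"US"})))
```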

In vulnerability management, organizations are deploying agents that continuously scan for security flaws and propose remediations informed by threat intelligence. They can even automatically patch low-risk vulnerabilities. This approach has reduced the average time from detection to remediation from weeks to hours.
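
Here is a minimal sketch of that triage policy, again with made-up thresholds and placeholder CVE identifiers: low-severity findings with no observed exploitation get queued for automatic patching, while everything else goes through a human-approved change.

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float             # severity score, 0-10
    exploit_observed: bool  # is the flaw being exploited in the wild?

# Hypothetical policy boundary: auto-patch only low-risk, non-exploited findings.
AUTO_PATCH_MAX_CVSS = 4.0

def triage(finding: Finding) -> str:
    if finding.exploit_observed or finding.cvss > AUTO_PATCH_MAX_CVSS:
        return f"{finding.cve_id}: schedule human-approved remediation"
    return f"{finding.cve_id}: auto-patch in next maintenance cycle"

for f in (Finding("CVE-0000-0001", 3.1, False),
          Finding("CVE-0000-0002", 9.8, True)):
    print(triage(f))
```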

The implementation roadmap

Organizations ready to deploy agentic AI should follow a structured approach:

  • Agree on achievable use cases. Start with business challenges that have clear, measurable outcomes and sufficient, accessible data. Focus on areas where AI can automate repetitive tasks, improve decision-making or enhance customer experience and assess organizational readiness for adoption.
  • Start with high-volume, low-complexity tasks. Begin by automating repetitive security operations like log analysis, basic threat detection and routine compliance checks. These provide immediate value while building organizational confidence in the technology.
  • Build specialized agent teams. Rather than deploying massive AI systems, create teams of specialized agents dedicated to specific security tasks. This creates a more reliable and controlled outcome while reducing the risk of failure.
  • Establish human oversight frameworks. Implement clear governance structures that define when agents can act autonomously and when human approval is required. This keeps accuracy and transparency intact as human-AI collaboration scales.
  • Measure and optimize. Track key performance indicators like threat detection accuracy, response times and false positive rates, then use these insights to refine agent behavior and expand autonomous capabilities; a minimal KPI-tracking sketch follows this list.
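
As a starting point, the sketch below computes those KPIs from labeled detection outcomes. The data structure and metric definitions are illustrative assumptions; most teams would pull the same numbers from their SIEM or case-management tooling instead.

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class DetectionOutcome:
    flagged: bool            # did the agent raise an alert?
    malicious: bool          # ground truth after human investigation
    response_seconds: float  # time from detection to containment action

def summarize(outcomes: list[DetectionOutcome]) -> dict[str, float]:
    """Compute detection precision, false-positive share, misses and response time."""
    flagged = [o for o in outcomes if o.flagged]
    true_pos = sum(1 for o in flagged if o.malicious)
    false_pos = len(flagged) - true_pos
    missed = sum(1 for o in outcomes if o.malicious and not o.flagged)
    n = len(flagged)
    return {
        "precision": true_pos / n if n else 0.0,
        "false_positive_share": false_pos / n if n else 0.0,
        "missed_threats": float(missed),
        "avg_response_seconds": sum(o.response_seconds for o in flagged) / n if n else 0.0,
    }

sample = [
    DetectionOutcome(flagged=True, malicious=True, response_seconds=42.0),
    DetectionOutcome(flagged=True, malicious=False, response_seconds=30.0),
    DetectionOutcome(flagged=False, malicious=True, response_seconds=0.0),
]
print(summarize(sample))
```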

Managing the risks

While agentic AI offers significant advantages, organizations must address the risks. AI agents can expand attack surfaces if not properly contained, and their autonomous nature introduces new governance challenges. The key is creating robust frameworks from the start.

Establish clear boundaries for agent behavior, implement thorough logging and audit capabilities, and maintain kill switches for quick intervention when needed. Regular penetration testing should include attempts to compromise or manipulate AI agents themselves.
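
A minimal sketch of those guardrails, assuming nothing beyond the standard library, is below: every autonomous action flows through a wrapper that checks a global kill switch and writes an audit log entry either way. The class and function names are invented for illustration.

```python
from __future__ import annotations

import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrails")

class KillSwitch:
    """Global off-switch an operator can flip to halt all autonomous actions."""

    def __init__(self) -> None:
        self.engaged = False

    def engage(self) -> None:
        self.engaged = True
        log.warning("kill switch engaged: autonomous actions suspended")

def guarded(switch: KillSwitch, action: Callable[[], str], description: str) -> str:
    """Run an agent action only if the kill switch is off; audit-log either way."""
    if switch.engaged:
        log.info("blocked action: %s", description)
        return "blocked"
    result = action()
    log.info("executed action: %s -> %s", description, result)
    return result

switch = KillSwitch()
print(guarded(switch, lambda: "rule pushed", "block IP 198.51.100.9"))
switch.engage()
print(guarded(switch, lambda: "rule pushed", "block IP 198.51.100.10"))
```

The same wrapper is a natural place to enforce the behavioral boundaries described above and to feed the logs that penetration testers would later probe.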

Perhaps most importantly, recognize that autonomous security augments rather than replaces human expertise. The goal is to free skilled security professionals from repetitive tasks so they can focus on strategic threat analysis, complex incident response and the nuanced decision-making that AI can’t replicate.

The competitive imperative

Financial organizations that are late to the game of autonomous security could face major competitive disadvantages. As the World Economic Forum notes, agentic AI’s increased autonomy enables organizations to handle repetitive, data-intensive processes while optimizing workflows, enhancing compliance and improving decision-making.

While traditional cybersecurity scaling requires increased headcount, autonomous security enables exponential capability growth without the major hiring pushes that have proven so difficult to fulfill. Organizations that adapt will defend against tomorrow’s threats while their competitors struggle with yesterday’s approaches.

Speed is the new currency in cybersecurity. The organizations that can rapidly identify threats and autonomously implement countermeasures will survive and thrive. Those that can’t will struggle to compete with both attackers and competitors who have embraced the autonomous future.

The question is no longer whether autonomous security will reshape cybersecurity in financial services; that shift is already underway. The question is whether your organization will lead this transformation or be left behind.

This article is published as part of the Foundry Expert Contributor Network.

Aaron Momin

As the global CISO at Synechron, a leading global digital transformation consulting firm, Aaron Momin is accountable and responsible for cyber risk management, information security, crisis management and business continuity planning. Aaron has 30 years of experience in managing cyber and technology risk, improving security maturity and integrating privacy for global organizations. He is a certified CISO, CISM and CRISC.
