Security Leaders Revamp Cyber Strategies Amid AI and Digital Supply Chain Challenges
August 11, 2025
A recent survey by Emerald Research of 225 security leaders reveals that expanding digital supply chains and the integration of generative AI into critical systems are prompting a reevaluation of cybersecurity strategies.
Concerns extend to third-party software: 68% of respondents express worries about associated risks, while 60% acknowledge that attackers evolve too quickly for organizations to maintain resilience.
Almost half of the surveyed executives are uneasy about AI features and large language models, and 66% believe that generative AI helps attackers analyze data and evade defenses.
In response to these challenges, 49% of security leaders plan to utilize penetration testing to identify software supply chain vulnerabilities, while 44% will focus on uncovering insider threats by integrating pentesting throughout development and procurement workflows.
Penetration testing has evolved from a compliance task to a core component of enterprise security programs, with 88% of security leaders deeming it essential for effective security management.
Furthermore, 74% of security leaders believe that regular, documented pentesting enhances client credibility and provides a competitive advantage in procurement processes.
The complexity of software supply chains is a significant concern, with 73% of executives reporting at least one notification of a supply chain vulnerability in the past year, leading 83% to implement formal vendor security requirements.
The report highlights a growing gap between compliance and actual security, indicating that security leaders are calling for stronger controls, faster remediation, and improved visibility into AI-related risks.
In light of these challenges, security teams are demanding new tools and standards to assess generative AI security, with over half of the respondents seeking guidance on defensive AI usage and frameworks to respond to AI-generated attacks.
As part of a proactive approach, Chief Information Security Officers (CISOs) are embedding pentesting into vendor agreements and applying similar security rigor to AI systems as is done for traditional infrastructure.
Specific generative AI concerns include model poisoning and intellectual property theft, which 44% of leaders cite as significant risks.
Source: Help Net Security, Aug 11, 2025, "Pentesting is now central to CISO strategy"