As artificial intelligence (AI) continues to evolve, industry leaders in cybersecurity are grappling with how best to safeguard data, protect organizational infrastructure, and adapt to a future where AI plays a central role. During a recent conversation with top executives from Google DeepMind, Anthropic, and KPMG at the CISO Summit, discussions centered on AI’s rapid integration into enterprise systems, the inherent cybersecurity challenges, and the strategies needed to mitigate associated risks.
AI’s Growing Role in Cybersecurity Programs
Many organizations are venturing into AI, aiming to enhance efficiency, automate tasks, and improve productivity. However, the evolving nature of AI models, particularly "frontier models," raises critical security concerns. Vijay Bolina, Chief Information Security Officer (CISO) at Google DeepMind, addresses these developments, explaining, “We are exploring where and how frontier and foundational models fit into the AI ecosystem. These models, while powerful, require careful consideration regarding their potential risks and security implications.” Bolina believes that while smaller, more specialized models are useful, larger, more complex models are pushing the limits, “introducing new capabilities that are not yet standardized.”
Jason Clinton, CISO at Anthropic, reinforces the significance of understanding and protecting the lineage and transparency of AI systems. “When deploying AI, it’s essential to trace the lineage of the data and model,” he says, emphasizing that AI’s growing complexity could compromise security if organizations fail to account for potential vulnerabilities in multimodal systems handling text, video, and audio inputs.
Kristy Hornland, Director at KPMG US specializing in AI security, adds, “The integration of AI into cybersecurity is not just about adopting new technologies but also about understanding the unique risks they introduce. It's imperative to align AI governance with industry-leading frameworks and practices to ensure responsible and secure deployment.”
The Challenge of Securing AI-driven Models
For many cybersecurity professionals, securing AI models introduces unique obstacles. Traditional security measures often fall short when applied to multimodal AI models, which handle varied types of input data and require advanced protocols for safety and resilience. Clinton highlights the need to adapt security practices to these models, which are “trained on massive amounts of data, which means that if improperly managed, they could introduce vulnerabilities into critical systems.” He adds that, while these models can significantly streamline operations, continuous monitoring is needed to prevent unauthorized access to sensitive information.
Another critical aspect that Clinton discusses is the future of AI-to-AI communication, where models might communicate through their own protocols, bypassing human-readable formats. He raises a thought-provoking point: “If we’re projecting three years into the future, AI agents might have their own language protocols—structured communication that doesn’t rely on English. How do we secure these interactions?” Such advancements, he believes, could reshape both how AI systems function and how they are secured.
Hornland emphasizes the necessity of developing an AI threat matrix tailored to an organization's specific use cases. She explains, “An AI threat matrix helps organizations prioritize their security efforts by scoping to the types of use cases planned or in flight for the organization, and assigning risk categorization in alignment with the organization’s risk appetite to various potential attacks.”
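To make that concrete, a threat matrix can be as simple as a table that maps planned use cases to candidate threats and compares a risk score against the organization’s risk appetite. The sketch below is a minimal illustration rather than KPMG’s methodology; the use cases, threats, and scoring thresholds are hypothetical.

```python
from dataclasses import dataclass

# Illustrative only: the use cases, threats, and scores below are hypothetical,
# not a prescribed methodology.
@dataclass
class ThreatEntry:
    use_case: str        # e.g., an internal chatbot or code assistant
    threat: str          # e.g., prompt injection, training-data leakage
    likelihood: int      # 1 (rare) .. 5 (expected)
    impact: int          # 1 (negligible) .. 5 (severe)

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact

def categorize(score: int, appetite: int = 9) -> str:
    """Bucket a score against the organization's stated risk appetite."""
    if score > appetite * 2:
        return "critical"
    if score > appetite:
        return "high"
    return "acceptable"

matrix = [
    ThreatEntry("customer-support chatbot", "prompt injection", 4, 3),
    ThreatEntry("internal code assistant", "training-data leakage", 2, 5),
    ThreatEntry("document summarizer", "model output poisoning", 3, 2),
]

# Rank entries so the highest-risk use case/threat pairs get attention first.
for entry in sorted(matrix, key=lambda e: e.risk_score, reverse=True):
    print(f"{entry.use_case:28} {entry.threat:24} "
          f"score={entry.risk_score:2} -> {categorize(entry.risk_score)}")
```

Even a simple ranking like this gives security teams a defensible way to decide which AI use cases warrant deeper review before deployment.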
Overcoming Trust and Safety Concerns
Ensuring trust in AI is paramount for cybersecurity executives. Both Clinton and Bolina advocate for enhanced transparency with AI vendors and providers. “Trust and safety,” says Bolina, “are not optional but essential elements for organizations deploying AI.” In addition to ensuring that models adhere to safety standards, Bolina believes it is necessary to establish trust by implementing secure access controls and regular model evaluations.
From Clinton’s perspective, one of the biggest blind spots in AI security remains model resilience and transparency. “Many organizations aren’t asking the right questions,” he explains, “and when we ask about their visibility into AI, it often sounds like they haven’t fully grasped the potential risks.” This lack of visibility, especially regarding user interaction with AI models, can create unexpected vulnerabilities if not properly addressed. For Clinton, transparency in model deployment, training data, and security protocols is critical to ensuring safe AI integration.
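One practical way to begin closing that visibility gap is to route every user interaction with a model through a thin audit layer, so security teams can see who is querying which models and how often. The sketch below assumes a generic inference call (`call_model` is a stand-in for whatever API an organization actually uses), and the audit record fields are hypothetical.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def call_model(prompt: str) -> str:
    """Placeholder for the real inference call (vendor SDK, internal endpoint, etc.)."""
    return "model response"

def audited_call(user_id: str, prompt: str) -> str:
    """Wrap every model call with an audit record for later security review."""
    response = call_model(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        # Hash rather than store raw text in case prompts contain sensitive data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_chars": len(response),
    }
    audit_log.info(json.dumps(record))
    return response

audited_call("analyst-42", "Summarize the Q3 incident report.")
```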
Hornland concurs, stating, “Proactive threat identification and management contribute to an organization's ability to withstand and recover from security breaches. Developing a well-structured AI threat matrix can assist organizations in meeting compliance requirements and effectively managing cyber risk.”
Third-party AI Use and the Challenge of Data Governance
The use of third-party AI models in enterprise systems adds another layer of complexity. Many organizations rely on external providers, raising questions about data privacy and control. “With third-party models,” Clinton notes, “you have to ask: who’s the actual service provider, and can you be sure your data won’t be misused?” He explains that AI users need to carefully scrutinize third-party providers and make sure they are comfortable with those providers’ data governance practices.
Bolina echoes this sentiment, adding that it’s a fundamental third-party risk management issue: “This is no different than other data governance challenges, but it requires a stricter level of oversight due to the unique risks involved with AI.” He argues that, as with any technology, organizations need to enforce strict data controls and ensure that AI deployments align with their privacy and security commitments.
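In practice, those controls often begin with a screening gate in front of the external call, so data that violates policy never leaves the organization’s boundary. The example below is a deliberately simplified illustration; the blocked patterns are hypothetical placeholders, not a substitute for a vetted data loss prevention service.

```python
import re

# Hypothetical policy: block obvious identifiers before data leaves the organization.
# Real deployments would rely on a vetted DLP or data-classification service.
BLOCKED_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_for_third_party(text: str) -> str:
    """Raise if the payload appears to contain data the policy forbids sharing externally."""
    findings = [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]
    if findings:
        raise ValueError(f"Blocked before external AI call: {', '.join(findings)}")
    return text

# Example: this payload would be rejected by the screen.
try:
    screen_for_third_party("Customer jane.doe@example.com reported an outage.")
except ValueError as err:
    print(err)
```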
Hornland ties these controls back to governance, drawing on her client work in regulated industries. She notes, “Delivering responsible and secure AI governance programs for leading life sciences, financial services, and government clients requires alignment to industry-leading frameworks and practices, and deploying AI security platforms to support these program objectives.”
A Call for Industry Standards and Regulation
Recognizing the evolving risks, both Bolina and Clinton advocate for clear industry standards and regulatory frameworks. Bolina points to the importance of established guidelines, like those from the National Institute of Standards and Technology (NIST), as foundational to secure AI deployment. He suggests using these standards to form the basis for an organization’s internal policies on AI, adapting as necessary to suit the unique requirements of AI technology.
Clinton agrees, emphasizing the need for industry-wide collaboration on AI safety standards. “Our industry has an opportunity to create standards that reflect the true risks and rewards of these models,” he says, citing ongoing work by the Center for AI Safety and other regulatory bodies. Clinton also advocates for transparency, noting that “a commitment to responsible scaling, like the model Anthropic is using, could serve as a benchmark for others in the industry.”
Hornland underscores the role of AI security working groups in facilitating industry collaboration. She has facilitated the Global Resilience Federation AI Security Working Group for the past two years, contributing to the development of industry standards and best practices.
The Road Ahead: Preparing for AI Integration
For organizations considering AI, Bolina offers practical advice: “Start by understanding the essential stakeholders—cyber, legal, privacy, and HR—and build a cross-functional team dedicated to AI governance.” He emphasizes that an effective AI governance framework is crucial for reducing risks while maximizing the technology’s potential. “Using established frameworks and adapting them to suit the needs of AI is essential,” he believes, noting that NIST guidelines and similar frameworks are helpful starting points.
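As one illustration of adapting an established framework, the four functions of the NIST AI Risk Management Framework (Govern, Map, Measure, Manage) can each be assigned an internal owner and a set of controls. The owners and control names below are invented placeholders for a single hypothetical organization, not a prescribed mapping.

```python
# Illustrative mapping of NIST AI RMF functions to internal owners and controls.
# The owners and control names are hypothetical placeholders, not NIST guidance.
ai_governance_plan = {
    "Govern": {"owner": "AI governance council", "controls": ["AI use policy", "vendor review"]},
    "Map": {"owner": "Security architecture", "controls": ["use-case inventory", "threat matrix"]},
    "Measure": {"owner": "Risk & compliance", "controls": ["model evaluations", "red-team exercises"]},
    "Manage": {"owner": "CISO organization", "controls": ["incident response", "periodic reassessment"]},
}

for function, plan in ai_governance_plan.items():
    print(f"{function}: owner={plan['owner']}; controls={', '.join(plan['controls'])}")
```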
Hornland, echoing Bolina’s view, stresses the importance of a structured approach to AI implementation and governance. “Establishing a cross-functional governance team allows organizations to identify risks early and integrate security best practices across departments,” she says. For Hornland, this approach ensures that security, privacy, and compliance are maintained throughout the AI lifecycle, from initial design to deployment and beyond. She also advocates for regular AI risk assessments, pointing out that “AI risk isn’t static—new vulnerabilities emerge with every update and innovation.” A well-structured threat matrix, she notes, is an invaluable tool in identifying and mitigating these risks.
Clinton highlights the necessity of developing a compliance function within the cybersecurity team. “A dedicated compliance team can help manage risk and establish guidelines that align with both internal policies and industry standards,” he says. By working closely with departments like HR, IT, and legal, this team can create a robust structure that supports AI implementation across various use cases.
Looking Forward: AI as the New Cybersecurity Frontier
As AI becomes more deeply embedded in organizational frameworks, the approach to cybersecurity will need to evolve to meet new demands. Bolina remains optimistic about the potential of AI in strengthening defenses, observing that AI can help cybersecurity teams stay ahead of threats by processing large datasets and identifying patterns. “AI isn’t just a tool for attackers; it’s an asset for defenders, too,” he asserts, highlighting the proactive insights AI can offer in predicting and countering security threats.
Hornland emphasizes the importance of ongoing collaboration and transparency across the industry. “The cybersecurity landscape is changing, and no organization can navigate it alone,” she believes. In her view, AI security working groups and cross-industry partnerships are essential in setting standards that protect both businesses and users. “Responsible AI deployment,” she says, “is about sharing knowledge, adopting best practices, and supporting each other in facing new challenges.”
Clinton, reflecting on the future, envisions a time when AI capabilities will extend to every department, creating an environment where “everyone is a potential developer” in some capacity. As AI tools become more accessible, the responsibility to ensure security will need to be integrated across all levels of an organization, from leadership to individual contributors. “AI will change the way we work, but we must change the way we think about security to keep pace,” he advises.
The rapid advancement of AI presents a powerful opportunity for organizations across industries, but with this opportunity comes a set of unique challenges. Bolina, Clinton, and Hornland each emphasize that the key to secure AI implementation lies in a commitment to transparency, cross-functional governance, and adherence to industry standards. As Bolina aptly puts it, “AI has the potential to reshape industries—but only if we harness it responsibly and safely.”
Hornland leaves a final thought for organizations embarking on their AI journey: “Adopt a mindset that values both innovation and accountability. AI can bring incredible value, but it requires constant vigilance, collaboration, and a commitment to ethical practices.” With these guiding principles, organizations can navigate the complexities of AI security, turning potential vulnerabilities into opportunities for resilience and growth.