AI risks
The innovation gamble every leader must face
Published on: October 1, 2025
Last update: October 1, 2025
What if the AI tools transforming your business are also creating vulnerabilities you've never considered?
As organizations rush to integrate generative AI and large language models (LLMs) into their workflows, a new category of risks is emerging that traditional security frameworks weren't designed to handle.
I recently sat down with William Reyor, Director of Security Engineering at Modus Create, to discuss the evolving landscape of AI security risks. His insights reveal both the challenges and practical solutions that business leaders need to understand.
If you're working with AI-powered tools or incorporating LLMs into your projects, this conversation is for you.
1. You co-authored a book on cybersecurity. How did this experience shape you?
Yes, I co-authored Defensive Security Handbook: Best Practices for Securing Infrastructure, 2nd Edition (O’Reilly, 2023) with Amanda Berlin. Writing a book is a lot like running a marathon. Everyone knows it’s hard, but you don’t really know how hard until you sit down and start doing the work.
For me, the challenge was translating complex security concepts for a broad audience, from experts to newcomers. I approached it the same way I approach any problem: break it down into its most basic components and tackle it with discipline and rigor. The process reinforced my belief that subject matter expertise combined with clear storytelling can make even the driest aspects of information security engaging and accessible.
2. What new risks does AI introduce that traditional security models don't address?
When we add AI and LLMs to existing applications, we're fundamentally expanding the attack surface in ways that traditional security models simply don't cover. The complexity alone creates new vulnerability points that didn't exist before.
We're seeing entirely new attack styles emerge, including:
- Prompt injection: Malicious inputs designed to manipulate AI systems into behaving unexpectedly
- Model inversion: Techniques that can extract sensitive training data from AI models
- Data poisoning: Corrupting training datasets to compromise model integrity
These attacks can lead to unauthorized access to sensitive information or system manipulation that bypasses traditional security controls. The challenge is that these vulnerabilities exist at the application logic level, not just the infrastructure level, where most security teams focus their efforts.
3. What is generative monoculture, and why should organizations be concerned?
Generative monoculture represents one of the most overlooked risks in AI adoption today. The concept borrows from biology, where a single disease can wipe out entire crops because they're genetically identical.
In technology, we've seen similar patterns play out. When everyone adopts identical systems, any vulnerability discovered in that technology multiplies across every organization using it.
Think about the Heartbleed bug or how the WannaCry ransomware attack spread so rapidly because of widespread Windows adoption.
The same risk applies to AI systems. When organizations standardize on the same foundational models and key technologies, they're creating a shared vulnerability surface. If someone discovers an exploit in a widely used AI framework or model, that risk can be leveraged against every organization using those same components.
We saw this principle in action with the CrowdStrike incident last year, where a single point of failure affected airlines, hospitals, and businesses worldwide because so many had adopted the same endpoint detection and response solution.
4. How can organizations diversify their AI dependencies to reduce monoculture risks?
The solution follows a similar blueprint to what we learned with cloud adoption: embrace a multi-model strategy.
Key diversification strategies include:
- Testing different AI models: Don't rely on a single provider or model architecture
- Varying your technology stack: Use different frameworks and tools across different applications
- Avoiding vendor lock-in: Maintain the flexibility to switch providers if needed
- Implementing fallback systems: Ensure you have alternatives when primary systems fail
This approach mirrors the multi-cloud strategies that became standard practice as organizations matured their cloud adoption. The goal isn't to complicate your architecture unnecessarily, but to build resilience against systemic failures.
5. What is adversarial testing, and how does it apply to AI security?
Adversarial testing for AI applications involves systematically attempting to break or manipulate your AI systems using techniques that attackers might employ.
One practical example is using tools like promptfoo, which can dynamically generate different types of prompt injection attacks to test your LLM applications. This testing can be integrated directly into your CI/CD pipeline, providing automated security validation as you build and deploy AI features.
Adversarial testing helps identify:
- How easily your AI systems can be manipulated through crafted inputs
- Whether sensitive data can be extracted through model queries
- If your applications properly validate and sanitize AI-generated outputs
- Whether your safety guardrails actually prevent harmful behavior
The key is making this testing part of your standard development workflow, not something you do as an afterthought.
6. What immediate steps should organizations take to secure their AI implementations?
The foundation of AI security starts with something decidedly unglamorous but absolutely critical: inventory.
You can't protect what you don't know exists. Organizations need to:
- Catalog all AI use cases across the entire organization
- Document data sources being used by AI systems
- Map dependencies on external models and services
- Identify integration points where AI connects to other systems
This inventory becomes the baseline for everything else. Without it, you're essentially trying to secure systems you may not even know exist.
Beyond inventory, immediate priorities include establishing governance frameworks, implementing input validation, and creating monitoring systems that can detect unusual AI behavior or potential attacks.
The foundation of AI security starts with something decidedly unglamorous but absolutely critical: inventory. You can't protect what you don't know exists.
7. What governance frameworks exist to guide AI security efforts?
The good news is that security frameworks are evolving to address AI-specific risks. The Open Worldwide Application Security Project (OWASP) has developed a "Top 10 for LLM Applications" that guides testing and identifying common vulnerabilities in AI systems.
These frameworks typically address:
- Input validation and sanitization for AI systems
- Secure model training and deployment practices
- Data governance for AI applications
- Monitoring and incident response for AI-related security events
However, these frameworks are still maturing. Organizations often need to adapt general security principles to their specific AI use cases rather than relying on prescriptive guidance.
8. How should organizations balance AI innovation with security concerns?
This tension between speed to market and security is very real. Everyone wants to be first with AI capabilities, but rushing can create significant vulnerabilities.
The key is building security into your AI development process from the start, not bolting it on later. This means:
Adopting a "secure by design" approach:
- Include security requirements in your AI project planning
- Build testing and validation into your development workflows
- Establish clear governance for AI data usage and model selection
Creating rapid but responsible deployment pipelines:
- Automate security testing wherever possible
- Use staging environments that mirror production for thorough testing
- Implement gradual rollouts that allow you to catch issues early
The goal isn't to slow down innovation, but to innovate responsibly in ways that won't compromise your organization later.
9. Now, let’s get a little personal. How long have you been working at Modus Create, and what do you like most about the culture?
I’ve been with Modus Create for a little more than four years now. What I love most about the culture here is that I get to work with some of the smartest people in the world, and we’re continuously permitted to “punch above our weight.”
The work itself drives me—the need to help customers solve their problems, which to me feels like solving puzzles.
10. You attend a lot of conferences and events. What are your tips for avoiding burnout?
At Modus Create, there’s an expectation that we don’t just focus on pure technical work. We’re out there communicating, connecting, and sharing expertise.
That’s a privilege, but it can also be exhausting. I’m a natural introvert, so to avoid burnout, I focus on self-care and recharging between events. That balance lets me show up fully and best represent Modus Create, no matter the venue or audience.
11. Who is one thought leader that everyone should follow?
For me, it’s a toss-up between Dan Geer and Bruce Schneier. Dan coined the term “monoculture” in the cyber context, and I had the chance to hear him speak on it back in 2013. His thinking still feels relevant and adaptable today.
Bruce’s ability to explain the problem space in AI, cybersecurity, and networked systems has been hugely influential for me, as well.
12. What security lessons should business and technology leaders take away from this conversation?
AI security isn't about choosing between innovation and protection. It's about building both into your approach from day one. The fundamental principles remain the same: You need to know what you're protecting, test your defenses, and build resilience into your systems.
The organizations that will thrive in the AI era are those that treat security as an enabler of sustainable innovation, not an obstacle to overcome. By diversifying dependencies, implementing adversarial testing, and maintaining comprehensive inventories, you can harness AI's transformative potential while protecting your organization from emerging threats.
Start with that boring but essential inventory. Map your AI landscape. Then build the testing and governance practices that will let you innovate with confidence.
AI security isn't about choosing between innovation and protection. It's about building both into your approach from day one.