
Joseph Hejderup, Endor Labs
AI is already transforming how software is built, but so far it has mostly made life harder for security teams. Developers ship AI-generated code at breakneck speed while security teams struggle to keep up. The challenge isn't just securing AI-generated code and systems; it's understanding where observability and auditability become critical for AI security.
We’ll explore how security teams can cut through the noise to identify and address real AI security risks. You’ll learn:
- Key threat models and attack vectors in AI systems, from prompt injection to the unique risks of unstructured data processing
- Where observability and auditability matter most in AI workflows, and why traditional security approaches fall short
- Strategic approaches to AI-enhanced security analysis, including opportunities to apply reachability analysis and vulnerability assessment
- Where AI can enhance security, and where human expertise remains irreplaceable
This talk provides security leaders and teams with a framework for thinking about AI risks, identifying critical control points, and developing strategies that balance innovation with security. Join us to learn how to make AI work for security, not against it.

