1,000+

Registered Attendees

Live Sessions

Technical Demos

Interactive Expo Hall

Resource Center

Why Attend?

Attendees can expect robust discussions and debates around the following topics:

Leveraging AI for automated attack surface reduction

Protecting sensitive data flowing through AI/ML models

Costs and economics of deploying AI at scale 

Compliance and regulation - what will they look like?

Roadmap planning - practical guides to integrating AI into your cybersecurity strategies

Adversarial AI Deepfakes - threats posed by AI-generated synthetic media

The ethics and perils of AI gone rogue

Agenda

11:00

Model Red-Teaming: Dynamic Security Analysis for LLMs

The rise of Large Language Models has many organizations rushing to integrate AI-powered tools into existing products, but they introduce significant new risk. OWASP has recently introduced the LLM Top 10 to highlight these novel threat vectors, including prompt injection and data exfiltration. However, existing AppSec tools are not designed to detect and remediate these vulnerabilities. In particular, static analysis (SAST), one of the most common tools, cannot be used since there is no code: machine-learning models are effectively “black boxes." LLM red-teaming is emerging as a technique to minimize the vulnerabilities associated with LLM adoption, ensure data confidentiality, and verify that safety and ethical guardrails are being applied. It applies tactics associated with penetration testing and dynamic analysis (DAST) of traditional software to the new world of machine-learning models. Join Snyk, the leader in AI security, for an overview of LLM red-teaming principles, including: 

  • What are some of the novel threat vectors associated with large language models, and how are these attacks carried out?
  • Why are traditional vulnerability-detection tools (such as SAST and SCA) incapable of detecting the most serious risks in LLMs?
  • How can the principles of traditional dynamic analysis be applied to machine learning models, and what types of new tools are needed?
  • How should organizations begin to approach building an effective program for LLM red-teaming?

Clinton Herget

Snyk, Field CTO

Clinton Herget is Field CTO at Snyk, the leader in AI Security, where he focuses on crafting and evangelizing our strategic vision for the evolution of DevSecOps. A seasoned technologist, Clinton spent his 20-year career prior to Snyk as a web software developer, DevOps consultant, cloud solutions architect, and engineering director. Clinton is passionate about empowering software engineers to do their best work in the chaotic cloud-native world, and is a frequent conference speaker, developer advocate, and technical thought leader.

11:30

Demystifying the AI SOC with intelligent workflows

Security teams are overwhelmed by alert fatigue, skill shortages, and burnout. AI is well-positioned to address these challenges, yet security leaders are split on whether a truly autonomous SOC is achievable. The truth is: the future of the SOC isn’t fully autonomous - it’s intelligent. In this session, we’ll explore: 

  • The evolution of workflows, from python and scripts, to agentic workflows, and everything in between
  • Why flexible, intelligent workflows that combine deterministic logic, AI agents, and human insight are the future of the SOC
  • How to apply these workflows to build and scale an AI SOC that’s faster and more resilient

Thomas Kinsella

Tines, Co-founder and Chief Customer Officer

Thomas Kinsella is the Co-founder and Chief Customer Officer of Tines, an intelligent workflow platform. A native of Ireland, Thomas graduated from Gonzaga College SJ and earned a degree in management science and information system studies from Trinity College Dublin. Before starting Tines with Co-founder Eoin Hinchy, he held cybersecurity positions at Deloitte, eBay, and DocuSign, where he rose to Senior Director of Security Operations. During the first decade of his career, Thomas experienced firsthand the amount of time wasted on manual security work, an experience that prompted him to join Hinchy in founding Tines in 2018. In recent years, Thomas has been recognized as an influential voice in cybersecurity.

12:00

Malicious vs. Defensive: How to Stay Ahead of AI-Powered Cybercrime

The widespread adoption of generative AI has led to increased productivity for employees, but also for adversaries. Threat actors are using AI to craft email attacks that are polished, typo-free, hyper-personalized, and are doing so at scale. From convincing business email compromise (BEC) attacks to credential phishing and even deepfake videos that mimic executives, these machine-speed threats are breaching defenses faster than ever. So how do you stay ahead? Join this session to explore how generative AI is reshaping the threat landscape, what AI-powered attacks look like in the wild, and how autonomous, behavior-based defenses can help you fight back. Because in the age of AI, only good AI can stop bad AI—before it reaches your people.

Mick Leach

Abnormal Security, Field CISO

Mick Leach is Field CISO of Abnormal Security, an AI-native email security company that uses behavioral AI to prevent business email compromise, vendor fraud, and other socially-engineered attacks. At Abnormal, he is responsible for threat hunting and analysis, engaging with customers, and is a featured speaker at global industry conferences and events. Previously, he led security operations organizations at Abnormal, Alliance Data, and Nationwide Insurance, and also spent more than 8 years serving in the US Army’s famed Cavalry Regiments. A passionate information security practitioner, Mick holds 7 SANS/GIAC certifications, coupled with 20+ years of experience in the IT and security industries. When not digging through logs or discussing operational metrics, Mick can typically be found on a soccer field, coaching one of his 13 kids.

12:30

BREAK

Please visit our sponsors in the Exhibit Hall and explore their resources. They're standing by to answer your questions.

12:45

How Okta Protects Non-Human Identities

Non-human identities (NHI)—including machine identities, service accounts, automation tools, API keys and tokens, and AI agents—power modern IT environments. The explosion of machine-to-machine interactions, automation, and AI-driven workflows has made NHIs the new identity security perimeter. Yet, most organizations remain unprepared. NHIs outnumber human identities 50:1, yet many organizations lack a formal NHI security program. Without clear ownership, real-time visibility, or automated security controls, NHIs can easily become a blind spot, making it difficult for organizations to detect, verify, and respond to security risks. During this event, attendees will learn: 

  • How to gain visibility to undermanaged NHIs
  • Strategies to bring access control policies to these types of unfederated NHIs
  • And considerations for using the same security framework for NHIs as human identities

13:15

When the Next Breach Isn’t Technical: How CISOs and Security Leaders Will Be Tested in 2026

The Silent Shift from Threat Containment to Governance Proof Event Description: Over the past two years, cybersecurity and AI governance have evolved from technical disciplines into boardroom imperatives. Proxy advisors and institutional investors, BlackRock, ISS, Glass Lewis, are now evaluating companies on how effectively they govern data, privacy, and AI risk. For the first time, governance maturity directly impacts shareholder confidence and market value. In this session, veteran cyber governance leader Chris Hetner unpacks the new reality facing CISOs, incident response leads, and infrastructure security directors: 

  • Governance is now part of your attack surface. What was once internal documentation is now subject to investor and regulatory review.
  • AI is accelerating faster than oversight. Shadow automation and untracked AI models are creating ungoverned risk and new accountability challenges.
  • Proof of control is the new metric. The SEC and global regulators increasingly expect organizations to demonstrate defensible decision-making and traceable risk management. 

You’ll leave this session with a clear understanding of how to operationalize defensibility building governance that stands up to board scrutiny, investor questions, and regulatory audits. 

Key Takeaways: 

  • How to prepare your organization for investor-grade governance expectations
  • The convergence of cyber, privacy, and AI risk under new SEC and proxy oversight
  • Practical frameworks for evidence-based decision trails, escalation workflows, and explainable outcomes

Christopher Hetner

Senior Executive, Board Director, and leader in cybersecurity

Christopher Hetner is a senior executive and board director recognized globally for advancing cybersecurity, AI governance, and risk management at the intersection of business and technology. As Cyber Risk Advisor to the National Association of Corporate Directors (NACD) and Chair of Cybersecurity, AI, and Privacy at the Nasdaq Center for Board Excellence, he drives operational resilience across public and private sectors. Hetner also serves on the boards of Simulint and NACD’s Connecticut Chapter and is a Research Affiliate with the MIT Sloan School of Management.

13:15

The AI Mirage: Why Savings Remain Elusive

Carl Hayes

Invenci, CEO

Carl Hayes is a seasoned cybersecurity executive with 15+ years of experience. As co-founder and CRO of Stratejm, he helped establish it as a leading MSSP, pioneering SECaaS in 2015 before its acquisition by Bell Canada in 2024. He is now CEO of Invenci, developing Gen-AI solutions across key industries. Carl has served on multiple boards, including the OEA and the Mackenzie Institute, and advises the University of Guelph’s Cybersecurity Program and Baycrest Health Sciences.

13:45

The Economic Impact of Securing AI

Artificial intelligence is redefining productivity, profitability, and risk. As AI systems drive trillions in potential corporate gains, their vulnerabilities carry material financial consequences. This session examines the direct economic impact of securing AI, analyzing how model exploitation, data exposure, and control inefficiencies translate into measurable business losses. Malcolm Harkins, Chief Security & Trust Officer at HiddenLayer, presents a framework for assessing exposure, quantifying the cost of controls, and integrating AI risk into enterprise governance and financial disclosure. Attendees will gain a practical understanding of how to align security investment with business outcomes in an AI-driven economy.

Malcolm Harkins

HiddenLayer, CISO & Trust Officer

14:15

Securing the AI Era: Why DSPM and AI Security Are Better Together

AI is reshaping how businesses build and innovate, but it also introduces new risks: data exposure, Shadow AI, pipeline misconfigurations, and insecure model usage. Security teams are now tasked with protecting sensitive data across sprawling AI workloads while ensuring AI adoption stays safe and compliant. That’s where the combination of Data Security Posture Management (DSPM) and AI Security comes in. Together, they provide the visibility and control organizations need to protect data flowing into, through, and out of AI systems. DSPM ensures sensitive data is discovered, classified, and governed, while AI security removes attack paths, secures pipelines, and uncovers hidden AI usage. By uniting these approaches, companies can innovate with AI at speed, with security embedded at every step.

Snegha Ramnarayanan

Wiz, Staff Product GTM Manager

Snegha Ramnarayanan is a Staff Product GTM Manager at Wiz focused on Wiz’s core capabilities including data and AI security. Previously , she worked in field and product roles in the cybersecurity space including Okta & Vmware. She graduated from the Haas school of Business(MBA) & the University of Illinois Urbana Champaign (EE). In her free time, she loves music, traveling, connecting with loved ones and enjoying a nice warm cup of coffee.

14:45

Networking & Exhibit Hall Connections

Please visit our sponsors in the Exhibit Hall and explore their resources. They're standing by to answer your questions.

11:00

AI-Powered Identity Fraud: What You’re Up Against & How to Fight Back

Generative AI has fundamentally changed the threat landscape, arming fraudsters with tools to create convincing deepfakes, voice clones, and sophisticated phishing campaigns at scale. In this session, identity security experts explore how AI is amplifying social engineering and digital deception, turning once-obvious scams into multimillion-dollar corporate heists. Join us as we dissect real-world attacks—including a $25 million deepfake CFO scam and live demonstrations of AI voice cloning—to understand why legacy, static identity defenses are no longer sufficient. We will map AI-based threats across the entire user journey, from registration to customer support, and provide actionable strategies to modernize your defenses. Key Takeaways 

  • The Evolution of Deception: How attackers use GenAI to bypass traditional verification methods, including "grandparent scams," romance fraud, and executive impersonation.
  • Why Legacy IAM Fails: Understanding the critical gaps in static security tools, such as the lack of real-time risk scoring, liveness detection, and cross-channel integration.
  • Defending the User Journey: Specific strategies to stop fraud at critical touchpoints
  • Real-World Defense: Case studies on how financial institutions are using dynamic authorization and AI-driven signals to save millions in potential fraud losses.

Maya Ogranovitch Scott

Ping Identity, Senior Product and Solutions Marketing Manager

Maya Ogranovitch Scott is a senior product and solution marketing manager for Ping's retail and fraud solutions. With a background in vertical solutions as well as fraud prevention, she is passionate about leveraging the power of identity to help enterprises deliver exceptional customer experiences that are simultaneously secure and seamless.

Adam Preis

Ping Identity, Director of Product and Solution Marketing

Adam Preis is a seasoned Director of Product and Solution Marketing at Ping Identity, stationed in the U.K. but wielding influence globally, particularly within the financial services sector. With a robust 12-year tenure in the tech industry, Adam is fervently dedicated to realising customer value. His expertise extends across diverse settings, where he has successfully launched products, honing a skill set that blends innovation with strategic market penetration.

11:30

From input-handling flaws to crashables: Lessons from breaking LLM-based coding tools

Claude Code illustrates how LLM-based coding tools expand the attack surface. Design choices around approvals, parsing, and error handling can turn into security flaws. We present specific findings Kodem uncovered in Claude Code. Both issues highlight how LLM-based coding tools introduce new misconfiguration and input-handling risks. This talk dissects the issues, their broader implications for AI developer tools, and practical mitigations.

Mahesh Babu

Kodem Security, CMO

Mahesh Babu is a former VP of Information Security turned company builder and now leads growth at Kodem, a venture‑backed application security startup. At HSBC he built and scaled global application security and identity & access management platforms that safeguard billions of transactions. His career began at Purdue University’s Information Assurance & Security Research Center, where he researched secure software engineering and biometrics.

Event Sponsors

Abnormal

Abnormal

Tines

Tines

Snyk

Snyk

Ping Identity

Ping Identity

Okta

Okta

Kodem

Kodem

Airia

Airia

Wiz

Wiz

RadarFirst

RadarFirst

Pangea

Pangea

Invenci

HiddenLayer

HiddenLayer

Aqua

Aqua

1Password

1Password

Gray Swan

Gray Swan

Eclypsium

Eclypsium

Arambh Labs

Arambh Labs

AIceberg

AIceberg

SecurityWeek Virtual Event Sponsorships

SecurityWeek Virtual Events Provide

  • BRAND AWARENESS: Introduce your brand to a large audience and deepen connections with existing customers and prospects through powerful brand integration by being part of a high-profile event that is heavily marketed for months.
  • THOUGHT LEADERSHIP: Demonstrate expertise and build trust by presenting to a targeted, information-hungry audience of cybersecurity professionals.
  • LEAD GENERATION: The scale of SecurityWeek’s virtual events serve as a cost effective lead generation platform to fuel your sales teams.

Frequently Asked Questions


Yes, you’ll need to fill out our registration form to gain access to the event. Please fill in the registration form with some basic information to get started.

The information you provide upon registration will be used to establish you as a user on the platform. 

SecurityWeek is committed to protecting and respecting your privacy. From time to time, we would like to contact you about our products and services, as well as other content and information from event sponsors that may be of interest to you. You may unsubscribe from these communications at any time. 

 By registering for this event, you consent to allow SecurityWeek to store and process the personal information submitted to provide you the content requested.

Yes, the vFairs platform is compatible with any computer or mobile device and any browser.

Yes, this event is completely free to attend. We encourage you to login and have a look around at your convenience.

Yes, the event will be available on-demand following the live broadcast.