General Info

SecurityWeek’s inaugural Cyber AI & Automation Summit pushes the boundaries of security discussions by exploring the implications and applications of predictive AI, machine learning, and automation in modern cybersecurity programs.

The SecurityWeek editorial team is crafting an interactive agenda delving into the transformative potential of AI, use-cases in malware hunting and threat-intelligence, the workflow when computers take on developer tasks, government policy and regulations, intellectual property protection and privacy implications.

Huddle with your peers to measure the costs, benefits, and risks of deploying machine learning and predictive AI tools in the enterprise, the threat from adversarial AI and deep fakes, and preparation for the inevitable compliance and regulations from policy makers.

This event will bring together cybersecurity program leaders, AI threat researchers, policy makers, software developers and supply chain security specialists to delve into the transformative potential of AI, predictive ChatGPT-like tools and automation to detect and defend against cyberattacks.

Agenda

Agenda

December 6, 2023 11:00

Generative AI: A Cyber Sword with Double Edges

This session explores the emerging threat landscape associated with generative AI, highlighting its potential as a double-edged sword in cybersecurity. This talk will cover recent activity and trends in the criminal underground, demonstrating how threat actors can exploit this technology to evolve social engineering and fraud, app hijacking, and other tactics and techniques. From there, well shift to defense strategies and evolution needed to keep pace with these evolving techniques. In the concluding segment, we'll explore how defenders can leverage generative AI for themselves as a tool to enhance and accelerate security productivity and outcomes.  

speaker headshot

Shannon Murphy
Trend Micro, Global Risk & Security Strategist

December 6, 2023 11:30

5 Ways Cybersecurity Leaders can leverage GenAI in 2024

In an era where cyber threats are increasingly sophisticated and pervasive, how can resource-strapped teams stay ahead? 

This webinar explores how GenAI can support cybersecurity teams by enabling rapid security investigations, anomaly detection, and faster insight generation. Tim Chase, Field CISO at Lacework, will break down how you can leverage GenAI to: 

  • Augment your security team: Address the cybersecurity skills shortage by flattening the learning curve for new tools. 
  • Detect anomalies: Identify outliers and anomalous behavior in cloud data, a crucial aspect in modern cybersecurity.
  • Accelerate insight discovery: Use GenAI to sift through extensive data and enhance operational efficiency.
  • Maintain data security and privacy: A crucial cautionary note on the importance of handling sensitive data with care when deploying GenAI tools.
speaker headshot

Tim Chase
Lacework, Field CISO

December 6, 2023 11:55

BREAK

We are taking a quick break. Please visit our sponsors in the Exhibit Hall and review their resources. They're standing by to answer your questions.

December 6, 2023 12:15

The ChatGPT Threat: Protecting Your Email from AI-Generated Attacks

The widespread adoption of generative AI meant increased productivity for employees, but also for bad actors. They can now create sophisticated email attacks at scale—void of typos and grammatical errors that have become a key indicator of attack. That means credential phishing and BEC attacks are only going to increase in volume and severity. 

So how do you defend against this threat? Join this session to hear how generative AI is changing the threat landscape, what AI-generated attacks look like, and how you can use "good AI" to prevent "bad AI" from harming your organization.

speaker headshot

Mick Leach
Abnormal Security, Field CISO

December 6, 2023 12:45

Crafting Security in the Language of Algorithms and Machines

In an era where artificial intelligence (AI) and Large Language Models (LLMs) are becoming integral to our digital interactions, ensuring their security and usability becomes paramount.

This presentation embarks on a journey through the intersection of these two pivotal domains within the automation landscape, digging into the methodologies, techniques, and tools employed in threat modeling, API testing, and red teaming these artificial narrow intelligent systems.

Expect a discussion on how we, as users and developers, can strategically plan and implement tests for Generative-AI and LLM systems to ensure their robustness and reliability and attempt to spark a conversation about our daily interactions with Generative-AI and how it affects our conscious and subconscious engagements with these technologies.

Key Takeaways:

  • Exploring User Interaction with GenAi: Engage in a dialogue about the pervasive and perhaps unnoticed interactions with GenAi in our daily lives, and how this influences our digital experiences.
  • In-depth Insight into LLM Security: Uncover the intricate techniques and tools applied in threat modeling and API testing, and MLOPs red teaming to safeguard LLMs.
  • Strategic Testing of GenAi & LLM Systems: Delve into strategic planning and testing methodologies for GenAi & LLM systems, ensuring their efficacy and security in real-world applications.
speaker headshot

Rob Ragan
Bishop Fox, Principal Security Architect

December 6, 2023 13:20

Demystifying LLMs: Power Plays in Security Automation

As the popularity of Large Language Models (LLMs) continues to grow, there's a clear divide in perception: some believe LLMs are the solution to everything - a ruthlessly efficient automaton that will take your job and steal your dance partner. Others remain deeply skeptical of their potential - and have strictly forbidden their use in corporate environments.

This presentation seeks to bridge that divide, offering a framework to better understand and incorporate LLMs into the realm of security work. We will delve into the most pertinent capabilities of LLMs for defensive use cases, shedding light on their strengths (and weaknesses) in summarization, data labeling, and decision task automation. Our discourse will also address specific tactics with concrete examples such as 'direction following'—guiding LLMs to adopt the desired perspective—and the 'few-shot approach,' emphasizing the importance of precise prompting to maximize model efficiency.

The presentation will also outline the steps to automate tasks and improve analytical processes and provide attendees with access to basic scripts which they can customize and test according to their specific requirements.

speaker headshot

Gabriel Bernadett-Shapiro
Independent AI Security Researcher

December 6, 2023 14:00

Fireside Chat: Nick Vigier, CISO, Oscar Health

Nick Vigier joins the SecurityWeek fireside chat to discuss his priorities as CISO at Oscar Health, the challenges of communicating security risks in large organizations, the ransomware crisis in the healthcare sector, the cybersecurity labor market and the issue of CISOs facing personal liability for breaches.f

The conversation also delves into AI/LLM use-cases to automate routine and monotonous security tasks, the use of generative-AI and co-pilot technologies to write more secure code and ethical and privacy considerations when training and deploying large language model (LLM) algorithms. 

speaker headshot

Ryan Naraine
SecurityWeek, Editor-at-Large

speaker headshot

Nick VIgier
Oscar Health, Chief Information Security Officer

December 6, 2023 15:00

Trend Vision One Companion Demo

December 6, 2023 15:25

Abnormal Platform Demo

December 6, 2023 15:36

Lacework Demo

[On-Demand] Trend Vision One Companion Demo

[On-Demand] Abnormal Platform Demo

[On-Demand] Lacework Demo

Platinum Sponsor

 

Gold Sponsors