SecurityWeek’s inaugural Cyber AI & Automation Summit pushes the boundaries of security discussions by exploring the implications and applications of predictive AI, machine learning, and automation in modern cybersecurity programs.
The SecurityWeek editorial team is crafting an interactive agenda delving into the transformative potential of AI, use cases in malware hunting and threat intelligence, the changing workflow when computers take on developer tasks, government policy and regulations, intellectual property protection, and privacy implications.
Huddle with your peers to measure the costs, benefits, and risks of deploying machine learning and predictive AI tools in the enterprise, the threat from adversarial AI and deep fakes, and preparation for the inevitable compliance and regulations from policy makers.
This event will bring together cybersecurity program leaders, AI threat researchers, policy makers, software developers and supply chain security specialists to delve into the transformative potential of AI, predictive ChatGPT-like tools and automation to detect and defend against cyberattacks.
December 6, 2023 11:00
This session explores the emerging threat landscape associated with generative AI, highlighting its potential as a double-edged sword in cybersecurity. This talk will cover recent activity and trends in the criminal underground, demonstrating how threat actors can exploit this technology to evolve social engineering and fraud, app hijacking, and other tactics and techniques. From there, we'll shift to the defense strategies needed to keep pace with these evolving techniques. In the concluding segment, we'll explore how defenders can leverage generative AI for themselves as a tool to enhance and accelerate security productivity and outcomes.
Trend Micro, Global Risk & Security Strategist
December 6, 2023 11:30
In an era where cyber threats are increasingly sophisticated and pervasive, how can resource-strapped teams stay ahead?
This webinar explores how GenAI can support cybersecurity teams by enabling rapid security investigations, anomaly detection, and faster insight generation. Tim Chase, Field CISO at Lacework, will break down how you can leverage GenAI in your security program.
Lacework, Field CISO
December 6, 2023 11:55
We are taking a quick break. Please visit our sponsors in the Exhibit Hall and review their resources. They're standing by to answer your questions.
December 6, 2023 12:15
The widespread adoption of generative AI has meant increased productivity for employees, but also for bad actors. They can now create sophisticated email attacks at scale, devoid of the typos and grammatical errors that have become a key indicator of an attack. That means credential phishing and BEC attacks are only going to increase in volume and severity.
So how do you defend against this threat? Join this session to hear how generative AI is changing the threat landscape, what AI-generated attacks look like, and how you can use "good AI" to prevent "bad AI" from harming your organization.
Abnormal Security, Field CISO
December 6, 2023 12:45
In an era where artificial intelligence (AI) and Large Language Models (LLMs) are becoming integral to our digital interactions, ensuring their security and usability becomes paramount.
This presentation embarks on a journey through the intersection of these two pivotal domains within the automation landscape, digging into the methodologies, techniques, and tools employed in threat modeling, API testing, and red teaming these artificial narrow intelligence (ANI) systems.
Expect a discussion on how we, as users and developers, can strategically plan and implement tests for generative AI and LLM systems to ensure their robustness and reliability. The session will also attempt to spark a conversation about our daily interactions with generative AI and how it affects our conscious and subconscious engagement with these technologies.
Bishop Fox, Principal Security Architect
December 6, 2023 13:20
As the popularity of Large Language Models (LLMs) continues to grow, there's a clear divide in perception: some believe LLMs are the solution to everything - a ruthlessly efficient automaton that will take your job and steal your dance partner. Others remain deeply skeptical of their potential - and have strictly forbidden their use in corporate environments.
This presentation seeks to bridge that divide, offering a framework to better understand and incorporate LLMs into the realm of security work. We will delve into the most pertinent capabilities of LLMs for defensive use cases, shedding light on their strengths (and weaknesses) in summarization, data labeling, and decision task automation. Our discourse will also address specific tactics with concrete examples such as 'direction following'—guiding LLMs to adopt the desired perspective—and the 'few-shot approach,' emphasizing the importance of precise prompting to maximize model efficiency.
The presentation will also outline the steps to automate tasks and improve analytical processes and provide attendees with access to basic scripts which they can customize and test according to their specific requirements.
Independent AI Security Researcher
December 6, 2023 14:00
Nick Vigier joins the SecurityWeek fireside chat to discuss his priorities as CISO at Oscar Health, the challenges of communicating security risks in large organizations, the ransomware crisis in the healthcare sector, the cybersecurity labor market, and the issue of CISOs facing personal liability for breaches.
The conversation also delves into AI/LLM use cases for automating routine and monotonous security tasks, the use of generative AI and co-pilot technologies to write more secure code, and ethical and privacy considerations when training and deploying large language model (LLM) algorithms.
Oscar Health, Chief Information Security Officer
December 6, 2023 15:00
December 6, 2023 15:25
December 6, 2023 15:36