
The Emerging Capabilities of AI: Webinar Summary

Rampart AI™ recently hosted a webinar on artificial intelligence (AI) and its potential impact on application security. The discussion featured two industry experts: Sam Curry, Vice President and CISO at Zscaler, and Dr. Jules White, Associate Dean of Strategic Learning Programs at the Vanderbilt University School of Engineering. ChatGPT, a language model developed by OpenAI, was a focal point of the discussion. ChatGPT uses deep learning techniques to generate human-like responses to text input, and the speakers discussed its potential applications across industries, including natural language processing, chatbots, and content generation.

The speakers discussed the threat ChatGPT could pose to current application security practices, particularly its potential use on offense: it can accelerate the learning curve for hackers, create more script kiddies, and help attackers develop exploits for known weaknesses.

"I think one thing that needs to happen quickly is to get it out in the open and start discussing it rather than saying, don't use it immediately," said Dr. Jules White, Associate Dean of Strategic Learning Programs at the Vanderbilt University School of Engineering. "Get everybody, you know, into the conversation of saying, look, let's talk about best practices, how we should use it, and what are the sort of appropriate processes that we should put around it."

The speakers cautioned that attackers are likely to develop their own generative language models and plug-ins, which would pose a significant security threat.

"So if you think of all the applications for it as an expanding tree of possibilities, the guardrails are sort of limiting us from exploring some of that, and the underlying technology is out there," said Sam Curry, Vice President and CISO at Zscaler. "Adversaries who don't have guardrails on their versions of it are going to develop those paths and they're going to explore things. Now, I think your typical cybercriminal is going to use it to automate at scale, and that's not gonna be very creative, but it's gonna create volume. So it won't be writing particularly creative malicious code. It'll be writing a lot of it and variations of it such that it's not gonna get caught by things like signature engines."

Another topic highlighted was the security of the emerging technology itself.

"So simply not putting in the data you care about isn't enough to ensure that it doesn't have a generative capability of making connections you didn't expect,” Mr.Curry added, “and that's something else we'd have to think about what may emerge from this, especially as it gets more sophisticated.

You can watch the webinar, "AI's Emerging Capabilities," in its entirety below: