Daphna: I wanted to welcome everyone to Rampart AI's latest webinar on artificial intelligence. And so we're talking about ChatGPT today and the impact its technology is having and will have on the application security world.
So, we are joined live by Zscaler VP and CISO Sam Curry. Sam is dedicated to empowering defenders in cyber conflicts and fulfilling the promise of security, enabling a safe, reliable, connected world. And we're also joined by Dr. Jules White, Associate Dean of Strategic Learning Programs at Vanderbilt University School of Engineering.
Dr. White's research focuses on cybersecurity, mobile and cloud computing, and domains ranging from healthcare all the way to manufacturing. These are two industry experts who have a lot of insight to share today, so welcome.
Jules: Thank you for having us.
Sam: Yeah, thanks for having us both.
Daphna: Awesome. So let's start with a brief introduction about what ChatGPT is, just to get everyone on the same page. I asked ChatGPT to kind of be aware of itself here, and it says ChatGPT is a powerful language model developed by OpenAI that uses deep learning techniques to generate human-like responses to text input.
It has a wide range of potential applications in various industries, including natural language processing, chatbots, and content generation. So just to start out, our speakers, Sam and Jules: do you agree with this self-proclaimed summary of what ChatGPT says it is?
Jules: I think it's a pretty good description.
I mean, I think the good part that it hits on, which is the opposite of what you see in the news, is that it's generating text, right? And so I think we've seen ChatGPT talked about as if it's a question-answering machine, like a search engine, or like it's going to give you correct and truthful answers to questions, and that that's its purpose.
And really that's not it at all; it's a tool for generating text that follows patterns it has seen in the past. And in the process of generating text, it can, you know, follow instructions and perform interesting operations as a result of some emergent capabilities in the underlying network.
But at the core of it, I think this is pretty accurate: it is generating text. It's not a question-answering machine.
Sam: Yeah, I was gonna give a similar but not quite the same answer. I was gonna say it's not wrong, which is probably the best way to answer anything to do with GPT or generative language models.
But Jules is right. The difficulty, though, is that when human beings interact with something that produces intelligible answers, we tend to infer reasoning and sentience behind it, and we tend to look at something we're interacting with and think, well, it's a source of authority, or there's some correctness behind it, which is not necessarily the case.
Over time, as these things get trained and built better, they'll give better answers. But again, Jules nailed it: it's not a search engine, and it shouldn't be treated as such. And we'll go deeper into this, but that's not a bad answer.
Daphna: Perfect. So to start off, I asked ChatGPT to write a webinar with questions about how ChatGPT can impact the security sector.
It came up with a great angle and some questions, and if we have time at the end, we will go back to these. But for now, I did write my own questions, and I also asked ChatGPT for answers to the questions that I wrote. So that's what you're gonna be seeing throughout the slideshow. Let's begin with another kind of blanket question for both of our speakers.
What is the threat that ChatGPT poses to our current application security systems?
Sam: So I think in order to answer that, we have to answer what it is we're potentially using it for, and I think there are applications in offense that it can be used for.
There are applications in defense, and a lot depends on what we plug it into. I've seen it used, for instance, to generate code. Let's talk offense first: it can accelerate the learning curve for people learning hacking skills, and it can create many, many more script kiddies. But it can also create weaknesses in offense, because people will tend to code in the same way, or at least develop an attack in a similar way. Now in defense, the same thing can be done, although what we do in offense and defense is asymmetric. Part of the problem, though, is that we've been putting guardrails on this.
We've been training it to behave a certain way, talk a certain way, avoid sounding murderous, for instance, and avoid giving legal advice. The problem is that the basic methodology for how to do this is out there, and attackers are going to be developing and using their own generative language models and other derivatives and other plug-ins for it.
We've gotta make sure that defenders have the ability to do red teaming and purple teaming using this without being hamstrung. Now, what are some of the risks associated with it? Well, one of the biggies is leaky abstraction in defense; in other words, defenders won't truly understand what it is they're learning as they learn quickly.
There's also picking up habits of thought, or even poisoning by attackers: where the sampling comes from, as the language models are built and used and the output is created, can be affected. So this, like a calculator or a search engine, will make some jobs easier.
But we still have to understand the basics and how it's achieved, and we still have to put variation into this. I could go on forever, so I'll stop there because I'm sure Jules has got a bunch he wants to say as well.
Jules: Yeah, so those are great ones, and I'll maybe follow up with some complementary and different ones that go along with that.
So one thing I think is interesting, and related, is that it's clear there's gonna be a ton of software written very quickly using this stuff. And I think one of the threats is going to be that you have a lot of people who stop paying attention to code, who stop looking at it carefully like they did before.
Actually, I think there's a real opportunity with these tools to improve the security of code by always giving it a second look, you know? But until those practices are in place and widely used, I think there's a risk that you have a lot of people producing code very rapidly and not paying attention to it, because they trust the tool and maybe aren't even capable of spotting errors in it. So I think that's a risk, that we create a lot of software that we really haven't thought through. Now, we already create a lot of software with bugs in it, but I think that is a risk.

My perspective is that one of the most important things for national security and competitiveness is that this came out of the United States; that's a hugely important thing for us. At the same time, I think there's a huge risk if we start getting too careful and cautious with it and don't accelerate behind this as a competitive advantage, because every adversary is going to be all over this and driving hard on it. The moment that you slow down, you know, there are some really good points here, we're putting guardrails on, we're doing all these things, and all the adversaries are gonna be taking off all the guardrails, starting from scratch.
And if we're slowing down, becoming more cautious, they're gonna be speeding up, and we're gonna lose the competitive advantage. So I think that's one of the biggest threats: that we don't properly take advantage of this early lead that we have and use it to great advantage in defense...
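One way to picture Jules's "second look" idea in practice is a minimal sketch along the following lines. It assumes the openai Python package (the pre-1.0 ChatCompletion API) and an OPENAI_API_KEY environment variable; review_code is a hypothetical helper, and the model choice and prompt wording are illustrative only, not a recommendation from the speakers.

```python
import os

import openai  # assumes the pre-1.0 openai package (ChatCompletion API)

openai.api_key = os.environ["OPENAI_API_KEY"]


def review_code(snippet: str) -> str:
    """Hypothetical helper: ask a model to flag security issues in a snippet."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security-focused code reviewer. "
                    "List potential vulnerabilities and risky patterns."
                ),
            },
            {"role": "user", "content": snippet},
        ],
        temperature=0,  # keep the review as repeatable as possible
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # A classic injection-prone line, used here as a test input.
    print(review_code('query = "SELECT * FROM users WHERE id = " + user_id'))
```

Per Sam's caution above, the model's review is itself generated text, an aid to a human reviewer rather than a source of authority, so the "second look" still needs human eyes on the output.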
Watch the full webinar below: