LLMs in AppSec: Real-World LLM Use Cases in Application Security from Four Industry Experts

Author: Impart Security
Published on: July 24, 2024
Read time: 48 min

In this episode of the RealReal AppSec Talk, Darwin Salazar, Phillip Maddux, James Wickett, and Brian Joe separate hype from reality for AI and LLMs in application security today, discuss how things are evolving on the front lines, and predict what the future will look like. You'll hear expert perspectives from the security practitioner (Phillip), from an SDLC (software development lifecycle) contextual security founder (James), from a runtime API security founder (Brian), and the industry analyst viewpoint from Darwin as moderator.

Here's a summary of what was discussed:

- 0:00: Cold open with original rap: What's a CackalackyCon?

- 3:05: Top LLM and security headlines

• Judging from countless data breaches and a flurry of bad headlines, it's clear that AI and LLMs in application security are still in the early stages. Luckily, it looks like we're turning a corner and finally coming out of the initial growing pains.

• Security practitioners should adopt a crawl-walk-run approach to LLMs. On the offensive side, we can effectively use LLMs to identify issues, learn how to patch them, and reduce risk. On the defensive side, we're still figuring out how to leverage LLMs to deliver effective outcomes for securing applications. There is a big rush to adopt AI, but figuring out how and where to apply it in our workflows requires more discovery before we can walk and run with it.

- 4:15: How LLMs are being used today: phishing, code analysis, runtime edge cases, James Berthoty's ChatGPT vs. Snyk bake-off

• AI and LLMs are not a silver bullet for securing the SDLC; they're good at solving smaller, bespoke problems. The industry needs to figure out how and what problems to target and solve with AI and LLMs. They can be enablers and can help us be more collaborative, but you can't force them to do something they cannot do. They're good for code analysis and some automation, and can provide some shortcuts.

• A practical use case is using an LLM to determine whether an email is phishing. The LLM's verdict can be combined with several other checks to boost confidence in how we score an email and decide whether it is phishing, as in the sketch below.
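
A minimal sketch of that scoring pattern, assuming a hypothetical `llm_complete()` wrapper for whatever model provider you use; the heuristics, weights, and field names are illustrative, not from the episode:

```python
# Hedged sketch: blend an LLM phishing verdict with traditional checks.
# llm_complete() is a placeholder for your LLM client of choice.

KNOWN_DOMAINS = {"example.com"}  # assumed sender allowlist
URGENCY_WORDS = ("urgent", "verify your account", "password expires")

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

def heuristic_score(email: dict) -> float:
    """Traditional signals: unknown sender domain, urgency language."""
    score = 0.0
    if email["sender_domain"] not in KNOWN_DOMAINS:
        score += 0.4
    if any(w in email["body"].lower() for w in URGENCY_WORDS):
        score += 0.3
    return min(score, 1.0)

def llm_score(email: dict) -> float:
    """Ask the LLM for a phishing likelihood; treat it as one more check."""
    prompt = ("On a scale of 0.0 to 1.0, how likely is this email to be "
              "phishing? Reply with only the number.\n\n" + email["body"])
    try:
        return max(0.0, min(1.0, float(llm_complete(prompt))))
    except (ValueError, NotImplementedError):
        return 0.0  # fall back to the heuristics if the LLM is unavailable

def phishing_confidence(email: dict) -> float:
    # Heuristics stay primary; the LLM verdict only boosts confidence.
    return 0.7 * heuristic_score(email) + 0.3 * llm_score(email)
```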

• From a vendor standpoint, a lot of people are surprised at how well GPT-4 stacks up against established SaaS players. In the SDLC, LLMs can be used for code summarization, to detect application security problems, and to determine what kind of application something is and what it's touching. They can help move code review beyond regex rule writing, strip away the noise and false positives, and provide deeper knowledge and higher-cardinality findings for users; a sketch of that triage step follows.
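
One way that triage step might look, sketched under assumptions: a deliberately naive regex still finds candidates, and the LLM (again behind the hypothetical `llm_complete()` placeholder) judges each hit in context instead of every raw match being surfaced:

```python
import re

# Deliberately naive detector; the LLM triages its hits in context.
SECRET_RE = re.compile(r"(api[_-]?key|secret)\s*=\s*\S+", re.I)

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

def triage_finding(source: str, line_no: int, window: int = 5) -> str:
    """Ask the LLM whether a regex hit is a real secret or a false
    positive, given a few lines of surrounding code for context."""
    lines = source.splitlines()
    lo, hi = max(0, line_no - window), min(len(lines), line_no + window + 1)
    context = "\n".join(lines[lo:hi])
    return llm_complete(
        "A scanner flagged a possible hardcoded secret below. Say whether "
        "it is a true positive (real credential) or a false positive (test "
        "fixture, placeholder), and summarize why.\n\n" + context
    )

def scan(source: str):
    """Yield (line number, LLM triage verdict) for each candidate hit."""
    for i, line in enumerate(source.splitlines()):
        if SECRET_RE.search(line):
            yield i, triage_finding(source, i)
```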

• In the SDLC, LLMs are better suited to pattern matching and detections that fit in a context window; they are less suited to runtime data. At runtime, LLMs don't have enough scale; the data is too large to shove into a model and use to block, and you can't just trust the outcome. But there are edge use cases around policies and rules, generating documentation, and making alerts easy to understand, as sketched below.
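
A small sketch of that last edge case, turning a raw runtime alert into an analyst-friendly explanation; the alert fields and `llm_complete()` are assumptions, not any specific product's schema:

```python
import json

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

def explain_alert(alert: dict) -> str:
    """Summarize a raw alert for an on-call engineer in plain English."""
    return llm_complete(
        "Explain this API security alert in two sentences: what happened "
        "and what to check first.\n\n" + json.dumps(alert, indent=2)
    )

# Illustrative payload only.
alert = {
    "rule": "rate-limit-burst",
    "endpoint": "/v1/login",
    "source_ip": "203.0.113.7",
    "requests_per_min": 900,
}
```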

• Whether in the SDLC or at runtime, LLMs work best on top of strong data pipelines, using processing and pattern matching to help security teams fulfill tasks, not actually do the tasks. As these models become more lightweight, there will be more runtime use cases, but for specific runtime analysis, detection, and response use cases, LLMs are not yet ready for production. They cannot be the actual decision maker.

- 24:39: Where LLMs are going in the next 6-12 months: three different types of copilots, sorting thousands of pull requests, specialty models emerging, inputs to decisions but not decision making

• For developers, AI and LLMs will be a boon. There will be fewer barriers to entry to becoming a developer. It's not going to take our jobs away; it's just going to move people into different roles. Security tools will shift from being an assistant to something that takes on DAST-like capabilities or pre-merge analysis. We'll be able to leverage it to shrink the time it takes to get code to production.

• For the practitioner, the experimentation process will continue. LLMs will help to identify which pull requests are risky. There will always be surprises.

• There will be more LLM tuners, and therefore we'll learn practical applications around detection and response automation. It will help teams scale and help us become more effective at interacting with it and with each other, so we can get responses back faster. We can use it to better discern whether a friend or foe is hitting our APIs.

• We can have AI comb through code and look for patterns and insights in detections and responses to distinguish attackers and malicious users from legitimate ones. The information provided is a data point for the decision a human makes; it is not the decision maker, at least not yet. The sketch below shows that pattern.
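
A sketch of that "data point, not decision maker" pattern, with illustrative names and the same hypothetical `llm_complete()` placeholder; the verdict is attached as evidence for a reviewer, never wired to a block action:

```python
from dataclasses import dataclass

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

@dataclass
class Review:
    caller_id: str
    llm_verdict: str                 # advisory evidence only
    human_decision: str = "pending"  # a person fills this in later

def assess_caller(caller_id: str, traffic_summary: str) -> Review:
    verdict = llm_complete(
        "Given this summary of one caller's API traffic, answer 'likely "
        "friend' or 'likely foe' with one line of reasoning:\n"
        + traffic_summary
    )
    # No enforcement here: the verdict just joins the human review queue.
    return Review(caller_id=caller_id, llm_verdict=verdict)
```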

• But there are interim solutions that are helping LLMs get better at helping us make decisions about penalties, blocking, and so on. Right now that means bringing insights from the SDLC or from runtime observations into runtime context.

• LLMs are not trusted enough to make blocking decisions, but they can give insights into how we can make better decisions and build trustworthy security policies; a sketch of that workflow follows. Eventually the reasoning will get better, and we'll learn to trust LLMs more.
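
One possible shape for that interim workflow, as an assumption-laden sketch: the LLM drafts a candidate rule from observed traffic, and nothing is enforced until a human signs off. The rule format and function names are made up for illustration:

```python
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

def draft_rule(observations: str) -> str:
    """Ask the LLM to propose one candidate policy as structured text."""
    return llm_complete(
        "From these traffic observations, draft ONE candidate rule in the "
        "form 'IF <condition> THEN <action>'. No prose.\n\n" + observations
    )

def submit_for_approval(rule: str) -> None:
    # Enforcement happens only after explicit human sign-off elsewhere.
    print(f"PENDING REVIEW (not enforced): {rule}")
```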

- 38:18: The future of the application security engineering role: Phillip's 5-phase prediction, Star Trek references, and reasons for optimism

• There will be 5 phases:

1) LLMs help humans scale as we tune and train them.

2) LLMs increase efficiency so we can learn to do more application security things.

3) More automation. LLMs will do more things for us, freeing us up to experiment and accomplish more.

4) AI + automation = infinite humans.

5) We rely on LLMs/AI so much that we end up forgetting about application security. We end up having to reboot.

• Similar to how vulnerability management and detection (rules and tests), prioritization, and posture management have evolved, LLMs are going to generate so many interesting things to think about that we're going to have to come up with new tools and new ways to prioritize and understand what's relevant.

• The application security industry has always been understaffed. AI/LLMs can help us keep up and fill in the gaps. It's exciting to think about the potential. There's a lot to think about.

• The future of AI is going to be a surprise, but it's only going to amplify humans. One fear is that we're never going to fully realize its potential and how it can promote and enhance humankind. We're only just in the beginning.

About the panel:

Darwin Salazar - https://www.linkedin.com/in/darwin-salazar

Phillip Maddux - https://www.linkedin.com/in/phillip-maddux-60499a105/

James Wickett - https://www.linkedin.com/in/wickett/

Brian Joe - https://www.linkedin.com/in/brianwjoe/
