A major premise of appsec is figuring out effective ways to answer the question, "What security flaws are in this code?" The nature of the question doesn't really change depending on who or what wrote the code. In other words, LLMs writing code really just means there's more code to secure. So, what about using LLMs to find security flaws? Just how effective and efficient are they?
We talk with Adrian Sanabria and John Kinsella about the latest appsec articles, which show a range of results: from finding memory corruption bugs in open source software to spending an inordinate amount of manual effort validating persuasive, but ultimately incorrect, security findings from an LLM.
Visit https://www.securityweekly.com/asw for all the latest episodes!
Show Notes: https://securityweekly.com/asw-370