A talk at OWASP Toronto by Matt Brown.

Notes

  • Prompt Engineer is a real job title now
  • You need to be somewhat skilled at prompting to be efficient
  • You can think of Cursor, Replit Ghostwriter, Windsurf, and Copilot as your vibecoding tools
  • We build:
    • Modern AI native apps
    • They are model-based, not hard-coded
    • They make real-time decisions
    • They are uncertain
    • They are embedded intelligence
  • Who builds it:
    • Vibe-based development
    • Autonomous code writing
    • No syntax knowledge required
  • Vibe Coding
  • We are more scared of vibecoders breaking things than of actual developers breaking things
  • LLMs are trained on open source software, which includes a lot of bad practices, since it's a free world
  • Software Composition Analysis exists to scan for vulnerabilities in bad code
  • Software engineers vs. programmers: we are not developing sustainable codebases
  • GitHub CTO says: we are laying off but we will hire a lot more; you can be 10x with 10 developers, 100x with 100 developers
  • A guy got his vibecoded app destroyed. Software is complicated: there is a whole system supporting the app, and vibecoding neglects the system and security behind it
  • Code flow for interns deploying into prod was shown
  • Malicious attacker will:
    • AI Fuzz endpoints
    • Destroy systems
  • 75% of developers are using AI copilots
  • 40% of code on GitHub is AI-generated (per Microsoft)
  • 62% of AI-generated code has issues
  • They tested Cursor by asking it to create a board game collection app: the result had 65 critical CVEs, 700 Static Application Security Testing (SAST) findings, and 1,600 dependencies, and the app didn't work
    • Mostly outdated dependencies
  • Lesson 1: Non-Determinism is good and bad
    • Same prompt can produce different results
    • Free from IF-THEN. Allows for creativity and reasoning
    • Code quality depends on the model you are using
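The non-determinism lesson can be illustrated with a toy next-token sampler (plain Python, not a real LLM; the token strings are made up for illustration): at temperature 0, decoding is greedy and reproducible, while at higher temperature the same "prompt" can yield different completions.

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float, rng: random.Random) -> str:
    """Toy next-token sampler: temperature 0 is greedy (deterministic)."""
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: same input -> same output
    # Softmax with temperature, then a weighted random choice
    scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(scaled.values())
    return rng.choices(list(scaled), weights=[v / total for v in scaled.values()])[0]

# Hypothetical candidate completions for a code-generation step
logits = {"secure_query()": 2.0, "raw_sql_concat()": 1.5, "eval()": 0.5}

# Greedy decoding is reproducible across runs:
greedy = [sample_token(logits, 0, random.Random(i)) for i in range(5)]
print(greedy)

# At temperature 1.0, different runs can pick different (sometimes unsafe) tokens:
outs = {sample_token(logits, 1.0, random.Random(i)) for i in range(50)}
print(outs)
```

This is the double edge the talk describes: the randomness that enables creative suggestions is the same randomness that occasionally picks the insecure completion.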
  • When prompting, adopt the right persona: appsec people speak appsec
  • Lesson 2: Expect the unexpected:
    • Small features have a huge impact: simple AI suggestions can inflate your dependency tree (AI likes to make significant architectural changes and will "simplify" things automatically)
    • Vulnerability multiplication. Each dependency can introduce new risks
    • Risks beyond CWEs and CVEs. AI can make architectural changes that impact your security posture
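The "vulnerability multiplication" point can be sketched with a toy transitive-dependency walk (all package names and CVE IDs below are hypothetical): two direct dependencies fan out into a larger tree, and every node in that tree is a place a vulnerability can live.

```python
# Toy dependency graph: hypothetical packages, two direct deps for "my-app".
DEPS = {
    "my-app": ["web-framework", "orm"],
    "web-framework": ["templating", "http-core"],
    "templating": ["sandbox-utils"],
    "orm": ["db-driver", "http-core"],
    "http-core": [],
    "sandbox-utils": [],
    "db-driver": [],
}
# Placeholder advisories; a real SCA tool would pull these from a vuln database.
VULNS = {"sandbox-utils": ["CVE-XXXX-0001 (example)"], "db-driver": ["CVE-XXXX-0002 (example)"]}

def transitive_deps(pkg: str, graph: dict[str, list[str]]) -> set[str]:
    """Depth-first walk of everything pkg pulls in, directly or indirectly."""
    seen: set[str] = set()
    stack = list(graph.get(pkg, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

deps = transitive_deps("my-app", DEPS)
inherited = {d: VULNS[d] for d in deps if d in VULNS}
print(f"{len(DEPS['my-app'])} direct deps pull in {len(deps)} packages total")
print("inherited vulnerabilities:", inherited)
```

Neither vulnerable package was asked for directly; both arrived transitively, which is exactly why an AI suggestion that adds "one" dependency can quietly change your risk surface.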
  • Static Application Security Testing tends to struggle with negative logic: removing a security feature does not flag anything
  • With Dynamic Application Security Testing, you could probably identify this
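The SAST-vs-DAST point can be shown with a minimal sketch: a handler with its output escaping removed looks like ordinary string formatting to a pattern matcher (the absence of a call is not a pattern), but a dynamic check that actually sends a payload notices the difference.

```python
import html

def render_comment_safe(comment: str) -> str:
    # Escapes user input before embedding it in HTML
    return f"<p>{html.escape(comment)}</p>"

def render_comment_unsafe(comment: str) -> str:
    # Same shape of code with the escape removed: nothing here for a
    # pattern-based SAST rule to flag -- it's just an f-string.
    return f"<p>{comment}</p>"

# A DAST-style check: fire an XSS payload at each handler and inspect the output.
payload = "<script>alert(1)</script>"
for name, handler in [("safe", render_comment_safe), ("unsafe", render_comment_unsafe)]:
    reflected = payload in handler(payload)
    print(f"{name}: payload reflected unescaped -> {reflected}")
```

The dynamic check catches the regression regardless of how the code is written, which is why it complements static scanning when AI edits remove security logic.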
  • Lesson 3: Start with secure prompts:
    • Implementing security early at the prompt stage
    • Explicitly tell it to be secure
  • Lesson 4: Implement security standards
    • Use rules files to drive development. Rule files can drive behaviors
    • Use ignore rule to protect sensitive data. Your Mileage May Vary with different models
    • Use Test-driven Development. Automated checks (which you can ask the model to write) can help catch issues
  • Example: give the prompt: Use the internal function sanitizeInputSafe() for all user input
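sanitizeInputSafe() is only a name in the example prompt; the talk does not define it. A hypothetical Python version (the whole implementation below is an assumption for illustration), together with the kind of TDD-style checks Lesson 4 suggests asking the model to write alongside the feature:

```python
import html
import unicodedata

def sanitize_input_safe(value: str, max_len: int = 1024) -> str:
    """Hypothetical internal sanitizer: normalize, strip control chars, escape HTML."""
    value = unicodedata.normalize("NFKC", value)[:max_len]
    # Drop control/format characters (Unicode categories starting with "C")
    value = "".join(ch for ch in value if unicodedata.category(ch)[0] != "C")
    return html.escape(value, quote=True)

# TDD-style checks (Lesson 4): automated assertions the model can be asked to write.
assert "<script>" not in sanitize_input_safe("<script>alert(1)</script>")
assert sanitize_input_safe("hello") == "hello"
assert len(sanitize_input_safe("a" * 5000)) <= 1024 * 6  # escaping may grow length
print("sanitizer checks passed")
```

Pointing the model at one blessed helper like this, instead of letting it improvise sanitization per call site, is the point of baking the instruction into the prompt or rules file.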
  • Lesson 5: Get real-time security signal
    • Out-of-date code and data (including CVE data). Models are trained on code that is 1+ years old, but new CVEs are being disclosed every day
    • Fresh intelligence. MCP servers are useful to inject fresh security intelligence into the development workflow
    • Models can fix issues. With enough guidance, models can fix their own issues
  • MCP servers are like APIs for LLMs to use. You can use the Endor MCP server to find the most recent vulnerabilities
  • Cursor supports MCP and will use it while chatting; it is not a git hook. Anytime you interact with Cursor, it can do this. They don't implement it as a git hook because devs like freedom; you can set up your MCP to be called when you want
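As a rough mental model of the MCP idea (this is a toy registry in plain Python, not the real MCP protocol or any SDK; the feed and package names are made up): a server exposes named tools, and the assistant calls them mid-chat to pull in fresh data, such as recent vulnerability lookups.

```python
from typing import Callable

class ToyMCPServer:
    """Toy stand-in for an MCP server: a registry of callable, named tools."""
    def __init__(self) -> None:
        self.tools: dict[str, Callable[..., object]] = {}

    def tool(self, name: str):
        def register(fn):
            self.tools[name] = fn
            return fn
        return register

    def call(self, name: str, **kwargs):
        return self.tools[name](**kwargs)

server = ToyMCPServer()

# Hypothetical vulnerability feed; a real server (e.g. Endor's) would query live data.
FEED = {"leftpad-clone": ["CVE-XXXX-1234 (example)"]}

@server.tool("lookup_vulns")
def lookup_vulns(package: str) -> list[str]:
    return FEED.get(package, [])

# Mid-chat, the assistant calls the tool instead of relying on stale training data:
print(server.call("lookup_vulns", package="leftpad-clone"))
```

The payoff is freshness: the model's weights are frozen at training time, but a tool call reaches data that was updated today.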
  • Help devs adopt code assistants:
    • Start with secure prompts
    • Implement security standards
    • Add security signals with MCP servers
  • Endor will release a free prompt library
  • The Motley Fool published an article: 90% of American companies say they are using AI