App safety for AI-coded and vibe-coded builds

    AI coding assistants and no-code platforms ship apps faster than ever — but they also produce predictable security patterns: hardcoded API keys, unsafe SQL, missing auth checks, untrusted data flowing into native bridges, and permissive default configurations. This cluster covers the vulnerabilities specific to AI-generated and vibe-coded apps and how to find them.

    PTKD's scanner is tuned for these patterns; the guides below walk through what to look for and how to fix it before shipping.

    3 guides in this cluster

    The five vulnerabilities AI assistants introduce most often

    1. Hardcoded credentials. Asked to "add Stripe payments," the assistant often inlines a test or production key into client code rather than reading it from server config. Search your generated bundles for sk_, pk_, AIza, and AKIA prefixes.
    2. Authorisation via "hide the button". Premium-feature gates implemented client-side only: removing the button or calling the endpoint directly bypasses the gate. Every privileged operation needs server-side enforcement.
    3. SQL or NoSQL injection. String-concatenated queries against client-side SQLite, or Firestore rules that trust unsanitised user input.
    4. Overly permissive network rules. A generated network_security_config.xml or Info.plist that allows cleartext for the whole app, because the assistant's training data favoured the broadest fix.
    5. WebView / JS bridge injection. addJavascriptInterface exposed with native privileges to any URL the WebView loads.
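    The credential check in item 1 is easy to automate before a full scan. A minimal sketch using the prefixes named above (the file extensions and the {10,} length floor are illustrative assumptions, not a complete signature set):

```python
import re
from pathlib import Path

# Prefixes from the text: Stripe secret/publishable keys, Google API keys, AWS access keys.
# The trailing length requirement is an assumed heuristic to cut false positives.
KEY_PATTERN = re.compile(r"\b(sk_|pk_|AIza|AKIA)[0-9A-Za-z_\-]{10,}")

def scan_for_keys(root, exts=(".js", ".ts", ".json", ".xml", ".plist")):
    """Return (file, match) pairs for anything that looks like an inlined key."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            text = path.read_text(errors="ignore")
            for m in KEY_PATTERN.finditer(text):
                hits.append((str(path), m.group(0)))
    return hits
```

    A hit is not proof of a live secret, but anything matching in a client-side bundle deserves a manual look.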

    Platform-specific notes

    • Cursor / Copilot: Suggestions reflect training-data patterns, including outdated and vulnerable ones. Always review security-sensitive suggestions against current platform docs.
    • Rork / FlutterFlow / Bubble: Visual builders often expose API keys in their default configurations. Use platform server-side variables, not client-side environment values.
    • Adalo / Glide: Database security rules are easy to misconfigure. Default to authenticated reads/writes only; enable public access per-collection deliberately.
    • Claude / GPT agents writing production code: Output looks clean but often skips input validation and lacks rate limiting. Treat agent-generated code like an intern's first PR — review every privileged operation.
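    The fix for "hide the button" gating, in practice, is to repeat the entitlement check on the server for every privileged call. A hypothetical sketch (the in-memory user set and the feature name are made-up stand-ins for a real entitlement store):

```python
PREMIUM_USERS = {"alice"}  # stand-in for a real server-side entitlement lookup

class Forbidden(Exception):
    """Raised when a caller lacks the entitlement for a privileged operation."""

def export_report(user_id):
    # Server-side gate: enforced on every request,
    # regardless of which buttons the client UI shows or hides.
    if user_id not in PREMIUM_USERS:
        raise Forbidden("premium feature")
    return f"report for {user_id}"
```

    The client can still hide the button for UX, but the server check is the only part that counts as security.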

    How PTKD scans AI-coded apps

    PTKD's scanner has rules tuned for the patterns above — detection signatures for common AI-assistant code shapes, framework-specific known-vulnerable defaults (Rork, FlutterFlow, Cordova/Capacitor), and a JS bridge audit for WebView-heavy hybrid apps. Upload your build, and the report calls out AI-pattern findings separately from general OWASP findings so you can triage them as a group.
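    To make "detection signature" concrete, here is a simplified stand-in (not PTKD's actual rule set) for the JS bridge audit: flag source that enables JavaScript in a WebView or exposes a native bridge via addJavascriptInterface. Real scanners parse the AST or bytecode rather than grepping text:

```python
import re

# Simplified textual signatures for Android WebView risk patterns.
BRIDGE = re.compile(r"addJavascriptInterface\s*\(")
JS_ON = re.compile(r"setJavaScriptEnabled\s*\(\s*true\s*\)")

def audit_webview_source(source):
    """Return a list of findings for one source file's text."""
    findings = []
    if BRIDGE.search(source):
        findings.append("native JS bridge exposed via addJavascriptInterface")
    if JS_ON.search(source):
        findings.append("JavaScript enabled in WebView")
    return findings
```

    Either finding alone is only a signal; the combination, plus a WebView that loads remote URLs, is the injection setup described in item 5 above.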

    Where to start

    If you only have time for a few pages from this cluster, these are the most-asked guides.

    1. Cursor AI Security Issues: The Complete Risk Assessment Guide

      What goes wrong when you trust Cursor for security-sensitive code.

    2. How to Perform Cursor App Security Testing: Expert Guide

      Using Cursor itself to drive a security testing workflow.

    3. React Native Secure Storage and Keychain Guide

      Secure storage patterns for RN apps generated by AI tools.


    Frequently asked questions

    Are AI-generated apps inherently insecure?
    No, but their failure modes are more predictable than hand-written code. The same five issues — hardcoded keys, client-side auth, SQL injection, permissive network rules, WebView bridges — recur in 80% of AI-coded apps we scan. Once you know the patterns, you can systematically check for them.
    Should I avoid AI coding tools for security-critical apps?
    No. Avoid blind acceptance of AI output, especially for auth, payments, and data access. Review every privileged operation, scan every build, and run dependency audits. AI accelerates the boring parts; the security review is the part you don't automate.
    Do no-code platforms hide vulnerabilities?
    They hide the implementation, which makes review harder, but not the vulnerabilities themselves. Most no-code platforms expose a security rules editor (Firestore Security Rules, Bubble Privacy Rules, Adalo Collections permissions). The vulnerabilities live there; scan and review those rules with the same rigour as hand-coded backend code.