Mobile app security: the complete reference

    Mobile security is the largest cluster on PTKD — over a hundred guides covering platform-specific controls, threat models, testing methodologies, compliance requirements, and the new wave of AI-coded and no-code apps that traditional scanners weren't built for.

    This page is the canonical entry point: the sections below summarise what mobile app security actually means in 2026, the threat model PTKD assumes, and how the OWASP Mobile Top 10 maps to the guides in this cluster. Use it as a starting point, then drill into a sub-topic.

    112 guides in this cluster

    What is mobile app security?

    Mobile app security is the set of practices, controls, and automated checks that keep a published Android or iOS app resistant to the threats it faces in production. It covers the app binary itself, the runtime it executes in, the network traffic it generates, the data it stores, the third-party SDKs it loads, and the build pipeline that produces it.

    Three things make mobile different from web security: the attacker has full read access to your shipped binary, the device you're running on cannot be trusted, and the platform store (Apple, Google) is a hard distribution gate. Every control in this cluster is shaped by those three facts.
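To see how little the first fact demands of an attacker, note that pulling human-readable strings out of a shipped binary takes a few lines of code, no reverse-engineering suite required. A minimal Python sketch of the classic `strings` utility, run here on an invented byte blob standing in for an app binary:

```python
import re

def extract_strings(data: bytes, min_len: int = 6) -> list[str]:
    """Return every run of printable ASCII at least min_len bytes long."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# Anyone who downloads the app sees whatever you compiled in, secrets included.
blob = b"\x00\x01pad\x00api_key=sk_live_123456\x00\x02"
print(extract_strings(blob))  # ['api_key=sk_live_123456']
```

This is why every binary-side control in this cluster treats "it's compiled, so it's hidden" as a non-starter.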

    The OWASP Mobile Top 10, in plain English

    The Open Worldwide Application Security Project (OWASP) publishes the canonical list of mobile risk categories. The 2024 revision is what most scanners — PTKD included — map findings against. The categories below appear across every guide in this cluster:

    • M1 — Improper credential usage: hardcoded secrets, weak token storage, predictable session handling.
    • M2 — Inadequate supply chain security: vulnerable third-party SDKs, unsigned dependencies, compromised build pipelines.
    • M3 — Insecure authentication / authorisation: broken biometric flows, missing server-side checks, token replay.
    • M4 — Insufficient input / output validation: WebView injection, SQL injection in local databases, unsafe deep-link handlers.
    • M5 — Insecure communication: missing TLS, cleartext fallback, broken certificate pinning.
    • M6 — Inadequate privacy controls: over-broad permissions, leaked PII in logs, missing consent.
    • M7 — Insufficient binary protection: no obfuscation, no anti-tampering, exposed strings.
    • M8 — Security misconfiguration: debug builds shipped, exported components, overly permissive manifests.
    • M9 — Insecure data storage: plaintext SharedPreferences, unencrypted Keychain entries, world-readable files.
    • M10 — Insufficient cryptography: custom crypto, weak algorithms, predictable keys.
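To make M1 concrete, the simplest class of hardcoded-credential check is pattern matching over decompiled source or extracted strings. A minimal regex sketch with two illustrative rules (these are not PTKD's actual rule set; real scanners ship hundreds of tuned patterns):

```python
import re

# Illustrative rules only; a production scanner tunes these against false positives.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"""(?i)api[_-]?key\s*[:=]\s*['"][^'"]{16,}['"]"""),
}

def find_hardcoded_secrets(source: str) -> list[tuple[str, str]]:
    """Return (rule name, matched text) pairs for each suspected secret."""
    return [(name, m.group())
            for name, pattern in SECRET_PATTERNS.items()
            for m in pattern.finditer(source)]

snippet = 'val apiKey = "9f8e7d6c5b4a39281716151413121110"'
print(find_hardcoded_secrets(snippet))
```

A match here is only a starting point: the remediation is moving the credential server-side or into platform key storage, not just renaming the variable.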

    The threat model this cluster assumes

    Every page in this cluster assumes a hostile execution environment. Specifically, the attacker can: install your app on a rooted device, attach a debugger, intercept network traffic with a custom CA, decompile your binary, replay requests, and tamper with local storage. Any control that breaks under those assumptions doesn't belong in your app.

    What the attacker generally cannot do — assuming your backend is sound — is forge a server-validated session, decrypt hardware-bound key material, or bypass Apple/Google's code signature. Most real production security work is pushing trust off the device toward your servers and the platform attestation APIs.
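The "forge a server-validated session" point is worth making concrete: a token whose integrity is checked server-side with a key the device never sees cannot be forged by anything done on the handset. A minimal sketch using an HMAC-signed token; the key and token layout are illustrative, not a PTKD API:

```python
import hmac, hashlib

SERVER_KEY = b"server-side-only-key"  # held on the server, never shipped in the app

def issue_token(user_id: str) -> str:
    """Server mints user_id.signature; only the key holder can produce the signature."""
    sig = hmac.new(SERVER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def validate_token(token: str) -> bool:
    """Server-side check: recompute the HMAC and compare in constant time."""
    user_id, _, sig = token.partition(".")
    expected = hmac.new(SERVER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

good = issue_token("alice")
forged = "alice.deadbeef"  # an attacker on a rooted device guessing a signature
print(validate_token(good), validate_token(forged))  # True False
```

In a real deployment you would use a standard format such as a JWT with server-held keys; the point is only that the verifying secret never leaves the server, so nothing the attacker does on the device helps.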

    AI-coded and no-code apps need different scanning

    Apps built with Cursor, Rork, FlutterFlow, Adalo, Bubble, and Glide produce predictable security patterns: hardcoded API keys in client config, SQL composed via string concatenation, authentication shimmed onto pages that should have been server-protected, and untrusted user input flowing into native bridges. Traditional SAST tools — built for hand-written Swift and Kotlin — frequently miss these because the vulnerable code lives in generated JSON, YAML, or compiled JavaScript bundles.
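That SQL-concatenation pattern is easy to demonstrate. A minimal sketch using Python's sqlite3 standard library; the table and the attacker's payload are invented for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: the generated-code pattern -- input spliced into the query string.
unsafe = db.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'").fetchall()

# Safe: a parameterized query treats the input as data, never as SQL.
safe = db.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(unsafe)  # [('alice',), ('bob',)] -- the injection matched every row
print(safe)    # []                    -- no user is literally named that
```

The safe form passes the input as a bound parameter, so the database engine never interprets it as SQL; the scanner's job is spotting the first form inside generated bundles where hand review rarely looks.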

    Several guides in this cluster are tuned for those patterns. If you're shipping an AI-coded app, start with the Rork-specific pages and the "testing AI-generated apps for vulnerabilities" guide before working through the rest of the cluster.

    Where to start

    If you only have time for a few pages from this cluster, these are the most-asked guides.

    1. OWASP Mobile Security Testing Guide: How to Use It

       The canonical OWASP MASTG checklist applied to a real APK/IPA workflow.

    2. Mobile App Security Basics: Complete Guide 2025

       Read this first if you've never thought about mobile security beyond 'use HTTPS'.

    3. Best mobile app vulnerability scanners in 2026: which ones actually help?

       Honest comparison of paid and open-source mobile scanners for 2026.

    4. Testing AI-Generated Apps for Vulnerabilities: Complete 2025 Guide

       Concrete checks for Cursor/Rork/FlutterFlow output.

    5. iOS Keychain Security: How to Use It Right

       The right and wrong ways to use the iOS Keychain.

    All guides in this cluster

    AI app development

    Android specifics

    Banking apps

    Mobile app protection

    App safety

    App security

    iOS specifics

    Malware detection

    Compliance

    Core security

    Testing

    Threats

    Tools

    Vulnerabilities

    Best practices

    Frequently asked questions

    How often should I scan a mobile app for vulnerabilities?
    Every commit that touches application code or dependencies, via CI/CD. A single scan run just before store submission surfaces every issue at once, at the worst possible time to fix them. PTKD's API and GitHub Action let you wire scans into pull-request checks so regressions never reach the store.
    Is OWASP Mobile Top 10 enough for compliance?
    It's the baseline, not the whole picture. GDPR, HIPAA, PCI DSS, and ISO 27001 all impose additional requirements around consent, data residency, audit logging, and incident response that the OWASP list doesn't enumerate. Use OWASP for technical coverage and a compliance framework for the legal scope.
    Can a static scan really find runtime issues?
    Some, not all. Static analysis is excellent for hardcoded secrets, weak crypto, manifest misconfiguration, and known-vulnerable dependencies. Dynamic checks — TLS negotiation, attestation, runtime tampering — need a sandboxed execution environment. PTKD combines both because either alone misses about a third of findings in real apps.
    What's the difference between SAST, DAST, and IAST for mobile?
    SAST reads your built binary and reports issues from code patterns alone. DAST runs the app in an instrumented sandbox and observes its behaviour — network calls, file writes, IPC. IAST instruments the running app and reports issues as they happen. Mobile scanners typically combine SAST and DAST; pure IAST is rare on mobile because instrumentation breaks app integrity checks.
    How long should my first scan take?
    An APK or IPA under 200MB scans in two to five minutes on PTKD. Reports include severity-ranked findings, OWASP mapping, and copy-paste remediation snippets — not just a list of warnings.