The ten checks below catch the security issues we see in roughly 80% of the APKs and IPAs scanned by PTKD. Run them in order — they are sorted by the ratio of damage prevented to engineering hours required, not by where they live in the OWASP framework.
This post is the field version of a security review, written for teams who don't have a dedicated security engineer and need a defensible posture before they ship. We'll cover what to check, why each one matters in plain English, and exactly what "done" looks like.
Most apps fail at least three of these ten checks. The first six are non-negotiable for any app that handles user data.
How to use this list
Work top-down and don't skip ahead. The order matters: secrets leak the moment your binary ships, network misconfiguration fails the next request, and storage bugs surface the next time a user logs in. Architectural items (attestation, anti-tampering) come last because they depend on the basics being in place.
Each check has three parts: what it is, the failure pattern we see in real apps, and what "done" looks like. For the deep-dive on any single item, follow the linked guide — those cover the implementation details the checklist can't.
1 · Secrets are not embedded in the binary
No API keys, OAuth client secrets, signing keys, or admin tokens live in your shipped APK/IPA. Anyone can extract strings from a published binary; treat anything you compile in as public information.
Common failure: a Firebase admin SDK key, a Sentry DSN with write scope, or an unpartitioned third-party API key inlined for "convenience".
Done looks like: all secrets fetched from your backend after authentication, or stored in the platform-specific secure keystore. Read our guide on securing API keys for the patterns.
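To make the threat concrete: extracting strings from a binary requires no tooling beyond a regex. A minimal sketch — the byte blob stands in for a classes.dex or Mach-O binary, and the key format is made up:

```python
import re

# Hypothetical demo: any printable string compiled into a shipped binary
# is trivially recoverable. 'blob' stands in for classes.dex / the IPA's
# Mach-O binary; the sk_live_ key is fake.
blob = b"\x7fELF\x01\x02junk API_KEY=sk_live_FAKE1234\x00more junk"
found = re.findall(rb"sk_live_[A-Za-z0-9_]+", blob)
print(found)  # [b'sk_live_FAKE1234']
```

The same result falls out of `strings app.bin | grep`, which is exactly what an opportunistic attacker runs first.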
2 · TLS is enforced and ATS exceptions are gone
Every network request your app makes is HTTPS. There are no NSAllowsArbitraryLoads exceptions in Info.plist and no cleartextTrafficPermitted=true in your Android Network Security Config. Both opt in to cleartext, and both bypass the platform defaults you actually want.
Common failure: ATS exception added for a single legacy API and never removed; cleartext enabled in network config because the test environment used HTTP.
Done looks like: ATS exceptions removed or scoped to a single domain you control with a documented removal plan; network security config explicitly blocks cleartext.
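On Android, the done-state can be a single config file. A minimal sketch of a Network Security Config that blocks cleartext everywhere (referenced from the manifest's application element via android:networkSecurityConfig):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
</network-security-config>
```

Because there are no domain-level overrides here, any HTTP request — including ones made by third-party SDKs — fails at the platform layer.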
3 · Authentication has server-side enforcement
Every privileged operation — anything beyond "read public data" — is gated by a check on your server, not just a UI condition in the app. Client-only enforcement is the most common authorization bug we find.
Common failure: a premium feature hidden behind a client-side flag; calling the underlying endpoint directly skips the gate.
Done looks like: server validates session, role, and ownership on every request. The app UI hides features users can't access, but the server is the source of truth.
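The three server-side checks — session, role, ownership — can be sketched as one guard function. This is framework-agnostic and the names are illustrative, not from any specific codebase:

```python
# Server-side enforcement sketch: all three gates run on every request,
# regardless of what the client UI shows. Names are hypothetical.
def can_export_report(session: dict, report: dict) -> bool:
    return (
        session.get("authenticated", False)              # valid session
        and "premium" in session.get("roles", [])        # role allows the action
        and report["owner_id"] == session.get("user_id") # caller owns the resource
    )

# A client-side flag flip changes nothing: the role gate holds on the server.
session = {"authenticated": True, "roles": ["free"], "user_id": 7}
print(can_export_report(session, {"owner_id": 7}))  # False
```

Calling the endpoint directly with a free-tier session fails the same way the UI path does — that is the property client-side flags can't give you.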
4 · Untrusted input is validated server-side
User input, URL query parameters, deep-link payloads, and inter-app intents are validated and sanitised on the server before they affect any state. Most mobile injection bugs come from trusting the device with checks that should have run server-side.
Common failure: SQL composed by string concatenation in a client-side cache; an unvalidated deep link that opens a WebView with attacker-controlled HTML.
Done looks like: input boundaries documented; parametrised queries; an allow-list for deep-link hosts.
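Both done-state items fit in a few lines. A sketch using Python's stdlib — the allow-listed host and table name are illustrative:

```python
import sqlite3
from urllib.parse import urlparse

ALLOWED_DEEPLINK_HOSTS = {"app.example.com"}  # hypothetical allow-list

def is_allowed_deeplink(url: str) -> bool:
    # Exact-match the host against the allow-list; reject non-HTTPS schemes.
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_DEEPLINK_HOSTS

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")
hostile = "x'); DROP TABLE notes; --"
# Parametrised: the payload is stored as inert data, never parsed as SQL.
conn.execute("INSERT INTO notes (body) VALUES (?)", (hostile,))
print(conn.execute("SELECT count(*) FROM notes").fetchone()[0])  # 1
print(is_allowed_deeplink("https://evil.example.net/open"))      # False
```

The deep-link check matters because Android intents and iOS universal links deliver attacker-controlled URLs straight into your app's entry points.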
5 · Sensitive data is encrypted at rest
Tokens, PII, and any data covered by your compliance framework live in the platform keystore — Android Keystore on Android, Keychain on iOS — with the correct accessibility flags. SharedPreferences and UserDefaults are plaintext from a compliance standpoint.
Common failure: refresh tokens written to SharedPreferences; Keychain entries with default accessibility that syncs via iCloud Backup.
Done looks like: Android EncryptedSharedPreferences or a Keystore-backed AES key; iOS Keychain with kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly and SecAccessControl for biometric gates. See the iOS Keychain security guide for the full pattern.
6 · Permissions are scoped to actual need
Every permission you declare maps to a feature the user can actually trigger and a justification you can write in one sentence. Over-broad permissions raise rejection risk and attack surface; both stores penalise apps that ask for more than they use.
Common failure: a third-party SDK added READ_CONTACTS or QUERY_ALL_PACKAGES to the manifest without your team noticing. The app is now treated as a contact harvester.
Done looks like: every permission justified, requested at runtime in the moment of use, and your manifest diff reviewed before every release. Read the Android dangerous permissions list for the ones reviewers care about.
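The SDK-added-permission failure can be caught and reversed at the manifest-merge step. A sketch, assuming the tools namespace (xmlns:tools="http://schemas.android.com/tools") is declared on the manifest element:

```xml
<!-- AndroidManifest.xml: declare only what a user-visible feature needs. -->
<uses-permission android:name="android.permission.CAMERA" />

<!-- Strip a permission an SDK merged in without your team noticing. -->
<uses-permission android:name="android.permission.READ_CONTACTS"
    tools:node="remove" />
```

Pair this with a CI step that diffs the merged manifest per release, so a new SDK permission fails the build instead of shipping silently.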
7 · Device attestation gates sensitive actions
High-value endpoints (payments, account changes, admin actions) check Play Integrity on Android and DeviceCheck on iOS server-side before they execute. Attestation is the cheapest way to filter out emulators, rooted devices, and bot traffic without alienating real users.
Common failure: attestation evaluated on-device only — an attacker just patches the result. Or rolled out but never validated server-side.
Done looks like: attestation tokens posted to your backend; server verifies the JWS, the nonce you issued, and the verdict. Failures degrade features or step up auth, not hard-block.
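The server-side half can be sketched as a verdict check. This assumes the Play Integrity token has already been decrypted and signature-verified (Google exposes a decodeIntegrityToken endpoint for that); the field names follow the Play Integrity verdict payload, and the nonce value is made up:

```python
# Server-side verdict check, run after token decryption/verification.
def accept_verdict(verdict: dict, expected_nonce: str) -> bool:
    # Nonce must match the one we issued for this request (anti-replay).
    nonce_ok = verdict.get("requestDetails", {}).get("nonce") == expected_nonce
    # Device must pass at least the baseline integrity verdict.
    device_ok = "MEETS_DEVICE_INTEGRITY" in verdict.get(
        "deviceIntegrity", {}).get("deviceRecognitionVerdict", [])
    return nonce_ok and device_ok

verdict = {
    "requestDetails": {"nonce": "abc123"},
    "deviceIntegrity": {"deviceRecognitionVerdict": ["MEETS_DEVICE_INTEGRITY"]},
}
print(accept_verdict(verdict, "abc123"))  # True
print(accept_verdict(verdict, "stale"))   # False — replayed or forged token
```

Note the failure path: a False here should route to step-up auth or degraded features, not a hard block, per the done-state above.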
8 · Anti-tampering raises the cost of repackaging
Release builds use R8/ProGuard with shrinking and obfuscation, symbols are stripped, and the app verifies its own signing certificate at runtime. None of this stops a determined reverse engineer — it raises the cost enough that opportunistic repackagers move on.
Common failure: R8 disabled because "it broke an SDK"; debug builds shipped with the wrong Gradle config; no signature check at app startup.
Done looks like: R8 minify + shrink + obfuscation enabled for release; iOS dSYMs stored separately; signature hash verified at app launch.
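The Android half of the done-state is a few lines of Gradle. A sketch of the release build type (Groovy DSL; your proguard-rules.pro contents will vary per SDK):

```groovy
// app/build.gradle — release build with R8 shrinking + obfuscation.
android {
    buildTypes {
        release {
            minifyEnabled true        // R8 obfuscation + code shrinking
            shrinkResources true      // drop unreferenced resources
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                          'proguard-rules.pro'
        }
    }
}
```

When an SDK breaks under R8, the fix is a targeted keep rule in proguard-rules.pro, not disabling minification globally.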
9 · Third-party SDKs are audited and pinned
Every SDK in your dependency tree has a known author, a current version, and a documented justification. Supply-chain compromise is the fastest-growing mobile attack vector — your app is only as trustworthy as the least careful SDK author you ship.
Common failure: an analytics SDK swapped maintainers and started collecting clipboard contents; a transitive dependency published a CVE you didn't notice for six months.
Done looks like: dependencies pinned to specific versions; an SBOM generated on every build; PTKD's SDK risk report reviewed at each release.
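Pinning in Gradle mostly means banning dynamic versions. A sketch — the coordinates are made up:

```groovy
// Pin exact versions; dynamic versions resolve to whatever is newest
// at build time, so a hijacked release ships without a diff in your repo.
dependencies {
    implementation 'com.example:analytics:2.4.1'   // pinned — reproducible
    // implementation 'com.example:analytics:2.+'  // avoid: floats silently
}
```

Lockfiles (Gradle dependency locking, or Package.resolved on iOS) extend the same guarantee to transitive dependencies, which is where the six-month-old CVE in the failure above hides.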
10 · Logs are structured, scrubbed, and monitored
Production logs contain no secrets, no full PII, and feed an alerting backend that pages someone when suspicious patterns spike. Most breaches are detected days or weeks late because nobody was watching the logs.
Common failure: bearer tokens logged on auth failures; full request bodies logged for debugging that never got removed; no alerting on auth-failure rate spikes.
Done looks like: log redaction at the source; structured fields you can query; alerts on auth-failure rate, attestation rejections, and integrity verdict spikes.
How to run this in CI
Once you've fixed the issues, wire the checks into CI so regressions never reach the store. PTKD's GitHub Action runs the same ten checks on every pull request and fails the build on high-severity findings. The CI/CD setup guide walks through the integration in under five minutes.
What this list deliberately doesn't cover
Three categories are intentionally absent: business-logic vulnerabilities, jailbreak detection, and certificate pinning. Business-logic flaws require human review; no scanner catches them. Jailbreak detection is a risk signal, not a control — score it, don't block on it. Certificate pinning is high-value for payment APIs but creates operational risk if your team can't manage rotation discipline, so it's a deliberate opt-in rather than a default.
Read the longer PTKD mobile app security guide for the strategic frame around all three.