
    Does Apple App Review check for hardcoded AWS S3 keys?

    Apple App Review does not scan iOS app binaries for hardcoded AWS S3 keys

    You have an AWS-backed iOS app, an AWS access key embedded somewhere in the build, and a submission window closing this week. You want to know whether Apple's review will catch it or whether you are about to ship a credential leak into the App Store. The honest answer is that the credential is not Apple's job to find.

    Short answer

    Apple App Review does not systematically scan IPA binaries for hardcoded AWS S3 keys, and apps with embedded AWS credentials are approved and shipped every week. According to the published App Review Guidelines, the review process focuses on policy compliance (data handling, privacy disclosures, prohibited content) rather than security analysis of the compiled binary. The check has to happen on your side before submission, using the same tools an attacker would point at the IPA after it ships.

    What you should know

    • App Review is policy review, not security review. The two have different goals and different staffing.
    • The strings command on an IPA reveals hardcoded credentials in seconds. Anyone with unzip and strings can run the same check Apple does not.
    • Apple's Privacy Manifest declares SDK behaviour, not configuration. A correct Privacy Manifest still passes review with a credential-leaking SDK configuration.
    • AWS keys follow predictable prefixes: AKIA for long-lived access keys, ASIA for session tokens. Any string in the binary matching either is a near-certain finding.
    • The fix is server-side credentials, not obfuscation. Hiding a key in the binary buys minutes, not protection.
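
    The prefix patterns above translate directly into a grep. A minimal sketch against a throwaway file, using AWS's documented example key ID rather than a live credential:

    ```shell
    # Fake key for illustration: AKIA plus 16 chars of [A-Z0-9] matches
    # the shape of a real long-lived access key ID.
    printf 'noise AKIAIOSFODNN7EXAMPLE more noise\n' > sample.txt

    # -a treats binary files as text, -o prints only the matching substring
    grep -aoE '(AKIA|ASIA)[A-Z0-9]{16}' sample.txt
    # prints: AKIAIOSFODNN7EXAMPLE
    ```

    The same regex works unchanged against a Mach-O executable, which is the point: the key's shape gives it away regardless of what file it lives in.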

    What does Apple's review actually look for?

    Apple's App Review Guidelines describe the categories the review team checks: safety (objectionable content), performance (crashes, broken functionality), business (in-app purchase compliance), design (HIG conformance), and legal (privacy disclosures, regional restrictions). Per the App Review section of developer.apple.com, each submission passes through automated checks followed by a human reviewer.

    The automated layer reportedly catches private API usage, prohibited frameworks, missing entitlements for declared capabilities, and obvious policy violations like reused screenshots or undisclosed in-app purchase flows. None of those checks involve scanning binary strings for credential patterns.

    The human reviewer follows a use-the-app protocol: open the build on an iPhone or iPad, run through the main user flows, check that the privacy policy URL works, confirm the App Tracking Transparency prompt appears when expected, and verify the metadata matches the actual functionality. The protocol does not include decompiling the binary or running strings against the executable.

    Why does Apple's review miss hardcoded keys?

    Three structural reasons:

    First, the review surface is enormous. Apple reviews on the order of 100,000 submissions per week, and the team optimises for the violations it can find quickly. Credential scanning across the full bundle (including embedded frameworks) is computationally expensive per app and produces false positives (some valid public keys look like credentials), which is a poor fit for queue throughput.

    Second, hardcoded credentials are not a guideline violation by themselves. Apple's stance, consistently maintained in the App Review Guidelines section 5.1.1, is that developers are responsible for protecting user data they collect. A hardcoded AWS key is a developer choice with security consequences, not a violation Apple is positioned to police.

    Third, the credential exposure does not affect users of the app in a way the reviewer can observe. The app functions normally; the AWS key works; the data flows. The exposure becomes visible only when a third party extracts the credentials and uses them against the developer's AWS account. By that point, Apple is several steps removed from the chain.

    How easy is it for an attacker to find them?

    A decrypted IPA is a ZIP archive containing a Payload/<App>.app bundle with the compiled Mach-O executable inside. The attack reduces to four commands on any Mac or Linux machine:

    unzip MyApp.ipa
    cd Payload/MyApp.app
    strings MyApp | grep -E 'AKIA|ASIA|aws_'
    strings MyApp | grep -E 'sk_(live|test)_'   # for Stripe keys, same pattern
    

    The strings utility extracts all printable strings from the binary. AWS access keys start with AKIA (long-lived) or ASIA (temporary), both of which appear in plaintext in any Mach-O that includes a literal credential. The Spaceraccoon writeup on hunting credentials in iOS apps walks through the same workflow against real App Store apps and finds hits with regularity.

    Obfuscation does not change the outcome. A key that is XOR'd against a constant or base64-encoded still has to be decoded at runtime, and a dynamic instrumentation tool like Frida can read the decoded value out of memory the moment the app uses it.
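
    Encoding only moves the needle by one command. A sketch using AWS's documented example key ID (not a live credential) shows why base64 is not protection:

    ```shell
    # Encode a fake key the way a naive obfuscation step might
    encoded=$(printf 'AKIAIOSFODNN7EXAMPLE' | base64)
    echo "$encoded"
    # prints: QUtJQUlPU0ZPRE5ON0VYQU1QTEU=

    # The attacker decodes base64-shaped strings and re-runs the same grep
    printf '%s\n' "$encoded" | base64 -d | grep -aoE '(AKIA|ASIA)[A-Z0-9]{16}'
    # prints: AKIAIOSFODNN7EXAMPLE
    ```

    The encoded form even has its own recognisable shape (a base64 alphabet run ending in padding), so it does not hide the key from a determined strings pass either.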

    What is the correct architecture for using S3 from a mobile app?

    Three patterns, in order of how much they reduce exposure:

    • Pre-signed URLs from your backend. What changes: the client never sees an AWS key; the backend signs a short-lived URL scoped to one bucket and one object. What still has to be right: backend authentication, request rate-limiting, URL TTL (typically 5 to 15 minutes).
    • Amazon Cognito Identity Pools. What changes: the client gets temporary AWS credentials scoped to one IAM role per user, and the credentials rotate automatically. What still has to be right: IAM role permissions scoped to least privilege; the role policy is the new attack surface.
    • AWS STS AssumeRoleWithWebIdentity. What changes: the client exchanges a Supabase or Auth0 JWT for AWS credentials, scoped by JWT claims. What still has to be right: the identity provider's JWT verification; the role trust policy.

    All three move the AWS credentials out of the client. The pre-signed URL pattern is the simplest for upload-and-download apps. Cognito Identity Pools or STS AssumeRoleWithWebIdentity are necessary when the client needs to make multiple AWS calls per session.
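
    For the pre-signed URL pattern, the signing happens entirely server-side; with the AWS CLI it is one command. The command below is commented out because it needs live credentials, and the bucket name and URL are an illustrative shape, not real output:

    ```shell
    # On the backend, with real credentials configured:
    #   aws s3 presign s3://my-app-uploads/user42/photo.jpg --expires-in 600

    # A pre-signed URL carries a temporary signature and expiry in its
    # query string, never your secret key. Illustrative shape:
    url='https://my-app-uploads.s3.amazonaws.com/user42/photo.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAEXAMPLE%2F20250101%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Expires=600&X-Amz-Signature=abc123'

    # The client can only do what the signature allows, as long as it allows:
    echo "$url" | grep -oE 'X-Amz-Expires=[0-9]+'
    # prints: X-Amz-Expires=600
    ```

    The URL is useless once the TTL lapses, which is why a leaked pre-signed URL is a bounded incident where a leaked access key is not.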

    How do you scan your own IPA before submission?

    Four checks, in order of how quickly they confirm exposure:

    1. Run strings against the compiled executable and grep for AKIA, ASIA, aws_, and sk_live_. Any hit is a finding.
    2. Run otool -L against the executable to list linked frameworks. Look for AWS SDK frameworks; the SDK presence does not prove a credential leak but flags the surface to audit.
    3. Dump each plist with plutil -p Info.plist and search for any value that looks credential-shaped. Some apps store credentials in Info.plist by mistake.
    4. Run a third-party static scanner. PTKD.com (https://ptkd.com) is one of the platforms that runs the above checks automatically against an uploaded IPA, plus pattern-matching for the secrets of dozens of other providers (Stripe, SendGrid, Twilio, OpenAI). The report maps each finding to the relevant OWASP MASVS control.

    The MASVS-STORAGE-1 control covers exactly this case: "The app does not store any sensitive data in publicly accessible locations." A compiled binary in an IPA is publicly accessible once the app ships.

    What to watch out for

    Three details that get missed during the pre-submission rush.

    First, AWS keys can hide in embedded frameworks, not just the main executable. If the app links a third-party SDK that internally bundles AWS credentials (for instance, an analytics SDK that uploads to S3), the keys live in the SDK's binary, not yours. The strings check has to run against every Mach-O in the bundle, including everything under Frameworks/.
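
    That bundle-wide sweep is a one-liner. A sketch against a simulated bundle (the framework path and key are made up for illustration):

    ```shell
    # Simulate an unzipped IPA with a credential hidden in a vendored SDK
    mkdir -p Payload/MyApp.app/Frameworks/Analytics.framework
    printf 'junk AKIAIOSFODNN7EXAMPLE junk' > Payload/MyApp.app/Frameworks/Analytics.framework/Analytics

    # -r walks the whole bundle, -a treats Mach-O binaries as text,
    # -l prints only the offending file paths
    grep -rlaE '(AKIA|ASIA)[A-Z0-9]{16}|sk_(live|test)_[0-9a-zA-Z]+' Payload
    # prints: Payload/MyApp.app/Frameworks/Analytics.framework/Analytics
    ```

    Running this in CI against every build, and failing the build on any hit, catches the reintroduced key before it reaches a submission.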

    Second, the Info.plist and asset catalogs are scanned too. Some apps store credentials in plist files for "configurability," which means the credential is in plaintext XML inside the bundle. The check has to include find Payload -name '*.plist' -exec plutil -p {} \;.
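
    plutil ships only with macOS, but XML-format plists can be grepped on any platform; binary plists still need plutil -convert xml1 first. A sketch against a made-up plist:

    ```shell
    # A made-up plist carrying a credential in plaintext XML
    mkdir -p Payload/MyApp.app
    cat > Payload/MyApp.app/Config.plist <<'EOF'
    <?xml version="1.0" encoding="UTF-8"?>
    <plist version="1.0"><dict>
      <key>AWSAccessKey</key>
      <string>AKIAIOSFODNN7EXAMPLE</string>
    </dict></plist>
    EOF

    # List any plist in the bundle carrying a credential-shaped value
    grep -rl --include='*.plist' -E '(AKIA|ASIA)[A-Z0-9]{16}' Payload
    # prints: Payload/MyApp.app/Config.plist
    ```

    A key in a plist is strictly worse than one in the binary: it survives in readable XML with no strings pass needed at all.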

    Third, Privacy Manifest declarations do not unmake the leak. A correctly-declared Privacy Manifest for an AWS SDK passes Apple's review while still containing a hardcoded key. The two checks happen at different layers and serve different purposes.

    Key takeaways

    • Apple App Review is policy review, not security review. Hardcoded AWS keys are not on Apple's published rejection list.
    • The same strings command an attacker runs takes you ten seconds. Run it before submission.
    • The architectural fix is pre-signed URLs, Cognito Identity Pools, or STS AssumeRoleWithWebIdentity. Pick one based on how the app uses AWS.
    • PTKD.com (https://ptkd.com) scans an uploaded IPA for AWS, Stripe, Twilio, and other credential patterns and maps findings to MASVS controls.
    • Document the credential audit in your CHANGELOG so the next AI-generated change does not silently reintroduce the key into a file you already cleaned.
    • #app-store
    • #ios
    • #aws
    • #hardcoded-keys
    • #static-analysis
    • #ipa

    Frequently asked questions

    Has Apple ever rejected an app specifically for hardcoded AWS keys?
    Not as a documented pattern. The App Review Guidelines section 5.1 covers data handling and section 2.5 covers technical implementation, but neither lists 'hardcoded credentials' as a rejection cause. Rejections that mention credentials usually flag something else (missing privacy disclosure, undocumented SDK behaviour) and the credential leak is incidental, not the trigger.
    Do iOS App Transport Security checks include credential scanning?
    No. App Transport Security (ATS) checks the network configuration: TLS versions, certificate validation, cleartext traffic exceptions in Info.plist. It does not inspect string contents in the binary. An app with strict ATS settings can still ship with AWS keys hardcoded in the Mach-O executable.
    Will Apple's automated review tools detect just one specific key type by pattern?
    Apple does not publish what their automated layer scans for. Developer reports suggest the automated checks focus on private API usage, prohibited frameworks, and missing entitlements. AWS-style keys (starting with AKIA or ASIA) are not on any published list of patterns Apple flags.
    What about the Privacy Manifest declaration for the AWS SDK?
    Privacy Manifest declares which categories of user data an SDK collects, not whether the SDK is configured securely. A correctly-declared Privacy Manifest for AWSMobileClient still lets the app ship with hardcoded credentials if the developer pasted them into Swift code.
    Do Apple's dynamic checks catch what static checks miss?
    Apple does perform dynamic execution during review (the reviewer opens the app on a real device), but that path tests user flows, not credential exposure. A reviewer would not notice an AWS key being used unless the app crashed or made an obviously malicious request to a non-Apple service. The credential leak surfaces later, when the app is in the wild and an attacker decompiles the binary.

    Scan your app in minutes

    Upload an APK, AAB, or IPA. PTKD returns an OWASP-aligned report with copy-paste fixes.

    Try PTKD free