
Your iOS App Secrets Are Never Really Secret

May 15, 2026
14 min read

Every secret you ship inside an iOS app is physically on a device you don't control. This post covers what counts as a "secret", the full threat model for leaks, and where Apple's built-in protections actually stop.

The moment your app lands on a user's device, you've lost physical control of it. That's not a flaw in the App Store model — it's just the nature of client-side software. Every bit of data you ship inside the binary is, in principle, readable by whoever holds the phone.

Most of the time, that's fine. But some of what you ship is sensitive. Multiple independent studies of Google Play apps find that roughly 40–43% contain at least one extractable secret (Alecci et al., 2025; Li et al., 2025; Wei et al., 2025) — and a 2025 CCS paper analysing both Android and iOS apps found iOS apps were more likely to expose secrets than Android. It's not a niche problem; roughly two in five published apps are affected.

What Counts as a Secret

In iOS development, a "secret" is anything embedded in your app that grants access or special rights:

  • API keys
  • OAuth tokens or refresh tokens
  • Private keys (for signing or encryption)
  • Third-party SDK credentials (Firebase, Stripe, analytics)
  • Internal endpoint URLs that shouldn't be public

These aren't user data — they're developer data: values you placed there intentionally and that are supposed to stay hidden.

Secrets live in different places depending on how carefully they were handled:

Hardcoded in source. The most common case. Something like let apiKey = "abc123" ends up baked directly into the binary as a readable string. Static analysis tools find these in seconds.

In resource files. Some teams put all credentials in a secrets.json or .plist bundled with the app. Slightly less obvious, but just as accessible — anyone who unpacks the .ipa archive sees the files directly.

Obfuscated or encrypted at rest. A step up. The value isn't a plain string anymore, but it still has to be decrypted at runtime, which means the decryption key — and the decrypted value — exist in memory at some point.

mermaid
flowchart LR
    A["Hardcoded\nlet apiKey = '...'"]
    B["Resource file\nsecrets.json / .plist"]
    C["Obfuscated\nor encrypted at rest"]
    A -->|harder to find| B -->|harder to find| C
    style A fill:#fca5a5,stroke:#ef4444,color:#1f1f1f
    style B fill:#fde68a,stroke:#f59e0b,color:#1f1f1f
    style C fill:#bbf7d0,stroke:#22c55e,color:#1f1f1f
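To make that last point concrete, here's a minimal sketch of the XOR-style obfuscation commonly used for the third tier (the byte values and mask are illustrative, not a real credential). The plaintext never appears in the binary, so a strings dump won't find it; but the moment the property is read, it exists in memory as an ordinary String.

swift
import Foundation

// Sketch of "obfuscated at rest": the key is stored XOR-ed with a mask so it
// doesn't show up as a readable string inside the binary.
enum ObfuscatedSecrets {
    // Illustrative bytes ("abc123" XOR 0x5A), not a real credential.
    private static let masked: [UInt8] = [0x3B, 0x38, 0x39, 0x6B, 0x68, 0x69]
    private static let mask: UInt8 = 0x5A

    static var apiKey: String {
        // Reading this property materialises the plaintext in process memory,
        // exactly where a debugger or memory scanner can see it.
        let bytes = masked.map { $0 ^ mask }
        return String(decoding: bytes, as: UTF8.self)
    }
}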

The third approach is meaningfully harder to attack than the first two. But "harder" isn't the same as "safe", and understanding why requires thinking about what attackers are actually trying to do.

What Can Actually Go Wrong

Here's what a secrets leak looks like in practice.

API bill explosion. An attacker extracts a maps or weather API key and fires off millions of requests. You get the bill. Your legitimate users get rate-limited or blocked.

Account takeover without a password. A leaked OAuth access token or refresh token lets an attacker authenticate as a real user. No phishing required, no brute force, no social engineering — just replay the token.

Bypassing paid features. An attacker finds the key that gates premium content and builds a bypass. Every technically capable user who finds their work on GitHub stops paying.

Fake push notifications. A stolen FCM or APNs key means an attacker can send push notifications that appear to come from you. Phishing via your own app's notification channel.

Payment fraud. A payment provider key in the wrong hands can initiate unauthorised transactions or reverse legitimate ones.

Third-party integration damage. Analytics keys, logging keys, crash reporting keys — these often have write access to systems your whole team relies on. A compromised analytics key can flood your dashboards with garbage data.

Regulatory and legal exposure. In healthcare, banking, or fintech, a secrets leak can trigger mandatory breach disclosure, regulatory investigation, and significant fines. The technical cost is often the smallest part.

The common thread: your key becomes their key. They can impersonate your app, your users, or your infrastructure — for as long as the key is valid.

Classifying the Threats

Microsoft's STRIDE model gives a useful framework for thinking about this. It defines six threat categories, and three are directly relevant to secrets in mobile apps.

Information Disclosure — the obvious one. An attacker reads something they shouldn't: a key from the binary, a token from memory, a credential from a resource file. This is the "secrets leak" threat in its most direct form.

Tampering — less obvious, but related. An attacker doesn't just read your secrets — they modify them. They patch the binary to swap your API key for theirs, or change a stored token to one they control. The integrity of embedded data matters as much as its confidentiality.

Elevation of Privilege — the enabler for everything else. Before an attacker can do either of the above, they need elevated access: root on a jailbroken device, the ability to attach a debugger, the ability to inject code into your process. These attacks don't steal secrets directly — they remove the barriers that would otherwise prevent it.

These three form a chain. An attacker gains elevated access to the device. That gives them the ability to tamper with the running app — patching functions, hooking calls, reading memory. From there, extracting secrets is mechanical. Each step unlocks the next. The practical implication is that you can't just defend against extraction; you need to think about what enables extraction in the first place.

mermaid
flowchart LR
    A["Elevation of Privilege\nJailbreak · root access\ndebugger attach"]
    B["Tampering\nPatch binary · hook functions\nread memory"]
    C["Information Disclosure\nExtract secrets\nfrom binary or memory"]
    A --> B --> C
    style A fill:#fca5a5,stroke:#ef4444,color:#1f1f1f
    style B fill:#fde68a,stroke:#f59e0b,color:#1f1f1f
    style C fill:#bbf7d0,stroke:#22c55e,color:#1f1f1f

What Apple Gives You

Apple ships several mechanisms that make attacking iOS apps harder. The specifics matter for understanding where the gaps are.

Sandboxing

Every iOS app runs in its own sandbox — an isolated environment with a randomly assigned home directory. It can't read other apps' files, access their memory, or write to system directories. The OS filesystem is mounted read-only for user apps.

This prevents one app from simply reading another app's secrets — the most straightforward cross-app data theft scenario is blocked by default.
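You can see the sandbox from inside your own app. A small sketch (the foreign container path below is made up for illustration):

swift
import Foundation

// Every app gets its own randomly named container, e.g.
// /var/mobile/Containers/Data/Application/<UUID>
print(NSHomeDirectory())

// A path inside some other app's container (illustrative UUID) is simply
// not readable from this process.
let foreignPath = "/var/mobile/Containers/Data/Application/OTHER-APP-UUID/Documents/secrets.json"
print(FileManager.default.isReadableFile(atPath: foreignPath)) // false under the sandbox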

Code Signing

Before any binary runs on iOS, the OS verifies its cryptographic signature. If the binary has been modified after signing — even by one byte — the signature check fails and the app is killed at launch.

This means an attacker can't trivially patch your binary and distribute it to other users without going through a re-signing process, which requires either a developer certificate or a jailbreak.

ASLR (Address Space Layout Randomisation)

Every time your app launches, its code, libraries, and stack land at randomised memory addresses. This matters because many memory-corruption exploits depend on knowing where things are in memory — a buffer overflow that wants to redirect execution to a specific function needs that function's address to be predictable. ASLR makes that address different on every run.

It's not a complete defence — an attacker who can read an address from memory can use it to calculate where everything else landed. But it raises the bar substantially over a world where every binary loads at the same address every time.
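The randomisation is easy to observe. A short sketch that prints the main executable's ASLR slide, which comes out different on every launch:

swift
import Foundation
import MachO

// The slide is the random offset added to the binary's preferred load address.
// Image index 0 is normally the main executable.
let slide = _dyld_get_image_vmaddr_slide(0)
print(String(format: "ASLR slide: 0x%lx", slide))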

Execute Never (ARM)

On ARM, memory pages are either executable or writable — never both simultaneously. A page that holds your app's data can't be jumped to as code. A page that holds your app's code can't be written to at runtime.

The practical effect: an attacker can't simply write shellcode into a data buffer and execute it. Injecting arbitrary code is structurally blocked at the hardware level. iOS carves out a narrow exception for JIT compilers (JavaScript engines need it), but that requires an explicit Apple entitlement. A normal app doesn't have it.
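A rough way to see the policy in action, assuming a stock device and an app without the JIT entitlement: asking the kernel to make a writable anonymous page executable should be refused.

swift
import Darwin

// Map an ordinary read-write page...
let pageSize = Int(getpagesize())
let page = mmap(nil, pageSize, PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0)

// ...then ask for it to become executable. Without the JIT entitlement this
// is expected to fail (returns -1).
let result = mprotect(page, pageSize, PROT_READ | PROT_EXEC)
print("mprotect returned \(result), errno \(errno)")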

Keychain and Secure Enclave

The Keychain is the right place to store credentials that arrive at runtime — user passwords, session tokens, private keys generated on-device. It's encrypted with a key derived from the user's passcode and, on devices that support it, protected by the Secure Enclave — a separate processor that never exposes the raw key material to the main CPU.

The important distinction: Keychain is designed for dynamic secrets that appear during app use, not for static developer secrets baked in at build time. Putting your API key in the Keychain at first launch just means it travels from your binary to the Keychain on the first run — it was still in the binary.
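For the dynamic case the API is straightforward. A minimal sketch that stores a token received at runtime (the service and account names are illustrative):

swift
import Foundation
import Security

func storeSessionToken(_ token: String) -> Bool {
    let base: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.app.session",   // illustrative
        kSecAttrAccount as String: "sessionToken"                // illustrative
    ]
    // Replace any previously stored value.
    SecItemDelete(base as CFDictionary)

    var add = base
    add[kSecValueData as String] = Data(token.utf8)
    // Never synced off this device, only readable while it is unlocked.
    add[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlockedThisDeviceOnly
    return SecItemAdd(add as CFDictionary, nil) == errSecSuccess
}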

ATS and SSL Pinning

App Transport Security (ATS) requires HTTPS for all network connections by default and blocks old, vulnerable TLS configurations. This protects traffic in transit from passive interception.

SSL pinning goes further: your app hardcodes a specific certificate or public key and refuses to connect if the server presents anything different. This defends against active man-in-the-middle attacks, even from attackers who have installed a trusted CA certificate on the target device.
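A minimal pinning sketch using a URLSession delegate, assuming a copy of the server's certificate is bundled as pinned.der (a real implementation would also evaluate the trust chain and plan for certificate rotation):

swift
import Foundation
import Security

final class PinningDelegate: NSObject, URLSessionDelegate {
    // Hypothetical bundled copy of the server's certificate.
    private let pinnedCertData: Data? = Bundle.main
        .url(forResource: "pinned", withExtension: "der")
        .flatMap { try? Data(contentsOf: $0) }

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              let leaf = (SecTrustCopyCertificateChain(trust) as? [SecCertificate])?.first, // iOS 15+
              let pinned = pinnedCertData,
              SecCertificateCopyData(leaf) as Data == pinned
        else {
            // Anything other than the expected certificate: refuse to connect.
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        completionHandler(.useCredential, URLCredential(trust: trust))
    }
}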

The Summary

| Mechanism | Protects against | Limitation |
| --- | --- | --- |
| Sandboxing | Cross-app data access | Bypassed entirely with a jailbreak |
| Code Signing | Binary tampering | Can be re-signed on jailbroken devices |
| ASLR | Fixed-address memory exploits | Leaked memory pointers can defeat it |
| Execute Never | Shellcode injection | JIT carve-out exists |
| Keychain | Credential storage | Not designed for static build-time secrets |
| ATS / SSL Pinning | Network interception | Pinning can be disabled with a jailbreak + Frida |

Where Apple's Protections Stop

All of these mechanisms share one critical assumption: the device is running an unmodified OS with no jailbreak.

On a jailbroken device:

  • The sandbox is gone — any process can read any other process's files
  • Code signing is bypassable — modified binaries can be re-signed with ldid and run
  • Root access is available — debuggers attach to anything
  • System daemons can be patched — the process that enforces App Store verification can be disabled

Jailbreaking isn't as widespread as it was in 2012, but it hasn't gone away. Public jailbreaks exist for devices running iOS 14 and earlier, and a significant number of users are still on older hardware and older software. More importantly, a motivated attacker — a security researcher, a competitor, someone building a crack — does this on a dedicated test device. They're not jailbreaking their daily driver.

The second limit: Apple's protections say nothing about the content of your binary. A hardcoded let apiKey = "abc123" is protected from tampering by code signing. But the value is still sitting there in plaintext. Code signing guarantees nobody modified your binary; it says nothing about what's inside it.

The third limit applies even if you obfuscate or encrypt secrets at rest, and it's the most important one: secrets must be decrypted to be used. The moment your app calls a function with a secret as an argument, that value exists in process memory in plaintext. A debugger or memory scanner doesn't care how carefully you stored the value on disk. It reads memory. This is why no client-side protection strategy is complete — only the architecture of keeping secrets off the device entirely avoids it.

What This Means Practically

The goal isn't perfect security — that doesn't exist on the client side. The goal is to raise the cost of an attack high enough that it's not worth the effort, or at least not worth attempting at scale.

A few principles worth internalising before the practical techniques in the next posts:

Minimise what's on the client. The safest secret is one that never ships in the binary at all. Anything that can stay on the server, should. That said, some secrets genuinely can't: certain SDK keys must be present at build time, some third-party frameworks require a client-side credential to initialise. Acknowledge which category each secret falls into before deciding how to protect it.

Assume the binary is readable. Design your threat model around the assumption that a motivated attacker can extract any static value in your app. If the damage from exposure is high, the value doesn't belong in the binary.

Prefer short-lived over long-lived secrets. A token that expires in 15 minutes is vastly less dangerous than one that's valid indefinitely. The attack window shrinks proportionally.

Secrets that can be rotated should be — without a release. If a key is compromised, you need to invalidate it immediately, not after a two-week App Store review. This means server-side secret management: the client fetches credentials dynamically rather than having them baked in at build time.
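As a sketch of what that pattern looks like, assuming a hypothetical backend endpoint that issues short-lived credentials to authenticated sessions:

swift
import Foundation

struct EphemeralCredential: Decodable {
    let token: String
    let expiresAt: Date
}

// Hypothetical endpoint: the server issues a credential that expires quickly
// and can be revoked or rotated without an app release.
func fetchCredential(session: URLSession = .shared) async throws -> EphemeralCredential {
    var request = URLRequest(url: URL(string: "https://api.example.com/v1/client-credentials")!)
    request.httpMethod = "POST"
    // Authenticate the user's session here rather than with a baked-in app secret.

    let (data, _) = try await session.data(for: request)
    let decoder = JSONDecoder()
    decoder.dateDecodingStrategy = .iso8601
    // Keep the result in memory (or the Keychain) and refresh it on expiry.
    return try decoder.decode(EphemeralCredential.self, from: data)
}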

The next post will go through what an attacker actually does step by step — the toolchain, the sequence, and how each of Apple's protections gets addressed along the way.
