
Method Dispatch in Swift

Feb 21, 2026

Swift implements four types of method dispatch: inlining, static, table, and message. Understanding which one applies and when explains some of Swift's most confusing behaviour — and helps you write faster code.

Every time you call a function in Swift, the runtime has to figure out which function to actually execute. That process is called method dispatch — and Swift implements four different approaches, each with different speed and flexibility tradeoffs.

Most languages only use one or two dispatch styles. Swift supports all four. That gives you a lot of power, but it can also make you ask, "wait, why did it call that one?"

The Four Flavours

Think of method dispatch as a trade-off: the faster options are more rigid, and the more flexible options are usually slower. The difference boils down to indirection — how many pointer hops the CPU has to make before it can start executing your function.

| Type     | Indirection | When                                                    |
| -------- | ----------- | ------------------------------------------------------- |
| Inlining | 0 jumps     | Compiler replaces the call site with the function body  |
| Static   | 1 jump      | Address known at compile time                           |
| Table    | 2 jumps     | Runtime lookup via vtable or witness table              |
| Message  | N jumps     | Objective-C runtime traversal                           |

Inlining and Precomputing

Inlining is not really dispatch at all. The compiler replaces the call site with the body of the function, so no jump happens at runtime.

```swift
func addOne(to num: Int) -> Int {
    return num + 1
}

let result = addOne(to: 2)  // compiled as: let result = 2 + 1
```

If the inputs are known at compile time, the compiler goes further and precomputes the result entirely:

```swift
let result = addOne(to: 2)  // compiled as: let result = 3
```

No function call. No jump. Just a constant baked into the binary.

You don't control this directly — the compiler decides during its optimisation passes. Build with -O (optimise for speed) to encourage it. -Osize makes the compiler more conservative, because inlining copies the function's machine code to every call site: a function called in 50 places contributes 50 copies to the compiled binary, growing its size.

Why is inlining faster beyond the obvious "one less function call"? Three reasons:

  • Function call overhead — the CPU has to save registers, update the instruction pointer, then restore state after the call returns.
  • Cache misses — code that isn't inline may need to be fetched from RAM into CPU caches when called, which is ~100x slower than hitting the L1 cache.
  • Branch prediction disruption — the CPU speculatively executes ahead of the current instruction. Unpredictable jumps to function calls blow up that speculation.
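That said, Swift does expose attributes that nudge the inliner. A minimal sketch — note that @inline(__always) is an underscored, officially unsupported attribute, so treat it as a hint rather than a guarantee:

```swift
// Hint: inline this at every call site when optimisation is on.
@inline(__always)
func addOne(to num: Int) -> Int { num + 1 }

// Hint: never inline, e.g. to keep a stable symbol for profiling.
@inline(never)
func traced(_ n: Int) -> Int { n }

// @inlinable publishes the body in the module interface, so clients
// in other modules can inline it even without cross-module optimisation.
@inlinable
public func double(_ n: Int) -> Int { n * 2 }

print(addOne(to: 2))  // 3
print(double(5))      // 10
```

These only shift the compiler's cost model; the decision to inline remains its own.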

Static Dispatch

Also called direct dispatch or compile-time dispatch. The compiler knows exactly where in memory the function lives, so it emits a single direct jump to that address.

struct and enum methods always use static dispatch — value types can't be subclassed, so their implementations are fixed at compile time. The Swift compiler loves this because it can collapse entire chains of statically-dispatched calls into a single inlined block.

```swift
struct Multiplier {
    func double(_ n: Int) -> Int { n * 2 }
}

let result = Multiplier().double(5)  // direct call — one jump, possibly inlined
```

static functions and methods on types where the compiler can prove no override exists also use static dispatch. Mark a class final and all its non-dynamic methods become statically dispatched.
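For illustration, a sketch of final at both granularities (the type names here are made up for the example):

```swift
// final on the class: no subclass can exist, so every method
// is eligible for static dispatch (and inlining).
final class Greeter {
    func greet(_ name: String) -> String { "Hello, \(name)" }
}

class Base {
    final func id() -> Int { 42 }     // final on one method only
    func overridable() -> Int { 1 }   // still vtable-dispatched
}

print(Greeter().greet("Swift"))  // Hello, Swift — direct call, no vtable
```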

Table Dispatch

This is where things get interesting — and where most Swift gotchas live.

Virtual Tables (Classes)

When you subclass a class, Swift can't know at compile time which subclass's method will be called. The answer depends on what object you actually create at runtime. So instead of a direct address, the compiler emits a lookup into a virtual table (vtable) — a table of function pointers attached to each type's metadata in the binary.

```swift
class Animal {
    func speak() { print("...") }
}

class Dog: Animal {
    override func speak() { print("Woof") }
}

let animal: Animal = Dog()
animal.speak()  // vtable lookup at runtime → Dog.speak()
```

The Animal vtable maps speak() to Animal.speak(). The Dog vtable maps it to the overridden Dog.speak(). At runtime: jump to vtable, look up the pointer, jump to the function. Two jumps.
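As a mental model only — this is not the real runtime layout — the mechanism can be sketched as a table of function pointers that each instance carries a reference to:

```swift
// Conceptual model of vtable dispatch: one table of function
// pointers per class, consulted at runtime.
struct VTable {
    let speak: () -> String
}

let animalVTable = VTable(speak: { "..." })
let dogVTable    = VTable(speak: { "Woof" })

struct Instance {
    let vtable: VTable            // jump 1: reach the type's table
    func speak() -> String {
        vtable.speak()            // jump 2: call through the pointer
    }
}

let animal = Instance(vtable: dogVTable)  // dynamic type "Dog"
print(animal.speak())  // Woof — resolved through the table at runtime
```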

Marking a class final tells the compiler no subclass exists, so it promotes the vtable lookup back to a static (or inlined) call. Same effect with private — the compiler can see the whole file and verify no override exists.

Protocol Witness Tables

Protocols bring a similar mechanism to value types. Because a function accepting any Drivable might receive a Car, a Truck, or anything else that conforms, there has to be a runtime lookup. Swift handles this with witness tables — one per concrete conformance — stored alongside the value in an existential container.

```swift
protocol Drivable { func drive() }

struct Car: Drivable {
    func drive() { print("Vroom") }
}

let vehicle: any Drivable = Car()
vehicle.drive()  // witness table lookup → Car.drive()
```

The key thing: this only kicks in when you're using an abstract protocol type. If the compiler knows the concrete type — because you declared it explicitly, or WMO is on — it skips the table and dispatches directly.
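A sketch contrasting the two (the function names are invented for this example): the existential parameter forces a witness-table call, while the generic version can be specialised per concrete type and dispatched directly.

```swift
protocol Drivable { func drive() -> String }
struct Car: Drivable { func drive() -> String { "Vroom" } }

// Existential parameter: the value is boxed, and drive() goes
// through the witness table on every call.
func testDrive(_ vehicle: any Drivable) -> String {
    vehicle.drive()
}

// Generic parameter: the compiler can specialise this for Car,
// turning the call into static dispatch (and possibly inlining it).
func testDriveGeneric<V: Drivable>(_ vehicle: V) -> String {
    vehicle.drive()
}

print(testDrive(Car()))         // Vroom — witness-table dispatch
print(testDriveGeneric(Car()))  // Vroom — specialised, static dispatch
```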

The Protocol Gotcha

This is the one that bites most mid-level Swift developers:

```swift
protocol Animal {
    func cry() -> String
}

extension Animal {
    func cry() -> String { "..." }
    func sayHello() -> String { "Hello" }
}

class Cat: Animal {
    func cry() -> String { "Meow" }
    func sayHello() -> String { "Purr" }
}
```

```swift
var a: Animal = Cat()
a.cry()       // → "Meow" — witness table dispatch to Cat.cry()
a.sayHello()  // → "Hello" — static dispatch to the protocol extension

var c: Cat = Cat()
c.cry()       // → "Meow" — dispatched on the concrete Cat type
c.sayHello()  // → "Purr" — dispatched on the concrete Cat type
```

cry() is a protocol requirement, so it always goes through the witness table — the runtime needs consistent behaviour across all conforming types. sayHello() is only defined in the extension and is not a requirement, so it's dispatched statically to the extension's implementation. That's why calling sayHello() on the Animal-typed variable completely ignores Cat's own override.

This isn't a bug. It's how dispatch is designed to work. But it surprises almost everyone the first time.
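If you want Cat's implementation to win through the protocol type, the fix is to make sayHello() a protocol requirement, so it gets a witness table entry of its own. A sketch:

```swift
protocol Animal {
    func cry() -> String
    func sayHello() -> String   // now a requirement → witness table
}

extension Animal {
    func cry() -> String { "..." }
    func sayHello() -> String { "Hello" }   // becomes a default implementation
}

class Cat: Animal {
    func cry() -> String { "Meow" }
    func sayHello() -> String { "Purr" }
}

let a: any Animal = Cat()
print(a.sayHello())  // Purr — the witness table now finds Cat's version
```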

Message Dispatch

Message dispatch lives in the Objective-C runtime and is the slowest approach — but also the most flexible. Method implementations can be swapped at runtime (swizzling), and the runtime traverses the class hierarchy looking for the right selector if it isn't cached yet.

To opt into it from Swift:

```swift
class FeatureToggle: NSObject {
    @objc dynamic var isEnabled: Bool = false
}
```

@objc exposes the property to the ObjC runtime. dynamic forces objc_msgSend to be used instead of a vtable lookup.

In practice you need this for KVO, Realm model properties, some UIKit delegate patterns, and anything using method swizzling. The ObjC runtime caches method lookups after first use, so repeated calls warm up and become roughly table-dispatch speed — but you permanently lose compiler optimisations like inlining on those methods.
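As a quick illustration of why KVO depends on message dispatch, a sketch using the key-path observation API — the runtime can only inject its notifying setter because isEnabled goes through objc_msgSend (behaviour assumes a platform with full KVO support):

```swift
import Foundation

class FeatureToggle: NSObject {
    @objc dynamic var isEnabled: Bool = false
}

let toggle = FeatureToggle()
var changes: [Bool] = []

// KVO swizzles in a notifying setter at runtime; without message
// dispatch on isEnabled, there would be nothing to intercept.
let observation = toggle.observe(\.isEnabled, options: [.new]) { _, change in
    if let newValue = change.newValue { changes.append(newValue) }
}

toggle.isEnabled = true
observation.invalidate()
print(changes)
```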

Making Your Code Faster

Whole Module Optimisation (the default for release builds in Xcode) already handles a lot of this. The compiler sees every file in a module at once, can verify that an internal class has no subclasses, and promotes those methods to static dispatch automatically — no final required.

Explicit things that still matter:

  • final — removes vtable dispatch for a class or individual method. Signals intent clearly.
  • private / fileprivate — gives the compiler visibility to check for overrides in scope. If none exist, it infers final automatically.
  • Concrete types over protocol types — prefer Cat over any Animal when the concrete type is known. Witness table overhead goes away.
  • Cross-module optimisation — enables devirtualisation and inlining across module boundaries in release builds:
```swift
// Package.swift
.target(
    name: "MyModule",
    swiftSettings: [
        .unsafeFlags(["-cross-module-optimization"], .when(configuration: .release))
    ]
)
```

The actual cost of a single dynamic dispatch is tiny — a few nanoseconds. What hurts at scale is the lost optimisation opportunity: a method the compiler can't inline is a method it can't precompute, can't fold with its neighbours, and can't remove entirely if the result is unused.

Building the Intuition

Instead of memorising "type X uses dispatch Y", ask one question: can the compiler know the concrete implementation at compile time?

If yes → static dispatch, possibly inlined.
If it depends on runtime state → table dispatch.
If it's ObjC → message dispatch.

A few quick rules of thumb:

  • struct / enum methods → always static.
  • class methods → vtable by default; static if final, private, or inferred by WMO.
  • Protocol requirements → witness table when the type is abstract; static when the concrete type is known.
  • Protocol extension methods (non-requirements) → always static; they live at a fixed address.
  • @objc dynamic → always message dispatch.

Once you have that mental model, most of the surprising Swift dispatch behaviour stops being surprising. The compiler is just trying to give you the fastest version it can prove is correct.
