Bitcode. I just don’t buy it.

At WWDC 2015 Apple introduced the concept of Bitcode, which essentially means that in future we will compile our code to an intermediate representation (IR) and upload it to iTunes Connect in this device-agnostic form.
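To make the idea concrete, here is a rough sketch of what an IR-based pipeline looks like using the raw LLVM tools. This is illustrative only, not the actual Xcode/iTunes Connect pipeline (Xcode drives Bitcode via a build setting and embeds it in the binary); the file names are hypothetical:

```shell
# Illustrative only: emit LLVM bitcode from a hypothetical hello.c with clang.
clang -emit-llvm -c hello.c -o hello.bc

# Disassemble the bitcode into human-readable LLVM IR to inspect it.
llvm-dis hello.bc -o hello.ll

# Later, on someone else's machine, lower the same IR to a specific CPU.
# (This is the step Apple could in principle run server-side per device.)
llc hello.bc -o hello.s
```

The point is the split: the developer ships the `.bc` stage, and the final CPU-specific lowering can happen later, elsewhere, by someone who is not the developer.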

This has been sparsely publicized as an optimisation strategy for iOS and watchOS apps, allowing Apple to apply hardware-specific micro optimisations after your app has been released to the store.

Perhaps they will use it for this, but I call disinformation on the whole thing.

If you stop to think for a moment about what the introduction of Bitcode involves, you quickly see this makes no sense for micro optimisations that might shave a few nanoseconds off operations here and there on newer chips.

Off the top of my head, here are just some of the costs and risks of introducing Bitcode:

  • Deploying code the developers have not tested and verified themselves. Both a PR risk with customers and a risk in developer relations. A bad deploy could destroy a company’s reputation for quality.
  • Developing Bitcode itself, and integrating it with the developer tools, will not have been trivial, and that work is ongoing, increasing complexity.
  • Infrastructure costs of hosting and cross-compiling Bitcode to N platforms for every release version of an app – including old versions if a user has not updated for some reason (unless they are forced to update to the latest apps on device restore).
  • Infrastructure and development costs of changes to iTunes Connect to handle the batch processing of queued cross-compile jobs, and most likely verifying that cross-compiled builds still work and pass App Review (N+1 reviews per release?).

In short this must be a big and relatively expensive project, and yet it is apparently just to service small optimisations on future hardware — something that makes Apple no new money at all. The new device is already sold and likely already faster than the previous device the user had.

A more substantial explanation could be devices shipping with a new family of chips that run the IR, or a single derivative of it, directly. No cross-compiling: Apple ships exactly the code the developer provided, and deploys that same code to every new device.

That in itself would not generate more revenue for Apple in any direct way, although there will be CDN cost savings from smaller binaries, which are not insignificant at Apple’s scale.

What about sharing of code between devices? You can imagine a gaming scenario where Tom has a game and his friends Sarah and Pete want to play it together but do not have it. Tom starts the game, and Sarah and Pete get a prompt: “Tom wants to play MEGAWHATEVER with you. Do you want to install it now?”. They answer YES and the game is transferred in seconds over wi-fi directly between the devices, without having to get or buy the game (even if it is free) from the App Store. Couple that with Apple TV showing the actual gameplay and the devices acting as controllers with displays, and you have a pretty compelling iOS social gaming experience.

Far-fetched? Possibly. I’m not sure about it either. But I am sure this is not just about theoretical optimisations.

The Author

Marc Palmer (Twitter, Mastodon) is a consultant and software engineer specialising in Apple platforms. He currently works on the iOS team of the Concepts sketching app, as well as his own apps, such as the video subtitle app Captionista. He created the Flint open source framework. He can also do a pretty good job of designing app products. Don't ask him to draw anything, because that's really embarrassing. You can find out more here.