On Monday, Apple announced what many of us had been expecting but hoping wouldn’t happen quite yet: the Mac is moving from Intel to Apple’s own ARM-based silicon. The transition starts later this year with the first Apple Silicon Macs, and Apple expects the full lineup to make the switch within two years.
I’ve been through platform transitions before. I was around for the 68k to PowerPC move, and I was very much in the thick of things during the PowerPC to Intel switch in 2005. Each time, Apple managed to pull it off more smoothly than anyone predicted. But this one feels different in scale and implication, especially for those of us who build developer tools and server-side software.
The Technical Foundation
Apple’s A-series chips in the iPhone and iPad have been embarrassingly fast for years now. The A12Z in the current iPad Pro already rivals many laptop-class Intel chips in single-threaded performance while sipping power. The Developer Transition Kit Apple is shipping — a Mac mini with an A12Z — isn’t even the final silicon. It’s a teaser.
The architectural advantages of ARM are well-documented: better performance per watt, tighter integration between CPU, GPU, and neural engine, and a unified memory architecture that eliminates the overhead of copying data between CPU and GPU memory pools. For machine learning workloads, image processing, and video encoding, the gains should be substantial.
What interests me more is what this means for the instruction set story. x86 has accumulated decades of backwards-compatibility baggage. ARM’s RISC architecture is cleaner, and Apple has the luxury of designing their chips for a single operating system. They don’t have to accommodate the weird edge cases that Intel deals with to keep ancient Windows software running.
Rosetta 2 and the Translation Layer
Apple demonstrated Rosetta 2, the translation layer that will run existing x86 Mac apps on ARM. They showed Shadow of the Tomb Raider running through translation at what appeared to be playable frame rates, and they demonstrated Microsoft Office running without modification.
I’m cautiously optimistic here. The original Rosetta during the PowerPC-to-Intel transition worked better than anyone expected, but it wasn’t free — there was a measurable performance penalty, and some apps had subtle bugs. The new Rosetta has an advantage: it can do ahead-of-time translation at install time, not just JIT translation at runtime. That should help significantly with sustained workloads.
But here’s my concern: developer toolchains. If you’re running Docker, compiling large C++ projects, or using tools like Vagrant and VirtualBox, the transition gets complicated fast. Docker containers are built for specific architectures. You can’t just run an x86 Linux container on ARM without an emulation layer, and emulation layers for container workloads are slow.
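You can see the problem concretely by asking a registry which architectures an image actually ships for. A quick check, assuming you have the Docker CLI available (the `manifest` subcommand is still marked experimental as of this writing, and `ubuntu:20.04` here is just a convenient example):

```shell
# "docker manifest inspect" queries the registry without pulling the image,
# and lists every platform the image was published for.
docker manifest inspect ubuntu:20.04 | grep -A2 '"platform"'

# If the output includes linux/arm64, the image will run natively on an
# ARM Mac. If it only lists linux/amd64, Docker has to fall back to
# emulation — or refuse to run it at all.
```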
What Developers Should Do Now
If you’re a web developer working primarily in JavaScript, Python, or Ruby, your transition will probably be smooth. The interpreters and runtimes will be ported — Node.js, Python, and Ruby all run on ARM Linux already, so the macOS ARM ports should follow quickly.
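If a build script ever does need to branch on architecture, the check is a one-liner. A minimal sketch: on today’s Intel Macs `uname -m` reports `x86_64`, and the ARM machines are expected to report `arm64` (Linux ARM boxes say `aarch64`):

```shell
# Ask the kernel what machine architecture it's running on.
arch="$(uname -m)"

case "$arch" in
  x86_64)        echo "Intel (or translated) environment" ;;
  arm64|aarch64) echo "native ARM environment" ;;
  *)             echo "unknown architecture: $arch" ;;
esac
```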
If you’re doing systems programming in C, C++, or Rust, you’ll want to get a Developer Transition Kit and start cross-compiling. The LLVM toolchain already has excellent ARM support, so Clang and Rust’s compiler should produce native ARM binaries without drama. GCC will follow.
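The cross-compilation flags largely exist already; the missing piece until the new SDKs ship broadly is an arm64 macOS sysroot. A sketch of what the invocations should look like under the new Xcode toolchain — `hello.c` and the output names are placeholders, and the Rust target is still being stabilized:

```shell
# C via Clang: emit an arm64 macOS binary from an Intel Mac.
clang -target arm64-apple-macos11 -o hello-arm64 hello.c

# Rust: add the Apple Silicon target, then cross-compile.
rustup target add aarch64-apple-darwin
cargo build --target aarch64-apple-darwin

# Stitch the Intel and ARM slices into one universal binary with lipo.
lipo -create -output hello-universal hello-x86_64 hello-arm64
```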
If you rely heavily on Docker for local development — and in 2020, most of us do — this is where I’d focus my attention. Docker has been running on ARM (Raspberry Pi, AWS Graviton) for a while, but the ecosystem of pre-built images is overwhelmingly x86. You’ll either need ARM-native images or you’ll be running through QEMU emulation, which is functional but not fast.
My immediate advice: start building multi-architecture Docker images now. Use docker buildx to create images that work on both amd64 and arm64. Even if you’re not planning to buy an ARM Mac on day one, multi-arch images are good practice — they’ll work on AWS Graviton instances too, which are often cheaper than their x86 equivalents.
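A minimal sketch of that workflow, assuming Docker 19.03 or later with BuildKit enabled and a registry you can push to (the builder name and image tag are placeholders):

```shell
# Create and select a builder instance that can target multiple platforms.
docker buildx create --name multiarch --use

# Build for both Intel and ARM in one pass and push the combined
# manifest list. QEMU handles whichever platform isn't native to the host.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/myapp:latest \
  --push .
```

Pulling that tag then resolves to the right architecture automatically, which is exactly the behavior you want when half your team is on ARM and half is still on x86.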
The Virtualization Question
One area that’s genuinely uncertain is virtualization. VirtualBox doesn’t support ARM hosts at all. VMware and Parallels will need new hypervisors. And running Windows on an ARM Mac will require the ARM version of Windows, which Microsoft has been somewhat half-hearted about supporting.
For developers who need to test on Windows or run Linux VMs locally, this could be a real pain point during the transition period. Apple announced a new virtualization framework, but details are thin. We’ll need to see how VMware and Parallels respond.
I suspect that within 18 months, the virtualization story will be sorted out. But if you’re buying new hardware this year and you depend on running x86 VMs, the Intel Macs are still the safe bet.
My Take
I think this is the right move for Apple, and ultimately the right move for developers — even though the next year or two will involve some friction. ARM’s efficiency advantages are real and growing. Intel’s roadmap has been troubled, with repeated delays and a shrinking process node advantage. Apple building their own silicon gives them control over the entire stack from transistor to API, which is a powerful position.
What excites me most is the potential for always-on, instant-wake laptops with genuine all-day battery life that can still compile code quickly. The current MacBook Pro is a compromise machine — it’s either fast or cool and quiet, rarely both. If Apple’s claims hold up, ARM Macs could be fast and efficient simultaneously.
The developer ecosystem will adapt. It always does. But if you’re making hardware purchasing decisions in the next six months, think carefully about your dependency on x86-specific tooling. The future is ARM, and it’s arriving faster than most of us expected.



