Apple revealed its first custom Mac processor, the Apple M1, during an event in November 2020. The chip was met with high praise for cramming loads of power into a tiny space, and it led people to wonder whether SoCs were the future of computing. Because the M1 uses the Arm architecture, Apple had to find a way for M1 Macs to run programs designed for Intel-based Macs. Apple went with a simple but effective method: translating x86-64 code so it can run on the M1 itself. The translation layer is called "Rosetta 2," named after the Rosetta Stone. People were surprised by how effective Rosetta 2 was, and Dougall Johnson, an Australian security researcher, now believes he knows why.
There is an undocumented extension inside Apple's silicon that streamlines computing the x86 parity and adjust flags for translated applications. This allows for faster and more accurate emulation, according to Johnson. The most remarkable part is the origin of those flags: they were introduced with Intel's second 8-bit processor, the Intel 8080, from 1974. That ancient microprocessor computed parity and half-carry adjustments in a very specific way, and the behavior has carried forward into today's Intel processors. If you have a new Core i9-13900K, there is a direct (albeit minor) lineage back to processors that powered computers nearly 50 years ago.

Bits 26 and 27 of Arm's flags register are dedicated to holding these flags, though the two bits take on this role only when the extension is active. Rosetta 2 kicks in only when it detects a program built for Intel-based Macs, repurposing the two bits and allowing translated code to run at its usual snappy pace. It's interesting to see the methods Apple deployed to let people keep using programs designed for older Intel-based Macs. The idea of an entire architecture rerouting two bits to handle operations the same way a processor released under the Nixon administration did is fascinating.
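To see why hardware help matters here, consider what an emulator has to do without it. The sketch below (not Apple's actual implementation; the function name is illustrative) computes the x86 parity flag (PF, even parity of the result's low byte) and adjust flag (AF, a carry out of bit 3, a BCD-era holdover) in software after an 8-bit add. A software translator would pay a cost like this on every arithmetic instruction whose flags might later be read, which is exactly the work the undocumented extension moves into hardware.

```python
def x86_flags_after_add(a: int, b: int) -> dict:
    """Illustrative sketch: x86 parity (PF) and adjust (AF) flags
    for an 8-bit add, the two flags the article says the extension
    accelerates. Both date back to the Intel 8080's flag behavior."""
    result = (a + b) & 0xFF
    # PF: set when the low byte of the result has an even number of 1 bits
    pf = bin(result).count("1") % 2 == 0
    # AF: set when the addition carries out of bit 3 (the low nibble)
    af = ((a & 0xF) + (b & 0xF)) > 0xF
    return {"PF": pf, "AF": af}

# 0x0F + 0x01 = 0x10: the low nibble overflows, so AF is set;
# 0x10 has a single 1 bit (odd parity), so PF is clear.
print(x86_flags_after_add(0x0F, 0x01))
```

With hardware support, the translated Arm code can simply read the precomputed flag bits instead of running a popcount and nibble check after every add.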