Apple's GoFetch silicon security fail was down to an obsession with speed


Opinion Apple is good at security. It's good at processors. Thus GoFetch, a major security flaw in its processor architecture, is a double whammy.

What makes it worse is that GoFetch belongs to a class of vulnerability known about for years before the launch of Apple Silicon processors. How did Apple's chip designers miss it? A similar problem exists in Intel's 13th Gen CPUs too. Spectre and Meltdown were discovered in 2018, after all. Is this a fundamental problem in modern processor design – an evolutionary misstep from which there's no return? The answer is part Einstein, part paranoia, and part marketing. Oh yes.


Let's start with Einstein, who said one of the rules of reality is that the further away something is, the longer it will take to get to you. Chip designers have to deal with that and other factors by keeping copies of frequently used data in small high-speed caches close to the processor. Doing this efficiently is essential and complex. It makes a ton of assumptions about what data will be needed and when, and how to make the transfers into the cache system neither too small nor too big. It's a huge engineering challenge, and absolutely vital to performance.

A lot depends on the details of the different memory technologies used in DRAM and on-chip cache alongside bus speed limitations, but even if all this were to be perfected, the basic physics of closer equals faster will never go away.

This is not only a rule of the universe, it's a big problem in cryptography. Cryptographic software uses secrets to encode and decode data, and it needs to do it in private. Modern CPUs provide plenty of privacy through memory managers that limit access to properly privileged code.

Not good enough. A cryptographic component may operate in perfect secrecy, but if it takes a different amount of time to complete its task depending on its inputs, an attacker timing it from the outside can start to piece together what's going on.

As a result of this discovery, the idea of constant-time coding evolved. No matter what happens within the code, it always finishes its task in the same amount of time – even if that means twiddling its virtual thumbs for an electronic age. Constant-time is now a basic technique for preventing information leakage from a protected system.
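To make that concrete, here's a minimal sketch of the idea in C. Neither function is anyone's production code; they simply show the difference between a comparison that quits as soon as it knows the answer and one that always does the same amount of work.

```c
#include <stddef.h>
#include <stdint.h>

/* Naive comparison: returns as soon as a byte differs, so how long it
   runs reveals how many leading bytes of the guess were correct. */
int leaky_compare(const uint8_t *secret, const uint8_t *guess, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (secret[i] != guess[i])
            return 0;            /* early exit: timing depends on the data */
    }
    return 1;
}

/* Constant-time comparison: touches every byte regardless and folds the
   differences into an accumulator, so a guess that is wrong at byte 0
   takes just as long as one that is wrong at byte n-1. */
int constant_time_compare(const uint8_t *secret, const uint8_t *guess, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= secret[i] ^ guess[i];    /* no branches on secret data */
    return diff == 0;
}
```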

This is at odds with caching. As the code component gets data from memory, it does so through caching – and a constant-time cache is no cache at all. It gives data fast if it's got it, slowly if it has to fetch it. If the cache is shared between multiple processes or cores, as it always is, then an attacker can watch cache hits and misses by timing, and extract information.
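That timing difference is easy to see for yourself. The sketch below is x86-specific – it leans on the _mm_clflush and __rdtscp intrinsics, which Apple Silicon doesn't expose – but the principle it demonstrates, that a cached load is fast and a flushed one is slow, holds on any modern CPU.

```c
/* Rough hit-versus-miss timing probe. x86-only; build with: gcc -O2 probe.c */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

static uint8_t target[4096];

/* Time a single load of *addr in timestamp-counter ticks. */
static uint64_t time_load(volatile uint8_t *addr)
{
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                       /* the load being measured */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    volatile uint8_t *p = target;

    *p = 1;                            /* pull the line into the cache */
    uint64_t hit = time_load(p);       /* fast: served from cache      */

    _mm_clflush((const void *)p);      /* evict the line               */
    _mm_mfence();
    uint64_t miss = time_load(p);      /* slow: fetched from DRAM      */

    printf("cached load:  %llu ticks\n", (unsigned long long)hit);
    printf("flushed load: %llu ticks\n", (unsigned long long)miss);
    return 0;
}
```

An attacker sharing a cache with a victim runs the same trick in reverse: evict a line, let the victim run, then time a reload to learn whether the victim touched it.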


Crypto code knows this and is designed to avoid it. The GoFetch bug happens because a feature of the Apple processor called a data memory-dependent prefetcher (DMP) monitors not just requests for memory, but the data flowing through the cache – and when a value looks like a pointer to another memory location, it prefetches that location too. This is invisible to, and uncontrollable by, the code, with the result that an attacker can fashion inputs to the crypto component that force the DMP to leak information about the code's operation. The computer goes faster, as it must, while breaking a basic tenet of modern cryptographic information hiding.
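The shape of the problem, heavily simplified and with invented names, looks something like the sketch below. It is not exploit code and not a literal description of Apple's hardware, just an illustration of why the classic constant-time rules stop being enough once the prefetcher starts reading your data.

```c
#include <stdint.h>
#include <stddef.h>

/* Victim routine written to the classic constant-time rules: no branches
   on the secret, no secret-dependent array indices. By those rules alone,
   it should leave no secret-dependent trace in the cache. */
void victim_mix(uint64_t *scratch, const uint64_t *secret,
                const uint64_t *attacker_input, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        /* The intermediate value written to memory depends on both the
           secret and on input the attacker controls. */
        scratch[i] = secret[i] ^ attacker_input[i];
    }
    /* The code's job ends here. The DMP's does not: it scans data like
       scratch[] as it moves through the cache and, if a value looks like
       a pointer, prefetches the address it points at. The attacker picks
       attacker_input so that scratch[i] only resembles a chosen pointer
       when a guessed secret bit is right, then times a probe of that
       address to see whether the prefetch happened. */
}
```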

If this is a clash of two fundamental aspects of computing, how did it happen and why did nobody spot it until now? The clash is between speed and secrecy, mirrored in the very philosophy of high-end chip makers. It's this philosophical element that makes the physical version so dangerous.

Chip makers obsess over speed, not only for its own sake but as the most important market differentiator. The industry is drenched in benchmarks where slower shows up, safer does not. The DMP side effect that gives us GoFetch is subtle, but perhaps nobody was looking too hard for it in the first place.

As to what makes things faster, well, that's a secret. The DMP idea does speed up normal operations, but Apple has disclosed very few details of its cache management systems. Instead, it took a massive cross-institution effort to reverse-engineer what was going on, then build and test proofs of concept.

This paranoid need for security by silence is universal among chipmakers. Only the paranoid survive, as Intel's spiritual leader and CEO Andy Grove said. You will not get more than a handful of marketing slides out of any big chipmaker. Try talking to Qualcomm, whose chips embody not only cutting-edge computer design but also the massive security burden of wireless data processing, about how all that works. 404 all day long.

Why? The only outfits that could use this information to commercially harm a big chip company are other chip companies, and they've all got the tools and expertise to work out what each other is doing anyway. If every detail of an Apple M3 chip were public, nobody could build an M3 competitor before the M4 came out, if then.

Yet if more details were available then two good things would happen. Security flaws would be caught earlier – no waiting three generations to get a DMP killswitch – and design decisions would be safer across the industry. Most deliciously, that first obsession – speed – would be far better served.

A corollary of very fast, very complex cache systems is that the better coders understand them, the more finely tuned the code can be to make best use of the system – and avoid doing things that trip it up. The better a compiler understands data segmentation and flow, and which behaviors trigger which results, the faster and more efficient the output. You can't do this if you don't know what's going on.
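A familiar, if unglamorous, example of that tuning is simply walking memory in the order the cache expects. The toy below compares summing a matrix along its layout in memory against striding across it; on most machines the second version is several times slower despite doing exactly the same arithmetic.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096

/* Row-major walk: consecutive addresses, so every cache line fetched is
   used in full before the next one is needed. */
static long sum_row_major(int (*m)[N])
{
    long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += m[i][j];
    return s;
}

/* Column-major walk: each access jumps N ints ahead, touching a new cache
   line almost every time and discarding most of what was fetched. */
static long sum_col_major(int (*m)[N])
{
    long s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += m[i][j];
    return s;
}

int main(void)
{
    int (*m)[N] = malloc(sizeof(int[N][N]));
    if (!m) return 1;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            m[i][j] = i + j;

    clock_t t0 = clock();
    long a = sum_row_major(m);
    clock_t t1 = clock();
    long b = sum_col_major(m);
    clock_t t2 = clock();

    printf("row-major: %ld in %.3fs   column-major: %ld in %.3fs\n",
           a, (double)(t1 - t0) / CLOCKS_PER_SEC,
           b, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(m);
    return 0;
}
```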

Secrecy and speed are incompatible in some ways, mutually beneficial in others. Engineering this fact for best results will always be a compromise, but that's what engineering's all about. Chip companies would be doing everyone a huge favor if they re-engineered their philosophy, not just their chips, to recognize this. ®
