
How Intel missed the iPhone revolution

TechCrunch TechCrunch 17/05/2016 Jon Stokes

Intel just laid off 12,000 workers in face of declining PC revenues, and the move has all of us asking “what’s next?” for the company that launched the microprocessor revolution.

The booming smartphone market is almost exclusively based on microprocessor technology from tiny ARM, a British chip design company with no manufacturing capability and a market cap that, for most of the pre-iPhone era, was smaller than Intel’s advertising budget.

Intel, then, is left clinging to the top of a palm tree while the technological tsunami that it started with the 1971 launch of the Intel 4004 sweeps across humanity.

So what happened? How did the company co-founded by semiconductor pioneer Gordon Moore, the eponymous author of Moore’s Law and one of the most far-seeing prophets of the digital age, miss the mobile boat entirely?

To understand what went wrong for Intel after the launch of the iPhone, you first have to know what went right for the chipmaker during the “Wintel” duopoly years — how the company used a specific set of business practices to maintain shockingly high profit margins, shut out rivals, and ultimately draw the wrath of the FTC.

This isn’t so much a technology horserace story as it is a business story, and once you see how the business and tech parts fit together to create the “PC era” we all lived through, it’ll be obvious why the company couldn’t pull off a daring pivot into mobile the way it once pivoted from making memory chips to microprocessors.

But before we can get into the history, I have to take a brief detour and try to brain a zombie idea that just won’t die. It came up most recently in this piece by Jean-Louis Gassée, and I call it the “ARM Performance Elves” hypothesis.

The Performance Elves Were Real… Until They Weren’t

Image credit: Quinn Dombrowski

The reason that Intel lost mobile to ARM has nothing to do with the supposed defects of the ancient x86 Instruction Set Architecture (ISA), or the magical performance properties of the ARM ISA. In a world of multibillion-transistor processors, anyone who suggests that one ISA has any sort of intrinsic advantage over another is peddling nonsense on stilts. I wrote about this five years ago — it was true then, and it’s still true:

First, there’s simply no way that any ARM CPU vendor, NVIDIA included, will even approach Intel’s desktop and server x86 parts in terms of raw performance any time in the next five years, and probably not in this decade. Intel will retain its process leadership, and Xeon will retain the CPU performance crown. Per-thread performance is a very, very hard problem to solve, and Intel is the hands-down leader here. The ARM enthusiasm on this front among pundits and analysts is way overblown—you don’t just sprinkle magic out-of-order pixie dust on a mobile phone CPU core and turn it into a Core i3, i5, or Xeon competitor. People who expect to see a classic processor performance shoot-out in which some multicore ARM chip spanks a Xeon are going to be disappointed for the foreseeable future.

It’s also the case that as ARM moves up the performance ladder, it will necessarily start to drop in terms of power efficiency. Again, there is no magic pixie dust here, and the impact of the ISA alone on power consumption in processors that draw many tens of watts is negligible. A multicore ARM chip and a multicore Xeon chip that give similar performance on compute-intensive workloads will have similar power profiles; to believe otherwise is to believe in magical little ARM performance elves.

This notion that ARM is somehow inherently more power-efficient than x86 is a holdover from the Pentium Pro days, when the “x86 tax” was actually a real thing.

Intel spent a double-digit percentage of the Pentium Pro’s transistor budget on special hardware that could translate big, bulky x86 instructions into simpler, smaller ARM-like “micro-ops”.

In the subsequent decades, as Moore’s Law has inflated transistor counts from the low single-digit millions into the high single-digit billions, that translation hardware hasn’t grown much, and is now a fraction of a percent of the total transistor count for a modern x86 processor.
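The shrinking “x86 tax” is easy to see with a back-of-the-envelope calculation. The Pentium Pro’s transistor count is public; the decode-hardware share (15%) and the modern die size (5 billion transistors) below are illustrative assumptions chosen to match the article’s “double-digit percentage” and “billions” figures, not measured numbers.

```python
# Back-of-the-envelope sketch of the shrinking "x86 tax".
# Only the Pentium Pro transistor count is a real figure; the
# decode share and modern die size are illustrative assumptions.

pentium_pro_transistors = 5_500_000        # Pentium Pro (1995)
decode_share_1995 = 0.15                   # assumed "double-digit" share
decode_transistors = pentium_pro_transistors * decode_share_1995

modern_transistors = 5_000_000_000         # a 2016-era x86 die, roughly

# Even if the translation hardware had grown tenfold since 1995,
# it would be a rounding error on a multibillion-transistor die.
modern_decode = decode_transistors * 10
modern_share = modern_decode / modern_transistors

print(f"1995 decode share: {decode_share_1995:.0%}")
print(f"2016 decode share: {modern_share:.3%}")
```

However generous the assumptions, the translation hardware ends up at a small fraction of a percent of the modern transistor budget.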

In short, anyone who believes that ARM confers a performance-per-watt advantage over x86 is well over a decade behind the times.

With that out of the way, on with the real story.

ISA Lock-In

Since the dawn of the PC era, Intel has enjoyed a supremely rare and lucrative combination of both high volumes and high margins. The company has sold a ton of chips, and it has been able to mark them up way more than should normally be possible.

There is one technical reason that Intel has historically been able to mark its chips up so high: x86 lock-in. A processor’s ISA does matter, but for backwards compatibility, not performance.

When a massive, complex software platform like Windows is compiled for a specific ISA, it’s a giant pain to recompile and optimize it for a different ISA, like ARM. Techniques like just-in-time (JIT) compilation and binary translation have long promised to do away with this problem, but they’ve never fully panned out, and ISA lock-in is still very real in 2016.

Apart from some failed commercial experiments and lab builds, Windows has always been effectively x86-only as far as the mass market is concerned. This meant that PC vendors like Dell, the erstwhile Gateway, and even more boutique shops were in the business of selling “Wintel” PCs to customers who may have just wanted Windows but who also had to buy Intel in order to get it.

Thanks to their effective “duopoly” status, Intel and Microsoft could charge quite a bit of money for their respective technologies, jacking up PC sticker prices for consumers and leaving systems integrators to scramble for what little profit they could eke out. But Intel in particular was famous for using its half of the Wintel duopoly to starve up-and-coming rivals for cash by suppressing their margins. The scheme worked as follows.

The Margin Mafia

Let’s say that Intel wants to boost its margins, so it goes to Dell and says, “we’re going to start charging you more money for our CPUs.” Since Dell has no real alternative if it wants high-performing x86 processors (I’ll talk about AMD in a moment), Dell has to suck up the price increase.

Now that Dell is sending Intel more money per PC shipped, the PC maker has three options: 1) raise prices, making its offerings less competitive in a cut-throat PC marketplace where anyone with a screwdriver and a credit card can start a PC assembly shop; 2) eat the cost increase and watch its own margins suffer (and its stock price get hammered); or 3) squeeze the other component makers that supply its PC parts (the GPU, the sound card, the motherboard, etc.) into lowering their prices and margins to make up the difference.

Not surprisingly, Dell and the rest of the PC vendors got into the habit of choosing option 3. Because there are multiple GPU vendors in the market, Dell could go to Nvidia and ATI and play them off against each other, forcing them to offer lower prices in order to secure a spot in a Dell PC. And likewise with other component makers.

This, then, was the basic mechanism by which Intel was able to “steal” margin from everyone else inside the PC box, especially the GPU makers like Nvidia and ATI.

As for AMD, Intel’s primary way of locking them out was the “Intel Inside” branding program. In exchange for putting an “Intel Inside” sticker on their PCs, and for including the little “Intel Inside” logo in their advertisements, Intel would subsidize the PC vendors’ marketing efforts. This was effectively a kickback scheme.

I call it a kickback scheme, because it works as follows: Intel raises its margins on its CPUs, forcing Dell to demand that Nvidia and/or ATI accept smaller margins on their GPUs, and then Intel kicks back a portion of the money that was peeled from Nvidia and ATI’s share of the pie to Dell by subsidizing their PC marketing efforts.
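The flow of margin can be made concrete with toy numbers. Every price and the kickback share below are made-up illustrations of the mechanism, not real figures.

```python
# Toy numbers to trace the margin flow: Intel raises its CPU price,
# Dell pushes the increase onto the GPU vendor, and Intel kicks part
# of it back to Dell as co-marketing money. All figures are invented.

cpu_before, cpu_after = 150, 170     # Intel raises its CPU price $20
gpu_before = 80                      # GPU vendor's old price to Dell

increase = cpu_after - cpu_before    # the $20 Intel wants

# Option 3: Dell squeezes the GPU vendor for the whole increase.
gpu_after = gpu_before - increase    # GPU vendor eats the $20

bill_before = cpu_before + gpu_before
bill_after = cpu_after + gpu_after   # unchanged: the squeeze worked

# "Intel Inside": Intel returns part of the increase as a marketing
# subsidy (the 50% share here is an assumed figure).
kickback = increase // 2

net_bill = bill_after - kickback     # Dell actually comes out ahead

print(bill_before, bill_after, net_bill)
```

Dell’s component bill is unchanged because the GPU vendor absorbed the increase, and the kickback leaves Dell strictly better off, which is why the PC vendors went along with it.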

Dell gets to keep its unit cost the same and gets some extra money for marketing, Intel gets fatter margins and weakened rivals, and everybody but the component makers and AMD is happy. Why would Dell rock the boat by threatening to switch to AMD? (Dell did eventually introduce AMD-based products, and it was big news at the time.)

Taking a Pass on ARM

Intel had a good thing going with the scheme described above, and they wanted to protect it at all costs. The chipmaker also had a pretty sweet line of ARM processors, but it sold that line off when it decided that low-margin, high-volume businesses were for the birds.

By the time ARM was gaining traction and Jobs had secretly gone to work on the iPhone, Intel was thoroughly addicted to its fat margins. Thus it was no surprise that Intel passed on Steve Jobs’ suggestion that they fabricate an ARM chip for the iPhone. Intel didn’t want to be in the low-margin business of providing phone CPUs, and it had no idea that the iPhone would be the biggest technological revolution since the original IBM PC. So Intel’s CEO at the time, Paul Otellini, politely declined.

It’s also important to note that there was no mad scramble from other phone vendors for x86-powered phone chips, either. Everyone in the mobile space already knew everything I just outlined above. They were wise to Intel’s tricks, and they knew that the minute they adopted a hypothetical low-power x86 CPU, Intel would use ISA lock-in to start ratcheting up its margins.

There was no way they were going to give Intel the leverage to do to the smartphone space what the chipmaker did to the PC space. So the Nokias of the world had as little interest in Intel as Intel had in them, and the former happily went with the cheap, ubiquitous ARM architecture.

ARM is great, despite the fact that it has mostly run a process node or two behind Intel, because it gives a system integrator options. Unlike with x86, if one ARM vendor tries to squeeze you, you can ditch them and move to another. ARM chips may not have matched Intel on performance per watt or transistor density, but they’ve always been cheap, easy, available, and totally devoid of the threat of lock-in.

Playing Defense with Atom

At some point, Intel realized that ARM was a threat in the low-power space, so the company released the low-power Atom x86 processor line as a way to play defense. But Atom just plain sucked — seriously, it ran like a dog, and Windows-based Atom netbooks were borderline unusable for the longest time.

Atom was terrible because it had to be, though. Had Intel released a low-margin, high-performance, low-power x86 part, server makers would’ve been among the first to ditch the incredibly expensive Xeon line for a cheaper alternative.

Intel basically couldn’t allow low-margin x86 products to cannibalize its high-margin x86 products, so it was stuck with half-measures, like Atom, aimed more at keeping ARM from moving upmarket into laptops than at moving x86 down into smartphones.

Conclusions

The TL;DR of this entire piece is that Intel missed out on the mobile CPU market because that market is a high-volume, low-margin business, and Intel is a high-volume, high-margin company that can’t afford to offer low-margin versions of its products without killing its existing cash cow.

The other thing you should take away from this is that if your entire business is built on using your monopoly status to squeeze partners, starve competitors, and fatten your own margins at everyone else’s expense, that won’t go unnoticed. Incumbents in any new space you try to enter will be leery of partnering with you, lest they find themselves subject to the same tactics.

At this point in 2016, even if Intel wanted to go all-in on mobile, it’s not clear it could. Who in their right mind would bet their entire company on an x86 mobile processor, no matter how mind-meltingly awesome its specs, given Intel’s history of using x86 as leverage to crush an entire ecosystem?

Indeed, if Apple ever moves its laptop and desktop lines from x86 to ARM, it won’t be because of ARM Performance Elves or the supposed deficiencies of the legacy x86 ISA — it’ll be because they’ve finally migrated ARM up-market to the point that the performance/watt gap vs. Intel at the same unit cost is small enough that they think they can get away with another big switch.

An ARM-based MacBook will have worse performance per watt on CPU-bound workloads than a comparable Intel-based laptop, probably forever. But Apple won’t care, because they’ll be able to lower prices and/or widen their margins, and their customers will keep buying them anyway because Apple.

As for what’s next for Intel, it’s a choice between stagnation and a painful transition to a lower-margin business. They can’t break into hot growth areas like cars or IoT clients while keeping the x86 lock-in plus high-margin gravy train rolling. No, if Intel wants to grow, they’ll have to give up their margins one way or the other: either by letting a low-margin x86 part cannibalize their higher-end products, or by getting back into the ARM business.
