Dear Intel: Do it.
For those who have not seen, in April 2023 Intel published a whitepaper on a simplified instruction set it is calling X86-S. The big change is that Intel proposes dropping native 16-bit and 32-bit support from its instruction set. As stated above, Intel should do this, but there is a major cost involved.
Intel X86-S Streamlined 64-bit Instruction Set: A Perspective
According to the Intel blog post and 46-page whitepaper, the main goal of X86-S is a cleanup of the legacy 16-bit and 32-bit operating modes. Currently, to get into 64-bit mode, X86 CPUs need to traverse those legacy modes: a CPU still comes out of reset in 16-bit real mode, switches into 32-bit protected mode, and only then enters 64-bit long mode.
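The mode ladder the whitepaper wants to remove can be sketched in terms of the control-register bits a bootloader flips along the way. This is an illustrative Python model rather than firmware code: the bit positions (CR0.PE is bit 0, CR4.PAE is bit 5, EFER.LME is bit 8, CR0.PG is bit 31) are the architectural ones, but the function itself is a hypothetical simplification of the real sequence.

```python
# Architectural control-register bits used on the path from reset to long mode.
CR0_PE = 1 << 0    # CR0.PE: protected mode enable
CR4_PAE = 1 << 5   # CR4.PAE: physical address extension (required for long mode)
EFER_LME = 1 << 8  # IA32_EFER.LME: long mode enable
CR0_PG = 1 << 31   # CR0.PG: paging enable (activates long mode once LME is set)

def boot_to_long_mode():
    """Model the legacy mode ladder a 64-bit x86 boot must climb today."""
    cr0, cr4, efer = 0, 0, 0    # reset state: 16-bit real mode
    cr0 |= CR0_PE               # step 1: enter 32-bit protected mode
    cr4 |= CR4_PAE              # step 2: enable PAE paging structures
    efer |= EFER_LME            # step 3: request 64-bit long mode
    cr0 |= CR0_PG               # step 4: enable paging; long mode is now active
    return cr0, cr4, efer

if __name__ == "__main__":
    print(boot_to_long_mode())
```

Under the X86-S proposal, steps 1 through 4 would effectively disappear: the CPU would come up 64-bit capable without ever exposing the 16-bit or 32-bit legacy modes.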
Reading that, likely on a 64-bit capable system running a 64-bit OS, you may think that this transition already happened. In the world of x86, backward compatibility is a big deal, so those legacy modes are still there. When we started STH in 2009, the 32-bit versus 64-bit OS transition was in full swing. There were real concerns at the time because 32-bit OSes were more efficient at small memory sizes. That is a big reason many Arm projects, like the Raspberry Pi, were mainly 32-bit for a long time. In the sub-3GB to 4GB RAM range, 32-bit can make sense. For modern x86 CPUs, 4GB is the low end, and even when we see systems equipped with only older E-cores and 4GB, we suggest 8GB these days. An 8GB DDR4 SODIMM is around $17-18 at the time we are publishing this.
With Linux and Microsoft pushing 64-bit for over a decade, it is time, as the paper suggests, to trim legacy instructions and functionality from X86.
Since backward compatibility is a big deal, the proposed solution is only halfway serviceable: Intel proposes using VMs to provide legacy 16-bit and 32-bit support. Another thought is, why bother at all? 2023 hardware is more than capable of running legacy code, and today's processors will realistically remain in the field en masse until 2030. For any applications that need to run on bare metal, why not just keep using today's hardware as part of an up-cycling story?
A transition tomorrow means that those using today's hardware would have plenty of time to move most applications to virtual machines, or simply to leave them on existing hardware platforms.
In the meantime, work to streamline X86 helps solve tomorrow's problems more efficiently, rather than hindering that work by continuing to enable legacy hardware. To put this in perspective, maintaining two-generation-old 16-bit compatibility in 2023 is like SpaceX spending time figuring out how to strap a horse saddle to a rocket flight seat. If folks still want to ride horses, they can use horses and saddles. We can look back at 16-bit compatibility through ESG-themed upcycling, just as we do with nostalgic equestrian pursuits.
If this were 2007-2010, when that transition was in full force, my opinion would be very different, but we are now a decade to a decade and a half removed from the transition's major push.
With all of that said, some thought should be given to what this means. While Intel's core markets, such as notebook, desktop, and server CPUs, may all use 64-bit these days, that is far from a given in the IoT space, where 32-bit can make sense from a power and cost perspective. At the same time, that has not been Intel's focus. Instead, E-cores are getting faster and Quark continues to be sunset (fun fact: you can still buy new servers with 32-bit Intel Quark cores in the Lewisburg-generation PCH).
Intel would effectively be surrendering the x86 IoT space to Arm and RISC-V with this move; perhaps it already has. This may not seem like the biggest change in the world to many, and there are still 32-bit apps out there. At the same time, we live in an era where 64-bit is mainstream for client and server devices, so those parts of the market are now due for a transition. The only question is when.
If you are one of the folks concerned about this, check out Intel's page and whitepaper linked above.