Why Intel Killed Its Optane Memory Business

Analysis Intel CEO Pat Gelsinger has confirmed that Intel will abandon its Optane business, ending its attempt to create and grow a tier of memory that is slightly slower than DRAM but has the virtues of persistence and high IOPS.

However, the news should not come as a surprise. The division has been on life support ever since Micron's 2018 decision to end its joint venture with Intel, which produced the 3D XPoint chips that go into Optane drives and DIMMs. While Intel has indicated it is open to using third-party foundries, without the means to make its own Optane silicon, the writing was on the wall.

As our sister site Blocks and Files reported in May, the split came only after Micron stoked a glut of 3D XPoint memory: more chips than Intel could sell. Intel is estimated to be sitting on up to nearly two years of supply.

In its weak second-quarter earnings report, Intel said exiting Optane would reduce the value of its inventory by $559 million. In other words, the company is abandoning the project and writing the inventory off as a loss.

The move also marks the end of Intel's SSD business. Two years ago, Intel sold its NAND flash business and manufacturing operations to SK hynix to focus its efforts on Optane.

Intel's 3D XPoint memory was announced in 2015, and the first Optane SSDs arrived two years later. Unlike solid-state drives from competitors, however, Optane SSDs couldn't compete on capacity. Instead, the hardware delivered some of the strongest I/O performance on the market, a quality that made it particularly attractive for latency-sensitive applications where IOPS mattered more than throughput. Intel claimed its PCIe 4.0-based P5800X drives could reach 1.6 million IOPS.

Intel also used 3D XPoint in its Optane persistent memory DIMMs, notably around the launch of its second- and third-generation Xeon Scalable processors.

From a distance, Intel Optane DIMMs look no different than regular DDR4, apart from, perhaps, a heat spreader. Upon closer inspection, however, the modules offered much larger capacities than were possible with DDR4 memory: 512 GB per DIMM was not uncommon.

Slotting Optane DIMMs in alongside standard DDR4 enabled a number of new use cases, including a tiered memory architecture that was essentially transparent to the operating system. When deployed in this way, the DDR memory was treated as a large level-4 cache, with the Optane memory serving as system memory.

While not offering anywhere close to the performance of DRAM, this approach made it possible to deploy very large memory-intensive workloads, such as databases, at a fraction of the cost of an equivalent amount of DDR4, without the need for software customization. That was the idea anyway.
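The cache-in-front-of-a-bigger-tier arrangement described above can be sketched as a toy model. The class, capacities, and LRU eviction policy below are illustrative assumptions, not Intel's actual implementation, which works in the memory controller at cache-line granularity:

```python
from collections import OrderedDict

class TieredMemory:
    """Toy model of a transparent memory tier: a small fast tier (DRAM)
    acting as an LRU cache in front of a large slow tier (Optane)."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # DRAM "cache" tier, LRU-ordered
        self.slow = {}              # Optane "system memory" tier
        self.fast_capacity = fast_capacity
        self.hits = self.misses = 0

    def _evict_if_full(self):
        # Write the least recently used entry back to the slow tier.
        while len(self.fast) >= self.fast_capacity:
            addr, value = self.fast.popitem(last=False)
            self.slow[addr] = value

    def write(self, addr, value):
        self._evict_if_full()
        self.fast[addr] = value
        self.fast.move_to_end(addr)

    def read(self, addr):
        if addr in self.fast:        # fast-tier hit
            self.hits += 1
            self.fast.move_to_end(addr)
            return self.fast[addr]
        self.misses += 1             # miss: fetch from slow tier, promote
        value = self.slow[addr]
        self._evict_if_full()
        self.fast[addr] = value
        return value

mem = TieredMemory(fast_capacity=2)
for addr in range(4):
    mem.write(addr, addr * 10)     # only the last two stay in the fast tier
assert sorted(mem.fast) == [2, 3]
assert mem.read(0) == 0            # miss: promoted from the slow tier
assert mem.read(0) == 0            # now a fast-tier hit
```

The point is only the promotion and eviction between tiers happening behind a single read/write interface, which is what made the scheme transparent to software.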

Optane DIMMs could also be configured to operate as a high-performance storage device, or as a combination of storage and memory.

What now?

While DDR5 promises to address some of the capacity challenges that Optane persistent memory solved, with 512 GB DIMMs planned, it is unlikely to be price-competitive.

DDR doesn't get cheaper, at least not quickly, but NAND flash prices are dropping as supply exceeds demand. SSDs, meanwhile, keep getting faster.

Micron this week began volume production of 232-layer NAND that will push consumer SSDs past 10 GB/s. Analysts say that is still neither fast enough nor low enough in latency to replace Optane for large in-memory workloads, but it comes significantly closer to the roughly 17 GB/s offered by a single channel of low-end DDR4.
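That 17 GB/s figure is just the standard DDR4 channel arithmetic, transfer rate times bus width. A quick back-of-the-envelope check, assuming DDR4-2133, the slowest JEDEC speed grade:

```python
# Peak bandwidth of one DDR4 channel: transfers/s x bytes per transfer.
transfers_per_second = 2133e6   # DDR4-2133 runs at 2133 MT/s
bus_width_bytes = 64 // 8       # a 64-bit channel moves 8 bytes per transfer
bandwidth_gbs = transfers_per_second * bus_width_bytes / 1e9
print(round(bandwidth_gbs, 1))  # ~17.1 GB/s for a single channel
```

Faster DDR4 grades (e.g. 3200 MT/s) scale the same formula up, which is why 10 GB/s NAND closes the gap only with the low end of DRAM.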

If NAND is not the answer, then what? Well, there is already an Optane memory alternative on the horizon. It's called Compute Express Link (CXL), and Intel has already invested heavily in the technology. Introduced in 2019, CXL defines a cache-coherent interface for connecting CPUs, memory, accelerators, and other peripherals.

CXL 1.1, which will ship alongside Intel's much-anticipated Sapphire Rapids Xeon Scalable processors and AMD's fourth-generation Epyc Genoa and Bergamo processors later this year, will allow memory to be attached directly to the CPU over a PCIe 5.0 link.

Vendors including Samsung and Marvell are already planning memory expansion modules that slot into a PCIe slot, much like a GPU, and provide additional capacity for memory-intensive workloads.

Marvell's acquisition of Tanzanite this spring will allow the vendor to offer Optane-like scalable memory functionality as well.

Furthermore, since the memory is managed by a CXL controller on the expansion card, older and cheaper DDR4 modules, or even DDR3 modules, can be used in tandem with modern DDR5 DIMMs. In this respect, CXL-based memory tiers have an advantage: they do not depend on a specialized memory technology such as 3D XPoint.
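One way to picture such a mixed setup is a fastest-first allocator that spills to the CXL-attached tier once the native tier fills up. The tier names and capacities below are hypothetical, chosen only to illustrate the idea:

```python
# Toy allocator: prefer the native DDR5 tier, spill over to a cheaper
# CXL-attached DDR4 tier when it fills. Names and sizes are made up.
class Tier:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def try_alloc(self, size_gb):
        if self.used_gb + size_gb <= self.capacity_gb:
            self.used_gb += size_gb
            return True
        return False

def allocate(tiers, size_gb):
    # Walk the tiers fastest-first; return the name of the tier that
    # accepted the allocation.
    for tier in tiers:
        if tier.try_alloc(size_gb):
            return tier.name
    raise MemoryError("all tiers exhausted")

tiers = [Tier("ddr5", 64), Tier("cxl-ddr4", 256)]
assert allocate(tiers, 48) == "ddr5"      # fits in the fast native tier
assert allocate(tiers, 48) == "cxl-ddr4"  # spills to the CXL-attached tier
```

In practice the operating system or a CXL-aware runtime makes this placement decision, typically by exposing the expansion card as a separate NUMA node; the fallback logic is the same shape.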

VMware is considering software-defined memory that shares memory from one server to another – an effort that would be much more efficient if it used a standard like CXL.

However, emulating some aspects of Intel's Optane persistent memory may have to wait until the first CXL 2.0-compatible CPUs – which will add support for memory pooling and switching – come to market. It also remains to be seen how software will interact with CXL memory modules in tiered memory applications. ®
