X-FAB – First Foundry to Offer Micro Transfer Printing
Before we take a look at Intel’s Sapphire Rapids, a quick update on micro transfer printing (MTP) [see IFTLE 484, “Massively Parallel Pick & Place from the X-Men”].
Volker Herbig, VP of X-FAB noted, “By licensing X-Celeprint’s disruptive MTP technology, we are uniquely positioned in our ability to facilitate the incorporation of numerous different semiconductor technologies. X-FAB customers will be able to utilize a technology that no other foundry is offering, and existing X-Celeprint customers may now tap into capacity levels that will easily meet their future demands…As a result, we can assist customers looking to implement complete multifunctional subsystems at the wafer level, even when there are high degrees of complexity involved. Signal conditioning, power, RF, MEMS, and CMOS sensors, optoelectronic devices, optical filters, and countless other possibilities will all be covered.”
Intel Sapphire Rapids
If there was any doubt that chiplet technology was going mainstream, it ended with the recent confirmation, at Intel’s Architecture Day, that Sapphire Rapids will adopt a chiplet architecture (note – Intel sometimes calls chiplets “tiles”). As Intel Chief Architect Raja Koduri put it, “…this is a big deal for Intel and a big deal for the entire data center ecosystem.”
Sapphire Rapids is the codename for Intel’s next-generation Xeon CPU for the Eagle Stream server platform (due out in 2022), designed for data centers. It is based on Intel 7 (the rebranded 10nm Enhanced SuperFin process – see IFTLE 494, “Intel Accelerated”). Intel engineering claims the performance is equivalent to that of a single monolithic die (Figure 1).
Sapphire Rapids comprises four 400mm² compute chiplets, and the module will be 4446mm². The chiplet architecture allows Intel to scale Xeon beyond reticle limitations. The chiplets are connected using 10 Intel EMIB (embedded multi-die interconnect bridge) connections. The use of chiplets allows Intel to increase core count by simply adding more chiplets to the processor package.
Recall that AMD adopted chiplet technology nearly four years ago in its EPYC and, later, Ryzen processor families. According to Intel, each Sapphire Rapids chiplet will have full access to all resources such as cache, memory, and I/O functionality – not just what’s built into that individual chiplet – which Intel claims differentiates it from AMD’s EPYC technology. Sapphire Rapids will support PCIe Gen 5, CXL 1.1, DDR5, and on-die high-bandwidth memory (HBM). The bump pitch is 55µm. Intel claims Sapphire Rapids provides consistently low latency and high cross-section bandwidth across the entire module. Key building blocks are shown in Figure 3.
Sapphire Rapids will come in two variations: a standard memory solution and an HBM variation. The HBM2E memory will be in 8-high stacks and will come to market a few months after the initial launch. The HBM variation’s module will be 28% larger than the standard version, or 5700mm², and it requires 14 EMIB connections.
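The module areas quoted above are easy to sanity-check with back-of-the-envelope arithmetic. Here is a minimal sketch using only the figures from the text (chiplet count and areas, and the 28% growth for the HBM variant):

```python
# Sanity check of the Sapphire Rapids module areas quoted in the article.
# All input figures come from the article itself.
CHIPLET_MM2 = 400        # area of one compute chiplet (mm²)
NUM_CHIPLETS = 4
STD_MODULE_MM2 = 4446    # standard module area (mm²)
HBM_GROWTH = 0.28        # HBM variant is quoted as 28% larger

compute_silicon = NUM_CHIPLETS * CHIPLET_MM2
hbm_module = STD_MODULE_MM2 * (1 + HBM_GROWTH)

print(compute_silicon)    # 1600 mm² of compute silicon alone – far beyond
                          # what a single-reticle monolithic die could hold
print(round(hbm_module))  # 5691 mm², consistent with the ~5700 mm² quoted
```

The 1600mm² of compute silicon alone illustrates why chiplets are needed: no single die that large can be exposed in one reticle field, which is the “beyond reticle limitations” point above.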
Recent technology introduction delays at Intel indicate that Sapphire Rapids should reach the market in the spring of 2022. Intel sees Ice Lake and Sapphire Rapids co-existing in the market, with Ice Lake used for more general applications and Sapphire Rapids used for high-performance servers in data centers.
Micron and HBM Memory
With all the on-shoring talk going on these days, I have received quite a few questions as to the status of Micron and HBM memory.
While Micron has always been on the cutting edge of memory technologies, the company had been noticeably absent from stacked HBM memory products.
Previous Micron efforts focused on a stacked memory called the Hybrid Memory Cube (HMC), announced back in 2011 as a joint effort with Samsung and IBM.
HMC saw some usage in the market, in products like accelerators and supercomputers, but ultimately lost out to the more widespread HBM/HBM2, and Micron folded the project in 2018 in favor of GDDR6 and HBM. The first HBM memory chip was produced by SK Hynix in 2013, and the first use of HBM was in AMD’s Fiji GPUs in 2015.
In 2020, Micron finally introduced its first HBM DRAM for high-bandwidth devices such as GPUs and network processors – HBM2E. HBM memory is thus finally available from all of the remaining big three memory suppliers (Table 1).
Micron’s next-gen product, HBMnext (which has also been called HBM3), is scheduled for release in late 2022. It will be available in 4-high 8Gb and 8-high 16Gb stacks with data rates of 3.2 Gb/s. For comparison, the NVIDIA A100 accelerator, based on HBM2E technology, uses chips with a speed of 2.4 Gb/s, and the AMD Radeon VII uses HBM2 memory with a data transfer rate of 2 Gb/s.
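To put those per-pin data rates in perspective, per-stack bandwidth can be estimated as pin rate times interface width. The sketch below assumes the standard 1024-bit HBM interface defined by JEDEC (the bus width is not stated in the text above; the pin rates are):

```python
# Rough per-stack HBM bandwidth from per-pin data rates.
# Assumes the JEDEC-standard 1024-bit HBM interface per stack.
BUS_BITS = 1024

def stack_bandwidth_gbytes(pin_rate_gbps):
    """GB/s per stack = pin rate (Gb/s per pin) * bus width (bits) / 8 bits per byte."""
    return pin_rate_gbps * BUS_BITS / 8

print(stack_bandwidth_gbytes(3.2))  # 409.6 GB/s – HBMnext-class rate quoted above
print(stack_bandwidth_gbytes(2.4))  # 307.2 GB/s – A100's HBM2E rate
print(stack_bandwidth_gbytes(2.0))  # 256.0 GB/s – Radeon VII's HBM2 rate
```

This is why seemingly modest per-pin rate increases matter so much for HBM: the very wide interface multiplies every 0.1 Gb/s per pin into tens of GB/s per stack.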
For all the latest on Advanced Packaging stay linked to IFTLE…
Feature Photo: Sailesh Kottapalli, Intel Senior Fellow in the Design Engineering Group and chief architect, holds a Sapphire Rapids chip at the recent Intel Architecture Day. Photo Credit – Intel Corp.