Chiplet Verification

Chiplets have become a strategic asset for designers who are implementing them in all sorts of applications. Until now, however, chiplet verification has been largely overlooked. I discussed this phenomenon with Dave Kelf, CEO of Breker Verification Systems, an ESD Alliance member company and provider of verification solutions that leverage verification intent and behaviors to generate test content. What follows is a condensed version of our discussion.

Smith: How is verification changing to meet the requirements of chiplets?

Kelf: While functional verification of the individual components will not change, the use of chiplets in a package is different from considering a single die, driving new requirements for verification. I think this has significant, even dramatic, implications for the entire SoC verification process.

Smith: What do you mean by that?

Kelf: Chiplets are fascinating and bring many positive aspects, including a tremendous boost in flexibility: within one package, separate dies can come from different manufacturers and even be built in different process technologies. An Intel processor can be one chiplet, for example, and a specialized piece of IP from another company can be another. Each chiplet manufacturer is responsible for fabricating its own chiplet, so the final package contains varied devices from different foundries with different power and performance profiles.

While chiplets represent a huge leap forward in the industry, the major complexity is in the interfacing or connectivity between them. This can lead to chiplet verification problems.

When a single chip is put in a package, the chip's I/O needs to be connected to the pins of the package, an interconnect that introduces latency. Usually, the performance requirement when going off-chip is lower, so the chip-to-pin interconnect is a solved problem.

Constructing a system by connecting multiple chiplets is quite different from designing a single chip because designers are working with signals that would normally flow within a single SoC. They might have a high-performance network-on-chip or fabric within an SoC, and they need that performance given the complexity and number of blocks within the design.

Breaking the SoC into different chiplets means a signal now goes off one chiplet and onto another, much like a pin interconnect. That introduces potential delay and power issues within the system because signals must be driven from one chiplet to another. The latency of these connections is critical.

Before chiplet-based system designs, engineers didn’t have to deal with the challenges of high-performance, low-latency connections between individual chiplets.

In addition, delays from one chiplet to another drive the potential for timing hazards that can lead to increased data coherency issues across the entire system. This represents a complex verification challenge.

Smith: How is chiplet verification going to evolve to support this need?

Kelf: The first decision is how to handle chiplet-to-chiplet interfaces. The PCI Express (PCIe) interconnect standard is a universal standard for connecting components. CXL is a newer standard, built on top of PCIe, that focuses on connecting processors or compute units with memory or storage units; it uses PCIe as its basis and adds a coherency protocol.

Coherency is the process of ensuring consistent data in areas of the device that might hold multiple representations of that data. A perfect example is the classic cache coherency problem across multiple memory levels: a high-speed cache next to a processor, a lower-speed memory that might be shared between multiple processors, and off-chip storage. These three levels of the memory hierarchy must hold the same data at corresponding addresses. As the system runs, data pages are swapped between levels, and coherency can be lost because the same data is represented differently at different levels.

Add to that multiple processors sitting on the same bus, all reading and writing to these memories, plus I/O components and more specialized IP blocks also reading and writing to memory, sometimes bypassing the processor. The opportunity for coherency issues is magnified as the system becomes more complex. Maintaining coherency across a complex modern SoC is a huge problem, and it is handled by the processors and the fabric using various protocols, for example, Arm's ACE protocol.
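To make the hazard concrete, here is a minimal Python sketch of the stale-data problem Kelf describes. The model is hypothetical (a single write-back cache per agent, no snooping or invalidation protocol), but it shows how a value updated by one agent can remain invisible to another until a writeback occurs.

```python
# Minimal, hypothetical model of a coherency hazard: two agents with
# private write-back caches in front of one shared memory. Without an
# invalidation/snoop protocol, agent B can read stale data after agent A
# has already written a newer value.

class Agent:
    def __init__(self, name, shared_mem):
        self.name = name
        self.cache = {}            # private cache: addr -> value
        self.mem = shared_mem      # shared backing memory

    def write(self, addr, value):
        self.cache[addr] = value   # held locally, not yet visible to others

    def read(self, addr):
        # Cache hit returns the local copy; a miss falls through to shared memory.
        return self.cache.get(addr, self.mem.get(addr, 0))

    def writeback(self, addr):
        if addr in self.cache:
            self.mem[addr] = self.cache[addr]

shared = {0x1000: 0}
a, b = Agent("A", shared), Agent("B", shared)

a.write(0x1000, 42)                # A updates the location...
print("B reads:", b.read(0x1000))  # ...but B still sees 0 -> stale data
a.writeback(0x1000)
print("B reads after writeback:", b.read(0x1000))  # now 42
```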

Now look at chiplets and the delay in the interface from one chiplet to another. Handling coherency with the extra latency involved in these interfaces adds an extra dimension to the problem where coherency can be violated between these complex devices.

A new standard called Universal Chiplet Interconnect Express (UCIe) has emerged to aid with this. Companies looking at chiplets are working on the standard, leveraging CXL and PCIe and adding further protocols that allow for these coherent, low-latency, high-performance interfaces. UCIe will be an important standard for handling low latency and high performance while ensuring coherency across multiple chiplet blocks in the same package.

So how do we verify coherency across these multi-chiplet systems? A range of coherency-testing protocols has been developed that exercise the system in a manner designed to tease out coherency problems. A test system is built around the SoC that reads and writes all over the chip so that coherency is exercised thoroughly. An example of these public algorithms is the Dekker algorithm, which drives a specific range of reads and writes into various areas of the memory subsystems and fabrics.
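As a rough illustration of how a Dekker-style check works (this is a simplified sketch, not Breker's implementation or the full algorithm), the snippet below enumerates the legal interleavings of two requesters that each set their own flag and then read the other's. On a coherent, sequentially consistent memory, the outcome where both read zero is forbidden, so a testbench that ever observes it has found a coherency hazard.

```python
# Simplified sketch of a Dekker-style coherency check. Two requesters
# each set their own flag and then read the other's flag. On a coherent,
# sequentially consistent memory the outcome where BOTH reads return 0
# is forbidden, so observing it would indicate a hazard.

from itertools import combinations

def run(schedule):
    mem = {"flagA": 0, "flagB": 0}
    observed = {}
    for who, op, addr in schedule:
        if op == "write":
            mem[addr] = 1
        else:                               # read
            observed[who] = mem[addr]
    return observed

# Per-requester program order: write own flag, then read the other's flag.
prog_a = [("A", "write", "flagA"), ("A", "read", "flagB")]
prog_b = [("B", "write", "flagB"), ("B", "read", "flagA")]

def interleavings(p, q):
    """All global schedules that preserve each requester's program order."""
    n = len(p) + len(q)
    for slots in combinations(range(n), len(p)):
        p_ops, q_ops = iter(p), iter(q)
        yield [next(p_ops) if i in slots else next(q_ops) for i in range(n)]

forbidden = 0
for sched in interleavings(prog_a, prog_b):
    obs = run(sched)
    if obs["A"] == 0 and obs["B"] == 0:
        forbidden += 1

print("interleavings where both requesters read 0:", forbidden)  # expect 0
```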

On top of these public algorithms, there is a need for methods specific to chiplets and data flowing around a multi-chiplet system. New proprietary algorithms are being developed to perform load-store operations testing appropriate to a multi-chiplet system.

A specialized testbench, different from a regular functional verification testbench, is needed to test the “plumbing,” if you like, of a multi-chiplet system to ensure no data hazards or inconsistencies exist around the system.

Smith: It seems chiplet verification expands beyond just functional verification.

Kelf: Yes indeed. Testing the system infrastructure has become very important. Let’s consider performance and power testing. We see this need at the SoC level where a performance problem can lead to difficult bugs, and it is even more challenging with chiplet-based systems. Mechanisms that verify the performance of the whole system running multiple tests concurrently are required.

Let us consider the idea of three-dimensional coverage. The first dimension is regular combinatorial coverage, the functional coverage of an area at a single point in time, which checks that certain components have been exercised together. The second dimension is sequential coverage, which confirms that sequences of those combinatorial coverage elements over time have been covered correctly. The third dimension is concurrent coverage, which makes sure that several of these sequences can operate at the same time across the device.
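One way to picture those three dimensions, using a purely hypothetical coverage model rather than any particular tool's, is sketched below: combinatorial coverage records which events were hit together, sequential coverage records ordered chains of those combinations, and concurrent coverage records which chains were observed in flight at the same time.

```python
# Hypothetical three-dimensional coverage model (illustrative only):
#   1. combinatorial -- which event combinations occurred together
#   2. sequential    -- which ordered chains of combinations occurred
#   3. concurrent    -- which chains were observed overlapping in time

class Coverage3D:
    def __init__(self):
        self.combinatorial = set()   # frozensets of events seen in one cycle
        self.sequential = set()      # tuples of combinations seen in order
        self.concurrent = set()      # frozensets of sequences active at once

    def sample_cycle(self, events):
        """Dimension 1: events observed in the same cycle."""
        self.combinatorial.add(frozenset(events))

    def sample_sequence(self, chain):
        """Dimension 2: an ordered chain of event combinations."""
        self.sequential.add(tuple(frozenset(c) for c in chain))

    def sample_concurrent(self, active_chains):
        """Dimension 3: which named sequences were in flight simultaneously."""
        self.concurrent.add(frozenset(active_chains))

cov = Coverage3D()
cov.sample_cycle({"cpu0_read", "dma_write"})
cov.sample_sequence([{"cpu0_read"}, {"cpu0_write"}])
cov.sample_concurrent({"cpu0_mem_test", "dma_io_test"})
print(len(cov.combinatorial), len(cov.sequential), len(cov.concurrent))
```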

To test a complex system, especially if it includes multiple chiplets with fast and slow interfaces, verification engineers need to run many operations concurrently across the system. The design might have one processor communicating with memory across one interface running one set of tests while concurrently a second processor is communicating with an I/O. A third test could be another I/O performing a direct memory access function to another memory somewhere else on another chiplet.

All this data is flowing across the same set of interfaces, so concurrent testing can wring out the performance and potential power issues in these interfaces. It will reveal bottlenecks in the system; for example, one interface may be overused when multiple operations are running. That is hard to see unless a full concurrent execution view is running on the simulator or emulator.
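A rough sketch of that kind of concurrent bottleneck check is shown below; the link names, capacities, and traffic numbers are invented for illustration. Counting the transactions each concurrent test stream pushes across each chiplet-to-chiplet interface makes an over-used link stand out.

```python
# Hypothetical sketch: three concurrent test streams share two
# chiplet-to-chiplet links. Counting transactions per link over a test
# window exposes the over-used interface (a candidate bottleneck).

from collections import Counter

# Each stream: (name, link it uses, transactions issued per test window).
streams = [
    ("cpu0_to_mem", "ucie_link0", 700),
    ("cpu1_to_io",  "ucie_link1", 300),
    ("dma_to_mem",  "ucie_link0", 500),   # shares link0 with cpu0
]

link_capacity = {"ucie_link0": 1000, "ucie_link1": 1000}  # made-up budgets

usage = Counter()
for name, link, txns in streams:
    usage[link] += txns

for link, txns in usage.items():
    util = txns / link_capacity[link]
    flag = "BOTTLENECK" if util > 1.0 else "ok"
    print(f"{link}: {txns} txns, {util:.0%} of budget -> {flag}")
```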

Smith: Concurrency is a major factor. What else do verification tools need to consider with chiplets?

Kelf: Various other activities fall under the heading of verification but traditionally have not been part of the verification process. For example, security testing has become a common objective for many systems to ensure a device cannot be hacked for nefarious purposes. Safety testing for certain devices is also important. For a long time, the industry has discussed hardware-software testing early in the process, and this is now becoming mandatory for these new systems.

As an example, imagine two chiplets from different sources. Maybe one makes use of sensitive data, and security has been a key aspect of its development. Let's say the second chiplet can read and write to the first, and the protocol for accessing its secure areas has not been completely verified. In this new world of chiplets, it may be possible for the second chiplet to introduce vulnerabilities into the secure chiplet. This sounds fanciful, but the defense industry is investing millions in this exact issue today.
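A simple negative-test sketch of this scenario might look like the following; the address map and access-control policy are entirely made up. The idea is to drive accesses from the untrusted chiplet into the secure chiplet's protected window and check that every one is rejected.

```python
# Hypothetical negative-test sketch: accesses from an untrusted chiplet
# into a secure address window must be rejected. Any access that slips
# through is a security verification failure.

SECURE_RANGE = range(0x8000, 0x9000)          # made-up protected window

def access_allowed(requester, addr):
    """Stand-in for the secure chiplet's access-control check."""
    if addr in SECURE_RANGE:
        return requester == "trusted_chiplet"
    return True

violations = []
for addr in (0x8000, 0x8800, 0x8FFF):          # probes into the window
    if access_allowed("untrusted_chiplet", addr):
        violations.append(hex(addr))

assert not violations, f"untrusted chiplet reached secure addresses: {violations}"
print("all secure-window probes correctly rejected")
```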

Smith: Would you consider this to be a new category?

Kelf: Yes, definitely. Functional bugs are often caught at the block level, but performance and bottleneck issues often remain at the system level. These manifest themselves as a different class of bugs: infrastructure bugs.

Infrastructure verification represents a new category. This is SoC verification, and we are seeing it more at the companies we work with. System-level bugs along the lines we discussed that can’t be tested at the block level are becoming more commonplace. Chiplets and the UCIe interfaces between them make this situation even worse.

Smith: Are we going to need a standard for chiplets, a verification standard like UVM or Portable Stimulus?

Kelf: No. I think the Portable Stimulus Standard (PSS) from Accellera handles these needs at the system level. It allows a graph model to be defined that describes the interactions in the system along with its application scenarios, which can be used to generate concurrent tests. UVM is fine at the block level but is difficult to scale to the system level. PSS also handles the necessary reuse at the system level, enabling the creation of configurable SystemVIP for many of the scenarios we have been discussing.
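PSS itself is a declarative language, so the following is only a conceptual stand-in written in Python, not PSS syntax: a small scenario graph whose actions declare resource claims and dependencies, from which randomized concurrent schedules can be generated.

```python
# Conceptual stand-in for a PSS-style scenario graph (not PSS syntax):
# actions declare the resources they claim; a scheduler picks actions
# whose dependencies are met and runs non-conflicting ones concurrently.

import random

actions = {
    # name: (resources claimed, prerequisite actions)
    "init_mem":     ({"mem0"},         set()),
    "cpu0_writes":  ({"cpu0", "mem0"}, {"init_mem"}),
    "dma_reads":    ({"dma", "mem0"},  {"cpu0_writes"}),
    "cpu1_io_test": ({"cpu1", "io0"},  set()),
}

def schedule(actions, seed=0):
    """Greedy randomized schedule: each step runs a set of ready,
    resource-disjoint actions concurrently."""
    rng = random.Random(seed)
    done, steps = set(), []
    while len(done) < len(actions):
        ready = [a for a, (_, deps) in actions.items()
                 if a not in done and deps <= done]
        rng.shuffle(ready)
        step, claimed = [], set()
        for a in ready:
            res, _ = actions[a]
            if res.isdisjoint(claimed):   # no resource conflict -> run together
                step.append(a)
                claimed |= res
        steps.append(step)
        done.update(step)
    return steps

for i, step in enumerate(schedule(actions, seed=1)):
    print(f"step {i}: run concurrently -> {step}")
```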

Smith: Are you seeing a lot of companies adopting a chiplet design strategy?

Kelf: Absolutely. Everyone producing any kind of semiconductor system, especially the processor companies, is excited about chiplets. Most significant semiconductor companies are involved in the UCIe standards body. They all recognize that chiplets are key.

Smith: Are chiplets being implemented across all market spaces and applications?

Kelf: I think so, although some markets will take longer to latch on to this than others. Markets that use the largest semiconductors are going to be the first to benefit, including the server market, communications equipment and mobile phone base stations, large automotive chips, and AI chips. There's no limit to where chiplets will be able to play.

Bob Smith

 Robert (Bob) Smith is Executive Director of the ESD Alliance, a SEMI Strategic Association Partner.…
