
ATE Industry Maneuvers Around ‘Perfect Storm’ of Issues at 90 nm and Below
Sally Cole Johnson, Contributing Editor — Semiconductor International, 6/12/2008 8:32:00 AM

As outlined in the 2007 edition of the International Technology Roadmap for Semiconductors (ITRS), the most immediate technology challenge the automatic test equipment (ATE) industry faces is “test for yield learning” — essential for fab process and device learning in the sub-optical space of the 90, 65 and 45 nm nodes and beyond, where feature sizes fall below the wavelength of light.

Discrete challenges are combining to create a “perfect storm” that ATE vendors must maneuver around, even as they take on a myriad of design sensitivity issues occurring at 90 nm and below. Recent advances in assembly and packaging technology, combined with the difficulty of optimizing the same wafer fabrication process for different core semiconductor technologies, are helping system-in-package (SiP) gain ground on system-on-a-chip (SoC). But wafer fab process improvements and design and design-for-test (DFT) needs could push SoC back to the forefront, or we may see more hybrids in the future. One thing is certain: integration is a trend that shows no sign of slowing, and it is introducing more complexity into the equation.

Designs below the 90 nm node are extremely sensitive to fabrication equipment variation, which is giving rise to new defect mechanisms and fault models for SoC and SiP. “We’re moving from the world of random defects from particulates to systematic defects caused by the sensitivity between the design and the process,” said Colin Ritchie, product marketing director for Verigy’s (Cupertino, Calif.) Inovys yield product line. “In addition to addressing systematic problems, our customers are seeing more parametric variability, which can show up in a number of different areas. Transistor performance is degraded because of leakage issues, and sensitivity to small variations in voltage, power, temperature or any other operating environmental effect can cause the design to perhaps function, but not to specification. It’s particularly challenging when you look at very advanced high-power devices in the communications, computing, graphics or other markets. These problems require new solutions. This is where the ATE industry is being challenged in a new domain.”

All of the top semiconductor manufacturers are finding that the contribution of yield loss from design-induced problems or design sensitivities grows disproportionately as they move to sub-90 nm designs. “They may have followed more than 300 design rules and done all the design simulation that their EDA tools enable them to do, but they’re still finding a significant contribution of yield loss due to sensitivity,” Ritchie elaborated. To put it into perspective, a typical 90 nm fab today produces on the order of 30,000 wafers per month at roughly $5000 per wafer. Each percentage point of yield gained or lost is therefore worth roughly $1.5M in revenue each month. This makes accelerating the detection and diagnosis of design-induced failures a priority for achieving time-to-market for these devices.
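To make that arithmetic explicit, here is a quick back-of-the-envelope check using the figures quoted above (the article's illustrative estimates, not exact fab economics):

```python
# Back-of-the-envelope yield economics for a typical 90 nm fab,
# using the illustrative figures quoted above.
wafers_per_month = 30_000
revenue_per_wafer = 5_000                                  # USD
monthly_revenue = wafers_per_month * revenue_per_wafer     # $150M/month

yield_delta = 0.01                                         # one percentage point
revenue_impact = monthly_revenue * yield_delta
print(f"Revenue at stake per yield point: ${revenue_impact / 1e6:.1f}M/month")
# -> Revenue at stake per yield point: $1.5M/month
```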

“There are several elements of complexity here: the sheer number of transistors being embedded into a single device, and the interoperability between one microprocessor core and the next,” Ritchie said. “While the number of transistors has increased exponentially, what hasn’t changed is the external I/O — 300-500 pins is mainstream now.”

What do you do when you have all this complexity, more transistors, less access, less debug time, and pressure to get the job done faster? One answer is to get a better view inside the device, beyond the I/O, using DFT. “Essentially, you insert scan elements into a design to significantly increase the stimulus and observation of that design within the die — rather than from outside,” Ritchie said.
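To illustrate the scan concept, here is a minimal, hypothetical model (a sketch of the general technique, not Verigy's implementation): scan stitches the design's internal flip-flops into one long shift register, so the tester can serially load an internal state, pulse the functional clock, and serially unload the response.

```python
# Minimal, hypothetical model of a scan chain: internal flip-flops are
# stitched into a shift register so the tester can set and read internal
# state through a single scan-in/scan-out pin pair.
class ScanChain:
    def __init__(self, length):
        self.flops = [0] * length          # internal flip-flop states

    def shift_in(self, pattern):
        """Serially load a test stimulus into the internal flops."""
        for bit in pattern:
            self.flops = [bit] + self.flops[:-1]

    def capture(self, combinational_logic):
        """One functional clock: flops capture the logic's response."""
        self.flops = combinational_logic(self.flops)

    def shift_out(self):
        """Serially unload the captured response for comparison on the ATE."""
        return list(self.flops)

# Example with a made-up logic block that inverts every flop value.
chain = ScanChain(8)
chain.shift_in([1, 0, 1, 1, 0, 0, 1, 0])
chain.capture(lambda bits: [b ^ 1 for b in bits])
response = chain.shift_out()   # compared against expected values to flag faults
```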

Why is DFT so important? “What we’re finding at the advanced technology nodes is that the DFT, or scan, structures are actually detecting the design-to-process sensitivities first,” Ritchie explained. “In these advanced devices, the design typically goes through a number of layout phases, in which the mission-critical, high-performance circuitry is started first because it requires very tight control over the design and layout. Then it goes through a number of stages of laying out the memory and the analog, and then somewhere, usually last on the list, they press ‘DFT’ in the EDA tools and the structures get laid out automatically. Because it’s laid out last, it’s not uncommon for it to be non-optimized, running through multiple metal layers, longer wires and more vias. This results in a sensitivity between design and process, and it’s natural that we’re seeing it first with DFT, in the least optimized part of the design.”

1. Verigy’s Inovys Silicon Debug Solution for the V93000, showing a flat screen with compact test head.
The first time any semiconductor manufacturer knows whether a device is good or bad is when it is put on a piece of ATE and undergoes its first wafer test. “At that stage, they’ve tested the part and identified it as good or bad, and if there’s an electrical fault, it will be identified there for the first time,” Ritchie said. “It’s very typical that our customers will take a couple of wafers, assemble a part and go through the retest process, log the failure data and fault simulation data, maybe move back into the design to do some layout extraction, and then get into the world of physical failure analysis. What the industry has been asked to do is to enable analysis to take place at the point where the fault is observed.”

One of the first products to address this is Verigy’s Inovys Silicon Debug toolset, which is designed to enable real-time analysis at the point where the fault is observed, while the device is still being probed on the ATE (Fig. 1). Analysis can also take place on a packaged part, although Verigy recommends working at the wafer level because the sooner you find the problem, the faster you can fix it. Equally significant, the toolset is able to cut the time required for fault detection and diagnosis from two to three weeks to a matter of hours by efficiently mapping electrical failures to physical defects on SoC devices.

The toolset also enables effective logic bitmaps (Fig. 2). In the upper left of Figure 2 is a wafer map on which the performance metrics of each die on a particular wafer can be logged and mapped. Moving clockwise, where there are issues on a die, the performance data can be displayed in a 2-D scan map and then translated into a physical view of the actual die. The next step is drawing the net view, which represents the registers and flip-flops in the design and the metal connections (vias) between them. This allows mapping of the wafer to physical die locations, so you can move quickly from an electrical failure to a logical fault to a physical defect on the die. The details of exactly how all this works are likely a matter of intellectual property (IP).
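As a rough sketch of the bookkeeping such a flow implies (hypothetical names and data; the actual mapping is, as noted, proprietary), the chain from a failing scan cell to die coordinates might look like this:

```python
# Hypothetical sketch: mapping an electrical failure (a failing scan cell)
# to a logical fault (a design net) and then to a physical location on the
# die. The lookup tables are illustrative stand-ins for the design netlist
# and layout-extraction databases a real toolset would query.
scan_cell_to_net = {
    1042: "core0/alu/result_reg[7]",              # scan position -> observed net
}
net_to_xy_um = {
    "core0/alu/result_reg[7]": (812.4, 1033.7),   # net -> placement (µm)
}

def locate_defect(failing_cell: int):
    """Electrical failure -> logical fault -> physical defect candidate."""
    net = scan_cell_to_net[failing_cell]   # logical fault site
    return net, net_to_xy_um[net]          # physical coordinates on the die

net, (x, y) = locate_defect(1042)
print(f"Failing net {net} maps to die location ({x}, {y}) µm")
```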
