Voltage drop analysis and verification - the two-step or decoupled approach

Summary: The vast majority of voltage drop analysis tools employ a two-step, or decoupled, method. In this approach, currents are acquired with the power net replaced by a constant voltage source. While this makes the problem much more tractable, it also introduces error and uncertainty into the analysis. This post characterizes the approach and sheds some light on its limitations.

In an earlier article we discussed how the precision and quality of voltage drop analysis is largely determined by the nature and quality of the currents applied to the extracted power net. Here we take this further by examining the quality of results obtained with the two common methods of dynamic simulation.

The direct-coupled approach simulates the entire circuit, with the active devices back-annotated with extracted data from both signal and power nets. We’ll have more information on this technique in our next article.

The focus of this post, though, is the two-step approach, in which there are (perhaps not surprisingly) two steps. In the first, the power net is replaced by an ideal voltage source and the circuit, back-annotated with extracted signal net data, is simulated. During this simulation the time-varying currents flowing from the VDD and/or GND nets are sampled and modeled as piecewise-constant current sources. These currents may be averaged in order to reduce the volume of data and the number of intervals in the generated current models (which can speed up the second-phase analysis). Finally, in the second step, these current sources are applied to the extracted power net and all node voltages are solved for.
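To make that data path concrete, here is a minimal Python sketch of the two steps: sampled currents from the first phase are averaged into piecewise-constant sources, which are then applied to the power net and the nodal equations solved. Everything in it, the waveform, the two-node grid, and the component values, is a hypothetical toy; a real tool operates on millions of nodes with sparse solvers rather than a dense 2x2 solve.

```python
import numpy as np

# Step-1 output: supply current of a small block of gates, sampled while the
# power net was replaced by an ideal (zero-impedance) VDD source.  Toy values.
t = np.linspace(0.0, 1e-9, 1001)                          # 1 ns window
i_gate = 20e-3 * np.exp(-((t - 0.3e-9) / 0.05e-9) ** 2)   # switching pulse [A]

# Average the samples into piecewise-constant intervals; fewer intervals means
# less data and a faster second-phase solve, at the cost of time resolution.
n_int = 10
edges = np.linspace(t[0], t[-1], n_int + 1)
i_pwc = [i_gate[(t >= lo) & (t < hi)].mean()
         for lo, hi in zip(edges[:-1], edges[1:])]

# Step 2: apply each interval's current to the extracted power net and solve
# the nodal equations G @ v = rhs.  The "net" here is a toy two-node chain:
# ideal pad --r_pad-- n1 --r1-- n2, with the block drawing current at n2.
vdd, r_pad, r1 = 1.0, 0.05, 0.20                          # hypothetical values
G = np.array([[1/r_pad + 1/r1, -1/r1],
              [-1/r1,           1/r1]])
for i_load in i_pwc:
    rhs = np.array([vdd / r_pad, -i_load])
    v1, v2 = np.linalg.solve(G, rhs)
    print(f"load {i_load*1e3:6.3f} mA -> drop at n2: {(vdd - v2)*1e3:6.3f} mV")
```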

The second phase may analyze both VDD and GND nets together in the same run, as shown in this figure:

Figure 1

Alternatively, some tools may require each power net to be evaluated in its own separate run (this may be done “under the hood”), as shown here:

Figure 2

Pros:
By separating current acquisition from power net analysis, tools that use this technique make the problem more tractable. With power nets containing tens to hundreds of millions of resistors [1], anything that can be done to divide and conquer, making the task more manageable, has to be tried. That is especially true because power net and voltage drop analysis is generally one of the last physical verification tasks to be performed, once LVS/DRC and final extraction are complete and schedule compression is at its worst.

However, being able to perform analysis doesn’t mean much if the analysis is flawed. From a purely pragmatic point of view (and those of you who know me also know that pragmatism is one of my characteristics, albeit occasionally served with a healthy side order of perfectionism), two-step analysis can give some relative guidance and help identify potential weak spots in the design. It’s up to the user, though, to establish their own comfort level with the results; false positives and false negatives abound. To help users decide where to draw the line, some of the compromises and limitations of this two-step approach are discussed below.

Cons:

  1. The accuracy of the currents obtained during the first phase of this approach suffers from a particular problem. Consider figure 1 above; during current acquisition (the middle of the three diagrams in the graphic), power nets are replaced by constant voltage sources. In this configuration, the NAND and NOR gates in figure 1 get their current directly from the zero-impedance voltage sources, at full rail voltage (VDD or GND). This is equivalent to all of the NAND or NOR gate’s VDD current coming directly from the package-level power pins connected to the PCB. But where, in the real model of the circuit shown in the left-most diagram of the graphic, do the gates get their currents? Something like 5% of a gate’s VDD current will come from the power pin; the great majority (in excess of 85% at today’s process nodes) comes from local decoupling capacitors and from charge stored in the wells of nearby MOS devices. Indeed, the NOR gate can be functionally inactive and yet remain electrically active, providing current through the lower-impedance path from its PMOS devices, up through the power net, and into the NAND gate’s PMOS. Replacing the power net with a constant-voltage source eliminates this, the largest source of current, from the analysis: with every supply connection held at an ideal fixed voltage, these local charge-sharing paths simply are not there to be simulated. (A rough impedance comparison is sketched after this list.)
  2. There’s another characteristic lost in this two-step approach, and I want to mention it here. I’ve heard it referred to as “voltage feedback”, but you can think of it as the negative feedback between the VDD net and the devices connected to it. For simplicity, consider an inverter connected to a VDD power net. When the PMOS draws current from the power net, the local VDD dips slightly as the PMOS charges its output, and that dip in turn reduces the current drawn from VDD, which pushes the local VDD back up again. In effect there’s a negative-feedback relationship between voltage and current at the VDD pin/MOS source interface. With a constant voltage source providing (infinite!) current at a fixed voltage in step 1 of the two-step approach, this effect cannot be modeled. (A fixed-point sketch of this feedback follows the list.)
  3. Because the currents are acquired by simulating or timing the circuit under a constant voltage, there is no opportunity to analyze voltage drop-induced performance variation. With nano-scale geometries making designs increasingly sensitive, such analysis is essential if silicon behaviour is to be predicted precisely. To me, not being able to perform this analysis is the third strike. And out. (A simple delay-sensitivity sketch follows the list.)
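On point 1, a back-of-the-envelope impedance comparison shows why so little of a switching gate’s current comes straight from the pins. The inductance, resistance, and capacitance values below are assumptions chosen only to illustrate the shape of the argument, not figures from any particular package or process.

```python
import numpy as np

# Illustrative (assumed) values: a couple of nH of package/bond inductance plus
# some grid resistance on the pin path, versus a nearby decoupling capacitor.
f_edge  = 2e9                  # frequency content of a switching edge [Hz]
w       = 2 * np.pi * f_edge
L_pkg   = 2e-9                 # package + bond inductance [H]
R_grid  = 0.5                  # grid resistance back to the pad [ohm]
C_decap = 100e-12              # local decoupling capacitance [F]

z_pin   = abs(R_grid + 1j * w * L_pkg)   # impedance of the pin/package path
z_decap = abs(1 / (1j * w * C_decap))    # impedance of the local decap

print(f"pin/package path: {z_pin:7.2f} ohm")
print(f"local decap:      {z_decap:7.2f} ohm")
# With these numbers the decap path is well over an order of magnitude lower
# impedance at edge frequencies, so it supplies most of the charge for the
# switching event; the pins mostly recharge the decaps between events.
```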
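On point 2, the voltage feedback can be illustrated with a simple fixed-point iteration on a single gate: the current drawn depends on the supply the gate actually sees, and the supply it sees depends on the current flowing through the grid resistance. The linearized current model and every number in it are assumptions, there only to show the direction of the effect.

```python
# Toy negative-feedback loop between supply droop and drawn current.
# All values and the linearized device model below are assumptions.
vdd_nom = 1.0      # nominal rail [V]
r_grid  = 1.0      # effective grid resistance seen by the gate [ohm]
i_full  = 0.05     # current the gate would draw at the full rail [A]
k_sens  = 2.0      # assumed sensitivity of current to local supply [1/V]

def gate_current(v_local):
    """Hypothetical linearized device model: less local supply, less current."""
    return i_full * max(0.0, 1.0 - k_sens * (vdd_nom - v_local))

# Two-step assumption: current is captured at the full rail, droop computed after.
drop_decoupled = gate_current(vdd_nom) * r_grid

# Coupled view: iterate current and droop until they agree (a fixed point).
v = vdd_nom
for _ in range(50):
    v = vdd_nom - gate_current(v) * r_grid
drop_coupled = vdd_nom - v

print(f"droop with constant-voltage currents: {drop_decoupled*1e3:.1f} mV")
print(f"droop with voltage feedback:          {drop_coupled*1e3:.1f} mV")
# The feedback pulls the droop back toward a smaller value - exactly the
# interaction the decoupled first step cannot see.
```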
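And on point 3, one common way to reason about droop-induced timing shift is an alpha-power-law style delay model, in which gate delay grows quickly as the effective supply falls toward the threshold voltage. The model form is textbook; the coefficients here are illustrative assumptions rather than fitted values.

```python
# Alpha-power-law style delay estimate: delay ~ VDD / (VDD - Vth)**alpha.
# The coefficients below are illustrative assumptions, not fitted to a process.
vth, alpha = 0.35, 1.3

def delay(v):
    return v / (v - vth) ** alpha

def rel_delay(vdd_eff, vdd_ref=1.0):
    """Gate delay at the drooped supply, normalized to the nominal-rail delay."""
    return delay(vdd_eff) / delay(vdd_ref)

for droop_mv in (0, 25, 50, 100):
    vdd_eff = 1.0 - droop_mv / 1000.0
    print(f"{droop_mv:3d} mV droop -> delay x{rel_delay(vdd_eff):.3f}")
# Tens of millivolts of droop already shift gate delay by several percent,
# which is the voltage/timing coupling a constant-voltage first step discards.
```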

The vast majority of voltage drop analysis tools employ this two-step approach, so if you are using one, your analysis is likely subject to similar (if not identical) issues, and you need to be aware of them and take them into consideration during final physical verification. I’m not saying these tools are bad. Far from it - this is the state of the art of voltage drop analysis today. On further detailed consideration of your tools’ capabilities, you may decide to move away from a demand for absolute accuracy and precision (which the two-step approach cannot provide) and instead treat the results as “relatively accurate”. Fix the biggest violations first, and many of the sympathetic errors will shrink or disappear. But don’t worry about the last tenth of a millivolt. Be as conservative as you feel you need to be - but there’s no point cutting with a scalpel when you’re measuring with a yardstick marked off in inches.

Notes
1  Or more; I have worked with an extracted GND net that contained around a billion - yes, 10 to the 9th power - resistors. Once extraction was complete (itself a week or so of effort, on 6 or 8 machines), producing the DSPF file took over 8 days, on a machine with frighteningly large amounts of physical memory. Analysis was, to say the least, a computational challenge.
