Summary: To ensure silicon success, voltage drop analysis tools - like all EDA tools - must, at their most detailed, give results that precisely predict final silicon behavior. Yet for much of the design process the chip is in an incomplete state, and the best we can hope for are predictors whose variation and errors decrease monotonically as more data becomes available. This article describes how, as the power distribution networks become more defined and chip functionality increases, both the quality of results and the information available from voltage drop verification improve.
First, a word about quality of results for voltage drop analysis tools specifically, though the same argument can be applied more generally to all EDA tools.
The extracted power net, containing R and C values of parasitics in the power distribution networks, usually shows close correlation across various extraction tools. This is hardly surprising. As we described in our post on physical data, resistance extraction is performed by fracturing the net segment into smaller, manageable geometries, and then “counting the squares” in each geometry. Applying the law of large numbers here, the intuitive conclusion is that accuracy can be increased by counting more, smaller squares.
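The "counting the squares" idea above can be sketched in a few lines. This is an illustrative toy, not any extractor's actual algorithm: for a straight rectangular segment, the resistance is the layer's sheet resistance times the number of squares along the current path, R = Rs × (L / W). All values below are assumed for the example.

```python
# Toy sketch of square-counting resistance extraction (values assumed).
# A straight segment of width W and length L spans L/W "squares"; each
# square of a given metal layer contributes one sheet resistance, Rs.

def segment_resistance(length_um: float, width_um: float,
                       sheet_res_ohms_per_sq: float) -> float:
    """Resistance of a straight rectangular segment: R = Rs * (L / W)."""
    squares = length_um / width_um   # squares along the current path
    return sheet_res_ohms_per_sq * squares

# A 100 um x 4 um strap on a layer with Rs = 0.05 ohm/sq:
# 25 squares * 0.05 ohm/sq = 1.25 ohms.
r = segment_resistance(length_um=100.0, width_um=4.0,
                      sheet_res_ohms_per_sq=0.05)
print(r)  # 1.25
```

Fracturing into finer geometries matters around bends, vias, and width changes; for a uniform strap like this one, any granularity gives the same answer, which is why different extractors correlate so closely.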
For capacitance we have to consider the geometries involved; metal for power distribution is usually fairly wide and not at all like the tall, skinny metal laid down for signal nets. As such, fringing and sidewall capacitance contribute less to the overall capacitance value of the power net geometry. Also, power nets sit at a more or less constant voltage (there is an AC component superimposed on the DC value, but it's fairly small), and so the current bled from or into the power net by adjacent signal nets (calculated as I = C·dV/dt) is very small, and ignoring it has negligible effect on accuracy.
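To put a rough number on that I = C·dV/dt argument, here is a back-of-the-envelope estimate. The capacitance, swing, and edge rate below are assumptions chosen to be plausible, not figures from the article:

```python
# Illustrative estimate (all values assumed): the current a switching
# signal net injects into an adjacent power net through coupling
# capacitance, I = C * dV/dt.

coupling_cap_f = 2e-15    # ~2 fF of sidewall coupling to the power strap
swing_v = 0.9             # signal swings rail to rail, 0.9 V
transition_s = 50e-12     # 50 ps edge

i_injected_a = coupling_cap_f * (swing_v / transition_s)
print(i_injected_a)       # 3.6e-05, i.e. 36 uA for the duration of one edge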
So the accuracy of voltage drop analysis, given that extractor A and extractor B are going to come up with very similar extracted RC values, comes down to the quality of the currents modeled. Different tools, at different stages in the design flow, have varying data available to them - some “better”  than others.
These currents typically fall into the following groups:
- Static currents
- Constant-current sources
- Piecewise-constant-current sources
- Vectorless methods for deriving instantaneous currents
- Two-step, or decoupled, analysis
- Direct-coupled analysis
Each one of these numbered items is the subject of an AllAboutEDA post - if the numbered bullets are hyperlinks, go ahead and click on them to read what we have to say about the current derivation, and the type of voltage drop analysis being performed. In general, the quality of results and the rigor of the analysis performed goes from least to best down the list, but there’s a healthy overlap, especially for the static currents.
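A toy example makes the point that, with the extracted R fixed, the answer depends entirely on which current model you feed in. The one-resistor network and the current waveform below are invented for illustration: a static (average) current and the peak interval of a piecewise-constant waveform give noticeably different drop predictions from the very same network.

```python
# Toy one-resistor PDN (values assumed). Same extracted resistance,
# two different current models, two different IR drop answers.

pdn_res_ohms = 0.5                             # lumped strap resistance
waveform_a = [0.004, 0.020, 0.008, 0.012]      # per-interval currents, amps

static_current = sum(waveform_a) / len(waveform_a)   # 11 mA average
peak_current = max(waveform_a)                       # 20 mA worst interval

print(pdn_res_ohms * static_current)  # 0.0055 -> 5.5 mV static drop
print(pdn_res_ohms * peak_current)    # 0.01   -> 10 mV worst-interval drop
```

The static answer is not wrong, it just answers a different (and less rigorous) question, which is the overlap mentioned above.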
What makes a better current? I would posit that it's the one with the smaller difference between the modeled current itself and the real design, on silicon, configured in exactly the same manner. Now, comparison to silicon is notoriously difficult, even for well-established and stable processes, so we'll permit one level of abstraction and say it's the smaller difference between the modeled current and SPICE configured in exactly the same manner. I can think of nothing better - it's not a perfect situation, since SPICE objects to circuits with power nets (especially circuits with large MOS counts) - see the limitations section in this article - and FastSPICE can be finicky for reasons of its own. If you can recommend a better reference, or come up with a closed-form model, I'd be one of many people who would love to hear about it.
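One concrete way to score "smaller difference against SPICE" is a relative RMS error between the tool's current waveform and the SPICE reference, sampled at the same time points. This metric is my own illustrative choice, not a standard from the article, and the waveforms below are made-up numbers:

```python
# Illustrative quality metric (assumed, not from the article): relative
# RMS error of a modeled current waveform against a SPICE reference
# sampled at the same time points. 0.0 means a perfect match.

import math

def relative_rms_error(modeled, reference):
    """||modeled - reference|| / ||reference||, over paired samples."""
    num = math.sqrt(sum((m - r) ** 2 for m, r in zip(modeled, reference)))
    den = math.sqrt(sum(r ** 2 for r in reference))
    return num / den

spice_ma   = [1.0, 4.0, 2.5, 1.2]   # hypothetical SPICE reference currents
modeled_ma = [1.1, 3.6, 2.7, 1.0]   # hypothetical tool-derived currents

print(relative_rms_error(modeled_ma, spice_ma))  # roughly 0.10, i.e. ~10%
```

Any similar norm would do; the point is to have one number that should shrink monotonically as the design data firms up, matching the summary's criterion for a good predictor.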