Why can’t EDA verification tools, particularly circuit simulation and parasitic extraction/reduction tools, deliver precise results at the finest level of resolution supported by the computer architecture? For today’s machines that would mean full double precision supporting 15 significant digits. And yet many products only give credence to the first few. What’s up with that?
If we hold the die size constant (for cost reasons) and walk down the process nodes (130nm, 90nm, 65nm, 45nm, etc.), then to first order the number of MOS devices integrated grows with the inverse square of the feature size. The number of extracted parasitics, all things being equal, grows in the same squared relationship. However, all things aren’t equal. In order to increase the precision of post-extraction analysis, designers are performing extraction at finer and finer resolution, meaning finer-grained geometry fracturing. Additionally, power net extraction is now a mandatory step, and it won’t be long before substrate extraction (it will depend at first on the choice of epi or non-epi bulk material) is required in order to verify correct isolation between analog and digital stages. These refinements are causing an explosion in the number of extracted parasitics present in the final netlist, and many traditional design tools are unable to cope. In fact, some new design tools are also unable to cope: I know of one current (recently released) SPICE-like simulator that permits only a single resistor and grounded capacitor pair per MOS device. That’s about as much use as a chocolate teapot. The only alternative available to users of these tools is to apply aggressive parasitic reduction. You may then be able to read the design into the tool, but you’ll miss subtle parasitic-induced effects. Indeed, you may see, as I have recently, results accurate only to the second significant figure. That just isn’t good enough.
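To make that first-order arithmetic concrete, here’s a back-of-envelope sketch in Python. The 10-million-device design at 130nm is purely an assumed starting point; the squared scaling is the point.

```python
# Back-of-envelope scaling: hold die area constant and shrink the node.
# To first order, the device count (and hence the raw parasitic count)
# grows with the inverse square of the feature size.
nodes_nm = [130, 90, 65, 45]
base_devices = 10e6  # assumed 10M-device design at 130nm, for illustration only

for node in nodes_nm:
    scale = (nodes_nm[0] / node) ** 2
    print(f"{node:>3}nm: ~{scale:4.1f}x -> ~{base_devices * scale / 1e6:5.1f}M devices")
```

And that’s before the finer fracturing and the additional extracted nets multiply the parasitic count per device.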
Implementation limitations (efficiency, etc.)
EDA tools are built on a mixture of algorithms and heuristics. The one thing these have in common is that they’re implemented by software engineers who are, like the rest of us, human. Sometimes they make mistakes, or pick sub-optimal data structures or algorithms: the results may be correct, but inefficient for a certain class of circuits. One example is the choice of graph traversal algorithm, a widely used component of many EDA tools, where the common choice is between breadth-first search (BFS) and depth-first search (DFS). Both will give the “correct” answer, but which is more efficient depends on the graph structure in question. Pick one, and it may work great (as measured by speed and memory efficiency) for many, even most, circuits. But not for all.
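Here’s a minimal sketch of that choice in Python, on a toy graph rather than any particular tool’s netlist representation. Both traversals visit the same set of nodes; the difference is in what they hold in memory along the way. BFS queues entire frontiers, which hurts on wide, shallow graphs, while DFS stacks entire paths, which hurts on long chains such as RC ladders.

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first: peak memory ~ widest frontier of the graph."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return order

def dfs(graph, start):
    """Depth-first: peak memory ~ longest path from the start node."""
    seen, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        stack.extend(graph[node])
    return order

# Toy adjacency-list graph: both traversals are "correct",
# but visit order and peak queue/stack size differ with topology.
g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs(g, "a"))  # ['a', 'b', 'c', 'd']
print(dfs(g, "a"))  # ['a', 'c', 'd', 'b']
```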
Heuristics, or “educated guesses”, can frequently give a fast, near-optimal solution to a computationally intractable problem. The reasoning here is “good enough and fast is better than perfect and never.” And sound reasoning it is too. Placement and routing are areas where increasingly complex heuristics have been developed over the years. But again, they’re rarely universally applicable, and presenting them with poorly conditioned data can result in slow execution times and poor results.
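Real placement and routing heuristics are far too involved to reproduce here, so as a stand-in, here’s the classic one: a greedy nearest-neighbour tour for the (intractable) travelling salesman problem. This is my own toy illustration, not code from any shipping tool.

```python
import math

def nearest_neighbour_tour(points):
    """Greedy O(n^2) tour: fast and usually near-optimal, but an
    adversarial (poorly conditioned) layout can make it arbitrarily bad."""
    unvisited = set(range(1, len(points)))
    tour = [0]  # start arbitrarily at the first point
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

pts = [(0, 0), (0, 1), (5, 0), (1, 1), (4, 1)]
print(nearest_neighbour_tour(pts))  # a decent tour, found instantly
```

An exact answer means examining factorially many tours; the heuristic answers instantly, and for most inputs that trade is an excellent one.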
In situations where efficiency suffers, users are forced to make trade-offs. As one example, during circuit simulation, users will often respond to slow run-times by increasing the time-step. This loss of timing resolution causes fast transients to be missed, and numerical accuracy can be wildly thrown off as well. Thankfully, implementation limitations are generally minimized over time: EDA companies spend a lot, post-release, on code optimization in a constant effort to improve performance. In my long experience in EDA, if it takes a week to write some new code, it’ll take a week to debug it to ensure it’s functionally correct, and a further two weeks to optimize it for real-life data. But in the early stages of the product lifecycle, implementation limitations abound. It’s the nature of the beast.
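Here’s a minimal sketch of that time-step trade-off: a hand-rolled, fixed-step forward-Euler integration of an RC node driven by a 1ns glitch. The component values are assumptions for illustration, and no production simulator is this naive, but the failure mode is the same. Widen the step past the pulse width and the simulator never even samples the glitch.

```python
# Toy transient simulation: a 1ns input glitch driving an RC node
# with tau = R*C = 1ns, integrated with fixed-step forward Euler.
# All values are illustrative assumptions.

def peak_response(dt, t_end=50e-9, tau=1e-9):
    v, peak = 0.0, 0.0
    for n in range(int(t_end / dt)):
        t = n * dt
        vin = 1.0 if 10e-9 <= t < 11e-9 else 0.0  # the 1ns glitch
        v += dt * (vin - v) / tau                 # forward-Euler update
        peak = max(peak, v)
    return peak

for dt in (0.1e-9, 0.5e-9, 3e-9):
    print(f"dt = {dt * 1e9:3.1f}ns -> peak node voltage seen: {peak_response(dt):.3f}V")
```

With the 3ns step the node reports perfectly quiet; the transient is simply gone. Real simulators use adaptive time-step control to guard against exactly this, but loosening their step and tolerance settings reintroduces the same risk.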
Algorithmic limitations
One of the first jokes I heard about engineering was the old saw about mechanical (I almost wrote maniacal) engineers: measure with a micrometer; mark with a carpenter’s pencil; cut with an axe. We’re not quite as bad in EDA, but sometimes we do end up picking an algorithm which, while it delivers results, has limitations. Let me clarify with an example. Matrix solvers are a fundamental component of many EDA tools; consider the solver in SPICE. There are two classes of solvers: direct and iterative.
For very large problems, direct solvers may not be time- or space-efficient, and so even though they would give the precise answer (subject to underlying precision and stability), iterative methods are used instead. The most significant advantage of iterative solvers in this scenario is that users can trade the precision of the solution against run-time. Pick a loose convergence criterion and the solution will be fast. It may be “wrong”, but it will be fast. Tighten the convergence criterion and the solution may take longer to reach than it would with a direct solver.
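Here’s a small numerical sketch of that trade-off using NumPy: a direct solve next to a hand-rolled Jacobi iteration on a diagonally dominant system. The matrix size and tolerances are assumptions for illustration; SPICE itself uses sparse direct LU factorization, so treat this as an analogy rather than its algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant: Jacobi converges
b = rng.standard_normal(n)

x_direct = np.linalg.solve(A, b)  # direct: one factorization, exact up to FP rounding

def jacobi(A, b, tol, max_iter=10_000):
    """Iterative solve; a looser tol means fewer iterations and a rougher answer."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b)
    for k in range(max_iter):
        x_new = (b - R @ x) / d
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

for tol in (1e-2, 1e-6, 1e-12):
    x, iters = jacobi(A, b, tol)
    print(f"tol = {tol:.0e}: {iters:3d} iterations, "
          f"distance from direct answer = {np.linalg.norm(x - x_direct):.2e}")
```

Loosen the tolerance and you save iterations but drift from the direct answer; tighten it and the iteration count climbs. That knob is exactly the precision-versus-run-time trade described above.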
Bugs
Finally, the easy one. All software has bugs. They can affect functionality, performance (run-time or memory efficiency), ease of use, integration, reliability, and a whole host of other user-visible metrics. Many of these bugs relate to undocumented use cases, or to lapses in the requirements captured by the product core team, and some of those requirements are exactly the ones that define the appropriate level of precision and performance in the finished product.
You can see there are many reasons why EDA products fail to deliver absolute accuracy, as measured by precision at the finest level of resolution supported by the underlying machine architecture and compiler. Sometimes the trade-off is well known, and designers understand the limitations and can still get real utility out of the tool (witness the success of Fast-SPICE over the last 15 years). This is all well and good when the trade-off is clearly articulated and it’s possible to compare results to a known-good reference during evaluation or acceptance testing. Too often in EDA, though, these limitations are brushed under the carpet and hidden from the user. “You get what you get” seems to be the attitude of these companies, and I think this is wrong. Hiding limitations from designers can cause chip failures. These software limitations should be published, and the companies held accountable. I wouldn’t mind if the principals were publicly flogged, or put in the stocks and ridiculed. Which would you prefer? Comments welcome.