DAtum

EDA, Software and Business of technology
Greek, 'loxos: slanting. To displace or remove from its proper place
da·tums A point, line, or surface used as a reference


                        ... disruption results in new equilibria


[EDA] IBM goes statistical...Monte Carlo anyone?

IBM has integrated statistical timing analysis on top of its EinsTimer static timing analysis tool. The new tool, EinsStat, sits on top of the original one and apparently gives a significant improvement in performance...
EinsStat analysis required about 18 seconds to optimize a test chip of about 3,000 gates, while a comparable Monte Carlo optimization took nearly 14 hours, IBM reported. Optimization of the 2.1 million-gate test ASIC took an hour and 10 minutes with EinsStat, IBM said.

Wow... that is really something. As an engineer working generally in the field of static timing analysis, I think the numbers are slightly off. Either EEDesign has a typo, or the IBM guys are reaaally bragging.
For example, consider this similar article from EEDesign: back in 2001, the industry-standard STA tool, PrimeTime, took several hours on a million-plus-gate design:
In one Synopsys benchmark, a 1.3 million-gate design with 23,000 top-level coupled nets, and up to 2,500 aggressors per net, ran in five hours on a 400-MHz SparcOS5 workstation.
Or consider the slowest and most accurate of them all, SPICE: benchmark results from a SPICE vendor show roughly 30,000 gates taking nearly an hour.
These benchmarks suggest that someone goofed up somewhere. 3,000 gates would not normally take 14 hours to time.

But even if the numbers are exaggerated, the result is still good enough. Statistical timing analysis is clearly hot in the world of EDA. In fact, it's not just hot... it's scalding.
Consider Monte Carlo methods for optimisation. Monte Carlo methods model characteristics like circuit delay, capacitance and a whole load of other stuff inside simulators.
What happens is that, due to on-chip variations in fabrication, all the above parameters vary across the chip. If we want to model a chip accurately, we cannot take a single value of the delay/capacitance/whatever, but rather need a distribution function that tells us the probability of the delay/capacitance/whatever taking a particular value at particular co-ordinates (time, space, temperature). We then need to evaluate the integral of this distribution from 0 to n (where n may be the size of the chip, the time of operation, or the maximum operating temperature). It is in evaluating this integral that Monte Carlo methods are very helpful.
The idea is basically very simple: integrals are areas in a co-ordinate space, and the integral of a particular function is the area under its curve. Rather than evaluate the integral analytically, we can arrive at an approximate answer using probability: generate a random sample co-ordinate and check whether it lies inside the area covered by the function.
Out of the total number of samples generated, the fraction of points that fall under the function gives the probability of a random point landing under it.
This probability, multiplied by the area of the bounding box drawn around the function, approximates the area under it, i.e. the integral.
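Here is a minimal Python sketch of that hit-or-miss idea. The function (x squared) and the unit bounding box are just toy choices of mine, not anything from the IBM or EEDesign articles:

import random

def hit_or_miss_integral(f, x_lo, x_hi, y_hi, n_samples=100_000):
    """Approximate the area under f on [x_lo, x_hi], assuming 0 <= f(x) <= y_hi."""
    hits = 0
    for _ in range(n_samples):
        # Throw a dart uniformly into the bounding box.
        x = random.uniform(x_lo, x_hi)
        y = random.uniform(0.0, y_hi)
        if y <= f(x):  # the dart landed under the curve
            hits += 1
    box_area = (x_hi - x_lo) * y_hi
    # fraction of hits * box area ~ area under the curve
    return (hits / n_samples) * box_area

# Toy example: area under x^2 on [0, 1] (exact answer is 1/3).
print(hit_or_miss_integral(lambda x: x * x, 0.0, 1.0, 1.0))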
So where is the trick? It is in generating an evenly spread random sequence that distributes the sample points over the entire space. Such random numbers are called quasi-random (or low-discrepancy) numbers.
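One standard example of such a sequence is the van der Corput sequence. The little sketch below is my own toy comparison, nothing to do with any particular EDA tool; it just shows how evenly the quasi-random points spread over [0, 1) compared with plain pseudo-random draws:

import random

def van_der_corput(n, base=2):
    """n-th term of the van der Corput low-discrepancy sequence in the given base."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

# Compare how evenly 16 points land in [0, 1).
pseudo = sorted(random.random() for _ in range(16))
quasi = sorted(van_der_corput(i) for i in range(1, 17))
print("pseudo-random:", [round(x, 2) for x in pseudo])
print("quasi-random: ", [round(x, 2) for x in quasi])

The pseudo-random draws tend to clump and leave gaps, while the quasi-random points fill the interval almost uniformly, which is exactly why they give better integral estimates for the same number of samples.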
Statistical Static Timing Analysis (SSTA) tools ask the question differently. Instead of evaluating the probability that the delay is less than a critical value in a design, they ask: what is the distribution of the maximum delay from inputs to outputs? I do not know the difference in computational complexity between the two approaches, but apparently it is huge!
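To make that concrete, here is a toy brute-force Monte Carlo baseline of the kind EinsStat is being compared against: sample every gate delay from a Gaussian, take the slowest input-to-output path, repeat. The two-path netlist and all the delay numbers are made up purely for illustration; this is not IBM's algorithm, just a sketch of the brute-force reference it beats.

import random
import statistics

# Toy netlist: two input-to-output paths, each a list of gates.
# Each gate delay is modelled as an independent Gaussian (mean_ps, sigma_ps).
# All numbers are invented for illustration only.
PATHS = {
    "path_A": [(100, 10), (80, 8), (120, 15)],
    "path_B": [(90, 9), (110, 12), (95, 10)],
}

def sample_max_delay():
    """One Monte Carlo trial: draw every gate delay, return the slowest path delay."""
    return max(
        sum(random.gauss(mean, sigma) for mean, sigma in gates)
        for gates in PATHS.values()
    )

samples = [sample_max_delay() for _ in range(50_000)]
print("mean max delay (ps): ", round(statistics.mean(samples), 1))
print("std dev (ps):        ", round(statistics.stdev(samples), 1))
print("99th percentile (ps):", round(sorted(samples)[int(0.99 * len(samples))], 1))

An SSTA tool computes essentially this maximum-delay distribution analytically, propagating the delay distributions through the circuit instead of re-simulating it tens of thousands of times, which is where the claimed speedup comes from.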