[EDA] IBM goes statistical...Monte Carlo anyone?
IBM has integrated statistical timing analysis on top of its EinsTimer static timing analysis tool. The new tool, EinsStat, sits on top of the original and apparently gives a significant improvement in performance...
EinsStat analysis required about 18 seconds to optimize a test chip of about 3,000 gates, while a comparable Monte Carlo optimization took nearly 14 hours, IBM reported. Optimization of the 2.1 million-gate test ASIC took an hour and 10 minutes with EinsStat, IBM said.
Wow... that is really something. As an engineer working in the field of static timing analysis, I think the numbers are slightly off. Either EEDesign has a typo, or the IBM guys are really bragging.
For example, consider this similar article from EEDesign: back in 2001, the industry-standard STA tool PrimeTime took a couple of hours on a million-gate design.
In one Synopsys benchmark, a 1.3 million-gate design with 23,000 top-level coupled nets, and up to 2,500 aggressors per net, ran in five hours on a 400-MHz Sparc OS5 workstation. Or consider the slowest and most accurate of them all, SPICE: benchmark results from a SPICE vendor show roughly 30,000 gates taking nearly an hour.
These benchmarks suggest that someone goofed somewhere. 3,000 gates would not normally take 14 hours to time.
But even if it really did, the result is still good enough. Statistical timing analysis is seemingly hot in the world of EDA. In fact, it's not just hot... it's scalding.
Consider Monte Carlo methods for optimisation. Monte Carlo methods model characteristics like circuit delay, capacitance, and a host of other parameters inside simulators.
Because of on-chip variations in fabrication, all of these parameters vary across the chip. If we want to model a chip accurately, we cannot take one value of the delay (or capacitance, or whatever); instead we need a distribution function that gives the probability of the parameter taking a particular value at particular coordinates (time, location, temperature). We then need to evaluate an integral of this function over 0 to n, where n might be the size of the chip, the time of operation, or the maximum operating temperature. It is in evaluating this integral that Monte Carlo methods are very helpful.
What they tell us is basically very simple: an integral is an area in a coordinate space; the integral of a function is the area under its curve. Rather than evaluate the integral analytically, we can arrive at an approximate answer using probability: generate a random sample point and check whether it lies under the curve.
Out of the total number of samples generated, the fraction of points that fall under the curve estimates the probability that a random point in the bounding box lies under it.
This probability, multiplied by the area of the bounding box around the function, approximates the area under the curve.
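The hit-or-miss scheme above can be sketched in a few lines of Python. This is a minimal illustration, not any particular EDA tool's implementation; the function, box bounds, and sample count are all made up for the example, and it assumes f is non-negative and bounded by y_max on the interval.

```python
import random

def mc_area(f, x_max, y_max, n_samples=100_000):
    """Hit-or-miss Monte Carlo estimate of the area under f on [0, x_max],
    assuming 0 <= f(x) <= y_max on that interval."""
    hits = 0
    for _ in range(n_samples):
        # generate a sample coordinate inside the bounding box
        x = random.uniform(0.0, x_max)
        y = random.uniform(0.0, y_max)
        if y <= f(x):  # the point falls under the curve
            hits += 1
    # fraction under the curve, scaled by the bounding-box area
    return (hits / n_samples) * (x_max * y_max)

# Sanity check: the area under f(x) = x on [0, 1] is exactly 0.5
estimate = mc_area(lambda x: x, x_max=1.0, y_max=1.0)
```

With 100,000 samples the estimate lands within a couple of percent of 0.5; the error shrinks only as one over the square root of the sample count, which is exactly why plain Monte Carlo gets expensive.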
So where is the trick? It is in generating an evenly spread random sequence that spaces the points out over the entire space. Such numbers are called quasi-random numbers (also known as low-discrepancy sequences).
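As a concrete example of a quasi-random sequence, here is the classic Halton construction (one van der Corput sequence per dimension, each in a different prime base). This is just one well-known low-discrepancy scheme, chosen for illustration; real tools may use others such as Sobol sequences.

```python
def halton(index, base):
    """The index-th element (1-based) of the van der Corput
    sequence in the given base: reflect the base-b digits of
    index about the radix point."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# 2-D quasi-random points: pair the base-2 and base-3 sequences.
# Unlike pseudo-random points, these fill the unit square evenly
# without clumps or gaps.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
```

The first few base-2 values come out as 0.5, 0.25, 0.75, 0.125, ... — each new point deliberately lands in the largest remaining gap, which is precisely the "evening out" property we want.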
Statistical static timing analysis (SSTA) tools ask the question differently. Instead of evaluating the probability that the delay is less than a critical value in a design, they ask: what is the distribution of the maximum delay from inputs to outputs? I do not know the difference in computational complexity between the two methods, but apparently it is huge!
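To make the "maximum delay distribution" question concrete, here is a brute-force Monte Carlo sketch of it: draw every gate delay from its distribution, sum along each path, and take the worst path. An SSTA tool would propagate the distributions analytically instead of sampling; the path structure and the Gaussian delay numbers below are entirely hypothetical.

```python
import random
import statistics

def sampled_max_delay(paths, n_samples=20_000):
    """Monte Carlo estimate of the worst input-to-output delay
    distribution. `paths` maps a path name to a list of
    (mean, sigma) gate delays, each drawn as an independent
    Gaussian."""
    samples = []
    for _ in range(n_samples):
        # draw all gate delays, sum along each path, keep the worst path
        worst = max(
            sum(random.gauss(mu, sigma) for mu, sigma in gates)
            for gates in paths.values()
        )
        samples.append(worst)
    return statistics.mean(samples), statistics.stdev(samples)

# Two hypothetical paths of 3 and 4 gates (delays in ns, made up)
paths = {
    "A": [(1.0, 0.1), (1.2, 0.1), (0.8, 0.1)],
    "B": [(0.7, 0.05), (0.7, 0.05), (0.7, 0.05), (0.7, 0.05)],
}
mean_delay, sigma_delay = sampled_max_delay(paths)
```

Note that the mean of the max comes out slightly above the slowest path's nominal 3.0 ns, because sometimes the nominally faster path wins the race to be worst — exactly the effect a single-corner STA run misses and an SSTA tool captures.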
del.icio.us Tags: eda timinganalysis asic ibm
