Deep-submicron flows need an overhaul, designers say
By Richard Goering
April 26, 2002 (11:18 a.m. EST)
MONTEREY, Calif. -- Design flows for deep-submicron ICs need an overhaul, attendees of this week's Electronic Design Processes (EDP-2002) workshop agreed. But they had starkly different views of how to proceed with RTL sign-off, high-level modeling, and platform-based design.
"It's not about point capabilities, it's still about the methodology," said conference chairman Andrew Kahng, professor at the University of California at San Diego. That was one of the few points of consensus at the EDP-2002 conference.
Now in its ninth year, EDP is a small but influential conference that draws participants from academia, EDA companies, and chip design firms. This year's conference opened with keynoter Dan Smith, Nvidia Corp.'s director of hardware engineering, explaining how and why his company developed a standardized IC design methodology.
Many speakers at EDP-2002 addressed the thorny issue of sign-off. Some advocates of RTL sign-off said that ultra-deep-submicron complexity will make gate-level sign-off too difficult, but added that placement-based or GDSII sign-off requires too much expertise for many customers. Others voiced concern about the reeducation that will be required for RTL sign-off, and the inability of foundries to change RTL code.
In sessions on high-level modeling, advocates of system-level design often clashed with skeptics who argued that high-level modeling is too vague and must be "grounded in reality." Different pathways to high-level modeling were advocated, including SystemC, SpecC, and Esterel.
Several said platform-based design was a way to ground high-level modeling, and design space exploration, in reality. But it must also be application-specific, participants said, and one paper probed the tough question of how the platform gets created in the first place.
In other sessions, participants described new methodologies for analog/mixed-signal design, outlined "policy-based" RTL design, and examined cost savings through design reuse.
RTL or physical sign-off?
There's a widespread consensus in the design community that synthesis and placement, or perhaps synthesis, placement and routing, should be a single process, and that the current gate-level ASIC sign-off, which occurs between synthesis and placement, will not remain valid. The question many are wrestling with now is which way to go: RTL sign-off, or sign-off after physical design?
At EDP-2002, two of the most positive views of RTL sign-off came from representatives of two semiconductor foundries that already accept designs in this style. Tom Russell, manager of ASIC timing, power and synthesis at IBM Microelectronics, predicted that IBM's customers will be evenly split between "early" and "late" sign-off models.
Under late sign-off, customers will do synthesis, placement and routing. But that approach takes a significant investment in tools and skills, Russell noted. And it includes knowledge of a "very scary thing" that IBM's internal teams are well equipped to handle: noise avoidance.
"RTL sign-off means the customer doesn't have to run synthesis," Russell said. "We think it will be very attractive, and a cost benefit to customers." IBM is working with "quite a few" customers on RTL sign-off, but mostly on a pilot basis, he said.
Dan Deisz, director of North American design centers for LSI Logic Corp., said his company believes that GDSII sign-off or sign-off after placement and routing won't be viable below 0.11 microns, because of deep-submicron effects and manufacturing issues. He predicted that netlist and RTL sign-off will become the most common models.
And yet, Deisz said, LSI Logic has found that RTL code is "functionally correct and physically incorrect most of the time." And customers are very hesitant to give the company their RTL code, he said. There needs to be an "educational process" for both customers and LSI Logic designers, he said.
The most negative view of RTL sign-off came from Shankar Krishnamoorthy, R&D group director for physical synthesis at Synopsys Inc. "The big problem is that there's no flexibility to change the RTL, and when you get to timing critical designs, changing the RTL becomes increasingly important," he said. He also noted that constraints are continually refined throughout synthesis and placement, and that RTL sign-off will require "constant iteration" between ASIC vendors and customers to get the constraints right.
Krishnamoorthy advocated "placement-based sign-off" using a physical synthesis tool, but he acknowledged that it requires some placement and routing expertise. The solution, he said, is new technology that will allow physical prototyping during logic synthesis.
Deisz, however, argued that placement-based sign-off isn't all that much better than netlist sign-off, and that the placement that comes from physical synthesis is only a starting point that will probably change.
"If you throw away the placement, you pretty much throw away the timing closure," Krishnamoorthy retorted.
Vivek Joshi, senior design engineer at Intel Corp., also sounded a skeptical note about RTL sign-off when he discussed noise challenges for ultra-deep-submicron designs. In a presentation called "depressing" by moderator Gary Smith, chief EDA analyst at Gartner Dataquest, Joshi discussed the looming problems of capacitive cross-coupling, inductive noise, voltage drop, voltage "droop," and ground bounce.
"Noise needs to be incorporated into the flow, and I don't know how it will fit into an RTL toss-it-over-the-wall type thing," Joshi said.
"RTL sign-off does not mean throwing RTL over the wall," responded Tommy Eng, president and chief executive officer of Tera Systems Inc. Eng said that both RTL and physical sign-off are viable models, and he said that predictive "RTL design closure" technology currently under development by Tera will help facilitate RTL sign-off.
The high-level modeling debate
Two sessions on high-level modeling brought forth a number of different points of view. Participants debated whether such modeling is useful, how it should be done, and whether it's aimed more at design-space exploration or validation.
"I am an optimist. System-level design does have a good future," said Grant Martin, fellow at Cadence Design Systems Inc. "But it will take a lot of hard work to make it happen, and the most important thing is education."
Victor Konrad, senior design engineer at Intel, threw some cold water on the concept by describing Intel's unsuccessful experience with high-level modeling in its cancelled "Yosemite" project, which was to be a next-generation Itanium processor. In that project, Intel developed a high-level model written in the company's proprietary iHDL language: a clock-accurate model with some 20,000 lines of code. But that was too slow, so Intel moved on to C/C++.
Konrad said Intel felt that high-level modeling in C was "doable," but raised unanswered questions. It was unclear whether a high-level model could serve as a clock- and signal-accurate "golden" model for the RTL code, and whether it could interoperate with RTL. Ultimately, he said, Intel decided there wasn't enough return on investment in building a model and keeping it in sync with the RTL.
Dundar Dumlugol, vice president of engineering at CoWare Inc., then came forward to present a "more optimistic scenario." He outlined a SystemC design flow, and stated that SystemC untimed functional models run 100,000 times faster than RTL, while representing both concurrent and sequential processes. Cycle-accurate, timed functional models are the next level down, and they are still two or three orders of magnitude faster than RTL, he said.
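The abstraction gap behind those speedup figures can be illustrated with a toy model, sketched here in plain Python rather than SystemC, of a multiply-accumulate unit. The untimed functional model is a single function call with no notion of clocks, while the cycle-accurate model must be stepped once per clock edge with explicit pipeline state; paying for every cycle is the basic reason cycle-accurate models simulate orders of magnitude slower. All names here are illustrative, not from any real flow.

```python
# Toy illustration (not actual SystemC): the same multiply-accumulate
# behavior modeled at two abstraction levels.

def mac_untimed(a, b):
    """Untimed functional model: pure behavior, no clocks."""
    return sum(x * y for x, y in zip(a, b))


class MacCycleAccurate:
    """Cycle-accurate model: one multiply per clock edge, with an
    explicit pipeline register between the multiply and the add.
    The simulator must evaluate every cycle, which is why models at
    this level run far slower than untimed ones."""

    def __init__(self):
        self.prod_reg = None   # pipeline register: multiply -> add
        self.acc = 0

    def clock(self, a, b):
        # One rising clock edge: accumulate the previous product,
        # then latch the new product into the pipeline register.
        if self.prod_reg is not None:
            self.acc += self.prod_reg
        self.prod_reg = a * b

    def flush(self):
        # Drain the pipeline after the last input pair.
        if self.prod_reg is not None:
            self.acc += self.prod_reg
            self.prod_reg = None
```

Both models compute the same result; successive refinement in a SystemC-style flow amounts to moving from the first form toward the second while checking that equivalence.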
But SystemC has competition, as evidenced by one EDP-2002 paper that presented a design flow using the SpecC language, and another that described a design flow using Esterel. Common to all three is the idea of successive refinement of models from the algorithmic to implementation levels.
But the real issue is not languages but a lack of high-level synthesis tools, said John Sanguinetti, chief technical officer of Forte Systems Inc. He described CynthHL, a forthcoming tool from Forte Systems that will take C++/SystemC descriptions down to RTL Verilog.
Some conference attendees found all the talk of high-level modeling deeply disturbing, and repeatedly questioned how real it is. Some said that high-level modeling is too inaccurate to make sound engineering judgments. "You need an accuracy requirement, and it's got to be better than 50 percent," chairman Kahng said.
Skeptics were also unclear about what high-level modeling is really for. Advocates seemed to talk mostly about design-space exploration, but several audience members felt that high-level modeling is much more useful, and more accepted, for validation.
Jumping on the platform
The discussion of high-level modeling flowed nicely into the discussion of platform-based design because, as several participants noted, a predefined architecture will help constrain the high-level modeling problem. "In my opinion, we'll go to platform-based design because [you] can't explore all the design space, and we need to ground this thing in reality," said Sandeep Shukla, professor at the University of California at Irvine.
Infineon Technologies' successful experience with platforms was described by senior design engineer Sagheer Ahmad. "By using platforms, we can reduce design times and iterations," Ahmad said. "But there won't be any one generic platform that fits into each and every application need."
One problem with platforms was called the "first-generation dilemma" by Jiang Xu, a PhD student at Princeton University. "Platform-based design assumes there are enough IP [intellectual property] cores and modules to build the platform, but unfortunately for most first-generation designs there are only a few IP cores and modules," he said. Thus, said Xu, there's not enough information to do performance analysis, which is a requirement for platform-based design.
Xu presented a revised flow in which designers first select an architecture, and assign estimated requirements to unavailable modules. They then adjust the requirements using performance analysis in a trial-and-error fashion. Then they purchase cores and modules. This flow may require several iterations, Xu said.
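The trial-and-error adjustment step in Xu's flow can be sketched as a simple loop: modules that are not yet available get estimated latency requirements, a crude system-level check compares the total against a target, and estimates with remaining slack are tightened until the budget closes or no slack is left. The module names, numbers, and the 10 percent tightening step below are hypothetical, chosen only to show the shape of the iteration.

```python
def refine_budget(estimates, floors, target_ns, max_iters=100):
    """Iteratively tighten estimated latency requirements for
    modules that are not yet available.

    estimates: dict of module name -> estimated latency (ns)
    floors:    dict of module name -> believed-achievable bound (ns)
    Returns (met, estimates), where met is True if the summed
    latency was driven at or below target_ns.
    """
    for _ in range(max_iters):
        if sum(estimates.values()) <= target_ns:
            return True, estimates           # budget closes
        changed = False
        for name, lat in estimates.items():
            tightened = lat * 0.9            # trial-and-error step
            if tightened >= floors[name]:    # respect the floor
                estimates[name] = tightened
                changed = True
        if not changed:
            return False, estimates          # no slack left anywhere
    return False, estimates
```

If the floors alone already exceed the target, the loop reports failure, which in Xu's terms would send the designer back to select a different architecture or different cores.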
Cadence's Martin described his company's work in developing platforms for BMW and Philips. With BMW, he said, Cadence developed a methodology for "software-software codesign." With Philips, he said, Cadence came up with a way of doing design-space exploration without requiring thousands of hours of simulation.
The Cadence approach involves the development of "probes" that allow designers to look at interesting states and run various kinds of experiments. This leads to the development of "statistical rules of thumb" that can be used to analyze platform requirements. Kahng said later that he found Martin's paper "reassuring" in light of his earlier concerns about the inaccuracy of high-level design-space exploration.
Presentations and papers from the conference are available online at the EDP-2002 Web site.