UCSD VLSI CAD LABORATORY

Last Modified: October 11, 2006

Topics:

  1. The VLSI Design-Manufacturing Interface
    1. Layout Density Control for Improved VLSI Manufacturability
    2. Image Compression Approaches for Area Fill Layout Data
    3. Area Fill Compression by Hierarchical Rectangle Covering
    4. Phase-Shift Mask Layout Synthesis
    5. Alternative Mask Writing Strategies
    6. Manufacturing Variability
  2. VLSI Interconnect Performance Analysis and Optimization
  3. VLSI Interconnect Synthesis
  4. Technology Extrapolation and the "Living Roadmap"
  5. MARCO GSRC Calibrating Achievable Design (C.A.D.) Theme
  6. Other Topics

1. The VLSI Design-Manufacturing Interface

The increasing complexity of VLSI designs and IC process technologies widens the mismatch between design and manufacturing: the resemblance between a circuit "as fabricated" (on the wafer) and "as designed" (in the layout tool) grows ever weaker. Process variations, fabrication defects and related effects create new cost bottlenecks (turnaround time, productivity) as we enter the era of nanometer-scale VLSI. This motivates research to enhance the predictability and yield of VLSI manufacturing, as well as design-technology means of overcoming process variations and lithographic errors.

  1. Layout Density Control for Improved VLSI Manufacturability

    In very deep-submicron VLSI, manufacturing steps involving chemical-mechanical planarization (CMP) have varying effects on device and interconnect features, depending on local characteristics of the layout. To reduce manufacturing variation due to CMP and to improve performance predictability and yield, layout must be made uniform with respect to certain density criteria, by inserting "fill" geometries into the layout. The industry state of the art in density control for CMP has several key weaknesses, including (1) a "physical verification heritage" (tools are based on layer-oriented geometry engines optimized for Boolean operations; tools are not empowered to change the layout; tools have no functional awareness; ...), (2) use of local and discrete density checks (with respect to small, fixed "windows" in the layout); and (3) use of non-physical models of the CMP process and the effect of layout density variation on post-CMP planarity.

    We were the first to address both flat and hierarchical fill problems. Our work is based on the spatial layout density model and the effective layout density model, under the Min-Var and Min-Fill objectives. Our group developed the first linear programming and Monte-Carlo/greedy methods for the flat single-layer and multiple-layer fill formulations, as well as Monte-Carlo based methods for the hierarchical fill problem.
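
    As a rough illustration of the Monte-Carlo/greedy flavor of these fill methods, the sketch below inserts unit fill geometries into the most under-dense windows of a fixed-dissection grid. The window grid, density unit, target density and slack values are hypothetical placeholders, not the formulations or parameters from our papers.

```python
import random

def monte_carlo_fill(density, slack, target, unit=0.001, iterations=10000, seed=0):
    """Toy Monte-Carlo fill insertion over a fixed-dissection window grid.

    density[i][j] : current layout density of window (i, j)
    slack[i][j]   : remaining density that fill may still add in window (i, j)
    target        : desired minimum window density (Min-Var-like goal of
                    raising the sparsest windows first)
    unit          : density added by one inserted fill geometry (assumed)
    Each iteration picks an under-dense window with probability proportional
    to its density deficit and inserts one unit of fill there.
    """
    rng = random.Random(seed)
    for _ in range(iterations):
        candidates = [(i, j) for i, row in enumerate(density)
                      for j, d in enumerate(row)
                      if d < target and slack[i][j] >= unit]
        if not candidates:
            break
        weights = [target - density[i][j] for i, j in candidates]
        i, j = rng.choices(candidates, weights=weights, k=1)[0]
        density[i][j] += unit
        slack[i][j] -= unit
    return density

# Example: a 4 x 4 window grid with non-uniform starting densities.
dens = [[0.20, 0.35, 0.50, 0.30],
        [0.25, 0.40, 0.45, 0.20],
        [0.30, 0.30, 0.35, 0.25],
        [0.20, 0.25, 0.30, 0.40]]
slk = [[0.30] * 4 for _ in range(4)]
monte_carlo_fill(dens, slk, target=0.40)
```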

    Currently, we are studying the following issues: (1) Cu and Shallow Trench Isolation (STI) CMP models; (2) the "smoothness and uniformity gap" in fill synthesis (i.e., between fixed-dissection discrete solutions and the continuous formulation); (3) simulation studies of the effect of local density variation on post-CMP wafer topography; (4) efficient combinatorial algorithms for the one-dimensional filling problem; (5) generation of grounded and/or compressible fill patterns; and (6) co-optimization of CMP recipes with the fill layout to minimize overall variation.



  2. Image Compression Approaches for Area Fill Layout Data

    In modern VLSI manufacturing processes, each succeeding layer of material is deposited only after the previous layer, together with an insulating dielectric, has been polished flat by a process known as chemical-mechanical planarization (CMP). The CMP result will not be flat unless the layout geometries in the previous layer exhibit uniform spatial density. Therefore, many millions of "dummy fill" features are introduced into sparse regions of the layout to equalize the spatial density, at the cost of dramatically increasing the size of the layout data file. This expansion reduces the efficiency of passing the chip design to the manufacturing process, and compressing the dummy fill data is a straightforward solution to the problem.

    The dummy fill features, typically small squares on a regular grid, are essentially "ones" in a binary 0-1 matrix. Compressing the dummy fill layout information is therefore an exercise in binary data compression. This compression is permitted to be lossy, but only in a certain one-sided sense; the notion of one-sided loss leads to a number of "directed" covering code formulations which are of independent interest. We have developed several algorithms based on JBIG (Joint Bi-level Image Experts Group) methods to improve the compression ratio of the binary data; these incorporate pattern matching and substitution (PM&S), reference dictionary generation using set cover algorithms, and one-sided loss. We have also studied the sensitivity of the compression results to the main parameters of the algorithms. Comparison with off-the-shelf tools (Gzip, Bzip2, WinRAR, etc.) on layout data shows that our JBIG-based algorithms achieve improvements of 18% - 70% over commercial compression tools. Our current research seeks to hybridize these methods with rectangle-covering methods (see below), GDSII-specific compression operators (AREF, SREF), and fill synthesis to generate effective filling solutions with very low data volume.
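
    A minimal sketch of the pattern-matching-and-substitution (PM&S) idea, assuming the fill layer has already been rasterized into a 0-1 matrix and using exact block matching only; the block size and example bitmap are arbitrary assumptions, and the set-cover dictionary generation and one-sided loss described above are not modeled here.

```python
def pm_and_s(bitmap, block=8):
    """Toy PM&S encoding of a binary fill bitmap.

    The bitmap is tiled into block x block sub-matrices; each distinct tile
    is stored once in a dictionary, and the image becomes a grid of indices
    into that dictionary.  Compression comes from repeated fill patterns.
    Assumes the bitmap dimensions are multiples of the block size.
    """
    rows, cols = len(bitmap), len(bitmap[0])
    dictionary, index_grid, seen = [], [], {}
    for r in range(0, rows, block):
        index_row = []
        for c in range(0, cols, block):
            tile = tuple(tuple(bitmap[r + dr][c + dc] for dc in range(block))
                         for dr in range(block))
            if tile not in seen:
                seen[tile] = len(dictionary)
                dictionary.append(tile)
            index_row.append(seen[tile])
        index_grid.append(index_row)
    return dictionary, index_grid

# A fill pattern that repeats every 8 rows/columns compresses to one tile.
bmp = [[(r + c) % 2 for c in range(64)] for r in range(64)]
dic, idx = pm_and_s(bmp)
print(len(dic), "distinct tile(s) for", len(idx) * len(idx[0]), "tiles")
```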



  3. Area Fill Compression by Hierarchical Rectangle Covering

    For the design of integrated circuits, the most popular format for data interchange is GDSII Stream Format. The GDSII format uses polygon, path, box, structure reference (SREF) and structure array reference (AREF) constructs. An AREF structure is a two-dimensional array of some other basic structure, which in turn can be another, smaller AREF structure. A GDSII description of area fill data can be either flat or hierarchical, and a flat description will most likely lead to a larger data file than a hierarchical one. Thus, in this work we take an existing fill pattern, look for hidden hierarchy in it, and use the identified hierarchy to reduce the size of the GDSII data file. We focus on compression of a single GDSII layer, typically an interconnect layer; such layers are usually non-hierarchical and have "regular" fill patterns which can be compressed by nested AREFs into a hierarchical GDSII file.
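
    The sketch below illustrates the elementary step of such hierarchy extraction: recognizing when a set of identical fill rectangles forms a uniform two-dimensional array that could be emitted as a single AREF. The returned record is a simplified stand-in for actual GDSII AREF syntax, and the grid-detection rule is an assumption made for illustration.

```python
def detect_aref(origins):
    """Check whether identical fill rectangles placed at 'origins' form a
    uniform cols x rows grid; if so, return an AREF-like summary record
    (simplified stand-in for GDSII AREF syntax), otherwise None.
    """
    points = set(origins)
    xs = sorted({x for x, _ in points})
    ys = sorted({y for _, y in points})
    if len(xs) * len(ys) != len(points):
        return None                      # not a full rectangular grid
    def uniform_pitch(vals):
        if len(vals) < 2:
            return 0
        pitch = vals[1] - vals[0]
        return pitch if all(b - a == pitch
                            for a, b in zip(vals, vals[1:])) else None
    px, py = uniform_pitch(xs), uniform_pitch(ys)
    if px is None or py is None:
        return None                      # spacing is not uniform
    return {"origin": (xs[0], ys[0]), "cols": len(xs), "rows": len(ys),
            "pitch_x": px, "pitch_y": py}

# A 5 x 3 array of fill squares on a 10-unit pitch yields one AREF record.
pts = [(10 * i, 10 * j) for i in range(5) for j in range(3)]
print(detect_aref(pts))
```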



  4. Phase-Shift Mask Layout Synthesis

    Alternating phase-shifting mask (PSM) technology exploits the ability to vary the phase shift of the partially coherent light that is transmitted through clear regions of an IC mask. Consider two closely adjacent features of an IC layout, which correspond to two clear regions of a dark-field mask with small separation. If these clear regions have respective phase shifts of 0 and 180 degrees, then light diffracted into the nominally dark region between the clear regions will interfere destructively; the improved image contrast leads to better resolution of the printed features and improved depth of focus. Alternating PSM technology is an enabler of the subwavelength lithography that drives the Technology Roadmap for Semiconductors; e.g., 90nm gate lengths are achieved using 248nm steppers and alternating-aperture PSM. However, a layout problem results: on either side of a given critical-width feature in the layout, we must place phase-shifting clear areas in the mask of opposite phase. This boils down to achieving 2-colorability by "bipartizing" a particular type of graph - a phase conflict graph - that is induced from the layout. We have developed efficient algorithms for layout modification and phase assignment for dark-field alternating-type phase-shifting masks in the single-exposure regime. We formulate a minimum layout perturbation problem that is essentially a T-join problem, and we show how the latter can be solved in practice. Other work addresses bipartization of the phase conflict graph using different degrees of freedom (in particular, a "vertex deletion" operation instead of an "edge deletion" operation). This project is not very active at the moment, but we hope to study hierarchical phase assignment and the layout density tradeoffs inherent in phase-shifting and other super-resolution layout techniques (e.g., PhasePhirst!).
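
    As a small illustration of the 2-colorability aspect only (not of the T-join or vertex-deletion machinery), the following sketch attempts a BFS two-coloring of a hypothetical phase conflict graph and reports the conflict edges that prevent a consistent 0/180-degree phase assignment. The feature names and graph are invented for the example.

```python
from collections import deque

def assign_phases(features, conflict_edges):
    """Attempt to 2-color a phase conflict graph with phases 0 and 180.

    features       : iterable of feature ids (graph vertices)
    conflict_edges : pairs of features that must receive opposite phases
    Returns (phase_map, violated_edges); violated_edges is empty exactly
    when the conflict graph is bipartite, i.e. a legal assignment exists.
    """
    adj = {f: [] for f in features}
    for a, b in conflict_edges:
        adj[a].append(b)
        adj[b].append(a)
    phase = {}
    for start in adj:
        if start in phase:
            continue
        phase[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in phase:
                    phase[v] = 180 - phase[u]
                    queue.append(v)
    violated = [(a, b) for a, b in conflict_edges if phase[a] == phase[b]]
    return phase, violated

# A triangle of conflicts (odd cycle) cannot receive a legal phase assignment.
print(assign_phases(["f1", "f2", "f3"],
                    [("f1", "f2"), ("f2", "f3"), ("f3", "f1")]))
```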



  5. Alternative Mask Writing Strategies

    Mask writing is an increasingly critical and costly step in the semiconductor manufacturing flow. Rapid technology scaling and the aggressive use of reticle enhancement techniques in subwavelength optical lithography lead to stringent tolerances for mask critical dimensions (CD). Ever-decreasing feature sizes are achieved today using high-resolution, variable-shaped electron-beam writers. However, CD control becomes more challenging, especially as mask CD errors can be "amplified" during exposure, development and etch of chemically amplified resists (cf. the "mask error enhancement factor" (MEEF)).

    Resist heating is a phenomenon that arises during e-beam write and affects the CD accuracy of masks. It is estimated that resist heating during mask write alone contributes 10-20% of the total lithographic CD error at the 180nm technology node. Thus, heating effects may prove to be a serious obstacle to efficient mask fabrication at future technology nodes. In this work, we develop a writing approach that differs from conventional Vector Scan Beam (VSB) writing in that its goal is to minimize the maximum accumulated temperature over the reticle. In this writing approach, the objectives are two-fold: (1) maximize the distance between subfields written at times t and t + i, for i = 1, 2, ..., k, and (2) minimize the total write time of the mask.

    This formulation is, in effect, a "self-avoiding traveling salesman problem" (SA-TSP): it modifies the classical TSP problem by adding the constraint that the tour avoids recently visited locations. The self-avoiding property minimizes the impact of resist heating, while the traveling salesman objective of minimum tour cost minimizes total write time. We have developed heuristics for the SA-TSP problem on finite grids, and we are now seeking to verify the utility of corresponding writing approaches through finite-element analysis of thermal patterns in resist (using a mask heating model developed with ABAQUS).
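
    A minimal greedy sketch of the self-avoiding idea on a grid of subfields: at each step it prefers a next subfield that lies at least some distance away from the last k written subfields, breaking ties by move length. The grid, the history window k and the separation threshold are illustrative assumptions, not the parameters or heuristics from our work.

```python
import math

def self_avoiding_tour(subfields, k=4, min_sep=3.0):
    """Greedy tour over mask subfields that tries to stay at least min_sep
    away from the last k written subfields (limiting local resist heating)
    while still preferring short moves (limiting total write time).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    remaining = set(subfields)
    tour = [min(remaining)]            # start at an arbitrary fixed subfield
    remaining.remove(tour[0])
    while remaining:
        recent = tour[-k:]
        # candidates far enough from all recently written subfields
        far_enough = [s for s in remaining
                      if all(dist(s, r) >= min_sep for r in recent)]
        pool = far_enough if far_enough else remaining
        nxt = min(pool, key=lambda s: dist(s, tour[-1]))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

# 8 x 8 grid of subfields; the first few steps show the "jumping" behavior.
grid = [(x, y) for x in range(8) for y in range(8)]
print(self_avoiding_tour(grid)[:10])
```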



  6. Manufacturing Variability

    Aggressive VLSI technology scaling has introduced new variation sources and made process control more difficult. As a result, future technology nodes are expected to see increased process variation and decreased predictability of nanometer-scale circuit performance. According to the International Technology Roadmap for Semiconductors (ITRS), there are no known solutions for a number of near-term variability control requirements. Potentially, variability will have a large impact on the value of semiconductor products. Our research goals are: (1) development of cost models, design rules, layout methods and tools to minimize the costs associated with resolution enhancement techniques that limit lithography-induced variability; and (2) understanding of the tradeoffs between developing circuit design techniques that can cope with variability, and developing process techniques that reduce variability.

    Among the contributions of our work are: (1) a new, self-consistent taxonomy for within-die process variation; (2) accurate modeling of within-die correlations; (3) projection of circuit performance variation into future technology nodes; (4) evaluation of sensitivities of design performance to improved control of individual device parameters in manufacturing; (5) an assessment of potential benefits from design for variability methodologies (as opposed to traditional design for performance methodologies); and (6) a new "design for value" paradigm.
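
    As a toy illustration of within-die correlation modeling (the additive decomposition and sigma values below are generic assumptions, not our published models), one can view each device's parameter deviation as the sum of a die-wide component shared by all devices, a regional component shared within a layout region, and an independent local component:

```python
import random

def sample_gate_lengths(nominal_nm, num_regions, devices_per_region,
                        sigma_die, sigma_region, sigma_local, seed=None):
    """Toy additive model of within-die gate-length variation.

    Every device on the die shares one die-level deviation, devices in the
    same layout region additionally share a regional deviation, and each
    device gets its own independent local deviation.  Devices in the same
    region are therefore more strongly correlated than devices in
    different regions.
    """
    rng = random.Random(seed)
    die_dev = rng.gauss(0.0, sigma_die)
    samples = []
    for _ in range(num_regions):
        region_dev = rng.gauss(0.0, sigma_region)
        samples.append([nominal_nm + die_dev + region_dev
                        + rng.gauss(0.0, sigma_local)
                        for _ in range(devices_per_region)])
    return samples

# Illustrative numbers only: 90nm nominal gate length, 4 regions x 5 devices.
print(sample_gate_lengths(90.0, 4, 5, sigma_die=2.0, sigma_region=1.5,
                          sigma_local=1.0, seed=1))
```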
