Low-pass filtering of mask diffraction orders in the projection tools used in the microelectronics
industry leads to a range of optical proximity effects (OPEs) impacting integrated circuit pattern
images. These predictable OPEs can be corrected with various model-based optical proximity
correction (OPC) methodologies, the success of which strongly depends on the completeness of
the imaging models they use.
Image formation in scanners is driven by the illuminator settings and the projection lens
NA, and modified by scanner engineering impacts due to: 1) the illuminator signature, i.e. the
distributions of illuminator field amplitude and phase, 2) the projection lens signatures
representing residual projection lens aberrations and flare, and 3) the reticle and wafer scan
synchronization signatures. For 4x nm integrated circuits, these scanner impacts modify the
critical dimensions of the pattern images at a level comparable to the required image tolerances.
Therefore, to reach the required accuracy, OPC models have to embed the scanner illuminator,
projection lens, and synchronization signatures.
To study their effects on imaging, we set up imaging models with and without scanner
signatures, and we used them to predict OPEs and to conduct OPC of a poly gate level of 4x
nm flash memory. This report presents an analysis of the scanner signature impacts on the OPEs and
OPC of critical patterns in the flash memory gate levels.
As the technology shrinks toward the 65nm node and beyond, Optical Proximity Correction (OPC) becomes more
important to ensure proper printability of high-performance integrated circuits. This correction involves
geometrical modifications to the mask polygons to account for light diffraction and etch biasing. Model-based OPC has
proven to be a convenient, accurate, and efficient methodology. In this method, raw calibration data are measured from
the process. These data are used to build a VT5 resist model [1] that accounts for all proximity effects attendant to
the lithography process. To ensure the reliability of the calibrated VT5 model, these data must be broad in the image
parameter space (IPS) to account for the different one-dimensional and two-dimensional features of the design intent.
Failure to provide sufficient IPS coverage (i.e. to mimic the design intent) during model calibration could result in
marginalizing the VT5 model during OPC, yet it is difficult to judge when the data volume is large enough to safely
interpolate and extrapolate the design intent. In this paper we introduce a new metric called Safe Interpolation Distance
(SID). This multi-dimensional metric can be used to automatically detect the portions of the target
design that are not well covered by the desired VT5 model.
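The abstract does not disclose how SID is actually computed; as a rough illustration of the idea only, the sketch below flags design fragments whose aerial-image parameters sit far from every calibration structure in a normalized image parameter space (the parameter names, the normalization, and the nearest-neighbor distance are all assumptions, not the paper's formulation).

```python
# Illustrative sketch only: each calibration pattern and each design fragment is
# reduced to a vector of aerial-image parameters (e.g. Imax, Imin, slope), and a
# fragment is flagged when its nearest calibration neighbor exceeds a chosen
# cutoff in that normalized parameter space.
import numpy as np

def flag_uncovered(design_ips, calib_ips, cutoff):
    """Return indices of design fragments whose nearest calibration
    pattern (in normalized image-parameter space) is farther than `cutoff`."""
    scale = calib_ips.std(axis=0) + 1e-12                 # normalize each parameter axis
    d = (design_ips[:, None, :] - calib_ips[None, :, :]) / scale
    nearest = np.sqrt((d ** 2).sum(axis=2)).min(axis=1)   # nearest-neighbor distance
    return np.where(nearest > cutoff)[0], nearest

# Example: 3 calibration patterns and 2 design fragments in (Imax, Imin, slope) space
calib = np.array([[0.80, 0.10, 2.5], [0.60, 0.20, 1.8], [0.95, 0.05, 3.1]])
design = np.array([[0.78, 0.12, 2.4], [0.40, 0.35, 0.9]])
idx, dist = flag_uncovered(design, calib, cutoff=1.0)
print(idx, dist)   # the second fragment is poorly covered and would be flagged
```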
As design rules shrink, there is an unavoidable increase in the complexity of OPC/RET schemes required to enable
design printability. These complex OPC/RET schemes have been facilitating unprecedented yield at k1 factors
previously deemed "unmanufacturable", but they increase the mask complexity and production cost, and can introduce
yield-detracting errors. The most common errors are found in OPC design itself, and in the resulting patterning
robustness across the process window. Two factors in the OPC design process that contribute to these errors are a) that
2D structures used in the design are not sufficiently well-represented in the OPC model calibration test pattern suite, and
b) that the OPC model calibration is done only at the nominal process settings and not across the entire focus-exposure
window.
This work compares two alternative methods for calibrating OPC models. The first method uses a traditional industry
flow for making CD measurements on standard calibration target structures. The second method uses 2D contour
profiles extracted automatically by the CD-SEM over varying focus and exposure conditions. OPC models were
developed for aggressive quadrupole illumination conditions (k1=0.35) used in 65nm- and 45nm-node logic gate
patterning. Model accuracy improvement using 2D contours for calibration through the process window is
demonstrated. Additionally, this work addresses the issues of automating the contour extraction and calibration process,
reducing the data collection burden and improving calibration cycle time.
Process models are responsible for predicting the latent image in the resist in a lithographic process. In order for
the process model to calculate the latent image, the aerial image at each layout fragment is evaluated
first and some aerial image characteristics are extracted. These parameters are passed to the process model to
calculate the wafer latent image. The process model returns a threshold value that indicates the position of the latent
image inside the resist; the accuracy of this value depends on the calibration data that were used to build the process
model in the first place.
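As a minimal illustration of the threshold idea described above (an assumed constant-threshold form, not the vendor's actual resist model), the predicted resist edge can be located where the simulated aerial-image cutline crosses the threshold returned by the process model:

```python
# Sketch under a simple constant-threshold assumption: given the threshold the
# process model returns for a fragment, the predicted resist edge is where the
# aerial-image intensity cutline crosses that threshold (linear interpolation).
import numpy as np

def edge_position(x_nm, intensity, threshold):
    """First position where the aerial-image cutline crosses `threshold`."""
    above = intensity >= threshold
    i = np.argmax(above[1:] != above[:-1])                   # index of the first crossing
    x0, x1, i0, i1 = x_nm[i], x_nm[i + 1], intensity[i], intensity[i + 1]
    return x0 + (threshold - i0) * (x1 - x0) / (i1 - i0)

x = np.linspace(-100, 100, 201)
profile = 0.5 * (1 + np.tanh(x / 25.0))                      # toy intensity cutline
print(edge_position(x, profile, threshold=0.3))              # predicted edge, in nm
```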
The calibration structures used in building the models are usually gathered in a single layout file called the test pattern.
Raw data from the lithographic process are measured and attached to their corresponding structures in the test pattern;
these data are then fed into the model calibration flow.
In this paper we present an approach to automatically detect patterns that are found in real designs and
differ considerably in aerial image parameters from the nearest test pattern structure, and to repair the test patterns to
include these structures. This detect-and-repair approach guarantees accurate prediction of the different layout fragments
and therefore correct OPC behavior.
In order to achieve the necessary OPC model accuracy, the requisite number of SEM CD measurements has
exploded with each technology generation. At 65 nm and below, the need for OPC and/or manufacturing
verification models for several process conditions (focus, exposure) further multiplies the number of
measurements required. SEM-contour based OPC model calibration has arisen as a powerful approach to
deliver robust and accurate OPC models since every pixel now adds information for input into the model,
substantially increasing the parameter space coverage. To date, however, SEM contours have been used to
supplement the hundreds or thousands of discrete CD measurements to deliver robust and accurate models.
While this is still perhaps the optimum path for high accuracy, there are some cases where OPC test
patterns are not available, and the use of existing circuit patterns is desirable to create an OPC model.
In this work, SEM contours of in-circuit patterns are utilized as the sole data source for OPC model
calibration. The use scenario involves a 130 nm technology that was initially qualified for production with
rule-based OPC but is shown to benefit from model-based OPC. In such a case, sub-nanometer
accuracy is not required, and in-circuit features can enable rapid development of sufficiently accurate
models to provide improved process margin in manufacturing.
Lithography models for leading-edge OPC and design verification must be calibrated with empirical data, and this data is traditionally collected as a one-dimensional quantification of the features acquired by a CD-SEM. Two-dimensional proximity features such as line-end, bar-to-bar, or bar-to-line are only partially characterized because of the difficulty in transferring the complete information of a SEM image into the OPC model building process. A new method of two-dimensional measurement uses the contouring of large numbers of SEM images acquired within the context of a design based metrology system to drive improvement in the quality of the final calibrated model.
Hitachi High-Technologies has continued to develop a fully automated EPE measurement and contouring function based on the design layout and the detected edges of the SEM image. This function can measure edge placement error everywhere in a SEM image and pass the result as a design layout (GDSII) into the Mentor Graphics model calibration flow. Classification of the critical design elements using tagging scripts is used to weight the critical contours in the evaluation of model fitness.
During placement of the detected SEM edges into the coordinate system of the design, coordinate errors are inevitably introduced because of pattern matching errors. Also, line edge roughness in 2D features introduces noise that is large compared to the model building accuracy requirements of advanced technology nodes. This required the development of contour averaging algorithms: contours are acquired from multiple SEM images of a feature and averaged before being passed into the model calibration. This function has been incorporated into the prototype Calibre Workbench model calibration flow.
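A minimal sketch of such contour averaging, assuming the contours have already been aligned to the design coordinate system (this is not the Hitachi algorithm, whose details are not given here): each contour is resampled at common arc-length positions and the samples are averaged point-wise to suppress line-edge-roughness noise.

```python
# Sketch of point-wise contour averaging for repeated SEM images of one feature.
import numpy as np

def resample(contour, n=256):
    """Resample an (N, 2) open contour at n equally spaced arc-length positions."""
    seg = np.linalg.norm(np.diff(contour, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])              # cumulative arc length
    t = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(t, s, contour[:, k]) for k in (0, 1)])

def average_contours(contours, n=256):
    """Point-wise average of several aligned contours of the same feature."""
    return np.mean([resample(c, n) for c in contours], axis=0)
```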
Based on these methods, experimental data are presented detailing the model accuracy of a 45nm immersion lithography process using traditional 1D calibration only and a hybrid model calibration using SEM image contours and 1D measurement results. Error sources in the contouring are assessed and reported, including systematic and random variation in the contouring results.
The model calibration process, in a resolution enhancement technique (RET) flow, is one of the most
critical steps towards building an accurate OPC recipe. RET simulation platforms use models for predicting
latent images in the wafer due to exposure of different design layouts. Accurate models can precisely
capture the proximity effects for the lithographic process and help RET engineers build the proper recipes
to obtain high yield. To calibrate OPC models, test geometries are created and exposed through the
lithography environment that we want to model, and metrology data are collected for these geometries.
These data are then used to tune or calibrate the model parameters. Metrology tools usually provide critical
dimension (CD) data rather than edge placement error (EPE, the displacement between the polygon edge and the resist
edge) data; however, model calibration requires EPE data for simulation. To work around this problem, only
symmetrical geometries are used since, under this constraint, the EPE can easily be extracted from CD measurements.
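For a symmetric structure, the relation alluded to here is simply that the CD error is shared equally by the two edges, so each edge placement error is half the measured CD difference (a sketch of the implied relation, not a formula quoted from the paper):

```latex
\mathrm{EPE} \;=\; \frac{CD_{\text{wafer}} - CD_{\text{design}}}{2}
% e.g. a drawn 90 nm line that measures 96 nm on wafer gives EPE = +3 nm per edge
```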
In real designs, one is more likely to encounter asymmetrical structures as well as complex 2D structures that
cannot easily be made symmetrical, especially at the 65nm technology node and beyond.
The absence of 2D and asymmetric test structures in the calibration process requires the models to
interpolate or extrapolate the EPEs for these structures in a real design.
In this paper we present an approach to extract the EPE information from both SEM images and contours
extracted by the metrology tools for structures on test wafers, and to use it directly in the calibration of a
55nm poly process. These new EPE structures mimic the complexity of real 2D designs, and each
of them can be individually weighted according to the data variance. Model accuracy is then
compared to the conventional method of calibration using symmetrical data only. The paper also illustrates
the ability of the new flow to extract from wafer data more accurate measurements that are more immune to
errors than those of the conventional method.
The fate of optical-based lithography hinges on the ability to deploy viable resolution enhancement techniques (RET).
One such solution is double patterning (DP). Like the double-exposure technique, double patterning decomposes
the design onto two masks to relax the pitch; unlike double exposure, however, it
requires an additional develop and etch step, which eliminates the resolution degradation due to the cross-coupling that
occurs in the latent images of multiple exposures. This additional etch step is worth the effort for those looking for an
optical extension [1]. The theoretical k1 for a double-patterning technique of a 32nm half-pitch (HP) design for a
1.35NA 193nm imaging system is 0.44 whereas the k1 for a single-exposure technique of this same design would be 0.22
[2], which is sub-resolution. There are other benefits to the DP technique such as the ability to add sub-resolution assist
features (SRAF) in the relaxed pitch areas, the reduction of forbidden pitches, and the ability to apply mask biases and
OPC without encountering mask constraints.
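The quoted k1 values follow from the standard scaling relation k1 = HP x NA / lambda, with double patterning relaxing the printed half-pitch from 32 nm to 64 nm; a quick check (a worked example, not taken from the paper):

```python
# k1 = HP * NA / wavelength for a 1.35 NA, 193 nm immersion system
NA, wavelength = 1.35, 193.0
for hp in (32.0, 64.0):   # single-exposure half-pitch vs. relaxed double-patterning half-pitch
    print(f"HP = {hp:.0f} nm -> k1 = {hp * NA / wavelength:.3f}")
# HP = 32 nm -> k1 = 0.224   (below the single-exposure limit, i.e. sub-resolution)
# HP = 64 nm -> k1 = 0.448
```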
As with AltPSM and SRAF techniques, one of the major barriers to widespread deployment of double patterning for
random logic circuits is design compliance with split layout synthesis requirements [3]. Successful implementation of
DP requires the evolution and adoption of design restrictions through specifically tailored design rules.
The deployment of double patterning does spawn a couple of issues that would need addressing before proceeding into a
production environment. As with any dual-mask RET application, there are the classical overlay requirements between
the two exposure steps and there are the complexities of decomposing the designs to minimize the stitching but to
maximize the depth of focus (DoF). In addition, the location of the design stitching would require careful consideration.
For example, a stitch in a field region or on wider lines is preferred over a transistor region or narrower lines. The EDA
industry will be consulted for sound automated solutions that resolve double-patterning sensitivities and go
beyond this by coupling its model-based and process-window applications.
This work documented the resolution limitations of single-exposure and double-patterning techniques with the latest hyper-NA
immersion tools and fully optimized source conditions. It demonstrated the best known methods to improve design
decomposition in an effort to minimize the impact of mask-to-mask registration and process variance. These EDA
solutions were further analyzed and quantified utilizing a verification flow.
Conventional site-based model calibration approaches have worked well from the 180nm down to the 65nm technology node, but with the first 45nm technology nodes rapidly approaching, site-based model calibration techniques may not capture the details contained in these 2D-intensive designs. Due to the compaction of designs, we have slowly progressed from 1D-intensive gates, which were site-based friendly, to very complex and sometimes ornate 2D gate regions. To compound the problem, these 2D-intensive gate regions are difficult to measure, resulting in metrology-induced error when attempting to add these regions to the model calibration data. To achieve the sub-nanometer model accuracy required at this node, a model calibration technique must be able to capture the curvature induced by the process and the design in these gate regions. A new approach to model calibration has been developed in which images from a scanning electron microscope (SEM) are used together with the conventional site-based data to calibrate models, instead of the traditional single critical dimension (CD) approach. The advantage of the SEM-image model calibration technique is that every pixel in the SEM image contributes CD information, improving model robustness. The ornate gate regions can now be utilized as calibration features, allowing the fine curvature in the design to be captured.
This paper documents the issues of the site-based model calibration technique at the 45nm technology node and beyond. It also demonstrates the improvement in model accuracy for critical gate regions over the traditional modeling technique, and it shows the best known methods to achieve the utmost accuracy. Lastly, this paper shows how SEM-based modeling quantifies modeling error in these complex 2D regions.
One of the enabling RET candidates for robust 45 nm imaging is high-transmission (20-30%) EAPSM masks. However, the effectiveness of these masks is strongly affected by electromagnetic field (EMF) effects that are ignored in most commercial full-chip OPC applications, which rely on the Kirchhoff approximation. This paper utilizes new commercial software to identify and characterize points in a design that are especially sensitive to these EMF effects. Conventional 6% and 30% high-transmission photomasks were characterized by simulation and compared with experimental results. We also explored, via a simulation-driven design of experiments, the impact of mask variations in transmission, phase, and SRAF placement and size on imaging capability. The simulations are confirmed by producing a photomask that includes the experimental variations and printing the mask to silicon. The final analysis of the data includes exact mask measurements to confirm that the simulation assumptions of mask stack and phase are matched.
KEYWORDS: Optical proximity correction, Photomasks, Data modeling, Back end of line, Semiconducting wafers, Visualization, Databases, Scanning electron microscopy, Data integration, Metals
SMIC is a pure-play IC foundry, and in a foundry culture turn-around time is the fabs' foremost concern. Aggressive tape-out schedules require a significant reduction of the GDS-to-mask flow run time. The objective of this work is therefore to evaluate the runtime performance of an OPC methodology and an integrated mask data preparation flow via a so-called 1-IO-tape-out platform, and along the way to achieve a fully automated OPC/MDP flow for production. For the evaluation we chose BEOL layers, since they are hit hardest by runtime demands: unlike FEOL, where for example between the poly and contact layers there are still some non-critical layers and the OPC mask-making and wafer schedules are not so tight, the BEOL critical-layer OPC masks (M2, V2, then M3, V3, and so on) come one after another continuously. The integrated flow we evaluated included four metal layers with model-based OPC and six via layers with rule-based OPC. Our definition of success for this work is a runtime performance improvement of at least 2x; at the same time, model accuracy cannot be sacrificed, so maintaining equal or better model accuracy and OPC/mask-data output quality is also a must. For MDP, we also tested the advantage of the OASIS format compared with GDS.
Performing a thorough source optimization during process development is becoming more critical as we move to leading-edge technology nodes. With each new node the acceptable process margin continues to shrink as a result of lowering k1 factors. This drives the need for thorough source optimization prior to locking down a process in order to attain the maximum common depth of focus (DOF) the process will allow. Optical proximity correction (OPC) has become a process-enabling tool in lithography by providing a common process window for structures that would otherwise not have overlapping windows. But what effect does this have on the source optimization? With the introduction of immersion lithography there is yet another parameter, namely source polarization, that may need to be included in an illumination optimization process. This paper explored the effect polarization and OPC have on illumination optimization. The Calibre ILO (Illumination Optimization) tool was used to perform the illumination optimization and provided plots of DOF vs. various parametric illumination settings. This was used to screen the various illumination settings for the one with optimum process margins. The resulting illumination conditions were then implemented and analyzed at the full-chip level. Based on these results, a conclusion was made on the impact source polarization and OPC would have on the illumination optimization process.
With the advent of the first immersion and hyper-NA exposure tools, source polarization quality will become a hot topic. At these oblique incident angles, unintentional source polarization could result in the intensity loss of diffraction orders possibly inducing resolution or process window loss. Measuring source polarization error on a production lithographic exposure tool is very cumbersome, but it is possible to reverse engineer any source error similarly to what has been accomplished with intensity error. As noted in the intensity maps from the source illumination, it is not safe to assume an ideal or binary source map, so model fitness is improved by emulating the real error. Likewise, by varying the source polarization types (TE, TM, Linear X and Linear Y) and ratios to obtain improved model fitness, one could deduce the residual source polarization error. This paper will show the resolution and process window gain from utilizing source polarization in immersion lithography. It will include a technique demonstrating how to extract source polarization error from empirical data using the Calibre model and will document the modeling inaccuracy from this error.
With the advent of immersion lithography and high numerical aperture (NA) and hyper-NA (NA > 1.0) exposure tools comes the task of understanding the impact of polarization and possibly how to master these effects for further resolution enhancements. In the past, the lithographic community has for the most part been able to ignore the polarization incident to the mask, the polarization induced by 3D mask effects, and any residual polarization provided by the pupil, but with the combination of these high-NA exposure tools and the use of extreme off-axis illumination techniques, neglecting these polarization effects could be disastrous. Previous works have rigorously accounted for the polarization influences from the illumination source and within a thin film for an immersion and dry process using the Calibre vector-diffraction model [1-2]. This paper will expand upon this study to include the mask and pupil polarization effects from the first-order perspective and from the higher-order interactions with the four polarizations commonly found within a lithographic exposure system. It will propose possible resolution enhancement techniques by manipulating polarization in the optical path and at the mask in a hyper-NA exposure environment.
The era of week-long turn around times (TAT) and half-terabyte databases is at hand as seen by the initial 90 nm production nodes. A quadrupling of TAT and database volumes for the subsequent nodes is considered to be a conservative estimate of the expected growth by most mask data preparation (MDP) groups, so how will fabs and mask manufacturers address this data explosion with a minimal impact to cost? The solution is a multi-tiered approach of hardware and software. By shifting from costly Unix servers to cheaper Linux clusters, MDP departments can add hundreds to thousands of CPUs at a fraction of the cost. This hardware change will require the corresponding shift from multithreaded (MT) to distributed-processing tools or even a heterogeneous configuration of both. Can the EDA market develop the distributed-processing tools to support the era of data explosion? This paper will review the progression and performance (run time and scalability) of the distributed-processing MDP tools (DRC, OPC, fracture) along with the impact on hierarchy preservation. It will consider the advantages of heterogeneous processing over homogeneous. In addition, it will provide insight into potential non-scalable overhead components that could eventually exist in a distributed configuration. Lastly, it will demonstrate the cost-of-ownership aspect of the Unix and Linux platforms with respect to targeting TAT.
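As a rough illustration of how a non-scalable overhead component caps distributed-processing gains (an Amdahl's-law back-of-the-envelope, not a result from the paper):

```python
# Illustration only: Amdahl's law shows how even a small non-distributable
# fraction of the runtime limits the speedup of a distributed MDP flow,
# no matter how many Linux cluster CPUs are added.
def speedup(cpus, serial_fraction):
    """Ideal speedup when `serial_fraction` of the runtime cannot be distributed."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cpus)

for n in (16, 64, 256, 1024):
    print(n, round(speedup(n, 0.02), 1))   # assume 2% serial overhead
# 16 -> 12.3, 64 -> 28.3, 256 -> 42.0, 1024 -> 47.7: scaling saturates well below n
```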
Lithographers face many hurdles to achieve the ever-shrinking process design rules (PDRs). Proximity effects are becoming more and more an issue requiring model-based Optical Proximity Correction (OPC), sub-resolution assist features, and properly tuned illumination settings in order to minimize these effects while providing enough contrast to maintain a viable process window. For any type of OPC application to be successful, a fundamental illumination optimization must first be completed. Unfortunately, the once trivial illumination optimization has evolved into a major task for ASIC houses that require a manufacturable process window for isolated logic structures as well as dense SRAM features. Since these features commonly appear on the same reticle, today’s illumination optimization must look at “common” process windows for multiple cutlines that include a variety of different feature types and pitches. This is a daunting task for the current single-feature simulators and requires a considerable amount of simulation time, engineering time, and fab confirmation data in order to come up with an optimum illumination setting for such a wide variety of features. An internal Illumination Optimization (ILO) application has greatly simplified this process by allowing the user to optimize an illumination setting by simultaneously maximizing the “combined” DOF (depth of focus) over multiple cutlines (simulation sites). Cutlines can be placed on a variety of structures in an actual design as well as several key pitches. Any number of the cutlines can be constrained to the GDS-drawn CD (critical dimension) while others can be allowed to “float” with pseudo-OPC, allowing co-optimization of the illumination setting with any OPC that may be applied in the final design. The automated illumination optimization is then run using a tuned model. Output data is a suggested illumination setting with supporting data used to formulate the recommendation. This paper will present the multi-cutline ILO process and compare it with the work involved to do the same optimization using a single-feature simulator. Examples will be shown where multi-cutline ILO was able to resolve hard annular aberrations while maintaining the DOF.
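Conceptually, the multi-cutline optimization amounts to searching the source parameter space for the setting that maximizes the worst-case (common) DOF across all cutlines; the sketch below is only a schematic brute-force version of that idea (the function names, the annular parameter grid, and the simulate_dof callback are all assumptions, not the Calibre ILO interface).

```python
# Schematic multi-cutline source optimization: pick the annular setting whose
# *minimum* DOF over all cutlines (the common window) is largest.
import itertools

def optimize_source(cutlines, simulate_dof):
    """`simulate_dof(setting, cutline)` stands in for a lithography simulator
    returning the depth of focus of one cutline under one source setting."""
    best_setting, best_common_dof = None, -1.0
    for s_out, s_in in itertools.product((0.7, 0.8, 0.9), (0.4, 0.5, 0.6)):
        if s_in >= s_out:                      # skip non-physical annular settings
            continue
        common_dof = min(simulate_dof((s_out, s_in), c) for c in cutlines)
        if common_dof > best_common_dof:
            best_setting, best_common_dof = (s_out, s_in), common_dof
    return best_setting, best_common_dof
```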
Dark field (i.e. hole and trench layer) lithographic capability is lagging that of bright field. The most common dark field solution utilizes a biased-up, standard 6% attenuated phase shift mask (PSM) with an under-exposure technique to eliminate side lobes. However, this method produces large optical proximity effects and fails to address the huge mask error enhancement factor (MEEF) associated with dark field layers. It also neglects to provide a dark field lithographic solution beyond the 130nm technology node, which must serve two purposes: 1) to increase resolution without reducing depth of focus, and 2) to reduce the MEEF. Previous studies have shown that by increasing the background transmission in dark field applications, a corresponding decrease in the MEEF was observed. Nevertheless, this technique creates background leakage problems not easily solved without an effective opaqueing scheme. This paper will demonstrate the advantages of high transmission lithography with various approaches. By using chromeless dark field scattering bars around contacts for image contrast and chromeless diffraction gratings in the background, high transmission dark field lithography is made possible. This novel layout strategy combined with a new, very high transmission attenuating layer provides a dark field PSM solution that extends 248nm lithography capabilities beyond what was previously anticipated. It is also more manufacturing-friendly in the mask operation due to the absence of tri-tone array features.
KEYWORDS: Photomasks, Optical proximity correction, Reticles, Data modeling, Process modeling, Semiconducting wafers, Scanning electron microscopy, Image processing, Etching, Deep ultraviolet
As critical dimensions (CDs) approach (lambda)/2, the use of optical proximity correction (OPC) relies heavily on the ability of the mask vendor to resolve the OPC structures consistently. When an OPC model is generated the reticle and wafer processing errors are merged, quantified, and fit to a theoretical model. The effectiveness of the OPC model depends greatly on model fit and therefore on consistency in the reticle and wafer processing. Variations in either process can 'break' the model, resulting in the wrong corrections being applied. Work is being done in an attempt to model the reticle and wafer processes separately as a means to allow an OPC model to be implemented in any mask process. Until this is possible, reticle factors will always be embedded in the model and need to be understood and controlled. Reticle manufacturing variables that affect OPC models are exposure tool resolution, etch process effects, and process push (pre-bias of the fractured data). Most of the errors from these reticle-manufacturing variables are seen during model generation, but some regions, such as the extremes of line ends, are not and fail to be accounted for. Since these extreme regions of the mask containing the OPC have a higher mask error enhancement factor (MEEF) than the rest of the mask, controlling mask-induced variables is even more important. This paper quantifies the reticle error between different write tools (g-line vs. i-line vs. DUV lasers) and shows the effects reticle processing has on OPC model generation. It also depicts, through reticle modeling and SEM images, which structures are more susceptible to reticle error than others.
To meet the demands of ever-shrinking technologies, design engineers are embedding rule-based OPC (Optical Proximity Correction) or hand-applied OPC into bit-cell libraries. These libraries are then used to generate other components on a chip. This creates problems for the end users, the photolithographers. Should the photolithographer change the process used to generate the simulations for the embedded OPC, the process can become unstable. The temptation to optimize these shrinking cells with embedded adjustments can be overcome by other methods. Manually increasing fragmentation or manually freezing portions of bit cells can provide the same level of accuracy as a well-simulated embedded solution, so the model-based OPC generated by the end user can be applied, tolerating process or illumination changes. Manually freezing portions of a bit cell can assist in optimization by blocking larger features from receiving a model-based solution, whereas increased fragmentation augments the model-based application. Freezing contact or local interconnect landing sites on poly, for example, would allow the model-based OPC to optimize the poly over the active regions where transistor performance is vital. This paper documents the problems seen with embedding OPC and the proper ways to resolve them. It will provide insight into embedded OPC removal and replacement. Simulations and empirical data document the differences seen between embedded-OPC bit cells and fragment-optimized bit cells.
Deep ultraviolet (DUV) bottom anti-reflective coating (BARC)-to-resist compatibility is a key component in process optimization. In addition to the reduction of optical interference effects, BARCs also improve CD uniformity by preventing substrate contamination. However, if the BARC is not compatible with the resist, it can create adverse effects. If the acidity level of the BARC is not tuned to the resist, for example, the profiles will foot or undercut, and if the BARC-to-resist developer interactions are not considered, high levels of post-develop defects will most likely occur. Etch selectivity, topography conformality, and bowl/drain compatibility are other factors to consider when selecting a BARC. This paper follows the progression of the leading DUV BARCs for acetal-based resist systems and addresses the problems that can be encountered when implementing a BARC process. From DUV32 to the topography-conforming DUV42 and finally to the profile-enhancing DUV44, the 248 nm BARCs are continually evolving to resolve BARC-to-resist compatibility issues.
With the large field sizes scanners offer today, lens mapping (dense across-the-field CD measurements to quantify illumination and coherency aberrations) requires an extensive number of line width measurements to be taken for accurate lens evaluation. There are concerns that the accuracy required for field mapping may not be achievable with a top-down SEM, pushing the industry toward electrical CD (ECD) measurement. However, the etch process required for ECD can induce systematic error, either from the iso-dense etch bias or from the equipment itself. This paper explores the capability of utilizing an in-line CD SEM for extensive CD measurement collection and the requirements to achieve statistically valid data for lens mapping.
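One back-of-the-envelope way to think about "statistically valid" data volume (an assumed standard sample-size relation, not the paper's analysis) is the number of repeat CD measurements needed so the averaged CD is known to a given precision:

```python
# Assumed sample-size estimate: repeats needed so the mean CD is known to within
# +/- half_width_nm at a given confidence, given single-measurement precision sigma_nm.
import math

def measurements_needed(sigma_nm, half_width_nm, z=1.96):   # z = 1.96 for ~95% confidence
    return math.ceil((z * sigma_nm / half_width_nm) ** 2)

print(measurements_needed(sigma_nm=1.5, half_width_nm=0.5))  # ~35 repeats per site
```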