3-dimensional chiplet device architectures are expected to provide improved device performance, efficiency, and footprint beyond what is achievable with 2-dimensional scaling technologies. Thick resist lithography of damascene and plating resists, as well as organic dielectric materials, plays a critical role in chiplet integration. However, thick resist lithography requires viscous resist solutions, specialized tooling, and long processing times, which makes patterning with these resists inherently prone to uniformity issues and has made uniformity a crucial issue for scaling. This work highlights two strategic areas of thick resist patterning development: improved resist coating methods and enhanced focus control during exposure. Herein, we show a track-based method for carefully controlled uniformity of the resist coating thickness, with some sacrifice of throughput. In addition, we show stepper-based focus methods that account for die-level variations in resist and wafer thickness, as well as local topography. Combined, these provide precise cross-wafer control of thick resist dimensions.
We demonstrate the high volume manufacturing feasibility of meeting the 7 nm technology node overlay correction requirement. This state-of-the-art overlay control is achieved by (i) overlay sampling optimization and advanced modeling, (ii) alignment and advanced process control optimization, (iii) multiple-target overlay optimization, and (iv) heating control. We also discuss further improvements in overlay control for the 7 nm technology node and beyond, including computational metrology, overlay matching control between extreme ultraviolet and optical tools, high order alignment correction, tool stability improvement, and advanced heating control.
As technology nodes shrink from 14 nm to 7 nm, the reliability of tool monitoring techniques in advanced semiconductor fabs becomes more critical to achieving high yield and quality. Tool health monitoring methods involve periodic sampling of moderately processed test wafers to detect particles, defects, and tool instability in order to ensure proper tool health. For lithography TWINSCAN scanner tools, the requirements for overlay stability and focus control are very strict. Current scanner tool health monitoring methods include running BaseLiner periodically to ensure proper tool stability. Focus measurement on YIELDSTAR, via real-time or library-based reconstruction of critical dimension (CD) and side wall angle (SWA), has been demonstrated as an accurate metrology input to the control loop. The high accuracy and repeatability of the YIELDSTAR focus measurement provides a common reference for scanner setup and the user process. To further improve metrology and matching performance, Diffraction Based Focus (DBF) metrology, which enables accurate, fast, and non-destructive focus acquisition, has been successfully utilized for focus monitoring and control of TWINSCAN NXT immersion scanners. The optimal DBF target was determined by minimizing dose crosstalk, dynamic precision, set-get residual, and lens aberration sensitivity. By exploiting this new measurement target design, ~80% improvement in tool-to-tool matching, >16% improvement in run-to-run mean focus stability, and >32% improvement in focus uniformity have been demonstrated compared to the previous BaseLiner methodology. Matching of <2.4 nm across multiple NXT immersion scanners has been achieved with the new baseline-reference methodology. This baseline technique has also shown consistent performance in both the conventional BaseLiner low numerical aperture mode (NA=1.20) and an advanced-illumination high NA mode (NA=1.35). This enhanced methodology for focus control and monitoring across multiple illumination conditions opens an avenue to significantly reduce Focus-Exposure Matrix (FEM) wafer exposures for new product/layer best focus (BF) setup.
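As a rough illustration of the matching and stability figures quoted above, the sketch below computes tool-to-tool matching, run-to-run mean focus stability, and within-wafer focus uniformity from synthetic per-wafer focus data. The array shapes, magnitudes, and metric definitions are assumptions for illustration, not the YIELDSTAR/BaseLiner data format.

```python
import numpy as np

# Minimal sketch of the matching/stability metrics quoted above, using
# synthetic per-wafer DBF focus maps (nm). Shapes and metric definitions
# are illustrative assumptions.
rng = np.random.default_rng(0)
n_tools, n_runs, n_points = 4, 20, 200
focus = rng.normal(0.0, 2.0, size=(n_tools, n_runs, n_points))  # nm

# Tool-to-tool matching: spread of the fleet's per-tool mean focus.
tool_means = focus.mean(axis=(1, 2))
matching = tool_means.max() - tool_means.min()

# Run-to-run mean focus stability: 3*sigma of per-run wafer means, per tool.
run_means = focus.mean(axis=2)                  # shape (n_tools, n_runs)
stability = 3.0 * run_means.std(axis=1)

# Focus uniformity: mean within-wafer 3*sigma across runs, per tool.
uniformity = 3.0 * focus.std(axis=2).mean(axis=1)

print(f"tool-to-tool matching : {matching:.2f} nm")
print(f"run-to-run stability  : {np.round(stability, 2)} nm (3 sigma)")
print(f"focus uniformity      : {np.round(uniformity, 2)} nm (3 sigma)")
```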
To further shrink contact and trench dimensions, Negative Tone Development (NTD) has become the de facto process at these layers. The NTD process uses a positive tone resist and an organic solvent-based negative tone developer, which leads to improved image contrast, a larger process window, and a smaller Mask Error Enhancement Factor (MEEF) [1]. NTD masks have high transmission values, leading to lens heating and, as observed here, wafer heating as well. Both lens and wafer heating contribute to overlay error; however, the effects of lens heating can be mitigated by applying lens heating corrections, while no such corrections yet exist for wafer heating. Although the magnitude of overlay error due to wafer heating is low relative to lens heating, ever-tightening overlay requirements imply that the distortions due to wafer heating will quickly become a significant part of the overlay budget. In this work, the effects, analysis, and observations of wafer heating on contact and metal layers of the 14 nm node are presented. On product wafers, wafer heating manifests as a difference in the scan-up and scan-down signatures between layers. An experiment to further understand wafer heating is performed with a test reticle that is used to monitor scanner performance.
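The scan-direction analysis mentioned above can be pictured with the short sketch below, which splits per-field overlay residuals by scan direction and reports the scan-up versus scan-down offset. The field layout, residual magnitudes, and the injected heating signature are all synthetic assumptions.

```python
import numpy as np

# Illustrative sketch (not the authors' analysis code): separate per-field
# overlay-Y residuals by scan direction to expose a wafer-heating induced
# scan-up vs scan-down signature. Data are synthetic.
rng = np.random.default_rng(1)
n_fields = 96
scan_up = np.tile([True, False], n_fields // 2)   # alternating scan direction
dy = rng.normal(0.0, 0.6, n_fields)               # overlay-Y residual, nm
dy[scan_up] += 0.4                                # assumed heating signature

sig_up = dy[scan_up].mean()
sig_down = dy[~scan_up].mean()
print(f"scan-up mean dY  : {sig_up:+.2f} nm")
print(f"scan-down mean dY: {sig_down:+.2f} nm")
print(f"up-down signature: {sig_up - sig_down:+.2f} nm")
```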
With decreasing critical depth of focus (CDOF) for 20/14 nm technology and beyond, focus errors are becoming increasingly critical for on-product performance. Current on-product focus control techniques in high volume manufacturing are limited: with existing methods it is difficult to define measurable focus error and to optimize the focus response on product, due to a lack of credible focus measurement methodologies. Next to developments in scanner imaging and focus control capability and general tool stability maintenance, on-product focus control improvements are also required to meet on-product imaging specifications. In this paper, we discuss focus monitoring, wafer (edge) fingerprint correction, and on-product focus budget analysis through the diffraction based focus (DBF) measurement methodology. Several examples are presented showing improved focus response and control on product wafers. A method is also discussed for a focus interlock automation system on product in a high volume manufacturing (HVM) environment.
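A focus interlock of the kind mentioned above can be reduced to a simple rule that gates a lot on its measured focus residuals. The sketch below is a minimal, hypothetical version: the limit values, function name, and input format are illustrative assumptions, not the automation system described in the paper.

```python
# Hedged sketch of a focus interlock rule for HVM: flag a lot when the
# DBF-measured focus residuals exceed a control limit. Limit values and
# the function signature are illustrative assumptions only.
def focus_interlock(focus_residuals_nm, mean_limit_nm=8.0, range_limit_nm=25.0):
    """Return (ok, reason) for a list of per-point focus residuals in nm."""
    mean_f = sum(focus_residuals_nm) / len(focus_residuals_nm)
    span_f = max(focus_residuals_nm) - min(focus_residuals_nm)
    if abs(mean_f) > mean_limit_nm:
        return False, f"mean focus {mean_f:+.1f} nm exceeds +/-{mean_limit_nm} nm"
    if span_f > range_limit_nm:
        return False, f"focus range {span_f:.1f} nm exceeds {range_limit_nm} nm"
    return True, "within control limits"

ok, reason = focus_interlock([3.1, -2.4, 5.0, 1.2, -4.8])
print(ok, "-", reason)
```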
In recent years, overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. This brings new challenges that must be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and their locations on the wafer, so that the sample plan provides high quality information about the OVL signature on the wafer at an acceptable metrology throughput. We solve this tradeoff between accuracy and throughput with a smart sampling scheme that uses various design-based and data-based metrics to increase model accuracy and reduce model uncertainty, while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner, or process. This sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps maximize model stability and minimize on-product overlay (OPO). Second, higher order overlay models have more degrees of freedom, which increases the capability to correct complicated overlay signatures but also increases sensitivity to process- or metrology-induced noise. This is the bias-variance trade-off: a high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also show higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, by lot-to-lot and wafer-to-wafer model term monitoring to estimate stability, and ultimately by high volume manufacturing tests that monitor OPO with densely measured OVL data.
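The bias-variance trade-off described above can be illustrated with a small numerical experiment: fitting polynomial overlay models of increasing order to noisy synthetic wafer data shows the per-wafer residual (bias) falling while the wafer-to-wafer spread of the fitted terms (variance) grows. Everything in the sketch, including the signature, noise level, and model orders, is an assumption for illustration and not the production APC modeling code.

```python
import numpy as np

# Bias-variance sketch for high-order overlay models: higher polynomial order
# lowers per-wafer residuals but inflates wafer-to-wafer variation of the
# fitted terms. Synthetic data only.
rng = np.random.default_rng(2)

def design(x, y, order):
    """Full 2-D polynomial design matrix up to the given total order."""
    cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols)

x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
true_sig = 2.0 * x - 1.5 * x * y + 0.8 * y**2           # nm, assumed true signature

for order in (1, 3, 5):
    coefs, resids = [], []
    for _ in range(20):                                  # 20 wafers in a lot
        ovl = true_sig + rng.normal(0, 0.5, x.size)      # metrology/process noise
        A = design(x, y, order)
        c, *_ = np.linalg.lstsq(A, ovl, rcond=None)
        coefs.append(c)
        resids.append(np.std(ovl - A @ c))
    coefs = np.array(coefs)
    print(f"order {order}: mean residual {np.mean(resids):.2f} nm, "
          f"max term w2w 3sigma {3 * coefs.std(axis=0).max():.2f}")
```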
As leading edge lithography moves to advanced nodes, CDU requirements have tightened considerably for 14 nm/20 nm technologies and beyond. In this paper, we introduce a methodology for an itemized CDU budget covering intra-field, inter-field, and wafer-to-wafer contributions, as well as scanner versus non-scanner contributors (including detailed analysis of reticle contributors such as CD, absorber thickness, and SWA variation), through a top-down and bottom-up CDU budget breakdown. The aim is to quantify the sources of CD variation with measurable values so that the CDU gain from addressing them can be estimated. The test vehicle used in this experiment is designed according to 14 nm design rules. Measurement structures are densely located in the slit and scan directions on the reticle for the data collection plan. We can therefore expand this methodology to build up a tool reference fingerprint when releasing a new tool fleet. The final goal is to establish a CDU budget breakdown methodology that can be used to identify the root causes of the observed CDU, propose an improvement strategy, and estimate the gain.
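A top-down budget breakdown of the kind described above can be sketched as a variance decomposition of CD data indexed by wafer, field, and site, with the components combined in RSS under an independence assumption. The data, magnitudes, and grouping below are synthetic and purely illustrative of the method, not the 14 nm test-vehicle measurements.

```python
import numpy as np

# Top-down CDU budget sketch: decompose total CD variance into wafer-to-wafer,
# inter-field (field-to-field within wafer), and intra-field (site-within-field)
# components, assuming independent contributors so the variances add.
rng = np.random.default_rng(3)
n_wafers, n_fields, n_sites = 5, 30, 9
cd = (50.0                                                  # nominal CD, nm
      + rng.normal(0, 0.30, (n_wafers, 1, 1))               # wafer-to-wafer
      + rng.normal(0, 0.45, (n_wafers, n_fields, 1))        # inter-field
      + rng.normal(0, 0.60, (n_wafers, n_fields, n_sites))) # intra-field

wafer_mean = cd.mean(axis=(1, 2))
field_mean = cd.mean(axis=2)
w2w = wafer_mean.std()
interfield = (field_mean - wafer_mean[:, None]).std()
intrafield = (cd - field_mean[:, :, None]).std()
total = cd.std()

print(f"w2w 3s        : {3*w2w:.2f} nm")
print(f"inter-field 3s: {3*interfield:.2f} nm")
print(f"intra-field 3s: {3*intrafield:.2f} nm")
print(f"RSS check     : {3*np.sqrt(w2w**2 + interfield**2 + intrafield**2):.2f} "
      f"vs total {3*total:.2f} nm")
```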
We analyze the performance of different customized models on BaseLiner overlay data and demonstrate a reduction in overlay residuals of ~10%. Smart sampling sets were assessed and compared with full wafer measurements. We found that the performance of the grid can be maintained with one-third of the total sampling points, while reducing metrology time by 60%. We also demonstrate the feasibility of achieving time-to-time matching using the scanner fleet manager, and thus of identifying tool drifts even when the tool monitoring controls are within spec limits. In addition, we explore the variation of the scanner feedback constants with illumination source.
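The one-third sampling result can be pictured with the sketch below, which fits the same simple grid model on a full synthetic measurement set and on a random one-third subset, then compares the resulting correction terms. The model form, point count, and noise level are assumptions, not the actual smart-sampling algorithm.

```python
import numpy as np

# Sketch of the sampling-reduction check described above: compare grid model
# terms fitted on all points versus a one-third subset. Synthetic data.
rng = np.random.default_rng(4)
n = 300
x, y = rng.uniform(-1, 1, (2, n))
A = np.column_stack([np.ones(n), x, y, x * y])        # simple 4-term grid model
ovl = A @ np.array([1.0, 2.0, -1.5, 0.5]) + rng.normal(0, 0.4, n)

full_fit, *_ = np.linalg.lstsq(A, ovl, rcond=None)
idx = rng.choice(n, n // 3, replace=False)            # one-third sampling
sub_fit, *_ = np.linalg.lstsq(A[idx], ovl[idx], rcond=None)

print("full vs 1/3 model terms (nm):")
print(np.round(full_fit, 3))
print(np.round(sub_fit, 3))
print(f"max term delta: {np.abs(full_fit - sub_fit).max():.3f} nm")
```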
In this paper we present a comparison study of these two methods, image-based overlay (IBO) and diffraction-based overlay (DBO), on programmed errors of critical layers of the 14 nm technology node. Programmed OVL errors were applied to certain fields during exposure. Full-coverage OVL measurements were performed using both IBO and DBO. Linear, HOPC, and iHOPC models were built from the non-programmed fields and then subtracted from the programmed fields; the reticle contribution was also calculated and subtracted. This study shows that metrology measurement accuracy and stability can be assessed, and that more accurate OVL control is enabled by selecting the better OVL measurement technique.
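The model-subtraction step can be illustrated as follows: a simple linear grid model is fit on the non-programmed fields only and subtracted everywhere, so that the residual on the programmed fields isolates the intentional offset. The field layout, offset magnitude, and three-term model are illustrative assumptions, not the HOPC/iHOPC models used in the study.

```python
import numpy as np

# Illustrative recovery of a programmed overlay error: fit a linear grid
# model (offset plus two linear terms) on non-programmed fields, subtract it
# everywhere, and read the intentional offset from the residuals.
rng = np.random.default_rng(5)
n_fields = 60
fx, fy = rng.uniform(-1, 1, (2, n_fields))            # normalized field centers
programmed = np.zeros(n_fields)
programmed[:6] = 5.0                                  # assumed +5 nm on 6 fields

# True wafer grid: translation + two linear terms, plus noise and the offset.
dx = 1.0 + 2.0 * fx - 0.8 * fy + rng.normal(0, 0.3, n_fields) + programmed

A = np.column_stack([np.ones(n_fields), fx, fy])
normal = programmed == 0
coef, *_ = np.linalg.lstsq(A[normal], dx[normal], rcond=None)
residual = dx - A @ coef

print("mean residual, programmed fields :", round(residual[:6].mean(), 2), "nm")
print("mean residual, non-programmed    :", round(residual[6:].mean(), 2), "nm")
```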
We demonstrate a cost-effective, automated, rule-based sparse sampling method that can detect the spatial variation of overlay errors as well as the overlay signature of the fields. Our technique satisfies the following three rules: (i) a homogeneous distribution of ~200 samples across the wafer, (ii) an equal number of samples in the scan-up and scan-down conditions, and (iii) an equal number of samples on each overlay mark per field. When rule-based sampling is implemented on the two products, the differences between the full wafer map sampling and the rule-based sampling stay within the 3.5 nm overlay spec, with residuals (M+3σ) of 2.4 nm (x) and 2.43 nm (y) for Product A and 2.98 nm (x) and 3.32 nm (y) for Product B.
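A minimal sketch of the three sampling rules follows, assuming a synthetic field layout: every field contributes the same number of marks (rule iii), which also spreads the plan homogeneously across the wafer (rule i), and an alternating scan direction keeps scan-up and scan-down counts balanced (rule ii). This is not the authors' sampling tool, only an illustration of the rules.

```python
import numpy as np

# Rule-based sparse sampling sketch: ~200 samples, equal marks per field,
# balanced scan-up/scan-down. Field layout and mark count are assumptions;
# integer rounding may land slightly below the sample target.
rng = np.random.default_rng(6)
n_fields, n_marks, target = 80, 4, 200
scan_up = np.arange(n_fields) % 2 == 0        # alternating scan direction

per_field = max(1, target // n_fields)        # rule (iii): same count per field
samples = []
for f in range(n_fields):                     # visiting every field keeps the
    marks = rng.choice(n_marks, per_field, replace=False)  # plan homogeneous (rule i)
    samples += [(f, m, scan_up[f]) for m in marks]

n_up = sum(1 for _, _, up in samples if up)   # rule (ii): balanced by construction
print(f"{len(samples)} samples total, {n_up} scan-up / {len(samples) - n_up} scan-down")
```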
Historically, the block layers have been considered "non-critical," as layers requiring less challenging ground rules. However, continuous technology-driven scaling has brought these layers to a point where the resolution, tolerance, and aspect ratio issues of block masks now present significant process and material challenges. Some of these challenges are discussed in this paper.
In recent bulk technology nodes, the deep well implants require an aspect ratio of up to 5:1 in conventional resist, leaving a small process margin against line collapse and/or residue. New integration schemes need to be devised to alleviate these issues, e.g., scaling down the implant energy and the STI deep trench to reduce resist thickness, or new hard mask solutions with high stopping power that can be dry etched.
Underlying topography creates severe substrate reflectivity issues that affect CD, tolerance, profiles, and defectivity. In addition to the CD offset due to the substrate, the implant process induces CD shrinkage and resist profile degradation that affect the devices. Minimizing these effects is paramount for controlling implant level processes and meeting overall technology requirements. These "non-critical" layers will require the development of more complex processes and integration schemes to support future technology nodes. We characterize these process constraints and propose some process/integration solutions for scaling down from the 28 nm to the 20 nm technology node.
In recent years, implant (block) level lithography has been transformed from being widely viewed as non-critical into one of the forefronts of material development. An ever-increasing list of substrates, coatings, and films in the underlying stack clearly dictates the need for new materials and increased attention to this challenging area. Control of substrate reflectivity and critical dimension (CD) on topography has become one of the key challenges for block level lithography and is required in order to meet the aggressive requirements of 32 nm technology development and beyond.
Simulation results for wet-developable bottom anti-reflective coatings (dBARCs) show better reflectivity control on topography than conventional top anti-reflective coatings (TARCs), and make a convincing statement as to the viability of dBARC as a working solution for block level lithography.1 A wet-developable BARC by definition offers substrate reflectivity and resist adhesion control; however, there is a need to better understand the fundamental limitations of the dBARC process in comparison to the TARC process. In addition, some specific niche dBARC applications, such as facilitating adhesion to challenging substrates like the capping layers in the high-k metal gate (HK/MG) stack, can also be envisioned as the most imminent dBARC applications.2 However, most of the engineering community remains hesitant to use dBARC in production, constrained by uncertainties about its robustness and a lack of production experience with dBARC.
This work is designed to inspire more confidence in the potential use of this technology. Its objective is to describe the testing of one dBARC material, a non-photosensitive type, and its implementation on 32 nm logic devices. The comparison between the dBARC and TARC processes evaluates the impacts of dBARC use in the lithographic process, with special attention to OPC behavior and reflectivity for controlling CD uniformity. This work also shows the advantages and future challenges of the dBARC process with several 248 nm and 193 nm resists on integrated wafers featuring shallow trench isolation (STI) and poly gate pattern topography.
Semiconductor manufacturers are in the midst of development for the next technology node, C045 (65 nm half-pitch). The difference this time is that the heavy lifting is being done while swimming. Generally, for the C065 node (hp90), critical layers will be processed using 193 nm scanners with numerical apertures up to 0.85. It is also clear that the capabilities and potential benefits of immersion lithography (at this wavelength and NA) should be examined, in addition to the development of immersion lithography for the C045 and C032 technology generations. The potential benefits of immersion lithography, increased DOF in the near term and hyper-NA imaging in the next phase, have been widely reported. A strategy of replacing conventional "dry" lithographic process steps with immersion lithographic process steps would allow the benefits of immersion to be realized much earlier. Fully realizing this advantage requires a direct comparison of immersion lithography's benefits against the dry process, which in turn accelerates learning. Such an insertion, however, should be "transparent": the "immersion process" should run with the same reticles (OPC) and resists as the conventional process. To gain this knowledge about immersion processes, we have chosen a path of optimizing and ramping up the lithographic process for the C065 technology node. In this paper, we report on the compatibility of inserting immersion lithography processes into an established C065 process running in a pilot manufacturing line. We present an initial assessment of critical parameters for the implementation of immersion lithography, including OPC compatibility, imaging, process integration, and defectivity, all compared to the dry process of record. Finally, conclusions are drawn on the overall readiness of immersion to support C065 node processing as a direct transfer from dry, and on its extendibility to C045. In this work, the C045 technology node (hp65) is the main target vehicle. However, a successful introduction of immersion technology may allow a strategy change complementary to the previous (C065) technology node (i.e., running C065 immersion in production and benefiting from larger process windows).