
Microlens performance limits in sub-2μm pixel CMOS image sensors

Open Access

Abstract

CMOS image sensors with smaller pixels are expected to enable digital imaging systems with better resolution. When pixel size scales below 2 μm, however, diffraction affects the optical performance of the pixel and, in particular, of its microlens. We present a first-principles electromagnetic analysis of microlens behavior during the lateral scaling of CMOS image sensor pixels. We establish for a three-metal-layer pixel that diffraction prevents the microlens from acting as a focusing element when pixels become smaller than 1.4 μm. This severely degrades performance for on- and off-axis pixels in the red, green and blue color channels. We predict that one-metal-layer or backside-illuminated pixels are required to extend the functionality of microlenses beyond the 1.4 μm pixel node.

©2010 Optical Society of America

1. Introduction

Complementary metal oxide semiconductor (CMOS) technology has become the de facto standard for implementing solid-state image sensors used in high-volume digital imaging systems. Even high-quality imaging applications that traditionally used charge-coupled device (CCD) imagers, such as single-lens reflex (SLR) cameras, are increasingly being equipped with high spatial resolution CMOS image sensors. CMOS technology scaling can improve image sensor performance in a variety of ways. First, scaling can be used to decrease pixel size and improve spatial resolution. Alternatively, scaling can be used to shrink transistor size and increase the photosensitive area in the pixel (fill factor). Different market segments may choose different strategies. On one hand, SLR cameras typically use scaling to increase photosensitive area. Small portable devices, on the other hand, have principally used technology scaling to increase spatial resolution. For example, the camera modules integrated with cellular telephones are being outfitted with higher-resolution CMOS image sensor chips despite the fixed form factor of the module or the fixed die size of the chip.

The scalability of CMOS technology has enabled pixel size to decrease steadily from more than 10 μm to less than 2 μm in the span of a decade [1–4]. During this entire scaling process, the microlens that sits on top of every pixel has remained relatively successful at focusing the incident light onto the photodiode area [5,6]. As pixel size in CMOS image sensors continues to scale below 2 μm [7], while the wavelength of visible light does not, diffraction may call the usefulness of the microlens in sub-2 μm pixels into question.

In this paper, we perform an electromagnetic field optics investigation of the effects that pixel scaling has on the performance of a conventional dielectric microlens and on the optical performance of the image sensor pixel. First, we introduce the CMOS image sensor pixel model and define the performance metrics by which we evaluate pixel performance. Next, we numerically find the optimal microlens design for sub-2 μm pixels with a fixed pixel height that corresponds to a three-metal-layer image sensor pixel. The optical efficiency and optical crosstalk are calculated with a first-principles finite-difference time-domain (FDTD) method, and the performance of the microlens is discussed. Finally, we study the optical performance when we allow the pixel (stack) height to scale as well.

2. Methodology

In this section, we describe the CMOS image sensor pixel model, which consists of a two-dimensional (2D) pixel cross-section with associated geometry and materials. We identify and define normalized optical efficiency (OE) and normalized spatial optical crosstalk (OX) as the performance metrics used to analyze the results. We also provide a detailed description of the electromagnetic simulation technique and pixel analysis procedure that we used.

2.1 Pixel model

The pixel model used in this work consists of a 2D pixel cross-section [Fig. 1(a)] with one horizontal (lateral) and one vertical (axial) dimension. We describe the main components of a pixel in the order in which incident light interacts with them. Light first passes through a plano-convex dielectric microlens that is intended to focus the light on the photosensitive area of the pixel. The microlens is geometrically represented by a truncated curved interface with a fixed radius of curvature and a fixed layer thickness. Underneath the microlens is a silicon oxide layer that provides a suitable surface for microlens formation and separates it from the color filter in the layer below. Below that, a thin silicon nitride passivation layer is followed by a thick oxide that makes up the bulk of the pixel thickness and provides support and isolation for the metal interconnects. These layers are modeled as thin dielectric films with fixed thicknesses, approximated from examples in the literature [8], and should generally agree with actual values for the aspect ratio and light transmission of a three-metal-layer pixel. Finally, at the bottom of the pixel is the silicon substrate, where a photosensitive area is created as part of the CMOS process and where the light is absorbed and converted into an electrical signal. The dielectric material properties are described by refractive indices obtained from tabulated data [9]. For the non-standard materials, specifically the microlens and color filter, approximations are made based on a literature survey [10–13]. We note that only the silicon and the color filter are modeled as absorptive materials. The photodiode (photosensitive area) typically covers only a fraction of the total pixel area (the fill factor). In this paper, we derive upper bounds for optical performance that are set by diffraction, and we therefore assume a fill factor of 100%. Image sensor pixel designs vary greatly, especially in terms of metal interconnects. Since interconnects are typically routed to avoid interaction with light, we omit metal lines in our simulations.


Fig. 1 (a) Two-dimensional (2D) pixel model with layer materials and thicknesses. (b) Electromagnetic calculation showing energy flow toward the photodiode (z-component of the Poynting vector Sz) with significant diffraction effects, reducing optical efficiency (transmission) and leading to spatial crosstalk even at normal incidence. The color scale in panel (b) is nonlinear to bring out detail in the regions of lower energy flow.


2.2 Performance metrics

We introduce two measures of pixel performance to properly evaluate the simulation results [Fig. 1(b)]. Optical efficiency (OE) is defined as the fraction of the optical power incident on the surface of each pixel that reaches the intended photodiode at the silicon substrate [5]. Optical crosstalk (OX) is calculated by averaging two contributions, each of which measures the fraction of the optical power incident on a pixel that reaches the photodiode of one of the adjacent pixels [6]. We further define normalized OE and normalized OX, i.e., OE and OX expressed as a fraction or percentage of the total received power, which means the sum of the normalized OE and twice the normalized OX equals unity. This allows us to separate the diffraction effects due to the small pixel aperture from the interference effects due to the pixel stack. Total received power is defined as the power received at the silicon substrate layer by all pixels (the illuminated pixel and its adjacent pixels).
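
The normalization can be stated compactly. The sketch below is a hypothetical helper, assuming per-pixel photodiode powers are already available from a simulation of the illuminated pixel and its two neighbors; it is an illustration of the bookkeeping, not the authors' code.

import numpy as np

def normalized_oe_ox(p_center, p_left, p_right):
    # p_center, p_left, p_right: steady-state optical power reaching the
    # photodiode of the illuminated pixel and of its two adjacent pixels,
    # each expressed as a fraction of the power incident on one pixel.
    p_total = p_center + p_left + p_right          # total received power
    oe_norm = p_center / p_total                   # normalized OE
    ox_norm = 0.5 * (p_left + p_right) / p_total   # normalized OX (average of the two neighbors)
    # By construction, oe_norm + 2 * ox_norm == 1.
    return oe_norm, ox_norm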

2.3 Finite-difference time-domain method

For our simulations, we use the finite-difference time-domain (FDTD) method [14,15]. FDTD models the propagation of light by discretizing and numerically solving the time-dependent Maxwell's equations in both time and space. It is a powerful tool for explicitly determining the electric and magnetic fields at every point when electromagnetic radiation (light) is incident on a pixel structure. By stepping forward in very short time increments and alternately solving for the electric and magnetic fields, the method reaches a steady-state solution. This sort of explicit calculation is necessary because the dimensions of the pixel structure are well below the scale at which ray optics remains valid [14].
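
As an aside, the alternating (leapfrog) update at the heart of FDTD can be illustrated with a minimal one-dimensional sketch in vacuum; this is not the 2D solver used for the pixel simulations, and the grid length, source position, and step count below are arbitrary illustrative choices.

import numpy as np

# Minimal 1D FDTD (Yee) sketch: Ex and Hy are advanced alternately in time.
eps0, mu0 = 8.854e-12, 4e-7 * np.pi   # vacuum permittivity and permeability
c0 = 2.998e8                          # speed of light (m/s)
dz = 10e-9                            # 10-nm grid, as in the pixel simulations
dt = dz / (2 * c0)                    # time step safely below the 1D Courant limit
nz, nt = 400, 2000                    # illustrative grid length and number of steps

Ex = np.zeros(nz)                     # E sampled on integer grid points
Hy = np.zeros(nz - 1)                 # H sampled half a cell away (staggered grid)
for n in range(nt):
    Hy -= dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])            # update H from the curl of E
    Ex[1:-1] -= dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])     # update E from the curl of H
    Ex[50] += np.sin(2 * np.pi * c0 / 650e-9 * n * dt)    # soft source at 650 nm
# The grid ends are left untreated here; the actual simulations terminate the
# domain with perfectly matched layers and periodic boundaries instead.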

We study pixels from the red, green and blue color channels, since diffraction effects differ across the visible spectrum due to the different wavelengths. We perform separate simulations for each type of color filter. A pixel with a red, green or blue filter is placed in the center of a multi-pixel array with completely absorbing “black” color filters above all other pixels to act as a masking layer. Simulations with a varying number of pixels show that a simulation domain consisting of a five-pixel array with periodic boundary conditions at the left and right edges is a reasonable compromise that limits simulation time while discounting boundary effects. Anisotropic perfectly matched absorbing boundary layers are used at the bottom edge to prevent reflections [16]. The simulation grid size is set to 10 nm, as this is a reasonable minimum element position increment and yields more than 15 steps per wavelength in the highest-index medium. The time step is determined using the Courant-Friedrichs-Lewy condition [14].
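
For example, a quick check of the spatial sampling and of the two-dimensional Courant limit for the stated 10-nm grid might look as follows; the silicon index used here is an assumed illustrative value for 650-nm light.

import numpy as np

dx = dy = 10e-9        # grid size (m)
c0 = 2.998e8           # speed of light in vacuum (m/s)
lam0 = 650e-9          # vacuum wavelength, red channel
n_si = 3.85            # assumed refractive index of silicon near 650 nm

steps_per_wavelength = lam0 / (n_si * dx)
# ~16.9, i.e. more than 15 grid steps per wavelength in the highest-index medium

dt_max = 1.0 / (c0 * np.sqrt(1.0 / dx**2 + 1.0 / dy**2))
# ~2.4e-17 s; the simulation time step must stay below this 2D Courant limit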

The incident light is modeled as a continuous plane wave excitation in air just above the microlens surface. We limit the excitation to a 10-nm range around the center wavelengths, i.e., 646-654 nm for red light centered at 650 nm (and similarly for pixels with green and blue color filters, centered at 555 nm and 450 nm, respectively). This is a typical range of wavelengths used in experimental optical characterization of the red, green and blue color channels. The 10-nm range is sampled in 2-nm steps and the results are incoherently averaged to remove any effects associated with (coherent) interference oscillations in the transmission spectrum. Over this narrow frequency range, the material optical properties are approximately constant, so refractive index data at the center wavelength (650 nm for red, 555 nm for green, and 450 nm for blue) is used for all simulations [9].
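
A hypothetical sketch of this incoherent spectral averaging is given below; run_simulation stands in for a single-wavelength FDTD run and is not part of any actual solver API.

import numpy as np

def band_averaged_oe(run_simulation, center_nm=650.0, half_width_nm=4.0, step_nm=2.0):
    # Sample the narrow band (646-654 nm in 2-nm steps for the red channel),
    # simulate each wavelength independently, and average the resulting powers.
    wavelengths = np.arange(center_nm - half_width_nm,
                            center_nm + half_width_nm + step_nm, step_nm)
    oe_samples = [run_simulation(lam) for lam in wavelengths]
    return np.mean(oe_samples)   # incoherent (power) average suppresses interference ripple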

OE and OX measurements are taken just below the oxide-silicon interface, which we found to provide a good measure of the light propagation in the entire silicon epi-layer. The steady-state electric and magnetic fields are obtained by taking the discrete Fourier transform of the full time trace. The fields are used to calculate the Poynting vector Sz [Fig. 1(b)], which is integrated over the photodiode area to obtain the total steady-state power delivered to the photodiode [17]. We assume 100% quantum efficiency, i.e., complete conversion of photons to electrons, to isolate the optical performance. Both transverse electric (TE, electric field perpendicular to the pixel cross-section) and transverse magnetic (TM, magnetic field perpendicular) simulations are performed for every design. The stated results are the average of the two polarizations, since light from a typical scene is unpolarized.
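
In terms of the frequency-domain fields, the power reaching a photodiode is the integral of the time-averaged Sz over its width; a minimal sketch for the 2D model is shown below, with the complex field arrays assumed to come from the Fourier-transformed time traces.

import numpy as np

def photodiode_power(Ex, Ey, Hx, Hy, dx):
    # Complex phasor fields sampled along the monitor line just below the
    # oxide-silicon interface, restricted to one photodiode (100% fill factor).
    # In 2D, the TE and TM runs each populate only a subset of these components.
    Sz = 0.5 * np.real(Ex * np.conj(Hy) - Ey * np.conj(Hx))   # time-averaged Poynting vector
    return np.sum(Sz) * dx                                    # power per unit depth

# Unpolarized scene light: average the two polarizations, e.g.
# OE = 0.5 * (OE_TE + OE_TM)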

3. Results

We now describe the effects that pixel scaling has on optical performance for both horizontal (pixel size) and vertical (pixel height) scaling of the pixel dimensions. In the horizontal dimension, we scale pixels down from 1.75 μm to 0.97 μm in width. We note that 1.4 to 1.75 μm pixels are the smallest pixels currently in volume production [3,4]. In the vertical dimension, we simulate pixels that correspond to three-, two- and one-metal-layer geometries. We first discuss the procedure for microlens radius optimization and then detail its impact on the OE and OX properties for each analyzed case.

3.1 Dielectric stack tuning

The dielectric stack of a CMOS image sensor pixel consists of a series of dielectric layers above the photodetector. The transmission of the dielectric stack differs between wavelengths due to interference effects. A single layer structure used for the red, green, and blue color channels, which operate at different wavelengths, would therefore yield a different optical efficiency for each channel. In this paper, we want to consider only size-related diffraction effects, and we therefore tune the pixel's dielectric stack to achieve similar OE for the red, green and blue color channels. We use a transfer matrix algorithm [5] to numerically optimize the thicknesses of the top and bottom oxide layers within ±10% of their original values, while preserving the total pixel stack height, with the goal of identical plane-wave transmission for the three pixels. We then create a slightly different structure for each color channel by tuning the thicknesses of the oxide layer and the color filter layer while keeping their combined thickness the same (to achieve the same stack height for all three color channels). While the simulated optical efficiencies of a standard three-metal-layer structure vary over a large range, between 60% and 84% over a broad spectrum, the OE for all three channels can be matched at 52.2% ± 0.1% after this thickness tuning. Similarly, after tuning, the OE for all color channels is 68.6% ± 0.1% in a two-metal-layer structure and 54.3% ± 0.1% in a one-metal-layer structure. We note that we do not optimize for maximum transmission; rather, we tune for equal stack transmission in each case. This process results in a uniform but relatively low optical efficiency, due to the absence of anti-reflection layers at the pixel and silicon interfaces as well as the residual absorption in the color filter.
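
The transfer matrix calculation behind this tuning is standard thin-film optics. The sketch below shows a generic normal-incidence transmittance routine (characteristic-matrix method) under the assumption of non-magnetic layers, with indices and thicknesses supplied by the caller; it is an illustrative implementation, not the authors' specific code.

import numpy as np

def stack_transmittance(n_layers, d_layers, n_in, n_out, lam):
    # n_layers, d_layers: (possibly complex) refractive indices and thicknesses (m)
    # of the layers between the incidence medium (n_in) and the substrate (n_out).
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / lam                       # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])
    t = 2 * n_in / (n_in * B + C)                             # amplitude transmission
    return np.real(n_out) / n_in * np.abs(t) ** 2             # transmitted power fraction

Tuning the top and bottom oxide thicknesses within ±10% then amounts to a small constrained search over this function at each channel's center wavelength.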

We also independently confirm the one-dimensional (1D) transfer matrix calculations with 2D FDTD simulations using a flat microlens layer and periodic boundary conditions to obtain a uniform field distribution. The result is a three-metal-layer pixel structure that exhibits the same optical efficiency (52.2% ± 0.1%) for the red, green and blue color channels after a small adjustment of the color filter absorption coefficient. We use this stack-optimized structure in all the simulations that follow.

3.2 Microlens radius optimization

The microlens on top of each image sensor pixel performs the task of a light concentrator, i.e., it concentrates light onto the photodetector region of the pixel substrate [1]. The traditional dielectric microlens uses surface curvature and refractive index contrast to generate a position-dependent phase delay and turns a plane wave in front of the lens into a converging wave after the lens. The phase distribution in the converging beam is such that it produces peak irradiance near the focal point (in the geometrical optics approximation). To achieve the maximum signal from the photodetector under these assumptions, we set the focal length equal to the distance from the microlens to the silicon surface. In this limit, where only refraction determines the lens behavior, the radius of curvature for a thick plano-convex lens can be written as

$$r = \frac{f\,(n_{\mathrm{ML}} - n_{\mathrm{air}})}{n_{\mathrm{eff}}} \qquad (1)$$
where f is the focal length, nML is the microlens index of refraction, nair is the refractive index of air, and neff is the effective index of the dielectric stack between the microlens layer and the Si layer. A flat lens would have an infinite radius of curvature and, hence, an infinite “focal length”. By reducing the radius to a finite value, we can provide the right amount of curvature to focus the incident light onto the photodetector area. For sub-2 μm image sensor pixels (i.e., pixels only a few wavelengths wide), however, the approximation of light as a ray is no longer valid.
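
As a numerical check of Eq. (1), taking the focal length equal to the 4.04 μm microlens-to-photodiode distance of the three-metal-layer stack (Section 3.5), the 1.53 effective stack index, and an assumed microlens index of about 1.6 approximately reproduces the geometrical-optics radius quoted in Section 3.2.

f = 4.04e-6      # microlens-to-photodiode distance (m), three-metal-layer stack
n_ml = 1.6       # assumed microlens refractive index (illustrative value)
n_air = 1.0
n_eff = 1.53     # effective index of the dielectric stack

r = f * (n_ml - n_air) / n_eff
print(r)         # ~1.58e-6 m, close to the 1.59 um geometrical-optics prediction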

To include both diffraction and refraction effects in the calculation of the optimal microlens radius for pixels of this size, we use a first-principles FDTD electromagnetic field solver. Figure 2 shows the electromagnetic field simulations depicting the energy flow toward the photodiode for 1.75 μm, 1.4 μm, and 0.97 μm three-metal-layer pixels subject to light with a 0° incidence angle and a wavelength of 650 nm.


Fig. 2 Poynting vector plots depicting energy flow toward the photodiode for (a) 1.75 μm, (b) 1.4 μm, and (c) 0.97 μm pixels for light with a 0° incidence angle and a wavelength of 650 nm. Only the center two pixels are shown and the boundaries between different materials are outlined. The color scale is nonlinear to bring out detail in the regions of lower energy flow.


Figure 3 shows the optimized microlens radius for different size pixels of three-metal-layer structures with red, green and blue color filters. To obtain the optimal radius of curvature, we optimize simultaneously for the highest optical efficiency (OE) and the lowest optical crosstalk (OX); the ratio of OX to OE, the crosstalk ratio, is chosen as the figure of merit to minimize. The geometrical optics prediction given by Eq. (1), with an effective index of 1.53 for the dielectric stack, yields a microlens radius of 1.59 μm (black dash-dot line in Fig. 3).
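
Conceptually, the radius optimization is a one-dimensional sweep over candidate radii; in the hypothetical sketch below, simulate_pixel stands in for a full FDTD run of one pixel design and is assumed to return its (OE, OX) pair.

def optimize_microlens_radius(simulate_pixel, radii_um):
    # Evaluate every candidate radius and keep the one with the lowest
    # crosstalk ratio OX/OE (highest OE together with lowest OX).
    results = [(r, *simulate_pixel(r)) for r in radii_um]
    best_radius, _, _ = min(results, key=lambda item: item[2] / item[1])
    return best_radius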


Fig. 3 Optimal microlens radius in a three-metal-layer pixel design for the 1.75, 1.4, 1.2, and 0.97 μm pixel nodes. Curves represent the optimized radii for the microlenses of pixels populating the red (solid line), green (dashed line) and blue (dotted line) color channels. The geometrical optics prediction, which depends on pixel height from microlens to photodiode only, is represented by the black dash-dot line.


We note immediately that all the radii obtained from electromagnetic field optics are larger than the geometrical-optics result. We also observe a distinct behavior depending on pixel size. For pixels larger than 1.4 μm, the optimal microlens radius is around 2 μm and is approximately independent of pixel size. For smaller pixels, the optimal microlens radius increases rapidly. For 1.2 μm and smaller pixels, the optimized microlens radius is very large (larger than 10 μm) and the microlens essentially acts as a flat dielectric layer. Finally, the optimal microlens radius curves are shifted with respect to each other for the different color channels. Image sensor pixels with a red color filter already have a large microlens radius for 1.4 μm and smaller pixels. Pixels in the green and blue color channels exhibit similar behavior starting at around 1.3 μm and 1.2 μm, respectively.

The first finding is that the analytical expression of Eq. (1) for the microlens radius is not applicable in the entire sub-2 μm regime. An approach based purely on geometrical optics and refractive effects is clearly insufficient to describe the physics of the microlens in sub-2 μm pixels; refraction and diffraction effects must be considered simultaneously to understand the microlens behavior. Second, the operation of a microlens separates into two distinct regimes: a refraction-dominated regime and a diffraction-dominated regime.

In the refraction-dominated regime, diffraction from the aperture formed by the color filter is less important than refraction from the microlens curvature and index contrast. This regime holds for large pixel sizes, i.e., 1.75 μm and larger pixels for the red channel, 1.4 μm and larger pixels for the green channel, and 1.2 μm and larger pixels for the blue channel. For pixels in this regime, an optimal microlens radius can be found that decreases OX and increases OE. For example, as shown in Fig. 2(a), the 1.75 μm pixel with a microlens of 2.6 μm radius has its irradiance peak just above the Si layer. The microlens radius for a three-metal-layer pixel is around 2-4 μm and can be optimized using the geometrical optics model as a first-order estimate.

In the diffraction-dominated regime, on the other hand, strong diffraction from the color filter aperture determines the optical power flow pattern and pushes the focal point closer to the lens [18], as shown in Figs. 2(b) and 2(c). A smaller pixel size thus leads to a peak irradiance position that is closer to the aperture, even without a microlens. The optimal microlens radius for a small pixel (such as those at the 0.97 μm node) is therefore very large, eliminating all focusing effects due to refraction. In this regime, the microlens no longer fulfills its optical function and acts, at best, as an index-matching layer.

The transition between the refraction-dominated regime and the diffraction-dominated regime can be identified in Fig. 3 as the point where the microlens curvature transitions from the size-independent behavior to the diverging behavior. This transition occurs at slightly different pixel sizes for the different wavelengths (color channels). Since red light has the longest wavelength, its diffraction effects set in at larger pixel sizes. From a pixel node point of view, we conclude that diffraction starts to overcome refraction for 1.4 μm three-metal-layer pixels and below.

3.3 Optical performance for sub-2 μm pixels with optimized microlens at normal incidence

Figure 4(a) shows the normalized OE versus pixel size for the three color channels (red, green, blue) in a three-metal-layer pixel. We notice that the normalized OE is above 90% for pixels larger than 1.4 μm and decreases rapidly when the pixel size is reduced below 1.4 μm. For a 0.97 μm pixel, the normalized OE has dropped to 60% for the red and green channels and to 75% for the blue channel. Both the order of the normalized OE curves and their scaling behavior corroborate the distinction described in the previous section.


Fig. 4 Plots of (a) normalized optical efficiency (OE) and (b) normalized optical crosstalk (OX) in a three-metal-layer pixel design for the 1.75, 1.4, 1.2, and 0.97 μm pixel nodes. Curves represent normalized OE and OX for pixels in the red, green and blue color channels.


The refraction-dominated regime features a rather flat and high OE and applies from 1.75 μm to 1.4 μm pixels. In this regime, the large pixel size results in smaller diffraction effects from the pixel aperture, which allows the microlens to focus the light onto the photodetector, resulting in a large normalized OE (>90%). The diffraction-dominated regime applies from 1.4 μm to 0.97 μm pixels and is characterized by a rapid drop of the normalized OE. For a 0.97 μm pixel, the normalized OE is only 60%. In this regime, the smaller pixel size leads to larger diffraction effects that dominate the refraction effects from the microlens and lower the normalized OE. Thus, in the diffraction-dominated regime, the ability of the microlens to concentrate or focus the light on the photodetector is severely impaired.

Figure 4(b) shows the normalized OX versus pixel size for three-metal-layer pixels with red, green and blue color filters. The normalized OX remains relatively low (~5%) for pixel nodes larger than 1.4 μm, but increases rapidly to more than 20% when the pixel is scaled down from 1.4 μm to 0.97 μm. In agreement with the observations for normalized OE, we note a wavelength-dependent behavior in which the normalized OX is larger for the red channel than for the green and blue channels. Three-metal-layer pixels cannot be scaled beyond the 1.4 μm node without a serious increase in optical crosstalk.

For the red channel of a 0.97 μm pixel, in particular, the normalized OX is as high as 20%, which means that only 60% of the received red light power is delivered to the correct center pixel while the remaining 40% is diffracted to the two adjacent pixels. If we compare the red channel of a 0.97 μm pixel to that of a 1.75 μm pixel, the OE only drops by a factor of 2 while the OX increases more than 10 times. Strong diffraction from the pixel aperture causes a rapid rise in OX. This calls into serious question the usefulness of a 0.97 μm pixel based on a three-metal-layer design.

3.4 Optical performance for optimized microlens position at oblique incidence

Until now, we have considered the optical performance of the microlens subject to a normally incident plane wave. In this section, we study the microlens performance as a function of pixel size when we optimize the position of the microlens for a plane wave with a given tilt angle. This is important for imaging applications, since light is, more often than not, incident on the image sensor pixels at oblique angles.

In our electromagnetic FDTD calculations, we model an obliquely incident beam by a plane wave that illuminates all five pixels, with the center pixel as the pixel of interest. We select a relatively large 30° angle to illustrate the off-axis behavior for the red color channel, illuminated by light with a wavelength centered at 650 nm. We further assume that the optimal microlens radius for each pixel size is unchanged from its normal-incidence value.

We then optimize the location of the microlens with respect to the pixel center, i.e., the microlens array is shifted toward the incident direction in order to keep the light focused onto the photodetector of the center pixel despite the tilt. The microlens array is shifted in 100 nm increments to find the optimal microlens position, using the ratio of the normalized OE to the normalized OX as the figure of merit. The color filter array must also be shifted; its offset is calculated from the optimal microlens shift by considering a line of sight from the center of the microlens to the center of the photodetector area.
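
The accompanying color filter offset follows directly from the line-of-sight construction described above; a small sketch with assumed, illustrative layer depths is given below.

def color_filter_shift(microlens_shift, z_filter, z_photodiode):
    # Place the color filter on the straight line joining the shifted microlens
    # center (depth 0) to the unshifted photodiode center (depth z_photodiode);
    # depths are measured downward from the microlens (assumed geometry).
    return microlens_shift * (z_photodiode - z_filter) / z_photodiode

# Example with illustrative numbers: a 0.4-um microlens shift, with the filter at
# 1.2 um depth in a 4.04-um stack, gives a ~0.28-um color filter shift.
shift_cf = color_filter_shift(0.4, z_filter=1.2, z_photodiode=4.04)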

Figure 5 shows the typical field distribution for an obliquely incident beam with a 30° angle of incidence for 1.75 μm, 1.4 μm and 0.97 μm pixels. Simulations are performed for three-metal-layer structures and 650-nm light (red color channel). The shifted locations of both the microlens and the color filter array are revealed by the white boundaries in the figure.


Fig. 5 Poynting vector plots depicting energy flow toward the photodiode for (a) 1.75 μm, (b) 1.4 μm, and (c) 0.97 μm pixels for 650-nm light at oblique incidence with a 30° incidence angle. Only the center pixels are shown and the boundaries between different materials are outlined.


The normalized optical efficiency and optical crosstalk for oblique illumination are shown in Fig. 6. The 30° off-axis OE for 1.75 μm pixels suffers only a small reduction compared to the on-axis case. For 0.97 μm pixels, on the other hand, the OE drops significantly from 59% (on-axis) to 43% (30° off-axis). Since the off-axis beam travels over a longer distance, the diffraction effects are expected to be more pronounced. Similarly, the OX increases from a low value of 2% (on-axis) to a still relatively low value of 4% (30° off-axis) for 1.75 μm pixels. For the 0.97 μm pixels, the OX increases to an unacceptable 30%. The simulations clearly show that the optical performance is severely degraded for the smallest 0.97 μm three-metal-layer pixels when they are exposed to off-axis illumination.


Fig. 6 Comparison of on- (0-deg) and off-axis (30-deg) optical performance. Plots of (a) normalized optical efficiency (OE) and (b) normalized optical crosstalk (OX) in a three-metal-layer pixel design for several pixel nodes below 2 μm. The solid black curve and dashed blue curves represent the normalized OE (panel a) and OX (panel b) for the on-axis and off-axis, respectively.


3.5 Optical performance for pixels with different stack height

In this section, we study vertical scaling of the pixel and we assess the optical performance improvements that can be made for pixel nodes below 1.4 μm when designing a two-metal-layer or one-metal-layer pixel structure. In particular, we would like to know if the performance of a 0.97 μm or 1.2 μm pixel can be increased to the level of a 1.75 μm pixel by scaling its height. In the previous sections on a three-metal-layer structure, we assumed a total dielectric stack height between the microlens and photodiode of 4.04 μm. We now reduce that to 3.46 μm for a two-metal-layer structure and 2.74 μm for a one-metal-layer structure. We focus on the optical performance of the red channel, since the blue and green channels with shorter wavelengths are more likely to operate in the refraction-dominated regime for each respective pixel size.

Figure 7 shows the typical field distribution for a normal incident beam onto red channel pixels of 0.97 μm size with varying stack height representative of three-, two-, and one-metal-layer structures.


Fig. 7 Poynting vector plots depicting energy flow toward the photodiode for a 0.97 μm (a) three-metal-layer, (b) two-metal-layer and (c) one-metal-layer pixel subject to light with a 0° incidence angle and a wavelength of 650 nm.


Figure 8 shows the optical performance in terms of normalized OE and OX as a function of stack height for the different pixel nodes. For the largest 1.4 μm pixel, the optical crosstalk is 5% for the three-metal-layer structure, while the OE is 90%. After the dielectric stack is reduced to that of a one-metal-layer structure, the optical crosstalk is reduced to 1%, which is only one fifth of the original optical crosstalk and rivals that of a 1.75 μm pixel. For the smallest 0.97 μm pixel, the normalized OE increases from 58% to 84% and the normalized OX decreases from 21% to 8% when the stack height is reduced from three metal layers to one metal layer. While this change results in improved optical pixel performance, a one-metal-layer structure at the smallest pixel size of 0.97 μm still fails to perform like a 1.75 μm pixel with three metal layers. For sub-1 μm pixel nodes, the dielectric stack height needs to be scaled aggressively, even below the one-metal-layer (~2.74 μm) value. Simultaneous scaling of the vertical and horizontal pixel dimensions, in fact, amounts to keeping the aspect ratio of the pixel (approximately) constant. This approach has already proven useful in establishing the impact of imaging lens f-number on sub-2 μm pixel performance [19].


Fig. 8 Plots of (a) normalized optical efficiency (OE) and (b) normalized optical crosstalk (OX) versus pixel stack height for different pixel sizes. The stack heights of 4.04 μm, 3.46 μm and 2.74 μm represent three-, two- and one-metal-layer pixels, respectively. Solid black, dashed red, and dotted blue curves represent the normalized OE (panel a) and OX (panel b) for 1.4, 1.2, and 0.97-μm pixel designs.


4. Conclusion

We systematically investigated the optical behavior and performance of optimized conventional dielectric microlenses in sub-2 μm CMOS image sensor pixels. For a series of three-metal-layer pixel nodes below 2 μm, where diffraction is expected to play a role, we identified two operation regimes for the optimal microlens: a refraction-dominated regime and a diffraction-dominated regime. When the pixel size is larger than 1.4 μm, the optimal microlens operates in the refraction-dominated regime. In this regime, the peak irradiance position is controlled by the microlens design, the normalized OE is higher than 90%, and the OX is lower than 5%. When the pixel size is smaller than 1.4 μm, the microlens has no control over the peak irradiance position. In this regime, pixel performance does not benefit from the presence of a microlens and the normalized OE can be as low as 60% for the red channel of a 0.97 μm three-metal-layer pixel. The optical performance of pixels subject to off-axis illumination is degraded even further: the OE can be as low as 43%, while the OX can be as high as 30% for a 30° oblique incidence.

To extend the functionality of the microlens beyond the 1.4 μm node, a shorter dielectric stack is required. The normalized OE of a 0.97 μm pixel can increase from less than 60% to 84% if the dielectric stack is reduced from three metal layers to one metal layer. Scaling of pixels beyond this point requires even more aggressive stack height scaling. Our predictions about the optical performance of microlenses in sub-2 μm pixels, and the transition in microlens behavior around the 1.4 μm pixel size in particular, seem to be confirmed by image sensor industry trends. Indeed, it is not a coincidence that backside-illumination CMOS technology is being introduced at the 1.4 μm node [20,21]. Even with extremely short stacks, however, additional approaches to control and confine the light flow, including light guides or embedded microlenses [22], might be needed to make sub-1 μm pixels optically viable.

Acknowledgements

This work was supported in part through a gift from MagnaChip Semiconductor Ltd. The authors thank B. Fowler and A. El Gamal for fruitful discussions and critical feedback.

References and links

1. P. B. Catrysse and B. A. Wandell, “Roadmap for CMOS image sensors: Moore meets Planck and Sommerfeld,” Proc. SPIE 5678, 1–13 (2005). [CrossRef]  

2. H. Rhodes, G. Agranov, C. Hong, U. Boettiger, R. Mauritzson, J. Ladd, I. Karasev, J. McKee, E. Jenkins, and W. Quinlin, “CMOS imager technology shrinks and image performance,” 2004 IEEE Workshop on Microelectronics and Electron. Devices, 7–18 (2004).

3. K. B. Cho, C. Lee, S. Eikedal, A. Baum, J. Jiang, C. Xu, X. Fan, and R. Kauffman, “A 1/2.5 inch 8.1 Mpixel CMOS image sensor for digital cameras,” 2007 IEEE Intl. Solid-State Circuits Conf., 508–618 (2007).

4. C. R. Moon, J. C. Shin, J. Kim, Y. K. Lee, Y. J. Cho, Y. Y. Yu, S. H. Hwang, B. J. Park, H. Y. Kim, S. H. Lee, J. Jung, S. H. Cho, K. Lee, K. Koh, D. Lee, and K. Kim, “Dedicated process architecture and the characteristics of 1.4 μm pixel CMOS image sensor with 8M density,” 2007 IEEE Symp. on VLSI Tech., 62–63 (2007).

5. P. B. Catrysse and B. A. Wandell, “Optical efficiency of image sensor pixels,” J. Opt. Soc. Am. A 19(8), 1610–1620 (2002). [CrossRef]  

6. G. Agranov, V. Berezin, and R. H. Tsai, “Crosstalk and microlens study in a color CMOS image sensor,” IEEE Trans. Electron. Dev. 50(1), 4–11 (2003). [CrossRef]

7. J. Ahn, C. R. Moon, B. Kim, K. Lee, Y. Kim, M. Lim, W. Lee, H. Park, K. Moon, J. Yoo, Y. J. Lee, B. J. Park, S. Jung, J. Lee, T. H. Lee, Y. K. Lee, J. Jung, J. H. Kim, T. C. Kim, H. Cho, D. Lee, and Y. Lee, “Advanced image sensor technology for pixel scaling down toward 1.0μm,” 2008 IEEE Intl. Electron Dev. Meeting, 1–4 (2008).

8. W. G. Lee, J. S. Kim, H. J. Kim, S. Y. Kim, S. B. Hwang, and J. G. Lee, “Two-dimensional optical simulation on a visible ray passing through inter-metal dielectric layers of CMOS image sensor device,” J. Korean Phys. Soc. 47, S434–S439 (2005).

9. E. D. Palik, Handbook of Optical Constants of Solids (Academic Press, Orlando, 1985).

10. D. M. Hartmann, O. Kibar, and S. C. Esener, “Characterization of a polymer microlens fabricated by use of the hydrophobic effect,” Opt. Lett. 25(13), 975–977 (2000). [CrossRef]  

11. K. Shinmou, K. Nakama, and T. Koyama, “Fabrication of micro-optic elements by the sol-gel method,” J. Sol-Gel Sci. Technol. 19(1/3), 267–269 (2000). [CrossRef]  

12. C. P. Lin, H. Yang, and C. K. Chao, “Hexagonal microlens array modeling and fabrication using a thermal reflow process,” J. Micromech. Microeng. 13(5), 775–781 (2003). [CrossRef]  

13. X. C. Yuan, W. X. Yu, M. He, J. Bu, W. C. Cheong, H. B. Niu, and X. Peng, “Soft-lithography-enabled fabrication of large numerical aperture refractive microlens array in hybrid SiO2–TiO2 sol-gel glass,” Appl. Phys. Lett. 86(11), 114102 (2005). [CrossRef]

14. A. Taflove and S. C. Hagness, Computational electrodynamics: the finite-difference time-domain method (Artech House, Boston, 2000).

15. OptiFDTD, Optiwave Systems, Inc., http://www.optiwave.com

16. J. P. Bérenger, “A perfectly matched layer for the absorption of electromagnetic waves,” J. Comput. Phys. 114(2), 185–200 (1994). [CrossRef]  

17. P. B. Catrysse and B. A. Wandell, “Integrated color pixels in 0.18-μm complementary metal oxide semiconductor technology,” J. Opt. Soc. Am. A 20(12), 2293–2306 (2003). [CrossRef]  

18. Y. Li, “Dependence of the focal shift on Fresnel number and f number,” J. Opt. Soc. Am. 72(6), 770 (1982). [CrossRef]  

19. C. C. Fesenmaier, Y. Huo, and P. B. Catrysse, “Effects of imaging lens f-number on sub-2 μm CMOS image sensor pixel performance,” Proc. SPIE 7250, 72500G (2009). [CrossRef]  

20. S. Iwabuchi, Y. Maruyama, Y. Ohgishi, M. Muramatsu, N. Karasawa, and T. Hirayama, “A Back-illuminated high-sensitivity small-pixel color CMOS image sensor with flexible layout of metal wiring,” 2006 IEEE Intl. Solid-State Circuits Conf., 1171–1178 (2006).

21. T. Joy, S. Pyo, S. Park, C. Choi, C. Palsule, H. Han, C. Feng, S. Lee, J. McKee, P. Altice, C. Hong, C. Boemler, J. Hynecek, M. Louie, J. Lee, D. Kim, H. Haddad, and B. Pain, “Development of a production-ready, back-illuminated CMOS image sensor with small pixels,” 2007 IEEE Intl. Electron Dev. Meeting, 1007–1010 (2007).

22. C. C. Fesenmaier, Y. Huo, and P. B. Catrysse, “Optical confinement methods for continued scaling of CMOS image sensor pixels,” Opt. Express 16(25), 20457–20470 (2008). [CrossRef]   [PubMed]  
