Optica Publishing Group

Determining depth of field for slanted lenticular 3D displays

Open Access

Abstract

The availability of higher resolution display panels has increasingly made glasses-free 3D displays a viable mainstream commercial product; it is therefore important to define and measure their parameters. We will discuss the measurement of multiview 3D displays that use a slanted lenticular screen in front of the display panel to control the light directions. Multiple perspective views are formed across the viewing field, giving viewers the sensation of depth and motion parallax. In addition to the usual parameters of resolution, luminance, contrast etc., it is important that we know the image depth of field (DOF). In this paper, we will first define the DOF and then describe means of measuring it. The aim of the paper is to describe general theory and procedure, and not the measurement of specific displays. However, a comparison of the results of the three methods described, obtained on a sample test display, is reported in order to give an indication of accuracy.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Multiview displays are a mature display technology that was first developed in the 1980s [1–3]. Their construction is very simple, comprising a display screen and a lenticular screen, which is a series of cylindrical lenses aligned at an angle, as depicted in Fig. 1(a). Parallax barriers [4,5] are also used, but these block a large proportion of the light, whereas a lenticular screen allows nearly all the light to pass through.

Fig. 1. Slanted lenticular. (a) The broken line is considered as the ‘capture line’ that moves across the panel in the opposite direction to the viewer’s eye position. (b) 28-view example showing the 28 sub-pixels behind each perceived pixel. H, L, V and θ are the horizontal resolution, lens width, vertical resolution and slant angle, respectively.

The spatial resolution of a 2D display is determined only by the display panel pixel size. However, a glasses-free 3D display has a reduced spatial resolution in order to provide the angular resolution that is necessary for the display of 3D. The introduction of 8 k displays, which are in the order of eight thousand pixels across, now offers the possibility of providing good quality 3D with an acceptable resolution.

For a slanted lenticular display, the perceived horizontal spatial resolution is dependent on the width of the lens. The reason for this is that the lenticular lenses collimate the rays from the display panel so that rays emitted from a point on it appear to an observer to fill the complete width of the lens. Therefore, for the purposes of measurement of DOF, the perceived resolution of the screen is determined by the lens width, as opposed to the pixel width in a directly-viewed display panel. It can be seen from Fig. 1(b) that the horizontal resolution H is given by:

$$H = \frac{L}{\cos\theta}$$
where θ is the slant angle and L the lenticular lens pitch. The angular resolution is determined by the number of sub-pixels behind each lens, and the perceived vertical resolution V is dependent on the way in which views are mapped to the display. Figure 1(b) shows an example of a 28-view display with an LCD having a vertical RGB stripe sub-pixel configuration. These displays show horizontal parallax only, as the viewing zones are parallel to the slanted lenses. In this type of display, angular resolution is very important as it determines how blurred the image appears when it is away from the plane of the screen; that is, it controls the DOF.
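As a numerical check of the relationship above, the horizontal lens width H can be computed from the lens pitch and slant angle. The 90 lpi / 12.53° figures below are those of the display measured later in Section 3; the function name is ours.

```python
import math

def horizontal_resolution(lens_pitch_mm: float, slant_deg: float) -> float:
    """Perceived horizontal pixel width H = L / cos(theta)."""
    return lens_pitch_mm / math.cos(math.radians(slant_deg))

# 90 lpi lenticular screen: lens pitch L = 25.4/90 mm, slanted at 12.53 deg
L = 25.4 / 90
H = horizontal_resolution(L, 12.53)
print(f"H = {H:.3f} mm")  # ~0.289 mm, matching the value used in Section 3.4
```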

In a 2-image stereoscopic display, the apparent depth is formed in the brain itself due to disparity between the left and right images. This has the advantage that the images stay in focus irrespective of the depth. The disadvantage of this method is the problem of accommodation/convergence (AC) conflict [6], where the eyes focus at the screen and converge at a different distance. This can cause viewer discomfort such as nausea and headaches, so there are certain criteria recommended for keeping the disparity within comfortable limits. These include the 1° rule [7], where the convergence angular difference between apparent and screen distances is kept within 1°, and the ± 1/3 diopter rule [6], where the apparent depth is kept within 1/3 of a diopter of the screen distance.

If a 3D display does not produce a stereoscopic image using disparity, then it must produce an image within a volume of space around the display in some manner. These displays produce images in various ways, for example moving screens [8,9], multiple layers [10,11] or integral imaging [12]. If the display uses a single static screen, then the display must produce images in space that are in optical terms either real or virtual.

If the image is considered as being composed of voxels, the width of these is proportional to the angular resolution when expressed in terms of angular increments. When the angle is in radians, the voxel width is simply the pixel width multiplied by the distance from the screen. For all 3D displays other than 2-image or volumetric, the resolution reduces with increasing distance from the screen. This applies to multiview, super multiview (SMV) which is multi-view with a large number of views, integral imaging, light field displays and even holograms.

A basic definition of DOF would be the range of depth over which a ‘clear’, non-blurred image is seen. This indicates that a visual method in some form is a valid means of determining DOF; however, visual methods are prone to subjective errors, so in this paper we examine and compare a purely subjective visual determination of the depth range of a ‘clear’ image with two other methods that involve the visual examination of photographs of the displayed image.

A definition of DOF is given in the “Information Display Measurements Standard” (IDMS) [13] published by the Society for Information Display; however, no explanation is provided. The authors’ previous work [14] on multi-layer tensor display measurement considers the display as a low-pass spatial filter. Slanted lenticular displays cannot be treated simply as low-pass filters, so another approach to explaining the resolution is required. For this, we introduce the concept of precision. In Fig. 2, the edge of an actual object in the real world is shown; however, when this is portrayed on a 2D display, the precision with which it is displayed is limited to the width P of the pixel. At distance D from the screen, the precision has been halved, as the edge can be displayed in space only in increments of S, which is 2P. For a 3D display, however, the image that is normally displayed at the plane of the display screen is transferred into the volume in front of the screen in the form of a voxel, the volume equivalent of a pixel, created by rays intersecting in the image space. In optical terms, this is a real image. Note that when the voxel is behind the screen, it is created as an optically virtual image, with the rays appearing to radiate from a virtual voxel behind the screen. D can be considered as the DOF in front of the screen. The same consideration applies to the image in the far region behind the screen, so the overall DOF is therefore 2D. In a slanted lenticular display, the pixel is replaced by the horizontal width of the lenses, giving the relationship:

$$DOF = \frac{2L}{\Phi\cos\theta}$$
where $\Phi$ is the angular resolution in radians, which is given by the display panel pixel size divided by the focal length of the lenticular array lenses.
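The DOF relationship above can be sketched numerically. The lens pitch and slant angle below match the display measured in Section 3, but the sub-pixel size and lenticular focal length are illustrative assumptions only, not values quoted in the paper.

```python
import math

def depth_of_field_mm(lens_pitch_mm: float, slant_deg: float,
                      pixel_mm: float, focal_mm: float) -> float:
    """DOF = 2L / (Phi * cos(theta)), with angular resolution
    Phi = panel pixel size / lenticular focal length (radians)."""
    phi = pixel_mm / focal_mm
    return 2 * lens_pitch_mm / (phi * math.cos(math.radians(slant_deg)))

# 90 lpi screen at 12.53 deg; 31 um sub-pixels and a 0.5 mm focal
# length are hypothetical values chosen for illustration.
dof = depth_of_field_mm(25.4 / 90, 12.53, 0.031, 0.5)
print(f"DOF = {dof:.1f} mm")
```

With these assumed optics the predicted DOF is of the same order (a few millimetres) as the measured value reported in Section 3.4.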

Fig. 2. Positional uncertainty. A point in the real world is depicted as a pixel on a display screen; this is the precision S of the positioning, which is halved at the DOF boundary.

2. Measurement method

Initially, we investigated the procedure recommended in the IDMS manual for measuring angular resolution. With this method, a single view shows a completely white image, with all the other views showing a completely black image. To do this, a single white view was displayed on a 25-view 4 k 13.5” LCD with a 100 lpi lenticular screen at a 12.5° slant angle. The 25 views are spread over a 42.5° viewing angle. The luminance profile across the viewing field is the ‘1 view’ plot in Fig. 3. The number of adjacent views was then increased, in increments of 1 view, up to 9 and profiles taken at each increase. Figure 3 shows that full luminance was not obtained until 6 views were showing.

Fig. 3. Luminance profile vs number of views shows profiles for groups comprising between 1 and 9 adjacent views displaying white images, with the remainder of the 25 views displaying black.

The plots indicate a high level of crosstalk between the views. This is not unexpected as the parameters of the display under test are optimized for a single large viewing angle, and not the customary repeated view zone groups. Slanted lenticular displays have inherent inter-view crosstalk where neighbouring views overlap. It is clear that this technique where a single view is white and the others black is unsuitable for slanted lenticular displays due to the high level of inter-view crosstalk. We therefore decided to revert to the method used previously for tensor display measurement [14]. In this section, we first consider the basic method of DOF measurement using a virtual model having a test pattern on a sloping surface so that it gives a continuous range of depth over a certain range that is centred at the plane of the screen.

This is illustrated in Fig. 4(a) and is a virtual model made in Blender 3D software. It is a block with a sloping front face where the centre of the surface is located at the convergence plane of a virtual capture camera array; therefore, it passes through the plane of the screen when it is displayed. On the front face of the model there is a vertical bar pattern where the pitch doubles from left to right every 16 bars. Therefore, the distance between the edges of the bars increases by a factor of $2^{1/32}$ each time the edge moves to the right. The pitch ranges from 0.2 to 3 mm, so it can readily accommodate the range of lenticular lens pitches we will be using. Also, the far-right bars are sufficiently wide for an analysis of the bar edge pattern width.
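The geometry of this logarithmic bar pattern can be sketched as follows; the per-bar growth factor of $2^{1/16}$ (pitch doubling every 16 bars, hence $2^{1/32}$ per edge) and the 0.2–3 mm pitch range are taken from the description above, while the variable names are ours.

```python
import numpy as np

# Pitch doubles every 16 bars, so each bar's pitch grows by 2**(1/16)
# (equivalently, each edge-to-edge distance grows by 2**(1/32)).
first_pitch_mm, last_pitch_mm = 0.2, 3.0
n_octaves = np.log2(last_pitch_mm / first_pitch_mm)      # pitch doublings in range
n_bars = int(n_octaves * 16) + 1
pitches = first_pitch_mm * 2 ** (np.arange(n_bars) / 16)  # pitch of each bar, mm
print(f"{n_octaves:.2f} octaves, {n_bars} bars, widest bar {pitches[-1]:.2f} mm")
```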

Fig. 4. Virtual models: (a) model used to produce images in Figs. 6 to 12, (b) model used for images in Fig. 11 and depth measurement.

Figure 4(b) shows an image of the test object that is used to readily measure the depth magnification of the display. This magnification is the ratio of the distance in the image of a point in the virtual model from the screen to the distance of that point from the convergence plane of the virtual capture camera array. The distance of the virtual image point is found from simple parallax geometry.
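The “simple parallax geometry” mentioned above can be sketched with the textbook two-view relation; this is our illustration, not necessarily the authors’ exact expression, and the eye separation and viewing distance below are assumed example values.

```python
def image_depth_from_parallax(disparity_mm: float,
                              eye_sep_mm: float = 65.0,
                              view_dist_mm: float = 600.0) -> float:
    """Textbook parallax geometry (a sketch): uncrossed screen disparity p > 0
    places a point behind the screen at z = V*p/(e - p); crossed disparity
    (p < 0) gives z < 0, i.e. a point in front of the screen."""
    p, e, V = disparity_mm, eye_sep_mm, view_dist_mm
    return V * p / (e - p)

# Depth magnification = measured image depth / model depth (Section 2)
print(image_depth_from_parallax(1.0))   # ~9.4 mm behind the screen
print(image_depth_from_parallax(-1.0))  # ~-9.1 mm, i.e. in front of the screen
```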

Measurements were carried out on photographs of the observed 3D image, with care taken to ensure that the displayed image on the monitor is sufficiently large to ensure that aliasing is kept within negligible limits.

When the images were examined, it was observed that moiré is present over much of the area. As this might hinder analysis of the image, we investigated the use of spatial filtering to remove it. The effect can be mitigated with a one-dimensional spatial filter. A quick initial assessment of the efficacy of this approach can be made with a lenticular screen with horizontally aligned lens axes. This can be considered as an analog spatial filter that effectively performs a convolution of the raw monitor image with a kernel one pixel wide and several pixels high.
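The digital equivalent of that analog filter is a vertical-only box convolution; a minimal sketch (function name and kernel height are ours):

```python
import numpy as np

def vertical_lowpass(img: np.ndarray, kernel_height: int = 5) -> np.ndarray:
    """Convolve a grayscale image with a 1-pixel-wide, kernel_height-tall
    box kernel. This smooths in the vertical direction only (suppressing
    moire) while leaving horizontal bar-edge positions, which carry the
    DOF information, substantially unaffected."""
    k = np.ones(kernel_height) / kernel_height
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, img)

# Vertical stripes pass through the filter unchanged (away from the
# zero-padded top and bottom rows):
img = np.tile(np.array([0.0, 1.0] * 4), (10, 1))
assert np.allclose(vertical_lowpass(img)[5], img[5])
```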

It is shown in Fig. 5 that the display has an optimum viewing distance (OVD). Although the best quality images are observed at this distance, the images remain acceptable away from it. The acceptable distance range is principally determined by the resolution of the display; higher resolution display panels enable increased angular resolution, and hence a greater view density, which provides greater depth in the image. There are two points to note here: first, in a measurement setup, the camera distance from the 3D display screen can be carefully controlled. Second, a multiview display produces 3D images in the volume in the vicinity of the display that are composed of ‘voxels’, the 3D equivalent of pixels in a conventional 2D display. In optical terms, these are real images for voxels in front of the screen (virtual images for voxels behind the screen), and as such they are fixed in space, with positions unaffected by observer position.

Fig. 5. Measurement setup: This is carried out in two stages, 1) the image on the 3D display is captured, 2) this is low-pass filtered and then analyzed after second capture.

3. Measurement results

Measurements were made on a photograph of the 3D display, which was then presented on a monitor where a visual evaluation of the image could be conducted. Figure 6 shows photographs of the observed 3D image on a 20-view 4 k 5.5” LCD with a 90 lpi lenticular screen at a 12.53° slant angle. A still picture of the image was captured at the OVD, which could then be analyzed on a monitor. In this section, three different methods of determining the outer boundary of the clear central region are described.

Fig. 6. Upper: raw image of camera capture of 3D display; inset “Pac-man” showing perceived triangle. Lower: image processed with low-pass vertical-only spatial filter, showing no moiré.

Closer examination of Fig. 6 upper reveals that the moiré is caused by jagged edges on the bar images. This is demonstrated by the “Pac-man” image in the inset, where the brain perceives a triangle when there is not actually one there; this is an example of gestalt perception. These moiré patterns, highlighted by the red ellipses, are not visible in Fig. 6 lower, where the same image is low-pass filtered in the vertical direction. As the vertical pattern enables the measurement of the resolution in the horizontal direction, suppression of high-frequency vertical spatial components was investigated.

3.1 Visual estimation of clear region boundary

Figure 7 is the photograph of the raw monitor image that is spatially filtered with the lenticular screen; this is the same image as Fig. 6 lower, reused for clarity. The jagged bar edges are smoothed out and the appearance of moiré is eliminated. This means that the distracting pattern is removed while measurements in the horizontal direction remain substantially unaffected. The critical measurement regions are indicated by the red ellipses. In this case, the filtering is achieved with a 200 lpi lenticular screen, and measurement of the laser beam spreading angle gives an effective convolution kernel height of approximately 1.5 lens pitches.

Fig. 7. Region boundaries determined by visual examination. The aliasing regions represent unwanted information that produces artefacts and DOF reduction.

Figure 7 shows the characteristic relevant boundaries determined by simple visual evaluation and the significant points in relation to these. The area between the transition regions represents the desired performance, and the aliasing regions outside this represent unwanted artefacts that affect resolution and reduce the DOF. The line ZBEG shows the part of the image of the front face of the test object that has zero depth; that is, where it cuts through the plane of the display screen. Point B is where the maximum resolution occurs; at this position, the image is in the plane of the screen and the bar width equals H, the horizontal lens width, as this is the condition for the lowest spatial resolution that can support alternate black/white lines. F and D are the points in front of, and behind, the screen respectively where the perceived resolution drops to half the screen resolution; the pattern here has a pitch of 2H, and E is 16 bars from B due to the geometric ratio of the pattern pitch. Therefore, the DOF is the horizontal distance between the points D and F, which can be calculated from the dimensions of the model, the magnification in the X and Y directions and the depth magnification in the Z-direction.

Alternatively, the Z values of the D and F coordinates can be determined by simple parallax measurements. In theory, in order to determine the DOF, we are only interested in points D and F; however, in practice a more accurate result is obtained by determining the positions of the lines ZAD and ZCF and their extensions with depth, as depicted in Fig. 7.

At line DF, the pitch equals 2 H. G is the point on the center line where the pattern pitch is 4 H and Z is the virtual point behind the 3D display screen where the notional line pitch is zero; that is, infinite resolution. Note that the bars do not appear to be parallel; this is to be expected as the image of the pattern is sloping so that the parallel bars appear to be converging to their vanishing point that is located at an infinite distance.

If the lenticular screen is considered as being a sampling device operating on the bars of the pattern, the frequency at line DEF is an analog of the Nyquist frequency in signal theory. However, a display is different to a signal as the region between the sampling and Nyquist frequencies is usable. This accounts for the aliasing in the areas indicated in the figure and reinforces using the distance from the screen where the resolution is halved as the basis for defining the DOF boundaries.

3.2 Measurement of “jaggies”

Figure 7 shows an approximate boundary of the region where the image is not blurred, but it is not obvious exactly where the half-resolution boundaries AD and CF lie; therefore another method was sought. A more reliable approach is to examine the edges of the wider bars to the right of Fig. 8(a). The complete height of the bar is depicted over 3 sections in Fig. 8(b) in order to show higher magnification; it can be seen that the edges are jagged, and that the length of the jagged region, commonly referred to as a ‘jaggy’, determines the amount of horizontal blurring. We can now put a value on the blur and define the half-resolution point as being where the horizontal extent of the jaggy equals 2H, where H is the horizontal width of the lenticular lenses. In the figures, the parallel white lines have a horizontal spacing of H, and the bold first and fifth lines indicate the inner edge and the distance 4H from it. The numbers indicate the length of the jaggy in multiples of H. Figure 9(a) shows that the width of the jaggy zone is also proportional to the distance from the screen. This means that the boundaries AD and CF of the clear region of the image are straight lines that, when extrapolated into the region corresponding to behind the display screen under test, end at point Z; this is the notional position where the pattern pitch is zero, as shown in Fig. 9(a). Because the pitch only approaches zero asymptotically, the boundaries can never actually reach Z, so that BZ = BE. This is shown clearly in the figure, where the octave lines indicate multiples of H. In Fig. 9(b), $F_O$ is the imaginary point where the pitch is zero, which represents infinite resolution, and $F_S$ is the spatial frequency of the screen pattern when alternate black and white lenticular lenses are observed. As the bars from $F_S$ to $F_O$ form a geometric progression with a common ratio r < 1, the relationship

$$S_\infty = \frac{a}{1 - r}$$
applies, where $S_\infty$ is the sum to infinity of the distances between the bar centers, a is the first term and r is the common ratio, given by the ratio of adjacent bar-octave (16 pattern bars) center distances. In this case a = 1 and r = 1/2; therefore the distance $F_S F_O$ is the same as $F_S F_M$.
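The convergence claim above is easy to check numerically: with a = 1 and r = 1/2, the octave distances sum to a/(1 − r) = 2, i.e. twice the first octave distance.

```python
# Geometric-series check: octave center distances halve each octave,
# so the total distance from F_S to the zero-pitch point F_O converges.
a, r = 1.0, 0.5
partial = sum(a * r**n for n in range(60))  # 60 terms is effectively infinite
s_inf = a / (1 - r)
print(partial, s_inf)  # both 2.0 to machine precision
```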

Fig. 8. (a) Jaggy width; total horizontal width of vertical edge proportional to distance of the edge image from the screen plane. (b) Close-up of right side of photograph.

Fig. 9. Jaggy results (a) Measured boundary in relation to spatial frequency. (b) Jaggy width vs distance from screen center.

The measured display screen lies at pitch H, and this can be considered as defining the sampling frequency $F_S$ in cycles/m, where:

$$F_S = \frac{1}{2H}$$
when H is expressed in meters. The Nyquist frequency $F_N$ in cycles/m is therefore:
$$F_N = \frac{1}{4H}$$
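For the H = 0.289 mm horizontal lens width derived in Section 3.4, the two frequencies above evaluate as follows (variable names are ours):

```python
# Spatial sampling and Nyquist frequencies for H = 0.289 mm.
H_m = 0.289e-3                 # horizontal lens width in meters
F_S = 1 / (2 * H_m)            # sampling frequency, cycles/m
F_N = 1 / (4 * H_m)            # Nyquist frequency, cycles/m
print(f"F_S = {F_S:.0f} cycles/m, F_N = {F_N:.0f} cycles/m")
```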

3.3 Phase change

Figure 10 shows a close-up of the boundaries superimposed on an analog-filtered image. The boundary is difficult to determine from the contrast alone. However, close examination of the transition region shows that the white gap between the bars changes direction, giving phase-shifted aliasing patterns after passing through the outer boundary of the transition region. The figure shows that the boundary line of the clear region derived from the jaggy length coincides with the direction change of the images of the lines. This phenomenon can be used to aid a rapid visual determination of the position of the half-resolution line DEF, and hence the DOF.

Fig. 10. Aliasing phase shift shown on filtered image. Clear region boundary coincides with the end of the straight section of a bar’s image.

A test image comprising a slanting surface with a scale on it was used to demonstrate the wide field of view that can be obtained from this type of display. A lens screen with a large pitch was chosen, in this case 70 lpi, which is relatively large for the viewing distance of 275 mm, so some resolution is sacrificed. However, the look-around capability given by the viewing angle of 60° gives a good sense of realism. Five perspective views of the image are shown in Fig. 11.

Fig. 11. Images of the model in Fig. 4(b) show a viewing angle of around 60°. Display uses a 70 lpi, wide viewing angle lenticular screen having short focal length lenses.

3.4 Comparison of results

Referring to Fig. 7, the measured width of the screen is 67 mm, from which we can determine from the photograph that the distance DF is 26.2 mm. The slope angle of the patterned surface is 15°, and the horizontal width H of the lenticular lenses is calculated as 0.289 mm from the 90 lpi lenses and 12.53° slant angle, applying the assumption that the OVD >> DOF. The horizontal lens width gives a pattern pitch of 2 × 0.289 = 0.578 mm at the half-resolution pattern line. Applying these figures to the geometry shown in Fig. 12 gives DOF = 7.0 mm.
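Since Fig. 12 is not reproduced here, the following is one plausible reading of its geometry (an assumption on our part): DF is the horizontal extent of the clear region in the screen plane, and the 15° slope of the patterned surface converts it to a depth span.

```python
import math

# Assumed reading of the Fig. 12 geometry: the sloping test surface maps
# horizontal distance on the photograph to depth via the slope angle.
DF_mm, slope_deg = 26.2, 15.0
dof_mm = DF_mm * math.tan(math.radians(slope_deg))
print(f"DOF ~= {dof_mm:.1f} mm")  # ~7.0 mm, consistent with the quoted result
```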

Fig. 12. DOF measurement geometry. The camera captures images that are then analyzed. Red lines denote the plane of the screen, and blue lines, the plane of the image surface.

As mentioned in Section 3.2, in this example the jaggies provide a more conclusive means of determining the boundary of the clear region, as Fig. 8 shows. Figure 10 shows that the phase discontinuity boundaries also correspond well with the visible boundary. The conclusion from this is that it is probably worth adopting all three methods and comparing the results, as different displays may well yield a different spread of results.

Precision is not a particularly important issue, as all the measurements can be made to an accuracy of better than ± 5%, which represents a very small distance when translated to the small DOF value. Also, the DOF value is not one that is required to any great precision; until now it has rarely been given and has not been rigorously defined.

The results will be affected by the magnifications between the captured object and the displayed object. In the X and Y directions, the magnification factor is allowed for by using the screen width as a reference, as explained in the previous paragraph. The Z magnification is found from parallax measurements, as described in Section 3.2.

It should be noted that the DOF of a display cannot readily be deduced theoretically, which is the reason for investigating the subject in this paper. DOF is determined by the appearance of blurred voxels, and these voxels are formed by rays intersecting in space for image regions appearing in front of the screen. This is complex to model and, as we are interested in the subjective appearance of the voxel, only empirical results are practicable. For voxels behind the screen, the same considerations apply, but the voxel is formed as an optically virtual image.

4. Further work

The procedure discussed in this paper is based on a well-defined principle and is simple to carry out manually; furthermore, the technique lends itself to fully automatic implementation without requiring human observer intervention.

The clear image boundary line positions can be determined in two basic ways. First, the vertically low-pass filtered raw image can be examined along the required sample line in terms of grayscale values. The local contrast along the line can be used to detect the presence and location of bar images. The sample line could be horizontal; alternatively, it could be radial in order to confirm the accurate location of point Z, which in Fig. 13(a) is known by definition to lie on the center line in the plane of the screen.

Fig. 13. (a) Vertical low-pass filtered image on monitor screen. (b) Binarized filtered image. (c) Aliasing analysis regions of interest on binarized image.

An alternative method is to additionally binarize the filtered image, as in Fig. 13(b), which is a binarized version of the low-pass filtered image obtained from a 5.5” 4 k, 90 lpi slanted lenticular display.
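The binarization step can be sketched as below; the global mid-gray threshold is our simplifying assumption, and a local or Otsu threshold may be preferable on real captures.

```python
import numpy as np

def binarize(filtered: np.ndarray, threshold=None) -> np.ndarray:
    """Binarize a vertically low-pass-filtered capture for bar-edge
    analysis (cf. Fig. 13(b)). If no threshold is given, the midpoint
    of the image's gray range is used - a simple sketch only."""
    if threshold is None:
        threshold = 0.5 * (filtered.min() + filtered.max())
    return (filtered > threshold).astype(np.uint8)

bars = np.array([[0.1, 0.9, 0.2, 0.8]])
print(binarize(bars))  # [[0 1 0 1]]
```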

In addition to simple depth of field measurement, analysis of the regions of interest indicated in Fig. 13(c) will be investigated with a view to reducing aliasing artefacts, which play an important role in the quality of slanted lenticular multiview 3D images. Also, other patterns could be investigated, as it is known that the direction of edges in relation to the slant angle of the lenses affects their perceived quality. The authors did present an image of a test object with the pattern bars parallel to the lens slant angle; however, the resulting image bore no resemblance to the pattern on the model, possibly a case of an extreme aliasing effect.

As the display shows different resolutions in different directions, it might be best to just quote the resolution in the horizontal direction as this will be the worst case and consistency will aid the comparison between different displays.

5. Conclusions

The methods described involve visually examining an image and determining the DOF by three alternative methods: measuring the position of the boundary between the clear and transition regions, measuring the length of the jaggies on the images, or measuring the positions of the phase-change discontinuities. These procedures are based on a well-defined principle and on simple measurements made on an image of the screen displaying a logarithmic bar pattern test model, captured at the optimum viewing distance. Measurements are made at the position where the pattern pitch is double the horizontal width H of the lenticular lenses. The paper thus covers three alternative methods of determining the distance between the front and rear half-resolution planes in a slanted lenticular 3D display.

Funding

National Key Research and Development Program of China (2021YFB3602703, 2022YFB3602903); Guangdong University Key Laboratory for Advanced Quantum Dot Displays and Lighting (2017KSYS007); Shenzhen Key Laboratory for Advanced Quantum Dot Displays and Lighting (ZDSYS201707281632549); Shenzhen Science and Technology Innovation Program (JCYJ20220818100411025); Shenzhen Development and Reform Commission Project (XMHT20220114005).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Y. Zhu and T. Zhen, “3D Multi-view Autostereoscopic Display and Its Key Technologies,” in 2009 Asia-Pacific Conference on Information Processing (2009), Vol. 2, pp. 31–35.

2. R.-P. M. Berretty, F. J. Peters, and G. T. G. Volleberg, “Real-time rendering for multiview autostereoscopic displays,” in Stereoscopic Displays and Virtual Reality Systems XIII (2006), Vol. 6055, pp. 208–219.

3. Y. Zhang, Q. Ji, and W. Zhang, “Multi-view autostereoscopic 3D display,” in 2010 International Conference on Optics, Photonics and Energy Engineering (OPEE) (2010), Vol. 1, pp. 58–61.

4. A. Pallas, C. H. Meyer, and D. Mojon, “Nintendo 3DS,” Ophthalmologe 110(3), 263–266 (2013). [CrossRef]  

5. G. Chidichimo, A. Beneduci, V. Maltese, S. Cospito, A. Tursi, P. Tassini, and G. Pandolfi, “2D/3D switchable displays through PDLC reverse mode parallax barrier,” Liq. Cryst. 45(13-15), 2132–2138 (2018). [CrossRef]  

6. M. Lambooij, M. Fortuin, I. Heynderickx, and W. IJsselsteijn, “Visual discomfort and visual fatigue of stereoscopic displays: A review,” J. Imaging Sci. Technol. 53(3), 30201-1–30201-14 (2009). [CrossRef]  

7. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” Journal of Vision 8(3), 33 (2008). [CrossRef]  

8. A. C. Traub, “Stereoscopic Display Using Rapid Varifocal Mirror Oscillations,” Appl. Opt. 6(6), 1085–1087 (1967). [CrossRef]  

9. C.-C. Tsao and J. S. Chen, “Moving screen projection: a new approach for volumetric three-dimensional display,” in Projection Displays II (1996), Vol. 2650, pp. 254–264. [CrossRef]

10. K. Osmanis, G. Valters, R. Zabels, U. Gertners, I. Osmanis, L. Kalnins, U. Kandere, and A. Ozols, “Advanced multiplanar volumetric 3D display,” in Emerging Liquid Crystal Technologies XIII (2018), Vol. 10555, p. 1055510. [CrossRef]

11. K. Takano, K. Sato, and M. Ohki, “Improved scattering screen for a multiplanar volumetric holographic display,” Opt. Eng. 50(9), 091315 (2011). [CrossRef]  

12. H.E. Ives, “Optical Properties of a Lippmann Lenticulated Sheet,” J. Opt. Soc. Am. 21(3), 171–176 (1931). [CrossRef]  

13. Information Display Measurements Standard (IDMS), version 1.1, Society for Information Display, 2022.

14. P. Surman, S. Wang, J. Yuan, and Y. Zheng, “One-stop measurement model for fast and accurate tensor display characterization,” J. Opt. Soc. Am. A 35(2), 346–355 (2018). [CrossRef]  
