
Autostereoscopic 3D display using directional subpixel rendering

Open Access

Abstract

In this paper, we present an autostereoscopic 3D display using a directional subpixel rendering algorithm in which clear left and right images are expressed in real time based on the viewer's 3D eye positions. In order to maintain the 3D image quality over a wide viewing range, we designed an optical layer that generates a uniformly distributed light field. The proposed 3D rendering method is simple, and each pixel can be processed independently in parallel computing environments. To prove the effectiveness of our display system, we implemented 31.5” 3D monitor and 10.1” 3D tablet prototypes in which the 3D rendering is processed on a GPU and an FPGA board, respectively.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Glasses-based 3D displays provide binocular disparity by showing different images to the left and right eyes, but the discomfort of wearing glasses is one of the major reasons for their commercial failure in consumer electronics. Multiview 3D displays can be viewed without glasses by giving directionality to the pixels through an optical layer such as a lenticular lens or a parallax barrier. However, they have limitations, mainly a decrease in resolution and narrow viewing zones. The optical layer between the light source and the viewer transforms the spatial distribution of the pixels into a spatio-angular distribution of light rays, so the resolution of the 3D images is reduced by the number of viewpoints. The decrease in resolution is also related to expressing a large depth of field. Multiview displays usually suffer from a poor depth of field: an object located away from the display panel becomes blurry as its depth increases, and more viewpoints are needed for crisp image expression. The viewing range of a multiview display is limited to predefined regions at the OVD (optimal viewing distance), also called sweet spots, and dead zones occur between the sweet spots, where the disparity of the stereo image is inverted and a pseudoscopic 3D image appears. These limitations of multiview displays arise from insufficient pixel resources, and a high-speed spinning mirror with a digital light processing projection system [1] can be used to increase the number of viewpoints. A light field 3D display generates dense light field information over a viewing range and can display 3D images for any viewpoint. There are various approaches based on the light field concept, such as a cylindrical display system [2], 360-degree tabletop display systems [3,4], multiple projection systems [5,6], and layered display systems [7,8]. However, the problems of a large form factor and rendering complexity should be solved before commercialization for the mass market.

Eye-tracking based 3D displays are a good alternative to the previous autostereoscopic 3D displays. They can focus pixel resources on specific user positions while preserving a flat-panel form factor, and they provide low crosstalk and high-resolution 3D image viewing over a wide viewing range without glasses. In addition, it is possible to view 3D content with continuous motion parallax and a sufficient depth range. Such a display tracks the viewer's eyes and uses the position information to optimize the pixel resources. Although only one user can view the 3D images properly in this kind of display, several attractive applications utilize this technology, such as personal monitors, tablets, smartphones, and digital signage. Liquid crystal (LC) barrier based methods [9–11] are used in most commercial 3D display products such as the Nintendo 3DS. In this approach, an LC panel is used to form a moving barrier slit whose openings are controlled according to the viewer's position. The advantage of the LC barrier is that the display can easily be switched to 2D mode by opening all LC barriers, but the viewing distance is restricted to around the OVD. The viewing zone based method [12–17] covers a larger viewing range than the LC barrier based method. In this approach, the viewing field is divided into small viewing zones, and the viewing zone corresponding to the points where the viewer's eyes are located is determined using the eye tracker. Then, the contents are adjusted at the subpixel level using the active viewing zone information. However, a large number of viewing zones must be predefined accurately, and this is vulnerable to errors caused by the fabrication process and the photorefractive effect. There are also time multiplexing approaches such as those discussed in [18–21]. The resolution reduction of spatial multiplexing methods, which use a barrier or lenticular lens, can be avoided in this way. Moreover, a multi-user scenario is possible by multiplexing not only the left/right eyes but also each viewer. However, the main drawback of the time multiplexing method is that a high-speed display is needed; it requires at least twice the operating speed of a usual 2D display. Current research is focused on light-steering optics such as volume-holographic optical elements [19], cylindrical optical elements [20], and an adjustable fine-stripe backlight structure [21]. Previously, we proposed eye-tracking based subpixel rendering algorithms for multiview display systems. In [22], the inter-view crosstalk and color artifacts caused by the slanted lenticular lens are reduced by color-wise normalization of brightness in the horizontal direction. In [23,24], the viewing range is expanded to a 3D volume by using the 3D eye position and by modeling the visibility response in both the horizontal and vertical directions.

In this paper, we present a directional subpixel rendering (DSR) technology and implement eye-tracking based autostereoscopic 3D display prototypes. To fully utilize the limited pixel resources of a flat panel display for reconstructing dense light rays, we designed an optical layer so that the light rays from each pixel are evenly distributed over the viewing range, and we assign the input 3D content data to each light ray in real time by means of the proposed DSR method. Our rendering scheme has advantages in GPU computing and hardware implementation because each pixel is processed independently, making it suitable for parallel computing. In addition, we can adjust the computational complexity of the 3D rendering by changing the light ray model. For example, if we assume that the light rays are close to the optical axis, then Snell's law can be applied with the paraxial approximation, which reduces the computation of the light ray direction. To prove the effectiveness of our method, we implemented high-quality 3D display prototypes of a 31.5” 3D monitor and a 10.1” 3D tablet.

The two main contributions of our 3D display technology compared with previous eye-tracking based 3D displays are a wide viewing range and low-complexity real-time processing. Specifically, it is important to maintain the 3D image quality over the viewing range. Although the sweet spot and dead zone problems can be avoided by adapting the pixel values to the viewer's position, preserving the 3D image quality within the viewing range, including positions far from or close to the OVD, is difficult. The main drawback of LC barrier based methods [9–11] is that the viewing range is restricted to around the OVD and crosstalk increases as the viewing distance changes. In the viewing zone based method [12–17], the viewing distance can be extended in the depth direction, but it is difficult to define the viewing zones precisely considering local characteristics of the optical layer, such as the refraction angle toward the eye position and the non-uniform gap between the display panel and the lenticular lens sheet after the bonding process, because the viewing zone is defined globally for the display system. These local differences can make the viewing zones inconsistent, and this inconsistency results in mapping errors and image degradation over the wide viewing range, especially for large displays. In the proposed DSR method, we overcome these limitations by applying locally adaptive rendering at each subpixel location, because each ray direction can be calculated in a parallel computing manner using local parameters. The proposed algorithm is also simple and easy to apply in parallel computing. Rendering complexity is related to hardware cost and system latency. When a commercial 3D display product is manufactured, the 3D rendering part should be implemented in a chip, where the cost is proportional to the operation complexity, or it should be executed on GPU resources, where the GPU performance requirement also depends on the algorithm complexity. In particular, the calculation time required when the 3D rendering is processed on the GPU increases the total latency of the display system and can deteriorate the 3D image quality when the viewer is not stationary.

2. Optical design

The main difference between the optical design of our system and that of a classical multiview display is described in Fig. 1. We consider only 2D light rays in the horizontal plane. V and X represent the horizontal positions at the parallax barrier and at the OVD, respectively. The upper panels show the physical 2D light rays from the light sources, such as barrier openings, to the viewing positions at the OVD, and the bottom panels show the 2D light field distribution in the V-X domain. Each point in a bottom panel represents a light ray that travels from barrier position V to viewing position X. In the conventional multiview display case (a), the light rays from all barrier openings converge to particular viewing positions. In the bottom panel, light rays from many V positions correspond to particular X positions, which means that all light rays from the barrier openings can be seen at certain positions (sweet spots). In our case (b), the light rays from the barrier openings are uniformly distributed over the viewing positions: in the bottom panel, the light rays from the V positions do not overlap each other in X. By exploiting the repetition of the viewcones, we designed the barrier pitch and width so that the viewcone size is minimized to two times the interocular distance of 65 mm at the OVD, filling the viewing volume with dense directional light rays while securing a positional margin for stereo image separation. In the proposed display system, a slanted slit barrier or lenticular lens sheet is used to achieve a line light source. A gap glass is inserted between the RGB pixels and the optical components with optical bonding to maintain a constant gap between them.

Fig. 1 Comparison of optical design schemes.

The main difficulty in the optical design of a multiview display system is that more views are needed to obtain low 3D crosstalk, but this reduces the resolution, which adversely affects image quality. The same is true in our system. To maintain the view density while maximizing the number of views, we designed a 27-view display for the 10.1” tablet and a 45-view display for the 31.5” monitor. Figure 2 shows our optical design concept for achieving uniformly distributed light rays compared with a conventional multiview display. Our design method is basically the same as the fractional view approach in [25]. The upper figure shows our 27-view design, and the bottom figure shows a 5-view design of a conventional multiview display. In our case, each viewcone of the 3D display is made of subpixel groups of the view number. For example, the 27 views consist of 9 × 3 subpixels, where two lens segments correspond to nine subpixels. In this case, one lens segment covers 1.5 pixels (= 4.5 subpixels) by three vertical lines, and the image resolution is reduced to 1/1.5 horizontally and 1/3 vertically compared with the resolution of the original panel. By slanting the lenticular lens at 3 subpixels per 4 vertical lines, the directions of the light rays for the 27 views are distributed uniformly over two interpupillary distances (IPD) at the OVD, as shown in the figure. In the multiview case, each viewcone consists of 5 views corresponding to 5 IPD at the OVD, and the light rays of each view are focused on 5 specific eye positions, as shown in the bottom figure.
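
As a quick back-of-the-envelope check of this 27-view design (our own derived numbers using the WQXGA panel resolution and 65 mm IPD quoted in this paper, not additional measured data), the resulting per-eye 3D resolution and the nominal spacing of adjacent view rays at the OVD can be computed as follows:

```python
# Back-of-the-envelope check of the 27-view tablet design described above.
panel_w_px, panel_h_px = 2560, 1600   # 10.1" tablet panel resolution (WQXGA)
views = 27
ipd_mm = 65.0

# One lens segment covers 1.5 pixels horizontally and 3 lines vertically,
# so the per-eye 3D image resolution is reduced accordingly.
res_3d_w = panel_w_px / 1.5           # ~1707 columns
res_3d_h = panel_h_px / 3             # ~533 rows
print(f"3D image resolution per eye: {res_3d_w:.0f} x {res_3d_h:.0f}")

# The 27 view directions are spread uniformly over two IPDs at the OVD,
# giving the nominal spacing between adjacent view rays there.
viewcone_mm = 2 * ipd_mm
print(f"ray spacing at OVD: {viewcone_mm / views:.1f} mm")   # ~4.8 mm
```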

Fig. 2 Subpixel structure and light ray distribution for proposed design (top) and multiview design (bottom).

Figure 3 shows the simulation results for the display margin against eye position error. Each curve represents the luminance profile at a certain viewing distance, obtained by varying the horizontal position in millimeters. The blue curves correspond to the left eye and the red curves to the right eye. The blue and red vertical lines represent the horizontal positions of the left (−32.5 mm) and right (32.5 mm) eyes, respectively. We identified the crosstalk start positions, where the curves of different colors overlap and crosstalk starts to increase, and measured their distances from each eye position. We calculated the inner margin by averaging the distance from each eye position to the crosstalk start position on the inner side of both eyes, and the outer margin by averaging the corresponding distance on the outer side. Tables 1 and 2 show the position error margins for the two optical design cases. At the OVD, the average margin is 7 mm for the 27-view structure and 17 mm for the 45-view structure when the 31.5” 4K panel display is used.
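
To make the margin definition concrete, the sketch below illustrates how the inner and outer margins could be extracted from a pair of left/right luminance profiles; the Gaussian profile shapes, the 130 mm viewcone period, and the 5% visibility threshold are illustrative assumptions rather than the values used in our simulation.

```python
import numpy as np

x = np.linspace(-200.0, 200.0, 8001)          # horizontal position [mm]
eye_l, eye_r = -32.5, 32.5                    # eye positions [mm]
period = 130.0                                # viewcone repeats every 2 x IPD

def profile(center):
    """Periodic bell-shaped luminance profile centered on one eye position."""
    lobes = [np.exp(-0.5 * ((x - (center + k * period)) / 20.0) ** 2) for k in range(-2, 3)]
    return np.sum(lobes, axis=0)

lum_l, lum_r = profile(eye_l), profile(eye_r)

def crosstalk_start(other, eye, direction, thresh=0.05):
    """Distance from `eye` to the first sample (walking in `direction`)
    where the opposite view exceeds `thresh` of its peak luminance."""
    idx = int(np.argmin(np.abs(x - eye)))
    while 0 <= idx < len(x):
        if other[idx] > thresh * other.max():
            return abs(x[idx] - eye)
        idx += direction
    return np.inf

inner = 0.5 * (crosstalk_start(lum_r, eye_l, +1) + crosstalk_start(lum_l, eye_r, -1))
outer = 0.5 * (crosstalk_start(lum_r, eye_l, -1) + crosstalk_start(lum_l, eye_r, +1))
print(f"inner margin ~ {inner:.1f} mm, outer margin ~ {outer:.1f} mm")
```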

Fig. 3 Position margin simulation result for the (a) 27-view and (b) 45-view structures.

Table 1. Position error margin for the 27-view structure

Table 2. Position error margin for the 45-view structure

3. Directional subpixel rendering

3.1 Method

In this section, our DSR method is explained in detail. The algorithm is described for a parallax barrier, but it can be applied to other types of 3D optical components such as a lenticular lens. Figure 4 shows the overall process of DSR. The inputs are a stereo image pair, the display parameters, and the 3D eye positions. The display parameters are the slant angle, the barrier pitch, the barrier start position, and the gap distance between the RGB panel and the barrier film. The barrier start position denotes the horizontal distance from the display coordinate origin to the center of the first barrier slit. The 3D eye positions, which are obtained by the eye tracker in camera coordinates, are transformed into display coordinates. In our method, we assume that the light rays that pass through each subpixel are generated at the centers of the barrier openings on the same horizontal plane; using the display parameters, we trace the ray direction from each subpixel toward the viewer's position and compare it with the ray directions from the RGB subpixel to the left and right eye positions. The 3D light refraction at the glass surface, caused by the difference between the refractive indices of glass and air (Snell's law), should be considered in the ray tracing model.
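
As a minimal sketch of this coordinate hand-off (an assumed rigid-transform convention with placeholder values, not our tracker code), the eye position reported in camera coordinates is mapped into display coordinates using the rotation and translation obtained from the camera-display calibration described in section 5:

```python
import numpy as np

R = np.eye(3)                              # camera-to-display rotation (placeholder)
t = np.array([0.0, -180.0, 5.0])           # camera-to-display translation [mm] (placeholder)

def to_display_coords(p_camera_mm):
    """Convert a 3D eye position from camera to display coordinates."""
    return R @ np.asarray(p_camera_mm, dtype=float) + t

eye_left_display = to_display_coords([10.0, 40.0, 600.0])
```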

Fig. 4 Overall process of directional subpixel rendering.

Figure 5 shows an example of the pixel value determination procedure of DSR. We denote the display parameters as follows: slant angle θ; pixel pitch in the x and y directions π_x, π_y; barrier start position σ; barrier pitch in the horizontal direction λ; gap distance between the panel and the barrier τ; refractive index of the glass n. The image intensity of the 3D rendering result I(i,j,k), where i, j are the x, y position indices and k is the RGB color index (= 0, 1, 2), is determined from the left/right image information I_L(i,j,k) and I_R(i,j,k) as follows. The x, y position of the current subpixel p_p = (x_p, y_p, 0) can be expressed as

Fig. 5 Pixel value determination procedure of DSR.

x_p = i \pi_x + (k + 0.5)\, \pi_x / 3,
y_p = j \pi_y + 0.5\, \pi_y

Then we calculate x_b, the x position of the point p_b on the barrier plane that corresponds to the eye position p_e = (x_e, y_e, z_e); p_b and p_e are connected through the current subpixel position p_p by the 3D ray tracing model. Snell's law can be expressed as follows,

\frac{\sin(\tan^{-1}(r_b/\tau))}{\sin(\tan^{-1}(r_e/z_e))} = \frac{1}{n},

where r_b = \sqrt{(x_b - x_p)^2 + (y_b - y_p)^2}, r_e = \sqrt{(x_e - x_p)^2 + (y_e - y_p)^2}, and the refractive index of air is assumed to be 1. From this equation, r_b is obtained as

r_b = \tau \tan\left( \sin^{-1}\left( \frac{\sin(\tan^{-1}(r_e/z_e))}{n} \right) \right)

Then x_b is given by

x_b = x_p + \frac{r_b}{r_e}\,(x_e - x_p).

In addition, x_o, the x position of the closest barrier opening in the horizontal direction, can be expressed as

x_o = \mathrm{round}\left( \frac{x_b - \delta}{\lambda} \right) \lambda + \delta,
\delta = \sigma - \frac{r_b}{r_e}\,(y_e - y_p)\tan\theta,

where δ is the sum of the barrier start position and the barrier position offset caused by the difference between y_p and y_b. Thus, the distance from the projected eye position to the barrier opening is

\Delta = |x_b - x_o|.

By comparing the distances from the projected left and right eye positions to their nearest barrier openings, the pixel value I(i,j,k) is assigned the left or right image value as follows:

I(i,j,k) = \begin{cases} I_L(i,j,k) & \text{if } \Delta_L < \Delta_R \\ I_R(i,j,k) & \text{otherwise} \end{cases}

This subpixel rendering process is applied identically to every subpixel position in the display panel.
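
For reference, a per-subpixel sketch of this procedure, following the equations above, is given below; the parameter values (e.g., the refractive index default) are placeholders rather than the prototypes' calibrated values.

```python
import numpy as np

def refracted_radius(r_e, z_e, tau, n):
    """r_b: horizontal offset of the ray at the barrier plane after refraction (Snell's law)."""
    return tau * np.tan(np.arcsin(np.sin(np.arctan(r_e / z_e)) / n))

def dsr_subpixel(i, j, k, eye_l, eye_r, I_L, I_R,
                 theta, pi_x, pi_y, sigma, lam, tau, n=1.5):
    """Assign the left or right image value to subpixel (i, j, k).

    eye_l, eye_r : (x, y, z) eye positions in display coordinates [mm]
    I_L, I_R     : H x W x 3 left/right images, indexed [row j, column i, color k]
    """
    # Subpixel center on the panel (z = 0).
    x_p = i * pi_x + (k + 0.5) * pi_x / 3.0
    y_p = j * pi_y + 0.5 * pi_y

    def distance_to_opening(eye):
        x_e, y_e, z_e = eye
        r_e = np.hypot(x_e - x_p, y_e - y_p)
        r_b = refracted_radius(r_e, z_e, tau, n)
        x_b = x_p + (r_b / r_e) * (x_e - x_p)                # eye projected onto the barrier plane
        delta = sigma - (r_b / r_e) * (y_e - y_p) * np.tan(theta)
        x_o = np.round((x_b - delta) / lam) * lam + delta    # nearest barrier opening
        return abs(x_b - x_o)                                # Delta for this eye

    if distance_to_opening(eye_l) < distance_to_opening(eye_r):
        return I_L[j, i, k]
    return I_R[j, i, k]
```

Because this function reads no neighboring subpixels, evaluating it over the whole panel parallelizes trivially, which is what the GPU and FPGA implementations below exploit.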

3.2 GPU implementation

Although DSR must be performed for every subpixel in the display panel, the calculation for a pixel has no dependency on neighboring pixels, unlike filtering. This makes DSR suitable for GPU parallel processing. We implemented our rendering algorithm as a GPU shader. Figure 6 shows the 3D rendering pipeline. The input stereo images are stored in the texture buffer, and DSR is performed in the fragment shader using the display parameters and eye position data. The stereo images can be created from an image/video file or rendered from a 3D graphic object. In the case of graphics rendering, we can express motion parallax by setting the positions of the virtual cameras to the viewer's eye positions. One important issue in a software implementation of 3D rendering is latency. There is a time delay caused by camera capture, eye tracking, stereo image generation, 3D rendering, and display output. This is critical because crosstalk will increase if the position error caused by the user's movement during this delay is larger than the display margin described in section 2. To reduce the latency, stereo image generation and eye tracking are separated from the 3D rendering pipeline, and the latest stereo image and eye position data are used in the 3D rendering pipeline. We obtained a 60-Hz rendering speed at 3840 × 2160 resolution with an NVIDIA Quadro K5000 GPU.
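
The sketch below illustrates only this decoupling idea in Python (the actual renderer is the fragment shader described above); track_eyes() and render_frame() are hypothetical stand-ins.

```python
import threading
import time

latest_eyes = ((-32.5, 0.0, 600.0), (32.5, 0.0, 600.0))  # (left, right) in display coords [mm]
lock = threading.Lock()

def track_eyes():
    """Hypothetical stand-in for camera capture + eye tracking (~60 Hz)."""
    time.sleep(1 / 60)
    return ((-32.5, 0.0, 600.0), (32.5, 0.0, 600.0))

def render_frame(eye_l, eye_r):
    """Hypothetical stand-in for the DSR pass (fragment shader in our system)."""
    time.sleep(1 / 60)

def tracking_loop():
    global latest_eyes
    while True:
        eyes = track_eyes()
        with lock:
            latest_eyes = eyes            # publish only the freshest positions

threading.Thread(target=tracking_loop, daemon=True).start()
for _ in range(600):                      # ~10 s of rendering at 60 Hz
    with lock:
        eye_l, eye_r = latest_eyes        # read the latest data without waiting
    render_frame(eye_l, eye_r)
```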

Fig. 6 3D rendering pipeline for GPU implementation.

3.3 FPGA implementation

A software implementation of 3D rendering has several drawbacks on handheld devices such as smartphones or tablets. The overall battery life is reduced by the additional power consumption of GPU processing when the display is in 3D mode. Another drawback is that the 3D rendering time increases on a relatively slow GPU, so the system latency can grow. This is more noticeable when playing 3D games, which require high GPU performance. The latency and GPU occupation caused by the 3D rendering process can be removed with a hardware implementation. We therefore implemented our 3D rendering algorithm on an FPGA-based hardware platform.

Figure 7 shows the 3D tablet system. We inserted our 3D rendering board between the tablet AP board and the display panel, which are connected by an eDP (embedded DisplayPort) interface over which the WQXGA (2560 × 1600) image data is transferred at 60 Hz. The FPGA board is designed to operate at 60 Hz without processing delay. We designed the input 3D format as side-by-side, and we transfer the eye position data to the FPGA board by encoding it on the top line of the input image data.
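
The exact bit layout of this encoding is not reproduced here; purely as an illustration, one possible scheme packs each signed coordinate (0.1 mm units, 16 bit) into the byte channels of the first pixels of the top row:

```python
import numpy as np

def encode_eye_positions(frame, eye_l, eye_r):
    """frame: H x W x 3 uint8 side-by-side image; eye_l / eye_r: (x, y, z) in mm."""
    coords = np.round(np.array([*eye_l, *eye_r]) * 10.0)    # 6 coords in 0.1 mm units
    payload = coords.astype(np.int16).view(np.uint8)        # 12 bytes, little endian
    out = frame.copy()
    out[0, :4, :] = payload.reshape(4, 3)                   # carried by 4 pixels of row 0
    return out

frame = np.zeros((1600, 2560, 3), dtype=np.uint8)           # WQXGA frame buffer
frame = encode_eye_positions(frame, (-32.5, 0.0, 500.0), (32.5, 0.0, 500.0))
```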

Fig. 7 Tablet prototype design and system components.

4. Eye tracking

The 3D positions of the left and right eyes are the only parameters that vary continuously in DSR. The accuracy of the eye tracker is very important in our system because the visible light field information changes with the viewer's eye position in real time. Moreover, motion parallax, which is necessary to improve the realistic presence of a 3D object, can be provided by generating the stereo images according to the viewer's left and right eye positions. For these purposes, we developed an eye-tracking algorithm in which a dedicated eye detection module finds the precise 3D position of the pupil center so that the light rays from the display panel fall within the pupil diameter. The eye pupil detection rate is 99.4% and the tracking accuracy is 1 mm. We implemented stereo and mono versions of the eye tracker. In the case of the stereo camera, the 2D pupil position tracked in each camera is converted to 3D coordinates using stereo triangulation. In the case of the mono camera, the face-normal direction is estimated and the 2D position is converted to 3D under the assumption that the IPD is fixed at 65 mm. The camera specifications, such as frame rate, field of view, and resolution, strongly affect the eye-tracking performance over the viewing volume, including the tracking latency caused by image capturing, data transmission, and computational processing. In our prototypes, we used a camera with 60 fps, a 60° × 40° field of view, and 640 × 480 resolution. The processing time for eye tracking is 4 ms in tracking mode, and the total system latency including display output is 70 ms.
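
As a simplified illustration of the stereo case (an assumed rectified-camera model with placeholder intrinsics, not the tracker's actual implementation), the 3D pupil position follows from the disparity between the two cameras:

```python
import numpy as np

f_px = 800.0            # focal length [pixels] (placeholder)
baseline_mm = 60.0      # stereo baseline [mm] (placeholder)
cx, cy = 320.0, 240.0   # principal point for a 640 x 480 camera (placeholder)

def triangulate_pupil(pupil_left_cam, pupil_right_cam):
    """3D pupil position [mm] from its 2D positions in a rectified stereo pair."""
    (uL, vL), (uR, vR) = pupil_left_cam, pupil_right_cam
    disparity = uL - uR                      # [pixels]
    z = f_px * baseline_mm / disparity       # depth [mm]
    x = (uL - cx) * z / f_px
    y = (vL - cy) * z / f_px
    return np.array([x, y, z])

eye_3d = triangulate_pupil((350.0, 238.0), (318.0, 238.0))   # ~1.5 m in front of the camera
```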

5. Calibration

The objective of 3D calibration is to determine the actual parameters of the display and camera systems after manufacture. Three kinds of calibration are needed in our system: camera, camera-display, and display calibration. Camera calibration determines the camera intrinsic parameters, and camera-display calibration estimates the relative 3D pose between the camera and the display in order to convert the eye position from the camera coordinate system to the display coordinate system. Display calibration determines the display parameters such as the lens pitch, slant angle, lens start position, and gap distance between the panel and the lens sheet. Among the three calibration processes, display parameter calibration is the most important, because a small error in a parameter causes a ray direction error at the display panel and results in a large 3D position error at the viewing distance, which directly causes 3D crosstalk. We developed a 3D calibration system that can perform the three calibration processes simultaneously.

Figure 8 shows our calibration system. We developed a calibration method based on a single pattern image. A single pattern image is displayed on the panel and a camera takes a picture of the displayed pattern. Based on a projection model of the autostereoscopic display, the display parameters can be computed from the observed pattern in the captured image. For higher accuracy and robustness, our calibration method analyzes the pattern and performs the parameter estimation in the frequency domain. More details can be found in [26,27].
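
A very rough sketch of the frequency-domain idea is shown below (the actual estimation procedure is described in [26,27]); the synthetic stripe image and the pitch and slant values are illustrative stand-ins.

```python
import numpy as np

# Synthetic calibration image: a slanted periodic stripe pattern whose pitch and
# slant are the quantities to be recovered (stand-in for the captured photo).
h, w = 512, 512
yy, xx = np.mgrid[0:h, 0:w]
true_pitch_px, true_slant_deg = 9.3, 9.0
phase = xx * np.cos(np.deg2rad(true_slant_deg)) + yy * np.sin(np.deg2rad(true_slant_deg))
img = 0.5 + 0.5 * np.cos(2.0 * np.pi * phase / true_pitch_px)

# The dominant non-DC peak of the 2D spectrum encodes the stripe frequency and direction.
spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
spec[:, : w // 2 + 1] = 0.0                       # keep only positive horizontal frequencies
ky, kx = np.unravel_index(np.argmax(spec), spec.shape)
fy, fx = (ky - h // 2) / h, (kx - w // 2) / w     # [cycles / pixel]

pitch_est_px = 1.0 / np.hypot(fx, fy)
slant_est_deg = np.degrees(np.arctan2(fy, fx))
print(f"estimated pitch ~ {pitch_est_px:.2f} px, slant ~ {slant_est_deg:.1f} deg")
```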

Fig. 8 Camera-display calibration system and software.

6. Prototypes and experimental results

Figure 9 shows the two 3D display prototypes, and their specifications are listed in Table 3. Prototype 1 is made from a 31.5” monitor with UHD (3840 × 2160) resolution. A slit barrier is used as the 3D optical component between the backlight unit (BLU) and the RGB panel, and the eye tracking and 3D rendering are processed on the CPU and GPU, respectively. Prototype 2 is made from a 10.1” tablet display with WQXGA (2560 × 1600) resolution. In this display, a lenticular lens is attached to the front of the RGB panel to split the light direction from each subpixel. Eye tracking is performed on the CPU, but 3D rendering is performed on the FPGA board.

Fig. 9 Implemented prototypes (left: 31.5”, right: 10.1”).

Table 3. Specification of prototypes

We first measured the optical crosstalk to assess the 3D optical characteristics of the prototypes. A standard crosstalk measurement method for autostereoscopic displays can be found in IEC 62629-22-1. In our experiment, we obtained the angular luminance profiles based on this measurement method, and the optical crosstalk is calculated by merging the luminance of each view into two groups. The angular luminance profiles for each view were obtained with an ELDIM VCMaster3D from −10° to 10° with a resolution of 0.2°. Figure 10 shows the luminance profile and optical crosstalk of each prototype over the viewing angle. The average optical crosstalk of the 10.1” tablet and the 31.5” monitor is 2.2% and 5.7%, respectively.
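
As an illustration of this grouping step (the view-to-eye assignment and the synthetic triangular profiles below are assumptions for the sketch, not our measured data), the crosstalk at each angle can be computed as the leakage of the unintended group relative to the intended group:

```python
import numpy as np

# L[v, a]: angular luminance of view v at angle index a.  Placeholder triangular
# lobes stand in for the measured VCMaster3D profiles.
angles = np.linspace(-10.0, 10.0, 101)                     # viewing angle [deg]
n_views = 27
centers = np.linspace(-6.0, 6.0, n_views)                  # assumed view directions [deg]
L = np.maximum(0.0, 1.0 - np.abs(angles[None, :] - centers[:, None]))

# Merge the per-view profiles into a "left eye" group and a "right eye" group
# (the actual assignment depends on the rendered content and eye positions).
left_group = L[: n_views // 2].sum(axis=0)
right_group = L[n_views // 2:].sum(axis=0)

# Crosstalk at each angle: luminance leaking from the unintended group relative
# to the intended group.
intended = np.maximum(left_group, right_group)
leakage = np.minimum(left_group, right_group)
crosstalk = 100.0 * leakage / np.maximum(intended, 1e-9)
print(f"average crosstalk over the measured angles: {crosstalk.mean():.1f}%")
```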

Fig. 10 Luminance profile and optical crosstalk result for 10.1” tablet ((a), (c)) and 31.5” monitor ((b), (d)).

The subjective quality of our 3D displays at varying viewing positions was assessed from images captured by stereo cameras placed behind a face picture; the picture is used so that the tracked eye positions are mapped to the centers of the camera lenses. Figure 11 shows the experimental setup for the visual quality assessment.

Fig. 11 Experimental setting for visual test.

Figure 12 shows the left (blue) and right (red) image separation results of the 10.1” tablet prototype. We captured the separated images at each eye position by varying the angle over −20°, −10°, 0°, 10°, and 20° and the distance over 300 mm, 400 mm, 500 mm, 600 mm, and 700 mm. The results show that the 3D image quality is well preserved except for case 1, which is beyond the designed viewing range. For the 10.1” tablet prototype, separation is observed over a viewing volume of −30° to 30° horizontally, −20° to 20° vertically, and 400 to 1000 mm in depth, which satisfies the common viewing conditions for mobile tablet devices. For the 31.5” monitor prototype, the viewing volume is −30° to 30° horizontally, −20° to 20° vertically, and 600 to 1500 mm in depth, which satisfies the common viewing conditions for PC monitors.

Fig. 12 Left (blue), right (red) view separation results of 10.1” tablet prototype for different viewing positions.

Figure 13 compares the left (left column) and right (right column) image separation results of the 31.5” monitor prototype with those of a commercial eye-tracking based 3D display, the Toshiba Qosmio T851. We captured the separated images by varying the angle over −20°, −10°, 0°, 10°, and 20° at a distance of 800 mm. The results show that the 3D image quality is preserved in our case, whereas the T851 shows crosstalk locally.

Fig. 13 Comparison of left (blue), right (red) view separation results for 31.5” monitor prototype and conventional eye-tracking based 3D display (Toshiba Qosmio T851) at different viewing angles.

Figure 14 shows the left (left column) and right (right column) image separation results of the 31.5” monitor prototype, captured at each eye position.

Fig. 14 Photographs of stereo view separation results of 31.5” monitor prototype.

7. Conclusion

In this paper, a new autostereoscopic 3D display using directional subpixel rendering technology is presented. We designed the optical layer so that the light rays from each pixel are evenly distributed over the viewing range, and the input stereo image pixels are assigned considering the light ray direction of each subpixel. We achieved a wide viewing range of −30° to 30° in the horizontal direction as well as more than 60 cm in the depth direction while maintaining the 3D image quality. Compared with previous approaches such as the viewing zone based method, our rendering technique can accurately model the ray directions, taking into account the locally varying optical properties of large 3D displays, and it is suitable for implementation in parallel computing environments because there is no inter-pixel dependency. To prove the effectiveness of our method, we implemented our rendering algorithm on a GPU and an FPGA and created two display prototypes: a 10.1” tablet and a 31.5” monitor.

References

1. Y. Takaki and S. Uchida, “Table screen 360-degree three-dimensional display using a small array of high-speed projectors,” Opt. Express 20(8), 8848–8861 (2012). [CrossRef]   [PubMed]  

2. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “Rendering for an interactive 360 light field display,” ACM Trans. Graph. 26(3), 40 (2007). [CrossRef]  

3. S. Yoshida, “fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays,” Opt. Express 24(12), 13194–13203 (2016). [CrossRef]   [PubMed]  

4. X. Xia, X. Liu, H. Li, Z. Zheng, H. Wang, Y. Peng, and W. Shen, “A 360-degree floating 3D display based on light field regeneration,” Opt. Express 21(9), 11237–11247 (2013). [CrossRef]   [PubMed]  

5. T. Balogh, P. T. Kovacs, Z. Dobranyi, A. Barsi, Z. Megyesi, Z. Gaal, and G. Balogh, “The holovizio system–New opportunity offered by 3D displays,” Proc. TMCE, 1–11 (2008).

6. J. H. Lee, J. Park, D. Nam, S. Y. Choi, D. S. Park, and C. Y. Kim, “Optimal projector configuration design for 300-Mpixel multi-projection 3D display,” Opt. Express 21(22), 26820–26835 (2013). [CrossRef]   [PubMed]  

7. G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, “Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays,” ACM Trans. Graph. 30(4), 1–11 (2011). [CrossRef]  

8. N. Y. Jo, D. Nam, S. Lee, J. Park, and J. H. Park, “Dual-layer three-dimensional display with enhanced resolution,” SID Symposium Digest of Technical Papers 45, 513–516 (2014).

9. H. Y. Wu, C. T. Chang, and C. L. Lin, “Dead-Zone-Free 2D/3D switchable barrier type 3D display,” SID Symposium Digest of Technical Papers 44, 675–677 (2013). [CrossRef]  

10. D. Suzuki, S. Hayashi, Y. Hyodo, S. Oka, T. Koito, and H. Sugiyama, “A wide view auto-stereoscopic 3D display with an eye-tracking system for enhanced horizontal viewing position and viewing distance,” SID Symposium Digest of Technical Papers 24, 657–668 (2016). [CrossRef]  

11. T. Koito, A. Higashi, T. Kasai, D. Suzuki, Y. Yang, and K. Takizawa, “High resolution glassless 3D with head-tracking system,” SID Symposium Digest of Technical Papers 45, 580–583 (2014). [CrossRef]  

12. N. A. Dodgson, “On the number of viewing zones required for head-tracked autostereoscopic display,” Proc. SPIE 6055, 60550Q (2006). [CrossRef]  

13. R. Barré, K. Hopf, S. Jurk, and U. Leiner, “TRANSFORMERS - Autostereoscopic displays running in different 3D operating modes,” SID Symposium Digest of Technical Papers 42, 452–455 (2011).

14. G. Woodgate, D. Ezra, J. Harrold, N. Holliman, G. Jones, and R. Moseley, “Autostereoscopic 3D display systems with observer tracking,” Signal Process. Image Commun. 14(1-2), 131–145 (1998). [CrossRef]  

15. S. K. Kim, K. H. Yoon, S. K. Yoon, and H. Ju, “Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display,” Opt. Express 23(10), 13230–13244 (2015). [CrossRef]   [PubMed]  

16. K. H. Yoon, M. K. Kang, H. Lee, and S. K. Kim, “Autostereoscopic 3D display system with dynamic fusion of the viewing zone under eye tracking: principles, setup, and evaluation,” Appl. Opt. 57(1), A101–A117 (2018). [CrossRef]   [PubMed]  

17. J. Liu, T. Malzbender, S. Qin, B. Zhang, C. Wu, and J. Davis, “Dynamic Mapping for Multiview Autostereoscopic Displays,” Proc. SPIE 9391, 93911I (2015).

18. H. Stolle, J. C. Olaya, S. Buschbeck, H. Sahm, and A. Schwerdtner, “Technical solution for a full-resolution autostereoscopic 2D/3D display technology,” Proc. SPIE 6803, 68030Q (2008).

19. Y. S. Hwang, F. K. Bruder, T. Fäcke, S. C. Kim, G. Walze, R. Hagen, and E. S. Kim, “Time-sequential autostereoscopic 3-D display with a novel directional backlight system based on volume-holographic optical elements,” Opt. Express 22(8), 9820–9838 (2014). [CrossRef]   [PubMed]  

20. Z. Zhuang, L. Zhang, P. Surman, S. Guo, B. Cao, Y. Zheng, and X. W. Sun, “Directional view method for a time-sequential autostereoscopic display with full resolution,” Appl. Opt. 55(28), 7847–7854 (2016). [CrossRef]   [PubMed]  

21. C. H. Yang, C. Y. Hsu, Y. P. Huang, and H. P. Shieh, “High Resolution Time-multiplexed backlight with tracking system,” SID Symposium Digest of Technical Papers 43, 301–304 (2012).

22. J. Park, D. Nam, G. Sung, Y. Kim, D. Park, and C. Kim, “Active crosstalk reduction on multi-view displays using eye detection,” SID Symposium Digest of Technical Papers 2, 920–923 (2011). [CrossRef]  

23. J. Park and D. Nam, “Active light field rendering in multi-view display systems,” SID Symposium Digest of Technical Papers 1, 36–39 (2012). [CrossRef]  

24. S. Lee, J. Park, J. Heo, B. Kang, D. Kang, H. Hwang, J. H. Lee, Y. Choi, K. Choi, and D. Nam, “Eye tracking based glasses-free 3D display by dynamic light field rendering,” Proc. Imaging and Appl. Opt. Congress, DM3E.6 (2016). [CrossRef]  

25. K. Yanaka, T. Nomura, and T. Yamanouchi, “Extended Fractional View Integral Photography Using Slanted Orthogonal Lenticular Lenses,” in Proceedings of the 2nd World Congress on Electrical Engineering and Computer Systems and Science (EECSS’16), 112 (2016). [CrossRef]  

26. H. Hwang, J. Park, H. S. Chang, Y. J. Jeong, D. Nam, and I. S. Kweon, “Lenticular lens parameter estimation using single image for crosstalk reduction of three-dimensional multi-view display,” SID Symposium Digest of Technical Papers 46, 1417–1420 (2015). [CrossRef]  

27. H. Hwang, H. S. Chang, D. Nam, and I. S. Kweon, “3D Display Calibration by Visual Pattern Analysis,” IEEE Trans. Image Process. 26(5), 2090–2102 (2017). [CrossRef]   [PubMed]  
