Optica Publishing Group

Correlation between perception of color, shadows, and surface textures and the realism of a scene in virtual reality

Open Access

Abstract

Head-mounted displays allow us to enjoy immersive experiences in virtual reality and are expected to appear in more and more applications in both recreational and professional fields. In this context, recent years have witnessed significant advances in rendering techniques that follow physical models of lighting and shading. The aim of this paper is to check the fidelity of the visual appearance of real objects captured with a 3D scanner, rendered on a personal computer, and displayed in a virtual reality device. We have compared forward versus deferred rendering in real-time computing using two different illuminations and five artwork replicas. The survey contains seven items for each artwork (color, shading, texture, definition, geometry, chromatic aberration, and pixelation) plus an extra item related to global realism. The results confirm recent advances in virtual reality, showing considerable visual fidelity of the generated images to their real-world counterparts, with ratings close to 4 on a 5-step perceptive scale. They also show a high correlation of the realism sensation with the fidelity of color reproduction, material texture, and definition of the artwork replicas. Moreover, statistically significant differences between the two rendering modes are found, with a higher realism sensation in the deferred rendering mode.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Modern society has witnessed a rapid development of head-mounted displays (HMDs) in recent years [1–4]. These devices allow for visually immersive experiences in virtual environments and have many applications in both recreational and professional fields, provided that the quality of the immersive experience is satisfactory.

So far, several commercial virtual reality devices have been developed by different companies. From Google’s Cardboard, which uses the screen of any mobile phone as its display, to Facebook-owned Oculus with its Oculus Rift, and on through Samsung, Sony, HTC, and others, many devices of this type have flooded the market. Table 1 shows a comparison of the technical characteristics of the main models available on the market.

Table 1. Characteristics of Main Virtual Reality Devices

This set of devices has revived interest in the concept of virtual reality (VR), a term first used in 1989 by Lanier, Chief Executive Officer of VPL Research, Inc., a manufacturer of gloves and goggles [5]. Virtual reality is defined as a real or simulated environment in which a perceiver experiences telepresence, the experience of presence in an environment, by means of a communication medium [6]. To achieve that sensation of presence, our senses play a crucial role, sending information to the brain that, in turn, creates a situation that is not real.

There is growing interest in analyzing the quality of experience in VR environments. However, measuring this factor is a complex task, and efforts have so far been limited to 360° videos and to the evaluation of particular aspects such as the effect of parallax on motion sickness [7].

Of all the senses that human beings use to communicate with the surrounding world, sight is the one that provides the greatest amount of information to the brain [8], and the main objective of every VR device is to create that feeling of presence starting from the sense of sight.

The feeling of presence starts with the perception of depth, which is visually achieved by generating two different views of the same scene. Each view must be generated from a point of view offset from the other by a distance equivalent to that between the pupils of the human eyes, with proper eye alignment. This produces the effect of stereoscopic vision, or three-dimensional perception, which provides the observer with the depth perception of a scene. In addition, the visual system of the observer must be fully functional (simultaneous perception, fusion, and stereopsis). However, differences in depth and camera distances have significant impacts on depth perception [9].

In general, generating a stereoscopic image is not enough to obtain a good telepresence sensation. This stereoscopic image must have several visual properties. For example, it is necessary to have a wide field of view, larger than the field of view shown on a film or television screen [10–12]. While human beings have a visual field of about 200°, stereoscopic vision can only cover about 110° [13]. From a technical point of view, the large field of view in an HMD is achieved by placing the screen very close to the eyes of the observer; this forces the introduction, in all VR HMD devices, of lenses that let the eye accommodate on the screen at such a short distance. These lenses, in turn, can deform the visual field through the optical aberrations they introduce, and can also make the individual light dots of the screen perceptible (image pixelation) [14].

In addition, to produce a good sense of telepresence, the VR device must be able to detect the movements of the head and generate different views of the same scene with sufficient speed and very little delay. This requirement is known as low latency [15–17].

The VR imaging system must be able to generate images at a rate high enough that no flicker is detected and, moreover, to update the generated image according to the movements of the observer’s head as quickly as possible (at least 90 to 120 Hz). To achieve this, hardware and software elements are needed that can track the movements and render the images with sufficient speed. The tracking hardware consists of gyroscopes, accelerometers, and positioning cameras that, through simple calculations, determine the exact position of the head of the person using the VR device.
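As a back-of-the-envelope illustration of this timing constraint (a sketch added here, not part of the original study), the per-frame rendering budget follows directly from the refresh rate:

```python
# Per-frame rendering budget implied by a target refresh rate.
# Illustrative arithmetic only; the 90-120 Hz figures are those cited above.

def frame_budget_ms(refresh_hz: float) -> float:
    """Maximum time available to render one frame, in milliseconds."""
    return 1000.0 / refresh_hz

for hz in (90, 120):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.2f} ms per frame")
```

At 90 Hz the whole pipeline (tracking, simulation, and rendering) must fit in roughly 11 ms, which is why rendering quality must constantly be traded against latency.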

In addition to the technical characteristics of the HMDs used in VR, it is necessary to consider the types of images shown in these devices. If the images are rendered images, that is, generated by calculations of illumination and shadowing over 3D objects in a 3D scene, the required calculations get more complex depending on the number of objects in the scene and the number of light sources.

The quality of rendered images has evolved considerably in recent years thanks to improvements in ray-tracing techniques, in parallel with computing power. The higher the quality of a ray-traced computation, the more time is needed to render the image. Since virtual reality devices allow only a few milliseconds per frame in order to achieve high refresh rates and low latency, it is at first glance unfeasible to use these types of high-quality synthetic images, although doing so would considerably increase the quality of the virtual immersion experience [18].

Graphics processing units have also evolved substantially in recent years, becoming electronic components whose power allows a marked improvement in the quality of rendered graphics. In this type of rendering, the geometry of the scene is supplied to the graphics card, which projects the geometry and breaks it down into vertices. The vertices are then transformed and rasterized into pixels, which receive the final shading treatment before being passed to the screen. Within this pipeline, different forms of graphics processing are currently used to handle lighting and shading, such as deferred shading, in contrast to traditional forward shading [19].

Deferred shading is the rendering path with the highest degree of lighting and shadow fidelity; it is best suited when using many real-time lights, but it requires a certain level of hardware. With deferred shading there is no limit on the number of lights that can affect an object, and the lighting overhead is proportional to the number of pixels each light illuminates. This is determined by the size of the light volume in the scene, regardless of the number of objects being illuminated. It requires a graphics card with multiple render targets (MRT) and support for depth-rendering textures. Most PC graphics cards made after 2006 support deferred shading, starting with the GeForce 8xxx, Radeon X2400, and Intel G45. However, the hardware requirements are higher for HMD devices because of their high image frequency: for example, Oculus and HTC recommend at least a GTX 970 (Nvidia, USA) or Radeon R9 290 (AMD, USA) graphics card for their HMDs. The remaining hardware requirements are typical of medium- to high-end PCs. On mobile devices, deferred shading is not supported due to the MRT formats used.

Forward shading is the traditional rendering path, and only a small number of the brightest lights are rendered in per-pixel lighting mode. The rest of the lights are computed at object vertices or per object. Forward rendering is the most used standard rendering technique in graphics engines. This process is linear, and each geometry is passed down the pipe one at a time to produce the final image.

Some of these rendering modes are incorporated into the programming platforms that provide content to VR devices. The most used platforms in VR are Unreal Engine (Epic Games, USA) and Unity Game Engine (Unity Technologies, USA). Unity supports different rendering paths, and programmers can choose which one to use depending on the game content and the target platform (software and hardware). Different rendering paths have different performance characteristics that mostly affect the visual appearance of the rendered scene.

In deferred rendering, as previously mentioned, shading is deferred for a short period of time until all geometries have passed down the pipe; the final image is then produced by applying shading at the end of the process. In Unity 5, this rendering path uses a physically based bidirectional reflectance distribution function (BRDF) model with four main components (diffuse, specular, normal, smoothness). The diffuse component corresponds to material color, the specular component to surface reflectance color, and the normal and smoothness components to surface texture. Treating the light–matter interaction this way makes it possible to obtain rendered scenes with a high degree of visual appearance fidelity.
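To make the role of these four components concrete, the following toy single-light shading function (a deliberately simplified sketch, not Unity’s actual BRDF; all names are ours) combines a Lambertian diffuse term with a specular highlight whose sharpness grows with smoothness:

```python
import math

def shade(diffuse, specular, normal, light_dir, view_dir, smoothness):
    """Toy single-light shading using the four components named above.

    diffuse/specular are RGB tuples; normal, light_dir, view_dir are
    unit 3-vectors; smoothness in [0, 1] sharpens the highlight.
    Illustrative sketch only, not the engine's real shading model.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Lambertian term from the surface normal and the light direction.
    n_dot_l = max(0.0, dot(normal, light_dir))
    # Blinn-Phong-style highlight: half-vector, exponent from smoothness.
    half = [l + v for l, v in zip(light_dir, view_dir)]
    norm = math.sqrt(dot(half, half)) or 1.0
    half = [h / norm for h in half]
    spec_power = 2.0 ** (smoothness * 10)  # sharper highlight when smoother
    spec = max(0.0, dot(normal, half)) ** spec_power
    return tuple(d * n_dot_l + s * spec for d, s in zip(diffuse, specular))

# Light straight above a surface facing up, viewed head-on:
rgb = shade((0.8, 0.2, 0.2), (1.0, 1.0, 1.0),
            (0, 1, 0), (0, 1, 0), (0, 1, 0), smoothness=0.5)
```

A real renderer would clamp or tone-map the result to the displayable range; the sketch only shows how the diffuse and specular components are mixed per light.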

The computational complexity of forward rendering is O(num_geometry_fragments * num_lights): the cost grows with both the number of geometry fragments and the number of lights. In contrast, the complexity of deferred rendering is O(screen_resolution * num_lights), so the cost per light is bounded by the screen resolution rather than by the geometric complexity of the scene, and the number of lights can therefore be increased at a predictable cost.
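The difference can be sketched numerically (a hedged illustration with invented scene figures, not measurements from this work):

```python
def forward_cost(num_geometry_fragments: int, num_lights: int) -> int:
    """Forward shading: every shaded fragment pays for every per-pixel light."""
    return num_geometry_fragments * num_lights

def deferred_cost(screen_resolution: int, num_lights: int) -> int:
    """Deferred shading: lighting runs over the G-buffer pixels, once per
    light, regardless of how many geometries produced those pixels."""
    return screen_resolution * num_lights

pixels = 1920 * 1080      # one render target (illustrative resolution)
fragments = 10 * pixels   # heavy overdraw: 10 shaded fragments per pixel
for lights in (1, 12):
    print(lights, forward_cost(fragments, lights), deferred_cost(pixels, lights))
```

With twelve lights and tenfold overdraw, forward shading does ten times the lighting work of deferred shading, which is why the gap between the two modes widens as lights are added to the scene.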

In any case, given that the final goal is to show these rendered scenes in a virtual reality device, the processing time cannot exceed the latency budget, which is usually around 20 ms, of which 11 ms corresponds to the graphics card.

The visual appearance of real objects is a very broad topic; in 2006 the International Commission on Illumination defined the optical properties that can be measured in relation to it [20]. Color, gloss, translucency, and texture are interrelated optical properties, and all of them except translucency map onto the four components used in Unity 5.

There is no standard method to measure the quality of visual appearance in a virtual reality environment. The closest reference is the evaluation of the quality of digital images or videos after data compression. This evaluation is usually carried out by two different methods: objective and subjective.

The objective methods are classified into full reference (FR), reduced reference (RR), and no reference (NR) metrics. The FR and RR methods are important for evaluation in non-real-time scenarios, where both the original and distorted video or image are available [21]. On the contrary, for real-time quality evaluations in the receiver without availability of the original reference, NR or low-complexity RR methods are essential (these techniques are still in the very early stages of development due to their high complexity [22]).

Today, the most reliable method to evaluate image quality is to ask observers for their opinion, that is, to carry out a subjective evaluation of quality and thereby obtain a mean opinion score. Objective algorithms for video or image evaluation try to predict automatically the quality perceived by the observer, thus eliminating the human factor. They are therefore easy and practical for evaluating quality, but they do not guarantee a correlation between the values they produce and human perception. The aim of this paper is to check the fidelity of the visual appearance of real objects captured with a 3D scanner, rendered on a PC, and displayed in an HMD. Our objective is to obtain a subjective quantitative assessment of these virtualization techniques, considering the current state of visual fidelity in VR systems, and to compare two rendering techniques.

Another objective of this work is to determine the influence of the degree of visual fidelity related to several visual parameters over the global perception of realism in a scene visualized in a VR environment.

2. METHODS

The methodology used in this work can be divided into three steps: (1) capture real objects using a 3D scanner, (2) simulate a real light booth using 3D rendering software compatible with VR, and (3) assess the appearance of the simulated scene in VR by comparing it with the real scene perceived by direct view by real observers. Each step is explained in detail below.

A. Hardware and Software

The technical equipment used in this work comprises an Oculus Rift DK2 HMD driven by a custom-made PC with an i7 processor (Intel, USA), 8 GB of RAM, and a GeForce GTX 960 graphics card (Nvidia Corporation, USA). The PC ran the Windows 7 operating system (Microsoft, USA) and the Unity Game Engine 5 (Unity Technologies, USA) programming platform. We chose this real-time platform because Unity lets the programmer easily select the rendering mode and because of our previous experience programming for it.

B. Objects

We have used five high-quality artwork replicas corresponding to different styles and periods (metal warrior, wood fish, Roman mosaic, Greek amphora, Roman stele). We have scanned them with a Go!Scan 20 3D scanner (Creaform, Canada) and processed the 3D object model with VXscan and VXelements 4 software (Creaform, Canada). This scanner has a resolution of 0.5 mm and an accuracy of 0.100 mm. Figure 1 shows the actual appearance of these objects in a LED Color Viewing Light booth (JUST Normlicht, Germany).

Fig. 1. Real pictures of the five artwork replicas used in this work in the LED light booth.

C. Scenes

We have recreated two scenes using the Unity Game Engine 5 software. The first scene simulates a light booth equipped with 12 LED light sources in a D65 setup. The second scene simulates a typical illumination of a museum with a spotlight orientated at a 45° angle with respect to the observer’s viewing angle. These two lighting configurations have also been used in our real laboratory over the real artwork samples (Fig. 2). The virtual appearance of the objects is shown in Fig. 3.

Fig. 2. Experimental setup includes a real light booth (left) and a HMD in which the observer can see the virtual scene displayed on an external monitor (right).

Fig. 3. Simulation of real artwork illuminated by a filtered halogen lamp at incidence angle of 45° with respect to the point of view of the observer at the light booth (0°). (a) Metal warrior. (b) Wood fish. (c) Roman mosaic. (d) Greek amphora. (e) Roman stele.

The reason for this double implementation of virtual scenes is to compare the effects of the two rendering modes (deferred and forward) under two different light setups: 12 light sources versus a single spotlight. The main differences between rendering modes are expected when the number of light sources changes; they are not associated with the light technology (LED versus halogen).

D. Observers and Procedure

Measuring the perceived quality of images shown in a VR device requires subjective scaling methods, because no objective metric yet exists that can measure the quality of a VR experience [23].

Subjective evaluation was carried out by comparing both environments, real and virtual. Twenty observers participated in a survey based on a mean opinion score (MOS) scale. In this case, we used degradation category rating (DCR), which means that the scenes are presented in pairs: the first scene is always the source reference (in our case the real light booth), while the second is the scene shown through the HMD. DCR has long been a key method for the assessment of television pictures [24]. Details of this scale are shown in Table 2.

Table 2. Mean Opinion Score Scale Used in this Work

We have compared two rendering paths, forward versus deferred, in real-time computing using two different illumination setups and five artwork replicas. The survey contains eight items for each artwork and can be divided into three parts: four items related to the visual appearance of the object (geometry, color, shading, and texture); three items related to the quality of the image (definition, chromatic aberration, and pixelation); and, finally, realism as an integrating item.

Before the subjective experience, we explained the meaning of these perceptual attributes to each observer. Attributes associated with the object’s appearance have a clear visual meaning, and observers could check them against the real scene. Attributes related to image quality, such as pixelation, definition, or chromatic aberration, were explained carefully with examples, and we asked observers to abstract in their responses, assigning 5 points to a theoretical image free of pixelation and chromatic aberration and with perfect, blur-free definition. The range of MOS values to be obtained cannot be known in advance, since such results depend strongly on the particular methodology used; in the case of video compression codecs, for instance, scores range from 1 to almost 5 depending on parameters such as the bitrate [25].

The population sample was composed of 20 observers aged between 20 and 55 years, all screened for normal color vision with the Ishihara test. Each observer answered the survey three times, in randomized order, in different sessions.

3. RESULTS

This type of subjective test is based on a MOS scale. Despite being a priori a discrete qualitative scale, it is usually treated as continuous, since observers are allowed to indicate intermediate points within the scale.

In addition, the analysis is based on the average of three repetitions, which yields a continuous scale given the usual variance in this type of subjective test. A related publication reports standard deviations of the mean scores between 14% and 18% in subjective MOS tests and proposes an observer-filtering method that improves these figures [26]. In our case we applied no filtering criteria, and the average reliability of the measurements, determined from the intraclass correlation coefficient, was 0.71.
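The score aggregation described above can be sketched as follows (the ratings are invented for illustration; they are not the experimental data):

```python
from statistics import mean, stdev

def mos(scores_by_observer):
    """MOS across observers: each observer contributes the mean of their
    repeated ratings (three sessions in this study); the mean and standard
    deviation are then taken over observers."""
    per_observer = [mean(reps) for reps in scores_by_observer]
    return mean(per_observer), stdev(per_observer)

# Hypothetical ratings for one item: 3 observers x 3 sessions, 1-5 scale.
ratings = [(4, 4.5, 4), (3.5, 4, 4), (5, 4.5, 4.5)]
m, s = mos(ratings)
```

Averaging each observer’s repetitions first is what turns the discrete 5-step ratings into the effectively continuous scale mentioned above.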

We checked the observers’ responses with the corresponding statistical analysis and found no outliers. Table 3 presents the average values and corresponding standard deviations obtained from the 20 observers for the eight perceptive properties surveyed. Results are shown in four numerical columns. The first pair corresponds to results obtained by comparing deferred and forward rendering using the real 12-LED light booth as reference and the simulated light booth scene as test. The second pair of columns shows the results of the analogous comparison of rendering modes, but using the real halogen spotlight as reference and the associated simulated scene as test.

Table 3. Average Values and Standard Deviation of MOS Scores Calculated over 20 Observers for Eight Items and Four Setups

The results indicate great fidelity in shading, color, and texture using the deferred renderer, which includes the BRDF model, reaching scores close to 4 points (perceptible but not annoying). Regarding geometry, the score achieved is close to 5 (imperceptible) in all cases.

We tested two different setups, using 12 light sources in one case and a single spotlight in the other. We expected the main differences between deferred and forward rendering to appear with 12 lights, and this was indeed the case. It follows from the technical procedure of deferred rendering, in which all the lights of the scene are processed at the end of the pipe. The differences between deferred and forward rendering using a single spotlight, on the other hand, were smaller. The analysis of statistical significance suggests that significant differences exist for the perceptive properties color, shading, and texture, as well as for the global property realism. We used a non-parametric test (the Wilcoxon signed-rank test) because the input data do not follow a normal distribution.

The worst results were obtained for pixelation and chromatic aberration, with scores close to 3 (slightly annoying) in some cases. These parameters are directly related to the quality of the HMD and can be improved by increasing the dot density of the displays and correcting the chromatic aberration introduced by the lenses.

In accordance with the second objective of this work, we studied the relation between the realism perceived in a VR scene and several perceptive properties: color, shading, texture, definition, geometry, chromatic aberration, and pixelation. The resulting correlations are shown in the left column of Table 4. A parametric test, the Pearson correlation coefficient, was used on this occasion, since the analysis was performed on the complete set of results; the number of observations was large enough to apply the Central Limit Theorem, which allows the assumption of normal populations. The results show significant correlations between the perceived realism sensation and the rest of the visual parameters, with the highest correlations for color and material texture.
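The statistic behind Table 4 can be written from scratch in a few lines (the sample values are invented for illustration, not the paper’s data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-scene MOS values for two of the surveyed items.
realism = [3.8, 4.0, 4.2, 3.5, 4.4]
color   = [3.9, 4.1, 4.3, 3.4, 4.5]
r = pearson_r(realism, color)
```

In practice a statistics package (e.g. `scipy.stats.pearsonr`) would also supply the p-value used to judge the significance of each correlation.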

Table 4. Correlation Coefficients between Perceived Realism, Remaining Visual Properties, and Linear Model

To gain further insight, it is of interest to know the importance, or relevance, of each visual parameter to the overall sensation of realism. For this purpose, we studied the linear model underlying the experimental results using the automated linear model tool of the SPSS statistical software (IBM, USA). This tool automatically standardizes the input data, finds the best possible linear model, and provides the model coefficients together with their statistical significance and importance. For linear models, the importance of a predictor is the residual sum of squares with the predictor removed from the model, normalized so that the importance values sum to 1. All these results are shown in Table 4.

The model found presents a moderate fit (R2=0.604). Its three main factors are the color, material texture, and definition of the piece shown in the virtual environment. These three factors match those presenting the greatest correlation with the overall sensation of realism; however, only two of them (color and texture) showed significant differences depending on the rendering technology used. The third factor, the definition of the object, showed no significant differences but presented unexpectedly better values when a single halogen light source was used instead of the lighting cabinet composed of 12 LED lights.

4. CONCLUSION

The advances made in virtual reality in recent years are significant: the visual fidelity of the generated image compared to the real-world scene is considerable, close to 4 on a 5-step perceptive scale. Fidelity in color reproduction and in the perception of material texture are the main factors behind this improvement. However, this fidelity can be improved further. So far, although the light–matter interaction in rendered images is represented following physical models, not all current knowledge in this field has been applied, mainly due to limitations in computing power that may be removed in the near future.

Other technical factors, such as the density of pixels, can prevent the perception of pixelated scenes and improve the definition of the virtual object. A higher pixel density also allows a better correction of chromatic aberration, especially at the edges of objects. In addition, the latest HMD models developed by Oculus and HTC use chromatic-aberration-corrected lenses.

Finally, a statistically significant difference between the two rendering modes was noted when using the 12-light-source scene, in which case the deferred rendering mode achieved a higher realism sensation. This makes it worthwhile to drive an HMD with a graphics card that supports deferred rendering, as opposed to HMDs based on mobile devices, which do not yet support this technology. It should be borne in mind that most virtual reality applications require more than one light source, since the reflection of a primary light source off any object constitutes a secondary light source.

Funding

Consejería de Educación y Empleo, Junta de Extremadura and European Regional Development Fund (ERDF) (GR15102, IB16004).

REFERENCES

1. Y. Wang, W. Liu, X. Meng, H. Fu, D. Zhang, Y. Kang, R. Feng, Z. Wei, X. Zhu, and G. Jiang, “Development of an immersive virtual reality head-mounted display with high performance,” Appl. Opt. 55, 6969–6977 (2016).

2. G. Kramida, “Resolving the vergence-accommodation conflict in head-mounted displays,” IEEE Trans. Visual. Comput. Graph. 22, 1912–1931 (2016).

3. M. Xu and H. Hua, “High dynamic range head mounted display based on dual-layer spatial modulation,” Opt. Express 25, 23320–23333 (2017).

4. P. Benitez, J. C. Miñano, D. Grabovickic, P. Zamora, M. Buljan, B. Narasimhan, and M. Nikolic, “Freeform optics for virtual reality applications,” in Optical Design and Fabrication (Freeform, IODC, OFT), OSA Technical Digest (Optical Society of America, 2017), paper ITu2A.1.

5. M. W. Krueger, Artificial Reality (Addison-Wesley, 1991).

6. J. Steuer, “Defining virtual reality: dimensions determining telepresence,” J. Commun. 42, 73–93 (1992).

7. V. Milesen, D. Madsen, and R. B. Lind, Quality Assessment of VR Film: A Study on Spatial Features in VR Concert Experiences (Aalborg University, 2017).

8. Y. Le Grand, Light, Colour, and Vision (Wiley, 1957).

9. S. Baek and C. Lee, “Depth perception estimation of various stereoscopic displays,” Opt. Express 24, 23618–23634 (2016).

10. L. Muhlbach, M. Bocker, and A. Prussog, “Telepresence in videocommunications: a study on stereoscopy and individual eye contact,” Hum. Factors 37, 290–305 (1995).

11. J. D. Prothero and H. G. Hoffman, “Widening the field-of-view increases the sense of presence in immersive virtual environments,” Technical Report TR-95-2 (Human Interface Technology Laboratory, 1995).

12. E. D. Ragan, D. A. Bowman, R. Kopper, C. Stinson, S. Scerbo, and R. P. McMahan, “Effects of field of view and visual complexity on virtual reality training effectiveness for a visual scanning task,” IEEE Trans. Visual. Comput. Graph. 21, 794–807 (2015).

13. I. P. Howard and B. J. Rogers, Binocular Vision and Stereopsis (Oxford University, 1995).

14. J. Faubert, “The influence of optical distortions and transverse chromatic aberration on motion parallax and stereopsis in natural and artificial environments,” in Three-Dimensional Television, Video, and Display Technologies, B. Javidi and F. Okano, eds. (Springer, 2002), pp. 359–396.

15. J. Shin, G. An, J. Park, S. Jun Baek, and K. Lee, “Application of precise indoor position tracking to immersive virtual reality with translational movement support,” Multimedia Tools Appl. 75, 12331–12350 (2016).

16. P. Lincoln, A. Blate, M. Singh, T. Whitted, A. State, A. Lastra, and H. Fuchs, “From motion to photons in 80 microseconds: towards minimal latency for virtual and augmented reality,” IEEE Trans. Visual. Comput. Graph. 22, 1367–1376 (2016).

17. M. Di Luca, “New method to measure end-to-end delay of virtual reality,” Presence 19, 569–584 (2010).

18. P. Zimmons and A. Panter, “The influence of rendering quality on presence and task performance in a virtual environment,” in IEEE Virtual Reality (IEEE, 2003), pp. 293–294.

19. L. Won-Jong, H. Seok Joong, S. Youngsam, Y. Jeong-Joon, and R. Soojung, “Fast stereoscopic rendering on mobile ray tracing GPU for virtual reality applications,” in IEEE International Conference on Consumer Electronics (IEEE, 2017), pp. 355–357.

20. CIE, “A framework for the measurement of visual appearance,” CIE 175:2006 (2006).

21. Z. Wang, H. R. Sheikh, and A. C. Bovik, “Objective video quality assessment,” in The Handbook of Video Databases: Design and Applications, B. Furht and O. Marqure, eds. (CRC Press, 2003), pp. 1041–1078.

22. S. Chikkerur, V. Sundaram, M. Reisslein, and L. J. Karam, “Objective video quality assessment methods: a classification, review, and performance comparison,” IEEE Trans. Broadcast. 57, 165–182 (2011).

23. Y. Sulai, Y. Geng, O. Mercier, M. Zannoli, K. MacKenzie, J. Hillis, D. Lanman, J. Gollier, and S. McEldowney, “Optics and perception in virtual reality,” in Imaging and Applied Optics (3D, AIO, COSI, IS, MATH, pcAOP), OSA Technical Digest (Optical Society of America, 2017), paper DTu4F.3.

24. ITU-T, “Subjective video quality assessment methods for multimedia applications,” ITU-T P.910 (2008).

25. H. T. Tran, N. P. Ngoc, C. T. Pham, Y. J. Jung, and T. C. Thang, “A subjective study on QoE of 360 video for VR communication,” in IEEE 19th International Workshop on Multimedia Signal Processing (IEEE, 2017), pp. 1–6.

26. A. Ostaszewska and S. Zebrowska-Lucyk, “The method of increasing the accuracy of mean opinion score estimation in subjective quality evaluation,” in Wearable and Autonomous Systems, A. Lay-Ekuakille and S. Chandra Mukhopadhyay, eds. (Springer, 2010), pp. 315–329.
