Optica Publishing Group

Embedding LiDAR and smart laser headlight in a compact module for autonomous driving

Open Access

Abstract

A headlight module embedding smart laser headlights and a LiDAR is presented. The headlights include blue lasers for the high beam and blue LEDs for the low beam, each with a high-efficiency glass phosphor converter that we have fabricated. The LiDAR is an indirect-mode time-of-flight 905-nm rangefinder. We used two Nichia GaN lasers emitting a total of 9.5-W optical power at 445 nm for the high-beam headlight, and five OSRAM GaN LEDs emitting a total of 12 W at 445 nm for the low-beam headlight. The yellow-converter phosphor is a glass-based Ce3+:YAG slab, 20-mm dia. and 0.35-mm thick, that we have fabricated at 750°C, obtaining high thermal stability, high conversion efficiency, and a good color rendering index. A digital micromirror device (DMD) placed in the focus of the parabolic reflector is used to switch off sectors of the high beam when other vehicles are crossed. The 2D image of a CCD camera and the LiDAR data are fed to a CNN-based image processing unit that classifies the targets. The headlight module detects a pedestrian at up to 20-m distance with >85% accuracy.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In recent years, photonics technology has become a driving force of innovation in the automotive industry, thanks to low-cost versions of light detection and ranging instruments (LiDAR) and the advent of solid-state lighting (SSL) based on blue GaN lasers and phosphor conversion.

LiDARs are the low-end versions of rangefinders or telemeters, well-known instruments [1] developed since the 1970s, that offer exceptional performance in terms of range covered (as exemplified by the famous LURE earth-to-moon distance measurement, and the MOLA topography of planet Mars [1]) and accuracy and/or precision (down to a few micrometers).

In the automotive application the specifications are more relaxed, as we typically need a resolution of a few cm over a range of 50…100 m, but there are two supplementary requirements: (i) low cost, prompting new simplified schemes for processing the optical signal, and (ii) laser safety, imposing a power limit on the optical beam at the instrument output pupil. Consequently, the wavelength of operation is a nearly forced choice dictated by the available low-cost lasers and LEDs: either the 905 nm of GaAs or the 1550 nm (commonly known as the eye-safe wavelength) of quaternary InGaAsP laser diodes.

Regarding the LiDAR scheme of operation, a simplified ToF (time-of-flight) version, known as indirect-ToF [2], has proven quite successful for low-cost automotive applications. It uses a long, rectangular light pulse to illuminate the scene and a simple gating of the receiver photodetector, so as to slice off, from the rectangular pulse, a fraction proportional to the distance. This approach supplies the desired cm resolution with a modest source power (typically a few mW for 100-m distance coverage).
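The gating principle above can be sketched numerically: the fraction of the echo falling in a gate adjacent to the emission gate grows linearly with the round-trip delay. This is a minimal illustration of the indirect-ToF idea; the pulse length and charge values below are illustrative assumptions, not parameters of the device used in this work.

```python
# Indirect (range-gated) ToF sketch: a rectangular pulse of duration t_pulse
# is emitted; the receiver integrates the echo in two adjacent gates of equal
# width. The charge ratio encodes the round-trip delay, hence the distance.
# Numbers below are illustrative, not from the LiDAR of this study.

C = 299_792_458.0  # speed of light, m/s

def indirect_tof_distance(q1: float, q2: float, t_pulse: float) -> float:
    """Distance from the two gated charges of a rectangular echo pulse.

    q1: charge collected in the gate synchronous with emission
    q2: charge collected in the adjacent gate of equal width
    t_pulse: pulse (and gate) duration in seconds
    """
    # The round-trip delay is proportional to the fraction of the echo
    # spilling into the second gate: tau = t_pulse * q2 / (q1 + q2).
    tau = t_pulse * q2 / (q1 + q2)
    return C * tau / 2.0  # one-way distance

# Example: 1-us pulse, echo split 2:1 between the gates (tau = 333 ns).
d = indirect_tof_distance(q1=2.0, q2=1.0, t_pulse=1e-6)
```

Note that the resolution is set by the timing/charge-measurement precision rather than by a fast time-to-digital converter, which is what keeps the electronics cheap.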

To satisfy Class 1 operation according to the IEC standard [3], the power output should not be larger than 1.05 mW and 10 mW, at λ = 905 and 1550 nm, respectively, for the LiDAR to be intrinsically safe.

These constraints prevent the pulsed-ToF approach from being usable, because it requires much higher power, whereas they can be satisfied with the sine-wave ToF [1,4], which however requires much more elaborate (and expensive) electronic processing than the indirect-ToF circuits. An excellent review of the different technical approaches proposed so far for implementing the LiDAR is provided by Ref. [5].

One key point of the application is the placement of the LiDAR. It can be as follows:

  • (i) on the vehicle roof, as in the early proposals [6]; in this placement the LiDAR can provide target distance measurements over a 360-deg field-of-view (fov), as required by certain special applications, yet with the hindrance of having to pass the connection cables through the car roof;
  • (ii) inside the headlight compartment, usually the right-hand side headlight for right-hand driving, so as to get the best coverage of the front road and sidewalk while minimizing the glare to vehicles coming from the opposite direction.
In this study we have chosen to embed, in the headlight compartment, the LiDAR and the smart lighting system, based on phosphor-converted GaN LEDs and laser diodes, so as to obtain the complete assembly shown in Fig. 1.

Fig. 1. The LiDAR embedded with smart headlights includes: high beam light (HB), low beam light (LB), and the LiDAR (L). Additional modules external to the headlight assembly are: the control system and the CCD camera.


We also use a CCD mounted on the windshield inside the car to pick up the 2D image of the scene. The scene displayed to the driver is subdivided into 6 horizontal × 2 vertical RoIs (regions of interest), each covering an 8 × 5 deg2 portion of the field-of-view. The CCD image is fed to the processing computer for detection and/or identification of targets in each RoI, and the output data are fused with the LiDAR distance data to sort out the action to be taken. In this study, we demonstrate a preliminary data fusion consisting of labeling each RoI with a different color according to the level of danger represented by the combination of the detected object and its distance (see Sect. 4).

2. LiDAR

In this work, we adopt a 905-nm LiDAR, manufactured by LeddarTech [7], with a total field-of-view of 48 deg (horizontal) and 10 deg (vertical). The field-of-view is segmented into 6 × 2 RoIs (regions of interest), each covering 8 × 5 deg2. The LiDAR yields a 5-cm accuracy over a 20 to 80-m range for a reflective/diffusing target of 10 × 10 cm2 cross section, has a refresh rate of 100 Hz, so that latency is negligible, and is compliant with the Class 1 safety category. The 10 × 10 cm2 target is detected with 99.9% probability, and it represents the equivalent signal returning from a pedestrian with an average diffusivity of 2%.
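The RoI segmentation above amounts to a simple binning of the angular direction into a 6 × 2 grid. A minimal sketch, assuming the field-of-view is centered on the optical axis (an assumption; the article does not state the angular origin):

```python
# Map an angular direction (azimuth, elevation in degrees) to the 6 x 2 RoI
# grid of the LiDAR: total fov 48 x 10 deg, each RoI 8 x 5 deg.
# Assumes the fov is symmetric about the optical axis (not stated in the text).

ROI_COLS, ROI_ROWS = 6, 2
FOV_H, FOV_V = 48.0, 10.0  # degrees

def roi_index(az_deg: float, el_deg: float):
    """Return (col, row) of the RoI containing the direction, or None."""
    if abs(az_deg) >= FOV_H / 2 or abs(el_deg) >= FOV_V / 2:
        return None  # direction falls outside the field-of-view
    col = int((az_deg + FOV_H / 2) // (FOV_H / ROI_COLS))
    row = int((el_deg + FOV_V / 2) // (FOV_V / ROI_ROWS))
    return col, row
```

The same indexing applies to the CCD image, which is segmented into the same grid for the data fusion of Sect. 4.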

The covered field-of-view can be expanded by scanning [8], but this operation is expensive and adds bulky components to the optical system. For example, to pass from the basic stripe of RoIs along the X-axis to an array of N such stripes running at different heights along the Y-axis, we can use a rotating polygonal prism placed in the middle of a double-objective optical relay, mounted in front of the LiDAR. Assuming NA = 0.7 for the relay lenses and a 60-mm LiDAR I/O pupil, we need a large prism, of size 60 × (0.105 rad) × 2 ≈ 12.6 mm, to accomplish the scan. An improvement is obtained by using MEMS mirrors as the scanning element [9], yet the conjugating optics make the assembly rather bulky. Thus, in general it is more appropriate to choose a LiDAR with the final field-of-view and number of channels or RoIs than to attempt to multiplex them.

The LiDAR supplies the raw distance data of the 6 × 2 RoIs, as illustrated later in Sect. 4, Fig. 8. Yet, we need to add information about the nature of the target to establish the associated danger. For example, the same distance in two separate RoIs may belong to a pedestrian or to the vehicle in front of us, which in turn could be approaching or moving away.

The classification of targets is performed by an image pickup camera and a processing unit. For operation with the LiDAR, a cheap Si CCD [10] can be used, with a reduced pixel count (typically 800 × 600) and a 60-Hz frame rate. The processing unit acquires and processes the image in the GPU, using a CNN (convolutional neural network), the YOLOv4 network for feature extraction, trained by transfer learning to recognize and sort out car, truck, motorcycle, and human shapes.

The CCD looks at the scene illuminated by the headlights. As the phosphor-converted illuminator has a spectrum centered in the visible, rolling off sharply above 750 nm [11], the crosstalk at the LiDAR wavelength of 905 nm is negligible, because the built-in optical filter of the photodetector suppresses (by more than 50 dB) the feedback from the headlights.

The output information to the driver is supplied by a head-up projection, on the windshield, of the image taken by the CCD, with each RoI colored according to the danger class of the target and its distance (an example is provided by Fig. 8 in Sect. 4).

3. Headlights design

In the last decade, there has been enormous progress in InGaN laser diodes (also called blue lasers) with regard to power (reaching up to kW levels for metal working), reliability, and cost. For the invention of the blue LED, Nakamura was awarded the Nobel Prize in 2014, and in the consumer market the LED lamp has replaced Edison's filament lamp, invented in 1880, the main reason being the improvement in conversion efficiency, from ≈16 lm/W for the filament lamp to ≈300 lm/W for the LED lamp.

White light is obtained from the blue LED using a phosphor that converts a fraction (about half, as can be seen from the CIE chromaticity triangle [10]) of the blue power into yellow, obtaining a color temperature high enough for color rendering of the external scene observed by the driver.

When used for lighting purposes, the phosphor is deposited on the envelope window of the LED, and because in this case the dissipated power is moderate, an easy-to-prepare silicone-based phosphor can be used.

For car headlights, however, the power level reaches ten watts or more and we need better endurance to thermal stress, so we selected a glass-based phosphor in our design. We mounted the phosphor on an Al heatsink (Fig. 2), backed by a dichroic filter that transmits blue light and reflects yellow light, so as to recover the backward emission of the phosphor. An aspheric objective lens (Fig. 2) completes the assembly of the white-light headlight module.

Fig. 2. Schematic of the headlight module: the power emitted by the blue laser diode (or the LED for the low-beam headlight) reaches the glass phosphor plate and is in part converted to yellow, so that the mixture yellow-blue appears white at the output; the dichroic filter transmits the blue to the phosphor and reflects the yellow.


In this study, we have used two blue laser diodes (Nichia, 445 nm) for the high-beam headlights, and five blue LEDs (OSRAM, 445 nm) for the low-beam headlights. The individual devices are mounted side-by-side to minimize the gaps in the near field and obtain a nearly continuous distribution in the far field. The emitted radiant powers at 445 nm were 9.5 W and 12 W, respectively.

The choice of separate sources gives a better failure-tolerant performance with respect to using a single source switchable from the high- to the low-beam condition by means of a DMD (digital micromirror device) based on a MEMS structure [12,13]. Further, the use of laser diodes for the high-beam headlight, compared with LEDs, allows the illumination range to be increased (reaching 600 m instead of the 100 m of a LED-based illuminator) thanks to the higher radiance of the laser.

At the output of the phosphor we obtained: (i) for the high-beam headlight module (HBHM), a luminous power of 4000 lm and a correlated color temperature of 4,300 K; (ii) for the low-beam headlight module (LBHM), a luminous power of 3100 lm and a correlated color temperature of 6,000 K.

The two modules are incorporated in a beam-forming optical system made by the combination of a reflector and a front aspherical objective lens. For the HBHM the reflector is parabolic, whereas for the LBHM an elliptical reflector is used, to focus the light on the mask, as shown in Fig. 3(a).

Fig. 3. (a): An elliptical reflector and an aspherical lens provide a high-NA objective lens combination for the LBHM. The mask is a stop to remove up-going rays external to the desired low-beam spatial distribution. For the HBHM, the mask is not required, and the reflector is parabolic. (b): Example of a ray tracing simulation, for the low-beam headlight, as obtained by the SPEOS software; the discretization step is 0.2 deg in both angular coordinates perpendicular to the Z-axis propagation.


We have employed the software simulator SPEOS [14] to calculate the illumination distribution of the beams by ray tracing, and an example of the results is shown in Fig. 3(b). The software also helped to finely tune the optical parameters and the positions of the elements, so as to satisfy the applicable standard ECE R112. For example, from the simulations we found that the HBHM had 180 kcd on axis, and 88 kcd and 35 kcd at ±2.5 and ±5 deg off-axis, respectively, amply exceeding the Class B specification of ECE R112 (i.e. >20 kcd and >5.1 kcd, respectively).
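The compliance check just described reduces to comparing the simulated intensity at each test point against the quoted minimum. A small sketch, using only the two off-axis Class B minima quoted above (the on-axis point has no minimum quoted in the text, so it is not checked here):

```python
# Check simulated high-beam intensities against the ECE R112 Class B minima
# quoted in the text for the off-axis test points (all values in kcd).
# Only the two test points whose minima are quoted above are checked.

CLASS_B_MIN_KCD = {2.5: 20.0, 5.0: 5.1}  # off-axis angle (deg) -> minimum kcd

def passes_class_b(measured_kcd: dict) -> bool:
    """True if every quoted test point meets its minimum intensity."""
    return all(measured_kcd.get(angle, 0.0) >= minimum
               for angle, minimum in CLASS_B_MIN_KCD.items())

# Simulated HBHM values from the text: 88 kcd at 2.5 deg, 35 kcd at 5 deg.
hbhm = {2.5: 88.0, 5.0: 35.0}
```

A complete verification would iterate the same comparison over the full set of test points defined by the regulation (Tables 1 and 2 report them for both modules).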

The HBHM radiant intensity at specified angles, calculated and measured, is reported in Table 1, whereas Table 2 reports the radiant intensity data for the LBHM. In both cases, the Class B specifications of ECE R112 are amply satisfied.

Table 1. Parameters of the HBHM

Table 2. Parameters of the LBHM

The intensity distributions of the low- and high-beam headlights are plotted in Fig. 4. These are the measured values corresponding to the far-field distribution of the ray-tracing simulation shown in Fig. 3(b).

Fig. 4. Intensity distribution pattern in the far field for: (a) the low beam, and (b) the high beam, as calculated by the SPEOS ray tracing software. Also indicated are the test points for the evaluation of the headlight according to the Class B specification of ECE R112. The corresponding values are reported in Tables 1 and 2.


Other features of the headlight design are:

  • (i) a fail-safe protection against exposure to excessive optical power in case of an accident or breakage of the optical system, provided by a photodiode placed near the edge of the beam, so as to receive illumination after any deformation of the optical system, and
  • (ii) a smart headlight control of the high beam, to avoid glare to vehicles coming from the opposite direction, a function actuated by a DMD which deflects the beam (or a RoI segment) outside the field-of-view when a target is recognized as a preceding or crossing car.
The deflected beam is in this case dumped on an absorber (as described later in Fig. 6).

3.1 Phosphor converter

Phosphor converters for blue LED and lasers can be fabricated by several processes, namely: silicone based, glass based, ceramic, and single crystals [15].

Ceramic and single-crystal phosphors exhibit high thermal stability, thanks to the high temperature (typ. 1500°C) of the process, but fabrication at high temperature is time consuming, and therefore these phosphors are too expensive for the automotive application.

Silicone-based phosphors are low cost, but they are fabricated at low temperature (typ. 150°C), so their thermal stability is expected to be insufficient in the headlight compartment, which may reach a temperature as high as 85°C.

Glass-based phosphors, requiring about 750°C for preprocessing, have the desired thermal stability, a good conversion efficiency and yet can be mass produced at low cost [11].

We have fabricated the Ce3+:YAG yellow glass phosphor by dispersing the powder into a host sodium glass mixture of 60 mol% SiO2, 25% Na2CO3, 9% Al2O3, and 6% CaO. The resulting cullet was dried and milled into powder, the Ce3+:YAG phosphor was added and mixed, then the mixture was sintered at 750°C for 1 hour, and finally annealed at 350°C for 3 hours. The optimal concentration of Ce3+:YAG yielding the highest conversion efficiency was found to be 40% by weight [15]. The phosphor glass is worked into a slab of 20-mm diameter and 0.35-mm thickness, and its surfaces are polished.

For the low beam light, the phosphor is mounted in direct contact with the LED chip (Fig. 5). For the high beam, the phosphor is mounted on an Al substrate acting as a heat sink and has at the bottom a dichroic filter to reflect the yellow light toward the collimating lens, Fig. 2.

Fig. 5. Layout of the phosphor and LED (or laser diode) assembly: the glass phosphor is mounted atop the LED chip with a metal spacer that serves also as a heat sink.


A thermal aging test was carried out on the Ce3+:YAG yellow glass phosphor (CYGP) and, for comparison, on a silicone-based layer (CYSP) that we fabricated by mixing silicone and Ce3+:YAG powder and baking at 150°C. All samples were 0.35-mm thick and their surfaces were polished.

Six samples of CYGP and CYSP were aged at 150, 250, 350 and 450°C for 1000 hours. The conversion efficiency loss after the test was 2% for the CYGP and 5 to 10 times larger for the CYSP, and the CIE wavelength shift was 0.002% and 0.32%, respectively.

Other favorable parameters of the CYGP are its better thermal conductivity (1.88 W/m·°C, i.e., 7.5 times larger than that of the CYSP) and its smaller thermal expansion coefficient (9 ppm/°C versus 310 ppm/°C for the CYSP).

The color rendering index (CRI) of the CYGP is around 88-90, a value that could be improved, if necessary, with the use of a double phosphor, Lu3Al5O12:Ce3+ and CaAlSiN3:Eu2+, reaching CRI = 94, as described in Ref. [16], although the automotive application usually doesn't require such an excellent color rendition for the purpose of target recognition.

3.2 Smart headlight control

An array of micromirrors fabricated in MEMS technology (a DMD, or digital micromirror device) allows us to control and switch off the high-beam headlights when a vehicle coming from the opposite direction is crossed.

In this study we used a DMD with 1024 × 768 individual micromirrors, a chip fabricated by Texas Instruments [17]. The micromirror array is arranged in 6 stripes. Each stripe can be individually switched from −12 deg to +12 deg, independently of the others, and the size of the DMD (10 mm × 7 mm) is large enough to accommodate the high-beam illumination near-field output from the phosphor [13].

The DMD is placed at the focus of the projection system, made by the elliptical reflector plus an aspheric objective lens, and is controlled by microprocessor software provided by the manufacturer.

As illustrated in Fig. 6, the working position of the DMD is −12 deg, for which the beam coming from the laser is deflected to the outgoing folding mirror. When a particular stripe of the array is switched from −12 deg to +12 deg, the corresponding portion of the beam (cross-hatched in Fig. 6) is removed from the outgoing beam and dumped in an absorber. During this operation, no appreciable stray light was noticed experimentally.

Fig. 6. Smart headlight control by means of a DMD: the micromirrors are normally all in the −12 deg position and the beam is directed to the output projector; when a stripe of micromirrors is switched to +12 deg, the corresponding beam portion is deviated to the absorber.


Thus, if a vehicle is detected in a particular RoI of the scene, the high beam lights of the corresponding segment are switched off.
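The control logic is a one-to-one mapping between the 6 horizontal RoI columns and the 6 DMD stripes. A minimal sketch of this logic; `set_stripe_angle()` is a hypothetical placeholder for the manufacturer's driver call, not a real TI API:

```python
# Sketch of the smart high-beam control: each of the 6 DMD mirror stripes maps
# to one horizontal RoI column. A stripe at -12 deg sends its beam portion to
# the output projector; +12 deg dumps it on the absorber.
# set_stripe_angle() stands in for the vendor driver API (hypothetical name).

ON_DEG, OFF_DEG = -12, +12  # mirror tilt: output projector vs. absorber

def stripe_angles(occupied_columns: set) -> list:
    """Tilt for each of the 6 stripes: dump the beam where a vehicle is."""
    return [OFF_DEG if col in occupied_columns else ON_DEG
            for col in range(6)]

def set_stripe_angle(stripe: int, angle_deg: int) -> None:
    """Placeholder for the DMD driver call (not the actual TI interface)."""
    print(f"stripe {stripe}: {angle_deg:+d} deg")

# A vehicle detected in RoI columns 2 and 3: switch those stripes off.
for i, a in enumerate(stripe_angles({2, 3})):
    set_stripe_angle(i, a)
```

The per-stripe granularity is what limits the angular resolution of the glare suppression to one RoI column (8 deg) in this design.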

4. Data processing

In view of the application to automatic driving, it is necessary to sort out the objects of interest from the 3D scene in front of the vehicle [13]. The LiDAR supplies the z-coordinate, while the 2D (x, y) scene requires a separate sensor. In this study, we used a CCD camera mounted inside the car on the windshield. The camera image is fed to the on-board computer through an image acquisition board. Two tasks are carried out:

  • (i) detect and recognize predefined objects in the 6 × 2 RoIs (regions of interest), each of 8 × 5 deg2, into which the field-of-view is subdivided;
  • (ii) determine the measures to be taken accordingly. The computer uses a CNN algorithm (the YOLOv4 network for feature extraction) to pinpoint and recognize objects in the 2D image [18].
Typical objects are: cars, trucks, bicycles and motorcycles, pedestrians, and animals. The actions taken are: braking and/or steering, and headlight management performed by the DMD.

The block scheme of the processing performed on the image supplied by the CCD and the distance data supplied by the LiDAR is reported in Fig. 7.

Fig. 7. Block scheme of the processing for the integrated smart headlight control and the fusion of detected target data and LiDAR data.


Here, after converting the image from RGB to HSV (hue, saturation, value) coordinates [10], we perform image processing (filtering, morphological image processing, image labeling functions, block-size limiting, and RoI area sorting) and object recognition by a CNN (using the YOLOv4 network for feature extraction). The output is used directly, to control the high-beam headlights through the DMD, and is merged with the LiDAR distance data for the classification of potential danger and the consequent actuation [14].
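One illustrative step of the pipeline above is the RGB-to-HSV conversion, shown here for a single pixel with Python's standard colorsys module; a real implementation would use a vectorized image library, and this is only a sketch of the transform, not the pipeline used in this work.

```python
# One step of the pipeline: converting an RGB pixel to HSV coordinates.
# colorsys (Python stdlib) implements the standard hexcone transform;
# a production pipeline would apply it to the whole frame at once.

import colorsys

def rgb_to_hsv_pixel(r, g, b):
    """RGB channels in 0..255 -> (hue in degrees, saturation, value in 0..1)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

# Pure red maps to hue 0 deg with full saturation and value.
hue, sat, val = rgb_to_hsv_pixel(255, 0, 0)
```

Working in HSV decouples chromatic content from brightness, which makes the subsequent filtering and labeling steps less sensitive to the headlight illumination level.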

As an example, the detection rate of a human-size object of 0.5 average surface diffusivity, placed at a 20-m distance, has been measured to be 88% in the dark and 85% under daylight illumination.

Of course, data fusion will take different forms according to the requisites and specific functionality of the intended assisted/autonomous driving. In this study, we don't claim to have reached a final result; rather, we wish to show that the optical parts embedded in the laser headlight module are sufficient and adequate to supply the processing unit with the data necessary for assisted or autonomous driving.

An example of the data fusion obtained in this study is reported in Fig. 8. On the CCD image displayed on the windshield, a grid of 6 × 2 RoIs is superposed. The processing computer recognizes cars and motorcycles in the RoIs, and the fusion with the LiDAR data of these RoIs tells us that there is one danger situation (red panel) and two attention situations (blue panels).
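The per-RoI labeling rule suggested by Fig. 8 can be sketched as a small decision function combining the detected class and the LiDAR distance. The 15-m threshold and the class weighting below are assumptions for illustration; the article does not publish the actual decision rule.

```python
# Sketch of the per-RoI fusion rule suggested by Fig. 8: a detected object
# class plus its LiDAR distance yields a danger label / display color.
# The 15-m threshold and the vulnerable-user class set are assumptions.

DANGER_CLASSES = {"pedestrian", "motorcycle", "bicycle"}  # vulnerable users

def roi_label(obj_class, distance_m):
    """Return 'danger' (red), 'attention' (blue) or 'clear' for one RoI."""
    if obj_class is None or distance_m is None:
        return "clear"          # nothing detected, or no range return
    if obj_class in DANGER_CLASSES or distance_m < 15.0:
        return "danger"         # red panel
    return "attention"          # blue panel
```

In the scene of Fig. 8, this rule reproduces the displayed labels: the crossing motorcycle is flagged as danger, and the two preceding cars at larger distance as attention.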

Fig. 8. Exemplary display of the processed data: the CCD image is segmented in 6 × 2 Regions of Interest (RoI) each searched for objects recognition, and the LiDAR supplies the object distance for deciding the action to be taken. Here, two preceding cars (blue panels) and a crossing motorcycle (red panel) are detected and identified as attention and danger, respectively.


5. Conclusions

In this study, we have illustrated a smart laser headlight module embedding a LiDAR that, together with a CCD camera and image processing and recognition, constitutes the basis for autonomous driving.

We have fabricated headlights that include a unique glass phosphor converter layer, in connection with a blue laser diode (for the high beam) and LEDs (for the low beam). The phosphor exhibits excellent thermal stability and can be manufactured at low production cost.

The resulting high-beam and low-beam headlights pass the ECE R112 Class B regulation.

Merging the LiDAR distance data with the 2D image taken by the CCD camera is performed in the on-board computer working with a CNN algorithm.

The output result is displayed on the windshield, with the field-of-view subdivided into 6 × 2 regions of interest (RoI), each of 5-deg height and 8-deg width. As a first step of data fusion, when the on-board computer detects a situation of danger in a particular RoI, that RoI is displayed in a specific color.

The upper and lower strings of RoIs are illuminated by the high- and low-beam headlights, respectively.

Headlight management is performed by the DMD (digital micromirror device), which deflects away the 8 × 5 deg2 stripes of the illuminating high beam corresponding to an individual RoI.

The recognition rate of human-size objects at a 20-m distance is evaluated to be >85%, and further study of the recognition algorithm may improve the result to 95%.

Funding

Ministry of Science and Technology, Taiwan (109-2218-E-005-012) and Ministry of Education, Taiwan (Higher Education Sprout Project).

Acknowledgments

The authors wish to thank Han Pin for providing the pictures of Figs. 3(b) and 4, and Yeong-Kang Lai for providing Fig. 8.

Disclosures

The authors declare no conflicts of interest.

Data availability

The data that support the findings of this study are available within the article.

References

1. S. Donati, Electrooptical Instrumentation (Prentice Hall, 2004).

2. S. Donati, G. Martini, Z. Pei, and W.-H. Cheng, “Analysis of timing errors in time-of-flight LiDAR using APDs and SPADs Receivers,” IEEE J. Quantum Electron. 57(1), 1–8 (2021). [CrossRef]  

3. IEC Laser Safety Standard 60825-1, 3rd edition (2014).

4. R. Lange and P. Seitz, “Solid-state time-of-flight range camera,” IEEE J. Quantum Electron. 37(3), 390–397 (2001). [CrossRef]  

5. Y. Li and J. Ibanez-Guzman, “MIMO radar for advanced driver-assistance systems and autonomous driving: advantages and challenges,” IEEE Signal Process. Mag. 37(4), 50–61 (2020). [CrossRef]  .

6. B. Schwartz, “Lidar: mapping the world in 3D,” Nat. Photonics 4(7), 429–430 (2010). [CrossRef]  

7. Leddar V8, Medium Field of View, LeddarTech Catalogue.

8. J. O’Neill, W. T. Moore, K. Williams, and R. Bruce, “Scanning system for LiDAR,” U.S. Patent 8 072 663B2, (June 8, 2011).

9. N. Druml, I. Maksymova, T. Thomas, D. Lierop, M. Hennecke, and A. Foroutan, “1D MEMS micro-scanning LiDAR,” Proc. 9th Int. Conf. Sensor Device Technologies and Applications, Venice, Italy, 2018.

10. S. Donati, Photodetectors, 2nd ed. (J. Wiley and IEEE Press, 2021).

11. Y.-P. Chang, J.-K. Chang, H.-A. Chen, S.-H. Chang, C.-N. Liu, P. Han, and W. H. Cheng, “An advanced headlight module employing highly reliable glass phosphor,” Opt. Express 27(3), 1808–1815 (2019). [CrossRef]  

12. C. C. Hung, Y. C. Fang, M. S. Huang, B. R. Hsueh, S. F. Wang, B. W. Wu, W. C. Lai, and Y.-L. Chen, “Optical design of automotive headlight system incorporating digital micromirror device,” Appl. Opt. 49(22), 4182–4187 (2010). [CrossRef]

13. C.-N. Liu, Y. P. Cheng, H. K. Shih, H. Pin, K. Li, Z. Pei, S. Donati, and W.-H. Cheng, “LiDAR embedded smart laser headlight module using a single digital micromirror device for autonomous drive,” Proc. CLEO 2020, San Jose, May 11-15 (2020), paper ATu3T.2.

14. Y.P. Chang, C.N. Liu, Z. Pei, S.M. Lee, Y.K. Lai, P. Han, H.K. Shih, and W.H. Cheng, “New scheme of LiDAR embedded smart laser headlight for autonomous vehicles,” Opt. Express 27(20), A1481–A1489 (2019). [CrossRef]  

15. Y. P. Cheng, J. K. Chang, W. C. Cheng, Y. Y. Kuo, C. N. Liu, L. Y. Chen, and W. H. Cheng, “New scheme of a highly-reliable glass-based color wheel for next generation laser light engine,” Opt. Mater. Express 7(3), 1029–1034 (2017). [CrossRef]  

16. H. S. Shih, C. N. Liu, W. C. Cheng, and W. H. Cheng, “High color rendering index of 94 in white LEDs employing novel CaAlSiN3:Eu2+ and Lu3Al5O12:Ce3+co-doped phosphor-in-glass,” Opt. Express 28(19), 28218–28225 (2020). [CrossRef]  

17. Texas Instrument DLP A200PFP.

18. E. Capellier, F. Davoine, V. Cherfaoui, and Y. Li, “Evidential deep learning for arbitrary LIDAR object classification in the context of autonomous driving,” Proc. IEEE Intelligent Vehicles Symp., Paris, 1304–1311 (2019).
