Optica Publishing Group

Digital-optical computational imaging capable of end-point logic operations

Open Access

Abstract

In this study, digital-optical computational imaging is proposed for object data transmission with a capability to achieve end-point logic operations over free-space data transmission. The framework is regarded as an extension of computational imaging using digital-optical codes originally developed for digital optical computing. Spatial code patterns for optical logic operations are extended to digital-optical codes in the temporal and spectral domains. The physical form of the digital-optical codes is selected, as appropriate, for the situation in use, and different forms can be combined to increase the data-transmission bandwidth. The encoded signals are transferred over free space and decoded by a simple procedure on the destination device, thus enabling logic operations at the end-point of the data transmission. To utilize the benefits of digital processing, a data-transfer mode is introduced which assigns preprocessing for the signals to be encoded and the end-point processing. As a demonstration of the proposed method, an experimental testbed was constructed assuming data transmission from sensor nodes to a gateway device appearing in the Internet of Things. In the experiment, encrypted signals of the sensor nodes, which were encoded by spatial digital-optical codes on RGB channels, were captured as an image, and the original signals were retrieved correctly by an end-point exclusive OR operation.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Imaging is widely employed in a variety of tasks such as observation of objects, visualization of invisible signals, measurement of physical properties associated with samples, and so on. Among them, detailed and continuous observation of an object is an important task for monitoring of the target object. This task requires high selectivity of the target object from the background scene, as well as the ability to discriminate the target object from other targets. For this purpose, we have been investigating an effective method as an extension of computational imaging.

Computational imaging is a framework providing various imaging modalities based on a combination of optical encoding and computational decoding [1,2]. Owing to the combinatorial diversity of the encoding and decoding, highly functional and/or high-performance imaging is achieved. This technological field has two different historical backgrounds: one is computational photography and the other is optical computing. Light field cameras [3,4], which are important imagers in computational photography, are capable of capturing light-ray signals, which allows reconstruction of the object space after image capture. Extended depth-of-field techniques [5,6] are typical examples based on optical computing, in which optical signal processing is employed to encode the light signals, and computational decoding improves the imaging properties. Multi-aperture cameras such as TOMBO (thin observation module by bound optics) [7,8] have similar features to light field cameras, but a multi-aperture optical system is also considered a practical form of optical computing. In the field of computational imaging, a variety of methods have been developed on the basis of signal theory. For example, compressive sensing has been effectively utilized in functional imaging [9–12], and recently machine learning has become a powerful tool for enhancing the capabilities of computational imaging [13–17].

In this study, we propose an extended form of computational imaging under the concept of digital-optical computational imaging, enabling high selectivity and end-point logic operations for target object observation. Spatial code patterns developed for optical parallel logic operations [18] are employed as the optical codes in computational imaging. Owing to the artificial form of the codes, these optical codes can be clearly distinguished from the background signals. In addition, the optical logic operation feature enables us to perform end-point logic operations in the imaging task. Any binary function can be generated by a combination of the spatial codes encoded from the input signals [19–21]. Image processing at the observer (i.e., the end point) can achieve the same procedure as the original optical logic operation. Optical code patterns are extended to different physical forms, all of which are capable of logic operations according to the common principle called array logic [22]. To utilize the benefits of digital processing, a data-transfer mode is introduced by which the preprocessing of the signals to be encoded at the sender and the post-processing at the receiver are assigned. To demonstrate the features of the proposed framework, data transmission was performed assuming signal transmission from sensor nodes to a gateway device appearing in the Internet of Things (IoT) [23,24]. An experimental testbed was constructed to emulate an environment in which sensor data are collected from multiple sensor nodes (senders) to a gateway device (receiver). As an example of the end-point logic operation, encrypted data transfer was demonstrated. Experimental results clarify the effectiveness of the proposed framework.

2. Background

2.1 Computational imaging and optical computing

Figure 1 shows a schematic diagram of computational imaging consisting of optical encoding and computational decoding. Optical encoding converts the object information into appropriately formatted signals observable by an imaging device. After the optical signals are captured and converted to electronic signals, computational decoding retrieves the object information. Owing to the combinatorial diversity in the encode/decode scheme, a variety of imaging modalities can be achieved. Point spread function (PSF) engineering, light field separation, and multiplex imaging are efficient examples of optical encoding, and these are coupled with the corresponding computational decoding, such as deconvolution, light field rendering, and L1-norm minimization, respectively. These combinations enable extended depth-of-field [5], light field photography [3,4], and compressive imaging [9–12].

Fig. 1. Schematic diagram of computational imaging and examples of optical encoding and computational decoding for specific functions.

From the viewpoint of optical technology, optical computing should be employed in computational imaging to utilize the physical properties of light effectively. Optical computing was studied in the 1980s to explore the computational capabilities of light, with the aim of achieving high-performance computing [25,26]. In fact, the optical encoding used in computational imaging can be regarded as a form of optical computing [27]. An extended depth-of-field technique [5] is a good example: an optical phase retarder is employed to generate a focus-insensitive point spread function, the captured images are computationally restored, and the focusing depth is extended. Such phase modulation is categorized as analog optical computing, but a digital form of optical computing has also been studied extensively. Notable techniques, such as optical logic gates using spatial coding [18,28], were developed under the concept of digital optical computing, offering the potential to enhance the performance of computational imaging.

2.2 Optical shadow-casting logic

Spatial code patterns were designed as an efficient method for expressing binary data and have been effectively used in an optical implementation of parallel logic operations using shadow casting [18]. Figure 2 depicts the procedure of a logic operation for a binary input pair, $a$ and $b$, in shadow-casting logic. In the encoding process, any one of the spatial codes shown in the coding table is selected according to the values of the input data. Then the coded pattern is projected on a screen by an array of point light sources, so that the individual projections are overlapped with a lateral shift equal to half of the code width. By configuring the switching pattern of the point light sources, the result of any kind of binary logic function is generated at the center of the projected shadows.

Fig. 2. Procedure of shadow-casting logic for a single pixel pair.

The principle of shadow-casting logic is formalized by array logic [22], which is the basis of field programmable gate devices. The procedure consists of decomposition of the input values into the minterms and recombination of the decomposed signals for function generation. For two binary signals, $a$ and $b$, their minterms are $\overline {a}\overline {b}$, $\overline {a}b$, $a\overline {b}$ and $ab$, and a combination of the minterms generates any binary logic function as follows:

$$f(a,b) = \alpha\,\overline{a}\overline{b} + \beta\,\overline{a}b + \gamma\,a\overline{b} + \delta\,ab,$$
where $\alpha$, $\beta$, $\gamma$, and $\delta$ are binary variables for configuring the function. For example, when $\alpha =\delta =0$ and $\beta =\gamma =1$, $f(a,b)$ becomes $\overline {a}b + a\overline {b}$, which is an exclusive OR function, $a \oplus b$. As shown in Fig. 2, the same operation can be performed by optical shadow casting with switching of the point light sources. Note that the same operation is also achieved by duplication and lateral shifts of the code pattern, which is interpreted as a correlation of the source pattern and the code pattern. This operation is easily performed by image processing on the captured image.
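The minterm decomposition and recombination above can be sketched in a few lines of code. This is an illustrative simulation, not the paper's implementation; the helper names are my own.

```python
# Sketch of array logic (Eq. 1): any two-variable binary function is a
# sum of minterms selected by the configuration bits (alpha..delta).

def minterms(a: int, b: int) -> tuple:
    """Decompose inputs into the four minterms (a'b', a'b, ab', ab)."""
    na, nb = 1 - a, 1 - b
    return (na & nb, na & b, a & nb, a & b)

def array_logic(a: int, b: int, alpha: int, beta: int, gamma: int, delta: int) -> int:
    """Recombine minterms: f(a,b) = alpha*a'b' + beta*a'b + gamma*ab' + delta*ab."""
    cfg = (alpha, beta, gamma, delta)
    return int(any(c and m for c, m in zip(cfg, minterms(a, b))))

# alpha = delta = 0, beta = gamma = 1 yields the exclusive OR, a XOR b:
for a in (0, 1):
    for b in (0, 1):
        assert array_logic(a, b, 0, 1, 1, 0) == a ^ b
```

Switching the point light sources in Fig. 2 corresponds to changing the configuration bits, so the same loop with a different `cfg` generates any of the 16 two-variable functions.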

3. Methods

3.1 Digital-optical computational imaging

The fundamental idea of digital-optical computational imaging is the use of digital-optical code patterns for encoding the object signal and for performing logic operations at the signal-detection side. The digital-optical code patterns contribute to increased selectivity of the object signals and the ability to distinguish the target object from other objects. Because the code patterns are designed in advance and their shapes are known, high-performance selection and discrimination are achieved in the decoding process. It is also possible to prepare code patterns that are clearly distinguishable from the other objects found in the observation environment. From the point of view of processing flow, if the object signal is obtained in a digital format, successive processing after decoding can be performed seamlessly. For example, cooperative processing between the encoder (sender) and decoder (receiver) is possible, which allows designed data-transfer protocols to be employed, such as those for secure data transmission. As a result, high-performance and flexible imaging tasks are possible in this framework.

Note that, at least in this paper, the decoding process of the digital-optical computational imaging is assumed to be performed by digital processing. This implies that the physical processes are not utilized as efficiently as in optical computing. Even for such digital decoding, the performance achieved by current imaging devices and image processors is sufficiently high. In addition, specially designed devices, e.g. a TOMBO camera [7], are available to accelerate the processing throughput with compact hardware. Therefore, an electronic implementation is a practical and effective option for embodying digital-optical computational imaging.

3.2 Digital-optical codes and extensions

Originally, shadow-casting logic was proposed as a method for parallel logic operations performed on all of the pixel pairs composing two images. In the framework of digital-optical computational imaging, this scheme is broken down into a single pair or several pairs of signals, as shown in Fig. 3. Digital-optical code patterns are generated from a set of signals to be transferred. These patterns are assumed to be spatially distributed in space and captured as an image. The captured image is processed to retrieve the result of the logic operation on the original signals. Owing to the artificial shape of the spatial codes, high selectivity and discriminability are expected. Since the encoding process is a kind of digital processing, data manipulation can be combined with the process. In addition, the result of the logic operation is obtained at the output plane (i.e. the end-point) after light propagation, so that the propagating signals can be encrypted during transmission and retrieved by the decoding process at the end point. These properties and functionality are notable features of digital-optical computational imaging based on the spatial codes.

Fig. 3. Encoding and decoding processes in digital-optical computational imaging.

Applying the principle of array logic, the spatial codes can be extended to other physical implementations, such as temporal and spectral ones. Figure 4 shows possible digital-optical codes and examples of function generation by different physical implementations. Encoding is achieved by selection of any one of four code patterns corresponding to the minterms of $a$ and $b$. Temporal codes use the phase slot of a clocked signal [29], and the spectral codes employ a set of wavelength channels. Any logic function can be generated by selective combination of the codes corresponding to the four minterms.

Fig. 4. Spatial code and its extension to temporal and spectral codes.

These digital-optical codes have different properties and suitability for given conditions, as summarized in Table 1. For example, spatial codes can be captured by a single shot, but geometrical deformation occurs depending on the distance and orientation of the code. Temporal codes do not cause such geometrical deformation, but they require multiple observations. Although spectral codes do not suffer from these problems, sophisticated spectral separation is required for effective implementation. Owing to the orthogonality in the space, time, and wavelength domains, these digital-optical codes can be combined, so that high-density multiplexed encoding is possible. Each code has information of two signals, $a$ and $b$, which can be extended for multiple input pairs in the space, time, and wavelength domains, as shown in Fig. 3.

Table 1. Properties of spatial, temporal, and spectral codes.

3.3 Data-transfer mode for functional data transmission

To utilize the capability of logic operations associated with the digital-optical codes, a data-transfer mode is introduced for the data transmission from the encoder (sender) to the image observer (receiver), as shown in Fig. 5. Although an optical implementation is a common option for encoding in computational imaging, use of electronic devices is a practical solution in digital-optical computational imaging. In particular, the encoding process for the codes combined with spatial, temporal, and/or spectral forms requires complicated processing, so that an electronic implementation is reasonable and effective for achieving compact hardware. A pair of signals, $x$ and $y$, is preprocessed by binary logic operations $F_a$ and $F_b$, respectively, and the resultant signals are encoded into a digital-optical code. The sender emits the digital-optical code over free space, and the receiver captures the code and performs a logic operation $F_p$ according to the process described in Sec. 2.2. Note that the logic operation is performed at the end-point of the data transmission and that security of the transmission is ensured. For convenience, three hexadecimal digits, denoting two-variable binary logic functions shown in Table 2, are used to identify the data-transfer mode, e.g. $F_aF_bF_p$. The symbol $*$ is a wildcard indicating an arbitrary function.
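The hexadecimal naming of the functions can be made concrete in code. The bit ordering below, i.e. reading the truth table $f(0,0)f(0,1)f(1,0)f(1,1)$ as one hex digit, is my assumption about Table 2, but it makes the modes discussed later consistent (6 = exclusive OR, 1 = AND, 7 = OR, 3 and 5 = pass-through of $a$ and $b$).

```python
# A two-variable binary function is a 4-bit truth table, so the 16
# functions are named by the hex digits 0x0..0xF (assumed bit ordering:
# MSB = f(0,0), LSB = f(1,1)).

def apply(code: int, a: int, b: int) -> int:
    """Evaluate the two-variable function named by a hex digit."""
    return (code >> (3 - (2 * a + b))) & 1

assert apply(0x6, 1, 0) == 1   # 6 = exclusive OR
assert apply(0x1, 1, 1) == 1   # 1 = AND
assert apply(0x7, 0, 0) == 0   # 7 = OR
assert apply(0x3, 0, 1) == 0   # 3 = pass-through of a
assert apply(0x5, 0, 1) == 1   # 5 = pass-through of b
```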

Fig. 5. Data-transfer mode for functional data transmission over free space.

Table 2. Function codes indicating two-variable binary logic.

Table 3 depicts some useful data-transfer modes. Two signals to be transferred, $x$ and $y$, are preprocessed and assigned to the input pairs of digital-optical codes, $a$ and $b$. Mode 35* indicates that the digital-optical code is encoded from $x$ and $y$ without preprocessing and that an arbitrary logic operation is performed. As a result, if $F_p=x$, the signal $x$ is retrieved. Mode 146 prepares $xy$ and $\overline {x}y$ for the input pair of digital-optical codes, and $(xy)\oplus (\overline {x}y)$ is performed at the end point. Because $(xy)\oplus (\overline {x}y)=(x\oplus \overline {x})y$ is equal to $y$, we can retrieve the signal $y$ at the end point. Similarly, mode 276 provides the same function. Note that the raw signals, $x$ and $y$, are not displayed directly in the digital-optical codes, and therefore, secure data transfer is performed.

Table 3. Examples of useful data-transfer modes.

Mode 176 provides more secure transmission. The processed result, $(xy)\oplus (x+y)$, is an intermediate signal with ambiguity. To retrieve the original data, an additional exclusive OR is required on the intermediate signal and the signal $x$. Because the operation $\{(xy)\oplus (x+y)\}\oplus x$ is equal to $y$, the correct $y$ is retrieved only if the correct $x$ is provided. In this procedure, the signal $x$ is used as a common key of encryption, which is assumed to be shared with the sender and receiver before transmission. Note that the data-transfer mode is explained for binary signals, $x$ and $y$, but the same operation is applied to multiple sets of signals, namely, a message. As a result, a set of messages is transferred simultaneously with the functionality of the data-transfer mode.
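The mode identities above can be checked exhaustively. The sketch assumes the hex truth-table naming of the function codes (MSB = $f(0,0)$); under that assumption, modes 146 and 276 both retrieve $y$ directly, while mode 176 retrieves $y$ only when the correct key $x$ is applied at the end point.

```python
# Exhaustive check of the data-transfer modes in Table 3 (mode F_a F_b F_p).

def transfer(fa: int, fb: int, fp: int, x: int, y: int) -> int:
    def f(code, p, q):  # evaluate the hex-named two-variable function
        return (code >> (3 - (2 * p + q))) & 1
    a, b = f(fa, x, y), f(fb, x, y)   # preprocessing at the sender
    return f(fp, a, b)                # end-point logic at the receiver

for x in (0, 1):
    for y in (0, 1):
        assert transfer(0x1, 0x4, 0x6, x, y) == y        # mode 146: (xy) XOR (x'y) = y
        assert transfer(0x2, 0x7, 0x6, x, y) == y        # mode 276: (xy') XOR (x+y) = y
        assert transfer(0x1, 0x7, 0x6, x, y) ^ x == y    # mode 176 + key x decrypts y
```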

3.4 Message transmission based on data-transfer mode

A typical procedure of message transmission using the functionality of the data-transfer mode is shown in Fig. 6. In this example, it is assumed that a message is encrypted with an encryption key and transferred with data-transfer mode 146, 276, or 176. At the sender, the encryption key and the message are divided into bit signals, and each pair of bits is processed by $F_a$ and $F_b$. The resultant signals are encoded into one of the digital-optical codes and emitted to the receiver. The digital-optical codes are captured by the receiver after free-space transmission. The captured signals are processed to extract the digital-optical codes, and the logic operation $F_p$ and optional post-processing are performed. For modes 146 and 276, the message is retrieved just by an exclusive OR operation ($F_p=6$) without post-processing, while mode 176 requires an additional exclusive OR operation with the encryption key for decryption.
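The full bit-level pipeline of Fig. 6 for mode 176 can be sketched end to end. The helper names are illustrative; the 8-bit key value matches the one used in the experiment of Sec. 4.2.

```python
# End-to-end sketch of message transmission in mode 176:
# sender preprocessing, code transmission, and end-point decryption.

def send(key_bits, msg_bits):
    # F_a = AND (code 1), F_b = OR (code 7); each bit pair becomes one code (a, b)
    return [(k & m, k | m) for k, m in zip(key_bits, msg_bits)]

def receive(codes, key_bits):
    # F_p = XOR (code 6), then the extra XOR with the shared key (mode 176)
    return [(a ^ b) ^ k for (a, b), k in zip(codes, key_bits)]

key = [1, 0, 1, 0, 1, 0, 0, 1]   # 8-bit key 10101001 from the experiment
msg = [0, 1, 1, 0, 0, 1, 0, 1]   # an example message byte

assert receive(send(key, msg), key) == msg          # correct key decrypts
assert receive(send(key, msg), [0] * 8) != msg      # wrong key fails
```

Note that the transmitted pairs $(k \wedge m,\, k \vee m)$ never expose the raw message bits directly, which is the security property claimed for mode 176.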

Fig. 6. Schematic diagram of message transmission over free space using the functionality of the data-transfer mode.

4. Results

4.1 Data transmission in Internet of Things

To demonstrate the features of the proposed framework, digital-optical codes are applied to data transmission from sensor nodes to a gateway device appearing in the IoT. Although radio-frequency communication such as Bluetooth is suitable for this purpose, free-space light communication including the proposed method has a number of advantages, including localization of the broadcast area, a low risk of electromagnetic interference, and easy determination of sensor node location. Figure 7 shows a schematic diagram of the assumed data transmission. Multiple sensor nodes are distributed in an observable space and are ready to report their sensing signals. The gateway device is a parent node that requests the sensor nodes to transfer their signals and collects the responses. The responses are in the form of digital-optical codes, and the gateway device captures the codes as an image and performs the decode processing. Figure 5 corresponds to data transmission from a single sensor node to the gateway device.

Fig. 7. Schematic diagram of data transmission in IoT device communication based on digital-optical computational imaging.

In the demonstration, the following operations are performed: 1) the gateway device broadcasts a query with a key, $x$, to distributed sensor nodes; 2) each sensor node emits digital-optical codes generated from the key, $x$, and the sensing data, $y$; 3) the gateway device captures an image containing the digital-optical codes; and 4) the digital-optical codes are extracted and decoded by the process of function generation, as shown in Fig. 4.

The captured image is decoded to generate the result of logic operations on the transferred signals. The advantages of the framework are simultaneous detection of multiple sensor nodes with their locations, high selectivity and discrimination of the digital-optical codes, and processing flexibility due to the end-point logic operation. In practice, the captured spatial code patterns should be normalized to compensate for deformation and to adjust the scale variance caused by the observation geometry. To avoid such processing, use of temporal and/or spectral codes is an effective solution. In contrast, the scale and deformation of the spatial codes reflect the distance and orientation of the sensor node, so that we can extract this information from the captured image. Flexibility in code selection is a notable benefit of multiple physical forms of digital-optical codes.

4.2 Experimental demonstration

An experimental testbed was constructed for the IoT data transfer based on digital-optical computational imaging. A sensor node was prepared using a Raspberry Pi 3 Model B equipped with a Sense HAT, which has environment sensors for temperature, humidity, and atmospheric pressure, and an RGB light-emitting diode (LED) panel with 8 $\times$ 8 elements [30]. A gateway device was emulated by a laptop computer connected to a CMOS color camera (Edmund Optics Inc., EO-18112) with 4912 $\times$ 3684 pixels. Query signals were broadcast from the gateway device by Wi-Fi, and the individual sensor nodes responded to the queries; for example, the sensor nodes displayed their measured values in the form of digital-optical codes.

To implement the data transmission from the individual sensor nodes, the transfer signals can be selected from the options of the space, spectral, and temporal domains. In the experiment, spatial codes were mainly used, and the spectral domain was adopted for signal multiplexing. The temporal domain was not used for encoding but for a temporal sequence of the signals, to simplify the demonstration. Assuming an IoT application, a set of data consisting of the group ID, node ID, time, and sensor values was assigned to 48-bit signals, as described in Supplement 1. The group ID was set to 010, and the node IDs were assigned from 001 through 101 as 3-bit binary numbers. The sensor signals were output as 6-bit binary numbers, so two zeros were padded onto the two most significant bits of the 8-bit numbers. To demonstrate secure transmission, an 8-bit encryption key, 10101001, was prepared and used commonly for the upper and lower bytes of the RGB channels. Of course, a more complicated key assignment is possible, but it is not needed for the purpose of this demonstration. The data assignment on the RGB channels of the LED array on the sensor node is summarized in Table 4 and Fig. 8.
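The bit packing described above can be sketched as follows. The exact 48-bit field layout is given in Supplement 1, so the field order here is only an assumption for illustration; the bit widths, padding, and key value follow the text.

```python
# Sketch of the bit assignment: 3-bit group ID, 3-bit node ID, and 6-bit
# sensor values zero-padded to 8-bit bytes, plus the shared 8-bit key.

def to_bits(value: int, width: int) -> list:
    """MSB-first bit list of a fixed width (zero-padded)."""
    return [(value >> i) & 1 for i in range(width - 1, -1, -1)]

group_id = to_bits(0b010, 3)            # group ID fixed to 010
node_id = to_bits(0b001, 3)             # node IDs 001 through 101
sensor = to_bits(0b110101, 6)           # an example 6-bit sensor reading
sensor_byte = [0, 0] + sensor           # two zeros padded to the MSBs

key = to_bits(0b10101001, 8)            # the shared 8-bit encryption key
assert len(sensor_byte) == 8 and sensor_byte[:2] == [0, 0]
```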

Fig. 8. Bit assignment on LED array. $2 \times 2$ LEDs display any one of the spatial codes shown in Fig. 2.

Table 4. Data assignment on RGB channels of the LED array.

For a demonstration of functional signal transmission, data-transfer mode 146 was used for signal transmission from the sensor nodes to the gateway device. Figure 9 shows pictures of the experimental setup and a captured image of the digital-optical codes displayed on the sensor nodes located at five different positions. The locations of the sensor nodes are easily identified from the captured image. The digital-optical codes were extracted from the image and normalized by image processing. Figure 10 depicts an extracted digital-optical code of node 1 and the digital-optical codes of different colors retrieved by hand. Note that some colors are made by a combination of the blue, green, and red channels. Figure 11 illustrates the retrieved digital-optical codes, the correlated results, and the decoded signals on the RGB channels. Correlation is an operation equivalent to function generation, as shown in Fig. 2. The decoded results were obtained by sampling at the specific pixels indicated by yellow frames. For the exclusive OR operation, the spatial code was duplicated, shifted diagonally, and overlapped.
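The duplicate-shift-overlap decoding for the exclusive OR can be sketched on a toy 2 × 2 code. The one-bright-pixel-per-minterm layout is an illustrative assumption; the point is that the two XOR minterms occupy diagonal positions, so a diagonal shift maps both onto the same sampling pixel.

```python
# Sketch of the end-point XOR by correlation on a 2x2 spatial code:
# duplicate the code, shift it diagonally, overlap (OR), sample one pixel.

def spatial_code(a: int, b: int):
    """2x2 code with one bright pixel at the minterm position (row=a, col=b)."""
    grid = [[0, 0], [0, 0]]
    grid[a][b] = 1
    return grid

def xor_decode(code) -> int:
    """Duplicate, shift down-left by one pixel, overlap, sample pixel (1, 0)."""
    shifted_contrib = code[0][1]   # pixel (0,1) lands on (1,0) after the shift
    return code[1][0] | shifted_contrib

for a in (0, 1):
    for b in (0, 1):
        assert xor_decode(spatial_code(a, b)) == a ^ b
```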

Fig. 9. (a) A view of the experimental setup and (b) a captured image of the digital-optical codes displayed on the sensor nodes located at five different positions.

Fig. 10. A digital-optical code of sensor node 1 extracted from the captured images and the digital-optical codes of different colors retrieved by hand.

Fig. 11. Retrieved digital-optical codes, correlated results, and decoded signals of sensor node 1 on RGB channels.

To verify the decoding process, the encoded and decoded signals on the G channel of sensor node 1 are summarized in Table 5. $x$ is assigned to the encryption key, and $y$ is the sensor data. $F_a$ and $F_b$ are encoded to spatial codes, both of which differ from $x$ and $y$. $F_p$ is performed after capturing the image, i.e., at the end point of the signal transmission, so that the sensor data $y$ are retrieved correctly. It was confirmed that the retrieved signals, including those on the other channels, were identical to the signals stored in the sensor node. By repeating the same procedure, the sensor data can be captured from the distributed sensor nodes continuously.

Table 5. Encoded and decoded signals on G channel of sensor node 1.

5. Conclusions

In this study, digital-optical computational imaging was proposed for object signal transmission with the capability of performing end-point logic operations over the observation space. The framework is regarded as an extension of computational imaging using digital-optical codes originally developed for digital optical computing. As options for the physical implementation, the space, time, and spectral domains can be selected to meet various requirements, including increasing the data-transmission bandwidth. The end-point logic is achieved by the same procedure as the original optical logic operation. The data-transfer modes, which assign the preprocessing of the signals to be encoded and the end-point processing, extend the capabilities of the proposed framework. As a simple demonstration, encrypted signal transmission in the IoT was performed on an experimental testbed. Although the demonstration exercised only a limited subset of the features, it illustrates the promising extensibility of digital-optical computational imaging.

Funding

Japan Society for the Promotion of Science (JP16KT0105, JP20H02657, JP20H05890).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. M. Levoy, “Light Fields and computational photography,” IEEE Comput. 39(8), 46–55 (2006). [CrossRef]  

2. J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photonics 10(2), 409–483 (2018). [CrossRef]  

3. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light Field Photography with a Hand-Held Plenoptic Camera,” Stanford Tech Report CTSR 2, 1–11 (2005).

4. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in 2009 IEEE International Conference on Computational Photography (ICCP), (2009), pp. 1–8.

5. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34(11), 1859 (1995). [CrossRef]  

6. Y. Takahashi and S. Komatsu, “Optimized free-form phase mask for extension of depth of field in wavefront-coded imaging,” Opt. Lett. 33(13), 1515–1517 (2008). [CrossRef]  

7. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40(11), 1806–1813 (2001). [CrossRef]  

8. J. Tanida, “Multi-aperture optics as a universal platform for computational imaging,” Opt. Rev. 23(5), 859–864 (2016). [CrossRef]  

9. R. Horisaki and J. Tanida, “Multi-channel data acquisition using multiplexed imaging with spatial encoding,” Opt. Express 18(22), 23041–23053 (2010). [CrossRef]  

10. R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, “Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition,” Opt. Express 18(18), 19367–19378 (2010). [CrossRef]  

11. R. Horisaki, Y. Ogura, M. Aino, and J. Tanida, “Single-shot phase imaging with a coded aperture,” Opt. Lett. 39(22), 6466 (2014). [CrossRef]  

12. R. Horisaki, T. Kojima, K. Matsushima, and J. Tanida, “Subpixel reconstruction for single-shot phase imaging with coded diffraction,” Appl. Opt. 56(27), 7642 (2017). [CrossRef]  

13. T. A. Ando, R. Horisaki, and J. Tanida, “Three-dimensional imaging through scattering media using three-dimensionally coded pattern projection,” Appl. Opt. 54(24), 7316–7322 (2015). [CrossRef]  

14. R. Horisaki, R. Takagi, and J. Tanida, “Learning-based single-shot superresolution in diffractive imaging,” Appl. Opt. 56(32), 8896–8901 (2017). [CrossRef]  

15. R. Horisaki, R. Takagi, and J. Tanida, “Deep-learning-generated holography,” Appl. Opt. 57(14), 3859–3863 (2018). [CrossRef]  

16. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of Deep Learning for Computational Imaging,” Optica 6(8), 921–943 (2019). [CrossRef]  

17. K. Yamazaki, R. Horisaki, and J. Tanida, “Imaging through scattering media based on semi-supervised learning,” Appl. Opt. 59(31), 9850–9854 (2020). [CrossRef]  

18. J. Tanida and Y. Ichioka, “Optical logic array processor using shadowgrams,” J. Opt. Soc. Am. 73(6), 800–809 (1983). [CrossRef]  

19. J. Tanida and Y. Ichioka, “Optical-logic-array processor using shadowgrams. III. Parallel neighborhood operations and an architecture of an optical digital-computing system,” J. Opt. Soc. Am. A 2(8), 1245–1253 (1985). [CrossRef]  

20. J. Tanida and Y. Ichioka, “Programming of optical array logic. 1: Image data processing,” Appl. Opt. 27(14), 2926–2930 (1988). [CrossRef]  

21. J. Tanida, M. Fukui, and Y. Ichioka, “Programming of optical array logic. 2: Numerical data processing based on pattern logic,” Appl. Opt. 27(14), 2931–2939 (1988). [CrossRef]  

22. H. Fleisher and L. I. Maissel, “An introduction to array logic,” IBM J. Res. Dev. 19(2), 98–109 (1975). [CrossRef]  

23. L. Atzori, A. Iera, and G. Morabito, “The internet of things: A survey,” Comput. Networks 54(15), 2787–2805 (2010). [CrossRef]  

24. J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, “Internet of Things (IoT): A vision, architectural elements, and future directions,” Future Gener. Comput. Syst. 29(7), 1645–1660 (2013). [CrossRef]  

25. D. G. Feitelson, Optical Computing (The MIT press, Cambridge, MA, 1988).

26. R. Athale and D. Psaltis, “Optical Computing Past and Future,” Opt. Photonics News 27(6), 32–39 (2016). [CrossRef]  

27. J. Tanida, “Computational imaging demands a redefinition of optical computing,” Jpn. J. Appl. Phys. 57(9S1), 09SA01 (2018). [CrossRef]  

28. H. Bartelt, A. W. Lohmann, and E. E. Sicre, “Optical logical processing in parallel with theta modulation,” J. Opt. Soc. Am. A 1(9), 944–951 (1984). [CrossRef]  

29. Y. Hayasaki, M. Mori, T. Yatagai, and N. Nishida, “Simplification of space-variant parallel logic operations using the temporal method,” Opt. Rev. 4(2), 305–308 (1997). [CrossRef]  

30. E. Upton and G. Halfacree, Raspberry Pi User Guide (Wiley, 2014).
