
Optics inspired logic architecture

Open Access

Abstract

Conventional architectures for the implementation of Boolean logic are based on a network of bistable elements assembled to realize cascades of simple Boolean logic gates. Since each such gate has two input signals and only one output signal, such architectures are fundamentally dissipative in information and energy. Their serial nature also induces a latency in the processing time. In this paper we present a new, principally non-dissipative digital logic architecture which mitigates the above impediments. Unlike traditional computing architectures, the proposed architecture involves a distributed and parallel input scheme where logical functions are evaluated at the speed of light. The system is based on digital logic vectors rather than the Boolean scalars of electronic logic. The architecture employs a novel conception of cascading which utilizes the strengths of both optics and electronics while avoiding their weaknesses. It is inherently non-dissipative, respects the linear nature of interactions in pure optics, and harnesses the control advantages of electrons without reducing the speed advantages of optics. This new logic paradigm was specially developed with optical implementation in mind. However, it is suitable for other implementations as well, including conventional electronic devices.

©2007 Optical Society of America

1. Introduction

The history of optical signal processing and computing can be divided into two main periods. The first saw the rapid growth of the field and its subsequent decline, while the second is a more conservative but secure revitalization that is leading toward a much brighter future. As often happens in science and technology, the evolution of the field has not necessarily followed the track anticipated by its initiators.

The modern era of optical signal processing and computing started with the introduction of coherent optical processors by Cutrona et al. and VanderLugt [1, 2]. These coherent processors exploited the main attributes of optics, namely its massive parallelism and speed. In particular, the success of the VanderLugt correlator raised much interest, and extended applications were anticipated. As a result, intensive research efforts were started and continued for about two decades. Unfortunately, the attributes of optics were offset by severe technical difficulties, the lack of proper devices, and their inflexibility. Consequently, researchers turned toward digital computing and attempted to replace electrons with photons, an approach that was doomed by the fundamental differences between the behavior of light and electrons.

The fact that nature exploits optics so extensively indicates that optics must have some attributes that cannot be matched by other media. One of these attributes is the capability of photons to solve the wave equation with any given set of boundary conditions. Moreover, this wave equation is solved almost instantaneously, in parallel, and with no expenditure of energy. Energy is dissipated only at the moment when the final result is detected. In contrast, digital computers dissipate energy at each intermediate step of a calculation even when those intermediate results are of no interest. As pointed out by Caulfield and Shamir in 1990 [3], this attribute alone is an adequate incentive to pursue optical signal processing. Optical Fredkin gates and gate arrays [4–7] were also introduced as part of the effort to exploit the non-dissipative nature of processing with light.

Optical signal processing and computing has thus far been limited to certain narrow niches. Apart from technological difficulties, the main reason that has prevented a wider applicability is the relative inflexibility of optical architectures as compared to their electronic counterparts. In this article we introduce a novel structure for digital computing networks specifically designed to exploit the attributes of optics. The new concepts introduced here enable the implementation of all combinational Boolean operations in a reversible way without the need to dump information or interchange signal and control inputs as required by Fredkin’s approach.

In the next section we present an overview of the structure; Section 3 then discusses its advantages and limitations. Section 4 is devoted to an overview of various technological aspects of implementing optical gates and their assembly into complete computing networks. Section 5 discusses some prospects for the future and is followed by concluding remarks.

2. Directed logic: an Overview

We believe that the failure of optical computation has been due, in large part, to the fact that researchers have primarily attempted to make optics behave like electronics. That is, optical computation researchers have adopted the paradigm of logic used in electronics, but, as indicated above, this paradigm does not recognize the inherent differences between optics and electronics. In this section we introduce a new logical paradigm, “Directed Logic”, which is specially adapted to the features and promise of optics, and describe its main features.

Directed logic circuits are networks of simple elements. The primary input to the circuit is a vector, not the traditional Boolean scalar. Each element performs a specific operation on its input vector. The cumulative effect of these operations yields the value of the function in question. Locally, each element performs one of two operations on its input vector. Which operation is performed is determined by a separate Boolean input to the element. The output vector is passed on either in part or in whole to subsequent elements in the network, each of which performs an operation determined by its Boolean input. In short, directed logic architecture performs distributed parallel computation of a function and its negation, using a computational method based on vector operations.

The most obvious difference between directed logic and traditional logic is the lack of anything corresponding to a Boolean logic gate. Computation in a directed logic circuit is performed by a network of elements, each of which performs a simple switching operation. The operation of each element is independent of the operation of the other elements in the circuit. Computation of the logical function is performed only by the circuit as a whole; one cannot in general identify portions of the circuit as computing sub-functions. A second noticeable feature is that a directed logic circuit inherently computes both a function and its negation simultaneously. Thus any circuit that computes AND simultaneously computes NAND.

Directed logic operates on vectors represented as ordered pairs of Boolean values. Thus (0,0), (0,1), and (1,0) are admissible values. It will turn out that (1,1) is not admissible, corresponding in some ways to the notion of contradiction. There are two operators in directed logic, which we call “pass” and “switch”. Both operators are monadic: they take only one argument. Pass (hereafter P) is the identity operation; its output is the same as its input. Switch (hereafter S) reverses its input vector. Thus S(1,0) = P(0,1) = (0,1), S(0,1) = P(1,0) = (1,0), and S(0,0) = P(0,0) = (0,0). If we interpret (1,0) as “True” and (0,1) as “False”, then sequences of S and P can be used to calculate certain Boolean functions on given arguments. Trivially, S calculates Boolean negation and P calculates Boolean identity. Less trivially, we can compute Boolean XOR and XNOR with a string of S and P elements, as illustrated in Figure 1 and described in the following paragraph.

Suppose we want to calculate XOR(v₁, v₂, …, vₙ). This is computed by the string of elements E₁, E₂, …, Eₙ, where Eᵢ is P if vᵢ = 0 and S if vᵢ = 1. Importantly, this means that the nature of each element is determined by the value of the corresponding variable. In Figure 1 this control is represented as a separate input at the top of each element. E₁ receives the input (1,0), and thereafter the output of each element is the input to the next.
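
The chain just described is easy to express in a few lines of code. The sketch below is a minimal illustration assuming the conventions of the text: element i acts as S when its control bit is 1 and as P when it is 0, with the vector (1,0) threaded through the chain. The Python function names are ours, not the authors' notation.

```python
# A minimal sketch of the XOR/XNOR chain of Fig. 1, under the conventions of the text.

def P(v):
    return v                        # pass: identity on an ordered pair

def S(v):
    return (v[1], v[0])             # switch: reverse the pair

def xor_xnor_chain(controls):
    vec = (1, 0)                    # input to the first element, as in the text
    for bit in controls:            # each control bit selects that element's operation
        vec = S(vec) if bit else P(vec)
    return vec

# The marker ends on its original line when an even number of controls are 1 and on the
# complementary line when an odd number are 1, so one output line carries the parity
# (XOR) of the controls and the other its complement (XNOR).
print(xor_xnor_chain([1, 0, 1]))    # (1, 0): an even number of 1s, so XOR = 0
print(xor_xnor_chain([1, 1, 1]))    # (0, 1): an odd number of 1s, so XOR = 1
```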

In the simple XOR/XNOR circuit just described, the output vector of one element is used as the input vector of a subsequent element. However, in other circuits the output vector of one element may be decomposed before being used as input. Here we use the OR/NOR circuit as an example.

The two-input OR/NOR circuit depicted in Figure 2 starts with the input vector (1,0). The information signal, A, is introduced simultaneously into the two elements A and A', while input B activates the operation of element B. The output vector of the initial element is split into its component scalars, which are then composed with other scalars to form new vectors that serve as the inputs to subsequent elements. Although it is tempting to see the intermediate scalars as Boolean values, this should be resisted. In some instances both outputs of an element will be 0. (This happens in the B element of Figure 2 when A = 1.) If these were true Boolean values, this would amount to a violation of the law of excluded middle.

Fig. 1. XOR/XNOR computed with a series of directed logic elements

Fig. 2. OR/NOR circuit in directed logic

It is worth spending a bit of time understanding what is going on in Figure 2. The output vector of the first A element is split into two paths; it may be helpful to think of the top path as the negative path and the bottom path as the positive path. One of these two paths will carry the scalar 1, the other the scalar 0. In this sense the position of the scalar 1 carries the information of the value of A. If A is positive (the scalar takes the bottom path), then it is switched to the OR output line without the need to check the value of B. If, on the other hand, A is negative, then it becomes necessary to check the value of B. A negative result from B yields the scalar 1 at NOR; a positive result sends it down to the second A element. Since the scalar 1 only passes through B when A is negative, it will be passed through the second A element to the OR output. The reader should take the time to verify that the 1 will always arrive at either OR or NOR while the other output carries a scalar 0. The extra 0 input at B merely ensures that every path carries either a 1 or a 0 scalar. It should be reiterated here that the (1,0) input vector elements are maintained throughout the whole network and are merely redirected. Minor changes to the OR/NOR circuit produce circuits for all of the other two-input Boolean functions (diagrams are given in Appendix A).
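
The routing just described can be simulated in a few lines. The sketch below is our reconstruction of Fig. 2 from the prose: the wiring of the three elements and the port names are inferred from the description and may differ in detail from the published figure.

```python
# A minimal simulation of the OR/NOR circuit as we read it from the description of Fig. 2:
# the first A element receives (1, 0); its top ("negative") rail and a constant 0 feed the
# B element; the B element's lower rail and the first element's bottom ("positive") rail
# feed the reversed A element (A').  The wiring is our reconstruction from the prose.

def element(control, top, bottom):
    """One controlled switch: pass when control = 0, switch when control = 1."""
    return (bottom, top) if control else (top, bottom)

def or_nor(A, B):
    t1, b1 = element(A, 1, 0)          # first A element, fed the constant vector (1, 0)
    nor, b2 = element(B, t1, 0)        # B element, with the extra constant-0 input
    or_, spare = element(A, b2, b1)    # reversed A element (A'); 'spare' carries the leftover 0
    return or_, nor

for A in (0, 1):
    for B in (0, 1):
        print(A, B, or_nor(A, B))      # prints (A OR B, A NOR B) for every input pair

# Note that when A = 1 both rails entering the B element carry 0, and in every case exactly
# one 1 and two 0s leave the circuit: the inputs are merely redirected, never consumed.
```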

2.1. Directed logic and Fredkin Gates

Readers familiar with conservative logic will recognize the elements in the above circuit as similar to Fredkin gates. Fredkin used controlled switches as gates and proved their completeness with respect to Boolean logic in Ref. 8. However, there are crucial differences between Fredkin’s implementation and ours, despite their being based on the same fundamental elements.

Fredkin conceived of the three inputs as being interchangeable: any output of one gate could be used as any input of a subsequent gate. His proof of the universality of Fredkin gates essentially depends on this interchangeability. In all our circuits the controlling input is kept entirely separate from the other two lines. This careful separation facilitates the optical implementation of our circuits along with their generalizability to other media.

Current versions of optical Fredkin gates [4, 9] require that the gates be controlled by a signal which is different in character from the other inputs. In most cases the control signal is electronic. However, even in ‘all-optical’ solutions the controlling signal differs from the other inputs, for example by being of a different wavelength. This difference means that optical Fredkin gates have not been cascadeable in the way that Fredkin envisaged. This in turn has meant that they cannot be shown to produce all of Boolean logic. We resolve this problem by reinventing cascading.

Directed logic is generalizable in that even though it was designed with optics in mind, it may be used in any medium at all. The first author is fond of pointing out that it could be implemented using railroad trains and track switches, though of course such an implementation is unlikely to be commercially feasible.

2.2. Directed logic cascading

2.2.1. From syntax to circuits

It is commonplace that logical syntax may be developed using different grammars. We wish to highlight here the difference between the grammars of infix and suffix notation. Infix notation places the operator between its arguments while suffix notation places the operator after the arguments. Thus ‘p OR q’ is in infix notation while ‘pq OR’ is in suffix notation.

As we scan an expression in suffix notation from left to right, we encounter the arguments for each function prior to the operator itself. In a somewhat similar way, if we look at the operation of gates cascaded in the traditional way, the inputs for each gate are computed temporally prior to the output of the gate. Thus there is a certain analogy between the temporal ordering of computation in traditional cascading and the spatial ordering of symbols in suffix notation. To be sure, the analogy is not perfect. For example, expressions in suffix notation are linearly ordered while the corresponding circuits are only partially ordered. However, analogies may be instructive even when imperfect. Infix notation is substantially different from suffix notation in that the arguments of the main operator in infix notation are typically not scanned until well after the operator is scanned.

Directed logic circuits cascade in a way that we suggest is analogous to infix rather than suffix notation. Directed logic circuits are cascaded by nesting within each other rather than chaining one after another. In fact, there is typically no explicit computation of the operator that is temporally distinct from the computation of the operands. Instead, the operator is computed by computing the operands within the context of a particular type of structure. The structure within which computation is performed determines the function computed. This is a vastly different model from that used for traditional cascading.

The crucial observation underlying this new model is that directed logic circuits are themselves controlled switches, in many ways similar to the elements of which they are composed. Both have a constant 1 and a constant 0 input. Both have two outputs, one of which is the negation of the other, and both are controlled by one or more control lines that may be of a different type than the data lines. This suggests that we may treat the argument positions in these structures as “black boxes” which may in turn be filled either by individual elements or by directed logic circuits. Circuits for complex functions may thus be built by recursive nesting. We begin with the circuit for the main operator. Into each argument position we place the directed logic circuit that computes the appropriate function. We continue placing circuits into argument positions until we reach arguments which may be computed with a single element, i.e. the level of literals. There is one caveat to this process. When an argument position is marked with a prime (′), that indicates that the circuit filling that position is to be reversed. When the position is filled by a single switch this does not matter, as the functionality of a single switch is the same whether reversed or not. However, when the position is occupied by a more complex circuit, the ‘decomputation’ of the two inputs can only be accomplished by reversing the entire circuit. (It is possible to replace the reversed circuit with other devices. This can be accomplished optically, for example, by a coupler followed by an amplifier. However, non-switch-based circuits may lack the speed advantages of Section 3.1 and the energy efficiency of conservative and reversible circuits. For these reasons we concentrate on the use of reversed circuits. We believe it is important to realize that the decomputation can be done entirely within the logic without the need for extra-logical devices.)
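
To make the black-box observation concrete, the sketch below models every directed-logic block, whether a single element or a nested sub-circuit, by the one property used for cascading: externally it behaves as a controlled switch governed by the truth value of its formula. The nested-tuple formula representation and the helper names are ours, and the sketch deliberately abstracts away the gate-level wiring and the reversed decomputing copies shown in Figs. 3 and 4.

```python
# An abstract sketch of cascading by nesting: a whole directed-logic circuit presents the
# same external interface as a single element, namely a controlled switch on a pair of
# rails whose control is the value of its formula.

def value(formula, env):
    """Recursively evaluate a formula such as ('AND', ('OR', 'A', 'B'), 'C')."""
    if isinstance(formula, str):               # a literal: look up its value
        return env[formula]
    op, left, right = formula
    l, r = value(left, env), value(right, env)
    return l & r if op == 'AND' else l | r

def block(formula, env, top, bottom):
    """The whole block acts as one controlled switch on the rails (top, bottom)."""
    return (bottom, top) if value(formula, env) else (top, bottom)

# Feeding the constant pair (1, 0) into the block for a formula routes the marker so that
# one rail carries the function and the other its negation (cf. the OR/NOR sketch above).
env = {'A': 0, 'B': 1, 'C': 1}
not_f, f = block(('AND', ('OR', 'A', 'B'), 'C'), env, 1, 0)
print(f, not_f)    # 1 0: (A OR B) AND C is true for this assignment
```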

2.2.2. Building a complex circuit

As an example, in this section we demonstrate how the new notion of cascading is used to construct the circuit for (A OR B) AND C. The circuit is obtained by inserting circuits for A OR B into the A and A′ argument places of the circuit for AND, as indicated in Fig. 3. The end result of the nesting is shown in Figure 4.

In this way a circuit for any logic formula can be ‘read off’ the structure of the formula in much the same way that traditional logic circuits can be. This point cannot be stressed too much. The circuit of Figure 4 follows directly from the syntax of the formula. It is not necessary to have the truth table or any normal form of the formula; a correct circuit follows from the formula itself.

Of course the circuit that can simply be read off a formula may not be the most efficient circuit for computing the function represented by the formula. For example, the circuit of Figure 4 can be simplified by noting that (A OR B) AND C is equivalent to C AND (A OR B) and then reading the circuit off the latter formula. The result is shown in Figure 5. Moreover, if we are not interested in the complement output, a significant fraction of the circuit can be discarded (i.e. the right-hand decomputing OR gate in Fig. 4).

3. Advantages and limitations of directed logic

3.1. Slower is Faster

In traditional logic architectures each gate must wait for the result of previous gates before computing its result. Upon receiving all inputs the gate effects a change of state depending on the inputs and shortly thereafter the output stabilizes at a particular value. The time between the initial presentation of the inputs and the time the output signal stabilizes is known as the ‘gate delay’. Gate delay is influenced by two factors: the speed of the state change and the size of the gate.

Fig. 3. Recursive cascading to produce a complex circuit: OR circuits are nested within an AND circuit as indicated by the arrows.

Fig. 4. The final circuit for (A OR B) AND C

Fig. 5. A simpler circuit computing the same function as Figure 4

Size is important because it takes a given type of signal a certain amount of time to traverse a given distance in a given medium. For a toggle switch (e.g. a typical light switch), this is the length of time it takes for a signal to cross the switch when the switch is already in the correct position. The longer the path the signal must travel, the longer it will take to travel it. This is one reason why smaller gates are preferable to larger gates, and why it is preferable to have gates packed as closely together as possible. We will call this sort of delay ‘path delay’ as it is the kind of delay that is present even in the paths that connect the various logic elements. Path delay is reduced simply by making the circuit physically smaller, even if each gate switches at the same speed as larger versions.

A second kind of delay stems from the fact that each logic element must make a state change based upon its inputs. Although a signal may travel across the gate prior to the completion of the state change, its value is unpredictable and cannot be used as a logic output until the signal has stabilized after the state change. For a toggle switch, this is the length of time it takes for the switch to change from “on” to “off”. Let us call this type of delay ‘state delay’. State delay is reduced by using faster switches even if the size of the circuit is not altered.

Together, the path and state delays determine the speed at which logic elements can operate. Delay times vary with the type of gate and the specifics of its construction, but typically are on the order of a few tenths of a nanosecond. The portion due to path delay is, of course, cumulative. However, since each gate depends on the previous gates for its input, state delays also add up. Later gates on a path cannot begin their state changes until all previous gates in the path have completed theirs. So each additional gate on a path adds both path delay and state delay to the circuit as a whole. For example, a path involving 100 gates, each of which has a state delay of 0.5 ns, will have a delay of 50 ns above and beyond the time it would take for the signal to traverse a wire of the same length as the circuit (the path delay of the circuit). This is one of the reasons that minimization is so important in circuit design; minimized circuits are significantly faster than non-minimized ones. By reducing the number of gates, logical minimization of a circuit increases the speed of the circuit more than simply shrinking it would.

The situation is quite different in directed logic. Each element needs to make a state change just as electronic gates do. However, because the signals that determine the state changes do not pass through previous gates in the circuit, all elements can perform their state changes simultaneously. As a result, the circuit is slowed by only the duration of a single state delay, not by the cumulative state delays of the entire path. The upshot of this is that directed logic circuits can have markedly less state delay than traditional circuits, even when they are built of elements which are individually slower.

Returning to the above example of a circuit with a path length of 100 gates or elements, and assuming a conservative 5 ns state delay for the directed logic elements, the overall state delay remains 5 ns for the whole circuit. The advantage of directed logic circuits increases as the circuits become larger.
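
The arithmetic behind this comparison is spelled out below as a quick sanity check; the delay figures are the ones quoted in the text and the variable names are ours.

```python
# Cumulative state delay of a traditional cascade versus the single, shared state delay
# of a directed-logic path, using the figures quoted in the text.

gates_on_path = 100
electronic_state_delay_ns = 0.5     # per gate, traditional cascade
directed_state_delay_ns = 5.0       # per element, directed logic (conservative figure)

print(gates_on_path * electronic_state_delay_ns, "ns of accumulated state delay")   # 50.0
print(directed_state_delay_ns, "ns of state delay, paid only once")                 # 5.0
```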

The speed advantage of optical directed logic circuits cannot be overemphasized. Although controlled optical switches are currently large compared to their electronic counterparts, we expect that technology will continue to make them smaller. Because all switches in a DL circuit operate simultaneously, reducing the path delay in this way is much more significant than similar reductions in traditional implementations. Assuming that switches can eventually be fabricated at about a 1 micron pitch, directed logic could potentially compute on the order of 10⁵ levels of logic in a clock period of 1/3 nanosecond. This represents a factor of about 10⁴ over current logic implementations.
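
The origin of the 1/3 ns figure can be seen from the back-of-the-envelope estimate below, assuming propagation at the vacuum speed of light; in a guiding medium the refractive index would lengthen the transit time proportionally.

```python
# 10**5 switching levels at a 1 micron pitch give a 10 cm optical path, which light
# traverses in roughly a third of a nanosecond (vacuum propagation assumed).

c = 3.0e8                       # speed of light, m/s
levels = 10**5
pitch = 1e-6                    # metres per switch
path_length = levels * pitch    # 0.1 m
print(round(path_length / c * 1e9, 2), "ns")   # ~0.33 ns
```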

While in this paper we are targeting optical computing paradigms, we should reiterate our earlier remark that directed logic circuits can be implemented in many other ways, including conventional electronic components. This should be remembered when considering the present state of the art, with integrated optics still in its infancy. Thus, at present the elements of which optical directed logic circuits are built are both larger and slower than comparable electronic elements. Nonetheless, optical directed logic circuits can already experience substantially less delay than traditional circuits. This advantage is likely to increase substantially as the field of integrated optics matures.

3.2. Conservative and Reversible

The fact that directed logic is based on Fredkin-like gates, combined with the fact that there is no detection during computation, means that computational processes are reversible and conservative. As a result there is no theoretical lower bound to the energy dissipated in computation as there is with traditional electronic logic.

Conventional implementations of Boolean logic destroy information and so also incur an energy cost. This point was originally made in Ref. 10. A clear exposition of the claim along with a discussion of the implications for logic implementations is provided in Ref. 8. A typical Boolean operation, say AND, takes two bits of input and returns only one bit of output. There is thus less information at the output than at the inputs. The lost information must be dissipated as heat. Although the heat of information loss is quite small on a per gate basis, it can be significant for large structures.

Directed logic is conservative and reversible. (For expositional reasons we have shown the control inputs as terminating at the elements they control. To be fully conservative, they must lead beyond the elements and be gathered at the end of the complete circuit. This is a trivial exercise, but the additional lines make the diagrams harder to read, thus we have left them out.) Every circuit has just as many ’1’ and ’0’ inputs as ’1’ and ’0’ outputs respectively. No information is lost as a result of the logic, and thus the logic, by itself, does not require the expenditure of energy. To be sure, the operation of the gate will require energy; there must be energy input into the system for it to work. But there is no loss due solely to the logic as there is in traditional logic systems.
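
The conservation argument can also be checked mechanically: in the controlled-switch model used in the sketches above, each element merely permutes its two inputs, so any network of such elements returns exactly the multiset of values it was given. A minimal check, with names of our own choosing:

```python
# Each controlled switch only permutes its two inputs, so nothing is created or destroyed
# anywhere in a network built from such elements.

from itertools import product

def element(control, top, bottom):
    return (bottom, top) if control else (top, bottom)

for control, top, bottom in product((0, 1), repeat=3):
    assert sorted(element(control, top, bottom)) == sorted((top, bottom))
print("every element merely permutes its inputs")
```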

Of course conservative logic can also be implemented in other ways. As with non-conservative logic, electronics is ahead of optics in this regard (cf. Refs. 11–13). DL makes no claim to being the best or most complete implementation of conservative logic. Indeed, our primary point is simply that DL is an optically implementable logic, something that has been hard to come by. The fact that it is conservative is an added bonus.

3.3. Two limitations

In its present form DL was designed to implement logic functions. Obviously, general computing is not limited to the evaluation of a single logic operation and much more work is needed to expand the DL paradigm toward more general computing. It is quite likely that such an expansion will require a departure from pure DL but it will still maintain an advantage over conventional systems. Below we discuss the two main limitations of pure DL.

3.3.1. Fan-out

One important logical operation that is missing from DL is fan-out. Of course, specific implementations of DL may have readily accessible fan-out operations. For example, if DL is implemented optically with outputs encoded by amplitude, fan-out may be implemented with reversed Y-couplers and some form of amplification (see Section 4.2.1). However, this requires more than the simple switching networks of DL and so is not part of DL per se.

3.3.2. Sequential Logic

As we have presented it here, directed logic is only able to perform combinational logic. Full computation requires sequential logic in addition. We have reserved discussion of DL-based sequential logic for a subsequent publication, as it requires the use of further elements, such as the fan-out above, which are not properly part of DL and which may depend on the specifics of the physical implementation in ways that DL itself does not.

4. Toward optical implementation

This section is devoted to a discussion of possible optical implementations of directed logic networks. Starting from the basic switching elements, the optical Fredkin gates, we then address aspects of advanced implementations of large networks.

4.1. Optical controlled switches: an updated survey

For applications in logic networks one is usually interested in logic gates containing nonlinear bistable elements. This is not the case for directed logic networks. Moreover, the basic configuration of a controlled switch is not restricted to digital signals; in principle, one may use these gates for processing analog signals as well. Since the introduction of optical Fredkin gates in Ref. 4, technology has evolved and one may compile a new list of possibilities. The following list contains the most obvious ones; many other options exist, with more to come in the future.

4.1.1. Polarization switching gate

Polarization switching gates were recently considered in Ref. 14. In such a gate the input and output lines correspond to two orthogonal polarizations of a light beam (or a waveguide channel of an integrated optical system) traversing a single controlled polarization rotator, such as a liquid crystal light modulator, an electro-optic (Kerr) modulator or any other means of polarization rotation. If desired by architectural requirements, the ’0’ and ’1’ signals can be converted into intensities by properly positioned polarizing beam splitters.

The main advantages of this gate are its relative simplicity and its robustness. The fact that the control input has a different nature than the signals that propagate through the gate is a problem for conventional applications of Fredkin gates, but our architecture is designed with this characteristic in mind.

4.1.2. Acousto-optic gate

The two input lines are laser beams incident on an acousto-optic deflector (either a bulk device or an integrated surface acoustic wave device) at the Bragg angle. The acoustic signal serves as the control input: if there is no acoustic signal, the two beams continue unaffected, while an acoustic signal of the proper frequency deflects each beam into the direction of the other, interchanging the two outputs. This is also a simple gate, but less robust than the polarization gate. It too has a control signal different from the propagating signals. For our application, this kind of gate can be easily cascaded and integrated. For example, a single acoustic pulse may activate many gates as it travels along the system, as will be required by the systolic architecture to be discussed below.

4.1.3. Photorefractive gate

Photorefractive materials change their refractive index as a function of illumination. They are therefore an ideal candidate for all-optical gates in which light provides the propagating signals as well as the control signals. In general, photorefractive media will be used as light-controlled phase modulators, as discussed below. In a more sophisticated architecture the photorefractive gate is based on four-wave mixing. In this gate, two counter-propagating, coaxial beams are the two inputs. The control signal consists of the two other counter-propagating pump beams. The two inputs are transmitted if the control beam is absent and are phase-conjugated when the pump is present, resulting in switching between the outputs.

4.1.4. Waveguide coupler gates

Controlled waveguide and fiber couplers are widely employed in optical communication and integrated optical systems. A 2×2 controlled coupler performs exactly the task of a controlled switch. While state-of-the-art couplers are based on electronic control, it is straightforward to combine photodetection with an electro-optic coupler to facilitate optical control. A more advanced technology would be the use of photorefractive material for direct optical control of the coupling constant. Optical control signals can be applied from outside, normal to the plane of the waveguides, or within the waveguide itself.

4.1.5. Mach-Zehnder gates

Mach-Zehnder interferometers are also ideal as controlled switches, and they too are highly developed for communications technology. Although the Mach-Zehnder interferometer has two input ports and two output ports, conventional applications utilize only one of each. In our architectures we exploit both ports, and a controlled phase modulator (liquid crystal, photorefractive medium, electro-optic phase modulator, etc.) can switch between the two outputs, implementing a Fredkin gate.

4.2. Advanced network implementations

For a specific application, a complete network will be assembled from a large number of elements. Usually these will all be of the same kind of embodiment, such as one of those listed above, but for some applications it will be advantageous to use more than one kind. Moreover, additional components may be incorporated into the network as well. In this subsection we consider several concepts that will be useful for actual implementations of such networks.

4.2.1. Amplification

Ideally, controlled switches are lossless. However, any practical device has losses, and in a large network these losses must be compensated for. The obvious approach is to insert amplification within the network. This can be done for each gate or periodically along the network. Possible implementations include erbium-doped optical fiber amplifiers, semiconductor optical amplifiers, quantum dot amplifiers, and any other method that can regenerate a weakened optical signal. To maintain the attributes of the present computing paradigm, one should avoid signal regeneration by a detector-laser combination unless extremely fast systems can be incorporated.

4.2.2. Parallel addressing and smart pixels

Up to this point we have mainly discussed the conceptual layout of the computing network and its components, concentrating primarily on the propagation of the signal along the network. However, we have not specifically addressed the technical issue of the information input, which must be directed to the control line of each element. As indicated in Section 2, the information vector must be distributed throughout the whole logic network, with each vector element activating one or more gates in parallel. Until now it was tentatively assumed that the individual gates were hard-wired to the input vector elements, thus implementing a fixed operation for a given network. To execute any other operation with the same network, the wiring to the control elements must be altered.

A significant improvement can be achieved if the wiring is replaced by a separate logic circuit which establishes the connection layout in a way that can be easily modified according to the required operation. An efficient way to implement such a connection is through an array of smart pixels that are optically addressed. There are several possibilities for such an optical addressing scheme, a particularly attractive one being a spatial light modulator that projects the complete control layout in parallel.

4.2.3. Systolic process

As noted in subsection 3.1, unlike conventional logic arrays, the operating speed of a directed logic network is limited only by the propagation time of the signal through the network. Since in some of the embodiments of optical controlled switches the transit time is determined only by the propagation speed of light through the medium of the network, this can be very fast. Nevertheless, at the computing rates practiced today, even this speed sets practical limits if the network has a reasonable length. Moreover, for most applications, existing interfaces between the network and the external world will usually set an even more severe limit on the computing speed.

The speed limitation indicated above can be partially mitigated if the information is introduced into the control elements sequentially, in synchronization with the signal propagating within the network. With such an arrangement, after the signal passes a certain cross-section of the network, that cross-section is ready to accept the next information vector. The result is a systolic processor that can be operated in a pulsed mode: a pulse of light is injected into the first layer of controlled switches together with the control information. The controls of the switches in subsequent layers are activated only just before the pulse reaches that layer. Meanwhile the first layer is ready to accept the next light pulse together with the next control sequence. One way to achieve proper synchronization in the parallel architecture described above is a projection system which is inclined with respect to the plane of the logic network.

5. Prospects for the Future

5.1. Scaling with Technology

Much of the work in optical logic has focused on developing specific devices, a switch here, a NOR gate there. The current proposal is, to a certain extent, technology independent. There are many ways of implementing optical Fredkin gates [4, 15], and as optical technology advances we expect ever faster and smaller mini-optics and more energy-efficient Fredkin gates to appear. Because the current proposal is based on a re-envisioning of the logical paradigm, these new technologies may be seamlessly incorporated, in much the way that improved transistors have been incorporated into electronic logic.

5.2. 2-D Multi-channel computation

Most logic functions, as discussed in subsection 2.2, can be implemented by a directed logic network containing two or three rows of gates. Considering these rows as a computing channel, it is straightforward to extend the system into two or three dimensions [5] to form a multichannel computing system for highly parallel computation.

5.3. Beyond logic operations

As already indicated elsewhere in this paper, DL in its present form was developed to evaluate logic functions in an optics-friendly way. This development led to a new concept for implementing logic operations, but it still lacks a general computing scenario. Future work will be dedicated to mitigating the present limitations, such as fan-out and fan-in as well as conventional cascading and feedback operations. It is quite likely that these extensions will exact a price in terms of reversibility, energy efficiency, and speed. Nevertheless, we expect to maintain the advantages of DL at least within the sections where logic operations are performed.

6. Conclusion

Directed Logic is an optics-friendly logic. It was designed from the start as a way of implementing Boolean logic that takes advantage of the unique properties of photons as opposed to electrons. It can implement all of combinational Boolean logic entirely in the optical domain. Nevertheless, it is quite possible that the future will lead us in a different direction, such as the implementation of directed logic entirely in the electronic domain. The authors feel that the optimal track should be based on hybrid architectures in which the attributes of both media are exploited in an optimal manner, as promoted in this paper. In the proposed architecture light does what it has proven to be good at: it conveys and processes the information, while electrons perform the essential nonlinear operation of controlling the individual Fredkin gates.

Initially, we expect the procedure to be applied in niche markets and products. As indicated in this paper, one such application is the multiple-input XOR operation, which is an indispensable element in all encoding and decoding schemes. While conventional implementations of such XOR gates rely on cascading a large number of two-input gates, directed logic implements the operation simultaneously for all the inputs, leading to a significant time advantage.

Appendix A: List of Boolean circuits

Figures 6–10 present directed logic circuits for each of the two-input Boolean functions.

Fig. 6. AND/NAND circuit

Fig. 7. Circuit for IF A, THEN B and its negation A AND NOT B

Fig. 8. Circuit for IF B, THEN A and its negation B AND NOT A

Fig. 9. OR/NOR circuit

Fig. 10. A fully cascadeable XOR/XNOR circuit. The XOR gate shown in Figure 1 works only for XOR among literals; it cannot be cascaded in the style of section 2.2. The XOR design shown here is fully cascadeable, though somewhat larger.

Acknowledgments

We wish to acknowledge with thanks the stimulating and constructive discussions with Jonathan Westphal, H. John Caulfield, and Liz Golden. We are also indebted to an anonymous referee for constructive comments. This work was partially supported by the United States Missile Defense Agency under contract No. HQ000604C0010.

References and links

1. L. J. Cutrona, E. N. Leith, C. J. Palermo, and L. J. Porcello, “Optical data processing and filtering systems,” IRE Trans. Inform. Theory IT-6, 386–400 (1960).

2. A. B. VanderLugt, “Signal detection by complex spatial filtering,” IEEE Trans. Inform. Theory IT-10, 139–145 (1964).

3. H. J. Caulfield and J. Shamir, “Wave-particle duality processors – characteristics, requirements and applications,” J. Opt. Soc. Am. A 7, 1314–1323 (1990).

4. J. Shamir, H. J. Caulfield, W. Micelli, and R. J. Seymour, “Optical computing and the Fredkin gates,” Appl. Opt. 25, 1604–1607 (1986).

5. J. Shamir, “Three-dimensional optical interconnection gate array,” Appl. Opt. 26, 3455–3457 (1987).

6. M. M. Mirsalehi, J. Shamir, and H. J. Caulfield, “Residue arithmetic processing utilizing optical Fredkin gate arrays,” Appl. Opt. 26 (1987).

7. K. M. Johnson, M. Surette, and J. Shamir, “Optical interconnection network using polarization-based ferroelectric liquid crystal gates,” Appl. Opt. 27, 1727–1733 (1988).

8. E. Fredkin and T. Toffoli, “Conservative logic,” International Journal of Theoretical Physics 21(3/4), 219–253 (1982).

9. H. J. Caulfield, R. A. Soref, L. Qian, A. Zavalin, and J. Hardy, “Generalized optical logic elements – GOLEs,” (under review).

10. R. Landauer, “Irreversibility and heat generation in the computing process,” IBM Journal of Research and Development 5, 183–191 (1961).

11. S. Younis, “Asymptotically zero energy computing using split-level charge recovery logic,” Ph.D. thesis, MIT (1994).

12. S. Younis and T. Knight, “Asymptotically zero energy split-level charge recovery logic,” in Proc. of 1994 International Workshop on Low Power Design, pp. 177–182 (1994).

13. M. P. Frank, “Physical Limits of Computing. Lecture #24: Adiabatic CMOS,” (2002). URL http://www.cise.ufl.edu/mpf/physlim/PhysLimL24.ppt.

14. A. I. Zavalin, J. Shamir, C. S. Vikram, and H. J. Caulfield, “Achieving stabilization in interferometric logic operations,” Appl. Opt. 45, 360–365 (2006).

15. H. J. Caulfield and R. A. Soref, “Universal reconfigurable optical logic with silicon-on-insulator resonant structures,” Photonics and Nanostructures – Fundamentals and Applications (to appear).
