
Characterization of ultrashort electromagnetic pulses

Open Access

Abstract

Ultrafast optics has undergone a revolution in the past two decades, driven by new methods of pulse generation, amplification, manipulation, and measurement. We review the advances made in the latter field over this period, indicating the general principles involved, how these have been implemented in various experimental approaches, and how the most popular methods encode the temporal electric field of a short optical pulse in the measured signal and extract the field from the data.

© 2009 Optical Society of America

1. Introduction

1.1. Need for Ultrafast Metrology

The development of mode-locked lasers in the mid-1960s gave rise to the problem of ultrashort pulse measurement, since optical pulses generated by this class of lasers were of significantly shorter duration than any photodetector response time. Despite the vastly increased capabilities of modern photodetectors in terms of both speed of response and sensitivity, the equally dramatic improvements in laser technology have sustained this disparity, and, indeed, with the emergence of attosecond pulse trains have extended it.

The need for metrology has increased along with the development of new sources and their application in a wide range of new fields. Of course, determining the pulse durations remains critical, both because this parameter is an important specification of the laser output needed for other applications and because it provides a diagnosis of the system operation.

Modern mode-locked lasers, for example, generate pulses with spectral bandwidths exceeding one octave, corresponding to pulses the brevity of which is well beyond anything that can be characterized by means of fast photodetectors. The operation of such lasers relies on a complex combination of linear pulse propagation, influenced by the chromatic dispersion of the laser material, the mirrors, and the intracavity dispersion-compensating devices, together with nonlinear effects, such as self-phase modulation of the pulse in the laser material or saturation of an intracavity absorption, such as in a semiconductor saturable absorber mirror (SESAM), as well as, in some cases, space–time coupling. The optimization of a mode-locked laser is made practicable by means of a diagnostic providing the electric field as a function of time or frequency, or at least providing some temporal information such as the second-order intensity autocorrelation [1, 2, 3, 4, 5, 6, 7]. Figure 1 displays the characterization results of the output pulse from a Ti:sapphire oscillator. One of the primary limits at present to the generation of few-cycle pulses directly from a laser is the dispersion of the intracavity mirrors and other optical elements. Historically, detailed measurements of laser output were able to identify this as a major obstacle to generating pulses of greater brevity.

Chirped pulse amplification (CPA) operates by lowering the peak power of the pulses in the amplifier gain medium, which would otherwise induce nonlinear phase distortion of the pulse or damage to the amplification medium [8, 9]. To achieve this, the pulses are stretched in time by means of a dispersive delay line, often based on angular dispersion from diffraction gratings or prisms. After amplification, the pulse is temporally recompressed by using an inverse dispersive delay line, or compressor, that compensates for the dispersion introduced by the stretcher and the propagation through the other amplifier elements. Obtaining peak performance from such a scheme requires a reliable and rapid method to characterize the output. Accurate characterization of the output pulses enables the optimization of the parameters of the system, such as the distance between the two gratings of the compressor and the angle of incidence of the input beam on the gratings. The usual optimization parameters in such an application are the duration of the recompressed pulses, since the peak power scales as the ratio of the energy per pulse to the duration, and the temporal contrast, since prepulses can hinder the control or observation of the physical processes of interest, for example the ionization of a target. Examples of optimization of CPA systems can be found in [10, 11]. Figure 2 presents an example of CPA optimization obtained with spectral phase interferometry for direct electric-field reconstruction (SPIDER) [11]. The spectral phase of the output pulse from a Ti:sapphire CPA system is plotted before and after optimization. The compressor optimization consisted in adjusting the angle of the diffraction gratings relative to the input beam and the relative distance between the two gratings. The large cubic spectral phase gives rise to significant prepulses, and the compressor optimization leads to a better pulse shape with a higher peak intensity.

The bandwidth of an optical pulse can be increased while maintaining a deterministic phase relation between different spectral components by means of various nonlinear optical processes such as self-phase modulation and harmonic generation. All of these require careful compensation of the spectral phase in order to lead to an output pulse with a shorter duration than the input. Further, these processes are dynamically complicated and sensitive to the details of the input pulse shape. Therefore, even characterizing the raw output pulse before recompression can be a difficult task [12, 13, 14, 15]. Figure 3 shows the characteristics of a filament pulse compressor that allows the generation of high-energy ultrashort optical pulses [16]. The output pulses have complicated spectral and temporal structures, and correlation between time and frequency can be visualized in the chronocyclic time–frequency space by calculating the spectrogram of the output electric field.

Shaped pulses, sometimes of a quite complex temporal structure, are now commonly used to both probe and manipulate fundamental processes in atoms and molecules [17, 18]. For instance, the study of primary processes in biologically relevant systems via ultrafast microscopy is now quite common. The details of the pulse shapes usually contain important information about the dynamical process under study, and this information, residing in both the temporal amplitude and the temporal phase of the field, can be extracted only by using modern techniques of metrology. For example, the important phenomenon of the self-action of intense optical pulses in nonlinear media gives rise to a complicated set of dynamics that has analogs in many branches of physics. The study of the changes in the shape of pulses propagating through such media provides access to these dynamics. Optical pulse shaping can also be used to generate trains of pulses useful in optical telecommunications or to generate shaped electrical waveforms after optical-to-electrical conversion by a photodetector. Figure 4 displays the intensity of a train of pulses generated by an optical pulse shaper based on a liquid crystal spatial light modulator placed in a zero-dispersion line.

There are also important technological applications of metrology. In optical telecommunication systems, such metrology is used to characterize modulators and the dispersion of fiber links. The propagation of light pulses carrying bits of information from transmitter to receiver demands long-distance transmission through various passive and active elements. Both linear processes (e.g., the frequency-dependent transmission and phase of the medium) and nonlinear processes (e.g., the intensity-dependent index and absorption) modify the electric field of the pulses, and these effects must be quantified in order to maximize the overall system performance. There is also a need to optimize the shape of the pulses that are used, which are typically carved from a continuous-wave (cw) quasi-monochromatic source by a modulator. Although telecommunication pulses typically have durations ranging from 1ps to 1ns, the deleterious effects of propagation are significant because propagation distances of the order of 1000km in a physical medium can be involved. Moreover, the pulses used in state-of-the-art commercial and research optical telecommunication systems are beyond the reach of all-electronic characterization. Further, temporal phase information is also needed. A review of high-speed diagnostics for optical telecommunication systems is presented in [19], and some examples of diagnostics used in the telecommunication environment can be found in [20, 21, 22, 23]. Figure 5 presents results of a pulse carver optimization using a real-time pulse characterization diagnostic based on linear spectrograms [24]. The electric field of the output of a Mach–Zehnder modulator driven by a 20GHz sinusoidal RF drive depends on the phase difference between the two arms of the interferometer, which is controlled by a continuous voltage. The modulation format can be set to 33% return to zero (8ps pulses with identical phases) or 67% carrier-suppressed return to zero (17ps pulses with a π phase shift between adjacent pulses). Data-encoded optical signals require diagnostics that can acquire an invertible experimental trace in a single shot or can gather statistically significant samples of an optical waveform. Intensity sampling diagnostics use nonlinear cross-correlation schemes, while sampling systems based on homodyne detection are sensitive to the electric field of the waveform under test. These diagnostics are not detailed in this review, and relevant references can be found in [19].

Thus ultrafast metrology continues as an active field of research. In this review, we outline the basic approaches to pulse-field measurement and describe in detail several of the most popular and powerful methods. The aim is both to summarize the state of the art in this rapidly moving field and to provide sufficient analysis and design criteria that a researcher may begin to implement these methods in the laboratory.

1.2. Historical Developments

Considerable insight into the field can be gained from a look at the history of ultrafast metrology. Therefore we outline in a more or less chronological order the major advances over the past nearly four decades, since the invention of mode locking. To prefigure the structure of the review, the chronology is given in terms of several threads that have led to distinct techniques.

Of course, much has been written on the subject in recent years, and a number of excellent reviews of a few methods exist. A review of pulse measurement methods prior to 1974 can be found in the article by Bradley and New [25]. A review of concepts for shaping and analysis of short optical pulses can be found in a 1983 article by Froehly and coworkers [26], and a summary of methods available up to 1990 in the chapter by Laubereau in the book edited by Kaiser [27]. In more recent developments, a comprehensive description of frequency-resolved optical gating (FROG) is given in a book edited by Trebino [28], and a broader treatment of the field in the context of ultrafast optics is to be found in the book by Diels and Rudolph [29].

For pulses in the range of several picoseconds or longer, the temporal intensity can be measured by using a streak camera or a photodiode. Combined with a measurement of the spectrum of the pulse, this information can be used to provide a reasonable characterization. For pulses in the femtosecond and indeed attosecond range such methods are not possible, in part because detectors that can absorb across the spectral range of these pulses are not always available, but mostly because direct photodetection is not fast enough.

Thus a different approach is required, one that avoids the need for fast detectors. Nonetheless, it is clear that something with a response time as brief as the pulse itself is needed, and the initial work in the field made use of the most obvious short event to hand—the pulse itself. This was used to synthesize a rapidly responding material excitation by means of the nonlinear optical responses of several common processes and materials. This trend has continued, though it is now understood that measurements using linear systems may also provide sufficient information to measure a pulse field.

Historically, the lack of fast detectors led to the adoption of nonlinear optical processes for the purposes of pulse characterization. An early technique, and one that dominated the field for many years, was the measurement of the intensity autocorrelation of a pulse. This relies on the observation that the efficiency of a nonlinear process (such as second-harmonic generation) is higher for higher input intensities. Thus the second-harmonic signal when a pair of pulses is incident on a nonlinear crystal is greater when they arrive at the same time, as opposed to when they arrive separately. Therefore a measurement of the second-harmonic power as a function of the delay gives an estimate of the pulse duration. Although it is possible to determine something of the time dependence of the phase of the pulse from more sophisticated versions of the autocorrelation, it is not possible to get a complete map. Nonetheless, it was also understood that the temporal structure of the pulses was strongly dependent on the spectral phase, and various representations of the pulse were developed to help visualize this and to develop new methods of measurement. Among the most fruitful of these was the spectrogram, consisting of a time–frequency map that plotted what was called the instantaneous frequency of the pulse.

Many current methods of metrology borrow heavily from methods developed in other branches of optics, notably imaging and testing. The strong analogy between space and time in Maxwell’s equations was employed first in metrology in the concept of the “time lens.” This employs a temporal phase modulator (a time-domain analog to a spatial phase modulator—a lens) and a dispersive delay line (a frequency-domain analog to free-space propagation) to generate a time-stretched replica of the input pulse whose temporal intensity can be measured by using a relatively slow detector.

The ideas of time–frequency representations have proved to be one of the most lasting in metrology, through both spectrography and its cousin sonography. The former builds on the original notion by developing methods to measure the spectrum of sequential time slices of the test pulse. The latter, in contrast, measures the time dependence of adjacent spectral slices. The relationship of these time–frequency (or chronocyclic) representations to various pulse measurement schemes has been an important source of ideas in ultrafast metrology.

The most common form of spectrography relies on the idea that the nonlinear mechanism used to measure an autocorrelation effectively provides a time gate. Since the second-harmonic intensity is largest when the two replicas of the test pulse overlap in time, this mechanism can be thought of as the test pulse selecting a time slice of itself. Resolving the spectrum of the resulting second-harmonic radiation, rather than simply measuring the total second-harmonic power, then provides more information than the autocorrelation alone. This notion was developed in various ways, but a major breakthrough came when it was realized that the pulse field could be retrieved from spectrograms measured in this way by using methods from image processing. This idea forms the basis of one of the currently popular nonlinear methods—frequency-resolved optical gating. This form of spectrography has spawned numerous offspring, buoyed by the discovery of a powerful inversion algorithm based on the recognition that a matrix representation of the spectrogram should be of rank one. This allows powerful singular value decomposition methods to extract the most appropriate fields by iteration.

Sonography has also developed considerably since its first demonstration contemporaneously with frequency-resolved optical gating. Sonography requires a detector of reasonable speed, usually synthesized by using a nonlinear mechanism, and pulse reconstruction can be accomplished rapidly by using a deterministic algorithm applicable to a modified sonogram or an image-processing-related iterative inversion.

A second analogy from optics that has proved equally fruitful for pulse characterization is interferometry. This is a well-known and sensitive method for extracting phase information about an optical field and is commonly employed in precision metrology. The measurement of the time-dependent phase of an optical pulse was first demonstrated by interfering it with a reference pulse of known character. In this case, a source with a narrowband spectrum provides a usable reference. This is analogous to using a point source as a reference wave in optical testing. It is possible to make this into a self-referencing interferometer by selecting the narrow frequency band for the reference from the test pulse spectrum. This is analogous to the generation of a spatial reference wave in optical testing by selecting a single point from the input beam. The temporal interference pattern obtained by combining different frequencies from the test pulse spectrum, recorded for example by using a cross-correlation (synthesized by using the same nonlinear mechanisms as had been developed for the autocorrelator), enables the relative phases of the two spectral components to be determined.

A different approach to interferometry avoids the need for a fast detector and is instead based on measurements made in the frequency domain. In spectral interferometry, a test pulse is gauged by using a known reference, and the phase difference extracted by using a noniterative algorithm. This method, first applied to the measurement of pulse distortions through propagation, was shown to be an extremely sensitive tool for pulse characterization, capable of attaining the quantum limit for photodetection. The proposal of self-referencing spectral interferometry showed how the lack of a known reference pulse could be circumvented by interfering the test pulse with a frequency-shifted (or spectrally sheared) version of itself and measuring the resulting two-pulse spectrum. A nonlinear implementation of this idea—SPIDER—retains the direct and unique inversion characteristic of interferometry with the ability to acquire and process data on individual laser pulses at rates up to 1kHz. This opens the way to measurements of statistical properties of pulse trains.

The final optical imaging analogy that has proved useful in pulse measurement is chronocyclic tomography. In this approach, the pulse field is reconstructed from a set of spectra after phase modulation. The name comes from the idea that these spectra represent projections of the chronocyclic phase-space representation of the field in the form of a Wigner function. The phase modulation serves to rotate the phase space, thus giving a series of one-dimensional sections of this two-dimensional entity. Further analogs from optical imaging have been used to develop simplified versions of this method that require significantly fewer measurements at the cost of some assumptions about the character of the input pulses. One of the earliest attempts to reconstruct pulse fields this way was to use the dispersive properties of glass to temporally stretch the pulse, then to determine the time dependence of the stretched pulse intensity by using an intensity cross-correlation with the unstretched pulse. This idea was further extended by using phase retrieval algorithms. Several approaches to tomographic pulse reconstruction have also been made by using self- and cross-phase modulation to achieve the phase-space rotation, coupled with measurements of the spectrum of the modulated pulse. Because the nonlinear mechanisms involve an ancillary pulse that must have a known (if not precisely controlled) shape, the best approach to inverting such measurements also makes use of iterative image processing algorithms.

Although most of the development of metrology for ultrafast pulses has made use of nonlinear optical processes, this turns out to be an artifact of the time scales involved rather than a fundamental restriction. In fact, it has been shown that complete characterization can be achieved by using entirely linear optical filters, such as spectrometers and temporal modulators. The only requirements are that the apparatus consist of at least one filter with a time-stationary response (e.g., a spectrometer) and at least one with a time-nonstationary response (e.g., a phase modulator) [30]. All of the above-mentioned classes of measurement can be formulated in this way. It is only in recent years, however, that it has been possible to get active optical elements that have time and modulation scales appropriate for operating on subpicosecond pulses. Nonetheless, since modulation and photodetection with sub-100ps response times are required for a 40Gbit/s optical telecommunications system, these elements are also available to build linear methods that have proved very useful in assessing the performance of systems and components in this application. Interferometric, spectrographic, and tomographic methods have been implemented by using linear temporal modulators and spectral measurements, with time-integrating or “slow” detectors (i.e., with electrical bandwidths much less than 40GHz).

If one goes beyond purely integrating detectors that measure only the energy of a pulse or the power in a pulse train, it is possible to achieve a different sort of measurement by using linear optics. In this case, using linear filters, one can, for example, select two different frequencies from the pulse spectrum and mix them on a fast photodiode. The phase of the resulting modulation of the photocurrent is closely related to the phase difference of the two optical components.

The subsequent sections of this review deal with each of these approaches in detail, providing both an analysis of the methods and a description of the current state of the art. To begin, a general analysis of pulse characterization is described in Section 2. This covers all known methods and indicates the necessary minimum conditions that all apparatuses must satisfy in order to operate successfully. The following sections describe each of the major approaches in turn: spectrography in Section 3, tomography in Section 4, and interferometry in Section 5. Some current areas of research are described in Section 6, together with the conclusions.

2. General Principles and Concepts of Pulse Characterization

2.1. Concepts and Protocols

What apparatus is required to characterize an optical pulse? Given the plethora of techniques purporting to achieve this aim, it is worth considering what are the necessary and sufficient conditions that must be satisfied by any method that provides a complete specification of an ultrashort pulse field. It is possible to formulate such conditions quite generally in terms of the theory of linear filters. The fact that this is possible already implies that apparatuses based entirely on linear optical elements are capable of pulse characterization, something that was not appreciated until relatively recently [31]. In practice, many of the popular methods make use of nonlinear optical processes, but this is because it has proved difficult to construct linear filters of the correct character or response time, rather than for any fundamental reason.

The inversion protocols for extracting the pulse shape from measured data are also made clear by working with linear transformations, which allows a categorization of different experimental methods and the development of a catalog of what is possible in principle. An important feature introduced by the use of nonlinear optics is that the inversion algorithms can become more complicated. In some cases they remain deterministic, but in others an iterative search for a solution satisfying the twin constraints of the signal form and the data must be implemented. Thus the two major considerations in pulse characterization are the physical arrangement of the linear and nonlinear components and the inversion procedure [32].

2.1a. Representation of Pulsed Fields

Before describing how pulse measurement methods operate, it will be well to set out some definitions and to delineate exactly what we mean by pulse characterization. An electromagnetic pulse may be specified by its electric field alone, at least below intensities that give rise to fields that will accelerate electrons to relativistic energies. Thus a useful notation is that of the analytic signal, whose amplitude and phase we seek to determine via measurement. The (real) electric field of the pulse is given in terms of the analytic signal by

ɛ(t)=E(t)+E*(t),
where E(t) is an analytic function of time (and space, although we suppress other arguments here for clarity). The signal E is taken to have compact support in the domain [−T,T], and we shall refer to it henceforth as the “field of the ultrashort pulse.” The analytic signal is complex and therefore can be expressed uniquely in terms of an amplitude and phase,
E(t)=|E(t)|exp[iψ(t)]exp(iψ0)exp(−iω0t),
where |E(t)| is the time-dependent envelope, ω0 is the carrier frequency (usually chosen near the center of the pulse spectrum), ψ(t) is the time-dependent phase, and ψ0 a constant, known as the “carrier-envelope offset phase.” The square of the envelope, I(t)=|E(t)|², is the time-dependent instantaneous power of the pulse, which can be measured if a detector of sufficient bandwidth is available (note that absolute measurement of the instantaneous power is usually not required, and most pulse characterization diagnostics return a normalized representation of this quantity). The derivative of the time-dependent phase accounts for the occurrence of different frequencies at different times, i.e., Ω(t)=∂ψ/∂t is the instantaneous frequency of the pulse that describes the oscillations of the electric field around that time. The frequency representation of the analytic signal is defined by the Fourier transform
Ẽ(ω)=|Ẽ(ω)|exp[iϕ(ω)]=∫_{−T}^{T}dt E(t)exp(iωt),
so that ɛ̃(ω)=Ẽ(ω)+Ẽ*(−ω). Note that Ẽ contains only positive frequency components, since E(t)=∫_{0}^{∞}(dω/2π)Ẽ(ω)exp(−iωt). This is therefore a reasonable description for the field of pulses propagating in charge-free regions of space, for which the pulse area ∫_{−T}^{T}dt ɛ(t)=ɛ̃(0) must be zero. Here |Ẽ(ω)| is the spectral amplitude and ϕ(ω) is the spectral phase. The square of the spectral amplitude, Ĩ(ω)=|Ẽ(ω)|², is the spectral intensity (strictly speaking this quantity is the spectral density—the quantity measured in the familiar way by means of a spectrometer followed by a photodetector). The spectral phase describes the relative phases of the optical frequencies composing the pulse, and its derivative ∂ϕ/∂ω is the group delay T(ω) at the corresponding frequency, i.e., the time of arrival of a subset of optical frequencies of the pulse around ω. A pulse with a constant group delay, i.e., a linear spectral phase, is said to be Fourier-transform limited because it is the shortest pulse that can be obtained for a given optical spectrum [33].

A single pulse is said to be completely characterized if the function E(t) is known on the domain [−T,T]. In practice one usually adopts the approximation that the pulse is also characterized by the function Ẽ(ω) on the domain [ω0−Ω,ω0+Ω], where Ω is a frequency that is large compared with the bandwidth of the source (i.e., large compared with the inverse of the coherence time of the source). The sampling theorem prevents a function from having compact support in both domains, but it is usually a reasonable approximation to truncate the spectral function at large frequencies, where the spectral density falls below the noise level of the detector. With this approximation, all integrals are usually formally extended from −∞ to +∞. Figure 6 presents the temporal and spectral representations of a Gaussian pulse with flat spectral phase, a quadratic spectral phase ϕ(2)ω²/2, and a cubic spectral phase ϕ(3)ω³/6. The impact of these different phases on the temporal profile of the pulse can also be seen.
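
For readers who want to reproduce the behavior shown in Fig. 6 numerically, the following Python sketch builds a Gaussian spectral amplitude, applies a flat, quadratic, or cubic spectral phase, and transforms to the time domain with the convention E(t)=∫(dω/2π)Ẽ(ω)exp(−iωt). All numerical values (grid, bandwidth, and the coefficients standing in for ϕ(2) and ϕ(3)) are illustrative assumptions, not parameters taken from the figure.

```python
import numpy as np

# Illustrative grid and pulse parameters (assumptions, not values from the text).
N, dt = 4096, 0.25                               # samples and time step (fs)
t = (np.arange(N) - N // 2) * dt                 # time axis (fs)
w = 2 * np.pi * np.fft.fftfreq(N, d=dt)          # angular-frequency detuning from the carrier (rad/fs)

tau0 = 10.0                                      # sets a ~10 fs transform-limited envelope
amp = np.exp(-(w * tau0) ** 2 / 4.0)             # Gaussian spectral amplitude |E~(w)|

def temporal_intensity(spectral_phase):
    """|E(t)|^2 from E(t) = int (dw/2pi) E~(w) exp(-i w t); np.fft.fft supplies exp(-i w t) here."""
    E_w = amp * np.exp(1j * spectral_phase)
    E_t = np.fft.fftshift(np.fft.fft(E_w)) / (N * dt)
    return np.abs(E_t) ** 2

def fwhm(x, y):
    """Full width at half-maximum of a sampled, essentially single-peaked curve."""
    above = np.where(y >= y.max() / 2)[0]
    return x[above[-1]] - x[above[0]]

for label, phase in [("flat spectral phase", 0.0 * w),
                     ("quadratic phase, phi(2) = 100 fs^2", 100.0 * w ** 2 / 2),
                     ("cubic phase, phi(3) = 1000 fs^3", 1000.0 * w ** 3 / 6)]:
    print(f"{label:38s} -> intensity FWHM = {fwhm(t, temporal_intensity(phase)):5.1f} fs")
```

The quadratic phase stretches the pulse symmetrically, while the cubic phase produces the asymmetric structure with satellites mentioned in connection with Fig. 6.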

2.1b. Correlation Functions and Chronocyclic Representations

The analytic signal describing a pulse field is not sufficient to specify the character of an ensemble of pulses. For example, each pulse from an amplifier system may be, indeed probably is, slightly different from its predecessor and successor, and thus each pulse represents a different realization of the ensemble. A complete specification of the ensemble is given by the probability distribution of the field at each point in time. However, it is usually sufficient to specify a set of correlation functions of the field, since experiments can be described in terms of a fairly small number of such functions.

The lowest order of these is the two-time correlation function C(t,t′)=⟨E(t)E*(t′)⟩, where the brackets indicate either a time average over the pulse train or an ensemble average over repeated experiments. Note that C(t,t′) is not the same as the correlation function that is derived from the pulse spectral intensity |Ẽ(ω)|² via the Wiener–Khintchine theorem. In that case, the Fourier transform yields the reduced correlation

C(τ)=∫dt C(t,t+τ)=∫(dω/2π)|Ẽ(ω)|²exp(iωτ).
This obviously contains no more information than the spectrum itself, in contrast to C(t,t′), which encodes dynamical correlations in the electric field across the pulse.

A knowledge of the two-time correlation function allows one to determine whether the pulses in the ensemble are coherent, that is, to determine whether they have the same pulse field. A useful number characterizing the similarity of the pulses in the ensemble is the degree of temporal coherence, defined by [34]

μ=∫∫dt dt′|C(t,t′)|²/[∫dt C(t,t)]².
When this number is unity, all pulses are the same, and values smaller than 1 indicate various degrees of statistical variations in the pulse ensemble. In the case of identical pulses, the correlation function factorizes and the ensemble may be characterized by a single pulsed field. The analytic signal may be extracted from a single line of the correlation function, since E(t)∝C(t,t0).
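
A minimal numerical sketch of this quantity is given below: it assembles a synthetic ensemble (Gaussian envelopes with an assumed random arrival-time jitter), forms the two-time correlation ⟨E(t)E*(t′)⟩, and evaluates μ. The ensemble model and all parameter values are assumptions chosen only to illustrate the definition.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, tau0, n_shots = 1024, 1.0, 20.0, 500      # grid points, step (fs), 1/e width (fs), shots
t = (np.arange(N) - N // 2) * dt

def degree_of_coherence(fields):
    """mu = int dt dt' |C(t,t')|^2 / [int dt C(t,t)]^2 for an (n_shots, N) array of fields."""
    C = fields.T @ fields.conj() / fields.shape[0]        # two-time correlation <E(t) E*(t')>
    num = np.sum(np.abs(C) ** 2) * dt * dt
    den = (np.sum(np.real(np.diag(C))) * dt) ** 2
    return num / den

identical = np.tile(np.exp(-(t / tau0) ** 2), (n_shots, 1))
jittered = np.array([np.exp(-((t - rng.normal(0.0, 10.0)) / tau0) ** 2) for _ in range(n_shots)])

print(f"identical pulses:        mu = {degree_of_coherence(identical):.3f}")   # 1.000
print(f"10 fs rms timing jitter: mu = {degree_of_coherence(jittered):.3f}")    # below 1
```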

Similarly, one can define a two-frequency correlation function C͌, which is the double Fourier transform of its temporal counterpart, by

C͌(ω,ω′)=⟨Ẽ(ω)Ẽ*(ω′)⟩.
For application to interferometry, it is most useful to consider correlation functions written in terms of center- and difference-frequency variables,
C͌(Δω,ωC)=⟨Ẽ(ωC+Δω/2)Ẽ*(ωC−Δω/2)⟩,
and similarly in the time domain,
C(tC,Δt)=⟨E(tC+Δt/2)E*(tC−Δt/2)⟩,
where ωC=(ω+ω′)/2, Δω=ω−ω′, tC=(t+t′)/2, and Δt=t−t′. An obvious way to measure correlation functions is to make repeated measurements of the electric field of the individual pulses that make up the realizations of the ensemble. From a large set of such measurements it is possible to estimate the statistics of the pulse field of the ensemble, or at least to determine some of the lower correlation functions. This has been done in several cases, and the fluctuations in pulse shape from a chirped-pulse amplifier system have been systematically characterized [35, 36].

Beyond this approach, though, the correlation functions are difficult to measure. The reason is that the measured signals are functionals of the two-time correlation function and cannot always be simply inverted. This problem is usually ignored, and it is assumed from the beginning that the pulse train may be described in terms of a field. This makes possible more or less straightforward inversion algorithms.

Whether or not the pulse train is coherent, it is nevertheless useful to consider metrologic schemes in the two-dimensional space of the correlation function. The reason is that the output of all absorptive detectors is proportional to a bilinear functional of the pulse field, and thus to a linear functional of the two-time correlation function. However, it is frequently productive to work with a variation of the correlation function that uses the two-dimensional chronocyclic space (t,ω). The intuitive concept of time-dependent frequency can be most easily seen within this space.

One approach to defining a representation of the pulse in the chronocyclic phase space is a Fourier transformation of the correlation function with respect to the time difference of the two arguments:

W(t,ω)=∫dt′E(t+t′/2)E*(t−t′/2)exp(iωt′).
W can be calculated equivalently from the frequency representations of the analytic signal:
W(t,ω)=∫(dω′/2π)Ẽ(ω+ω′/2)Ẽ*(ω−ω′/2)exp(−iω′t).
The function W is known as the “chronocyclic” Wigner function [37, 38]. Other time–frequency representations of the pulse are also possible, and can be related to the Wigner function via convolution. Particularly useful features of the Wigner function are that it is real valued and that its marginals are the temporal and spectral intensities
I(t)=|E(t)|²=∫(dω/2π)W(t,ω),
Ĩ(ω)=|Ẽ(ω)|²=∫dt W(t,ω).
Note also that the Wigner function is sufficient to characterize both individual pulses and partially coherent pulse ensembles. It is not in general positive definite, and cannot therefore be considered a probability distribution of the pulse field. Indeed, negative Wigner functions are quite common even for simple pulse shapes and also characterize many of the complicated pulse shapes that are in current use in, say, quantum control. For example, the Wigner function of a pair of phase-locked Gaussian pulses is negative over a significant region of the phase space. The restrictions on the pulse duration and bandwidth required by Fourier’s theorem are inherent in the Wigner function, and there is a minimum area of the chronocyclic phase space that it may occupy.
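
The Wigner function and its marginals are straightforward to evaluate from a sampled field. The Python sketch below (grid size, pulse duration, and pulse separation are assumptions) computes W(t,ω) on a discrete lag grid, checks that the frequency marginal returns |E(t)|², and confirms that the Wigner function of a pair of phase-locked Gaussian pulses takes negative values.

```python
import numpy as np

# Assumed grid; E is a sampled (complex) analytic-signal envelope.
N, dt = 512, 1.0
t = (np.arange(N) - N // 2) * dt

def wigner(E, dt):
    """W(t, w) = int dt' E(t + t'/2) E*(t - t'/2) exp(i w t'), evaluated on the lags t' = 2 m dt.

    Returns the frequency axis (spacing 2*pi/(2*N*dt)) and an (N, N) array W[time, frequency]."""
    n_pts = E.size
    W = np.zeros((n_pts, n_pts))
    m = np.arange(n_pts) - n_pts // 2                      # lag index
    for n in range(n_pts):
        ip, im = n + m, n - m
        ok = (ip >= 0) & (ip < n_pts) & (im >= 0) & (im < n_pts)
        kern = np.zeros(n_pts, dtype=complex)
        kern[ok] = E[ip[ok]] * np.conj(E[im[ok]])          # E(t + t'/2) E*(t - t'/2)
        # exp(+i w t') transform over the lag variable; np.fft.ifft supplies the + sign
        W[n] = np.real(np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(kern)))) * n_pts * 2 * dt
    w_axis = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n_pts, d=2 * dt))
    return w_axis, W

# Pair of phase-locked Gaussian pulses: the interference term makes W negative near t = 0.
tau0, sep = 15.0, 80.0
E = np.exp(-((t - sep / 2) / tau0) ** 2) + np.exp(-((t + sep / 2) / tau0) ** 2)
w_axis, W = wigner(E, dt)

I_from_W = W.sum(axis=1) / (2 * N * dt)                    # frequency marginal, int dw/2pi W(t, w)
print("frequency marginal reproduces |E(t)|^2:", np.allclose(I_from_W, np.abs(E) ** 2))
print(f"minimum of W: {W.min():.3f} (negative, so W is not a probability distribution)")
```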

Some examples of chronocyclic Wigner functions for common pulse shapes are shown in Fig. 7. The Wigner function of a Gaussian pulse with a flat spectral phase does not show a correlation between time and frequency. However, with a quadratic spectral phase (i.e. a linear group delay), the Wigner function acquires a slope indicating the correlation between time and frequency, and its contours provide some intuition about the pulse chirp via a graph of the time-dependent frequency. The Wigner function of a pair of phase-locked Gaussian pulses and a Gaussian pulse with cubic spectral phase take some negative values, although their marginals are, as expected, positive quantities.

2.2. General Strategies for Pulse Characterization

2.2a. Linear Systems Model and Photodetection

The oscillations of the electric field ɛ are too fast to be directly resolved by photodetection. Photodetectors are intrinsically square-law detectors, sensitive to the intensity of optical waves but not to their phase. Indirect approaches are therefore used to provide phase sensitivity with square-law photodetectors and to resolve the shape of short optical pulses. The basic elements required for the complete characterization of optical pulses are quite simple: at least one fast shutter or phase modulator, a spectrometer or an element to temporally stretch the pulse via dispersion, and one or two beam splitters. One can think of all elements except the beam splitters as two-port devices: a pulse enters at one port and exits at another. There may be ancillary ports for control signals, such as the timing signal for the shutter opening, for example, but these are essentially linear systems, in that the output pulse field scales linearly with the input pulse field. Thus the input–output relations for these devices are all of the kind

EOUTPUT(t)=∫dt′H(t,t′)EINPUT(t′),
where EINPUT and EOUTPUT are the analytic signals of the input and output field, and H is the (causal) response function of the device. We will specify the functional forms of the common linear filters given above in subsequent paragraphs.

The beam splitter is a four-port device, having two input and two output ports. The input–output relations for this device are well known, and the main utility in pulse measurement applications is either in providing a means to generate a replica of a pulse (one input and two outputs) or to combine the unknown pulse with a reference pulse (two inputs and two outputs), or as elements of an interferometer in which phase-to-amplitude conversion takes place.

We take it that all detectors available have a response that is slow compared with the pulse itself. For pulses with temporal structure of duration less than 100fs or so, this is usually the case. The measured signal from a square-law detector is related to the incident field, for our purposes, by

S(t)=∫dt′R(t−t′)|E(t′)|²,
where R(t) is the detector response function, which is causal, real, and time stationary. When the detector has a response time TR taken to be much longer than the duration of the field E, but shorter than the time between pulses realizing an ensemble, then the signal becomes a functional of the test pulse energy alone, S=∫_{−TR}^{TR}dt|E(t)|²=∫dt|E(t)|².

Linear filters may be separated into two classes: those with time-stationary response functions and those with time-nonstationary responses. For the former class, which includes the spectrometer and dispersive delay line, the shape of the output pulse does not depend on the arrival time of the pulse. For the latter class, which includes the phase modulator and the shutter, the output pulse shape clearly depends on the timing of the input pulse with respect to the shutter opening or the modulator drive signal.

Time-stationary filters are characterized by response functions of the form H(t,t′)=S(t−t′), and a particularly useful class of time-nonstationary filters by H(t,t′)=N(t)δ(t−t′). Equivalently, in the frequency domain, these stationary filters take the general form H͌(ω,ω′)=S̃(ω)δ(ω−ω′), and the nonstationary the form H͌(ω,ω′)=Ñ(ω−ω′), where the tilde represents a Fourier transform.

We may postulate a general linear filter function in the form of a temporal Fresnel kernel:

H(t,t′)=(2πb)^{−1/2}exp[(i/2b)(at²−2tt′+dt′²)],
where a, b, and d are complex numbers (though real for phase-only filters). H is unitary and satisfies
∫dt H(t,t′)H*(t,t″)=δ(t′−t″).
Most common manipulations can be described by such a filter function; indeed, an arbitrary response function may be constructed piecewise by concatenating several such filters. Representative response functions for the various elements named above, that facilitate analysis of all pulse measurement apparatuses, are particular cases of this general kernel with elements a, b, d determined by the action of the filter. Some particularly useful examples are the response functions for
Shutter (time gate), NA(t−τ;τg)=exp[−(t−τ)²/τg²],
Linear phase modulator, NLP(t;ψ(1))=exp[iψ(1)t],
Quadratic phase modulator, NQP(t;ψ(2),τ)=exp[iψ(2)(t−τ)²/2],
Spectrometer, S̃A(ω−Ω;Γ)=exp[−(ω−Ω)²/Γ²],
Delay line, S̃LP(ω;ϕ(1))=exp[iϕ(1)ω],
Dispersive delay line, S̃QP(ω;ϕ(2),ωR)=exp[iϕ(2)(ω−ωR)²/2],
where N and S̃ indicate that the response functions are associated with nonstationary and stationary filters, the superscripts A and P denote amplitude-only and phase-only filters, and for phase-only filters, L and Q denote linear phase modulation and quadratic phase modulation. Although the spectrometer’s response function is not strictly causal, it can be made so by the introduction of a suitable delay that has no physical significance in the measurement protocol [39]. The various parameters characterizing these filters are the following.
  • Gate: opening time τ, and duration of opening window τg,
  • Linear temporal phase modulator: frequency shift ψ(1),
  • Quadratic temporal phase modulator: amplitude of quadratic phase modulation ψ(2), and time of phase modulation extremum τ
  • Spectrometer: center frequency of passband Ω, and bandwidth Γ
  • Delay line: delay ϕ(1)
  • Dispersive delay line: group-delay dispersion ϕ(2) at reference frequency ωR

Some of these parameters become variables in the measured signal function (for example, the opening time of the gate or the center frequency of the spectrometer passband), while other parameters might be constant (for example the gate duration). The variable parameters are those on which the inversion is based. It is therefore important to ensure that the number and type of filters are adequate to the task.
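
The sketch below illustrates, under assumed grid and filter parameters, how these response functions act on a sampled analytic signal: the time-stationary filters multiply the field in the frequency domain, while this class of time-nonstationary filters multiplies it in the time domain. It is a schematic numerical model, not an implementation of any particular instrument, and not every filter defined here is exercised in the short demonstration at the end.

```python
import numpy as np

# Assumed sampling grid; E is the sampled analytic signal (carrier removed), units of fs.
N, dt = 2048, 0.5
t = (np.arange(N) - N // 2) * dt
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dt))     # detuning from carrier (rad/fs)

def to_freq(E_t):   # E~(w) = int dt E(t) exp(+i w t)
    return np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(E_t))) * N * dt

def to_time(E_w):   # E(t) = int dw/2pi E~(w) exp(-i w t)
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(E_w))) / (N * dt)

# Time-nonstationary filters multiply in the time domain ...
gate     = lambda E, tau, tau_g: E * np.exp(-((t - tau) / tau_g) ** 2)            # shutter NA
quad_mod = lambda E, psi2, tau=0.0: E * np.exp(1j * psi2 * (t - tau) ** 2 / 2)    # modulator NQP
# ... and time-stationary filters multiply in the frequency domain.
spec_gate  = lambda E, Omega, Gamma: to_time(to_freq(E) * np.exp(-((w - Omega) / Gamma) ** 2))     # S~A
dispersion = lambda E, phi2, w_r=0.0: to_time(to_freq(E) * np.exp(1j * phi2 * (w - w_r) ** 2 / 2))  # S~QP

def rms_duration(E):
    I = np.abs(E) ** 2
    t0 = np.sum(t * I) / np.sum(I)
    return np.sqrt(np.sum((t - t0) ** 2 * I) / np.sum(I))

E0 = to_time(np.exp(-(w * 10.0) ** 2 / 4))            # ~10 fs transform-limited Gaussian (assumption)
E_stretched = dispersion(E0, phi2=500.0)              # stationary phase filter: the pulse is chirped
E_sliced = gate(E_stretched, tau=20.0, tau_g=15.0)    # nonstationary gate: selects a time slice

for name, E in [("input", E0), ("after dispersive line", E_stretched), ("after gate", E_sliced)]:
    print(f"{name:24s} rms duration = {rms_duration(E):6.1f} fs")
```

Chaining these functions mimics the in-series filter arrangements discussed below; the parameters passed to each call (τ, τg, Ω, Γ, ψ(2), ϕ(2)) play the role of the variable parameters on which an inversion would be based.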

2.2b. Measurement of the Marginals of the Wigner Function

Pulse energy spectrum. Possibly the simplest quantity that can be measured for an isolated pulse is its spectrum. It is therefore also one of the most important, since it can be used as a consistency check for all pulse characterization techniques: the reconstructed spectrum must match an independent direct measurement of the spectrum. The pulse spectrum is usually determined by the obvious expedient of sending the pulse into a spectrometer (usually a grating spectrometer is required to yield the necessary dispersion) and recording the output as a function of the setting of the passband of the instrument, Ω. Then the spectrometer output is

S(Ω;Γ)=∫dt|∫dt′SA(t−t′;Ω,Γ)E(t′)|²=∫(dω/2π)|S̃A(ω−Ω)|²Ĩ(ω),
with S̃A(ω) given by Eq. (2.20). When the bandwidth of the spectrometer is small relative to the variations of the spectrum, the measured signal is simply the optical spectrum of the source. It is clear that the signal measured in this way contains no information about the spectral phase of the pulse and can at best lead to the optical spectrum when the filter passband is significantly narrower than the features of the spectrum of the pulse under test.
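
A short numerical sketch of this relation is given below: the recorded spectrum is modeled as the spectral intensity smoothed by the squared passband of the spectrometer, and the trace broadens as the passband width Γ becomes comparable with the spectral features. The passband follows the Gaussian S̃A defined above; the specific grid and bandwidth values are assumptions. Note that only Ĩ(ω) enters, so the result carries no spectral-phase information.

```python
import numpy as np

# Measured spectrum as the overlap of the spectral intensity with the squared passband.
w = np.linspace(-2.0, 2.0, 4001)                  # detuning from carrier (rad/fs), assumed grid
dw = w[1] - w[0]

I_w = np.exp(-(w * 10.0) ** 2 / 2)                # spectral intensity |E~(w)|^2 of a ~10 fs pulse

def measured_spectrum(I_w, Gamma):
    """S(Omega; Gamma) = int dw/2pi |S~A(w - Omega)|^2 I~(w), evaluated for all Omega on the grid."""
    passband_sq = np.exp(-2 * (w / Gamma) ** 2)   # squared Gaussian passband, centered before shifting
    return np.convolve(I_w, passband_sq, mode="same") * dw / (2 * np.pi)

for Gamma in (0.005, 0.05, 0.5):
    S = measured_spectrum(I_w, Gamma)
    half = S.max() / 2
    idx = np.where(S >= half)[0]
    print(f"Gamma = {Gamma:5.3f} rad/fs -> measured FWHM = {w[idx[-1]] - w[idx[0]]:.3f} rad/fs")
```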

It is of particular importance that an equation equivalent to Eq. (2.23) can be written for all stationary filters; i.e., the output signal of a device built entirely with stationary amplitude and/or phase filters does not depend on the spectral phase of the pulse. The implication is that stationary-only filters are insufficient to gather information on the spectral phase of an optical pulse and can at best return information on the spectral intensity of the pulse.

In terms of the Wigner representation, the measurement of a pulse spectrum is written as

S(Ω;Γ)=∫dt∫(dω/2π)W(t,ω)WS(t,ω;Ω,Γ),
where WS(t,ω;Ω,Γ) is the Wigner chronocyclic representation of the spectrometer response function, defined by
WS(t,ω;Ω,Γ)=∫dt′SA(t−t′/2;Ω,Γ)SA(t+t′/2;Ω,Γ)exp(iωt′).
Equation (2.24) gives the same result as Eq. (2.23); that is, the measured signal is the frequency marginal of the pulse chronocyclic Wigner function, which is the spectrum of the source.

The important point is that all measurement techniques can be represented in terms of the overlap of the Wigner function of the test pulse (or pulse ensemble) and that of the apparatus. This provides an important insight into ways that the experimental data may be inverted to obtain the pulse field itself, as discussed in subsequent sections.

Measurement of the temporal intensity. The measurement of the temporal intensity of an optical pulse is in some sense the conjugate operation of the measurement of its optical spectrum. If the pulse under test is sent to a fast square-law detector (or equivalently, a fast shutter followed by a time-integrating detector), the measured output is

S(τ;τg)=∫dt|NA(t−τ;τg)|²I(t).
Because of the relatively slow response time of photodetectors, the measured signal is usually only a blurred representation of the actual temporal intensity of an ultrashort optical pulse. However, direct photodetection is commonly used with longer pulses, such as the pulses used in optical telecommunication systems.

2.2c. Autocorrelations and Cross-Correlations

Intensity autocorrelation. The simplest technique for gathering at least moderate quantitative information about the temporal structure of an ultrashort pulse is the intensity autocorrelation. In a conventional autocorrelator, two pulse replicas are mixed in a nonlinear material, and the average power of the generated beam (measured with an integrating detector) is recorded as a function of the relative delay between the two replicas. By assuming a functional form for the temporal shape of the test pulse, one can estimate its duration from the autocorrelation trace. Because of its simplicity, autocorrelation is by far the most common method of measuring ultrashort optical pulses. However, the autocorrelation trace by itself provides little more than an estimate of the pulse duration.

A variety of schemes based on intensity correlation measurements were demonstrated during the late 1960s and early 1970s [40, 41, 42, 43, 44]. One particular form, the second-order intensity autocorrelation function (AC) became one of the standard techniques in the field for nearly two decades and is still in use today. This technique uses the lowest-order nonlinear process available, and therefore operates at the lowest power possible for a nonlinear process. This is important for making measurements of pulse trains from mode-locked laser oscillators, whose energy is in the picojoule to nanojoule range. The most common approach to extracting information from this AC data, however, involves fitting an AC calculated from a specific pulse shape.

Consider a material (say, a crystal) with second-order nonlinearity and two optical waves around the optical frequencies ω1 and ω2. The nonlinear susceptibility χ(2) links the second-order contribution to the nonlinear polarization to the electric field of the two waves by

P(2)(t)=χ(2)ɛ1(t)ɛ2(t).
With sufficient intensity and proper phase matching over the entire bandwidth of the two optical waves, a new optical wave is generated around the optical frequency ω1+ω2, and its electric field is therefore given by
E3(t)=E1(t)E2(t),
where proportionality constants have been removed for clarity. This mechanism is used for measurement in the following manner [Fig. 8(a)]. The pulse to be characterized is incident on an interferometer that generates two replicas of the pulse with an adjustable delay between them. The two pulses, whose fields are related by E1(t)=E(t) and E2(t)=E(tτ), are then mixed in the nonlinear material, and the pulse energy of the upconverted beam measured by using a square-law, integrating detector. Separation of the upconverted signals from the independent mixing of each field itself is ensured by noncollinear operation, or by using a type II crystal with orthogonally polarized replicas. The data consists of a one-dimensional array of numbers representing the upconverted pulse energy as a function of the delay and is represented here by the function AC(τ). This is related to the input field by
AC(τ)=∫dt|E(t)E(t−τ)|²=∫dtI(t)I(t−τ).
Such an apparatus therefore yields the intensity autocorrelation of the input pulse. This gives an indication of the temporal extent of the intensity, but it cannot distinguish the details of the pulse shape. For example, the autocorrelation is fundamentally symmetric with respect to τ. In the case when the energy of the upconverted signals [E1]2 and [E2]2 is measured on the same broad-area, time-integrating detector as the main signal E1E2, the autocorrelation signal is
AC(τ)=4∫I(t)I(t−τ)dt+2∫I(t)²dt.
The background described by the second term on the right-hand side of Eq. (2.30) can be useful as a check for the data, since AC(0)/AC(τ→∞)=3. Any reduction from this value is a symptom of either misalignment or space–time coupling in the pulse (that is, the pulse shape depends on the position in the beam, so that ignoring the spatial dependence of the field is no longer valid). It may also be a symptom that the pulse ensemble is not coherent, since incoherent time-stationary backgrounds (such as amplified spontaneous emission from an amplifier chain) give a lower contrast ratio.

The AC described by Eq. (2.29) yields a direct measure of the root-mean-square (rms) pulse duration ΔtI through the relation

ΔtAC²=[∫τ²AC(τ)dτ]/[∫AC(τ)dτ]=2[∫t²I(t)dt]/[∫I(t)dt]=2ΔtI².
Although this relation is exact, it is usually preferred experimentally to estimate the pulse duration by using a decorrelation factor assuming a functional form for the intensity of the pulse. The particular shape is chosen either for simplicity (such as a Gaussian) or on theoretical grounds (such as the secant hyperbolic, which is a solution to the dynamical equations of a passively mode-locked laser). Figures 8(b), 8(c) show the intensity autocorrelations of a pulse with a Gaussian spectrum with either a flat spectral phase (a Fourier-transform-limited pulse) or a quadratic spectral phase. The Gaussian autocorrelations obtained in both cases demonstrate that the autocorrelation by itself is not sufficient to determine the structure of the electric field of the pulse. However, the pulse duration obtained from the AC combined with the bandwidth obtained from a measurement of the spectrum thus determines the proximity of the pulse to transform-limited duration. If the pulse is not transform limited, then these measurements are insufficient to characterize the way in which the pulse is distorted, and decorrelation is in general ambiguous [45, 46, 47]. Thus there are two difficulties with inferring the pulse shape from AC-related measurements: the intensity profile is not unique, and the chirp cannot be determined.
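
These statements are easy to verify numerically. The sketch below computes the intensity autocorrelation of a transform-limited and of a chirped Gaussian pulse (both autocorrelations are Gaussian and satisfy ΔtAC²=2ΔtI²), and checks the 3-to-1 contrast ratio of the background form given above. Pulse parameters and the chirp value are assumptions.

```python
import numpy as np

# Intensity autocorrelations of a transform-limited and a chirped Gaussian pulse (assumed values).
N, dt = 4096, 0.25
t = (np.arange(N) - N // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(N, d=dt)

def pulse(phi2):
    """Gaussian spectrum with quadratic spectral phase phi2, returned in the time domain."""
    E_w = np.exp(-(w * 10.0) ** 2 / 4 + 1j * phi2 * w ** 2 / 2)
    return np.fft.fftshift(np.fft.fft(E_w)) / (N * dt)

def rms(x, y):
    x0 = np.sum(x * y) / np.sum(y)
    return np.sqrt(np.sum((x - x0) ** 2 * y) / np.sum(y))

tau = t                                              # the delay axis shares the time grid
for label, phi2 in [("flat phase", 0.0), ("chirped, phi2 = 200 fs^2", 200.0)]:
    I = np.abs(pulse(phi2)) ** 2
    AC = np.correlate(I, I, mode="same") * dt        # AC(tau) = int dt I(t) I(t - tau)
    print(f"{label:26s} rms(AC)/rms(I) = {rms(tau, AC) / rms(t, I):.3f}   (expected sqrt(2) = 1.414)")

# Background version: contrast AC(0)/AC(tau -> inf) = 3 for a coherent pulse.
I = np.abs(pulse(0.0)) ** 2
AC_bg = 4 * np.correlate(I, I, mode="same") * dt + 2 * np.sum(I ** 2) * dt
print(f"contrast ratio = {AC_bg.max() / AC_bg.min():.2f}   (expected 3)")
```

Because the two autocorrelations differ only in width, recovering a duration from either one still requires the assumed decorrelation factor discussed above.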

Interferometric autocorrelation. The AC is often extended to its so-called fringe-resolved form [48] by using a collinear setup [Fig. 8(d)]. One advantage of the interferometric autocorrelation (IAC) is that it is sensitive to the phase of the electric field. Another advantage is that the quickly varying fringes lead to a natural calibration of the temporal axis, which is useful when characterizing few-cycle pulses. The upconverted signal is given by

IAC(τ)=∫dt|E(t)+E(t−τ)|⁴.
The interferometric autocorrelation contains the intensity autocorrelation as well as correlation terms of E(t) and E(tτ). Since the field of the input pulse oscillates at a frequency ω0, the interferometric autocorrelation contains oscillating terms at the frequencies ω0 and 2ω0. These terms are phase sensitive and can in theory be used to estimate the temporal phase present on an optical pulse [49, 50]. Figures 8(e), 8(f) display the interferometric autocorrelations corresponding to a Gaussian Fourier-transform-limited pulse and a Gaussian pulse with a quadratic spectral phase. The interferometric autocorrelation is sensitive to the temporal phase of the pulse, and the two autocorrelations have different structures, although the corresponding intensity autocorrelations are similar.
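
A corresponding numerical sketch for the interferometric autocorrelation is given below; it evaluates IAC(τ)=∫dt|E(t)+E(t−τ)|⁴ on a grid fine enough to resolve the carrier fringes and checks the characteristic 8-to-1 peak-to-background ratio. The carrier frequency, duration, and chirp are illustrative assumptions.

```python
import numpy as np

# Interferometric autocorrelation of a few-ten-femtosecond pulse (assumed parameters).
N, dt = 8192, 0.1                                  # fs
t = (np.arange(N) - N // 2) * dt
w0 = 2.35                                          # carrier (rad/fs), roughly 800 nm

def field(chirp):
    """10 fs Gaussian envelope with an assumed linear temporal chirp, carrier included."""
    return np.exp(-(t / 10.0) ** 2 + 1j * chirp * t ** 2) * np.exp(-1j * w0 * t)

def iac(E, delays):
    out = np.empty(delays.size)
    for k, tau in enumerate(delays):
        E_d = np.roll(E, int(round(tau / dt)))     # E(t - tau) on the sampled grid
        out[k] = np.sum(np.abs(E + E_d) ** 4) * dt
    return out

delays = np.arange(-60.0, 60.0, dt)                # fine enough to resolve the fringes
for label, chirp in [("flat phase", 0.0), ("chirped", 0.02)]:
    trace = iac(field(chirp), delays)
    print(f"{label:12s} peak/background = {trace.max() / trace[0]:.2f}   (expected 8)")
```

The ratio is 8 in both cases, but the fringe pattern itself differs between the flat-phase and chirped pulses, which is the phase sensitivity noted above.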

Nth-order intensity autocorrelation and cross-correlation. While autocorrelations provide a somewhat blurred version of the intensity of the pulse under test, a better picture can be obtained from a higher-order intensity correlation function, such as

Sn+1(τ)=∫dt In(t−τ)I(t).
If n is large enough, In is of much shorter duration than I, and thus Sn+1 is a good approximation to I. Since In is often generated by using an nth-order nonlinear process, this technique is not suitable for low-energy pulses. The Kerr effect can be used for such correlation and is particularly useful for UV pulses [51, 52], and phase-matched parametric gain has also been used [53]. A wave-mixing process is appropriate for determining the contrast ratio of pulses from high-energy amplifier systems, i.e., for measuring the intensity of the incoherent pedestal and prepulses before the main pulse [54, 55, 56]. Typically, a probe pulse around the frequency 2ω0 is generated from the test pulse around the frequency ω0 by using sum-frequency generation. The quadratic dependence of the probe pulse intensity on the input pulse intensity leads to a probe pulse with a better contrast than the pulse under test. The two pulses are mixed by using a tripling process, and the cross-correlation signal around the frequency 3ω0 is measured as a function of their relative delay. The third-order cross-correlation is a somewhat blurred representation of the intensity of the pulse under test, which can be measured with excellent dynamic range, since the scattering of the interacting pulses at ω0 and 2ω0 does not affect the measurement. The obtained traces need not be symmetric and can distinguish a prepulse (which can interact with a physical medium before the main pulse) from a postpulse (which is usually of no consequence). It is difficult to obtain the necessary high dynamic range with more sophisticated pulse characterization instruments, and third-order cross correlators are popular for high-dynamic-range measurements.
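
The sketch below illustrates why such a higher-order cross-correlation is useful for contrast measurements: for an assumed test intensity consisting of a main pulse and a weak replica, the trace S3(τ)=∫dt I²(t−τ)I(t) is asymmetric, showing the replica at its true relative intensity on one side of zero delay and only a much weaker artifact on the other. All intensities and delays are assumptions.

```python
import numpy as np

# Third-order cross-correlation of a pulse with a weak replica (illustrative values).
N, dt = 8192, 0.5
t = (np.arange(N) - N // 2) * dt

main = np.exp(-(t / 20.0) ** 2)
I = main + 1e-3 * np.exp(-((t + 300.0) / 20.0) ** 2)       # 1e-3 replica, 300 fs from the main peak

def s3(I, tau):
    shift = int(round(tau / dt))
    return np.sum(np.roll(I, shift) ** 2 * I) * dt          # I^2 delayed by tau, mixed with I

peak = s3(I, 0.0)
print(f"S3(-300 fs)/S3(0) = {s3(I, -300.0) / peak:.1e}")    # replica at its relative intensity (~1e-3)
print(f"S3(+300 fs)/S3(0) = {s3(I, +300.0) / peak:.1e}")    # only an epsilon^2 ghost (~1e-6)
```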

Another area where cross-correlations have been used extensively is pulse shaping. Pulse shapers can transform an input pulse into a temporally shaped waveform with a temporal support much larger than the input pulse duration. In many cases, a description of the intensity of the output field is sufficient, and this can be obtained by cross correlating the output waveform with a replica of the input pulse (Fig. 4).

Finally, recovery of the intensity of a test pulse from triple cross-correlations of the intensity has been attempted [57]. The temporal intensity can in theory be reconstructed unambiguously from the two-dimensional triple correlation function measured as a function of the two relative delays between replicas of the test intensity.

2.2d. Classes of Pulse Characterization Devices

Measurement devices may be categorized by the arrangements of filters through which the test pulse is passed before being detected. A second classification involves the type of algorithm that is used to extract the pulsed field from the experimental data. The data is a function only of the filter parameters, and there should be a sufficient number of these, and of the right kind, that complete information about the test pulse is encoded in the data. The requirements that this places on the apparatus will be laid out in this subsection.

The filters are characterized by a set of parameters {pi}. For example, the shutter transmits any portion of the pulse that falls within a time window of duration τg near the opening time τ. Likewise the spectrometer transmits any portion of the pulse that falls within a spectral window of width Γ near the passband center frequency Ω. The modulator adds a time-dependent phase onto the pulse, whose magnitude depends on the modulation index ψ(2) and on the time of arrival of the pulse compared with the peak of the modulation at time τ. The dispersive line adds a spectrally dependent phase onto the pulse, whose magnitude depends on the second-order dispersion ϕ(2) and the position of the pulse spectrum with respect to the reference frequency ωR.

Although these are not the most general linear response functions possible, they are sufficient for our purposes. Moreover, the categories they represent are complete, in that any linear filter may be synthesized from a sequence of such filters. A pulse measurement apparatus therefore consists of a sequence of filters in series or in parallel, or both, followed by an integrating detector, as shown in Fig. 9.

Within this framework, the signal measured by a detector following a sequence of such filters is a function of the filter parameters. It may be written as the overlap of the Wigner function of the pulse with a chronocyclic window function F(t,ω;{pi}) depending only on the properties of the arrangement of linear filters:

D({pi})=∫dt∫(dω/2π)W(t,ω)F(t,ω;{pi}).
The action of F is to smooth the pulse Wigner function to yield a positive signal measurable by a square-law detector. The trick is to design F such that W can be recovered from the experimental data D. If F is able, by suitable choices of the pi, to explore all of the phase space occupied by the pulse, then D contains sufficient information to reconstruct the pulse field. Indeed, this is both a necessary and sufficient condition for characterizing the pulse. The window function formed by a sequence of time-stationary filters can be shown to depend only on the frequency ω, and a window function formed using only time-nonstationary filters to depend on t alone. Neither generates a window function that can move throughout the phase space. Therefore all apparatuses must contain at least one time-stationary filter and one time-nonstationary filter. This is a necessary, but not sufficient condition. These elements may be combined in a number of different ways for pulse measurement. It is clear that the final filter (that is, the one immediately preceding the detector) must be an amplitude filter (or at least not be a phase-only filter), as phase-only filters will not change the detected signal. This restricts the number of configurations of filters that are allowed.

If arranged in series, there are four combinations. Two of these belong to the class of spectrographic measurements, and two to the class of tomographic measurements. If arranged in parallel, these elements give another four schemes. All of the latter are based on interferometry: two in the time domain, and two in the spectral domain. A final amplitude filter that is either a shutter (a time gate) or a spectrometer (a frequency gate) enables a slow detector to measure the interferogram. The full catalog of possible configurations is shown in Fig. 10: we describe each separately below.

It is instructive to revisit the autocorrelation in light of the chronocyclic representation. It consists of a delay [time-stationary filter S̃_LP(ω;τ) = e^{iωτ}] followed by a shutter [time-nonstationary filter N_A(t)], so that the detected signal is

D(\tau) = \int dt \int \frac{d\omega}{2\pi}\, W(t,\omega)\, F(t;\tau) = \int dt\, I(t)\, F(t;\tau).
The shutter response function, however, is the pulse field itself, so that N_A = E. It is clear that F does not provide the necessary phase-space coverage and that this particular arrangement of filters is inadequate to characterize the electric field of an optical pulse in a general manner.

2.2e. In-Series Filtering Measurements

Spectrography. Spectrography refers to schemes in which a simultaneous measurement of the spectral and temporal intensity of the pulse is made. In particular, methods of this type are based on the measurement of the spectra of different temporal sections of the pulse, or on the measurement of the temporal intensity of different spectral sections (in which case it is known as a “sonogram”). For the former, one needs a fast shutter opening at time τ (with a speed comparable with, though not necessarily as fast as, the test pulse itself) followed by a high-resolution spectrometer with passband at frequency Ω [Fig. 10(a)].

For this arrangement, the Wigner function of the measurement apparatus is

W_M(t,\omega;\{\Omega,\tau\}) = \int \frac{d\omega'}{2\pi}\, \bigl|\tilde{S}_A(\omega'-\Omega)\bigr|^2 \int dt'\, N_A\!\left(t+\frac{t'}{2}-\tau\right) N_A^*\!\left(t-\frac{t'}{2}-\tau\right) \exp\!\bigl[i(\omega-\omega')\,t'\bigr].
In the limit of narrowband filtering, i.e., |S̃_A(ω;Ω)|² → δ(ω−Ω), the apparatus function occupies the minimum volume of phase space allowed by Fourier’s theorem and therefore smooths the pulse Wigner function by the least possible amount. In this limit, the signal may be written as
D(\Omega,\tau) = \int \frac{d\omega}{2\pi} \int dt\, W(t,\omega)\, W_M(t-\tau,\omega-\Omega) = \left| \int dt\, E(t)\, N_A(t-\tau)\, \exp(i\Omega t) \right|^2.
In this case, the experimental trace is the Gabor spectrogram with a window N_A. Provided the gate function N_A is known with sufficient precision, the signal is directly invertible to the pulse field, although an iterative deconvolution algorithm is usually required.

The elements may also be used in the reverse order [Fig. 10(b)]. In this case, referred to as sonography, the test pulse first encounters a low-resolution spectrometer, then a very fast shutter. The Gabor sonogram is defined in an analogous manner to Eq. (2.37) by

D(\Omega,\tau) = \left| \int \frac{d\omega}{2\pi}\, \tilde{S}_A(\omega-\Omega)\, \tilde{E}(\omega)\, e^{-i\omega\tau} \right|^2,
where this time the spectral gate is of the form of Eq. (2.20). Again, a fast shutter may also be synthesized by a nonlinear optical process. In fact, it is clear from the form of the integral kernel of the Gabor spectrogram why this is so: the gate function is a time-shifted replica of the test pulse, and the product of the test pulse with itself can be realized by sum-frequency generation in a second-order nonlinear interaction.

The first method to provide complete information about an ultrashort optical pulse used a shutter based on upconversion of the spectrally filtered (and therefore temporally stretched) test pulse with the test pulse itself. The shutter speed is then equal to the duration of the test pulse, and something close to a sonogram of the test pulse can be measured [58].

In measuring either spectrograms or sonograms, it is important that the first filter encountered by the test pulse have low resolution in its appropriate domain (a nominally slow shutter for spectrography, and a low-resolution spectrometer for sonography), and that the second filter have high resolution (a high-resolution spectrometer for spectrography and a fast shutter for sonography). This makes the measured spectrogram or sonogram most similar to the true Gabor-type spectrogram or sonogram of the test pulse.

As discussed previously, nonlinear optics is not a necessity for pulse characterization. Its use in spectrographic techniques when characterizing sub-100 fs pulses is required because there is no other way to build a shutter with a similar response time. The test pulse itself is, ipso facto, the shortest-duration entity to which the experimenter has access, thereby setting a lower limit on the shutter speed. Because of this constraint, measurements of a sonogram of a femtosecond optical pulse always have lower resolution than the corresponding spectrogram.

When nonlinear optics is used, the measured spectrograms are nonlinear functionals of the test pulse Wigner function. A true spectrogram, such as the Gabor spectrogram, is a linear functional of the test pulse and the known shutter response Wigner function. Reconstruction of the pulse field from the Gabor spectrogram requires a deconvolution, but has in most cases a unique solution. Reconstruction of the pulse from nonlinear spectrograms requires an iterative nonlinear deconvolution where the convolution function depends on the unknown pulse. This problem might have multiple solutions. As a consequence, much of the effort devoted to these techniques has concentrated on devising robust iterative algorithms for extracting the field from the measured quantity, and this is discussed in Section 3.

An alternative approach to pulse reconstruction is to ensure that the apparatus operates with parameters that allow approximate direct inversion. This is possible with sonography [59], for example, and methods have been suggested for spectrography [60] as well as the spectrally and temporally resolved upconversion technique (STRUT) [61, 62].

Tomography. Tomographic pulse measurement is based on the notion of the time lens. This approach exploits the idea that the temporal intensity profile of the pulse can be transformed into the spectrum by suitable (linear) manipulations. The underlying principle can be illustrated by using the well-known dispersive properties of a grating pulse stretcher, as illustrated in Fig. 11(a). In this device, a pulse experiences a large group-velocity dispersion, since each wavelength traces a different path through the grating pair. Although the different wavelengths each occupy a different spatial position within the beam (an example of space–time coupling) after the second grating, a second pass through the apparatus undoes the space–time coupling while doubling the dispersion. Thus at the output of the stretcher the pulse duration is much longer than at the input, because each frequency in the pulse experiences a different transit time (or delay) through the grating pair. The Wigner functions of a pulse before and after quadratic spectral phase modulation ϕ^(2)ω²/2 are related by

W_{\mathrm{OUTPUT}}(t,\omega) = W_{\mathrm{INPUT}}\bigl(t-\phi^{(2)}\omega,\ \omega\bigr).
This corresponds to a shear of the chronocyclic Wigner function, as shown in Fig. 11(b), which encodes the spectrum of the input pulse onto the temporal intensity of the output pulse.

The inverse effect can also be made to happen. That is, the input pulse temporal shape can be made to appear in the output pulse spectrum. This requires a time-domain analog to a pair of gratings. Such a device turns out to be a temporal phase modulator [Fig. 11(c)], which shifts the frequency of different time slices of the pulse by different amounts, just as the grating stretcher shifts the time delay of different spectral slices of the pulse by different amounts. Clearly the response time of the modulator must be comparable with that of the pulse itself for this operation to provide a unique mapping, and for this reason it is only recently that such methods have begun to be practical in the picosecond and subpicosecond regimes. The Wigner functions of a pulse before and after quadratic temporal phase modulation ψ^(2)t²/2 are related by

W_{\mathrm{OUTPUT}}(t,\omega) = W_{\mathrm{INPUT}}\bigl(t,\ \omega+\psi^{(2)}t\bigr).
The effect of a quadratic temporal phase modulation is therefore to shear the Wigner function along the frequency axis [Fig. 11(d)].

The combination of the temporal modulator and dispersive stretcher allows one to perform an operation called “temporal imaging” by analogy to the operation performed by an optical imaging system in the spatial domain. Consider a standard optical imaging device, consisting of an object placed some distance before a lens, and an image plane (at which is placed a detector) some distance after the lens. The underlying physics of image formation is that light from the object undergoes diffraction in free space for a prescribed distance, then refraction by the lens, then further diffraction before being detected. For the appropriate adjustment of the distances and power of the lens, a magnified image of the object can be formed. The time–frequency analog is that the grating stretcher plays the role of diffraction and the temporal modulator the role of the lens. Using such a setup, a temporally magnified image of the input short pulse can be constructed, which is easy to measure by using detectors with response times much longer than the input pulse.
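The time-to-frequency conversion underlying this analogy can be checked numerically. The sketch below is a rough illustration under assumed parameters: the grid, the pulse shape, and the condition ϕ^(2)ψ^(2) = 1 are our choices, and the output frequency axis maps the input time axis up to a scaling (and a possible reversal, depending on the Fourier-transform sign conventions of the simulation).

```python
import numpy as np

N, dt = 4096, 0.1
t = (np.arange(N) - N // 2) * dt
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, dt))     # angular-frequency grid

# structured input pulse: two sub-pulses, so the temporal profile is easy to recognize
E_in = np.exp(-(t - 3.0)**2 / 2.0) + 0.6 * np.exp(-(t + 4.0)**2 / 1.0)

phi2 = 40.0              # group-delay dispersion of the stretcher (time^2 units)
psi2 = 1.0 / phi2        # time-lens chirp rate chosen so that phi2 * psi2 = 1

# 1) dispersion: quadratic spectral phase
Ew = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(E_in)))
E1 = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(Ew * np.exp(0.5j * phi2 * w**2))))

# 2) time lens: quadratic temporal phase
E2 = E1 * np.exp(0.5j * psi2 * t**2)

# 3) the output spectrum now maps the input temporal intensity,
#    nominally at t ~ phi2 * w (sign and scale depend on the conventions used)
S_out = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(E2))))**2
I_in = np.abs(E_in)**2   # compare S_out(w) against I_in evaluated at t = phi2 * w
```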

2.2f. In-Parallel Filtering Measurements

Interferometry refers to the situation where the phase of the test pulse is encoded into the intensity by means of mixing with a second pulse, which may be an ancillary reference pulse or the test pulse itself. These two categories are known as “test-plus-reference” and “self-referencing” interferometry, respectively. They are both direct techniques, in that it is possible to reconstruct the correlation function in either the time domain or the frequency domain directly (i.e., noniteratively) from the recorded intensity distributions. A general model of this category of measurement devices may be developed in terms of a sequence of in-parallel linear filters. In this model each pulse in the ensemble is split into two replicas at a beam splitter, and each replica is independently filtered before being recombined. The interference of the field from the parallel pathways introduces structure on the output intensity distribution, which then carries information about both the amplitude and the phase of the correlation function of the input field. If the ancillary port of the input beam splitter is empty, then the interferometer is said to be self-referencing. Alternatively, if the ancillary port is used to inject a characterized reference pulse, then it is possible to reconstruct the electric field of the test pulse in a rather straightforward manner [63, 64]. Of course, this approach requires one to first obtain a well-characterized reference pulse.

One significant advantage of direct techniques compared with phase-space techniques is that the entire space over which the phase-space or correlation functions are defined need not be explored if the pulse train is assumed to consist of identical pulses. Only a single section of one quadrature of the (complex) correlation function is necessary to obtain the electric field amplitude and phase, and these are precisely what is recorded by direct techniques.

Test-plus-reference interferometry. The most common form of test-plus-reference interferometry is Fourier-transform spectral interferometry (FTSI) [63, 64]. In this approach, the test and the reference pulse are delayed in time with respect to each other by τ before combining at the input beam splitter. The detected signal (interferogram) is then S(ω;τ) = |Ẽ(ω) + Ẽ_R(ω)e^{iωτ}|², where Ẽ_R and Ẽ are the spectral representations of the analytic signal of the reference and the test pulse. The spectral phase difference between test and reference pulses is encoded in the relative positions of the spectral fringes with respect to the nominal spacing of 2π/τ and can be extracted by using a three-step algorithm involving a Fourier transform to the time domain, a filtering operation, and an inverse Fourier transform. The phase of the reference pulse must then be subtracted, leaving the spectral phase of the test pulse as required. A measurement of the test pulse spectrum then provides sufficient information to characterize the pulse. In common with all interferometric methods, the data set has one parameter, frequency, and may therefore be collected by using a one-dimensional detector array. This leaves the second dimension of a camera, for example, available for coding information about other degrees of freedom of the test pulse, such as the spatial phase. This method is therefore easily extended to full space–time characterization of the test field, again provided that a suitable (i.e., fully space–time characterized) reference pulse is available.
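The three-step inversion can be written in a few lines of Python. The following is a minimal sketch on synthetic data; the grid, the delay, the window width, and the "unknown" phase are illustrative assumptions, and with the sign conventions chosen here the useful sideband appears near pseudo-time +τ.

```python
import numpy as np

# synthetic interferogram |E(w) + E_R(w) exp(i w tau)|^2 on a uniform frequency grid
N = 4096
w = np.linspace(-40.0, 40.0, N)
dw = w[1] - w[0]
amp = np.exp(-w**2 / 50.0)
phi_test = 0.002 * w**3 + 0.05 * w**2          # "unknown" spectral phase of the test pulse
E_test = amp * np.exp(1j * phi_test)
E_ref = amp                                    # reference assumed known, with flat phase
tau = 30.0                                     # delay between test and reference pulses
S = np.abs(E_test + E_ref * np.exp(1j * w * tau))**2

# step 1: Fourier transform to the pseudo-time domain
pseudo_t = 2 * np.pi * np.fft.fftfreq(N, d=dw)
s = np.fft.ifft(S)

# step 2: keep only the sideband near pseudo-time +tau
window = np.abs(pseudo_t - tau) < tau / 2.0

# step 3: back to the frequency domain; the phase of the filtered term is
# arg[E(w) E_R*(w)] - w*tau, so adding w*tau leaves the phase difference
phase = np.unwrap(np.angle(np.fft.fft(s * window))) + w * tau
# subtracting the known reference phase (zero here) gives the test-pulse phase,
# valid where the spectral amplitude is significant and up to a constant offset
```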

Self-referencing interferometry. It is possible to extract the phase of a field without a known reference pulse by gauging one spectral or temporal component of the field with another component. This is known as “self-referencing interferometry.” In this approach, the goal is to reconstruct the correlation function in either the time domain or the frequency domain directly (i.e., noniteratively) from one or several recorded intensity distributions. In fact, when the pulse train is coherent, it is necessary only to measure a section of the two-time or two-frequency correlation function in order to reconstruct the pulse field [34].

The in-parallel amplitude-only filters select either two frequency or two time slices of the pulse that beat together at the output of the interferometer. These are the time-domain analogs of Young’s double-slit interferometer [Figs. 10(e), 10(f)].

In the spectral domain [Fig. 10(e)], the center frequencies of the spectral filters are ωC1 and ωC2, and each has the same bandwidth Γ. The selected frequency components are recombined, giving rise to temporal fringes—or a time-dependent modulation of the intensity—at the output. These are resolved by using a time gate or a fast shutter. The signal recorded by the square-law detector is a function of the spectral filter center frequencies as well as the time of maximum transmission τ of the time gate,

D(\omega_{C1},\omega_{C2},\tau) = \int dt \left| N_A(t-\tau) \int \frac{d\omega}{2\pi}\, \bigl[\tilde{S}_A(\omega-\omega_{C1}) + \tilde{S}_A(\omega-\omega_{C2})\bigr]\, \tilde{E}(\omega)\, \exp(-i\omega t) \right|^2.
The detected signal takes on a particularly useful form when the passband of the spectral filters is much narrower than the spectrum of the input pulses and the time gate is short. In this case, the functions NA(t) and S̃A(ω) may be replaced by Dirac δ functions in the appropriate domains, and Eq. (2.41) simplifies to
D\!\left(\omega+\frac{\Delta\omega}{2},\,\omega-\frac{\Delta\omega}{2},\,\tau\right) = \tilde{I}\!\left(\omega+\frac{\Delta\omega}{2}\right) + \tilde{I}\!\left(\omega-\frac{\Delta\omega}{2}\right) + 2\left|\tilde{\tilde{C}}(\Delta\omega,\omega)\right| \cos\!\left\{\arg\!\left[\tilde{\tilde{C}}(\Delta\omega,\omega)\right] - \Delta\omega\,\tau\right\},
where ω = (ω_C1 + ω_C2)/2 and Δω = ω_C1 − ω_C2. This is an interferogram, for which the visibility of the fringes, occurring with nominal temporal period 2π/Δω, provides a measure of the magnitude of C͌(Δω,ω). The location of the fringes along the delay axis τ provides a relative measure of the phase of C͌(Δω,ω). Each temporal beat note in the fringe pattern supplies enough information to reconstruct the two-frequency correlation function at the single point (Δω,ω).

A complementary form of interferometer consists of in-parallel fast time gates (time-nonstationary amplitude-only filters) followed by a spectral filter (time-stationary amplitude-only filter), as pictured in Fig. 10(f). The two replicas of the pulse are independently sampled at variable times, τ_1 and τ_2, before being recombined. The spectral beats, resulting from the overlap of the two time slices, are resolved by a spectrometer. The resulting signal, for the case of a very fast time gate and a very high-resolution spectrometer, written in terms of the center-time (t = (τ_1 + τ_2)/2) and difference-time (Δt = τ_1 − τ_2) coordinates, is the temporal interferogram

D\!\left(t+\frac{\Delta t}{2},\,t-\frac{\Delta t}{2},\,\Omega\right) = I\!\left(t+\frac{\Delta t}{2}\right) + I\!\left(t-\frac{\Delta t}{2}\right) + 2\left|C(t,\Delta t)\right| \cos\bigl\{\arg\bigl[C(t,\Delta t)\bigr] + \Delta t\,\Omega\bigr\}.
The visibility of the spectral fringes, occurring at the spectral period 2π/Δt, is a measure of the magnitude of the two-time correlation function at the point (t,Δt), while the position of the fringes is linked to the phase of the correlation function.

A different class of interferometers makes use of a frequency shifter (a time-nonstationary linear phase filter) and a delay line (a time-stationary linear phase filter) arranged in parallel, followed by a spectrometer after these signals are recombined [Fig. 10(g)]. The detected signal is a function of the frequency shift ψ^(1) and the center frequency of the spectrometer passband, Ω, with the delay ϕ^(1) acting as a fixed parameter,

D(\psi^{(1)},\Omega;\phi^{(1)}) = \int \frac{d\omega}{2\pi} \left| \tilde{S}_A(\omega-\Omega) \left[ \int \frac{d\omega'}{2\pi}\, \tilde{N}_{LP}(\omega-\omega',\psi^{(1)})\, \tilde{E}(\omega') + \tilde{S}_{LP}(\omega,\phi^{(1)})\, \tilde{E}(\omega) \right] \right|^2.
With the usual simplifying assumptions regarding the spectrometer resolution, together with the frequency-shifting property of the idealized temporal phase modulator Ñ_LP(ω,ψ^(1)) = δ(ω + ψ^(1)), the signal simplifies to
D\!\left(\Delta\omega,\,\omega_C-\frac{\Delta\omega}{2};\,\phi^{(1)}\right) = \tilde{I}\!\left(\omega_C+\frac{\Delta\omega}{2}\right) + \tilde{I}\!\left(\omega_C-\frac{\Delta\omega}{2}\right) + 2\left|\tilde{\tilde{C}}(\Delta\omega,\omega_C)\right| \cos\!\left\{\arg\!\left[\tilde{\tilde{C}}(\Delta\omega,\omega_C)\right] - \phi^{(1)}\!\left(\omega_C-\frac{\Delta\omega}{2}\right)\right\},
where Δω = ψ^(1) is the spectral shear and ω_C = Ω + ψ^(1)/2 is the center frequency. For a given shear the recorded interferogram maps out an entire line of the real part of the two-frequency correlation function, in contrast to the previous Young’s double-slit-type configurations. This section may be extracted by using a simple and direct inversion algorithm that separates the interference term [the third term in Eq. (2.45)] from the noninterferometric terms. This is easily accomplished by means of Fourier transforms, in a manner described in Subsection 5.3, for the case when the delay τ between the pulses in each arm of the interferometer is sufficiently large. The key point here is that the spectral phase of the test pulse, arg[C͌(Δω,ω_C)], is encoded on the spacing of the fringes in the interference term.
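Once the phase of the interference term has been isolated (for example by the Fourier filtering just described), the spectral phase follows from the measured phase differences between sheared frequency components. The sketch below assumes that extraction has already been done and illustrates only the reconstruction step; the grid, the shear, and the test phase are arbitrary choices, and concatenation at multiples of the shear could be used instead of the simple integration shown.

```python
import numpy as np

# "unknown" spectral phase sampled on a uniform grid
N = 1001
w = np.linspace(-20.0, 20.0, N)
dw = w[1] - w[0]
phi = 0.01 * w**3 + 0.2 * w**2

shear = 0.4                                 # spectral shear (a whole number of grid bins)
m = int(round(shear / dw))                  # shear expressed in grid points
half = m // 2

# measured phase difference arg C(shear, w) = phi(w + shear/2) - phi(w - shear/2)
theta = np.roll(phi, -half) - np.roll(phi, half)
theta[:half] = 0.0                          # discard samples wrapped around by np.roll
theta[-half:] = 0.0

# reconstruct by integrating the finite difference: d(phi)/dw ~ theta / shear
phi_rec = np.cumsum(theta) * dw / shear     # equals phi up to a constant (and, in a real
                                            # measurement, a linear term set by calibration)
```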

An entirely analogous argument may be made for temporal shearing interferometers [Fig. 10(h)]. In this case, the delay line in one arm of the interferometer causes the pulses on recombining at the second beam splitter to exhibit temporal beats in their intensity that may be resolved by a fast time gate. This latter element is the amplitude-only filter that replaces the spectrometer required in the spectral shearing interferometer. In this arrangement, a temporal linear phase modulator may be used to provide a temporal carrier for the two-time correlation function in the interference term. This is accomplished by frequency shifting one of the pulses with respect to the other by a shear ψ^(1) and by introducing a relative delay Δt between the pulses. The signal detected as a function of τ, the delay of the time-nonstationary gate, which is assumed to be of infinitesimal duration, is then

D(\tau,\Delta t;\psi^{(1)}) = I\!\left(t_C+\frac{\Delta t}{2}\right) + I\!\left(t_C-\frac{\Delta t}{2}\right) + 2\left|C(t_C,\Delta t)\right| \cos\!\left\{\arg\bigl[C(t_C,\Delta t)\bigr] - \psi^{(1)}\!\left(t_C-\frac{\Delta t}{2}\right)\right\},
where t_C = τ + Δt/2 is the center time. An algorithm similar to that described for the spectral shearing interferogram may be used to extract the temporal phase of the test pulse in this case. In practice, however, it is very difficult to provide a short enough time gate to enable this method to work. Nonlinear optical interactions that cross correlate the interferogram with the test pulse will not provide enough temporal resolution to resolve the fringes. Therefore this method is restricted to pulses whose duration is long enough that an externally controlled time gate, such as a telecommunication pulse carver, may be used. This is typically in the regime of several tens of picoseconds or longer.

2.2g. Joint Measurements

There are several modifications to the methods that have allowed some headway. The spectrum of the pulse helps in determining whether the pulse is close to the Fourier-transform limit and is an obvious second piece of data that is relatively easy to measure. Several iterative schemes have been developed to extract the pulse shape from a correlation and the spectrum [50, 65, 66, 67]. They provide varying degrees of success in extracting the pulse fields, but all share the same characteristic that they are very sensitive to noise in the data [46].

Attempts at retrieving the electric field of the pulse from a set of intensity autocorrelations measured after various amounts of second-order dispersion have been made [68, 69]. The use of the intensity autocorrelation in the temporally resolved optical gating (TROG) technique, instead of a direct intensity measurement, significantly increases the complexity of the retrieval compared with tomographic techniques.

Deterministic changes of the spectral phase of the pulse with a pulse shaper have also attracted some attention. For a given spectrum, the autocorrelation signal at τ=0 is maximized by a flat spectral phase. Since this signal can be measured directly by doubling the pulse and measuring the energy of the converted pulse with a photodetector, an iterative algorithm can be used to modify the spectral phase and maximize the measured signal. For a given pulse, the spectral phase introduced by the pulse shaper when the maximum is reached corresponds to the opposite of the spectral phase of the input pulse, therefore leading to a measurement of the phase by adaptive pulse shaping [70]. In a multiphoton intrapulse interference phase scan (MIIPS), the spectrum of the upconverted signal is used as a feedback mechanism when a spectral phase is scanned across the spectral support of the pulse with a pulse shaper [71]. Iterations are required for accurately determining the spectral phase of the input pulse: the pulse shaper is also used to introduce a static spectral phase that attempts to compensate the spectral phase of the input pulse, and a specific multiphoton intrapulse interference phase scan trace is obtained when the shaper output pulse is Fourier-transform limited.
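A toy version of this feedback loop can be simulated directly. The sketch below is our own construction (not the published MIIPS procedure): it parametrizes the shaper phase by quadratic and cubic coefficients and maximizes the simulated SHG energy, so that at the optimum the shaper phase approximately cancels the "unknown" input phase.

```python
import numpy as np
from scipy.optimize import minimize

# input pulse with an "unknown" quadratic + cubic spectral phase
N = 1024
w = np.linspace(-15.0, 15.0, N)
amp = np.exp(-w**2 / 8.0)
E_w = amp * np.exp(1j * (0.8 * w**2 + 0.05 * w**3))

def shg_yield(coeffs):
    """Energy of the doubled pulse, proportional to the integral of |E(t)^2|^2,
    after the shaper applies the trial phase c2*w^2 + c3*w^3."""
    c2, c3 = coeffs
    shaped = E_w * np.exp(1j * (c2 * w**2 + c3 * w**3))
    E_t = np.fft.ifft(np.fft.ifftshift(shaped))
    return np.sum(np.abs(E_t**2)**2)

# maximize the measured SHG signal over the shaper coefficients
res = minimize(lambda c: -shg_yield(c), x0=[0.0, 0.0], method="Powell")
print(res.x)   # expect roughly [-0.8, -0.05]: the opposite of the input phase coefficients
```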

2.3. Conclusions

There are three general classes of measurement techniques for characterizing ultrashort optical pulses—spectrography, tomography, and interferometry—which lead to eight devices consisting of the smallest possible number of optical elements (two spectrographic, two tomographic, and four interferometric). All of these devices contain at least one time-stationary and one time-nonstationary filter, which may be linear in the input field.

The two spectrographic methods measure a smoothed version of the chronocyclic Wigner function by using sequential amplitude filters to make a simultaneous measurement of time and frequency. Tomographic methods measure projections of the chronocyclic Wigner function onto the frequency variable, following the application of a quadratic phase modulator to the input pulse. This serves to rotate the phase-space distribution of the pulse, so that a measurement of its modified spectrum reveals information about its initial orientation, and hence chirp. The data, consisting of a set of projections of the Wigner function for a range of phase-space rotations, can be deterministically inverted to retrieve the Wigner function itself.

Self-referencing interferometric techniques measure a point or a section of the two-frequency or two-time correlation function. A single section of either function is adequate for reconstructing the underlying electric field. Interferometers work by splitting each pulse in the ensemble into two replicas at a beam splitter, independently filtering the replicas, and then recombining them at a second beam splitter. The interference of the parallel pathways introduces structure on the output intensity distribution, which then carries information about both the amplitude and the phase of the correlation function. There are two interferometric devices that are analogs of Young’s double slits: either two spectral or two temporal slices are taken of the input pulse and fringes recorded in the temporal or spectral domains, respectively. Since these devices require time gates that are short compared with the input pulse duration, they are difficult to implement for femtosecond pulses. A more useful approach is via shearing interferometry. In this case, one of the pulses is shifted in frequency (or in time) with respect to the other, and an interference pattern recorded in the spectral (or temporal) domain. The simple and direct inversion algorithm gives a provably unique solution to the problem of pulsed field reconstruction.

3. Spectrography

3.1. Introduction

Some of the earliest attempts at measuring the chirp of optical pulses were based on spectrographic concepts. Such ideas also underpinned the first attempts at precisely characterizing the electric field of pulses. The concept of “chirp” was developed for microwave pulses and refers to the existence of a time-dependent instantaneous frequency—or, equivalently, a frequency-dependent group delay—in which all of the frequencies of the pulse do not arrive at the observer simultaneously. In 1971, Treacy quantified this quantity for pulses from a mode-locked Nd:glass laser by measuring the time of arrival of spectral slices of the pulse [72]. This recording allowed the first evaluation of the optical chirp, which had been known for microwaves for some time. A description of early developments can be found in [26]. Different implementations of the same concept were then developed and are usually referred to as time-resolved spectroscopy [73, 74, 75]. Another precursor to the spectrographic techniques used nowadays is the measurement of optical spectra of the upconverted signal in an intensity autocorrelator [76]. Chilla and Martinez’s implementation of sonograms using nonlinear wave mixing in a crystal has inspired most setups for sonographic measurements of femtosecond pulses [58]. Spectrograms and sonograms are now widely used in ultrafast optics, and the development of phase retrieval algorithms enables full recovery of the amplitude and phase of the electric field without prior assumptions as to its functional form. The best-known example of this class of measurements is frequency-resolved optical gating (FROG) [28]. In this section, we describe the principles of spectrography, the apparatuses required for measuring spectrograms and sonograms, and the approaches available for extracting the pulse field from the experimental data. We also give some experimental implementations of these concepts adapted to ultrafast optics.

3.2. General Implementation of Spectrography

3.2a. Definitions

Spectrographic techniques aim at measuring simultaneously the arrival time and frequency of an optical wave, that is, a joint representation of the Fourier conjugate variables time and frequency. In the most general case, the measurement yields a time–frequency distribution that is uniquely related to the input pulse field. The usual approach to this measurement is to perform a sequential gating in the time and frequency domains by using a time-nonstationary and a time-stationary filter. The time-nonstationary filter can be delayed in time by a quantity τ with respect to the test pulse, and the transfer function of the time-stationary filter can be tuned along the optical frequency axis by an amount Ω. The measured quantity is therefore a function of these two variables, which have to be varied to cover completely the entire chronocyclic phase space occupied by the pulse. This permits a faithful estimation of the pulse time–frequency distribution. If the two filters are in sequence, and the second has very high resolution (i.e., is either a fast shutter or a narrowband spectral filter), this approach can be considered to make a simultaneous measurement of the time and frequency of the pulse.

Figure 12 shows two typical arrangements for spectrographic measurements consisting of two sequential filters. In Fig. 12(a) the first filter is a time-nonstationary device modulating the electric field with a gating function g, and the second filter is a time-stationary filter described by R̃. The gating function can be delayed in time by a delay τ relative to the pulse under test, and the stationary filter can be scanned in frequency, Ω describing a parameter relevant to this filter, for example the center of its passband. The signal measured by a time-integrating detector is

S(\tau,\Omega) = \int \frac{d\omega}{2\pi}\, \bigl|\tilde{R}(\omega-\Omega)\bigr|^2 \left| \int dt\, E(t)\, g(t-\tau)\, \exp(i\omega t) \right|^2,
where the two integrals extend from −∞ to +∞ in the time and frequency domains. The second filter is chosen to have a high resolution, which in this case implies a spectrometer capable of resolving all the features of the optical spectra after they pass through the first filter. This choice leads to minimal blurring of the spectrogram, and therefore more reliable inversion. From a formal point of view, the transfer function of the stationary filter may be replaced by a Dirac function, so that the measured experimental trace becomes
S(\tau,\Omega) = \left| \int dt\, E(t)\, g(t-\tau)\, \exp(i\Omega t) \right|^2.
This quantity is by definition the spectrogram of the electric field E measured with the window or gate g [37].
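A discrete version of Eq. (3.2) can be computed directly with the fast Fourier transform, as in the following sketch. The circular treatment of the delay and numpy's FFT sign convention (which merely flips the frequency axis inside the modulus) are our implementation choices.

```python
import numpy as np

def spectrogram(E, g):
    """Discrete S(tau, Omega) = |sum_t E(t) g(t - tau) exp(i Omega t)|^2
    for every delay on the sampling grid; the gate is shifted circularly."""
    N = E.size
    S = np.empty((N, N))
    for j in range(N):                        # j indexes the delay tau_j
        S[j, :] = np.abs(np.fft.fft(E * np.roll(g, j)))**2
    return S                                  # S[delay index, frequency index]

# chirped Gaussian test pulse gated by a shorter Gaussian window
t = np.arange(512) - 256
E = np.exp(-t**2 / (2 * 30.0**2) + 1e-3j * t**2)
g = np.exp(-t**2 / (2 * 10.0**2))
S = spectrogram(E, g)
# the sonogram of Eq. (3.4) is obtained in the same way from the spectral
# representations of the field and of the stationary filter
```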

The order of the stationary and nonstationary filters can be inverted, so that the measurement is that of the temporal intensity of the pulse after spectral filtering [Fig. 12(b)]. The signal measured by a time-integrating detector is in this case

S(\tau,\Omega) = \int dt\, \bigl|g(t-\tau)\bigr|^2 \left| \int \frac{d\omega}{2\pi}\, \tilde{E}(\omega)\, \tilde{R}(\omega-\Omega)\, \exp(-i\omega t) \right|^2,
where g is the impulse response of the time gate and R̃ is the transfer function of the spectral filter. If the time-gating nonstationary filter has sufficient resolution to reveal all the temporal features of the spectrally filtered pulse, its response function can be formally replaced by a Dirac function and the experimental trace is
S(\tau,\Omega) = \left| \int \frac{d\omega}{2\pi}\, \tilde{E}(\omega)\, \tilde{R}(\omega-\Omega)\, \exp(-i\omega\tau) \right|^2.
This quantity is by definition the sonogram of the electric field Ẽ measured with the spectral filter R̃. In practice, the measured sonograms are often given by Eq. (3.3) (where the nonstationary filter can be a function of the test pulse) instead of Eq. (3.4). For example, nonstationary filtering of femtosecond pulses is often provided by cross-correlation with another pulse, usually the unknown pulse under test itself [77]. Spectrograms and sonograms should be understood as making simultaneous measurements of the time and frequency degrees of freedom of the test pulse. Note that the spectrogram and sonogram given by Eqs. (3.2, 3.4) are mathematically equivalent, and the spectrogram calculated from the temporal representations E(t) and g(t) is the sonogram calculated from the spectral representations Ẽ(ω) and g̃(ω). Mathematical properties of these time–frequency distributions can be found, for example, in [37].

3.2b. Wigner Representation

The spectrogram of Eq. (3.2) and the sonogram of Eq. (3.4) can be written as a double convolution of the Wigner function of the test pulse WE with the Wigner function of the apparatus Wg:

S(\tau,\Omega) = \int dt \int \frac{d\omega}{2\pi}\, W_E(t,\omega)\, W_g(t-\tau,\Omega-\omega).
The spectrogram is the result of the measurement of the Wigner function of the pulse in the chronocyclic space (ω,τ) with a measurement device having an instrument function equal to the Wigner function of the time or frequency gate. (Note that although a Wigner function may have negative values, the convolution of two Wigner functions is always nonnegative, so that the signal is always a physically realizable quantity.) Varying the delay τ and frequency Ω is equivalent to moving the instrument function around the chronocyclic space. It is clear that this motion must encompass the portion of the chronocyclic space where the Wigner function of the pulse under test has nonzero values. It is usually desirable to have an instrument function of area as small as possible in the chronocyclic space to provide minimal blurring of the measured Wigner function. However, the size of the support of any Wigner function has a lower bound; i.e., the area of the chronocyclic space where it is nonzero is larger than π. This lower bound arises from Fourier’s principle; if it were not so, there could exist an apparatus Wigner function that was highly localized in both time and frequency and that would, therefore, be able to measure with high precision the time and frequency variables. This contradicts Fourier’s theorem concerning conjugate variables. In fact, a rapid time-nonstationary filter realizes good temporal resolution but provides little spectral information about the test pulse. Its Wigner function has a correspondingly small extension in the temporal variable, but large spread in the spectral variable. In contrast, a narrowband time-stationary filter as used in the sonogram provides good spectral resolution but little temporal resolution, and its Wigner function has small extension in the spectral variable but large spread in the temporal variable. The spectrogram and sonogram are therefore always blurred versions of the Wigner function of the pulse under test, in the way described by Eq. (3.5).

3.2c. Chirp Representation

The first-order moments of the Wigner function can be linked to the group delay and instantaneous frequencies defined from the first derivatives of the spectral phase and the temporal phase of the electric field. The first-order moments of the spectrogram are by definition

\Omega_S(\tau) = \frac{\int d\Omega\, \Omega\, S(\tau,\Omega)}{\int d\Omega\, S(\tau,\Omega)},
T_S(\Omega) = \frac{\int d\tau\, \tau\, S(\tau,\Omega)}{\int d\tau\, S(\tau,\Omega)}.
One can show that
\Omega_S(\tau) = \frac{\int dt\, I_E(t)\, I_g(t-\tau)\, \bigl[\Omega_E(t) + \Omega_g(t-\tau)\bigr]}{\int dt\, I_E(t)\, I_g(t-\tau)},
T_S(\Omega) = \frac{\int d\omega\, \tilde{I}_E(\omega)\, \tilde{I}_g(\Omega-\omega)\, \bigl[T_E(\omega) - T_g(\Omega-\omega)\bigr]}{\int d\omega\, \tilde{I}_E(\omega)\, \tilde{I}_g(\Omega-\omega)},
where the subscripts E and g refer to the test pulse and the time-nonstationary filter, so that, for example, IE(t) is the temporal intensity of the test pulse, and Tg(ω) the frequency-dependent group delay of the response function of the nonstationary filter. Because of the symmetric role played by E and g in the definition of the spectrogram, these moments depend identically upon the properties of the pulse and the gate, and the ability of a spectrogram or sonogram to represent chirp in the test pulse is linked to the properties of the gate. The first-order frequency moment of a spectrogram measured with a rapid time gate (with real response function) is the instantaneous frequency of the pulse, given by
\Omega_S(\tau) = \Omega_E(\tau) = \frac{\partial\psi}{\partial t}(\tau).
A spectrogram implemented with a gate that is narrowband and real in the spectral domain leads to the equivalence of the spectrogram group delay and the test pulse group delay:
T_S(\Omega) = T_E(\Omega) = \frac{\partial\phi}{\partial\omega}(\Omega).
Figures 13(a), 13(b) display the spectrogram of a Gaussian pulse with second- and third-order dispersion calculated with a real gate. Note that the ridge of the spectrogram follows a curve corresponding to the group delay in the pulse, which is a straight line for second-order dispersion and a parabola for third-order dispersion. As expected, the negative values of the Wigner function in the latter case have been washed out in the convolution process. The ability of the spectrogram and sonogram to represent chirp in an intuitive manner finds application in signal representation and processing. They are time-tested concepts and are still in use today.

3.3. Inversion Procedures for Spectrographic Techniques

The basic problem behind the inversion of the spectrogram is the determination of a relevant quantity describing the train of pulses under test (e.g., chirp, electric field, or Wigner function) from the measured time–frequency distribution. In some implementations of spectrographic techniques, the gate is unknown; for example it can be a function of the pulse under test itself in FROG, where the time-nonstationary filter is synthesized by a nonlinear interaction with a replica of the unknown pulse under test. The inversion approaches are classified here as chirp retrieval, Wigner deconvolution, and phase retrieval.

3.3a. Chirp Retrieval

A quantitative assessment of the chirp of the test pulse can be obtained from a spectrogram or sonogram by calculating its first-order moments by using Eqs. (3.10, 3.11), or simply by locating the delay at which the spectrogram has a maximum for each frequency or, equivalently, the frequency at which the spectrogram has a maximum for each delay (assuming that the pulse structure is simple enough that the maxima are unique). These properties were understood very early on and are at the basis of the works of Treacy [72] and Chilla and Martinez [58], who determined the group delay as a function of frequency for an optical pulse by spectrally filtering the pulse and determining the time of arrival of the wave packets centered at the corresponding frequencies. Precise estimation of the chirp is made difficult by the fact that the second-order moments of the time–frequency distribution along one axis increase significantly when tight filtering is performed along the conjugate variable [37]. For example, a spectrogram measured by using a short nonstationary filter leads to a large spread of the spectrogram along the frequency axis, which means that the practical determination of the instantaneous frequency requires a very high signal-to-noise ratio. Another limitation of this approach, in common with many methods, is that the chirp and the group delay are measures of the derivative of the phase with respect to time or frequency, respectively. To extract the full phase of the test pulse field, it is necessary to integrate the measured quantities, which can be done when the support of the field is continuous but is difficult otherwise. Such an approach would not perform well, for instance, in the characterization of pulses with disjoint spectral or temporal support or pulses with phase jumps (an example of the latter is a train of pulses used in telecommunication, such as carrier-suppressed return-to-zero pulses where adjacent pulses differ by a π phase shift [78]).
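In code, the chirp estimate amounts to computing weighted averages (or locating maxima) along one axis of the measured trace. The sketch below assumes a trace sampled as S[delay index, frequency index], as in the spectrogram example above; the function names are ours.

```python
import numpy as np

def first_moments(S, tau, Omega):
    """First-order moments of a trace S[tau index, Omega index]:
    mean frequency at each delay and mean delay at each frequency."""
    Omega_S = (S @ Omega) / S.sum(axis=1)     # estimate of the instantaneous frequency
    T_S = (S.T @ tau) / S.sum(axis=0)         # estimate of the group delay
    return Omega_S, T_S

def ridge_frequency(S, Omega):
    """Alternative estimate: frequency of the maximum of the trace at each delay."""
    return Omega[np.argmax(S, axis=1)]
```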

3.3b. Wigner Deconvolution

There is in principle a more direct way to extract the field from the spectrogram. When the gate response function is known, the corresponding apparatus Wigner function Wg is known, and the Wigner function of the pulse under test can in principle be obtained by inverting the convolution of Eq. (3.5) [79]. The steps necessary to perform such an operation are the calculation of the double Fourier transform of the measured spectrogram or sonogram, the division of this quantity by the double Fourier transform of Wg, and the calculation of the inverse double Fourier transform of the obtained quantity, which leads to the Wigner function of the test pulse, followed by the calculation of the electric field of the test pulse from its Wigner function. However, direct deconvolution is highly sensitive to the precision with which the gate response function is known and to the signal-to-noise ratio and is prone to error at points in the phase space where the Fourier transform of the Wigner function of the gate takes zero values. Further, this approach does not take into account additional information such as the degree of coherence of the test pulse ensemble, nor can it easily include any assumptions about this. Therefore this approach is not widely used in practice.
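The direct inversion described above can be sketched as a regularized two-dimensional deconvolution. The small-constant (Wiener-type) regularization is our addition, introduced precisely because the plain division is unstable at the zeros of the transformed gate Wigner function, and the sampling is assumed to be such that Eq. (3.5) corresponds to a discrete (circular) convolution on the measurement grid.

```python
import numpy as np

def wigner_deconvolve(S, Wg, eps=1e-3):
    """Estimate the pulse Wigner function from a measured trace S and the known
    gate Wigner function Wg by regularized 2-D deconvolution."""
    F_S = np.fft.fft2(S)                      # double Fourier transform of the trace
    F_g = np.fft.fft2(Wg)                     # double Fourier transform of the gate Wigner
    # regularized division instead of F_S / F_g, to tame points where F_g ~ 0
    F_E = F_S * np.conj(F_g) / (np.abs(F_g)**2 + eps * np.max(np.abs(F_g))**2)
    return np.real(np.fft.ifft2(F_E))         # estimate of the test-pulse Wigner function
```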

3.3c. Phase Retrieval

The two-dimensional deconvolution method described in the previous subsection does not make use of the fact that the underlying quantity of interest is the one-dimensional electric field. In the common case where the test pulse ensemble is coherent, so that the test pulse field is well defined, the two-dimensional spectrogram or sonogram is a function of two one-dimensional functions describing the gate response and the test pulse field. This redundancy in the data may be put to good use. It is possible to make use of this fact to effect iterative deconvolution algorithms that work at lower signal-to-noise ratios than direct deconvolution.

The retrieval of E and g is equivalent to the retrieval of the phase of the short-term Fourier transform ∫dt E(t)g(t−τ)exp(iΩt). The spectrogram is by definition the modulus square of the latter quantity, and once the short-term Fourier transform is known, both E and g can be obtained by Fourier transformation. Spectrogram inversion therefore falls into the category of phase retrieval problems. These problems have been studied extensively in optics, owing to the fact that common square-law detectors, such as charge-coupled device arrays (CCDs) used in imaging, provide only intensity information. Various phase retrieval algorithms have been used for such inversion in the context of imaging, and phase retrieval for ultrafast optical metrology can be traced back to the spectrogram inversion by Kane and Trebino [80] and the later sonogram inversion by Wong and Walmsley [39]. The general approach to iterative inversion is to locate the intersections of two sets of two-dimensional functions corresponding to two constraints. The first constraint is that the modulus square of the short-term Fourier transform must match the experimentally measured spectrogram. The second constraint is that the experimental signal should be consistent with the functional form of the spectrogram of a pulse gated by a gate; i.e., it can be written as Eq. (3.2) or (3.4). There can also be additional constraints, such as the functional dependence between the pulse and gate, or the spectral characteristics of the field or gate. Since the two sets of constraints are not convex, convergence is not guaranteed, but iterating by projecting the solution at each step onto each set has proved a robust way of inverting the spectrogram. Projection on the set of functions satisfying the modulus constraint is easily performed by replacement of the modulus with the square root of the measured spectrogram. Projection on the set of functions satisfying the spectrogram mathematical form was initially performed by using an error minimization algorithm [81]. A more efficient algorithm to achieve this task is the principal component generalized projection algorithm (PCGPA) [82, 83] (Fig. 14).

The principal component generalized projection algorithm works in the following way: assuming one has a pair of functions (E_n, g_n) at iteration n, the outer product E_n(t)g_n(t′) is first calculated as a matrix (with row and column indices indicated by discretized values of t and t′), and the short-term Fourier transform ∫dt E_n(t)g_n(t−τ)exp(iΩt) is then calculated by a sequence of row rotation and Fourier transformation. The modulus constraint is then applied by replacing the modulus of the calculated short-term Fourier transform with the square root of the measured spectrogram, retaining its phase. This new complex matrix undergoes the inverse of the operations applied to its predecessor. It is rendered into outer-product form, as required by the signal constraint, by means of a singular value decomposition (SVD) of the two-dimensional matrix. This decomposes the matrix into a sum of outer-product (i.e., rank one) matrices. The outer product corresponding to the largest singular value is kept, and the corresponding singular vectors are used as the set of solutions (E_{n+1}, g_{n+1}) for the next iteration. In practice, this singular decomposition is slow, and an approximate decomposition is obtained by using matrix multiplications (power method). This algorithm does not use an explicit relation between the pulse and the gate, and is therefore referred to as “blind.” The algorithm yields in principle both the characteristics of the pulse and gate, and this is useful for implementations of spectrographic techniques where the pulse and the gate are not related, for example in cross-correlation FROG and linear spectrograms and sonograms. Some theoretical cases of ambiguity in this approach have been reported [84], though these can usually be removed using additional knowledge such as the optical spectrum of the test pulse or the relation between the pulse and the gate, as is often the case in FROG. The knowledge of the specific link between gate and field has been successfully inserted into the algorithm for polarization-gate FROG and second-harmonic generation (SHG) FROG [83, 85].
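The following Python sketch implements the loop just described in a compact (and deliberately unoptimized) form. The forward model, the delay convention, and the single power-method step are our own choices made for internal consistency, so a trace fed to this routine must be arranged with the same conventions (here it is generated with the same forward function); for SHG-FROG one would additionally enforce the gate to equal the pulse at each iteration.

```python
import numpy as np

def forward(E, g):
    """Outer product -> row rotation -> FFT over the time index.
    Returns the complex short-time transform whose squared modulus is the trace."""
    N = E.size
    O = np.outer(E, g)                                    # O[i, j] = E(t_i) g(t_j)
    M = np.array([np.roll(O[i], -i) for i in range(N)])   # circular row rotation
    return np.fft.fft(M, axis=0)

def pcgp(trace, E0, g0, n_iter=300):
    """Blind principal-component generalized projections (sketch).
    Convergence is not guaranteed; in practice the trace error is monitored."""
    E, g = E0.astype(complex), g0.astype(complex)
    amp = np.sqrt(trace)
    N = E.size
    for _ in range(n_iter):
        T = forward(E, g)
        T = amp * np.exp(1j * np.angle(T))                # data (modulus) constraint
        M = np.fft.ifft(T, axis=0)
        O = np.array([np.roll(M[i], i) for i in range(N)])   # undo the row rotation
        # one power-method step toward the principal left/right singular vectors
        E = O @ (O.conj().T @ E)
        g = O.T @ (O.conj() @ g)
        E /= np.linalg.norm(E)
        g /= np.linalg.norm(g)
    return E, g

# SHG-like demonstration: gate equal to the pulse, trace generated with the same model
t = np.arange(128) - 64
E_true = np.exp(-t**2 / (2 * 12.0**2) + 2e-3j * t**2)
trace = np.abs(forward(E_true, E_true))**2
rng = np.random.default_rng(0)
E_ret, g_ret = pcgp(trace, rng.standard_normal(128) + 0j, rng.standard_normal(128) + 0j)
```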

3.3d. Ambiguities, Accuracy, Precision, and Consistency

Ambiguities. An ambiguity in phase retrieval arises when more than one phase function can be assigned to the reconstructed field while satisfying all constraints of the inversion problem. Consider, for example, the spectrogram

S(\tau,\Omega) = \left| \int dt\, E(t)\, g(t-\tau)\, \exp(i\Omega t) \right|^2 = \left| \int dt\, g^*(-t)\, E^*\bigl[-(t-\tau)\bigr]\, \exp(i\Omega t) \right|^2.
It is clear that the pairs [E(t), g(t)] and [g*(−t), E*(−t)] are always solutions of the same phase retrieval problem, regardless of the inversion algorithm. This is called the time-reversal ambiguity, because both a specific pulse and gate and their time-reversed (and conjugated) versions produce the same spectrogram. Ambiguities may arise particularly in blind deconvolution when no prior or side information about the pulse or the gate is available. Often it is possible to obtain such information experimentally by measuring the optical spectrum of the pulse, for example. Further, in cases where the gate is a prescribed function, say using a temporal modulator with an external drive signal unrelated to the test pulse, it is possible to make use of the independently measured gate function for a range of different test pulses.
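This ambiguity is easy to verify numerically: with the discrete, circular spectrogram used in the earlier sketches, the trace of the pair [g*(−t), E*(−t)] coincides point by point with that of [E(t), g(t)].

```python
import numpy as np

def trace(E, g):
    """Circular discrete spectrogram |FFT_t[E(t) g(t - tau)]|^2 for every delay."""
    return np.array([np.abs(np.fft.fft(E * np.roll(g, j)))**2 for j in range(E.size)])

t = np.arange(256) - 128
E = np.exp(-(t - 10.0)**2 / 200.0 + 2e-3j * t**2)     # asymmetric chirped pulse
g = np.exp(-t**2 / 50.0 + 1e-3j * t**2)               # complex gate

S1 = trace(E, g)
S2 = trace(np.conj(g[::-1]), np.conj(E[::-1]))        # the pair [g*(-t), E*(-t)]
print(np.allclose(S1, S2))                            # True: identical experimental traces
```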

Equation (3.12) demonstrates the important direction-of-time ambiguity of SHG-FROG, as explained below in Subsection 3.5b. In this case, the gate is derived from the test pulse so that g=E, and the inversion will yield either E(t) or E*(−t). This implies that a single SHG-FROG measurement cannot determine the direction of time unless one has additional information about the test pulse structure. Various studies on ambiguities for specific implementations of FROG can be found in [28, 86, 87, 88, 89].

Accuracy. The accuracy of a diagnostic quantifies the similarity between the measured quantity and the physical quantity. This can be specified only if a well-known test pulse is available or by means of numerical simulations. There have been no extensive studies of these for modern ultrafast spectrographic methods, especially in the presence of noise or nonoptimal experimental conditions. Some simulations relevant to this issue can be found in [90].

Consistency. The consistency of the inversion of a spectrogram or sonogram specifies the degree to which the data reconstructed from the solution matches the experimental data; i.e., it tells how well the inversion algorithm for the problem worked. The two constraints that are used in inverting the data each yield their own consistency criterion:

  • The rms difference between the measured experimental trace and the trace calculated from the retrieved solution, also known as the “FROG error,” quantifies the match with the experimentally measured trace.
  • The relative magnitude of the singular values given by the singular value decomposition quantifies the match with the outer-product form in the decomposition of the spectrogram: a single nonzero singular value corresponds to a perfect decomposition as an outer product. The distribution of singular values may be used to evaluate the convergence of the algorithm [91].

Precision. Evaluating the precision of a measurement device requires the ability to compare several retrievals of the same quantity by the same device. In the case of ultrafast pulse characterization, this may be done in principle by characterizing the same test pulse several times, with the underlying assumption that the ensemble of test pulses is coherent, so that the electric field of each is the same. In practice, it is much more useful to be able to evaluate the precision of a given measurement from a single experimental trace. This obviously requires some redundancy. In the case of spectrographic techniques, such redundancy is likely to be present because of the size mismatch between the experimental trace and the measured quantities. Redundant data can be used to evaluate precision in conjunction with a statistical technique called bootstrapping, where multiple inversions of the data are performed after removal of some of the data points [92, 93].
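Schematically, a bootstrap estimate of the precision can be obtained as below; invert is a placeholder for any retrieval routine (for instance the PCGPA sketch above), and zero-weighting the dropped points is a simplification of what a careful implementation would do.

```python
import numpy as np

def bootstrap_spread(trace, invert, n_resample=50, drop_fraction=0.1, seed=None):
    """Bootstrap-style precision estimate: repeat the inversion with a random
    subset of the trace points removed and report the point-by-point spread
    of the retrieved intensity.  `invert` is a placeholder retrieval routine."""
    rng = np.random.default_rng(seed)
    retrieved = []
    for _ in range(n_resample):
        mask = rng.random(trace.shape) > drop_fraction      # keep roughly 90% of the points
        retrieved.append(np.abs(invert(trace * mask))**2)   # zeroing stands in for "removing"
                                                            # the dropped points (sketch only)
    return np.std(np.array(retrieved), axis=0)
```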

3.4. Specific Implementation of Sonograms

3.4a. Treacy’s Sonogram

Among the earliest sonographic methods, Treacy’s sonogram [72] (also known as the “dynamic spectrogram”) was implemented by using the angular dispersion of a diffraction grating to spread the spectrum of the pulse in space. Temporal information about the arrival time of each spectral component was obtained by using two-photon fluorescence in a dye cell. In the original experiment, the dispersed pulse was correlated with a spatially inverted copy of itself, therefore comparing the time of arrival of optical frequencies symmetrically located on each side of a reference frequency. More recent implementations instead correlate the spatially dispersed pulse with a short pulse (for example, a replica of the pulse under test).

3.4b. Measurement of Sonograms with Nonlinear Crystals

Sonogram measurement of ultrashort pulses nowadays makes use of nonlinear crystals, which provide both reasonable signal amplitudes and appropriate temporal resolution [39, 58, 59, 94]. A typical setup is illustrated in Fig. 15(a). There, the test pulse is split into two replicas. One of the replicas is sent to the spectral filter that acts as the stationary filter, for example a slit in a zero-dispersion line. The output of this filter is cross correlated with the other (short) replica in a nonlinear crystal. The complete sonogram can be measured by scanning the frequency of the spectral filter and the delay between the filtered replica and the replica of the input pulse. One advantage of this implementation of the sonogram, as well as all subsequent implementations based on SHG, is that the experimental trace usually gives directly a good picture of the chirp. This is clearly seen in Fig. 15(b), which shows the correlation between time and frequency in the sonogram of a chirped pulse. As is shown below, implementations of spectrograms with SHG (SHG-FROG) do not benefit from such an intuitive structure. Chilla and Martinez’s approach led to the measurement of the chirp of a colliding-pulse mode-locked laser by determining the group delay as a function of the optical frequency.

Two-photon absorption has also been used to measure sonograms with high sensitivity [95]. The setup is similar to the setup shown schematically in Fig. 15, but it is necessary to remove the constant background arising from one-photon excitation that is present on the traces. Real-time implementations of the sonogram based on a scanning Fabry–Perot filter and a two-photon detector have also been demonstrated [96].

3.4c. Spectrally Resolved Cross-Correlation

A spectrally resolved cross-correlation approach has also been used to characterize femtosecond pulses. In this method, a narrowband reference pulse is generated from the test pulse by spectral filtering (for example, using a slit in a zero-dispersion line). The field resulting from the cross-correlation between the reference pulse and the unfiltered test pulse in a nonlinear crystal is then spectrally resolved by using a spectrometer. The resulting two-dimensional trace is

S(\tau,\Omega) = \left| \int dt\, E(t)\, E_R(t-\tau)\, \exp(i\Omega t) \right|^2,
and is therefore the spectrogram of the pulse under test measured with a gate equal to the field of the reference pulse. While such a trace is identical to a cross-correlation FROG (X-FROG) trace (see Subsection 3.5b), it also appears that if the gate response function is narrowband and real, the first moment of the spectrogram will lead to the group delay in the pulse following Eq. (3.11). This property was used in [61], and a setup providing real-time measurements was demonstrated in [62].

3.4d. Measurement of Sonograms with Fast Photodetection

Chirp measurements are important for optical telecommunications because of the detrimental effect of chromatic dispersion and self-phase modulation in optical fibers and the presence of time-varying phase modulation on the pulses generated by lasers and modulators. Telecommunication pulses have low peak power, and their polarization state can vary quickly, which makes diagnostics based on nonlinear optics difficult to implement. Since these pulses often have durations longer than a few picoseconds, time-resolved information can be obtained by using a streak camera, for example. A streak camera can display a two-dimensional image where one spatial direction corresponds to time (calibration of the space-to-time correspondence is, of course, required) and the other direction corresponds to a physical spatial coordinate. A sonogram can therefore be recorded by mapping the optical frequency onto a spatial coordinate at the Fourier plane of a monochromator [Fig. 16(a)]. The pulse under test goes into the monochromator (diffraction grating and imaging system) that maps the optical frequency onto the spatial coordinate x. The streak camera maps the temporal intensity onto spatial intensity along the y direction. The (x,y) image therefore corresponds to the sonogram as a function of Ω and τ. Sonograms measured with fast photodetection have been used, for example, in the chirp evaluation of various externally modulated lasers [74, 97, 98], the measurement of the chromatic dispersion of optical fibers [73, 75], and the characterization of Raman radiation generated by propagation of optical pulses in a fiber [99].

A recent implementation of the sonogram for trains of pulses in the telecommunication environment is based on phase comparison in the RF domain [100]. These trains of pulses usually have repetition rates f of the order of 10 GHz. As shown in Fig. 16(b), the train of pulses under test is spectrally filtered at the optical frequency Ω (upper part of the setup) and detected by a photodetector with bandwidth greater than f. This gives an RF signal whose phase is proportional to the group delay for the group of frequencies around Ω selected by the spectral filter. This phase can be measured by comparison with another RF signal at the same frequency, generated by sending the unfiltered train of pulses to an identical photodetector (lower part of the setup); the two RF signals are downconverted by mixing with an identical local oscillator running at a frequency close to f. The measurement of the RF phase as a function of the filtered optical frequency then yields the group delay in the pulses composing the train. This implementation uses conventional telecommunication and RF components, is polarization insensitive, and generates its own temporal reference, all of which are significant advantages. Further, telecommunication signals can have large amounts of incoherent amplified spontaneous emission, and it has been suggested that the measurement process is insensitive to this noise background.

3.4e. Sonogram with Phase Retrieval

While the previous implementations of sonograms are based on the simplified retrieval of the chirp, it was pointed out by Wong and Walmsley that complete phase retrieval could be performed on a sonogram, provided that the complete trace of Eq. (3.4) is measured [39]. While the previous implementations typically use a narrowband spectral filter, this is not optimal for phase retrieval, and the stationary filter should in that case have a bandwidth comparable with that of the pulse under test. While spectrograms such as FROG are implemented with an unknown gate, the gate used in sonograms can be characterized in the spectral domain (e.g., by using a spectrometer to measure its transmission and using test-plus-reference spectral interferometry to measure the phase induced on the filtered pulse). The setup follows the general concept of the sonogram implementation, with a spectral filter such as a slit in a zero-dispersion line or a Fabry–Perot filter and fast photodetection provided by a nonlinear cross-correlation with the unfiltered pulse under test. Sonograms were also inverted by using the principal component generalized projection algorithm [77]. In these approaches, the time-nonstationary filter has finite duration and unknown shape. Deconvolution can be performed appropriately [101]. The phase retrieval can be used for all sonograms, provided that the entire experimental trace is measured and the transfer function of the stationary filter does not vary when its frequency is modified.

3.4f. Single-Shot Sonograms

Single-shot sonograms require the acquisition of the two-dimensional sonogram where the frequency and time variables are simultaneously scanned. Two experimental implementations have been demonstrated.

The thick nonlinear crystal approach [102] follows lines similar to those developed in the poor man’s FROG and the GRENOUILLE devices, which are presented in Subsection 3.5 [103, 104]. Phase-matching conditions in a nonlinear crystal can be used to provide strong spectral filtering, therefore enabling, for example, angular dispersion, while the nonlinearity itself provides the gating mechanism. This was implemented in a type II crystal, following Fig. 17(a). The combination of a cylindrical and a spherical lens magnifies the input beam in the horizontal transverse direction and focuses it in the vertical direction. A Wollaston prism is then used to split the incoming pulse into two orthogonally polarized beams that propagate at an angle. This therefore encodes the relative delay between the two pulses on the horizontal spatial axis. The tight vertical focusing in a type II KDP crystal (KH2PO4) leads to a SHG process that is mostly narrowband along the extraordinary axis and broadband along the ordinary axis, with a one-to-one relation between the output angle and the wavelength of the extraordinary wave being phase matched. In other words, the crystal essentially performs the narrowband spectral filtering along the extraordinary axis and gates the broadband pulse under test with the filtered pulse. Another combination of cylindrical and spherical lenses is used to map the vertical angle and the horizontal position on a two-dimensional detector to allow the single-shot measurement of the sonogram as a function of optical frequency and time.

Single-shot sonograms have also been measured by encoding the optical frequency on a spatial coordinate with a zero-dispersion line and by spatially dependent time gating, obtained by correlation with the short pulse under test in a noncollinear geometry on a two-photon detector [105, 106]. Following Fig. 17(b), the pulse under test is spatially dispersed by using a diffraction grating and a cylindrical lens so that the optical frequency is encoded on the horizontal variable. A short reference pulse (in practice, a replica of the pulse under test) is incident at the Fourier plane of the zero-dispersion line and produces a cross-correlation signal on a two-photon CCD array, where the relative delay between the two interacting pulses is encoded onto the vertical direction.

3.5. Specific Implementations of Spectrograms

Spectrogram measurement apparatuses can be classified by their implementation of the time-nonstationary filter and the specific measurement geometry.

3.5a. Early Attempts

One of the earliest attempts to measure a complete spectrogram used SHG as the time-nonstationary filter [76, 107, 108]. The setup is an intensity autocorrelator followed by a spectrometer that measures the upconverted pulse spectrum, an experimental combination that is now referred to as SHG-FROG. In this case, the gate is the pulse itself, and the experimental trace is related to the test pulse field by

S(\tau,\Omega)=\left|\int \mathrm{d}t\,E(t)\,E(t-\tau)\exp(i\Omega t)\right|^{2}.
No complete measurement of the spectrogram was performed in this early attempt, and the experiment only demonstrated the influence of chirp on spectra measured at different relative delays between the two replicas of the pulse in the autocorrelator.

3.5b. Frequency-Resolved Optical Gating

A complete measurement of the upconverted spectra, together with a means for inverting the data to extract the test pulse field, is generally referred to as FROG (frequency-resolved optical gating). In this method, the time-nonstationary filter is obtained via a nonlinear interaction in a medium with a nearly instantaneous response. The spectrum of the output of the nonlinear mixing process is measured for all delays between the input test pulse replicas, and the pulse is recovered from the measured spectrogram by means of a phase retrieval algorithm. The reader is referred to [28, 109] for extensive descriptions of this technique, together with details of various implementations and experimental results. The most popular experimental implementations are described in the following subsections, with examples of an experimental trace in the case of a Gaussian pulse with second- and third-order dispersion.

Second-harmonic generation frequency-resolved optical gating. In SHG-FROG, the pulse is mixed with a delayed replica in a nonlinear crystal with large spectral acceptance, as indicated in Fig. 18 [60, 110, 111]. The experimental spectrogram is

S(\tau,\Omega)=\left|\int \mathrm{d}t\,E(t)\,E(t-\tau)\exp(i\Omega t)\right|^{2}.
SHG-FROG is more sensitive than most FROG techniques, and it can be used to characterize ultrafast pulses from Ti:sapphire oscillators [7] and pulse trains in the telecommunication environment [22, 112, 113]. Sensitivity can be enhanced by using nonlinear interaction in waveguide structures [114]. SHG-FROG can also be used to characterize extremely short pulses when various deleterious effects such as the dispersion of the nonlinear crystal and the nonuniform response of the wave-mixing and spectral detection system are taken into account [115, 116], and, since second-order nonlinearities are widely available, it can be used in the mid-IR [117].
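To make the encoding explicit, the sketch below computes the SHG-FROG trace defined above for a linearly chirped Gaussian pulse on a discrete grid; all parameters are arbitrary and purely illustrative, the delay is applied by circular index shifts, and the spectral filtering is an FFT.

```python
import numpy as np

# Time grid and a chirped Gaussian test pulse (illustrative parameters).
N, dt = 512, 2e-15                      # number of samples, time step (s)
t = (np.arange(N) - N // 2) * dt
tau0, chirp = 30e-15, 5e27              # duration (s) and linear chirp rate (s^-2)
E = np.exp(-(t / tau0) ** 2) * np.exp(1j * chirp * t ** 2)

def shg_frog_trace(E):
    """S(tau, Omega) = |FT_t[E(t) E(t - tau)]|^2, with delays applied as circular shifts."""
    trace = np.empty((len(E), len(E)))
    for k in range(len(E)):
        shift = k - len(E) // 2                      # delay in units of dt
        gated = E * np.roll(E, shift)                # E(t) E(t - shift*dt)
        trace[k] = np.abs(np.fft.fftshift(np.fft.fft(gated))) ** 2
    return trace                                     # rows: delay, columns: frequency

S = shg_frog_trace(E)
# The trace is symmetric in delay, which reflects the direction-of-time ambiguity
# discussed below.
print(np.allclose(S[1:], S[1:][::-1]))
```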

SHG-FROG has a major drawback that is derived from the fact that the gate is the electric field of the test pulse itself. This leads to a spectrogram that is rather unintuitive, and the sign of a chirp is, for example, not visible on a SHG-FROG trace. Inversion of a SHG-FROG trace is ambiguous in the direction of time, as explained previously. The spectrograms of a Gaussian pulse with second- and third-order dispersion are shown in Fig. 18. Determination of the chirp from first-order moments of the SHG-FROG trace is not possible, and the traces are both symmetric with respect to the relative delay τ. Issues due to the direction-of-time ambiguity can be alleviated in various ways. Some prior knowledge of the electric field of the pulse under test (e.g., knowing that the pulse is positively chirped) or the introduction of a recognizable feature (e.g., a trailing pulse using multiple reflections in a piece of glass) can break this ambiguity. Another approach is to perform an additional measurement of the SHG-FROG trace after addition of chromatic dispersion of a known sign, but this is rather impractical in applications requiring single-shot operation. The deleterious effects of limited spectral acceptance of the nonlinear crystal can be compensated by numerical correction of the experimental trace or by crystal dithering [118]. While SHG-FROG is usually implemented in a noncollinear geometry, some collinear SHG-FROG setups, useful when the experimental setup has some spatial constraints, have been investigated [119, 120, 121]. Collinearity of the two replicas of the pulse in the SHG crystal leads to an interferometric modulation of the experimental trace, which can be used independently of the SHG-FROG trace for phase retrieval [121, 122, 123]. A recent example of the application of SHG-FROG to the determination of pulse formation in a nonlinear optical process is shown in Fig. 19 [124].

Polarization-gate frequency-resolved optical gating. Polarization-gate FROG (PG-FROG) uses the principle of the Kerr shutter [125]. The basic physical principle is that cross-phase modulation is polarization dependent, i.e., the phase shift induced by a pump pulse on a probe pulse depends on the relative polarization state of the two pulses. Consider the interaction of a low-energy probe pulse with a high-energy pump pulse. The phase shift induced on the probe is ϕCO(t) = (4πn2L/λ)IPUMP(t) if the pulses are copolarized, and ϕORTHO(t) = (4πn2L/3λ)IPUMP(t) if the pulses are orthogonally polarized. If the probe pulse is polarized along x̂ and the pump pulse is polarized along x̂+ŷ, there is an induced time-dependent birefringence, and the probe after interaction is proportional to E(t){(x̂+ŷ)exp[iϕCO(t)]+(x̂−ŷ)exp[iϕORTHO(t)]}. The field transmitted through an analyzing polarizer set along ŷ is then E(t){exp[iϕCO(t)]−exp[iϕORTHO(t)]}, which is proportional to E(t)IPUMP(t) for small birefringence. A gate proportional to IPUMP(t) can therefore be implemented in such a geometry. Spectrally resolving the transmitted pulse as a function of the delay between the probe pulse and the pump pulse yields the PG-FROG trace

S(\tau,\Omega)=\left|\int \mathrm{d}t\,E(t)\,I(t-\tau)\exp(i\Omega t)\right|^{2}.
Since the gate is real, the PG-FROG spectrogram can give a better representation of the chirp (see, for example, the FROG traces in [10]). As can be seen in Fig. 20, the PG-FROG traces of a Gaussian pulse with second- and third-order spectral phases have an aspect similar to the spectrograms of Fig. 13 and the Wigner functions of Fig. 7. PG-FROG is popular in applications such as the characterization of amplified pulses from chirped-pulse amplification systems [10]. It has also been implemented with cascaded second-order nonlinearities [126].

Self-diffraction frequency-resolved optical gating. Self-diffraction FROG (SD-FROG) uses the diffracting properties of an index grating written in a material by the interference of two replicas of the pulse under test via the Kerr effect [80]. The efficiency of diffraction of each replica on the grating is proportional to the interference term between the electric field of the two replicas. The SD-FROG trace [Fig. 21(a)] is obtained by delaying one replica with respect to the other and spectrally resolving one of the replicas and can be written as

S(\tau,\Omega)=\left|\int \mathrm{d}t\,E^{2}(t)\,E^{*}(t-\tau)\exp(i\Omega t)\right|^{2}.
SD-FROG traces are rather intuitive, although the relation between group delay and moments depends on the order of the phase distortions because the temporal gate is pulse-dependent and not necessarily real. The self-diffraction effect is not phase matched, and the phase mismatch is wavelength dependent. This constrains the nonlinear medium to be thin and makes the technique difficult to use with broadband pulses. However, SD-FROG does not require high-extinction polarizers and can therefore be used to characterize short-wavelength pulses [127]. SD-FROG has also been implemented by using cascaded second-order nonlinearities [126].

Third-harmonic generation frequency-resolved optical gating. Third-harmonic generation FROG (THG-FROG) has been implemented by using surface harmonic generation in a glass plate [128] and more recently in organic films [129]. Two replicas of the pulse under test are mixed noncollinearly in a medium, so that they overlap spatially at one of the surfaces of the medium [Fig. 21(b)]. This is actually the most sensitive FROG setup based on third-order nonlinear effects for femtosecond pulses, and it has the advantage of a large phase-matching bandwidth. The traces of THG-FROG can be difficult to interpret; for example, the THG-FROG trace of a Gaussian pulse with second-order dispersion does not show the familiar correlation between time and frequency of linear chirp. However, the THG-FROG traces usually have some asymmetry; for example, the THG-FROG trace of a Gaussian pulse with third-order dispersion has a more familiar shape. Since there is no third-harmonic generation from a beam with circular polarization, a collinear fringe-free THG-FROG setup can be built by using two beams with opposite circular polarizations [130].

Transient grating frequency-resolved optical gating. Transient grating FROG (TG-FROG) is based on a three-beam geometry similar to what is called BOXCARS in nonlinear spectroscopy [131]. As shown in Figs. 21(c), 21(d), the input test pulse E(t) is split into three replicas E1(t), E2(t), and E3(t). The field generated by four-wave mixing is proportional to E1(t)E2(t)E3*(t). Depending on the choice of the delayed pulse (either E2(t) or E3(t)), the TG-FROG trace obtained by frequency resolving the generated field is, respectively

S(\tau,\Omega)=\left|\int \mathrm{d}t\,E(t)\,I(t-\tau)\exp(i\Omega t)\right|^{2}
or
S(\tau,\Omega)=\left|\int \mathrm{d}t\,E^{2}(t)\,E^{*}(t-\tau)\exp(i\Omega t)\right|^{2},
which are, respectively, mathematically equivalent to the PG-FROG trace or the SD-FROG trace. The nonlinear process of TG-FROG is phase matched, and therefore a long nonlinear medium can be used to increase the sensitivity.

Four-wave mixing frequency-resolved optical gating. The strong wave-mixing effects in semiconductor optical amplifiers (SOAs) have been used to characterize pulses in the telecommunication environment. The pulse is once again split into two replicas, where one is used as a strong pump, depleting the carriers in a semiconductor optical amplifier, therefore temporally modulating the gain and phase of the SOA. The second replica, with a lower power, is used as a probe, and one spectrally resolves this replica after the SOA. This approach is extremely sensitive, as it usually suffices to have peak powers of the order of 1mW to induce significant changes in the SOA transmission [132]. Spectrograms measured by using a SOA depleted by a strong pump pulse as the gate acting on a probe pulse from a different source have also been measured, leading to the characterization of dynamical processes in SOAs [133].

Two-photon absorption frequency-resolved optical gating. The high sensitivity of the two-photon absorption in an indium phosphide crystal has been used to characterize pulses in the telecommunication environment [134, 135]. This is a two-pulse mixing, but it is not background free, since the pump induces absorption on the probe. A background-free trace can be obtained by proper modulation of the interacting beams.

Cross-correlation frequency-resolved optical gating. Nonlinear mixing of the pulse under test with another optical pulse has also been used. In this cross-correlation FROG (X-FROG), the pulse and the gate do not have an explicit mathematical link, and the X-FROG trace can be written as a function of the field of the pulse E(t) and the field of the gate EGATE(t) as

S(\tau,\Omega)=\left|\int \mathrm{d}t\,E(t)\,E_{GATE}(t-\tau)\exp(i\Omega t)\right|^{2}.
X-FROG is not self-referencing, as it requires an additional pulse. It is particularly useful when characterizing pulses in a wavelength range where regular FROG setups would be difficult to implement, often because of the absence of phase-matched nonlinear interactions. For example, X-FROG has been used to characterize blue pulses around 400nm by downconversion with a pulse at 800nm [136] and to characterize infrared pulses at 4μm by frequency mixing with a pulse at 770nm [137]. As the measured signal increases with the energy of the ancillary pulse, low-energy pulses can be characterized with a high-energy ancillary pulse. Parametric gain can be used in an X-FROG configuration to provide high-sensitivity measurements, though at the expense of background noise [138, 139]. X-FROG is particularly useful for characterizing the broadband pulses generated by nonlinear propagation in optical fibers and therefore for interpreting the combination of linear and nonlinear effects that are operative in the formation of solitons [15, 140, 141].

Single-shot frequency-resolved optical gating. The single-shot acquisition of a FROG trace relies on encoding the time and frequency variables into the two transverse spatial coordinates, making use of a two-dimensional detector to record the spectrogram [142]. Mapping the optical frequency to one spatial coordinate is performed with a conventional grating-based spectrometer, which can record the optical spectrum of a pulse in a single shot by using a CCD array. A relative delay between two pulses can be mapped into the other spatial coordinate by using a geometry similar to the single-shot autocorrelator [143]. In this configuration, the convergence of two unfocused beams means that the delay is a linear function of the distance across the interaction plane of the beams. The field at spatial position x after nonlinear interaction is E(t)E(t−αx). This plane can be reimaged onto the input slit of an imaging spectrometer, which then allows the measurement of the optical spectrum as a function of the optical frequency ω and spatial position x. The measured intensity is therefore S(τ,ω), where the relation between x and τ has been calibrated. Since delay is now coded in a transverse spatial coordinate, it is necessary to avoid spatial distortions of the beam, as they can appear as temporal distortions on the FROG trace. Some corrections of the FROG trace taking into account the beam spatial profile can in general be performed [144]. Single-shot operation of FROG has been used to characterize the output of large laser systems based on chirped-pulse amplification [10].

3.5c. Poor Man’s Frequency-Resolved Optical Gating and GRENOUILLE

In FROG, the stationary filter allowing the optical spectrum measurement after a nonlinear interaction is usually a standard optical spectrum analyzer. It is, however, known that nonlinear interactions in a thick nonlinear crystal having limited spectral acceptance lead to a coupling between the optical frequency of the converted field and its wave vector. That is, the upconversion of a particular frequency occurs only in a particular direction. It was pointed out that such coupling could be used to provide angular dispersion of the upconverted spectrum, allowing, by means of suitable imaging optics and a detector, implementation of the frequency-resolving element of a FROG diagnostic [103, 104]. This property can be used in concert with the time-to-space mapping configuration of single-shot autocorrelators and FROG to provide complete mapping of the two transverse spatial coordinates into the time and frequency coordinates of the FROG trace (Fig. 22). Such an arrangement, nicknamed grating eliminated no-nonsense observation of ultrafast incident laser light electric fields (i.e., GRENOUILLE, the French word for “frog”), is composed of a cylindrical lens, a Fresnel biprism, a thick nonlinear crystal, and a combination of two cylindrical lenses, so that a spatially extended input pulse is focused in the vertical direction in the crystal, where the spatially dependent phase matching leads to angular dispersion in the upconverted beam [104]. To characterize shorter pulses some of the transmissive optics may be replaced by reflective optics [145]. The biprism generates two beams that cross one another from a single input beam, so that the relative time between the two waves is mapped into the horizontal spatial coordinate. The two cylindrical lenses after the crystal are used to map the horizontal position and the vertical dispersion into the horizontal and vertical coordinates in a plane where a two-dimensional detector can be set. The GRENOUILLE trace is fundamentally a SHG-FROG trace; it therefore suffers from a direction-of-time ambiguity, and the trace does not encode chirp information in a direct visual manner. Nonetheless, it has been shown that some types of space–time coupling can be evaluated by using some prior assumptions [146].

3.5d. Linear Spectrograms

A spectrogram can be measured by gating the test pulse with a fast temporal modulator that acts as the nonstationary filter [23, 147]. There are several advantages to this approach, which is particularly well suited to optical telecommunication pulse durations and wavelengths [148]. The technique is highly sensitive, since no nonlinear interaction is involved, and pulse trains with average powers as low as 100nW have been characterized. The choice of an appropriate electroabsorption modulator renders the method fairly insensitive to both the polarization and the wavelength. Another technical advantage is that both the electric field of the pulse and the transfer function of the modulator can be retrieved by using the principal component generalized projection algorithm. Therefore, such an experiment can also be used to characterize the amplitude and phase transfer function of an unknown modulator. Since the gate is independent of the pulse, deconvolution of the spectrogram can be performed with a known or approximate transfer function for the gate. While the gate must have a bandwidth similar to the bandwidth of the pulse under test, this condition is rather loose, and subpicosecond pulses have been characterized by using a 30ps gate. The action of the modulator must be synchronized to the pulse under test. In the telecommunication environment, a clock synchronized to the source of pulses is usually available, and can therefore be used to drive the modulator. This approach has been implemented, for example, with an electroabsorption modulator or an electro-optic modulator [149, 150] driven by a RF sine wave, in which the relative delay between the pulse under test and the gate was controlled by using a RF phase shifter, so that no free-space optical delay line was needed [Fig. 23(a)]. Such an arrangement was also used to characterize the output of various pulse carvers [78, 151, 152], to characterize various pulse shapers [153, 154], and to characterize subpicosecond pulses [147, 155]. Figure 23(b) presents the spectral representation of the 2.4ps pulse from a mode-locked laser diode. This pulse was sent into a nonlinear fiber, where self-phase modulation was used to broaden its spectrum, followed by a dispersive fiber to compensate for the nonlinear chirp. The output pulse [represented in Fig. 23(c)] is recompressed to 900fs. The characterization of the temporal behavior of a gain-depleted SOA was also performed by gating a probe pulse with the SOA under test [133]. It is possible to operate a linear spectrogram system close to video rate, effectively in real time; using a Fabry–Perot etalon to measure the optical spectra after modulation, update rates of the order of 10Hz have been demonstrated [24].

3.6. Conclusions

Spectrographic techniques have been used for a large variety of pulses in very diverse configurations. Nonlinear optics plays a central role in most implementations to provide a gating process for subpicosecond pulses, although linear implementations have also been used to characterize longer pulses, e.g., in optical telecommunications. Spectrographic techniques lead in some cases to intuitive experimental traces capable of displaying the time-to-frequency correlation in a chirped pulse. In most cases, the electric field reconstruction must be iterative.

4. Tomography and Imaging Concepts

4.1. Introduction

Means for measuring the phase of the spatial electric field describing a quasi-monochromatic beam of light have been known for a long time. The concepts developed for imaging and wavefront sensing have inspired approaches to characterizing the temporal field variation of short optical pulses. For example, imaging of a pulse using temporal magnification is analogous to imaging of a spatially localized, pointlike object with spatial magnification by means of a conventional imaging system. Similarly, approaches to the measurement of spatially extended wave fields by means of tomography have parallels in temporal pulse characterization. In this section the analogy between the spatial and the temporal domains in wave propagation is outlined. We then focus our attention on specific aspects of imaging and wave-field tomography as applied to the characterization of short optical pulses. Finally, we describe in detail various experimental implementations of these concepts.

4.1a. Analogy between Space and Time

The analogy between wave propagation and spatial focusing, on the one hand, and dispersion and temporal focusing, on the other, has proved a fruitful tool for understanding and development of concepts for temporal pulse-field characterization. In conventional optical imaging, a magnified image of an object is obtained by propagating the field emanating from, or scattered by, the object, through a sequence of optical elements that modify the field in such a way that a scaled version of the field at the object is formed at an image plane, where it can be recorded. In this way, a small object can be viewed by using a detector of limited spatial resolution. The process of temporal imaging follows a similar sequence, allowing very brief pulses to be viewed by using a slow detector. Temporal imaging of electrical waveforms was first studied by Caputi [156, 157, 158] and Tournois et al. [159], who experimentally demonstrated a temporal magnification equal to 2 [i.e., the input and output intensities of the waveform before and after the time lens imaging system were related by IOUTPUT(t)=IINPUT(t/2)]. The analogy between Fresnel diffraction in space and dispersive propagation in frequency for optical waves was also mentioned independently [160], and the equivalence between spatial and temporal domains has been studied extensively [161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171]. Experimental realizations with short optical pulses include the time-to-frequency converter [172, 173, 174, 175, 176], leading to the magnification of various waveforms [177, 178, 179] and various applications in optical telecommunications [180, 181, 182]. Other related works include the pinhole time camera [183], the generalization of various imaging concepts to the time domain [184], the conversion of a waveform from time to space [185, 186, 187, 188, 189], the temporal Talbot effect [190, 191, 192], and the temporal van Cittert–Zernike theorem [193]. All of this work is based on the correspondence between the pair of transverse space and transverse wave vector coordinates (x,kx) and the time–frequency coordinates (t,ω), the equivalence between spatial diffraction and chromatic dispersion, and the possibility of mimicking the action of a lens in the time domain by using a quadratic temporal phase modulation. Such equivalence is depicted in Fig. 24 in the context of an imaging system. The physics underlying this can be seen by the following argument. Fresnel diffraction over a distance L is expressed in the transverse wave-vector space as a quadratic phase factor [194],

\tilde{E}_{OUTPUT}(k_x)=\tilde{E}_{INPUT}(k_x)\exp\!\left(i\frac{L}{2k_{0}}\,k_x^{2}\right).
Changing from the space to the time domain, this can be cast in terms of chromatic dispersion of second order, where an element of second-order dispersion per unit length ϕ introduces a quadratic spectral phase factor
\tilde{E}_{OUTPUT}(\omega)=\tilde{E}_{INPUT}(\omega)\exp\!\left(i\frac{\phi L\,\omega^{2}}{2}\right).
In the spatial domain, an aberration-free thin lens of focal length f introduces a quadratic modulation in the transverse space [194]
E_{OUTPUT}(x)=E_{INPUT}(x)\exp\!\left(i\frac{k_{0}}{2f}\,x^{2}\right).
In the time domain, this corresponds to the action of a quadratic temporal phase modulator, which by definition provides a quadratic phase modulation in the t space,
E_{OUTPUT}(t)=E_{INPUT}(t)\exp\!\left(i\frac{\psi t^{2}}{2}\right).
With this equivalence in mind, approaches leading to spatial imaging of objects and characterization of spatial wavefronts can be implemented for the temporal characterization of short pulses. For example, imaging with magnification can be implemented by using a dispersive line followed by a temporal lens, followed by another dispersive line. This mimics the action of a spatial imaging system based on a single lens, which generates the magnified image of an object by using the combination of a lens with free-space diffraction before and after the lens. Note, however, that the spectrotemporal domain has a unique feature: one can have positive or negative quadratic spectral phase modulations, whereas the quadratic modulation associated with diffraction has a constant sign.
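Numerically, the two elementary operations of this analogy are one-line transformations of a sampled field: dispersion is a quadratic phase applied to the spectrum, and a time lens is a quadratic phase applied in the time domain. The sketch below uses illustrative parameter values and one common FFT sign convention; the same two operators reappear in the temporal-imaging discussion of Subsection 4.5.

```python
import numpy as np

N, dt = 4096, 1e-15                                  # grid size and time step (s), arbitrary
t = (np.arange(N) - N // 2) * dt
omega = 2 * np.pi * np.fft.fftfreq(N, dt)            # angular frequencies in FFT order

def disperse(E, phi):
    """Quadratic spectral phase exp(i*phi*omega^2/2): the temporal analog of Fresnel diffraction."""
    return np.fft.ifft(np.fft.fft(E) * np.exp(1j * phi * omega ** 2 / 2))

def time_lens(E, psi):
    """Quadratic temporal phase exp(i*psi*t^2/2): the temporal analog of a thin lens."""
    return E * np.exp(1j * psi * t ** 2 / 2)

E0 = np.exp(-(t / 50e-15) ** 2)                      # 50 fs Gaussian pulse (arbitrary)
E1 = time_lens(disperse(E0, 1000e-30), 2e27)         # 1000 fs^2 of dispersion, then a time lens
print(np.sum(np.abs(E0) ** 2) * dt,                  # both operations conserve the pulse energy
      np.sum(np.abs(E1) ** 2) * dt)
```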

4.1b. Wigner Formalism

Temporal imaging can be understood easily by using the Wigner formulation. The Wigner functions of a pulse before and after quadratic spectral phase modulation ϕω²/2 are related by

W_{OUTPUT}(t,\omega)=W_{INPUT}(t-\phi\omega,\;\omega).
The Wigner functions of a pulse before and after quadratic temporal phase modulation ψt²/2 are related by
W_{OUTPUT}(t,\omega)=W_{INPUT}(t,\;\omega+\psi t).
From these two relations, the action of any combination of aberration-free temporal lenses and dispersion can be calculated.

The temporal and spectral intensities of the pulse are the time and frequency marginals of the Wigner function, i.e., the projections of that function onto the time and frequency axes:

I(t)=\int \frac{\mathrm{d}\omega}{2\pi}\,W(t,\omega),
\tilde{I}(\omega)=\int \mathrm{d}t\,W(t,\omega).
It is usually experimentally easier for short pulses to measure the frequency marginal than the time marginal. Indeed, the frequency marginal can be measured accurately by using a spectrometer or monochromator of sufficient resolution, while the temporal marginal requires a detector with time resolution much better than the duration of the pulse under test.
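For a sampled coherent field, the chronocyclic Wigner function and its marginals can be computed by brute force. The sketch below is purely illustrative: it uses one common sign convention for the Wigner transform, has O(N²) cost, and assumes that the field is negligible at the edges of the grid. The time marginal reproduces the temporal intensity exactly on the grid, as checked in the last line.

```python
import numpy as np

def wigner(E):
    """Discrete chronocyclic Wigner function W[n, k] of a sampled field E[n], using
    W(t, omega) ~ sum_m E(t + m*dt) E*(t - m*dt) exp(-2i*omega*m*dt); the frequency
    axis is omega_k = (k - N/2) * pi / (N*dt) in this convention."""
    N = len(E)
    m = np.arange(N) - N // 2
    W = np.zeros((N, N))
    for n in range(N):
        valid = (n + m >= 0) & (n + m < N) & (n - m >= 0) & (n - m < N)
        corr = np.zeros(N, dtype=complex)
        corr[valid] = E[(n + m)[valid]] * np.conj(E[(n - m)[valid]])
        W[n] = np.real(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr))))
    return W

# Example: a chirped Gaussian pulse on a small grid (arbitrary parameters).
N, dt = 256, 4e-15
t = (np.arange(N) - N // 2) * dt
E = np.exp(-(t / 80e-15) ** 2) * np.exp(1j * 2e26 * t ** 2)

W = wigner(E)
I_t = W.sum(axis=1)      # time marginal: projection onto the time axis
I_w = W.sum(axis=0)      # frequency marginal: projection onto the frequency axis
print(np.allclose(I_t, N * np.abs(E) ** 2))   # the time marginal is |E(t)|^2 up to a constant
```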

4.1c. Tomography

Tomography broadly relates to the reconstruction of an object in N dimensions from a set of projections onto lower-dimensional data sets [195]. This concept has been used in medical imaging for a long time [196] and has also been applied to quantum state reconstruction [197], where a high-dimensional entity is estimated from a set of probability distributions. In imaging applications, the three-dimensional reconstruction of an object is obtained from a set of two-dimensional measurements of the attenuation of a probe beam taken along different directions through the object. Restricting our attention to a two-dimensional object with attenuation specified by the function a(x,y), we can define Pθ(u) as the projection of a nondiffracting source of uniform spatial intensity orthogonal to an axis making an angle θ with the y axis (Fig. 25). Noting that the line Δθ(u) satisfies y = x tan(θ) + u/cos(θ), one has

P_{\theta}(u)=\int \mathrm{d}x\,\mathrm{d}y\;a(x,y)\,\delta\!\left[y-x\tan(\theta)-\frac{u}{\cos(\theta)}\right],
where δ is the Dirac delta function. Integrating over y gives
P_{\theta}(u)=\int \mathrm{d}x\;a\!\left[x,\;\frac{u}{\cos(\theta)}+x\tan(\theta)\right].
Gathering a set of projections Pθ(u) for different angles θ (a set usually referred to as the Radon transform of the function a), one can then attempt the reconstruction of a(x,y). In the context of ultrashort optical pulses, tomographic reconstruction implies estimating the two-dimensional chronocyclic Wigner function representing the pulse train. The approach is similar to that described above: measure a set of projections of the chronocyclic Wigner function, from which the Wigner function itself can be obtained. As noted above, if the train of pulses is coherent, its electric field, within some arbitrary constants, can be obtained directly. If the train of pulses is partially coherent, the description by an electric field is inappropriate, and the Wigner function is the next-lowest-order description. The procedure is known as “chronocyclic tomography” [198], as it applies tomography to the chronocyclic space (t,ω). The time-to-frequency converter mentioned above and simplified chronocyclic tomography [199] are variations of the complete tomographic technique that use a restricted set of projections.
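A discrete version of the projection Pθ(u) can be generated by rotating a sampled object and summing along one axis, which is equivalent to the line-integral definition above up to orientation and sampling conventions. The sketch below uses an arbitrary test object and assumes that scipy is available for the rotation.

```python
import numpy as np
from scipy.ndimage import rotate       # assumed available; used only for the image rotation

def radon_projection(a, theta_deg):
    """P_theta(u): line integrals of the object a(x, y) along the direction set by theta,
    obtained by rotating the object and summing along one axis."""
    return rotate(a, theta_deg, reshape=False, order=1).sum(axis=0)

# Example object: an off-center Gaussian blob on a 128 x 128 grid (arbitrary).
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
a = np.exp(-((x - 0.3) ** 2 + (y + 0.2) ** 2) / 0.05)

# Sampled Radon transform: one projection per angle.
angles = np.arange(0, 180, 2)
sinogram = np.array([radon_projection(a, th) for th in angles])
print(sinogram.shape)                  # (90, 128)
```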

4.2. Chronocyclic Tomography

4.2a. Principle

Implementing chronocyclic tomography for reconstructing the electric field of a short pulse requires an apparatus that can project the Wigner function of the pulse onto sets of axes defined in the chronocyclic phase space spanned by (t,ω) [198]. The approach is equivalent to rotating the Wigner function and measuring its projection on a fixed axis, usually the frequency axis. Rotation of the Wigner function by an angle θ corresponds to an axis rotation by the angle θ. Experimentally, one can easily measure the frequency marginal of the rotated Wigner function by means of a spectrometer with sufficient resolution. An arbitrary rotation in the chronocyclic phase space can be implemented by means of quadratic spectral phase modulation in series with quadratic temporal phase modulation. The Wigner function Wϕ,ψ(t,ω) of a pulse after a quadratic spectral phase modulation exp(iϕω²/2) followed by a quadratic temporal phase modulation exp(iψt²/2) is related to the input test pulse Wigner function W(t,ω) by

W_{\phi,\psi}(t,\omega)=W\!\left[(1-\phi\psi)\,t-\phi\omega,\;\omega+\psi t\right].
The spectral density of the field after these two modulations is the frequency marginal of the rotated Wigner function:
\tilde{I}_{\phi,\psi}(\omega)=\int \mathrm{d}t\,W_{\phi,\psi}(t,\omega)=\frac{1}{1-\phi\psi}\int \mathrm{d}t\;W\!\left(t,\;\frac{\omega}{1-\phi\psi}+\frac{t}{1/\psi-\phi}\right).
Defining the angle θ by cot(θ) = ϕ − 1/ψ and the variable ωθ = ω sin(θ)/ψ, and rescaling by 1/(1−ϕψ), the projection can be written for all angles different from π/2 as
\tilde{I}_{\phi,\psi}(\omega)=\tilde{I}_{\theta}(\omega)=\int \mathrm{d}t\;W\!\left[t,\;\frac{\omega_{\theta}}{\cos(\theta)}-\tan(\theta)\,t\right].
For θ = π/2, one simply has
\tilde{I}_{\pi/2}(\omega)=\int \mathrm{d}t\;W\!\left[-\phi\omega,\;\omega+\psi t\right]=I(-\phi\omega).
Equations (4.13, 4.14) correspond to the projection of the Wigner function on an axis making an angle θ with the frequency axis, i.e., to the projection on the frequency axis of the Wigner function rotated by the angle θ. A set of projections of the Wigner function of the train of pulses can be obtained by measuring the spectrum of the pulse for a set of appropriate quadratic spectral and temporal phase modulations (ϕ,ψ). Such a formalism is identical to the measurement of one-dimensional parallel projections of a two-dimensional object. From the set of projections, one can reconstruct W. If the source is coherent, its electric field E is algebraically reconstructed from a single slice of the two-time correlation function, which can be calculated by Fourier transforming W. This approach has not been experimentally demonstrated until now because of the relative difficulty in implementing accurately variable quadratic spectral and temporal modulations. Note that such a technique would lead to the unambiguous measurement of the Wigner function, whether it corresponds to a train of identical pulses or a partially coherent train of pulses. In this sense it offers a new feature over the nonlinear methods, which require the assumption of a coherent pulse train. However, the efforts in pulse characterization have so far concentrated, apart from a few exceptions, on measurements of coherent test pulse ensembles. This notion may also be applied to tomography. Two variations on the theme of chronocyclic tomography, the time-to-frequency converter and simplified chronocyclic tomography, use a smaller number of projections to characterize the electric field or the intensity of the pulse by making use of some similar prior assumptions.

4.2b. Inversion for Chronocyclic Tomography

Tomographic data inversion has been studied extensively, both theoretically and experimentally. We deal here with inversion methods for a common configuration: parallel projections measured with a nondiffracting source. For data sets taken in this configuration, inversion can be performed by using the filtered backprojection algorithm. We briefly discuss this algorithm in order to illustrate its principle and to emphasize that such inversion is algebraic and noniterative. More complete treatments of tomographic imaging can be found in the literature [195]. The key to the inversion of tomographic data is the Fourier slice theorem. Using the notation of Subsection 4.1c, we can define the Fourier transform of the attenuation function a(x,y) by

\tilde{a}(u,v)=\int \mathrm{d}x\,\mathrm{d}y\;a(x,y)\exp[-2\pi i(ux+vy)].
This Fourier transform taken at u=0 gives
\tilde{a}(0,v)=\int \mathrm{d}y\left[\int \mathrm{d}x\;a(x,y)\right]\exp(-2\pi i v y),
where the quantity between brackets is by definition the projection of the attenuation along a line of constant y, i.e., P0(y). This leads to
\tilde{a}(0,v)=\tilde{P}_{0}(v).
The Fourier transform of the image at u=0 can therefore be calculated from the Fourier transform of the projection measured for θ=0. As this is independent of the angle between the object and the axis, any one-dimensional slice of the Fourier transform of the object can be obtained from the one-dimensional Fourier transform of the projection measured at the appropriate angle, and the Fourier slice theorem can be written with our definitions as
\tilde{a}[\rho\cos(\theta),\,\rho\sin(\theta)]=\tilde{P}_{\pi/2-\theta}(\rho).
This is used to derive the filtered backprojection algorithm. The object function is written as
a(x,y)=\int \mathrm{d}u\,\mathrm{d}v\;\tilde{a}(u,v)\exp[2\pi i(ux+vy)],
which can be expressed in circular coordinates (ρ,θ) as
a(x,y)=\int_{0}^{\pi}\!\!\int \mathrm{d}\rho\,\mathrm{d}\theta\;\tilde{a}[\rho\cos(\theta),\rho\sin(\theta)]\,|\rho|\exp\{2\pi i\rho[x\cos(\theta)+y\sin(\theta)]\}.
Using the Fourier slice theorem, this leads to
a(x,y)=\int_{0}^{\pi}\!\!\int \mathrm{d}\rho\,\mathrm{d}\theta\;\tilde{P}_{\pi/2-\theta}(\rho)\,|\rho|\exp\{2\pi i\rho[x\cos(\theta)+y\sin(\theta)]\}.

The image can therefore be directly reconstructed from the set of projections Pθ. This involves a filtering operation, represented by the product with |ρ|, and the operation is equivalent to a projection back from the set of projections to the image, hence the name “filtered backprojection” given to this reconstruction procedure. When applied to pulse characterization, this approach to tomography has several advantages. It allows the algebraic noniterative reconstruction of the Wigner function independently of the state of coherence. However, in all tomographic reconstruction procedures, proper sampling of the experimental trace is critical, and the quality of the reconstruction varies greatly with the number of projections [195].
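A compact numerical version of this procedure is sketched below, assuming scipy for the image rotation and the projection convention of the earlier Radon sketch: each projection is multiplied by |ρ| in the Fourier domain and then smeared back across the image plane at its acquisition angle.

```python
import numpy as np
from scipy.ndimage import rotate       # assumed available, as in the earlier sketch

def filtered_backprojection(sinogram, angles_deg):
    """Reconstruct a(x, y) from parallel projections (one row per angle): ramp-filter
    each projection (the |rho| factor above), smear it across the image, and rotate it
    back to its acquisition angle."""
    n_angles, n_u = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_u))                        # |rho| filter on the DFT grid
    recon = np.zeros((n_u, n_u))
    for p, th in zip(sinogram, angles_deg):
        p_filtered = np.real(np.fft.ifft(np.fft.fft(p) * ramp))
        backprojection = np.tile(p_filtered, (n_u, 1))        # constant along the integration direction
        recon += rotate(backprojection, -th, reshape=False, order=1)
    return recon * np.pi / n_angles

# Example, reusing the sinogram and angles of the earlier Radon sketch:
# recon = filtered_backprojection(sinogram, angles)
```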

4.3. Time-to-Frequency Conversion

4.3a. Exact Time-to-Frequency Converter

A device that permits estimation of the temporal intensity of a pulse by using temporal and spectral modulations followed by a measurement of the pulse spectrum is known as the “time-to-frequency converter” [172, 173, 174, 175, 200]. From Eq. (4.14), for 1/ψ − ϕ = 0 (i.e., θ = π/2), one measures Ĩπ/2(ω) = I(−ϕω). The successive action of the quadratic temporal and spectral phase modulation is depicted in Fig. 26. The key point is that the frequency marginal of the output Wigner distribution is a scaled version of the temporal marginal of the input Wigner distribution, and temporal intensity measurements can therefore be performed by using a spectrometer. Under these conditions, a quadratic spectral phase modulation remains after the temporal phase modulation, since the Wigner function is W[−ϕω, ω+ψt]. This is apparent from the orientation of the interference fringes of the function in the chronocyclic space. Therefore, the spectral field after the two modulations is not a mapping of the input temporal field to the output, just as the image of an object in a simple telescope does not arise from a replication of the object field in the image space. Rather, it is a mapping of the moduli of the two fields only. Similarly to the imaging system, this does not affect the recovery of the temporal intensity. As in the spatial domain, a complete Fourier-transform setup can be obtained by combining dispersion ϕ, quadratic temporal modulation 1/ϕ, and dispersion ϕ. This would make the interference fringes observed in Fig. 26 vertical.

4.3b. Chirped Pulse Modulation

An approximate version of the time-to-frequency converter has been popularized for terahertz pulse characterization and photonics [201, 202, 203]. In this version, a short broadband optical pulse (the ancilla) is first stretched by an element with large second-order dispersion ϕ1. The chirp is large enough that there is a clear relation between time and frequency: the group delay in the stretched pulse is T1(ω) = ϕ1ω, and, equivalently, the instantaneous frequency is Ω1(t) = t/ϕ1. In the time domain, the pulse has a quadratic temporal phase t²/(2ϕ1). This pulse is then modulated by the waveform under test r(t), for example, through a temporal modulator, a nonlinear interaction with another optical pulse, or electro-optic modulation. Assuming for simplicity that the stretched pulse before modulation has a flat temporal profile (i.e., a flat optical spectrum), the electric field of the output pulse is simply E2(t) = r(t)exp[it²/(2ϕ1)]. The resulting Wigner function W2 is related to the Wigner function of the test waveform Wr by

W_{2}(t,\omega)=W_{r}\!\left(t,\;\omega-\frac{t}{\phi_{1}}\right).
If the function r is slow enough, it does not significantly modify the time–frequency relation in the chirped pulse, and the modulation is therefore encoded on the optical spectrum of the chirped pulse. The optical spectrum after modulation is therefore
\tilde{I}_{2}(\omega)=\int \mathrm{d}t\,W_{2}(t,\omega)=\int \mathrm{d}t\;W_{r}\!\left(t,\;\omega-\frac{t}{\phi_{1}}\right)\approx\left|r(\phi_{1}\omega)\right|^{2}.

The intensity of the modulation (e.g., the intensity of an optical test pulse, or the intensity transmission of a modulator) can be measured in a single shot by using a spectrometer that records the optical spectrum of the ancilla after modulation. A similar formalism can be used to describe frequency-to-time conversion by a single time lens followed by dispersive propagation. The optical spectral density of the test pulse is found by measuring the temporal intensity of the ancilla [176, 204, 205, 206].
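The read-out of such a chirped-ancilla measurement is essentially a relabeling of the spectrometer axis. The sketch below converts a measured spectrum of the modulated ancilla into an estimate of the temporal intensity by using t = ϕ1(ω − ω0), with frequencies measured relative to the ancilla carrier; the dispersion, carrier frequency, and spectral data are stand-ins for measured quantities.

```python
import numpy as np

phi1 = 10e-24                          # second-order dispersion of the ancilla (s^2), hypothetical
omega0 = 2 * np.pi * 193.1e12          # ancilla carrier frequency (rad/s), hypothetical

# Spectrometer reading of the ancilla after modulation (stand-in data; in practice
# these arrays come from the measurement).
omega = omega0 + 2 * np.pi * np.linspace(-0.5e12, 0.5e12, 201)
spectrum = np.exp(-((omega - omega0) / (2 * np.pi * 0.2e12)) ** 2)

# Time-to-frequency mapping: the modulation applied at time t appears at
# omega = omega0 + t/phi1, so the spectral axis is simply relabeled as a time axis.
t = phi1 * (omega - omega0)
intensity_estimate = spectrum          # |r(t)|^2 up to the ancilla's own spectral envelope
```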

4.4. Simplified Chronocyclic Tomography

Provided a number of prior assumptions about the coherent nature of the test pulse ensemble are accepted, the electric field of the pulse can be obtained from a limited number of projections of its Wigner function [199, 207]. More specifically, the electric field can be retrieved by using one projection of W, i.e., Iα, and its angular derivative, i.e., ∂Iα/∂α. Because the latter can be obtained as a finite difference, i.e., as the difference between two projections for two different angles when the difference between the angles tends to zero, the required number of projections is equal to two (Fig. 27). Following Eq. (4.10), suppose that one can measure Ĩθ(ω) = ∫dt W[t, ω/cos(θ) − tan(θ)t]. The angular derivative of this function with respect to θ is

\frac{\partial \tilde{I}_{\theta}}{\partial\theta}(\omega)=\int \mathrm{d}t\left[\omega\,\frac{\partial}{\partial\theta}\!\left(\frac{1}{\cos(\theta)}\right)-t\,\frac{\partial}{\partial\theta}\tan(\theta)\right]\frac{\partial W}{\partial\omega}\!\left[t,\;\frac{\omega}{\cos(\theta)}-\tan(\theta)\,t\right].
Calculating the derivatives at θ=0 gives
\left.\frac{\partial \tilde{I}_{\theta}}{\partial\theta}\right|_{\theta=0}(\omega)=-\int \mathrm{d}t\;t\,\frac{\partial W}{\partial\omega}(t,\omega)=-\frac{\partial}{\partial\omega}\int \mathrm{d}t\;t\,W(t,\omega).
Finally, since the first-order temporal moment of W at frequency ω, ∫dt tW(t,ω), is equal to Ĩ(ω)ϕω, where ϕω is the group delay, one obtains
\left.\frac{\partial \tilde{I}_{\theta}}{\partial\theta}\right|_{\theta=0}(\omega)=-\frac{\partial}{\partial\omega}\!\left[\tilde{I}(\omega)\,\phi_{\omega}\right].

The spectral phase of the pulse ϕ(ω) can therefore be reconstructed by using the angular derivative of the frequency marginal of the rotated Wigner function. The spectral intensity of the pulse, which is needed for the reconstruction of the field as well as for the reconstruction of the phase, can be obtained directly as the marginal for no rotation, i.e., Ĩ0(ω). It is therefore possible to reconstruct the field by using the frequency marginal of its Wigner function and the angular derivative of its frequency marginal taken at θ=0. Note that if one uses the frequency marginal at a finite angle θ and its derivative, the reconstructed field is the field corresponding to the Wigner function rotated by θ, from which the field in the absence of rotation can be reconstructed algebraically as long as θ is known precisely.

It turns out that the proposed reconstruction is also valid when one uses only a quadratic temporal phase modulation (in which case the operation on the Wigner function is not a rotation but a shear) [199]. Indeed, the projection on the frequency axis of the Wigner distribution for a quadratic temporal phase modulation ψ is

\tilde{I}_{\psi}(\omega)=\int \mathrm{d}t\;W[t,\,\omega+\psi t].
The derivative of this quantity with respect to ψ is
\frac{\partial \tilde{I}_{\psi}}{\partial\psi}(\omega)=\int \mathrm{d}t\,\frac{\partial W(t,\omega+\psi t)}{\partial\psi}=\int \mathrm{d}t\;t\,\frac{\partial W(t,\omega+\psi t)}{\partial\omega}=\frac{\partial}{\partial\omega}\int \mathrm{d}t\;t\,W(t,\omega+\psi t).
Applying this relation at ψ=0 gives
\left.\frac{\partial \tilde{I}_{\psi}}{\partial\psi}\right|_{\psi=0}(\omega)=\frac{\partial}{\partial\omega}\int \mathrm{d}t\;t\,W(t,\omega)=\frac{\partial}{\partial\omega}\!\left[\tilde{I}(\omega)\,\phi_{\omega}\right].
The group delay ϕω can be obtained by dividing the derivative of the optical spectrum of the modulated pulse with respect to the amplitude of the temporal phase modulation by the optical spectrum of the pulse. The derivative can be obtained experimentally as a finite difference, i.e., as the difference between two spectra measured for small finite quadratic temporal phase modulations. These spectra correspond to the projections of the Wigner function after small rotations in the chronocyclic space, since tan(θ)=ψ is small. Figure 27 represents the Wigner function of a pulse having a Gaussian spectrum and a cubic spectral phase after small quadratic temporal phase modulations of opposite signs. The difference between the resulting spectral marginals is plotted in Fig. 27(c). The electric field can be reconstructed in the spectral domain by using the optical spectrum and measured difference, and since the optical spectrum can be estimated by the average of the two spectral marginals, the electric field is completely reconstructed in the spectral domain by using only two projections of the Wigner function. This measurement requires a modulator of smaller bandwidth than the time-to-frequency converter and yet provides a complete measurement of the electric field by using only two one-dimensional spectra [199], or, more practically, by directly measuring the spectrum and its derivative by using synchronous detection [208].
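A minimal numerical sketch of this reconstruction is given below, under the sign convention of the relation derived above. The finite difference of the two measured spectra estimates the derivative with respect to ψ, a cumulative integration over ω undoes the outer frequency derivative, division by the spectrum isolates the group delay, and a second integration gives the spectral phase. The inputs are assumed to be measured spectra on a uniform frequency grid.

```python
import numpy as np

def reconstruct_phase(omega, spec_plus, spec_minus, psi):
    """Spectral phase from two spectra measured with small quadratic temporal phase
    modulations +psi and -psi, using dI/dpsi = d/domega [ I(omega) * group_delay(omega) ]."""
    domega = omega[1] - omega[0]
    spectrum = 0.5 * (spec_plus + spec_minus)            # estimate of the optical spectrum
    dI_dpsi = (spec_plus - spec_minus) / (2 * psi)       # finite-difference derivative
    moment = np.cumsum(dI_dpsi) * domega                 # undo d/domega: equals I(omega)*group delay
    group_delay = moment / np.maximum(spectrum, 1e-12 * spectrum.max())
    phase = np.cumsum(group_delay) * domega              # integrate the group delay
    return spectrum, group_delay, phase
```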

Simplified chronocyclic tomography can also be implemented with projections of the Wigner function onto the temporal axis [209]. The temporal phase of the pulse can be reconstructed from the derivative of the temporal intensity of the pulse under test with respect to an applied quadratic spectral phase modulation. The derivative can be approximated by the finite difference of two measured temporal intensities after different amounts of quadratic spectral phase modulation.

Simplified chronocyclic tomography is analogous to wavefront reconstruction from spatial intensity distribution measured in various planes, as first studied by Teague [210] and Roddier [211]. In this case, the transport-of-intensity equation, which is a two-dimensional equivalent of Eq. (4.29), is used to reconstruct the wavefront of a monochromatic beam. A similar approach has also been proposed for the temporal characterization of attosecond x-ray pulses [212].

4.5. Temporal Imaging

4.5a. Exact Temporal Imaging

Temporal imaging is the process of generating an optical pulse field that is a temporally magnified version of the input test pulse field (usually up to a constant phase, which is of no importance when square-law detection is performed) [164, 165, 177, 178, 179]. The bandwidth of the imaged pulse is smaller than that of the input pulse, so that registering temporal structure requires a smaller bandwidth detector than for the input pulse. Temporal magnifications of the order of 100 have been experimentally demonstrated.

The principle of time magnification is presented in Fig. 28, illustrated for a pair of phase-locked Gaussian pulses with different energies. The test pulse experiences, first, second-order dispersion ϕ1, followed by quadratic temporal phase modulation ψ, followed by another second-order dispersion ϕ2. These actions correspond to a shear along the time axis, a shear along the frequency axis, and another shear along the time axis, respectively. Denoting by W0 the initial Wigner function and W1, W2, and W3 the Wigner functions after each of these actions, one has

W_{1}(t,\omega)=W_{0}(t-\phi_{1}\omega,\;\omega),
W_{2}(t,\omega)=W_{1}(t,\;\omega+\psi t),
W_{3}(t,\omega)=W_{2}(t-\phi_{2}\omega,\;\omega).
This leads to the Wigner function of the pulse at the output of the imaging system:
W_{3}(t,\omega)=W_{0}\!\left[(1-\phi_{1}\psi)\,t-(\phi_{1}+\phi_{2}-\phi_{1}\phi_{2}\psi)\,\omega,\;(1-\phi_{2}\psi)\,\omega+\psi t\right].
The temporal imaging condition is defined by ϕ1 + ϕ2 − ϕ1ϕ2ψ = 0, which can be written in a manner analogous to Newton’s equation for a lens system:
\frac{1}{\phi_{1}}+\frac{1}{\phi_{2}}=\psi.
Under this condition, the Wigner function of the output field is
W_{3}(t,\omega)=W_{0}\!\left(-\frac{\phi_{1}}{\phi_{2}}\,t,\;-\frac{\phi_{2}}{\phi_{1}}\,\omega+\psi t\right),
and the temporal intensity of the output field is related to that of the input field by
I_{3}(t)=I_{0}\!\left(-\frac{\phi_{1}}{\phi_{2}}\,t\right).
The intensity of the pulse has therefore been magnified by M = −ϕ2/ϕ1, at the same time that the pulse has been stretched in time. This is illustrated in Fig. 28, where the magnification is equal to 2. There is a residual quadratic temporal phase on the electric field of the magnified waveform. This is obvious from (4.35) and can also be identified in Fig. 28 by the slant of the interference fringes in the chronocyclic space. An additional temporal phase, i.e., a shear along the frequency axis, would make these fringes parallel to the time axis, and the electric field would then be a magnified version of the input electric field. This additional step is rarely taken because the interest in time magnification lies mostly in measuring the temporal intensity.
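The algebra above can be checked numerically by composing the three shears on individual points of the chronocyclic plane: once ψ satisfies the imaging condition, the time argument of W0 becomes independent of ω and equals −(ϕ1/ϕ2)t, i.e., the intensity is read out with a temporal magnification of magnitude ϕ2/ϕ1. The dispersion values below are arbitrary.

```python
import numpy as np

phi1, phi2 = 1.0e-24, 25.0e-24          # input and output dispersions (s^2), arbitrary
psi = 1 / phi1 + 1 / phi2               # time-lens strength satisfying the imaging condition

def argument_of_W0(t, w):
    """Trace one chronocyclic point (t, omega) back through the three shear relations above."""
    t1, w1 = t - phi2 * w, w            # output dispersion phi2
    t2, w2 = t1, w1 + psi * t1          # time lens psi
    t0, w0 = t2 - phi1 * w2, w2         # input dispersion phi1
    return t0, w0

# The recovered time coordinate depends only on t, not on omega: temporal imaging
# with magnification of magnitude phi2/phi1 = 25 in this example.
for w in (0.0, 1e12, -3e12):
    print(argument_of_W0(1e-12, w)[0])  # essentially identical values, equal to -(phi1/phi2)*t
```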

The temporal magnification equations are analogous to spatial imaging of an object with a combination of free-space propagation along a distance z1, propagation through a lens of focal length f, and free-space propagation along a distance z2. The well-known condition for imaging is

\frac{1}{z_{1}}+\frac{1}{z_{2}}=\frac{1}{f},
and the magnification of such a system is given by
M=-\frac{z_{2}}{z_{1}}.
It should be noted that in the time–frequency domain a dispersion of arbitrary sign can be implemented, and therefore it is possible to obtain positive magnification by simply combining dispersion, lens, and dispersion.

The reason why temporal magnification is attractive is that fast photodetectors may be able to temporally resolve the time-magnified intensity. The bandwidth requirement for the photodetector is decreased compared with that required to measure the test pulse temporal intensity directly, owing to the reduction in bandwidth of the pulse provided by the temporal magnification setup. Also, in contrast to sampling oscilloscopes, which may have sufficient bandwidth, time-magnification systems can operate in single-shot mode. By reducing the bandwidth of the test pulse, one can potentially use real-time oscilloscopes to measure the temporal intensity.

4.5b. Time-Stretch Technique

A popular technique related to temporal imaging has been used to characterize transient phenomena by optical means [213, 214, 215, 216]. Although it does not exactly magnify the waveform in the way described in the previous subsection, it yields good results provided that its parameters are set appropriately and is simpler experimentally. Consider Eq. (4.22) that describes the Wigner function of a pulse after dispersive propagation and low-bandwidth modulation. Since the modulation has been encoded in the optical spectrum of the chirped pulse, it may be measured by chirping the modulated pulse significantly and then performing a time-domain measurement. An additional dispersive element with second-order dispersion ϕ2 is therefore added, resulting in the Wigner function

W_{3}(t,\omega)=W_{2}(t-\phi_{2}\omega,\;\omega)=W_{r}\!\left[t-\phi_{2}\omega,\;\omega-\frac{t-\phi_{2}\omega}{\phi_{1}}\right].
The temporal marginal of this Wigner function is
I_{3}(t)=\int \mathrm{d}\omega\,W_{3}(t,\omega)=\int \mathrm{d}\omega\;W_{r}\!\left[-\phi_{2}\omega,\;\omega\!\left(1+\frac{\phi_{2}}{\phi_{1}}\right)+\frac{t}{\phi_{2}}\right]
after a change of variable. Since r is narrowband, the frequency variable is constrained to verify ω(1+ϕ2/ϕ1) + t/ϕ2 ≈ 0, and this leads to
I_{3}(t)\propto\left|r\!\left[\frac{t}{1+\phi_{2}/\phi_{1}}\right]\right|^{2}.
This shows that the measured temporal intensity is a magnified version of the magnitude of r. The magnification coefficient is 1+ϕ2/ϕ1, which can be made large enough to allow single-shot real-time measurements of nonrepetitive events. Although Eq. (4.41) indicates that this implementation is only sensitive to the intensity of the modulation, phase sensitivity can be obtained by heterodyning the waveform under test with a cw laser [216].

4.6. Cross-Phase Modulation and Self-Phase Modulation with an Unknown Pulse

Tomographic ideas can also be applied in self-referencing temporal field characterization, when there is no reference pulse, and for which there is no common spatial equivalent. Suppose one implements a temporal phase modulation by using cross-phase modulation (XPM) or self-phase modulation (SPM). In the case of XPM with a bell-shaped pulse (i.e., one with quadratic temporal intensity profile), a temporal lens is obtained. However, if the temporal intensity profile of the pulse is unknown, then the induced phase modulation using either XPM or SPM is unknown. This is generally the case for self-referencing techniques, unless one has modified the pulse under test in a controlled manner. Nonetheless, XPM and SPM can be used for pulse characterization.

4.6a. Two-Spectra Technique

In this initial implementation [217], the experimental trace is composed of the spectrum of the pulse under test Ĩ1(ω) = Ĩ(ω) = |Ẽ(ω)|² and the spectrum of the pulse under test after SPM, Ĩ2(ω) = |∫dt E(t)exp[iαI(t)]exp(iωt)|², where α is related to the nonlinear index and length of the medium. The retrieval of the field, i.e., reconstruction of the one-dimensional spectral phase, can be attempted by using a backpropagation iterative algorithm between Ĩ1 and Ĩ2. However, the experimental data do not uniquely specify the unknown spectral phase, so that no unambiguous determination of the field can be obtained without additional prior knowledge about the pulse shape. An extension of the SPM technique to the characterization of arbitrarily polarized pulses has also been demonstrated [218].

4.6b. Multispectra Technique

One way of obtaining redundancy in the experimental trace is to use XPM with a delayed replica of the test pulse [219, 220]. The spectrum of the pulse after XPM with a pump pulse of intensity IPUMP(t) can be measured for various relative delays between the two pulses in order to build a two-dimensional trace, as a function of the optical frequency ω and the delay τ,

S(\tau,\omega)=\left|\int \mathrm{d}t\;E(t)\exp[i\alpha I_{PUMP}(t-\tau)]\exp(i\omega t)\right|^{2}.

The electric field can be reconstructed from this trace by means of a generalization of the Gerchberg–Saxton algorithm. Backpropagation between the spectra measured for each delay is made possible by the fact that the gate is purely a phase gate, so that division by the gate response function does not make the algorithm unstable. There is a strong similarity of this approach to spectrography. Indeed, time scanning the unknown nonstationary filter obtained with XPM across the test pulse is equivalent to the temporal scanning of the gate in spectrography. Although semantically one would expect a gate to be an amplitude-varying filter, nothing precludes the use of a phase-only nonstationary filter in spectrography. Inversion based on generalized projections has also been used for the XPM technique [221]. Note that, in this case, the gate function can be unambiguously calculated from the pulse electric field provided that the nonlinearity α is known.
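Since the gate in this technique is a pure phase factor, the trace is straightforward to model. The sketch below evaluates the equation above for a chirped Gaussian test pulse; the pulse, pump, and nonlinear-phase values are arbitrary.

```python
import numpy as np

N, dt = 512, 5e-15
t = (np.arange(N) - N // 2) * dt
E = np.exp(-(t / 100e-15) ** 2) * np.exp(1j * 3e26 * t ** 2)   # test pulse with arbitrary chirp
I_pump = np.exp(-(t / 80e-15) ** 2)                            # normalized pump intensity
alpha = 2.0                                                    # assumed peak nonlinear phase (rad)

def xpm_trace(E, I_pump, alpha):
    """S(tau, omega) = |FT_t[ E(t) exp(i*alpha*I_pump(t - tau)) ]|^2."""
    S = np.empty((len(E), len(E)))
    for k in range(len(E)):
        gate = np.exp(1j * alpha * np.roll(I_pump, k - len(E) // 2))   # phase-only gate
        S[k] = np.abs(np.fft.fftshift(np.fft.fft(E * gate))) ** 2
    return S

S = xpm_trace(E, I_pump, alpha)
```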

4.7. Practical Implementations of Tomography

4.7a. Quadratic Spectral Phase Modulation

Quadratic spectral phase modulation can be obtained by linear propagation of the test pulse through a dispersive device with a specific phase transfer function. Useful devices include dispersive materials far from absorption resonances, waveguides, interferometers, devices based on diffraction, such as the two-grating compressor, or devices based on refraction, such as the two-prism compressor. The spectral phase added by linear propagation is a quadratic spectral phase modulation provided that all other terms in the Taylor expansion of the phase can be neglected over the bandwidth of the pulse under test. Such a modulation is independent of the pulse under test and can be accurately calibrated by using linear techniques. Techniques that require a large number of different spectral phase modulations benefit from the use of the two-grating or two-prism compressor, which can be tuned to vary the amount of quadratic spectral phase modulation.

4.7b. Quadratic Temporal Phase Modulation

Three approaches to generating a quadratic temporal phase modulation on a short optical pulse have been demonstrated.

Electro-optic phase modulation. Electro-optic phase modulators, based, for example, on lithium niobate waveguides, rely on the index change induced by a voltage via the electro-optic effect. Quadratic temporal phase modulation is obtained by synchronizing the optical pulse with one of the extrema of the modulation induced by a narrowband RF sine wave [177, 199], as depicted in Fig. 29 for simplified chronocyclic tomography. The sinusoidal drive voltage V(t)=V0cos(Ωt) induces the phase modulation

\psi(t)=\pi\frac{V_{0}}{V_{\pi}}\cos(\Omega t)=\psi_{0}-\pi\frac{V_{0}\Omega^{2}}{2V_{\pi}}\,t^{2}
around t=0, where Vπ is the voltage needed to obtain a π phase shift. This leads to a quadratic temporal phase modulation with amplitude −πV0Ω²/(2Vπ) when the pulse is synchronized with a maximum of the phase modulation, while synchronization with a minimum of the modulation leads to the opposite amplitude πV0Ω²/(2Vπ). For the simplified chronocyclic tomography setup of Fig. 29(a), alternation of the relative delay between the train of pulses under test and the modulation allows synchronization with either the positive or negative quadratic temporal phase modulation. This alternation is performed at frequency f, and the spectral density around the optical frequency ω is therefore modulated at the same frequency. Lock-in detection allows extraction of the time average of the modulation Ĩ0(ω) (i.e., the optical spectrum) and the oscillating component Ĩf(ω) (i.e., the difference between the optical spectra obtained for the two quadratic temporal phase modulations). The results obtained with this setup included pulse compression by nonlinear propagation and dispersion compensation, as shown in Figs. 29(b) and 29(c). Real-time operation allowed quick optimization of this nonlinear compressor. For a sine wave at a 10GHz frequency, with V0=10V, the quadratic temporal modulation achievable in a modulator with Vπ=7V is 8×10²¹ s⁻². The temporal phase modulation is quadratic only over a limited temporal window; e.g., for a 10GHz sine wave with a 100ps period, the temporal window over which the modulation is within 1% of its parabolic approximation is approximately 10ps. This approach, although still limited in terms of bandwidth, allows a pulse-independent temporal modulation, which can be accurately determined from the measurement of the parameters of the drive voltage and modulator. This is also a completely linear modulating scheme. This approach has benefited from the development of high-speed lithium niobate modulators, and further progress in this area is likely.
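The numbers quoted above are easy to verify; the sketch below evaluates the quadratic-phase amplitude πV0Ω²/(2Vπ) for the stated drive parameters and estimates, as one possible criterion, the window over which the sinusoidal phase stays within 1% of its parabolic approximation.

```python
import numpy as np

V0, V_pi = 10.0, 7.0                   # drive amplitude and V_pi (volts), values quoted above
Omega = 2 * np.pi * 10e9               # 10 GHz sinusoidal drive (rad/s)

psi = np.pi * V0 * Omega ** 2 / (2 * V_pi)
print(f"{psi:.2e} s^-2")               # ~8.9e21 s^-2, consistent with the figure quoted above

# Window over which cos(Omega*t) stays within 1% of its parabolic approximation.
t = np.linspace(0, 20e-12, 20001)[1:]
parabola = 1 - (Omega * t) ** 2 / 2
relative_error = np.abs(np.cos(Omega * t) - parabola) / np.abs(1 - parabola)
t_max = t[relative_error < 0.01].max()
print(f"+/- {t_max * 1e12:.1f} ps")    # ~5.5 ps, i.e., a window of roughly 10 ps
```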

Cross-phase modulation. The intensity-dependent refractive index of a material can be used in conjunction with an optical pulse to induce a temporal phase modulation. For SPM, the input and output pulse are related by

E'(t)=E(t)\exp\!\left[\frac{2i\pi n_{2}L\,I(t)}{\lambda}\right],
where L is the length of the medium, λ is the wavelength, and n2 is the nonlinear coefficient of the medium (i.e., n2I(t) is the intensity-dependent variation of the optical index). It is difficult to implement a temporal lens by using SPM because the temporal phase modulation depends on the temporal intensity of the unknown pulse. However, SPM can be used with an iterative reconstruction algorithm [217, 219, 220].

In the case of XPM [175, 222], a temporal phase modulation is induced on the test pulse by nonlinear interaction with a different optical pulse having the temporal intensity IPUMP(t). The input and the output field are related by

E'(t)=E(t)\exp\!\left[\frac{4i\pi n_{2}L\,I_{PUMP}(t)}{\lambda}\right],
assuming that the two pulses have the same polarization. If the intensity of the pump pulse is given accurately by a second-order polynomial over the temporal support of the pulse under test, e.g., IPUMP(t) = IPUMP(0) + (1/2)(∂²IPUMP/∂t²)t² around t=0, quadratic temporal phase modulation can be obtained. One way of obtaining a suitable pump pulse is to use a Gaussian pulse, or a chirped Gaussian pulse. For example, following the time-to-frequency conversion experiment described in [175] and depicted in Fig. 30, a dispersive delay line introduces a negative spectral phase modulation with ϕ=−0.7ps². The quadratic temporal phase modulation needed to obtain the π/2 rotation (i.e., ψ=1/ϕ) is obtained by cross-phase modulation with a pump pulse having a parabolic temporal intensity over the temporal support of the waveform under test after dispersion. A 1ps pump pulse with a 1kW peak power propagating in a 1m long fiber with γ=1W⁻¹km⁻¹ leads to approximately ψ=4ps⁻², leading to the required temporal modulation. A recent development consists in using pulses having a parabolic intensity profile over most of their temporal support. Pulses with a parabolic temporal intensity profile can be generated directly as optical similaritons, a type of pulse generated by the interplay of chromatic dispersion, nonlinearity, and gain [124, 223, 224, 225], or can be obtained via pulse shapers [226]. Fourier transformation, pulse retiming, and distortion compensation have been demonstrated for high-speed optical telecommunication using time lenses implemented with parabolic pulses and XPM [222, 226, 227]. In XPM setups, very large temporal phase modulations can be obtained by increasing the power of the pump pulse or improving the nonlinearity of the medium.

Wave-mixing. Wave-mixing with a chirped pulse can provide a quadratic temporal phase modulation of arbitrary sign [172, 173, 179, 228]. The electric field of a short pulse EPUMP(t) after a large quadratic spectral phase modulation ϕPUMPω²/2 is EPUMP(t)=ẼPUMP(t/ϕPUMP)exp[it²/(2ϕPUMP)] within some multiplicative factors. Nonlinear mixing of the test pulse with an ancillary chirped pump pulse is therefore formally equivalent to quadratic temporal phase modulation, provided that the amplitude of the latter is constant over the temporal support of the pulse under test. A schematic of a time-magnification setup based on wave mixing is shown in Fig. 31(a). The input pump pulse is sent into a dispersive delay line (for example, a two-grating compressor or an optical fiber), leading to the second-order dispersion ϕPUMP. The waveform under test is first dispersed by a dispersive delay line introducing a second-order dispersion ϕ1, then interacts with the chirped pump pulse in a nonlinear medium, and the product of this interaction is sent into an additional dispersive delay line with second-order dispersion ϕ2. The imaging condition relating ϕ1, ϕ2, and ϕPUMP depends on the nonlinear interaction.

With three-wave mixing [172, 173, 179], the electric field of the generated signal is proportional to EEPUMP, i.e., the product of the electric field of the input chirped signal and the chirped pump, and the temporal phase modulation induced by the time lens is t²/(2ϕPUMP). Equation (4.34) is directly applicable, with ϕ1, ϕ2, and ψ=1/ϕPUMP, leading to 1/ϕ1+1/ϕ2=1/ϕPUMP and M=ϕ2/ϕ1. For example, [179] uses a stretcher to provide the dispersion ϕ1=0.17606ps² on the waveform under test, and a compressor to provide the dispersion ϕPUMP=0.17784ps² on the pump. After sum-frequency generation, the signal propagates in a compressor providing the dispersion ϕ2=17.606ps². This setup theoretically leads to a temporal magnification M=100, and it was indeed shown experimentally to magnify the temporal intensity of pairs of pulses by a factor of 103. This allowed waveform measurements with a 300fs resolution by use of direct photodetection and a sampling oscilloscope.

With four-wave mixing [228], the electric field of the generated idler is proportional to EPUMP²E*, i.e., the square of the field of the chirped pump and the conjugate of the field of the signal. Because of this, Eq. (4.34) must be applied with ϕ1, ϕ2, and ψ=2/ϕPUMP, leading to 1/ϕ1+1/ϕ2=2/ϕPUMP and M=ϕ2/ϕ1. This has, for example, been implemented in a silicon chip [228]. In this case, the pump and waveform under test are, respectively, dispersed by 1900 and 1000 m of standard single-mode fiber, leading to dispersions ϕPUMP=41ps² and ϕ1=21.6ps². The idler generated by four-wave mixing is sent into a dispersion-compensating module with dispersion ϕ2=434ps². This setup demonstrated a temporal magnification M=20. An example of a setup for time-to-frequency conversion using four-wave mixing in a silicon waveguide is shown in Fig. 31(b) [229]. The chromatic dispersion for the pump and test waveform is provided by optical fibers. Four-wave mixing occurs in a silicon waveguide, and wave-mixing with a pump pulse that has twice the dispersion of the test waveform leads to time-to-frequency conversion. An additional span of fiber identical to the input fiber span allows a full time-to-frequency conversion of the electric field, although this is not required for intensity measurements. Time-to-frequency conversion enabled measurement of the intensity of high-speed optical waveforms over a time interval longer than 100ps with a 220fs temporal resolution. Single-shot operation is provided by using a spectrometer capable of measuring the entire spectrum after modulation in a single shot. Examples of waveforms measured with this setup are shown in Fig. 32. The left column corresponds to results from the time-to-frequency conversion ultrafast oscilloscope, and the right column to the intensity measured by cross-correlation of the test waveform with a short optical pulse. Very good agreement is obtained for a number of different waveforms, even in single-shot operation.
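The imaging conditions quoted above are simple to check numerically. The sketch below uses the dispersion magnitudes quoted in the text; signs, which depend on the sign conventions of Eq. (4.34), are ignored, so only magnitudes are compared with the quoted values.

```python
# Magnitude check of the temporal-imaging conditions (signs ignored).

# Three-wave mixing [179]: 1/phi1 + 1/phi2 = 1/phi_pump, M = phi2/phi1
phi1, phi_pump = 0.17606, 0.17784                    # ps^2
phi2 = 1.0 / abs(1.0 / phi_pump - 1.0 / phi1)
print(f"TWM: phi2 ~ {phi2:.1f} ps^2, |M| ~ {phi2 / phi1:.0f}")   # ~17.6 ps^2, |M| ~ 100

# Four-wave mixing [228]: 1/phi1 + 1/phi2 = 2/phi_pump, M = phi2/phi1
phi1, phi_pump = 21.6, 41.0                          # ps^2
phi2 = 1.0 / abs(2.0 / phi_pump - 1.0 / phi1)
print(f"FWM: phi2 ~ {phi2:.0f} ps^2, |M| ~ {phi2 / phi1:.0f}")   # ~400 ps^2 and |M| ~ 19, close to the quoted 434 ps^2 and M = 20
```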

4.8. Conclusions

Chronocyclic tomography provides a means by which the full two-time correlation function of a pulse ensemble can be determined. It has proved difficult to implement full tomographic reconstruction of femtosecond pulses in practice because of the difficulty in modulating pulses with sufficient bandwidth. However, a number of subtomographic approaches have been implemented successfully, and the most common of these, the temporal imaging system, allows direct measurement (and indeed simple visualization) of subpicosecond waveforms by using single-shot data acquisition by means of a fast photodetector and sampling oscilloscope. Such devices are useful for low-repetition-rate systems, or for systems where the pulse shape is changing rapidly from shot to shot and from which samples can be taken.

5. Interferometry

5.1. Introduction

Interferometry provides a very sensitive and accurate means to measure the phase of an optical field. This approach has a long pedigree in the field of optical testing [230, 231]. The conversion of phase to amplitude information that is the hallmark of interferometric measurement allows deterministic and robust extraction of the phase from the measured data.

The earliest suggestions for using interferometric methods for pulse characterization made use of the concept of test-plus-reference interferometry in the spectral domain to show how the phase of an optical pulse was changed during propagation in a linear or nonlinear optical medium [232, 233, 234]. This implementation of spectral interferometry (SI) made use of the fact that the measured signal reflected the difference in the spectral phase between the test and reference pulses, so that differential measurements (say, of the input to the output fields) could be made with great precision.

It was soon realized that detailed knowledge of the reference pulse enabled extraction of the complete spectral phase of the test pulse, which, together with a measurement of its spectrum, constitutes a complete characterization of the pulse [63, 64]. Of course, this begs the question of how the reference pulse itself is characterized, since the method depends on the availability of a known pulse of appropriate character. However, the extremely high sensitivity of this method, which is entirely linear in the test pulse electric field, has enabled it to continue as a viable method in certain applications.

Methods of interferometry that do not require a reference pulse have been developed and are now in wide use. The basic feature of self-referencing methods is to measure the temporal beats that arise when one replica of the pulse is interfered with a second, time-shifted, replica, or, equivalently, the spectral fringes that occur when one replica of the pulse is interfered with a second, frequency-shifted (or spectrally sheared) replica. The latter case has strong analogies to SI and is known as “spectral shearing interferometry” (SSI). In both cases the fringe patterns reveal the relative phase between two adjacent parts of the pulse field (either two time slots or two frequency components), from which the complete phase function may be reconstructed.

The key features of interferometry that make it useful for pulse characterization are the rapidity of data acquisition, the direct and fast reconstruction of the field from the data, and the insensitivity of the measurement to wavelength-dependent apparatus response. These properties are important for characterizing sources for which the pulse shape fluctuates and the pulses have large bandwidths. In this Section, we discuss interferometric methods of pulse characterization and show how these are implemented in ultrafast optics.

5.2. General Considerations and Implementations

5.2a. Definitions

The detected signal in interferometry is related to the two-frequency or two-time correlation function. This two-dimensional function is itself related to the time–frequency representations used in spectrography and tomography. Further, the correlation function is encoded in the measured data in a way that makes it easy to invert. In the most general case, the correlation function can be mapped out directly from the fringe pattern. Moreover, in almost all implementations of interferometry, it is assumed that the ensemble underlying the measurement is coherent, so that the field can be extracted from a single section of the correlation function. The two-variable structure of the correlation function suggests that there will be complementary versions of interferometry related to these variables. This is indeed the case: for every time-domain interferometer, it is possible to identify a frequency-domain analog. It is this latter class that has proved most fruitful in the subpicosecond regime. Note that this time–frequency duality reflects the similar duality between phase-space methods, where spectrography emphasizes the frequency dependence of time sections of the pulse and sonography the time dependence of frequency sections.

Two classes of interferometer can be identified. In the first, measurements are made in the frequency domain, and in the second in the time domain. The signals are then proportional to sections of the two-frequency and two-time correlation functions, respectively. Because available detectors are slow compared with the pulses themselves, measurement in the frequency domain is usually preferred. In both of these classes there are two species of interferometer, self-referencing and test plus reference.

A typical spectral-domain test-plus-reference interferometer is illustrated in Fig. 33(a). The test pulse is shifted in time by the delay τ with respect to the reference pulse using a delay line. It is mixed with the reference pulse at a beam splitter, and the resulting spectrum is measured. This exhibits fringes in the spectral domain, whose spacing is inversely proportional to the delay τ. The detected signal is

D(Ω;τ)=|ẼR(Ω)+Ẽ(Ω)exp(iΩτ)|²,
where ẼR is the reference field (the Fourier transform of the reference pulse analytic signal) and Ẽ the test pulse field. The spectral phase difference between test and reference pulses is encoded in the relative positions of the spectral fringes with respect to the nominal spacing of 2π/τ.

The time-domain analog of this interferometer shifts the test pulse in frequency with respect to the reference pulse by an amount Ω, before mixing the two at a beam splitter [Fig. 33(b)]. The resulting temporal interference pattern is measured by passing the signal through a fast shutter and recording the transmitted energy. In this case the signal is given by

D(τ;Ω)=|ER(τ)+E(τ)exp(iΩτ)|²,
where in this case the relative temporal phase of the two pulses is encoded in the relative positions of the temporal fringes with respect to the nominal spacing of 2π/Ω.

In the spectral case, the final spectrometer must have a resolution that is high compared with the nominal spectral fringe spacing. In the temporal case, the shutter must be open for a time short compared with the period of the temporal beat pattern.

The class of self-referencing interferometers similarly has implementations both in the time and the frequency domains. Self-referencing interferometers are based on spectral or temporal shearing. This type of interferometer uses modulators and delay lines to generate two modified versions of the input test pulse, shifted in time and frequency with respect to each other, that are then interfered. The resulting interferogram may be measured in the time or the frequency domain. When characterizing ultrashort pulses by using slow square-law detectors, the most common sort of interferometer makes use of spectral shearing.

Schematic apparatuses for shearing interferometry are shown in Fig. 34. The test pulse enters the interferometer, experiencing a frequency shift in one arm (by means of a linear temporal phase modulator) and a time-delay (or temporal shift) in the other (effected using a simple delay line, which may be considered in linear filter terms as a linear spectral phase modulator). The two modified pulses are combined at the exit ports of the interferometer, and the output is measured in the time domain or frequency domain by passing it through a fast time gate or narrow spectral filter, respectively.

The field after the interferometer in both apparatuses is

EOUT(t)=E(t)exp(iΩt)+E(t−τ)=FT[Ẽ(ω+Ω)+Ẽ(ω)exp(iωτ)],
where FT[] represents the Fourier transform, τ is the delay imposed in one of the arms of the interferometer, and Ω the frequency shift imposed in the other. These may be thought of as lateral shears in their respective domains.

In the case of a temporal shearing interferometer, the signal is measured directly in the time domain by means of a very fast time gate or shutter, followed by the usual integrating square-law detector. When the shutter response is very fast with respect to the variations in the pulse temporal field, the detected signal is

D(t;Ω,τ)=|EOUT(t)|²=I(t)+I(t−τ)+2Re[E(t)E*(t−τ)exp(iΩt)],
where I(t)=|E(t)|² is the intensity of the pulse. In the case of the spectral shearing interferometer, the detected signal is measured in the frequency domain by means of a high-resolution spectrometer, with a slow detector. The spectrometer passband must be narrow with respect to variations in the pulse spectral field, in which case the detected signal is
Ĩ(ω;Ω,τ)=|ẼOUT(ω)|²=Ĩ(ω)+Ĩ(ω+Ω)+2Re[Ẽ(ω)Ẽ*(ω+Ω)exp(iωτ)],
where Ĩ(ω)=|Ẽ(ω)|² is the spectrum of the pulse. In both cases, it is clear that the interferogram encodes the derivative of the temporal or spectral phase function in the fringe spacing. For example, in the spectral shearing interferometer, the fringe extrema are located at frequencies ω satisfying
ϕ(ω)−ϕ(ω+Ω)+ωτ=mπ,
where ϕ(ω)=arg[Ẽ(ω)] is the spectral phase function and m is an integer. Therefore, as with phase-space methods, it is possible to determine ϕ(ω) to within a constant (the carrier-envelope offset phase) and a linear term (the overall delay of the pulse with respect to an external clock). For most applications, this is sufficient to characterize the pulse, as long as the spectrum of the pulse is known.
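To make the structure of Eq. (5.5) concrete, the following minimal sketch synthesizes a spectral shearing interferogram for a chirped Gaussian pulse; all parameters are purely illustrative and are not drawn from any experiment. The fringe positions follow ϕ(ω)−ϕ(ω+Ω)+ωτ as described above.

```python
# Minimal sketch of a spectral shearing interferogram, Eq. (5.5) (illustrative values).
import numpy as np

dw_rms = 8e12      # spectral width (rad/s)
gdd = 2e-27        # quadratic spectral phase (s^2)
Omega = 2e12       # spectral shear (rad/s)
tau = 2e-12        # delay producing the carrier fringes (s)

def E(w):
    """Chirped Gaussian test field in the frequency domain (w relative to the carrier)."""
    return np.exp(-w**2 / (2 * dw_rms**2)) * np.exp(0.5j * gdd * w**2)

w = np.linspace(-40e12, 40e12, 8192)
# I(w; Omega, tau) = |E(w + Omega) + E(w) exp(i w tau)|^2; the fringe phase is
# phi(w) - phi(w + Omega) + w*tau, i.e. a finite difference of the spectral phase.
interferogram = np.abs(E(w + Omega) + E(w) * np.exp(1j * w * tau))**2
```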

5.2b. Interpretation of Interferograms

Interferometry measures the two-frequency (or two-point, in the case of spatial interferometry) correlation function of a field. (Since, as with all methods, it derives from measurements made with square-law detectors, it must yield some bilinear functional of the input field.) In the most general case, it yields a two-dimensional complex function, whose arguments are both time variables or both frequency variables. To this extent, it is quite different from spectrographic techniques, which work in a time–frequency representation, and tomographic methods, which use a set of one-dimensional functions parameterized by an external variable often unrelated to time or frequency.

The two-frequency correlation function

C͌(ω1,ω2)=Ẽ(ω1)Ẽ*(ω2)
is clearly a complete characterization of the optical pulse field E(t). For application to interferometry, however, it is most useful to consider this function written in terms of center- and difference-frequency variables,
C͌(Δω,ωC)=C͌(ω1,ω2),
where ωC=(ω1+ω2)/2 and Δω=ω1−ω2. This function is closely related to the detected output of spectral interferometric measurements. For example, it can be easily seen that
C͌(0,ωC)=|Ẽ(ωC)|²
is the spectrum of the pulse, as might be measured by using a standard laboratory spectrometer or monochromator and a slow photodiode. The aim of interferometry is to measure a section of the spectral correlation function C͌(Δω,ωC): as can be seen from Eq. (5.7), this will give the required field to within a constant multiplicative complex number.

The correlation function is easily related to any of the many time–frequency representations of the pulse field (in particular, the Wigner distribution is simply the Fourier transform of C͌(Δω,ωC) with respect to the difference frequency). However, the important feature of interferometric measurements, compared with time–frequency methods, is that a single section of the correlation function yields complete information about the pulse, whereas for time–frequency techniques the entire two-dimensional representation must be measured to extract the field. This allows data acquisition to be very rapid, which, coupled with a deterministic inversion algorithm, makes it possible to characterize pulses at very fast update rates.

As a particular example, it is useful to see how a spectral shearing interferogram relates to the correlation function. Starting from Eq. (5.5), variables are changed to the mean- and difference-frequency coordinates, ωC=ω+Ω/2, Δω=Ω. Then the interferogram may be written as

Ĩ(ωC;Δω,τ)=|Ẽ(ωC+Δω/2)+exp[i(ωC−Δω/2)τ]Ẽ(ωC−Δω/2)|²,
or, in terms of the correlation function,
Ĩ(ωC;Δω,τ)=Ĩ(ωC+Δω/2)+Ĩ(ωC−Δω/2)+2|C͌(Δω,ωC)|cos{arg[C͌(Δω,ωC)]+τ(ωC−Δω/2)}.
This shows that the interferogram maps out a line of the two-frequency correlation function, taken as a function of ωC, keeping Δω fixed. Since C͌(Δω,ωC) is a complex function and Ĩ(ωC;Δω,τ) a real one, two measurements are required. The interferogram is a superposition of the two quadratures of the two-frequency correlation function, so that these can be individually retrieved by using only two values of the delay τ. In the case of a coherent ensemble, we have seen that the complete correlation function need not be measured in order to extract the field. This is in contrast to phase-space methods, in which the entire distribution must be measured. Importantly in this common experimental situation, only a single line of either quadrature of the two-frequency correlation function is sufficient for reconstructing the electric field, making interferometry inherently more economical of data than other methods.

5.3. Inversion

The inversion methods for interferometric measurements are direct, and therefore robust and reliable. The basic element of the inversion algorithms is the extraction of the correlation function, which is a complex entity, from a purely real and positive detected signal. Though there are a number of ways of doing this, the simplest and most commonly used is based on a Fourier analysis of the signal, accompanied by filtering to remove the symmetry in the Fourier domain that arises from its real character. This is the first step of all interferometric phase retrieval methods; the subsequent steps depend on whether the method is self-referencing. In this section we first describe this algorithmic element as applied to methods with known reference pulses, then describe the additional steps needed for self-referencing methods.

5.3a. Fourier-Transform Spectral Interferometry

Fourier-transform spectral interferometry (FTSI) is a version of test-plus-reference interferometry where the signal is measured in the frequency domain relative to a reference pulse [232]. Typically this is recorded with a detector array placed in the focal plane of a flat-field grating spectrometer, to yield a spectral interferogram. The schematic apparatus is shown in Fig. 35(a) in the case when the pulse under test is derived from the reference pulse by propagation in a device under test. The data set is a function of only a single variable—the frequency—rather than of two variables as in time–frequency methods. This means that the second dimension of a two-dimensional detector array may be used to record spatial variations in the spectral phase, for example.

The spectral phase is extracted via a direct inversion that is both rapid and robust. The test and reference pulses are delayed in time with respect to one another by τ by using a linear time-stationary filter S̃LP(ω)=exp(iωτ). The detected signal (interferogram) is then D(ω;τ)=|ẼR(ω)+Ẽ(ω)exp(iωτ)|², where ẼR is the reference field and Ẽ the test pulse field. The spectral phase difference between test and reference pulses is encoded in the relative positions of the spectral fringes with respect to the nominal spacing of 2π/τ. Examples of interferograms corresponding to identical reference and test pulses and to a test pulse with a quadratic spectral phase are plotted in Fig. 35(b). In the first case no change in the spacing of the fringes is observed, while for the quadratic spectral phase the fringe spacing is clearly a function of the optical frequency, revealing that the spectral phase difference between the two pulses is quadratic. The phase difference can be extracted by using a three-step algorithm involving a Fourier transform to the time domain, a filtering operation, and an inverse Fourier transform [63, 235]. The interferogram may be written as

D(ω;τ)=D(dc)(ω)+D(ac)(ω)exp(iωτ)+[D(ac)(ω)exp(iωτ)]*,
where
D(dc)(ω)=Ĩ(ω)+ĨR(ω),
D(ac)(ω)=|Ẽ(ω)ẼR(ω)|exp{i[ϕ(ω)−ϕR(ω)]}.
The dc portion of the interferogram, Eq. (5.13), is the sum of the individual spectra of the pulses and contains no phase information. The ac term, Eq. (5.14), contains all of the relative phase information.

There are three steps for reconstructing the spectral phase from the interferogram. First, isolate one of the ac terms, and hence ϕ(ω)−ϕR(ω)+ωτ, by means of a Fourier transform and filter technique (Fig. 36). Let t be the conjugate variable to ω and D̃ be the Fourier transform of Eq. (5.12). If τ is sufficiently large, the dc and ac components (located at t=0 and t=±τ) are well separated in time, and the phase-sensitive component D(ac) can be filtered. For this purpose we use a filter H(t) centered at t=τ. The filtered signal,

D̃(filtered)(t)=H(t−τ)D̃(t),
is simply the Fourier transform of the positive ac portion (t=+τ) of the interferogram. The spectral phase difference is the argument of IFT[D̃(filtered)], i.e., the inverse Fourier transform of D̃(filtered)(t),
ϕ(ω)−ϕR(ω)+ωτ=arg[D(ac)(ω)exp(iωτ)]=arg{IFT[D̃(filtered)](ω)}.
The next steps include removing ωτ by subtracting a calibration phase and reconstructing ϕ(ω) by subtracting the reference phase ϕR(ω). In cases when the test pulse is obtained by linear propagation in a device under test, the extracted phase difference ϕ(ω)−ϕR(ω) completely characterizes the dispersion properties of the device.
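A minimal implementation of the three-step extraction is sketched below. It assumes an evenly spaced frequency grid and a noiseless interferogram; the super-Gaussian filter and all names are illustrative choices made here rather than the settings of [63, 235].

```python
# Sketch of FTSI phase extraction: Fourier transform, filter at t = +tau, inverse transform.
import numpy as np

def ftsi_phase(omega, D, tau, width):
    """Return phi(w) - phi_R(w) + w*tau from a spectral interferogram D sampled on omega."""
    dw = omega[1] - omega[0]
    t = np.fft.fftfreq(omega.size, d=dw / (2 * np.pi))  # pseudo-time axis conjugate to omega
    Dt = np.fft.fft(D)                                  # step 1: to the pseudo-time domain
    H = np.exp(-((t - tau) / width) ** 4)               # step 2: super-Gaussian filter around t = +tau
    Dac = np.fft.ifft(Dt * H)                           # step 3: back to the frequency domain
    return np.unwrap(np.angle(Dac))                     # argument of the isolated ac component

# Synthetic example: identical spectra, known quadratic phase difference, 2 ps delay.
w = np.linspace(-30e12, 30e12, 8192)
tau, phi = 2e-12, 1e-26 * w**2
S = np.exp(-w**2 / (2 * (8e12) ** 2))
D = np.abs(S + S * np.exp(1j * (phi + w * tau))) ** 2
recovered = ftsi_phase(w, D, tau, width=1e-12) - w * tau   # ~ phi up to a constant offset
```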

The above analysis pertains to an idealized version of an experiment. The spectrometer has a finite spectral resolution that depends on its optics and detector, which leads to a decreased fringe contrast when the fringe period becomes comparable with the spectral resolution. Furthermore, sampling of the interferogram of Eq. (5.12) is performed at a finite rate (e.g., with the array of finite-size photodetectors that compose the detector located at the Fourier plane of the spectrometer). The interferogram is sampled at frequencies that are not necessarily evenly spaced. Finally, the quickly varying fringes that allow the extraction of the spectral phase difference from a single interferogram can make FTSI sensitive to frequency calibration of the optical spectrum analyzer. These effects are not detrimental to most applications of SI, and can be accounted for [236, 237].

FTSI has applications in cases where one wishes to characterize a weak modulated pulse whose spectrum overlaps completely with that of a known, and usually more intense, reference pulse. This is not an uncommon situation in ultrafast optics, arising wherever linear filters (such as a pulse shaper or stretcher and compressor) are used to manipulate the pulse [238]. It also pertains to some nonlinear optical processes that are used in time-resolved spectroscopy, such as degenerate four-wave mixing [239, 240, 241, 242, 243]. Since the Fourier transform of the optical spectrum can also be measured directly by using temporal scanning, versions of SI based on this principle have been applied in wavelength ranges where direct spectral measurements are difficult [244, 245]. Examples of related techniques can be found in [246, 247, 248, 249].

5.3b. Concatenation

Shearing interferometry may be implemented in the optical frequency domain and thus be used to measure the spectral phase function of the input pulse using itself as a reference [31, 250]. Two delayed replicas of the (unknown) test pulse are generated in an interferometer, and one is frequency shifted with respect to the other. The combined spectrum of the pulse pair is measured by using a spectrometer and a detector array; the pulse spectrum itself can be measured simultaneously with the interferogram [251] or extracted from the shearing interferogram [252]. The important feature is that the frequency shift, or spectral shear, allows two adjacent frequencies in the original pulse spectrum to interfere on an integrating detector. The resulting fringe pattern thus reflects the spectral phase difference between spectral components of the pulse separated by the shear. Extracting the spectral phase of the input pulse therefore requires additional steps. The simplicity of the inversion means that such characterization can be done at very rapid rates: up to a 1kHz refresh rate has been reported to date, limited only by the detector readout time [253, 254].

The SSI interferogram has a similar form to the FTSI interferogram [Eq. (5.12)] except that the dc and ac terms contain different frequency arguments:

D(dc)(ω)=Ĩ(ω+Ω)+Ĩ(ω),
D(ac)(ω)=|Ẽ(ω+Ω)Ẽ(ω)|exp{i[ϕ(ω+Ω)−ϕ(ω)]}.

The spectral phase difference θ(ω)=ϕ(ω+Ω)−ϕ(ω) between two frequencies separated by the spectral shear Ω is extracted from the interferogram by using the FTSI algorithm previously described. It is then concatenated into the spectral phase of the pulse under test, ϕ, by following the formula

ϕ(0)=0,
ϕ[(n+1)Ω]=ϕ(nΩ)+θ(nΩ).
An interpolation of the spectrum on the same grid completes the measurement in the spectral domain. This then gives the electric field in the spectral domain at frequencies [0, Ω, 2Ω, …, NΩ].

In this method, therefore, a sampling of the spectral phase (to within a constant) at intervals of the shear Ω across the pulse spectrum is obtained. According to the Shannon theorem, all pulses with compact support in the domain [−T/2, T/2] may be completely characterized by a sampling of their spectral representation every 2π/T. Thus SSI is able to reconstruct all pulses that have support (i.e., that do not have energy outside this domain) only in the temporal window [−π/Ω, π/Ω]. Moreover, the inversion is unique.
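The concatenation itself is a cumulative sum. The sketch below is illustrative only; it assumes the shear phase θ has already been extracted with the algorithm of Subsection 5.3a and the delay term removed, and it checks the result against a known quadratic phase.

```python
# Sketch of the SSI concatenation step described above.
import numpy as np

def concatenate(theta):
    """theta[n] = phi((n+1)*Omega) - phi(n*Omega); returns phi(n*Omega) with phi(0) = 0."""
    phi = np.zeros(theta.size + 1)
    phi[1:] = np.cumsum(theta)      # phi((n+1)*Omega) = phi(n*Omega) + theta(n*Omega)
    return phi

# Self-check against a known quadratic spectral phase sampled every Omega.
Omega, N = 2e12, 32
n = np.arange(N + 1)
phi_true = 1e-26 * (n * Omega) ** 2
theta = np.diff(phi_true)           # theta(n*Omega) = phi((n+1)*Omega) - phi(n*Omega)
assert np.allclose(concatenate(theta), phi_true)
```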

5.3c. Ambiguities, Accuracy and Precision in Phase Extraction

Ambiguities. Difficulties in reconstruction arise in SSI when the spectrum goes to zero over a region that is large compared with the spectral shear, in which case the spectral phase is not defined for several samples of the SSI phase. In this case, two interferograms measured by using two different values of the shear are needed to reconstruct the pulse. The first returns the spectral phase across each continuous region of the spectrum; the second returns the relative phase between the two discontinuous pieces. Note that zeros of intensity in the time domain do not lead to ambiguities, unless they are associated with zeros in the spectral domain.

When a nonlinear interaction is used to spectrally shear an optical pulse, this difficulty can result in an undeterminable phase. For example, in spectral phase interferometry for direct electric field reconstruction (SPIDER) the single known case for which the data is incomplete is that of a pulse whose spectrum consists of no more than two well-separated components, when the measurement is made using only the pulse itself (i.e., not by means of a separate uncharacterized chirped pulse). By “well separated”, we mean that the spectral intensity is below the noise level of the detection system over a domain that is larger than the shear [86]. For spectra with several such components, it is still possible to obtain the relative phases between them by using several different values of the shear. When a separate independent pulse is used as an ancillary for inducing the spectral shear, even the case of two-component spectra is possible.

Accuracy and precision. For any measurement, testing the accuracy of the reconstruction, i.e., how close the measurement result from the apparatus is to the actual physical quantity, is of primary importance. This is mainly a theoretical task, relying on simulations or equations, for the obvious reason that in most experimental situations the measured field is unknown before the measurement. A measure of the difference between the input target field and the output retrieved field provides the criterion of accuracy. The choice of a measure is, however, somewhat subjective [255, 256].

In SSI, the accuracy of the reconstruction of the spectral phase is perfect in the absence of noise, when the spectral phase function on the sampling interval Ω is represented by a polynomial, and the sampling criterion is satisfied. Therefore it is possible to reconstruct very sharp spectral phase functions, especially those produced by a Fourier-plane pulse shaper. Beyond the sampling limit for the pulse spectrum, the accuracy depends somewhat upon the details of the reconstruction algorithm. In practice, those that use integration over the measured spectral phase give the most accurate results.

In practice the accuracy must be evaluated for each implementation, on the basis of the parameter settings for that piece of apparatus. It is therefore impossible to make a general statement about the accuracy of the spectral shearing method as a whole. However, an instrument using a simple integration algorithm for which the spectral phase is oversampled by a factor of 2 has an accuracy that scales roughly as ten times the noise fraction, where this is defined as the ratio of the variance of the noise to the maximum signal of the interferogram [256]. Although the SSI experimental trace is a single one-dimensional interferogram, its robustness to noise was found to be similar to that of spectrographic techniques requiring the acquisition of a two-dimensional experimental trace [255].

One useful feature of the direct inversion possible in SSI is that it is possible to determine analytically the effect of systematic errors in the apparatus on the estimation of the test field. The primary systematic errors arise from miscalibration of the delay τ that gives rise to the carrier fringes. For example, miscalibration of the spectrometer can lead to some error in the delay calibration, although efficient and simple calibration procedures have been devised [236]. Ultrabroadband pulses also require a carefully designed interferometer [257, 258]. A delay calibration error δτ leads to an additive linear component ωδτ on the spectral phase before concatenation. The integrated phase has an additional component that is quadratic in frequency, δτω²/(2Ω), i.e., an error δτ/Ω is made in the retrieved second-order dispersion. This may alter the duration of the reconstructed pulse compared with the actual pulse. A simple example illustrates the main issues. Consider a Gaussian test pulse with bandwidth Δω, corresponding to a Fourier-transform-limited pulse duration ΔtFTL and to an actual pulse duration Δt0. In the presence of second-order dispersion ϕ(2) and an error δτ/Ω on the second-order dispersion, the actual pulse duration is

Δt0²=ΔtFTL²[1+Δω²(ϕ(2))²/ΔtFTL²],
and the measured pulse duration is
Δt²=ΔtFTL²[1+Δω²(ϕ(2)+δτ/Ω)²/ΔtFTL²].
If the actual pulse is Fourier-transform limited (i.e., ϕ(2)=0 and Δt0=ΔtFTL), Eq. (5.21) can be written simply as
ɛΔt=√[1+(Nɛτ)²]−1,
where ɛΔt=(Δt−Δt0)/Δt0 is the relative error on the pulse duration, ɛτ=δτ/ΔtFTL is the relative error on the delay, and N=Δω/Ω is the degree above the Fourier-transform limit chosen for the sampling window. If the pulse is far from the Fourier-transform limit, Eq. (5.21) can be written simply as
ɛΔt=Nɛτ(ΔtFTL/Δt0).
Orders of magnitude can be obtained from a simple example. For N=20 (i.e., a setup that is arranged to measure pulses up to 20 times the Fourier-transform limit) a 1% delay calibration error corresponds to a 2% error in the estimated pulse duration for a Fourier-transform-limited pulse. Taking the example of a chirped pulse with Δt0=10ΔtFTL, a 1% delay error still leads to a 2% error in the estimated pulse duration. This is not a severe constraint. For example, even for the most extreme case of pulses in the single-cycle regime, where ΔtFTL=2.5fs, the delay must be calibrated to within a path length of λ/100, which requires a well-designed interferometer and proper alignment procedure. In that extreme case, however, all methods have some form of systematic error that must be dealt with carefully. Unfortunately, it is not so simple to ascertain the severity of calibration errors for most methods, and not at all in analytic form.
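These scalings are summarized in the short sketch below, a direct transcription of the two expressions for ɛΔt given above; the function and variable names are illustrative.

```python
# Relative duration error caused by a delay calibration error.
import numpy as np

def duration_error(N, eps_tau, stretch=1.0):
    """N = bandwidth/shear, eps_tau = delta_tau/Dt_FTL, stretch = Dt0/Dt_FTL."""
    if stretch == 1.0:                               # Fourier-transform-limited pulse
        return np.sqrt(1.0 + (N * eps_tau) ** 2) - 1.0
    return N * eps_tau / stretch                     # far-from-limit approximation

print(duration_error(20, 0.01))                      # ~0.02: 2% error for a transform-limited pulse
print(duration_error(20, 0.01, stretch=10))          # ~0.02: 2% error for a 10x chirped pulse
```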

Precision. Because the spectral interferogram is often measured by using a spectrometer with much higher resolution than required by the Shannon theorem, the measured spectrum is actually oversampled for reconstructing pulses on the interval [−π/Ω, π/Ω]. Therefore multiple samples of the phase can be concatenated and used to estimate the precision of the reconstruction. Thus, a first set is constructed by starting at pixel 0 and concatenating every Ω, a second set by starting at pixel 1, and so on. The second set is sampled on a grid that is shifted from the first by δω, the spectrometer sampling rate:

ϕ(δω)=0,
ϕ[δω+(n+1)Ω]=ϕ(δω+nΩ)+θ(δω+nΩ).
The number of different independent determinations of the field M is of the order of Ω/δω. Because of the initial hypothesis that Ω is a sufficient sampling for the field, all of these determinations are equivalent. These sets of data can be used to refine the measurement or reduce the sensitivity to noise [259]. Because the fields are on different sampling grids in the spectral domain, it is not possible to directly sum them to get an average retrieved field. If, however, each of them is Fourier transformed to the temporal domain, where they represent the same electric field on the same sampling grid, then an average of the temporal field can be obtained with no shifting of the temporal fields. The reason is that all the fields retrieved from the same extracted SSI phase θ(ω) will have the same time fiducial and thus be consistent. The constant phase of each of these fields is completely arbitrary, since it depends only on the choice of the initial phase for each set before concatenation and is not determined by the phase θ(ω). This procedure has several technical advantages. First, because it reconstructs several representations of the same electric field, it allows a test of the precision of the reconstruction. Second, because it directly uses the sampling rate Ω, it allows undersampling to be easily recognized. The reconstructed temporal electric field should be equal to zero at the edges of the time window, to be consistent with the condition of finite temporal support compatible with the shear Ω. Any deviation from this condition indicates undersampling.
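A sketch of this procedure is given below. It is illustrative only: it assumes the shear Ω is an integer multiple of the spectrometer pixel spacing δω, it uses an explicit Fourier sum over the absolute frequencies so that every offset lands on the same time grid, and it is not the exact processing of [259].

```python
# Sketch: independent concatenations from an oversampled SSI phase, compared in the time domain.
import numpy as np

def field_from_offset(theta, spectrum, Omega, dw, start, t):
    """Concatenate starting at pixel `start` (theta and spectrum sampled every dw)
    and evaluate E(t) by an explicit Fourier sum over the absolute frequencies,
    so that all offsets give the same field up to a constant phase."""
    step = int(round(Omega / dw))                 # pixels per shear
    idx = np.arange(start, theta.size, step)
    phi = np.concatenate(([0.0], np.cumsum(theta[idx[:-1]])))
    w = start * dw + np.arange(idx.size) * Omega  # frequencies relative to pixel 0
    E_w = np.sqrt(spectrum[idx]) * np.exp(1j * phi)
    return E_w @ np.exp(-1j * np.outer(w, t))     # temporal field on the common grid t

# The spread among the fields obtained for start = 0 ... step-1 estimates the precision
# of the reconstruction; their average is a refined retrieved field.
```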

5.4. Time-Domain Interferometric Measurements

Although we have focused on SI for the purposes of discussing inversion methods, these can be applied equally well to a signal measured in the time domain. As mentioned in Subsection 5.2, this requires a detector with high temporal resolution, rather than one with high spectral resolution. Typically these are difficult to find for femtosecond-duration pulses. Nonetheless, some of the earliest work in interferometric characterization of optical pulses was done in the time domain.

5.4a. Test-Plus-Reference Temporal Interferometry

A time-domain version of reference-pulse-based interferometry was developed by Rothenberg and Grischkowsky [260]. In their approach, a spectral filter is placed in one arm of an interferometer. The monochromatic frequency component resulting from the spectrally filtered path provides an effective reference with which to compare the pulse that passes through the unfiltered arm of the interferometer. The resulting temporal interferogram contains sufficient information for reconstructing the pulse shape of the unaltered pulse. The fringe pattern is measured by a photodetector. The response of the detector determines the maximum bandwidth Δω of the test pulse: the fastest temporal beats occur at frequency Δω/2. Constraints on the available detector time resolution limit this method to the measurement of relatively long pulses.

5.4b. Self-Referencing Temporal Interferometry

In the femtosecond regime, a fast-response detector may be synthesized by using a nonlinear optical wave-mixing process, such as upconversion, with the test pulse, which sets the temporal resolution to be close to the duration of the input pulses. Consequently, the narrow-time-gate assumption is not valid for frequency separations, Δω, greater than a small fraction of the pulse bandwidth, since the temporal beat note is too fast to resolve.

The narrow-time-gate approximation does hold for small frequency separations, so slices of the two-frequency correlation function near Δω=0 can be recorded. If the pulses in the train are assumed to be identical, a sampling of one such slice is sufficient for reconstructing the pulse electric field. When coherence is assumed, the phase of the two-frequency correlation function is no more than the phase difference between the selected spectral components. When coupled with knowledge of the pulse spectrum, the spectral phase differences for a set of frequencies separated by Δω provide ample information for reconstructing the pulse electric field. This is precisely the approach adopted by Chu and coworkers [261, 262] in their direct optical spectral phase measurement (DOSPM). DOSPM uses an apparatus in which a pair of adjustable slits is placed in the Fourier-transform plane of a zero-dispersion pulse stretcher. This spectral filter with dual passbands of adjustable center frequencies is equivalent to a pair of in-parallel single-frequency spectral filters.

A related approach to sampling the two-frequency correlation function by means of interferometry has been developed by Prein et al. [263]. In this approach, an ultrafast photodiode and Schottky diode nonlinear mixer are used to record the temporal intensity beats between adjacent pairs of spectral components of the input pulse that are separated by a gigahertz or so in frequency. The relative phase of the beats for different spectral pairs is related to the spectral phase difference between the two wavelengths, permitting the spectral phase function itself to be reconstructed.

The temporal interference between two optical frequencies can be resolved in principle by using a much slower photodetector. This can be achieved by stretching two temporally delayed replicas of the pulse under test by using chromatic dispersion [264, 265]. For a delay τ and large second-order dispersion ϕ(2), two optical frequencies separated by τ/ϕ(2) interfere at a given time t, and it is possible to recover the spectral phase difference between these frequencies, for example, by using a Fourier-transform algorithm [266, 267]. One difficulty with this approach is that the reconstructed spectral phase is that of the stretched pulse, and accurate characterization of the large chromatic dispersion added to the pulse to perform the measurement is mandatory. This, however, does not hinder the ability of this technique to accurately quantify the chromatic dispersion of the element used to stretch the pair of pulses.

If the source has a high duty cycle, it suffices to measure the phase difference between well-separated spectral modes with a fast photodiode used as a nonstationary element. This can be achieved by isolating the two modes, for example at frequencies Ωn and Ωn+1 and measuring the resulting temporal intensity with a bandwidth larger than the mode spacing Ω [268, 269, 270].

An alternative way to separate two spectral components is to use the spatial multiplexing properties of SI. For example, in spectral interferometry resolved in time (SPIRIT), two replicas of the pulse under test are spatially dispersed so that a point x in space corresponds to a frequency ω from the first pulse and frequency ω+Ω from the second pulse [271, 272, 273]. One way of achieving this is to send two noncollinear beams on a diffraction grating and focus the diffracted beams with a lens. The phase of the beating between these replicas, which leads to the corresponding spectral phase difference, can be read in the time domain by using a nonlinear cross-correlation with a short optical pulse, e.g., the pulse under test itself.

5.5. Spectral Phase Interferometry for Direct Electric Field Reconstruction

Self-referencing SI relies on the interference between two frequency-sheared replicas of the input (test) field. These may be obtained from a single input pulse by either linear or nonlinear means. It is, of course, preferable to use the former where at all possible: the current technological limit is to pulses of at least 100fs duration. For durations shorter than this, nonlinear means of generating a frequency shear must be employed. Nonlinear methods are therefore important in the regime of ultrashort optical pulses, while linear techniques are more appropriate for low-energy pulses with durations longer than 100fs.

SPIDER is an implementation of shearing interferometry in the optical domain, using nonlinear means to obtain a relative frequency shift between two replicas of the test pulse [274, 275]. This spectral shear is obtained by nonlinear mixing of both delayed replicas of the pulse with a chirped pulse in a nonlinear crystal. This leads to a shift of each replica by a different frequency because of the change of the instantaneous frequency in the chirped pulse over the delay between the replicas. This, in turn, gives rise to a relative shear between the two replicas.

5.5a. Generic SPIDER

A generic SPIDER apparatus suitable for the measurement of pulses in the optical region of the spectrum is shown in Fig. 37 [5]. Two pulse replicas with a delay τ are generated in a Michelson-type interferometer or an etalon. A strongly chirped pulse is generated by a dispersive delay line inducing the second-order dispersion ϕ(2). The chirped pulse and the two time-delayed replicas are mixed in a crystal cut for sum-frequency generation (SFG; in the case of a type II nonlinear interaction, a half-wave plate is introduced into the optical path of the chirped pulse). The chirp introduced by the delay line is adequate to ensure that each pulse replica is upconverted with a quasi-cw field, and the delay ensures that each of the replicas upconverts with a different frequency component of the input pulse spectrum (ω0 and ω0+Ω), leading to an output consisting of two identical pulses with a spectral shear Ω=τ/ϕ(2). The spectral representations of the sheared pulses Ẽ1 and Ẽ2 are centered near twice the carrier frequency of the input pulse being characterized. When the sheared pulses are interfered, the frequency-resolved signal is related to the input pulse field by

Ĩ(ω)=|Ẽ1(ω)+Ẽ2(ω)|²=|Ẽ(ω−ω0−Ω)ẼR(ω0+Ω)|²+|Ẽ(ω−ω0)ẼR(ω0)|²+2|Ẽ(ω−ω0−Ω)||Ẽ(ω−ω0)||ẼR(ω0+Ω)||ẼR(ω0)|cos[ϕ(ω−ω0−Ω)−ϕ(ω−ω0)−ϕR(ω0+Ω)+ϕR(ω0)+ωτ].
Thus the spectral fringe pattern (as a function of ω) is determined by the spectral phase difference ϕ(ω−ω0−Ω)−ϕ(ω−ω0) between two frequencies in the test pulse separated by the shear Ω. This is exactly as in SSI, and therefore the inversion algorithms outlined previously enable the spectral phase of the test pulse to be extracted. Alternate processing techniques for SPIDER can be found in [276, 277, 278].
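For orientation, the relation Ω=τ/ϕ(2) fixes the main design trade-off of the instrument. The short sketch below, with purely illustrative numbers, computes the shear, the corresponding temporal support of measurable pulses, and the spectral fringe period that the spectrometer must resolve.

```python
# Illustrative SPIDER parameter choice: shear, temporal window, and fringe period.
import numpy as np

tau = 2e-12                 # delay between the two test-pulse replicas (s)
phi2 = 2e-24                # second-order dispersion of the chirped ancilla (s^2), i.e. 2 ps^2
Omega = tau / phi2          # spectral shear (rad/s)

print(f"shear: {Omega:.1e} rad/s")
print(f"temporal support: +/- {np.pi / Omega * 1e12:.1f} ps")   # pulses within [-pi/Omega, pi/Omega]
print(f"fringe period: {2 * np.pi / tau:.1e} rad/s")            # must be resolved by the spectrometer
```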

The term linear in frequency ωτ in the argument of the cosine is removed by using a calibration phase either at the fundamental or harmonic wavelength that is characteristic of the instrument and that must be taken once [275]. This reference phase exactly corrects any influence of the calibration of the spectrometer on the SPIDER interferogram [236]. This calibration of the device must be accurate, and may be done a priori (thus the signal can be integrated or averaged). It is frequently a linear measurement that can be performed around the wavelength of the pulse under test. The pulse spectrum can be measured simultaneously with the SPIDER interferogram and on the same experimental trace [251], or extracted from the SPIDER interferogram [252].

Note that the difference of the spectral phases of the ancillary stretched pulse gives rise to an unknown constant phase, which is set arbitrarily to zero. Thus the pulse is reconstructed completely except for the carrier-envelope offset phase and the exact time of arrival of the pulse with respect to an external clock, as is usually the case for self-referencing pulse characterization techniques.

SPIDER has a number of important features that make it particularly suitable for certain applications. First, the rapidity of the data acquisition and inversion means that the reconstruction is not compromised by instability of the source. Moreover, the inversion algorithm returns the mean spectral phase when the signal is averaged over small random fluctuations in the pulse shape. The update rate for pulse shape reconstruction is usually limited by the time needed to acquire the traces: the algorithm itself runs at over 1kHz [11, 253, 254].

Second, accurate measurement of the spectral phase does not require the recorded trace to be corrected for the phase-matching function of the nonlinear process or the detector sensitivity [279, 280]. The key to this remarkable robustness is that the phase information is contained in the fringe spacing rather than the visibility, and this is not compromised by wavelength-dependent responsivity in the apparatus, provided the sensitivity does not vary across one fringe.

5.5b. Cross-Correlation SPIDER

Equation (5.25) shows that the chirped pulse need not be derived from the input test pulse: it can come from an entirely separate (though synchronized) laser system, and be at quite a different wavelength than the test pulse. This version of the technique is called X-SPIDER and has been used to characterize pulses in the blue [281] and visible regions of the spectrum [254]. There are obvious technical advantages to using a high-energy pulse as this kind of ancilla, including an improved signal-to-noise ratio, especially for weak test pulses, and the possibility for frequency shifting to regions that are favorable for particular detectors, such as Si-based CCD arrays.

5.5c. Homodyne Optical Technique for SPIDER

SI has the important advantage over SPIDER that it is linear in both the test pulse field and the ancillary pulse field (in SI this is the reference pulse). This advantage is characteristic of all homodyne detection methods. However, the major disadvantage is that in SI the ancillary reference pulse must be well characterized. It is possible, however, to apply the methods of homodyne detection to self-referencing SI, in which case a significant gain in sensitivity is possible.

In the homodyne optical technique for SPIDER (HOT-SPIDER), the upconverted pulse corresponding to each of the two frequency shifts ω0 and ω0+Ω is interfered sequentially with a local oscillator (LO) [282]. After shifting by ω0, using nonlinear conversion with a chirped pulse, the spectral phase difference θ1(ω)=ϕLO(ω)−ϕ(ω−ω0)+ωτLO is extracted from the resulting interferogram, where ϕLO is the spectral phase of the local oscillator and τLO is the delay between the replica and the local oscillator. For the replica shifted by ω0+Ω (obtained for example by delaying the chirped pulse used for nonlinear conversion by τ), the extracted spectral phase difference is θ2(ω)=ϕLO(ω)−ϕ(ω−ω0−Ω)+ωτLO. Note that the shear is set by the delay τ, and the delay between the sheared replicas and the local oscillator remains constant. Subtracting the two phases thus gives θ2(ω)−θ1(ω)=ϕ(ω−ω0)−ϕ(ω−ω0−Ω), which is identical to the phase difference obtained with standard SPIDER after removal of the delay term. The standard recovery algorithm can then be applied to extract the spectral phase of the test pulse.

Apart from the increase in sensitivity that can be obtained with a high-energy local oscillator, HOT-SPIDER is automatically calibrated and does not require two replicas of the input pulse to be made simultaneously. Furthermore, since the shear is not set by the delay τLO, this delay can be set to a small value, so that HOT-SPIDER can operate with greatly reduced spectrometer resolution. This, too, leads to a higher signal-to-noise ratio and thus greater accuracy in reconstructing the pulse field. HOT-SPIDER setups based on temporal scanning [283, 284] and dual-quadrature detection [285] have also been demonstrated.

5.5d. Spatially-Resolved SPIDER

SPIDER is particularly well suited for measuring space–time coupling in ultrashort pulses because the temporal dependence of the field at a single point in the beam can be reconstructed from a one-dimensional measurement—a single spectral interferogram. Therefore, an imaging spectrometer and a two-dimensional detector array (a CCD camera, for example) enables measurements of the spatial dependence of the temporal field [286]. Moreover, the noniterative reconstruction algorithm enables rapid processing of the large amount of data resulting from the additional degree of freedom. This simple extension does not require any prior knowledge of the spatial chirp of the pulse before the apparatus, nor does it require the beam to be spatially filtered, as is often the case in autocorrelation-based measurements.

For a SPIDER device based on SFG, for example, the two broadband test pulse replicas are mixed with two quasi-cw slices of a strongly chirped ancillary pulse. In the spectral domain, each of these SFG processes corresponds to the convolution of a broadband spectrum with a narrowband spectrum. As a result, the input beam becomes shifted by a constant frequency and is multiplied by the spatial mode pattern of the particular quasi-cw slice. Therefore the spatial chirp of the ancilla does not cause a frequency-dependent efficiency, since the same cw slice is mixed with each frequency component of the test pulse. Thus only the spatial intensity pattern, fringe contrast, and an undetermined phase constant of the SPIDER signal are affected. Importantly, the spectral fringe spacing is unaffected by spatial chirp. Because SPIDER uses only the latter for spectral phase reconstruction, it works correctly even in the presence of significant spatial chirp.

For the success of this method, it is important that the spatial phase information be preserved during the nonlinear interaction and acquisition of the interferogram. This can be achieved by focusing the two time-delayed replicas of the input pulse into the nonlinear crystal, together with the unfocused chirped pulse. As the beam size of the focused replicas at the image plane is very small compared with the size of the unfocused chirped pulse, frequency conversion preserves the spatial information. A second recollimating lens then gives two frequency-shifted replicas, Ẽ(x,ω−ω0) and Ẽ(x,ω−ω0−Ω). These fields may also be obtained by mixing the unfocused fundamental pulses with a spatially expanded chirped pulse. This eliminates any space–time coupling due to the focusing optics. The spectral interferogram is measured as a function of x and ω with a two-dimensional imaging spectrometer having its entrance slit oriented along x. Independent processing of each line of the interferogram leads to the spectral phase of the pulse at the corresponding spatial location and, hence, the temporal pulse shape at this location up to a constant phase and delay. Therefore, variations in the pulse shape are revealed, but no wavefront information is obtained.

5.5e. Space–Time SPIDER

In fact it is possible to go beyond spatially resolved SPIDER measurements and relate the spectral phases at each point in the beam. This provides a unique capability: the measurement of the complete spatiotemporal field, including spectrally dependent wavefront distortions [287]. The device operates by measuring two orthogonal gradients of the phase ϕ(x,ω) as a function of space and frequency, the spectral phase gradient ∂ϕ/∂ω(x,ω) and the spatial phase gradient ∂ϕ/∂x(x,ω), and reconstructs the phase from these gradients by using standard spatial shearing interferometry algorithms.

The spectral phase gradient at each point in the beam, ∂ϕ/∂ω(x,ω), can be measured by using a spatially resolved version of SPIDER, as described in Subsection 5.5d above. The spectrally resolved spatial phase gradient ∂ϕ/∂x(x,ω) is measured by imaging the input beam at the fundamental frequency onto the slit of the two-dimensional spectrometer through a Michelson interferometer. This interferometer provides independent control of the shear, tilt, and delay between the two interfering pulses. One can use a combination of delay and tilt to provide fringes in the interferogram measured by the imaging spectrometer, which allows the extraction of the interferometric component by Fourier-transform techniques. For a small shear X, the gradient ∂ϕ/∂x(x,ω) can be approximated by [ϕ(x+X,ω)−ϕ(x,ω)]/X.
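One simple way to stitch the phase from these two measured finite-difference gradients is sketched below. This is an illustration under the assumption that the grid spacings equal the shears Ω and X; it is not necessarily the reconstruction procedure used in [287].

```python
# Sketch: stitching phi(x, w) from finite-difference gradients along w and x.
import numpy as np

def stitch_phase(dphi_dw, dphi_dx, Omega, X):
    """dphi_dw[i, j] ~ [phi(x_i, w_j + Omega) - phi(x_i, w_j)] / Omega
       dphi_dx[i, j] ~ [phi(x_i + X, w_j) - phi(x_i, w_j)] / X
       Assumes the (x, w) grid spacings equal X and Omega; returns phi up to one constant."""
    nx, nw = dphi_dw.shape
    phi = np.zeros((nx, nw))
    phi[0, 1:] = np.cumsum(dphi_dw[0, :-1]) * Omega           # integrate along w for the first row
    phi[1:, 0] = np.cumsum(dphi_dx[:-1, 0]) * X               # offset each row via the spatial gradient
    phi[1:, 1:] = phi[1:, :1] + np.cumsum(dphi_dw[1:, :-1], axis=1) * Omega
    return phi
```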

The spatially resolved SPIDER interferogram, near the second-harmonic wavelength of the input pulse, and the spectrally resolved lateral shearing interferogram, at the fundamental wavelength, can be recorded simultaneously on a single two-dimensional detector by using the first and second diffraction order of the grating of the spectrometer. Single-shot operation of the device is therefore possible. Both phase gradients can be extracted simply from the single data set because they can be encoded differently in the superimposed interferograms. The spectral phase gradient is extracted from fringes due to the delay between interfering pulses, i.e., that lie predominantly parallel to the spatial axis of the interferogram. The spatial phase gradient is obtained from fringes set by the Michelson interferometer; for example, tilt will lead to fringes that are predominantly parallel to the spectral axis. An example of spatiotemporal characterization of the field of a pulse after propagation through a prism, which causes space–time coupling by virtue of angular dispersion, is shown in Fig. 38. The induced angular dispersion is seen as the phase ϕ(x,ω)=γωx, where γ is the proportionality constant between optical frequency and the wave vector due to the prism angular dispersion. In the spatiotemporal domain, the pulse-front tilt manifests itself as the coupling between time and space.

5.5f. Spatially Encoded Arrangement for SPIDER

In SSI, the spectral sampling rate of the detected signal must be twice the Whittaker–Shannon limit for the test pulse. In spectrography, however, it can be at the limit. This means that a lower resolution spectrometer may be used for pulses of the same bandwidth and temporal support in spectrography than in SSI. In practice, an implementation of SSI such as SPIDER operates at a significantly higher sampling rate (typically between 5 and 10 times the Whittaker–Shannon limit) because of the coupling of the spectral shear and the temporal delay required for encoding the phase into the interferogram. By contrast, nonlinear spectrographic methods, such as frequency-resolved optical gating (FROG), typically operate at only a few times the Whittaker–Shannon limit. The advantage of the oversampling in SI is that the inversion of the data to the spectral phase does not require iteration and is insensitive to variations in the spectral response of the apparatus. The disadvantage is that a higher resolution spectrometer is needed than is strictly necessary for the pulse at hand.

A way around this problem is provided by encoding the spectral correlation function, which contains the required spectral phase information, into a spatial fringe pattern, hence the acronym SEA-SPIDER for spatially encoded arrangement for SPIDER [288, 289]. In this case the spectral resolution of the spectrometer can be exactly at the sampling limit, though the spatial resolution must be correspondingly beyond the spatial sampling limit. Further, this approach requires only a single copy of the test pulse, thereby eliminating extraneous optics required to produce a replica. This is important for extremely broadband pulses.

The apparatus is configured as shown in Fig. 39. The beams are arranged so that the test pulse is mixed with two noncollinear chirped ancillary pulses in a nonlinear crystal, and two frequency-shifted and sheared replicas are generated. These propagate in different directions, set by the phase-matching angles in the crystal, and are brought together to interfere in an imaging spectrometer. Since there is no temporal delay between the two beams, there are no spectral fringes: hence the spectrometer resolution can be the minimum required by the sampling theorem. On the other hand, because the beams are at an angle with respect to one another, there are spatial fringes, which are resolved on the CCD camera. A straightforward modification of the SI inversion algorithm enables the SPIDER phase for the pulse at position x in the beam to be extracted. This is achieved by taking a Fourier transform with respect to both position and frequency to separate the spectral correlation function from the spatiospectral intensities. The spatiospectral phase extracted from the interferogram is ϕ(x,ω−ω0)−ϕ(x,ω−ω0+Ω)+Kx, where K is the difference in the mean transverse wave vectors of the interfering beams (their tilt with respect to each other). An example of a SEA-SPIDER interferogram measured for pulses in the few-cycle regime generated by means of a highly nonlinear process is shown in Fig. 40(a). The Wigner function of the reconstructed pulse [Fig. 40(b)] shows that the pulse has a slight positive chirp and temporal structure separated from the main peak, which is very short. The marginals of the Wigner function shown in Figs. 40(c), 40(d) indicate that the temporal intensity profile is a little longer than a transform-limited pulse of the same bandwidth, owing to the residual spectral phase induced by the finite bandwidth of the chirped mirrors.
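The Fourier filtering used here is a two-dimensional variant of the FTSI algorithm. A minimal version is sketched below; it is illustrative only, filtering along the spatial axis alone (sufficient since there is no temporal carrier), and the names and filter shape are assumptions rather than the processing of [288, 289].

```python
# Sketch: extracting the SEA-SPIDER phase by filtering the spatial-carrier sideband.
import numpy as np

def sea_spider_phase(S, x, K, kwidth):
    """S[i, j]: interferogram at (x_i, w_j). Returns the unwrapped phase of the
    sideband, i.e. the sheared spectral phase difference plus the K*x tilt term."""
    kx = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])   # spatial-frequency axis
    F = np.fft.fft(S, axis=0)                                # transform along the spatial axis
    H = np.exp(-(((kx - K) / kwidth) ** 4))                  # pass band around the +K sideband
    sideband = np.fft.ifft(F * H[:, None], axis=0)
    return np.unwrap(np.angle(sideband), axis=1)
```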

5.5g. Long-Crystal SPIDER and ARAIGNEE

The spectral filtering properties of the phase-matching function of wave mixing in a long nonlinear medium may also be used to simplify the SPIDER apparatus [Figs. 41(a), 41(b)]. In conventional SPIDER, a spectral shear between two test pulse replicas is produced when they upconvert with different quasi-monochromatic slices of a highly chirped ancillary pulse in a thin nonlinear crystal. An alternative approach is to effect the mixing of a broadband test pulse with a narrowband ancilla in a single, long nonlinear crystal. In such a crystal oriented for type II SFG the incident pulse propagating as an ordinary wave (o wave) has a large acceptance bandwidth, whereas the extraordinary wave (e wave) has a much narrower acceptance bandwidth. This highly asymmetric phase-matching function shape is due to a group-velocity match between the o fundamental input and the e upconverted output and a group-velocity mismatch between the e fundamental and the e upconverted fields. As a result, the ordinary test pulse is upconverted with a single e-ray frequency, resulting in its replication at the upconverted frequency, as shown in Fig. 41(b) [290]. The angle of propagation relative to the crystal optic axis determines the frequency of the narrowband component of the e wave, which upconverts with the entire spectrum of the o wave, providing the spectral shear necessary for SSI. If the e-ray pulse walks completely through the o-ray pulse as it propagates through the crystal, then the upconverted o-ray pulse is a spectrally shifted replica of the fundamental test pulse.

The apparatus to effect a spectral shear is shown in Fig. 41(c). The test pulse is split into two orthogonally polarized components by a wave plate, and the e-ray is advanced with respect to the o-ray on transmission through a piece of linear, low-dispersion, birefringent crystal such as quartz. The pulses are further split into two by reflection from a split mirror, which directs each time-delayed pair into the nonlinear crystal, each pair propagating at a different angle to the c axis. Each pair mixes so that two spectrally sheared replicas of the test pulse are produced by SFG. At the output of the crystal, the two beams are overlapped by means of a 10cm lens onto the entrance slit of a compact grating spectrometer. The resulting spectral interferogram is processed in the same way as a normal SPIDER trace. In keeping with established practice, this is known as “another ridiculous acronym for interferometric geometrically-simplified noniterative E-field extraction” (ARAIGNEE, the French word for SPIDER) [291].

5.5h. Zero-Added-Phase SPIDER

Zero-added-phase SPIDER (ZAP-SPIDER) does not require replication of the test pulse and makes use of two chirped pulses to upconvert with a single test pulse. Because no replica needs to be made, the optics seen by the test pulse are minimal and may all be completely reflective. This means that they add no spectral phase to the test pulse; hence the acronym [292, 293]. The two ancillae nevertheless generate two upconverted and frequency-sheared replicas that may be interfered to obtain a spectral interferogram. The important innovation to note is that the upconversion process may be angularly multiplexed. That is, each ancilla mixes with the test pulse at a slightly different angle, still within the angular and spectral acceptance bandwidth of the nonlinear crystal. This generates two upconverted pulses from the single test pulse that propagate in different directions, owing to the phase-matching conditions imposed by the nonlinear process. The upconverted pulses are recombined into copropagating beams by using mirrors that also introduce a delay, so that the spectral interferogram is identical in form to the SPIDER interferogram. The same constraints on the chirp of the ancillae apply here as in SPIDER, but ZAP-SPIDER shares with HOT-SPIDER the independence of the delay and shear, which again provides an improvement in the signal-to-noise ratio.

5.5i. Calibration-Parameter Encoding: Two-Dimensional Spectral Shearing Interferometry

Spatial encoding of the spectral phase may be viewed as using the calibration parameter as the encoding variable. This approach may also be implemented by modulating the pulse delay in the conventional SPIDER apparatus. The measured signal is then a function of the frequency, as in the conventional SPIDER detection, and also a function of the delay; thus the fringe pattern is two-dimensional. The functional form of the interferogram is very similar to that of SEA-SPIDER, except that the spatial variable x is replaced by the delay variable τ. The fringes are given by the loci of constant value of the SPIDER phase function ϕ(ω−ω0)−ϕ(ω−ω0+Ω)+ω0τ, and the spectral phase of the test pulse may be extracted by using the SEA-SPIDER algorithm. This approach has similar advantages to SEA- and ZAP-SPIDER for ultrabroadband pulses, since it may be implemented without requiring replication of the test pulse [258]. In this apparatus, illustrated in Fig. 42(a), the ancilla is replicated in a Michelson-type interferometer, and the delay between the two ancillae is modulated. The combined ancillae are upconverted by mixing with the test pulse, and the resulting sum-frequency spectrum is recorded as a function of this delay. Exemplary measured interferograms are shown in Fig. 42(b) for a pulse close to the Fourier-transform limit and in Fig. 42(c) for a pulse stretched by propagation through 1mm of fused silica, where the chirp is revealed by the tilt of the fringes. The fringe-resolved autocorrelation calculated from the extracted pulse field is compared with the measured autocorrelation in Fig. 42(d): the excellent agreement between the two traces emphasizes the suitability of this approach for ultrabroadband pulses in the single-cycle regime.
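
A minimal sketch of how the sheared phase difference might be pulled out of such a delay-encoded interferogram is given below. It assumes the delay axis spans an integer number of fringe periods at the known carrier frequency; the function name and array conventions are illustrative choices, not the published procedure.

```python
import numpy as np

def two_dsi_phase(S, delays, omega0):
    """S[i, j] is the sum-frequency spectrum recorded at delay delays[i] and
    frequency sample j; the fringes along the delay axis oscillate at the
    carrier omega0.  Returns the sheared phase difference versus frequency."""
    carrier = np.exp(-1j * omega0 * delays)   # project onto the fringe carrier
    projection = carrier @ S                  # sum over the delay axis
    return np.unwrap(np.angle(projection))
```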

5.6. Self-Referencing Spectral Interferometry Based on Linear Temporal Phase Modulation

A spectral shift can be obtained directly by linear temporal phase modulation. In fact, the first proposals of SSI were made along these lines [31, 250]. A linear temporal phase modulation exp(iΩt) directly induces a spectral shear on an optical pulse, provided that the temporal phase modulation is linear over the temporal support of the pulse. Since the relative shear in SSI must be of the order of a few percent of the bandwidth of the source under test, this approach has become practical only with the development of high-efficiency, high-speed phase modulators based on lithium niobate. In these modulators, the voltage drive modulates the optical index via the electro-optic effect. This has been implemented by using a sinusoidal drive or a pulse generator for characterization of pulses in various wavelength ranges [294, 295, 296]. A modulator driven by a sinusoidal RF voltage at a frequency f, V(t)=V0sin(2πft), induces the temporal phase π(V0/Vπ)sin(2πft), where Vπ is the voltage necessary to obtain a π phase shift. The temporal phase can be linearized around one of its zero crossings to give the temporal phase modulation (2π²V0f/Vπ)t, which identifies the induced spectral shear Ω=2π²fV0/Vπ. Large shears are then obtained by using large voltages V0 in high-efficiency, low-Vπ modulators at high frequency f (or equivalently, high-bandwidth voltage pulses). Symmetric setups in which two optical pulses are spectrally sheared in opposite directions have been used. This is naturally obtained by sending two pulses separated by a delay τ in a phase modulator driven by a sinusoidal drive with a period equal to 2τ, as pictured in Fig. 43(a). The linear implementations of SSI, like other linear techniques, are highly sensitive. Figure 43(b) presents the spectral phase of a short optical pulse measured at three different average powers. Accuracy is preserved even with power lower than 1μW. In fact, the SSI setup described in [296] can characterize a 1nJ pulse in single-shot operation. Furthermore, pulses with long temporal support can be measured as long as a linear temporal phase modulation can be maintained, and pulses stretched to tens and even hundreds of times the Fourier-transform limit have been characterized. Spectral shearing interferometry can also be used without a delay between the two interfering pulses, in which case the phase of the interferometric component can be retrieved by scanning the relative phase of the interfering pulses [297].
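
The shear available from a given modulator follows directly from the expression above. A small numeric sketch, with assumed and purely illustrative drive parameters (not values from any reported setup), is shown below.

```python
import numpy as np

# Assumed, illustrative parameters for a low-Vpi lithium-niobate phase modulator.
f_rf = 20e9        # sinusoidal drive frequency f (Hz)
V0   = 10.0        # drive amplitude V0 (V)
V_pi = 3.0         # half-wave voltage Vpi (V)

# Linearizing pi*(V0/Vpi)*sin(2*pi*f*t) around a zero crossing gives the
# temporal phase (2*pi**2*V0*f/Vpi)*t, i.e. the spectral shear below.
shear = 2 * np.pi**2 * f_rf * V0 / V_pi            # angular frequency (rad/s)
print(f"spectral shear: {shear / (2 * np.pi) / 1e9:.0f} GHz")
```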

5.7. Techniques for Sources with Discrete Spectral Modes

There exist implementations of SSI adapted to the characterization of periodic sources with high duty cycles. These sources have an optical spectrum composed of a small number of spectral modes at the frequencies {Ωn} separated by the repetition rate of the source Ω, and complete characterization of the corresponding electric field is obtained by measuring the intensity and phase of each mode. The phase difference between the spectral modes can be inferred by measuring the beating between two adjacent modes in the time domain. A significant reduction of the photodetection bandwidth requirement is achieved if a time-nonstationary modulation is performed before photodetection. For example, modulation of the periodic source under test at the frequency Ω/2 generates sidebands of each mode; i.e., the mode at frequency Ωn leads to sidebands at Ωn−Ω/2 and Ωn+Ω/2. The lower and upper sidebands of the two successive modes Ωn and Ωn+1 therefore interfere at the optical frequency Ωn+Ω/2, and the relative phase between modes can be recovered from the spectrally resolved interference measured for a plurality of relative phases between the optical source and the temporal modulation [20]. If the modulation is performed at a frequency slightly offset from half the repetition rate of the source, the sideband interference occurs at a small nonzero frequency. It can be measured with a low-bandwidth photodetector after the modulated source has been filtered around the sidebands being measured [298]. In practice, it might be more convenient to use a modulation at the frequency of the source under test, since it can be recovered easily by direct photodetection or from the drive of the optical source in some cases. For a modulation at the frequency Ω, the spectral density at a given mode Ωn is impacted by the phase of the modes Ωn−1, Ωn, and Ωn+1. The spectral phase can nevertheless be reconstructed by using the optical spectra measured for different relative delays between the source under test and the modulation [299].

A nonlinear technique was recently demonstrated to characterize sources with discrete modes: the source with modes spaced by Ω is nonlinearly mixed with two CW sources at frequencies separated by Ω [300]. As in SPIDER variants using only one test pulse and two chirped pulses, nonlinear mixing of the test pulse with each monochromatic frequency leads to two spectrally sheared pulses. The spectrum of the two interfering pulses can be measured by using an optical spectrum analyzer, and extraction of the interferometric term requires the measurement of several interferograms for different relative phases between the two monochromatic sources (e.g., the four relative phases 0, π/2, π, and 3π/2 are sufficient to extract the two quadratures of the complex interference term). An advantage of this approach is that it can be used for sources with high duty cycles.
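
The last step, recovering the complex interference term from four phase-shifted spectra, is standard four-step phase shifting. The following minimal sketch illustrates it; the function name and the sign convention are assumptions made here.

```python
import numpy as np

def four_step_interference(S0, S90, S180, S270):
    """Recover the complex interference term from four spectra recorded with
    relative phases 0, pi/2, pi, and 3*pi/2 between the two monochromatic
    sources.  Assumes S_theta = A + B*cos(delta_phi - theta) at each frequency."""
    real_part = 0.5 * (S0 - S180)      # ~ B * cos(delta_phi)
    imag_part = 0.5 * (S90 - S270)     # ~ B * sin(delta_phi)
    return real_part + 1j * imag_part  # np.angle(...) gives the sheared phase difference
```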

5.8. Conclusions

Interferometry has proved to be a reliable and flexible method for measuring ultrashort pulsed fields. A particularly useful feature of this approach is the direct encoding of the spectral phase in the experimental data, allowing simple, robust, and rapid inversion algorithms. In its test-plus-reference form, it is suitable for use with a known reference that can be obtained from any of the many characterization methods and is very sensitive, reaching the quantum limit for measurement of the quadrature amplitude of the fields and providing a direct estimation of the quantum state of the light pulse [301]. Self-referencing interferometry extends the scope of interferometric measurements to the case where no appropriate reference pulse exists. One form, spectral shearing interferometry (SSI), is particularly adapted to the measurement of broadband pulses and has been implemented for a wide range of wavelengths and pulse durations.

6. Current Areas of Research

The field at present is moving in a number of different directions. Now that the basic principles are well established, and the simplest and most reliable apparatuses have been demonstrated, the application to more complex fields is underway. This includes determining the space–time structure of a pulse, rather than simply its temporal structure, as well as identifying the fields of pulses that are not close to the transform limit or that may have highly structured spectra. In another direction, the techniques developed for the optical domain are finding application in the characterization of pulses in quite different spectral and temporal domains, including that of attosecond-duration extreme UV (XUV) pulses generated via high-harmonic radiation from atoms. In this section, we highlight a number of areas of current research activity that are particularly promising in terms of new methods and new applications.

6.1. Attosecond Metrology

It is now possible to produce pulses whose duration lies in the attosecond regime, with mean wavelengths correspondingly in the XUV region of the spectrum. With these pulses one can study processes that have characteristic time scales of attoseconds, namely, electronic dynamics in atoms, in molecules, and on surfaces. Some notable achievements in the emerging field of attoscience include the generation of x-ray pulses with a duration of 650as [302], trains of 250as pulses [303, 304], and the creation of individual sub-200as pulses [305] that can be used to measure electron motion with a temporal resolution of 100as [212]. At present, it is not possible to produce and measure attosecond pulses routinely and easily. This is mainly because the bandwidth required is extraordinarily large and the mean wavelength of the pulse is in a region of the spectrum where there are no standard linear or nonlinear materials. Therefore there are limited options for optics that can be used to manipulate these pulses. The constraints of the extreme mean wavelength and the extreme bandwidth pose many practical and physical limitations on the generation, manipulation, and detection of attosecond pulses.

The primary method of creating attosecond pulses is to generate XUV light via the interaction of an intense, phase-stabilized few-cycle mid-IR laser pulse with an atomic or molecular gas. This process is known as “high-harmonic generation” (HHG) [306, 307, 308]. In fact, the nonlinearity associated with this interaction can itself form the basis for measurements. However, the most common nonlinear interaction is to mix the XUV pulse with an optical pulse in another gas of atoms. The nonlinearity arises because when the XUV pulse ionizes the atoms, the ionized electrons remain sufficiently close to the ion for long enough to absorb radiation from the optical pulse that is simultaneously present. Therefore the ionized electron energy is shifted with respect to what it would be without the presence of the optical field. This energy shift can be observed by using a photoelectron spectrometer. The details of the modification of the photoelectron energy spectrum depend on the details of the XUV pulse that is to be measured. For instance, if the XUV pulse is long compared with one cycle of the optical pulse field, then the electron energy is simply proportional to the sum of the XUV and optical frequencies. In this case, the photoelectron spectrum has sidebands around the main XUV peak. Since these sidebands are in principle replicas of the main ionization peak, which itself maps the amplitude and phase of the XUV pulse spectrum, the effective action of the optical field is to produce images of the XUV spectrum that are spectrally sheared into the photoelectron spectrum. Now, if the XUV pulse consists of a train of short attosecond bursts, as is the case when long driving pulses are used for HHG, then its spectrum consists of the odd-order harmonics of the driving pulse frequency ω0. In this case, mixing this pulse train with a long optical pulse in an atomic gas produces both upconverted and downconverted sidebands for each peak in the XUV spectrum. Since these are displaced from the XUV peaks by ω0, the downconverted sideband of the nth high harmonic overlaps spectrally with the upconverted sideband of the (n−1)th harmonic and will interfere with it. The relative phase of the nth and (n−1)th harmonics can thus be determined. With the assumption that all pulses in the train are temporally similar, the measurement of this relative phase across the whole XUV spectrum enables the envelope of the pulses in the train to be determined. This technique, known as “RABITT” (reconstruction of attosecond-harmonic beating by interference of two-photon transitions) [309], bears some similarity to the sideband method of Debeau et al. [20], since the optical field mimics the action of a phase modulator on the electron wave function; it was one of the first techniques to be developed for attosecond pulse measurement. This approach can readily be extended to the case of a two-frequency drive field, in which case a SPIDER (spectral phase interferometry for direct electric-field reconstruction) interferogram is possible in the photoelectron spectrum [310, 311, 312, 313]. If, in addition, the relative delay between the optical pulse and the XUV pulse is changed, then the interference fringes between the harmonic sidebands may also be mapped out as a function of this delay, along with the harmonics themselves. In this case the delay-resolved photoelectron spectrum contains components that are reminiscent of a spectrogram as well as an interferogram.
In fact, the model of the optical pulse acting as a phase modulator for the ionized electron wave function can be formulated in terms of a phase-gated spectrogram, so that an iterative deconvolution algorithm may be used to unravel the XUV pulse field, as well as (at least in principle) the optical pulse field. This method is therefore known as “FROG-CRAB” (frequency-resolved optical gating complete reconstruction of attosecond bursts) [314, 315]. Figure 44 presents results obtained with this technique. The measured and retrieved spectrograms are shown in Figs. 44(a), 44(b), and the temporal and spectral representations of the pulse are shown in Figs. 44(c), 44(d). The reconstructed XUV pulse has a duration of 80 as.
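
As an illustration of the RABITT idea (not of any published reconstruction code), the phase of each sideband oscillation at twice the drive frequency can be estimated by projecting the delay scan onto that carrier. Atomic-phase contributions are ignored in this sketch, the delay scan is assumed to span an integer number of oscillation periods, and all names are hypothetical.

```python
import numpy as np

def rabitt_sideband_phases(spectrogram, delays, omega0):
    """spectrogram[i, k] is the photoelectron signal in sideband k at delay
    delays[i]; each sideband oscillates as cos(2*omega0*tau + relative phase
    of the adjacent harmonics).  Returns that phase for each sideband."""
    carrier = np.exp(-2j * omega0 * delays)    # project onto the 2*omega0 oscillation
    projection = carrier @ spectrogram         # sum over the delay axis
    return np.angle(projection)
```

Concatenating these relative phases across the sidebands then gives the spectral phase of the harmonic comb, from which an average attosecond burst can be reconstructed.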

A different approach may be used to extract the XUV field if the XUV pulse duration is short compared with one cycle of the optical pulse. In that case two situations can be distinguished. The first is that the pulse arrives near an extremum of a cycle and of the envelope of the optical field. In this case, the electron wave function is modulated just as a short optical pulse would be when arriving at a phase modulator near an extremum of its drive signal. That is, the wave packet has a quadratic temporal phase modulation imposed (the sign of which depends on whether it arrives near the maximum or minimum of the cycle). This may lead to a broadening or compression of the electron energy spectrum, depending on whether the electron wave function (and therefore the XUV pulse that generated it) is chirped or not. As with the optical case, for sufficiently large modulation (i.e., sufficiently large optical pulse intensity) this scheme can be used as a time-to-frequency converter, when it is known as the “attosecond streak camera” [212, 316]. Similarly, it should be possible to use this to implement chronocyclic tomography [310].

A different approach to attosecond pulse characterization is to use the nonlinearity of the harmonic generation process itself. In this approach, it is possible to implement a spectral shear by changing the mean wavelength of the driving pulse. To understand this, note that the spectrum of high harmonics for long pulses consists of a series of peaks separated by twice the mean frequency of the optical drive pulse. These odd-order harmonics occur because the ionized electron is driven twice past the ion core during each cycle of the optical pulse. Each passage gives a probability amplitude for emission of an XUV photon upon recombination, and the sum of these different quantum pathways for the generation of an XUV photon leads to the radiation’s being emitted as a train of pulses, with a corresponding comblike spectrum. The spacing of the harmonic spectral peaks may be altered by driving the generation process with an optical pulse of a different frequency. The spectral interferogram of radiation generated by two sources with different spacing of the harmonics is then equivalent to a SPIDER interferogram, from which the spectral phase of the XUV spectral peak may be estimated. This approach has been implemented for one harmonic order of HHG by using a collinear pair of frequency-shifted drive pulses [317, 318]. This version is limited in the wavelengths that can be measured because ionization of the atoms by the first pulse modifies in a complicated way the phase of the XUV radiation generated by the second pulse. This can be overcome by a SEA-SPIDER (spatially encoded arrangement SPIDER) configuration [319], which also has the advantage that space–time coupling in the emitted harmonics can be measured.

6.2. Spatiotemporal Characterization

Space–time characterization of optical waveforms brings in the spatial dependence of the temporal waveforms of optical pulses. While the temporal waveform is ideally independent of the location in the beam, this property is not preserved by some optical pulse generation mechanisms. For example, short-pulse oscillators based on Kerr-lens mode locking have been shown to exhibit a spatially dependent optical spectrum and pulse shape [286, 320]. The stretcher and compressor of chirped-pulse amplification systems can induce spatiotemporal coupling if they are not aligned properly [321]. Such coupling is also induced by properly aligned zero-dispersion lines [322]. Nonlinear propagation leads to spatiotemporal coupling as well, since it induces a temporal phase proportional to the intensity of the light, so that locations in the beam with different intensities acquire different induced temporal phases [323, 324]. Chromatic aberrations in lenses can lead to significant spatiotemporal coupling [325]. The problem becomes particularly acute for high-power laser systems that use large singlet lenses to perform relay imaging of broadband optical pulses [326] and for applications of ultrashort optical pulses that require tight focusing [327]. The performance of some applications of short optical pulses can actually be enhanced by spatiotemporal shaping [328, 329, 330]. There is therefore a need for an accurate estimation of the spatial variations of the optical pulse shape, optimally the full determination of the electric field as a function of time t (equivalently, frequency ω) and the transverse coordinates x and y (equivalently, the spatial wave vectors kx and ky).

Spatiotemporal characterization usually goes beyond the simple measurement of the temporal electric field at different locations in the beam, which can be obtained by performing independent measurements of the temporal waveforms. Indeed, while the electric fields E(t,x1) and E(t,x2) can usually each be retrieved up to a constant phase and a time delay, the information about the relative phase and delay between these two fields is usually lost. Therefore, a more global approach to spatiotemporal measurements, where relative phases and delays are properly accounted for, is needed.

Some quantification of pulse-front tilt (the spatially dependent time of arrival of an optical pulse at a reference plane perpendicular to its direction of propagation) can be obtained with correlating devices. For example, a nonlinear cross-correlation between a short optical pulse and the pulse under test at different positions in the beam reveals the spatial variations of the time of arrival of the optical wave packet at a reference plane [331]. Pulse-front tilt can also be inferred by using an autocorrelator, provided that the numbers of mirrors in the two arms of the setup have different parities [332]. The relative spatial inversion introduced by this setup makes the experimental trace sensitive to spatial variations of the time of arrival at a reference plane.

Interferometry plays a significant role in spatiotemporal measurements, since spatial interference on a time-integrating detector can directly reveal optical phase differences. The space–time coupling introduced by various optical elements can be revealed by time-of-flight interferometry [333, 334, 335]. An input source is split to generate a reference field and a probe field that propagates into an element under test. The interference between the spatially inhomogeneous field generated by the element under test and the reference field is recorded on a time-integrating detector as a function of the relative delay between the two fields. Interference is visible only when the relative delay is smaller than the coherence time of the input source, and one can therefore map the group delay difference between the two fields as a function of spatial location.

Spectral interferometry is another approach to spatiotemporal measurements. The spectral interference between two optical pulses measured by a spectrometer directly leads to their spectral phase difference. An undistorted or precisely characterized pulse can be used as a reference pulse, in which case one can map the spectral phase variations of the distorted pulse at different locations in the beam relative to the spectral phase of the reference pulse [327, 336, 337]. A technique using optical fibers has been demonstrated to spatially filter the two beams: one of the fibers delivers a reference pulse while the other fiber is spatially scanned [338, 339]. Interference between the two optical waves occurs after free-space propagation and recollimation and leads to the spectral phase difference between the reference pulse and the pulse under test at the spatial location where the corresponding collection fiber is located. A full representation of the spatially resolved spectral phase of the pulse under test relative to the spectral phase of the reference pulse can be obtained by spatially scanning the collection fiber. Note that, in all multishot techniques that use interference with a reference pulse, coherence between the two interfering pulses is mandatory to measure the interferometric term containing the phase difference between the two pulses. This coherence is naturally obtained if a common optical source is used to generate the reference pulse and the distorted pulse.

The spatial interference of two monochromatic beams leads to their spatial phase difference. Assuming that one of the fields is a reference field, performing this measurement at a discrete set of optical frequencies {ωi} leads to ϕ(x,y,ωi) up to a frequency-dependent phase that can be determined by measuring the spectral phase at one or several spatial locations. The set of monochromatic wavefront measurements can, for example, be obtained by generating a reference monochromatic wave and using it to measure the spatially resolved wavefront of the pulse under test at the corresponding optical frequency, then scanning the frequency of the reference wave [340]. This set of phases can be obtained in a single shot by spatially multiplexing the interferograms corresponding to different optical frequencies on the same camera. A discrete set of optical frequencies from the reference and test beams is selected by combining a diffractive optical element with an interference filter [341]. The diffractive optical element generates replicas of the two beams traveling in different directions. The narrowband filtering function of the interference filter is direction dependent, and the different diffracted directions correspond to filtering at different optical frequencies. After this filtering element, propagation to a two-dimensional detector leads to a set of discrete interferograms, which, with careful calibration of the spatiotemporal coupling of the reference field, yields the spatial phase difference between the reference and pulse under test at the corresponding set of optical frequencies.

Self-referencing shearing interferometry can be used to quantify spatiotemporal coupling. A simple linear setup uses the spectrally resolved interference of two spatially sheared replicas of the input beam, which is measured with an imaging spectrometer [342, 343]. The spatially and spectrally resolved interference of the two beams leads to the phase difference ϕ(x+X,ω)−ϕ(x,ω), which can be integrated to yield the spatiospectral phase ϕ(x,ω) up to an unknown function of the optical frequency ϕω(ω). Although such an experiment alone does not determine the pulse shape at any point in the beam, it is sufficient to estimate how different the spectral properties of the pulse are at different points in the beam. More complete information can be obtained by determining the function ϕω(ω) by using a complete measurement of the spectral phase performed at one point in the beam. This was, for example, performed with spectral shearing interferometry in [287], where a spatially resolved spectral shearing interferometer measures the spectral phase gradient ∂ϕ/∂ω(x,ω) and the spectrally resolved spatial shearing interferometer measures the spatial phase gradient ∂ϕ/∂x(x,ω). Algorithms used for spatial shearing interferometry can be used to reconstruct the phase as a function of the spatial coordinate x and optical frequency ω.
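
When the spatial shear X happens to equal one step of the measurement grid, the integration of the measured differences reduces to a cumulative sum along x. The sketch below illustrates this simple case; the grid assumption and the choice of zero phase at the first position are illustrative, and more general shears require interpolation or concatenation over several grid points.

```python
import numpy as np

def concatenate_spatial_shear(D):
    """D[m, :] = phi(x_m + X, omega) - phi(x_m, omega), with the shear X equal
    to one step of the x grid.  Returns phi(x, omega) up to an unknown function
    of omega (the phase at the first x sample is set to zero)."""
    phi = np.zeros_like(D)
    phi[1:] = np.cumsum(D[:-1], axis=0)   # phi(x_m) = sum of the differences below x_m
    return phi
```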

6.3. Terahertz Optics

Terahertz (THz) waves, with wavelengths in the millimeter range, are extremely useful for medical imaging, detection of concealed objects and substances, and other applications where they are advantageous over optical waves because of their increased penetration depth and propagation distance [344, 345]. Proper characterization of the THz waveforms that are generated and modified by propagation in probe media is important to implement these applications. While the short optical cycle of visible radiation prevents the direct measurement of the electric field, the electric field of THz waveforms can be directly measured. There are mainly two approaches for this measurement, photoconductive sampling [346, 347] and electro-optic sampling [348, 349, 350], both of which rely on an ancillary optical pulse. Experimental details of these implementations can be found, for example, in [351] and references therein, and direct comparisons of these two techniques are presented in [351, 352, 353].

In photoconductive sampling, the THz waveform under test modulates the current generated by a photoconductive antenna (e.g., fabricated with GaAs) excited by a short optical pulse. The measured current is I(τ)=∫dt vg(t)g(t−τ), where vg represents the voltage bias induced by the THz radiation across the photoconductive gap and g is the photoconductance induced by the short optical pulse. When the antenna frequency response is flat over the spectrum of the THz waveform, vg is proportional to the THz temporal electric field. The conductance is a convolution of the temporal intensity of the short optical pulse with decaying exponentials that take into account the response time of the photocurrent and its recovery. These finite response times can induce some distortions of the measured photocurrent, but in principle the measured modulation leads to the temporal electric field of the THz waveform. Nonetheless, it is possible to extract accurately both the THz pulse shape [354] and the antenna response function [355] by using the rapid turn-on of the gating pulse intensity, which sets an upper bound on the bandwidth and a lower bound on the temporal resolution of the apparatus. Figure 45 shows the design of a polarization-sensitive photoconductive detector for THz pulses [356]. The vectorial electric field of two different pulses is also shown.
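
A toy model of this cross-correlation, with assumed waveform and conductance shapes rather than measured data, shows how the recorded current tends toward the THz field itself as the gate becomes fast compared with the THz cycle.

```python
import numpy as np

t = np.linspace(-5e-12, 5e-12, 2001)      # time axis (s)
dt = t[1] - t[0]

# Assumed THz-induced bias across the gap: a single-cycle waveform near 1 THz.
v_g = np.exp(-(t / 0.5e-12) ** 2) * np.sin(2 * np.pi * 1e12 * t)

# Assumed photoconductance: fast turn-on by the gate pulse, exponential recovery.
g = np.where(t >= 0, np.exp(-t / 0.3e-12), 0.0)

# Measured current I(tau) = integral of v_g(t) * g(t - tau) dt,
# i.e. a cross-correlation of the THz bias with the gate conductance.
taus = t
I = np.array([np.sum(v_g * np.interp(t - tau, t, g, left=0.0, right=0.0)) * dt
              for tau in taus])
# As the recovery time shrinks, I(tau) approaches a scaled copy of v_g(tau).
```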

Electro-optic sampling uses the voltage-induced variation of the optical index in an electro-optic crystal (e.g., ZnTe) to modulate the polarization state of a linearly polarized optical pulse. The polarization modulation can be analyzed with a polarizer, and the intensity modulation is directly proportional to the electric field of the THz waveform. Electro-optic sampling can have very large bandwidth if it is implemented in thin electro-optic crystals, where effects such as group-velocity mismatch between the optical and THz waveforms and distortions due to phonon–polariton coupling are small.

Single-shot operation of electro-optic sampling has been demonstrated in various ways. Since some optical setups can measure the temporal intensity in a single shot, THz waveforms encoded on an optical wave can be directly characterized in the time domain provided that the intensity of the modulated wave is recorded with sufficient bandwidth. This direct time-domain THz characterization has been demonstrated by using a streak camera measuring the probe pulse intensity after the electro-optic crystal and polarizer [357].

The relation between time and frequency in a chirped optical pulse allows the encoding of a temporal modulation on the chirped-pulse optical spectrum, which can be measured in a single shot with an array detector at the focal plane of a spectrometer [203]. In this implementation, the chirped pulse is synchronized with the THz waveform in the electro-optic crystal, and the polarization modulation of the chirped pulse is analyzed by a polarizer followed by a spectrometer. The THz waveform field is recorded over the temporal duration of the chirped pulse TCHIRPED. The bandwidth of such a diagnostic is limited by the chirp rate, as the frequency content of the temporal modulation must be small relative to the local frequency content of the chirped optical pulse. The temporal resolution is of the order of √(T0TCHIRPED), where T0 is the duration of the optical pulse before chirping. Since encoding is not performed on a spatial coordinate, this technique can be extended to single-shot spatiotemporal THz characterization. The chirped optical beam is extended along one spatial dimension, and the optical spectrum measurement is performed along the corresponding direction by using an imaging spectrometer [358].
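
A quick numeric example of this window/resolution trade-off, assuming the √(T0TCHIRPED) scaling quoted above and purely illustrative probe-pulse numbers:

```python
import numpy as np

T0 = 50e-15           # assumed probe-pulse duration before chirping (50 fs)
T_chirped = 20e-12    # assumed duration after stretching; sets the temporal window

resolution = np.sqrt(T0 * T_chirped)
print(f"temporal window    : {T_chirped * 1e12:.0f} ps")
print(f"temporal resolution: {resolution * 1e12:.2f} ps")   # about 1 ps here
```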

Other setups for single-shot characterization of THz waveforms use time-to-space encoding. In these techniques, the modulation is induced on a short optical pulse, and different times in the THz waveform are encoded onto different spatial locations. An array detector measuring the spatial intensity after the polarizer therefore yields the THz electric field after proper calibration of the time-to-space encoding. Time-to-space mapping can be obtained by using configurations similar to single-shot autocorrelators, for example, by using a large angle between the THz and the short optical pulse in an electro-optic crystal [359], using a large angle between an optical pulse modulated by a THz pulse via the electro-optic effect and a short optical pulse in a nonlinear crystal [360], or using a probe optical pulse after a dispersive element that introduces pulse-front tilt [361]. A discrete version of a single-shot cross correlator has also been built with echelons [362]. Because these techniques do not rely on encoding of a temporal modulation on the frequency component of the optical pulse, they are limited in bandwidth only by the duration of the short optical pulse. Their temporal range is limited by the temporal range covered by the time-to-space mapping of the optical pulse.

6.4. Carrier-Envelope Offset Phase

The phase stabilization of trains of optical pulses generated by mode-locked lasers and the broad spectra generated by these lasers have become increasingly important in recent years as octave-spanning sources have been developed [363, 364, 365]. A truly complete characterization of a pulse field requires knowledge of the carrier-envelope offset (CEO) phase ψ0 (also known as the “absolute” phase). The CEO phase is usually not a concern for applications of pulses that are significantly longer than the duration of an optical cycle (i.e., the oscillation of the electric field) at the corresponding wavelength range. However, for pulses with durations of the order of a few optical cycles, the phase of the field oscillation under the envelope can significantly modify the shape of the real electric field. The effect of this phase can be seen in Fig. 46: an offset of the CEO phase by π/2 leads to a very different electric field. For a pulse train generated by a mode-locked laser, the CEO phase is usually different for each pulse in the train because of the difference between phase velocity and group velocity in the laser cavity and because of various noise sources. None of the methods described in this review provides such knowledge because their experimental traces do not depend upon ψ0, and, in general, only highly nonlinear processes with ultrashort optical pulses depend on it [315]. Time-domain measurements of the relative phase of successive pulses in a pulse train can be performed, however [366, 367]. It is possible to use a feedback control loop to stabilize the CEO phase to a particular value by using an f-to-2f interferometer [364]. The way in which this is achieved is to measure the beating between modes of the spectrum of the pulse-train fundamental and its second harmonic. The spectrum of a pulse train is a frequency comb, i.e., a set of discrete modes separated by the pulse repetition frequency frep, as illustrated in Fig. 46. The modal frequencies are nfrep+fCEO, where fCEO is a frequency between 0 and frep. In the time domain, the envelope of the electric field is periodic, with period 1/frep. However, the phase of the field oscillation under the envelope changes by 2πfCEO/frep between two successive pulses: the electric field is periodic only if fCEO=0. The second-harmonic comb frequencies are at 2nfrep+2fCEO. For two modes in an overlapping frequency region of the fundamental and second-harmonic spectra, beating in the time domain occurs at the frequency fCEO. This beat frequency can be directly measured by a photodetector, and the value of fCEO can be adjusted, for example, by changing the intracavity path length of the laser oscillator. Of course, in order that the fundamental and second-harmonic spectra overlap, the former must have at least a one-octave-wide spectrum. It is possible to obtain this directly from an oscillator, but more commonly it is obtained by expanding the bandwidth of the spectrum by means of a nonlinear optical process, such as self-phase modulation in a photonic crystal fiber. Several methods have been proposed to determine the CEO phase for individual pulses (or the common CEO phase of the pulses composing a train of pulses with a CEO frequency equal to zero) [368, 369]. These methods use the asymmetry of the direction of motion of electrons ionized by the pulse. For sinelike pulses, this distribution will be symmetric; for cosinelike, it will not be.
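
The f-to-2f beat described above can be illustrated with a toy comb model; the repetition rate, offset frequency, and mode index below are arbitrary assumed numbers chosen only so that the fundamental and second-harmonic combs overlap.

```python
# Minimal comb model: mode n of the fundamental comb sits at n*f_rep + f_ceo,
# and the second harmonic of mode n sits at 2*n*f_rep + 2*f_ceo.
f_rep = 100e6          # assumed repetition rate (Hz)
f_ceo = 23e6           # assumed carrier-envelope offset frequency (Hz)
n = 2_000_000          # a mode index chosen so that the two combs overlap spectrally

fundamental_mode_2n = 2 * n * f_rep + f_ceo    # mode 2n of the fundamental comb
second_harmonic_n = 2 * (n * f_rep + f_ceo)    # second harmonic of mode n

beat = abs(second_harmonic_n - fundamental_mode_2n)
print(f"f-to-2f beat note: {beat / 1e6:.1f} MHz (equal to f_ceo)")
```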

6.5. Conclusions

The establishment of general principles of pulse characterization, and the instantiation of these principles in several different techniques, suggests that the field of ultrafast pulse characterization is a mature one. Indeed, some of the methods have proved commercially viable, so that they have passed from being laboratory tools to being workaday devices.

Nonetheless, challenges remain. There is a clear need, evidenced by some of the applications and current areas of research, to extend metrology to more extreme wavelengths, larger bandwidths, shorter durations, and greater complexities, as well as to encompass all the degrees of freedom of the electric field, including its quantum state. Even for a single pulse, unless it is close to transform and diffraction limited (and in a pure polarization state, not to say a pure quantum state!) the amount of data required for specifying the pulse is enormous, and itself poses significant processing challenges. Coupled with the rapid repetition rates of laser oscillators and amplifiers, this demands new ways of extracting the appropriate quantities. Extracting the field itself from this vast data set in a reliable way remains a challenge.

As technology improves, so will the sensitivity and compactness of the current generation of devices. This is already becoming evident in the burgeoning number of applications based on linear shearing interferometry and spectrography, even though these were proposed and analyzed more than a decade ago. This opens new capabilities and applications based on ultralow-light level devices with integration capability. This will likely be very valuable for system-level control loops.

Underlying all of this, however, it is fundamental that the development of new tools underpins scientific discovery, so that as new methods arise and old ones evolve, we can be sure there will emerge new phenomena. Conversely, new science underpins the development of new technology, so we can also be sure that there will be new techniques and applications arising from new discovery. The symbiosis between science and technology is clear in ultrafast optics: the generation, amplification, and measurement of short electromagnetic pulses have opened new vistas in physics, chemistry, and materials science, as well as in applications such as biomedicine and telecommunications. This will continue.

Acknowledgments

We are grateful for ideas and insights gleaned from numerous discussions with our many colleagues with whom we have worked in this field over the past decade. Of particular help in preparing this article, and whom we especially wish to thank, are Inuk Kang (Bell Laboratories, Alcatel-Lucent), Jake Bromage (Laboratory for Laser Energetics, University of Rochester), Adam Wyatt, Tobias Witting, and Dane Austin (University of Oxford) and John Dudley (Université de Franche-Comté). Ian A. Walmsley has been supported by the UK Engineering and Physical Sciences Research Council (EPSRC) and by a Royal Society–Wolfson Research Merit Award, as well as by the European Commission through the research training networks EMALI and FASTQUAST. Christophe Dorrer was supported by the U.S. DOE Office of Inertial Confinement Fusion under Cooperative Agreement DE-FC52-08NA28302, the University of Rochester, and the New York State Energy Research and Development Authority. The support of DOE does not constitute an endorsement by DOE of the views expressed in this article.

Figures

Fig. 1 Spatially encoded arrangement (SEA-) SPIDER measurements of a few-cycle Ti:sapphire oscillator. (a) SEA-SPIDER interferogram. (b) Measured spectral intensity (black) and reconstructed spectral phase (red). (c) Fourier-transform-limited (black) and reconstructed (red) temporal intensity of the pulse. The pulse durations (full width at half-maximum) are 6.6 and 7.6fs, respectively.

Fig. 2 Characterization of the output pulse from a CPA system. (a) Spectrum of the pulse (solid curve), the spectral phase when the two-grating compressor is mismatched with the stretcher, inducing a large third-order spectral phase (long-dashed curve), and the spectral phase after optimization (short-dashed curve). (b) and (c), respectively, show the temporal intensity of the pulse with third-order spectral phase and after optimization.

Fig. 3 Output of a dual-stage plasma filament compressor under different experimental conditions. (a) and (d) show the temporal intensity, (b) and (e) are the corresponding spectral representations of the electric field, and (c) and (f) are spectrograms in the time–frequency space (courtesy of G. Steinmeyer).

Fig. 4 Intensity of a train of optical pulses generated by an optical pulse shaper. The intensity was measured by nonlinear cross-correlation with a short unshaped optical pulse (courtesy of A. M. Weiner).

Fig. 5 Characterization of a pulse train from a Mach–Zehnder modulator driven by a 20GHz RF drive. (a) Temporal intensity and phase of a 33% return-to-zero train of pulses. (b) Temporal intensity and phase of a 67% carrier-suppressed return-to-zero train of pulses, with the expected π phase shift between successive pulses. (c) Pulse train measured when the bias of the modulator is set at an intermediate value between the values that lead to the pulse trains represented in (a) and (b). The upper plots in (a) and (b) are the corresponding experimental data from which the electric field is reconstructed.

Fig. 6 Representations of a pulse in the (a) spectral and (b) temporal domains. The temporal phase has been removed for clarity.

Fig. 7 Wigner functions of (a) a Fourier-transform limited Gaussian pulse, (b) a pulse with Gaussian spectrum and quadratic spectral phase, (c) a pair of identical Fourier-transform-limited Gaussian pulses, and (d) a pulse with Gaussian spectrum and third-order spectral phase. In each case, the temporal and spectral marginals are plotted.

Fig. 8 (a) Principle of an intensity autocorrelator where only the mixing signal between the two relatively delayed replicas of the input pulse is measured. (d) Principle of an interferometric autocorrelator where the total upconverted signal from two collinear replicas of the input pulse is measured. (b) and (e) are, respectively, the intensity and the interferometric autocorrelations of a pulse with a Gaussian spectrum and a flat spectral phase, while (c) and (f) are, respectively, the intensity and the interferometric autocorrelations of a pulse with a Gaussian spectrum and a quadratic spectral phase.

Fig. 9 General interferometer for optical pulse characterization. The test pulse encounters a sequence of linear filters, after (possibly) being split into two replicas at a beam splitter. The combined outputs of the filters are incident on a square-law photodetector, usually with a response much slower than the duration of the filter response functions, and certainly much less than that of the input pulse.

Fig. 10 Linear filter description of type I to type VIII devices. Spectrographic devices, based on two serial amplitude filters in conjugate variables, correspond to (a) type I and (b) type II. Tomographic devices, based on a quadratic phase modulation followed by an amplitude filter in the conjugate variable, correspond to (c) type III and (d) type IV. Interferometric techniques related to Young’s double-slit experiment, with two amplitude filters in parallel followed by one amplitude filter in the conjugate variable, correspond to (e) type V and (f) type VI. Interferometric techniques related to shearing interferometry, with two linear phase modulations in conjugate domains in parallel, correspond to (g) type VII and (h) type VIII.

Fig. 11 Representations of (a) the effect of dispersive propagation and (c) propagation in a quadratic temporal phase modulator. (b) Dispersive propagation leads to a shear of the chronocyclic representation along the time axis. (d) The quadratic temporal phase modulator leads to a shear of the chronocyclic representation along the frequency axis.

Fig. 12 Approaches for the measurement of (a) a spectrogram and (b) a sonogram. The spectrogram is measured by first gating the pulse with a time-nonstationary filter and measuring the optical spectrum as a function of the optical frequency and relative delay between the pulse and the gate. The sonogram is measured by first filtering the pulse with a time-stationary filter and measuring the temporal intensity as a function of time and the position of the spectral filter.

Fig. 13 Spectrogram of a pulse with (a) second-order dispersion, i.e., a linear group delay and (b) third-order dispersion, i.e., a quadratic group delay. The group-delay function has been overlapped on the spectrogram in each case.

Fig. 14 Block diagram of the principal component generalized projection algorithm.

Fig. 15 (a) Measurement of a sonogram by using nonlinear optics and (b) measured sonogram of a chirped pulse (courtesy D. T. Reid). The pulse under test is split into two so that one replica is sent to the spectral filter and cross correlated with the input pulse. This setup and variations on this setup can be used for either chirp retrieval or phase retrieval. The sonogram plotted in (b) shows the familiar time-to-frequency correlation indicative of the chirp of the input pulse.

Fig. 16 (a) Implementation of sonograms with a streak camera; (b) self-referencing implementation of a sonogram in the telecommunication environment by RF phase detection.

Fig. 17 Single-shot measurements of a sonogram using (a) a thick nonlinear crystal or (b) a two-photon detector. In (a), encoding of time and frequency on the two spatial coordinates is performed with noncollinear upconversion in a thick nonlinear crystal. The pulse under test first travels through a cylindrical lens and a spherical lens to shape the beam, then into a Wollaston prism. This assembly generates two replicas of the pulse that are tightly focused in the vertical direction and spatially extended and noncollinear in the horizontal direction. After interaction in a thick nonlinear crystal, the vertical direction and horizontal position that correspond to the optical frequency of the upconverted field and the relative delay between the two interacting waves are mapped into vertical and horizontal positions with a combination of a spherical and cylindrical lenses. In (b), the encoding of frequency on one spatial coordinate is performed with a diffraction grating and a cylindrical lens acting on one replica of the input pulse. The encoding of the relative delay between the different spectral slices of the pulse and the input pulse acting as a temporal gate is obtained thanks to the noncollinear interaction geometry on a two-photon array.

Fig. 18 Top, implementation of SHG-FROG with a nonlinear crystal. Two replicas of the pulse at ω0 are mixed, and the upconverted signal at 2ω0 is spectrally resolved. Bottom, example of a SHG-FROG trace of a Gaussian pulse with (left) second- and (right) third-order dispersion.

Fig. 19 SHG-FROG measurements of the evolution of an arbitrary input pulse into a self-similar asymptotic similariton in an optical amplifier. (a) Experimental and (b) theoretical temporal pulse intensities versus propagation distance. (c) FROG trace of the pulse after exiting the fiber amplifier. (d) Temporal amplitude and phase of the output pulse (courtesy J. Dudley).

Fig. 20 Top, implementation of PG-FROG with a third-order nonlinearity. A high-energy replica of the pulse at ω0 rotates the polarization state of a low-energy replica of the pulse set between crossed polarizers, and the low-energy replica is spectrally resolved. Bottom, example of a PG-FROG trace of a Gaussian pulse with (left) second- and (right) third-order dispersion.

Fig. 21 Implementation of (a) SD-FROG, (b) THG-FROG, (c) and (d) TG-FROG with a third-order nonlinearity.

Fig. 22 Implementation of GRENOUILLE. The spatially extended pulse under test travels from left to right and propagates into a cylindrical lens and Fresnel biprism. After interaction in the nonlinear crystal, the vertical direction and horizontal position that correspond to the optical frequency of the upconverted field and the relative delay between the two interacting waves generated by the biprism are mapped into vertical and horizontal positions with a combination of two cylindrical lenses.

Fig. 23 (a): Measurement of a spectrogram as a function of the optical frequency and the relative delay between the modulation and the source under test. (b) and (c) are the spectral representations of a pulse from a mode-locked diode before and after pulse compression by propagation in a highly nonlinear fiber and dispersive fiber. The insets show the corresponding spectrograms.

Fig. 24 Equivalence between space and time. In (a), a spatial imaging system is implemented using the combination of diffraction and a spatial lens, i.e., quadratic phase modulations in the space x and wave vector kx domains. In (b), a temporal imaging system is implemented by combining dispersive propagation and a time lens, i.e., quadratic phase modulations in the time t and frequency ω domains.

Fig. 25 Principle of tomographic reconstruction. Gray shading represents a lower attenuation coefficient, and black a higher one. A two-dimensional attenuation function a(x,y) is projected onto various axes. The set of projections Pθ is then used to reconstruct the attenuation.

Fig. 26 Principle of the time-to-frequency converter. The frequency marginal of the pulse after quadratic temporal and spectral modulation is the time marginal of the input pulse. The temporal intensity of the test pulse can therefore be determined by a measurement of the optical spectrum of the modulated pulse.

Fig. 27 Example of simplified chronocyclic tomography. The Wigner function of a pulse with Gaussian spectrum and cubic spectral phase after small positive and negative quadratic temporal phase modulations is shown in (a) and (b). The shears imposed by these modulations are displayed in the insets. In (c), the difference between the two obtained spectral marginals (blue curve) is plotted with the initial spectrum (black curve).

Fig. 28 Imaging with magnification in the chronocyclic space. Imaging with magnification can be obtained by the successive action of quadratic spectral phase modulation, quadratic temporal phase modulation, and quadratic spectral phase modulation. The shears imposed by these modulations are displayed in the insets. The time marginal of the Wigner function is plotted for the input and output waveforms and shows a magnification equal to 2.

Fig. 29 Example of implementation of simplified chronocyclic tomography with a phase modulator. In (a), the quadratic temporal phase modulation is obtained via the electro-optic effect in a LiNbO3 phase modulator driven by a sine wave. Synchronization of the pulse under test with the extrema of the phase modulation with a RF phase shifter provides quadratic temporal modulation. These modulations are alternated at a frequency f so that lock-in detection of the signal measured by a Fabry–Perot etalon followed by a photodetector leads to the average spectrum of the modulated pulses (i.e., the spectrum of the input pulse) and the difference of the spectra of the modulated pulses (i.e., the finite difference from which the spectral phase is reconstructed). (b) Spectrum and phase measured on a train of pulses after nonlinear propagation at various power in a nonlinear fiber and dispersion compensation. (c) Calculated temporal intensities, which show significant pulse compression.

Fig. 30 Experimental implementation of the time-to-frequency converter with XPM. The waveform under test propagates into a dispersive delay line and is then phase modulated by a pump pulse via XPM in a fiber. The temporal intensity of the waveform under test is a scaled version of the spectrum measured by the optical spectrum analyzer (OSA). The pump pulse is assumed to have a parabolic temporal intensity over the temporal support of the dispersed waveform under test.

Fig. 31 (a) Principle of imaging with temporal magnification using sum-frequency generation, and (b) implementation of a time-to-frequency converter using four-wave mixing in a silicon waveguide (courtesy of A. Gaeta). In (a), the waveform under test propagates through a dispersive delay line, undergoes a wave-mixing process with a chirped pump pulse, and then propagates through an additional dispersive delay line. The frequency chirp of the highly chirped pump pulse corresponds to a quadratic temporal phase, which the wave-mixing process transfers to the generated pulse, thereby providing the required quadratic temporal phase modulation. In (b), the waveform under test, chirped by a fiber with dispersion D, and the pump pulse, chirped by a fiber with dispersion 2D, interact via four-wave mixing in a silicon-on-insulator waveguide. The spectrum of the generated idler is a scaled representation of the temporal intensity of the input signal. A complete time-to-frequency conversion of the electric field can be obtained if the generated idler further propagates in a fiber with dispersion D.
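
A brief reminder of why the pump in (b) carries twice the dispersion of the signal (schematic, with signs depending on convention): in degenerate four-wave mixing the idler field is proportional to E_p^2 E_s^*, so

\omega_i = 2\omega_p - \omega_s, \qquad \varphi_i(t) = 2\varphi_p(t) - \varphi_s(t) + \text{const}.

In the highly chirped limit a pump dispersed by 2D has a temporal phase of approximately t^2/(4D); the factor of 2 in \varphi_i doubles this to t^2/(2D), i.e., a time lens whose focal group-delay dispersion has magnitude D, matched to the signal dispersed by D.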

Fig. 32 Examples of waveforms measured by time-to-frequency conversion in a silicon waveguide compared with waveforms measured by nonlinear cross-correlation (courtesy of A. Gaeta). The left-hand column corresponds to the time-to-frequency converter, and the right-hand column corresponds to nonlinear cross-correlation with a short optical pulse. (a) Interference of two chirped optical waveforms, with the inset displaying the measured waveforms in a 10ps window. In (b), the time-to-frequency converter is used in single-shot operation to measure the intensity of a pair of delayed chirped pulses.

Fig. 33 Test-plus-reference interferometers for (a) spectral and (b) temporal interferometry. The unknown (test) pulse E(t) is combined on a beam splitter with a known (reference) pulse ER(t). The resulting interference pattern is measured by using (a) a spectrometer and a slow detector or (b) a fast detector, possibly synthesized by using a rapid shutter followed by a slow detector.

Fig. 34 Self-referencing interferometers for (a) spectral and (b) temporal shearing interferometry. The unknown pulse is divided into two replicas, each of which follows a different path through the interferometer. One replica is shifted in frequency, the other in time. The two modified replicas are recombined, and the resulting interference pattern is measured by using (a) a spectrometer and a slow detector or (b) a fast detector.

Fig. 35 (a) Principle of operation of SI. The test and the reference pulses are combined on a beam splitter, and the spectrum of the resulting field is measured. (b) Examples of spectral interferograms, corresponding to a test pulse equal to the reference pulse (left) and to a test pulse with a quadratic spectral phase (right).
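
The measured quantity in Fig. 35 is the standard spectral interferogram, written here with \tau the relative delay between the test and reference pulses (sign conventions vary):

S(\omega) = |\tilde{E}(\omega)|^2 + |\tilde{E}_R(\omega)|^2 + 2\,|\tilde{E}(\omega)|\,|\tilde{E}_R(\omega)|\cos\!\left[\varphi(\omega) - \varphi_R(\omega) + \omega\tau\right],

so the nominal fringe spacing is set by \tau, and distortions of the fringes encode the spectral-phase difference between the two pulses, as in the right-hand interferogram of (b).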

Fig. 36 Diagram of the inversion algorithm for Fourier-transform SI. After an initial Fourier transform to the time domain, an ac sideband is digitally filtered to isolate the interference term. An inverse Fourier transform is made, and the amplitude (solid curve) and phase (dashed line) of the interferometric component are extracted.
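
A minimal numerical sketch of the inversion of Fig. 36, written in Python with NumPy (the function and variable names are ours, not from the text; a uniform frequency grid and a delay large enough to separate the ac sidebands from the dc term are assumed):

import numpy as np

def ftsi_phase(omega, S, tau, gate_width):
    # omega: uniformly sampled angular frequencies (rad/s); S: measured interferogram samples
    # tau: test-reference delay (s); gate_width: half-width of the pseudo-time gate (s)
    n = omega.size
    dw = omega[1] - omega[0]
    t = 2.0 * np.pi * np.fft.fftfreq(n, d=dw)      # pseudo-time axis conjugate to omega
    s_t = np.fft.ifft(S)                           # interferogram in the pseudo-time domain
    gate = np.exp(-((t - tau) / gate_width) ** 6)  # super-Gaussian gate isolating the sideband at t = +tau
    ac = np.fft.fft(s_t * gate)                    # filtered ac term, back in the frequency domain
    amplitude = np.abs(ac)                         # proportional to |E(omega)| |E_R(omega)|
    phase = np.unwrap(np.angle(ac)) - omega * tau  # phase difference after removing the linear delay term
    return amplitude, phase                        # the sign of the omega*tau correction depends on which sideband is gated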

Fig. 37 Schematic of a SPIDER device. The input pulse is used to generate a chirped pulse by propagation in a dispersive delay line. Two temporally delayed replicas of the pulse under test are nonlinearly mixed with the chirped pulse, and the resulting interferogram is spectrally resolved by an optical spectrum analyzer.
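
The quantity recorded by the device of Fig. 37 is a shearing interferogram. With spectral shear \Omega (set by the chirp of the ancilla) and delay \tau between the two replicas, it takes the schematic form, up to an overall upconversion frequency offset,

D(\omega) = |\tilde{E}(\omega)|^2 + |\tilde{E}(\omega+\Omega)|^2 + 2\,|\tilde{E}(\omega)|\,|\tilde{E}(\omega+\Omega)|\cos\!\left[\varphi(\omega+\Omega) - \varphi(\omega) + \omega\tau\right],

so that Fourier-transform filtering and removal of the calibration term \omega\tau yield the finite difference \varphi(\omega+\Omega) - \varphi(\omega), from which the spectral phase is reconstructed by concatenation or integration.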

Fig. 38 (a) Spatially resolved interferograms for space–time SPIDER, with (upper) spectral shear and (lower) spatial shear. The fringes are due to a delay between interfering pulses in the upper plot and to a tilt between interfering pulses in the lower plot. (b) Reconstructed spatiospectral phase map for a pulse dispersed by a prism, extracted from the interferograms in (a). (c) Space–time intensity of the pulse with this spatiospectral phase, showing pulse-front tilt.

Fig. 39 SEA-SPIDER apparatus showing the use of a single copy of the test pulse and two chirped ancillae to encode the spectral phase in a spectrally resolved spatial interferogram. The tilted upconverted replicas generated in the crystal are reimaged on the detector through an imaging spectrometer. The shape of the fringes indicates the spectral phase derivative, providing an intuitive diagnostic of the pulse structure.

Fig. 40 SEA-SPIDER measurements of the output of a hollow-core-fiber compressor system. (a) The spatial fringe maxima in the SEA-SPIDER interferogram map the gradient of the spectral phase across that section of the beam. Note the spectral cut at 950nm due to the limited bandwidth of the chirped mirrors used for compression. (b) Chronocyclic Wigner function of the pulse, indicating the complex character of the compressor output. The pulse has a slight positive chirp and structure away from the main peak. (c) Measured spectral intensity (blue) and reconstructed spectral phase (green), taken at the center of the beam. (d) Fourier-transform-limited temporal intensity (blue) and reconstructed temporal intensity (green). The full width at half-maximum pulse durations are 5.2 and 7.5fs, respectively.

Fig. 41 Schematic diagram illustrating the nonlinear process for generating two spectrally sheared replicas in (a) SPIDER and (b) ARAIGNEE. In the latter, the direction of propagation of the beams in the long crystal determines the wavelength of upconversion. (c) The ARAIGNEE apparatus, showing the paths for the fundamental (red) and upconverted (blue) beams that are interfered in the spectrometer.

Fig. 42 (a) Schematic of the two-dimensional spectral shearing interferometry (2DSI) apparatus, showing the generation of two phase-delay-variable chirped ancillae in a Michelson interferometer. (b) Raw interferograms from a 5fs laser pulse and from a pulse dispersed by 1mm of fused silica. (c) Predicted and measured interferometric autocorrelation of a 5fs pulse (figure courtesy J. Birge and F. Kärtner).

Fig. 43 (a) Schematic arrangement for a linear spectral shearing interferometer based on electro-optic modulation. (b) Spectrum (solid black curve) and spectral phase measured for input average powers of 2mW, 10μW, and 270nW (solid red curve, blue dots, and green squares, respectively).

Fig. 44 Characterization of sub-100-as XUV pulses by using spectrograms. (a) Measured photoelectron spectrum as a function of the delay between the XUV pulse and an IR ancilla. (b) Spectrogram reconstructed by using an iterative deconvolution algorithm. (c) Temporal intensity (solid curve) and phase (dashed curve) and (d) spectral intensity (solid curve) and phase (dashed curve) of the reconstructed XUV pulse, showing some residual chirp due to the HHG process (figure courtesy E. Goulielmakis and F. Krausz).

Fig. 45 Schematic diagram of a terahertz time-domain photoconductive detector. (a) Contact geometry of the polarization-sensitive THz receiver, including (b) an electron micrograph of the gap region. A laser pulse forms a gate beam, which generates electrons in the detector material, onto which the THz radiation is focused by using an off-axis parabolic mirror. The delay between the gate and the THz pulses is varied to map out the electric field of the latter. (c), (d) Examples of the vector field of two THz pulses measured by using this device (figure courtesy E. Castro-Camus and M. B. Johnston).

Fig. 46 Electric field of two pulses with identical envelopes and different CEO phases. The phase between the peak of the envelope and the nearest maximum of the carrier field is 0 in (a) and π/2 in (b). (c) Schematic of an f-to-2f interferometer. (d) Intensity spectrum of a pulse train with nonzero CEO frequency, showing the comblike structure of modes separated by the pulse repetition frequency frep and offset by fCEO (red), and intensity spectrum of the upconverted pulse train, showing the modes separated by frep and offset by 2fCEO (blue).
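
The comb picture in (d) makes the measurement explicit: the modes of an octave-spanning pulse train and of its second harmonic are, schematically,

f_n = n f_{\text{rep}} + f_{\text{CEO}}, \qquad 2 f_n = 2n f_{\text{rep}} + 2 f_{\text{CEO}},

so the heterodyne beat between the frequency-doubled low-frequency wing and the fundamental high-frequency wing occurs at 2 f_n - f_{2n} = f_{\text{CEO}}, giving direct access to the carrier-envelope offset frequency.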


1. R. L. Fork, C. H. Brito Cruz, P. C. Becker, and C. V. Shank, “Compression of optical pulses to six femtoseconds by using cubic phase compensation,” Opt. Lett. 12, 483–485 (1987). [CrossRef]   [PubMed]  

2. G. Taft, A. Rundquist, M. M. Murnane, H. C. Kapteyn, K. W. Delong, R. Trebino, and I. P. Christov, “Ultrashort optical waveform measurements using frequency-resolved optical gating,” Opt. Lett. 20, 743–745 (1995). [CrossRef]   [PubMed]  

3. P. G. Bollond, L. P. Barry, J. M. Dudley, R. Leonhardt, and J. D. Harvey, “Characterization of nonlinear switching in a figure-of-eight fiber laser using frequency-resolved optical gating,” IEEE Photon. Technol. Lett. 10, 343–345 (1998). [CrossRef]  

4. A. Kasper and K. J. Witte, “Contrast and phase of ultrashort laser pulses from Ti:sapphire ring and Fabry–Perot resonators based on chirped mirrors,” J. Opt. Soc. Am. B 15, 2490–2495 (1998). [CrossRef]  

5. L. Gallmann, D. H. Sutter, N. Matuschek, G. Steinmeyer, U. Keller, C. Iaconis, and I. A. Walmsley, “Characterization of sub-6-fs optical pulses with spectral phase interferometry for direct electric-field reconstruction,” Opt. Lett. 24, 1314–1316 (1999). [CrossRef]  

6. D. H. Sutter, G. Steinmeyer, L. Gallmann, N. Matuschek, F. Morier-Genoud, U. Keller, V. Scheuer, G. Angelow, and T. Tschudi, “Semiconductor saturable-absorber mirror-assisted Kerr-lens mode-locked Ti:sapphire laser producing pulses in the two-cycle regime,” Opt. Lett. 24, 631–633 (1999). [CrossRef]  

7. J. M. Dudley, S. F. Boussen, D. M. J. Cameron, and J. D. Harvey, “Complete characterization of a self-mode-locked Ti:sapphire laser in the vicinity of zero group-delay dispersion by frequency-resolved optical gating,” Appl. Opt. 38, 3308–3315 (1999). [CrossRef]  

8. D. Strickland and G. Mourou, “Compression of amplified chirped optical pulses,” Opt. Commun. 56, 219–221 (1985). [CrossRef]  

9. S. Backus, C. G. Durfee III, M. M. Murnane, and H. C. Kapteyn, “High power ultrafast lasers,” Rev. Sci. Instrum. 69, 1207–1223 (1998). [CrossRef]  

10. B. Kohler, V. V. Yakovlev, K. R. Wilson, J. A. Squier, K. W. Delong, and R. Trebino, “Phase and intensity characterization of femtosecond pulses from a chirped-pulse amplifier by frequency-resolved optical gating,” Opt. Lett. 20, 483–485 (1995). [CrossRef]   [PubMed]  

11. C. Dorrer, B. de Beauvoir, C. Le Blanc, S. Ranc, J.-P. Rousseau, P. Rousseau, and J.-P. Chambaret, “Single-shot real-time characterization of chirped-pulse amplification systems by spectral phase interferometry for direct electric-field reconstruction,” Opt. Lett. 24, 1644–1646 (1999). [CrossRef]  

12. A. Baltuska, M. S. Pshenichnikov, and D. A. Wiersma, “Amplitude and phase characterization of 4.5-fs pulses by frequency-resolved optical gating,” Opt. Lett. 23, 1474–1476 (1998). [CrossRef]  

13. Z. Cheng, A. Furbach, S. Sartania, M. Lenzner, C. Spielmann, and F. Krausz, “Amplitude and chirp characterization of high-power laser pulses in the 5-fs regime,” Opt. Lett. 24, 247–249 (1999). [CrossRef]  

14. S. A. Diddams, H. K. Eaton, A. A. Zozulya, and T. S. Clement, “Characterizing the nonlinear propagation of femtosecond pulses in bulk media,” IEEE J. Sel. Top. Quantum Electron. 4, 306–316 (1998). [CrossRef]  

15. X. Gu, L. Xu, M. Kimmel, E. Zeek, P. O’Shea, A. P. Shreenath, R. Trebino, and R. S. Windeler, “Frequency-resolved optical gating and single-shot spectral measurements reveal fine structure in microstructure-fiber continuum,” Opt. Lett. 27, 1174–1176 (2002). [CrossRef]  

16. G. Stibenz, N. Zhavoronkov, and G. Steinmeyer, “Self-compression of millijoule pulses to 7.8fs duration in a white-light filament,” Opt. Lett. 31, 274–276 (2006). [CrossRef]   [PubMed]  

17. A. M. Weiner, “Femtosecond pulse shaping using spatial light modulators,” Rev. Sci. Instrum. 71, 1929–1960 (2000). [CrossRef]  

18. D. Goswami, “Optical pulse shaping approaches to coherent control,” Phys. Rep. 374, 385–481 (2003). [CrossRef]  

19. C. Dorrer, “High-speed measurements for optical telecommunication systems,” IEEE J. Sel. Top. Quantum Electron. 12, 843–858 (2006). [CrossRef]  

20. J. Debeau, B. Kowalski, and R. Boittin, “Simple method for the complete characterization of an optical pulse,” Opt. Lett. 23, 1784–1786 (1998). [CrossRef]  

21. K. Taira and K. Kikuchi, “Optical sampling system at 1.55 micron for the measurement of pulse waveform and phase employing sonogram characterization,” IEEE Photon. Technol. Lett. 13, 505–507 (2001). [CrossRef]  

22. L. P. Barry, S. Del Burgo, B. C. Thomsen, R. T. Watts, D. A. Reid, and J. D. Harvey, “Optimization of optical data transmitters for 40-Gb/s lightwave systems using frequency resolved optical gating,” IEEE Photon. Technol. Lett. 14, 971–973 (2002). [CrossRef]

23. C. Dorrer and I. Kang, “Simultaneous temporal characterization of telecommunication optical pulses and modulators using spectrograms,” Opt. Lett. 27, 1315–1317 (2002). [CrossRef]  

24. C. Dorrer and I. Kang, “Real-time implementation of linear spectrograms for the characterization of high bit-rate optical pulse trains,” IEEE Photon. Technol. Lett. 16, 858–860 (2004). [CrossRef]  

25. D. J. Bradley and G. New, “Ultrashort pulse measurements,” Proc. IEEE 62, 313–345 (1974). [CrossRef]  

26. C. Froehly, B. Colombeau, and M. Vampouille, “Shaping and analysis of picosecond light pulses,” in Progress in Optics. Vol. XX, E. Wolf ed. (North-Holland, 1983), pp. 63–153. [CrossRef]  

27. A. Laubereau, “Optical nonlinearities with ultrashort pulses,” in Ultrashort Laser Pulses and Applications, W. Kaiser ed., Vol. 60 of Topics in Applied Physics (Springer-Verlag, 1988), pp. 35–112.

28. R. Trebino, ed., Frequency Resolved Optical Gating: the Measurement of Ultrashort Optical Pulses (Kluwer Academic, 2002).

29. J.-C. Diels and W. Rudolph, Ultrashort Laser Pulse Phenomena: Fundamentals, Techniques and Applications on the Femtosecond Time Scale, 2nd ed. (Academic, 2006).

30. All nonstationary optical elements require some reference signal that determines the timing of the modulation. In practice, a number of useful nonstationary elements are based on the mixing of an independent control signal with the optical pulse by means of a material nonlinear response. We wish, however, to emphasize that self-activated nonlinear responses are not necessary, and that passive elements without nonlinear responses are not sufficient for pulse characterization.

31. V. Wong and I. A. Walmsley, “Analysis of ultrashort pulse-shape measurement using linear interferometers,” Opt. Lett. 19, 287–289 (1994). [CrossRef]   [PubMed]  

32. I. A. Walmsley and V. Wong, “Characterization of the electric field of ultrashort optical pulses,” J. Opt. Soc. Am. B 13, 2453–2463 (1996). [CrossRef]  

33. I. A. Walmsley, L. Waxer, and C. Dorrer, “The role of dispersion in ultrafast optics,” Rev. Sci. Instrum. 72, 1–29 (2001). [CrossRef]  

34. C. Iaconis, V. Wong, and I. A. Walmsley, “Direct interferometric techniques for characterizing ultrashort optical pulses,” IEEE J. Sel. Top. Quantum Electron. 4, 285–294 (1998). [CrossRef]  

35. C. Dorrer, B. de Beauvoir, C. Le Blanc, J.-P. Rousseau, S. Ranc, P. Rousseau, J.-P. Chambaret, and F. Salin, “Characterization of chirped-pulse amplification systems with spectral phase interferometry for direct electric-field reconstruction,” Appl. Phys. B 70, S77–S84 (2000). [CrossRef]  

36. C. Dorrer and I. A. Walmsley, “Measurement of the statistical properties of a train of ultrashort light pulses,” in Summaries of Papers Presented at the Conference on Lasers and Electro-Optics, 2002. CLEO '02. Technical Digest (IEEE, 2002), Vol.1, pp. 85–86.

37. L. Cohen, Time–Frequency Analysis, Prentice Hall Signal Processing Series (Prentice Hall PTR, 1994).

38. J. Paye, “The chronocyclic representation of ultrashort light pulses,” IEEE J. Quantum Electron. 28, 2262–2273 (1992). [CrossRef]  

39. V. Wong and I. A. Walmsley, “Ultrashort-pulse characterization from dynamic spectrograms by iterative phase retrieval,” J. Opt. Soc. Am. B 14, 944–949 (1997). [CrossRef]  

40. J. A. Armstrong, “Measurement of picosecond laser pulse widths,” Appl. Phys. Lett. 10, 16–18 (1967). [CrossRef]  

41. H. P. Weber, “Method for pulsewidth measurement of ultrashort light pulses generated by phase-locked lasers using nonlinear optics,” J. Appl. Phys. 38, 2231–2234 (1967). [CrossRef]

42. J. A. Giordmaine, P. M. Rentzepis, S. L. Shapiro, and K. W. Wecht, “Two-photon excitation of fluorescence by picosecond light pulses,” Appl. Phys. Lett. 11, 216–218 (1967). [CrossRef]  

43. S. L. Shapiro, “Second harmonic generation in LiNbO3 by picosecond pulses,” Appl. Phys. Lett. 13, 19–21 (1968). [CrossRef]  

44. D. H. Auston, “Measurement of picosecond pulse shape and background level,” Appl. Phys. Lett. 18, 249–251 (1971). [CrossRef]  

45. J. Peatross and A. Rundquist, “Temporal decorrelation of short laser pulses,” J. Opt. Soc. Am. B 15, 216–222 (1998). [CrossRef]  

46. J. H. Chung and A. M. Weiner, “Ambiguity of ultrashort pulse shapes retrieved from the intensity autocorrelation and the power spectrum,” IEEE J. Sel. Top. Quantum Electron. 7, 656–666 (2001). [CrossRef]  

47. E. I. Blount and J. R. Klauder, “Recovery of laser intensity from correlation data,” J. Appl. Phys. 40, 2874–2875 (1969). [CrossRef]  

48. T. Mindl, P. Hefferle, S. Schneider, and F. Dörr, “Characterisation of a train of subpicosecond laser pulses by fringe resolved autocorrelation measurements,” Appl. Phys. B 31, 201–207 (1983). [CrossRef]  

49. J.-C. M. Diels, J. J. Fontaine, I. C. McMichael, and F. Simoni, “Control and measurement of ultrashort pulse shapes (in amplitude and phase) with femtosecond accuracy,” Appl. Opt. 24, 1270–1282 (1985). [CrossRef]   [PubMed]  

50. K. Naganuma, K. Mogi, and H. Yamada, “General method for ultrashort light-pulse chirp measurement,” IEEE J. Quantum Electron. 25, 1225–1233 (1989). [CrossRef]  

51. L. Dahlström and B. Källberg, “Third-order correlation measurement of ultrashort light pulses,” Opt. Commun. 4, 285–288 (1971). [CrossRef]  

52. H. St. Albrecht, P. Heist, J. Kleinschmidt, D. van Lap, and T. Schröder, “Measurement of ultraviolet femtosecond pulses using the optical Kerr effect,” Appl. Phys. B 55, 362–364 (1992). [CrossRef]  

53. E. J. Divall and I. N. Ross, “High dynamic range contrast measurements by use of an optical parametric amplifier correlator,” Opt. Lett. 29, 2273–2275 (2004). [CrossRef]   [PubMed]  

54. S. Luan, M. H. R. Hutchinson, R. A. Smith, and F. Zhou, “High dynamic range third-order correlation measurement of picosecond laser pulse shapes,” Meas. Sci. Technol. 4, 1426–1429 (1993). [CrossRef]  

55. J. Collier, C. Hernandez-Gomez, R. Allott, C. Danson, and A. Hall, “A single-shot third-order autocorrelator for pulse contrast and pulse shape measurements,” Laser Part. Beams 19, 231–235 (2001). [CrossRef]  

56. C. Dorrer, J. Bromage, and J. D. Zuegel, “High-dynamic-range single-shot cross-correlator based on an optical pulse replicator,” Opt. Express 16, 13534–13544 (2008). [CrossRef]   [PubMed]  

57. T. Feurer, S. Niedermeier, and R. Sauerbrey, “Measuring the temporal intensity of ultrashort laser pulses by triple correlation,” Appl. Phys. B 66, 163–168 (1998). [CrossRef]  

58. J. L. A. Chilla and O. E. Martinez, “Direct determination of the amplitude and the phase of femtosecond light pulses,” Opt. Lett. 16, 39–41 (1991). [CrossRef]   [PubMed]  

59. J. L. A. Chilla and O. E. Martinez, “Analysis of a method of phase measurement of ultrashort pulses in the frequency domain,” IEEE J. Quantum Electron. 27, 1228–1235 (1991). [CrossRef]  

60. J. Paye, M. Ramaswamy, J. G. Fujimoto, and E. P. Ippen, “Measurement of the amplitude and phase of ultrashort light pulses from spectrally resolved autocorrelation,” Opt. Lett. 18, 1946–1948 (1993). [CrossRef]   [PubMed]  

61. J.-P. Foing, J.-P. Likforman, M. Joffre, and A. Migus, “Femtosecond pulse phase measurement by spectrally resolved up-conversion: application to continuum compression,” IEEE J. Quantum Electron. 28, 2285–2290 (1992). [CrossRef]  

62. J. Rhee, T. Sosnowski, A. C. Tien, and T. Norris, “Real-time dispersion analyzer of femtosecond laser pulses with use of a spectrally and temporally resolved upconversion technique,” J. Opt. Soc. Am. B 13, 1780–1785 (1996). [CrossRef]  

63. L. Lepetit, G. Chériaux, and M. Joffre, “Linear techniques of phase measurement by femtosecond spectral interferometry for applications in spectroscopy,” J. Opt. Soc. Am. B 12, 2467–2474 (1995). [CrossRef]  

64. D. N. Fittinghoff, J. L. Bowie, J. N. Sweetser, R. T. Jennings, M. A. Krumbügel, K. W. Delong, R. Trebino, and I. A. Walmsley, “Measurement of the intensity and phase of ultraweak, ultrashort laser pulses,” Opt. Lett. 21, 884–886 (1996). [CrossRef]   [PubMed]  

65. J. W. Nicholson, J. Jasapara, W. Rudolph, F. G. Omenetto, and A. J. Taylor, “Full-field characterization of femtosecond pulses by spectrum and cross-correlation measurements,” Opt. Lett. 24, 1774–1776 (1999). [CrossRef]  

66. J. W. Nicholson, M. Mero, J. Jasapara, and W. Rudolph, “Unbalanced third-order correlations for full characterization of femtosecond pulses,” Opt. Lett. 25, 1801–1803 (2000). [CrossRef]  

67. K. H. Hong, Y. S. Lee, and C. H. Nam, “Electric-field reconstruction of femtosecond laser pulses from interferometric autocorrelation using an evolutionary algorithm,” Opt. Commun. 271, 169–177 (2007). [CrossRef]  

68. R. G. M. P. Koumans and A. Yariv, “Time-resolved optical gating based on dispersive propagation: a new method to characterize optical pulses,” IEEE J. Quantum Electron. 36, 137–144 (2000). [CrossRef]  

69. R. G. M. P. Koumans and A. Yariv, “Pulse characterization at 1.5 micron using time-resolved optical gating based on dispersive propagation,” IEEE Photon. Technol. Lett. 12, 666–668 (2000). [CrossRef]  

70. D. Meshulach, D. Yelin, and Y. Silberberg, “Adaptive ultrashort pulse compression and shaping,” Opt. Commun. 138, 345–348 (1997). [CrossRef]  

71. B. Xu, J. M. Gunn, J. M. Dela Cruz, V. V. Lozovoy, and M. Dantus, “Quantitative investigation of the multiphoton intrapulse interference phase scan method for simultaneous phase measurement and compensation of femtosecond laser pulses,” J. Opt. Soc. Am. B 23, 750–759 (2006). [CrossRef]  

72. E. B. Treacy, “Measurement and interpretation of dynamic spectrograms of picosecond light pulses,” J. Appl. Phys. 42, 3848–3858 (1971). [CrossRef]  

73. A. S. L. Gomes, A. S. Gouveia-Neto, and J. R. Taylor, “Direct measurement of chirped optical pulses with picosecond resolution,” Electron. Lett. 22, 41–42 (1986). [CrossRef]  

74. C. M. Olsen and H. Izadpanah, “Time-resolved chirp evaluations of Gbit/s NRZ and gain-switched DFB laser pulses using narrowband Fabry–Perot spectrometer,” Electron. Lett. 25, 1018–1019 (1989). [CrossRef]  

75. K. Mori, T. Morioka, and M. Saruwatari, “Group velocity dispersion measurement using supercontinuum picosecond pulses generated in an optical fibre,” Electron. Lett. 29, 987–989 (1993). [CrossRef]  

76. A. Watanabe, H. Saito, Y. Ishida, and T. Yajima, “Computer-assisted spectrum-resolved SHG autocorrelator for monitoring phase characteristics of femtosecond pulses,” Opt. Commun. 63, 320–324 (1987). [CrossRef]  

77. D. T. Reid, “Algorithm for complete and rapid retrieval of ultrashort pulse amplitude and phase from a sonogram,” IEEE J. Quantum Electron. 35, 1584–1589 (1999). [CrossRef]  

78. P. Winzer, C. Dorrer, R. J. Essiambre, and I. Kang, “Chirped return-to-zero modulation by imbalanced pulse carver driving signals,” IEEE Photon. Technol. Lett. 16, 1379–1381 (2004). [CrossRef]  

79. V. Wong and I. A. Walmsley, “Phase retrieval in time-resolved spectral phase measurement,” Proc. SPIE 2377, 178–186 (1995). [CrossRef]  

80. D. J. Kane and R. Trebino, “Characterization of arbitrary femtosecond pulses using frequency-resolved optical gating,” IEEE J. Quantum Electron. 29, 571–579 (1993). [CrossRef]  

81. K. W. Delong, R. Trebino, and B. White, “Simultaneous recovery of two ultrashort laser pulses from a single spectrogram,” J. Opt. Soc. Am. B 12, 2463–2466 (1995). [CrossRef]  

82. D. J. Kane, G. Rodriguez, A. J. Taylor, and T. S. Clement, “Simultaneous measurement of two ultrashort laser pulses from a single spectrogram in a single shot,” J. Opt. Soc. Am. B 14, 935–943 (1997). [CrossRef]  

83. D. J. Kane, “Recent progress toward real-time measurement of ultrashort laser pulses,” IEEE J. Quantum Electron. 35, 421–431 (1999). [CrossRef]  

84. B. Seifert, H. Stolz, and M. Tasche, “Nontrivial ambiguities for blind frequency-resolved optical gating and the problem of uniqueness,” J. Opt. Soc. Am. B 21, 1089–1097 (2004). [CrossRef]  

85. D. J. Kane, “Principal components generalized projections: a review,” J. Opt. Soc. Am. B 25, A120–A132 (2008). [CrossRef]

86. D. Keusters, H.-S. Tan, P. O’Shea, E. Zeek, R. Trebino, and W. S. Warren, “Relative-phase ambiguities in measurements of ultrashort pulses with well-separated multiple frequency components,” J. Opt. Soc. Am. B 20, 2226–2237 (2003). [CrossRef]  

87. B. Yellampalle, K. Kim, and A. J. Taylor, “Amplitude ambiguities in second-harmonic generation frequency-resolved optical gating,” Opt. Lett. 32, 3558 (2007). [CrossRef]   [PubMed]  

88. B. Yellampalle, K. Kim, and A. J. Taylor, “Amplitude ambiguities in second-harmonic generation frequency-resolved optical gating: erratum,” Opt. Lett. 33, 2854 (2008). [CrossRef]  

89. L. Xu, E. Zeek, and R. Trebino, “Simulations of frequency-resolved optical gating for measuring very complex pulses,” J. Opt. Soc. Am. B 25, A70–A80 (2008). [CrossRef]  

90. D. N. Fittinghoff, K. W. Delong, R. Trebino, and C. L. Ladera, “Noise sensitivity in frequency-resolved-optical-gating measurements of ultrashort optical pulses,” J. Opt. Soc. Am. B 12, 1955–1967 (1995). [CrossRef]  

91. D. J. Kane, F. G. Omenetto, and A. J. Taylor, “Convergence test for inversion of frequency-resolved optical gating spectrograms,” Opt. Lett. 25, 1216–1218 (2000). [CrossRef]  

92. M. Munroe, D. H. Christensen, and R. Trebino, “Error bars in intensity and phase measurements of ultrashort laser pulses,” in Summaries of papers presented at the Conference on Lasers and Electro-Optics, 1998. CLEO 98. Technical Digest. (IEEE, 1998), pp. 462–463. [CrossRef]  

93. Z. Wang, E. Zeek, R. Trebino, and P. Kvam, “Determining error bars in measurements of ultrashort laser pulses,” J. Opt. Soc. Am. B 20, 2400–2405 (2003). [CrossRef]  

94. J. L. A. Chilla and O. E. Martinez, “Frequency domain phase measurement of ultrashort light pulses. Effect of noise,” Opt. Commun. 89, 434–440 (1992). [CrossRef]  

95. D. T. Reid, B. C. Thomsen, J. M. Dudley, and J. D. Harvey, “Sonogram characterisation of picosecond pulses at 1.5 micron using waveguide two photon absorption,” Electron. Lett. 36, 1141–1142 (2000). [CrossRef]  

96. I. G. Cormack, R. Ortega-Martinez, W. Sibbett, and D. T. Reid, “Ultrashort pulse characterisation using a scanning Fabry–Perot etalon to rapidly acquire and retrieve a sonogram,” Summaries of Papers Presented at the Conference on Lasers and Electro-Optics, 2001. CLEO '01. Technical Digest (IEEE, 2001), pp. 272–273. [CrossRef]  

97. D. A. Fishman, “Design and performance of externally modulated 1.5 micron laser transmitter in the presence of chromatic dispersion,” J. Lightwave Technol. 11, 624–632 (1993). [CrossRef]  

98. A. Bresson, N. Stelmakh, J.-M. Lourtioz, A. Shen, and C. Froehly, “Chirp measurement of multimode Q-switched laser diode pulses by use of a streak camera and a grating monochromator,” Appl. Opt. 37, 1022–1025 (1998). [CrossRef]  

99. A. S. L. Gomes, V. L. Silva, and J. R. Taylor, “Direct measurement of nonlinear frequency chirp of Raman radiation in single-mode optical fibers using a spectral window method,” J. Opt. Soc. Am. B 5, 373–379 (1988). [CrossRef]  

100. Y. Ozeki, Y. Takushima, H. Yoshimi, K. Kikuchi, H. Yamauchi, and H. Taga, “Complete characterization of picosecond optical pulses in long-haul dispersion-managed transmission systems,” IEEE Photon. Technol. Lett. 17, 648–650 (2005). [CrossRef]  

101. D. T. Reid and J. Garduno-Meija, “General ultrafast pulse measurement using the cross-correlation single-shot sonogram technique,” Opt. Lett. 29, 644–646 (2004). [CrossRef]   [PubMed]  

102. D. T. Reid and I. G. Cormack, “Single-shot sonogram: a real-time chirp monitor for ultrafast oscillator,” Opt. Lett. 27, 658–660 (2002). [CrossRef]  

103. C. Radzewicz, P. Wasylczyk, and J. S. Krasinski, “A poor man’s FROG,” Opt. Commun. 186, 329–333 (2000). [CrossRef]  

104. P. O’Shea, M. Kimmel, X. Gu, and R. Trebino, “Highly-simplified device for ultrashort pulse measurement,” Opt. Lett. 26, 932–934 (2001). [CrossRef]  

105. D. Panasenko and Y. Fainman, “Single-shot sonogram generation for femtosecond laser pulse diagnostics by use of two-photon conductivity in a silicon CCD camera,” Opt. Lett. 27, 1475–1477 (2002). [CrossRef]

106. D. Panasenko, P. C. Sun, N. Alic, and Y. Fainman, “Single-shot generation of a sonogram by time gating of a spectrally decomposed ultrashort laser pulse,” Appl. Opt. 41, 5185–5190 (2002). [CrossRef]   [PubMed]  

107. Y. Ishida, K. Naganuma, and T. Yajima, “Self-phase modulation in hybridly mode-locked CW dye lasers,” IEEE J. Quantum Electron. QE-21, 69–77 (1985). [CrossRef]  

108. A. Watanabe, S. Tanaka, and T. Kobayashi, “Microcomputer-based spectrum-resolved second-harmonic generation correlator for fast measurement of ultrashort pulses,” Rev. Sci. Instrum. 56, 2259–2262 (1985). [CrossRef]  

109. R. Trebino, K. W. Delong, D. N. Fittinghoff, J. N. Sweetser, M. A. Krumbügel, B. A. Richman, and D. J. Kane, “Measuring ultrashort laser pulses in the time–frequency domain using frequency-resolved optical gating,” Rev. Sci. Instrum. 68, 3277–3295 (1997). [CrossRef]  

110. J. Paye, “How to measure the amplitude and phase of an ultrashort light pulse with an autocorrelator and a spectrometer,” IEEE J. Quantum Electron. 30, 2693–2697 (1994). [CrossRef]  

111. K. W. Delong, R. Trebino, J. Hunter, and W. E. White, “Frequency-resolved optical gating with the use of second-harmonic generation,” J. Opt. Soc. Am. B 11, 2206–2215 (1994). [CrossRef]  

112. L. P. Barry, J. M. Dudley, P. G. Bollond, J. D. Harvey, and R. Leonhardt, “Complete characterisation of pulse propagation in optical fibres using frequency-resolved optical gating,” Electron. Lett. 32, 2339–2340 (1996). [CrossRef]  

113. L. P. Barry, B. C. Thomsen, J. M. Dudley, and J. D. Harvey, “Characterization of 1.55-μm pulses from a self-seeded gain-switched Fabry–Perot laser diode using frequency-resolved optical gating,” IEEE Photon. Technol. Lett. 10, 935–937 (1998). [CrossRef]  

114. H. Miao, S.-D. Yang, C. Langrock, R. V. Roussev, M. M. Fejer, and A. M. Weiner, “Ultralow-power second-harmonic generation frequency-resolved optical gating using aperiodically poled lithium niobate waveguides,” J. Opt. Soc. Am. B 25, A41–A53 (2008). [CrossRef]

115. A. Baltuska, M. S. Pshenichnikov, and D. A. Wiersma, “Second-harmonic generation frequency-resolved optical gating in the single-cycle regime,” IEEE J. Quantum Electron. 35, 459–478 (1999). [CrossRef]  

116. S. Akturk, C. D’Amico, and A. Mysyrowicz, “Measuring ultrashort pulses in the single-cycle regime using frequency-resolved optical gating,” J. Opt. Soc. Am. B 25, A63–A69 (2008). [CrossRef]  

117. B. A. Richman, M. A. Krumbügel, and R. Trebino, “Temporal characterization of mid-IR free-electron-laser pulses by frequency-resolved optical gating,” Opt. Lett. 22, 721–723 (1997). [CrossRef]   [PubMed]  

118. P. O’Shea, M. Kimmel, X. Gu, and R. Trebino, “Increased-bandwidth in ultrashort-pulse measurement using an angle-dithered nonlinear-optical crystal,” Opt. Express 7, 342–349 (2000). [CrossRef]   [PubMed]  

119. D. N. Fittinghoff, A. C. Millard, J. A. Squier, and M. Muller, “Frequency-resolved optical gating measurement of ultrashort pulses passing through a high numerical aperture objective,” IEEE J. Quantum Electron. 35, 479–486 (1999). [CrossRef]  

120. L. Gallmann, G. Steinmeyer, D. H. Sutter, N. Matuschek, and U. Keller, “Collinear type II second-harmonic-generation frequency-resolved optical gating for the characterization of sub-10-fs optical pulses,” Opt. Lett. 25, 269–271 (2000). [CrossRef]  

121. I. Amat-Roldán, I. G. Cormack, P. Loza-Alvarez, E. J. Gualda, and D. Artigas, “Ultrashort pulse characterization with SHG collinear-FROG,” Opt. Express 12, 1169–1178 (2004). [CrossRef]  

122. G. Stibenz and G. Steinmeyer, “Interferometric frequency-resolved optical gating,” Opt. Express 13, 2617–2626 (2005). [CrossRef]   [PubMed]  

123. G. Stibenz and G. Steinmeyer, “Structures of interferometric frequency-resolved optical gating,” IEEE J. Sel. Top. Quantum Electron. 12, 286–296 (2006). [CrossRef]  

124. J. M. Dudley, C. Finot, D. J. Richardson, and G. Millot, “Self-similarity in ultrafast nonlinear optics,” Nat. Phys. 3, 597–603 (2007). [CrossRef]  

125. R. Trebino and D. J. Kane, “Using phase retrieval to measure the intensity and phase of ultrashort pulses: frequency-resolved optical gating,” J. Opt. Soc. Am. A 10, 1101–1111 (1993). [CrossRef]  

126. A. Kwok, L. Jusinski, M. A. Krumbügel, J. N. Sweetser, D. N. Fittinghoff, and R. Trebino, “Frequency-resolved optical gating using cascaded second-order nonlinearities,” IEEE J. Sel. Top. Quantum Electron. 4, 271–277 (1998). [CrossRef]  

127. T. S. Clement, A. J. Taylor, and D. J. Kane, “Single-shot measurement of the amplitude and phase of ultrashort laser-pulses in the violet,” Opt. Lett. 20, 70–72 (1995). [CrossRef]   [PubMed]  

128. T. Tsang, M. A. Krumbügel, K. W. Delong, D. N. Fittinghoff, and R. Trebino, “Frequency-resolved optical-gating measurements of ultrashort pulses using surface third-harmonic generation,” Opt. Lett. 21, 1381–1383 (1996). [CrossRef]   [PubMed]  

129. G. Ramos-Ortiz, M. Cha, S. Thayumanavan, J. Mendez, S. R. Marder, and B. Kippelen, “Ultrafast-pulse diagnostic using third-order frequency-resolved optical gating in organic films,” Appl. Phys. Lett. 85, 3348–3350 (2004). [CrossRef]  

130. R. Chadwick, E. Spahr, J. A. Squier, C. G. Durfee, B. C. Walker, and D. N. Fittinghoff, “Fringe-free, background-free, collinear third-harmonic generation frequency-resolved optical gating measurements for multiphoton microscopy,” Opt. Lett. 31, 3366–3368 (2006). [CrossRef]   [PubMed]  

131. J. N. Sweetser, D. N. Fittinghoff, and R. Trebino, “Transient-grating frequency-resolved optical gating,” Opt. Lett. 22, 519–521 (1997). [CrossRef]   [PubMed]  

132. P.-A. Lacourt, J. M. Dudley, J.-M. Merolla, H. Porte, J.-P. Goedgebuer, and W. T. Rhodes, “Milliwatt-peak-power pulse characterization at 1.55 μm by wavelength-conversion frequency-resolved optical gating,” Opt. Lett. 27, 863–865 (2002). [CrossRef]  

133. I. Kang, C. Dorrer, L. Zhang, M. Dinu, M. Rasras, L. L. Buhl, S. Cabot, A. Bhardwaj, X. Liu, M. A. Cappuzzo, L. Gomez, A. Wong-Foy, Y. F. Chen, N. K. Dutta, S. S. Patel, D. T. Neilson, C. R. Giles, A. Piccirilli, and J. Jaques, “Characterization of the dynamical processes in all-optical signal processing using semiconductor optical amplifiers,” IEEE J. Sel. Top. Quantum Electron. 14, 758–769 (2008). [CrossRef]  

134. K. Ogawa and M. D. Pelusi, “High-sensitivity pulse spectrogram measurement using two-photon absorption in a semiconductor at 1.5-μm wavelength,” Opt. Express 7, 135–140 (2000). [CrossRef]   [PubMed]  

135. K. Ogawa, “Real-time intuitive spectrogram measurement of ultrashort optical pulses using two-photon absorption in a semiconductor,” Opt. Express 10, 262–267 (2002). [CrossRef]   [PubMed]  

136. S. Linden, J. Kuhl, and H. Giessen, “Amplitude and phase characterization of weak blue ultrashort pulses by downconversion,” Opt. Lett. 24, 569–571 (1999). [CrossRef]  

137. D. T. Reid, P. Loza-Alvarez, C. T. A. Brown, T. Beddard, and W. Sibbett, “Amplitude and phase measurement of mid-infrared femtosecond pulses by using cross-correlation frequency-resolved optical gating,” Opt. Lett. 25, 1478–1480 (2000). [CrossRef]  

138. J. Y. Zhang, A. P. Shreenath, M. Kimmel, E. Zeek, and R. Trebino, “Measurement of the intensity and phase of attojoule femtosecond light pulses using optical-parametric-amplification cross-correlation frequency-resolved optical gating,” Opt. Express 11, 601–609 (2003). [CrossRef]   [PubMed]  

139. J. Y. Zhang, C.-K. Lee, J. Y. Huang, and C.-L. Pan, “Sub femto-joule sensitive single-shot OPA-XFROG and its application in study of white-light supercontinuum generation,” Opt. Express 12, 574–581 (2004). [CrossRef]   [PubMed]  

140. J. M. Dudley, X. Gu, L. Xu, M. Kimmel, E. Zeek, P. O’Shea, R. Trebino, S. Coen, and R. S. Windeler, “Cross-correlation frequency resolved optical gating analysis of broadband continuum generation in photonic crystal fiber: simulations and experiments,” Opt. Express 10, 1215–1221 (2002). [CrossRef]   [PubMed]  

141. A. Efimov and A. J. Taylor, “Supercontinuum generation and soliton timing jitter in SF6 soft glass photonic crystal fibers,” Opt. Express 16, 5942–5953 (2008). [CrossRef]   [PubMed]

142. D. J. Kane, A. J. Taylor, R. Trebino, and K. W. Delong, “Single-shot measurement of the intensity and phase of a femtosecond UV laser-pulse with frequency-resolved optical gating,” Opt. Lett. 19, 1061–1063 (1994). [CrossRef]   [PubMed]  

143. F. Salin, P. Georges, G. Roger, and A. Brun, “Single-shot measurement of a 52-fs pulse,” Appl. Opt. 26, 4528–4531 (1987). [CrossRef]   [PubMed]  

144. D. Lee, Z. Wang, X. Gu, and R. Trebino, “Effect—and removal—of an ultrashort pulse's spatial profile on the single-shot measurement of its temporal profile,” J. Opt. Soc. Am. B 25, A93–A100 (2008). [CrossRef]  

145. S. Akturk, M. Kimmel, P. O’Shea, and R. Trebino, “Extremely simple device for measuring 20-fs pulses,” Opt. Lett. 29, 1025–1027 (2004). [CrossRef]   [PubMed]  

146. S. Akturk, M. Kimmel, P. O’Shea, and R. Trebino, “Measuring pulse-front tilt in ultrashort pulses using GRENOUILLE,” Opt. Express 11, 491–501 (2003). [CrossRef]   [PubMed]  

147. C. Dorrer and I. Kang, “Linear self-referencing techniques for short-optical-pulse characterization,” J. Opt. Soc. Am. B 25, A1–A12 (2008). [CrossRef]  

148. B. C. Thomsen, M. A. F. Roelens, R. T. Watts, and D. J. Richardson, “Comparison between nonlinear and linear spectrographic techniques for the complete characterization of high bit-rate pulses used in optical telecommunications,” IEEE Photon. Technol. Lett. 17, 1914–1916 (2005). [CrossRef]  

149. D. Reid and J. D. Harvey, “Linear spectrograms using electrooptic modulators,” IEEE Photon. Technol. Lett. 19, 535–537 (2007). [CrossRef]  

150. K. T. Vu, A. Malinovski, M. A. F. Roelens, M. Ibsen, P. Petropoulos, and D. J. Richardson, “Full characterization of low-power picosecond pulses from a gain-switched diode laser using electrooptic modulation-based linear FROG,” IEEE Photon. Technol. Lett. 20, 505–507 (2008). [CrossRef]  

151. X. Wei, J. Leuthold, C. Dorrer, D. M. Gill, and X. Liu, “Chirp reduction of π/2 alternate-phase pulses by optical filtering,” in Optical Fiber Communication Conference and Exposition and The National Fiber Optic Engineers Conference, Technical Digest (CD) (Optical Society of America, 2005), paper JWA42.

152. R. Maher, P. M. Anandarajah, A. D. Ellis, D. Reid, and L. P. Barry, “Optimization of a 42.7-Gb/s wavelength tunable RZ transmitter using a linear spectrogram technique,” Opt. Express 16, 11281–11288 (2008). [CrossRef]   [PubMed]

153. D. M. Marom, C. Dorrer, I. Kang, C. R. Doerr, M. A. Cappuzzo, L. Gomez, E. Chen, A. Wong-Foy, E. Laskowski, F. Klemens, C. Bolle, R. Cirelli, E. Ferry, T. Sorsch, J. Miner, E. Bower, M. E. Simon, F. Pardo, and D. Lopez, “Compact spectral pulse shaping using hybrid planar lightwave circuit and free-space optics with MEMS piston micro-mirrors and spectrogram feedback control,” in The 17th Annual Meeting of the IEEE Lasers and Electro-Optics Society, 2004. LEOS 2004. (IEEE LEOS, 2004), Vol. 2, pp. 585–586.

154. M. A. F. Roelens, J. A. Bolger, D. Williams, and B. J. Eggleton, “Multi-wavelength synchronous pulse burst generation with a wavelength selective switch,” Opt. Express 16, 10152–10157 (2008). [CrossRef]   [PubMed]  

155. C. Dorrer, “Investigation of the spectrogram technique for the characterization of picosecond optical pulses,” in Optical Fiber Communication Conference and Exposition and The National Fiber Optic Engineers Conference, Technical Digest (CD) (Optical Society of America, 2005), paper OTuB3.

156. W. J. Caputi, “Sweep-heterodyne apparatus for changing the time-bandwidth product of a signal,” U.S. Patent 3,283,080 (Nov. 1, 1966).

157. W. J. Caputi, “Pulse-type object detection apparatus,” U.S. Patent 3,354,456 (Nov. 21, 1967).

158. W. J. Caputi, “Stretch: a time transformation technique,” IEEE Trans. Aerosp. Electron. Syst. AES-7, 269–278 (1971). [CrossRef]  

159. P. Tournois, J.-L. Vernet, and G. Bienvenu, “Sur l’analogie optique de certains montages électroniques: formation d’images temporelles de signaux électriques,” Acad. Sci., Paris, C. R. 267, 375–378 (1968).

160. S. A. Akhmanov, A. S. Chirkin, K. N. Drabovich, A. I. Kovrigin, R. V. Khokhlov, and A. P. Sukhorukov, “Non-stationary nonlinear optical effects and ultrashort light pulse formation,” IEEE J. Quantum Electron. QE-4, 598–605 (1968). [CrossRef]  

161. A. Papoulis, Systems and Transforms with Applications in Optics (McGraw-Hill, 1968).

162. L. S. Telegin and A. S. Chirkin, “Reversal and reconstruction of the profile of ultrashort light pulses,” Sov. J. Quantum Electron. 15, 101–102 (1985). [CrossRef]  

163. S. A. Akhmanov, V. A. Vysloukh, and A. S. Chirkin, “Self-action of wave packets in a nonlinear medium and femtosecond laser pulse generation,” Sov. Phys. Usp. 29, 642–677 (1987). [CrossRef]  

164. B. H. Kolner and M. Nazarathy, “Temporal imaging with a time-lens,” Opt. Lett. 14, 630–632 (1989). [CrossRef]   [PubMed]  

165. I. P. Christov, “Theory of a time telescope,” Opt. Quantum Electron. 22, 473–480 (1990). [CrossRef]  

166. B. H. Kolner and M. Nazarathy, “Temporal imaging with a time lens: erratum,” Opt. Lett. 15, 655 (1990). [CrossRef]  

167. S. P. Dijaili, A. Dienes, and J. S. Smith, “ABCD matrices for dispersive pulse propagation,” IEEE J. Quantum Electron. 26, 1158–1164 (1990). [CrossRef]  

168. B. H. Kolner, “Space–time duality and the theory of temporal imaging,” IEEE J. Quantum Electron. 30, 1951–1963 (1994). [CrossRef]  

169. C. V. Bennett and B. H. Kolner, “Aberrations in temporal imaging,” IEEE J. Quantum Electron. 37, 20–32 (2001). [CrossRef]  

170. C. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging—part I: system configurations,” IEEE J. Quantum Electron. 36, 430–437 (2000). [CrossRef]  

171. C. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging—part II: system performance,” IEEE J. Quantum Electron. 36, 649–655 (2000). [CrossRef]  

172. M. Vampouille, A. Barthélémy, B. Colombeau, and C. Froehly, “Observation et applications des modulations de fréquence dans les fibres unimodales,” J. Opt. (Paris) 15, 385–390 (1984). [CrossRef]  

173. M. Vampouille, J. Marty, and C. Froehly, “Optical frequency intermodulation between two picosecond laser pulses,” IEEE J. Quantum Electron. QE-22, 192–194 (1986). [CrossRef]  

174. M. T. Kaufman, W. C. Banyai, A. A. Godil, and D. M. Bloom, “Time-to-frequency converter for measuring picosecond optical pulses,” Appl. Phys. Lett. 64, 270–272 (1994). [CrossRef]  

175. L. Kh. Mouradian, F. Louradour, V. Messager, A. Barthélémy, and C. Froehly, “Spectro-temporal imaging of femtosecond events,” IEEE J. Quantum Electron. 36, 795–801 (2000). [CrossRef]  

176. J. Azaña, N. K. Berger, B. Levit, and B. Fischer, “Spectral Fraunhofer regime: time-to-frequency conversion by the action of a single time lens on an optical pulse,” Appl. Opt. 43, 483–490 (2004). [CrossRef]   [PubMed]  

177. A. A. Godil, B. A. Auld, and D. M. Bloom, “Time-lens producing 1.9ps optical pulses,” Appl. Phys. Lett. 62, 1047–1049 (1992). [CrossRef]  

178. C. V. Bennett, R. P. Scott, and B. H. Kolner, “Temporal magnification and reversal of 100-Gb/s optical data with an up-conversion time microscope,” Appl. Phys. Lett. 65, 2513–2515 (1994). [CrossRef]

179. C. V. Bennett and B. H. Kolner, “Upconversion time microscope demonstrating 103× magnification of femtosecond waveforms,” Opt. Lett. 24, 783–785 (1999). [CrossRef]  

180. L. F. Mollenauer and C. Xu, “Time-lens timing-jitter compensator in ultra-long haul DWDM dispersion managed soliton transmissions,” in Summaries of Papers Presented at the Conference on Lasers and Electro-Optics, 2002. CLEO '02. Technical Digest (IEEE LEOS, 2002), Vol. 2, CPDB1-1–CPDB1-3.

181. L. A. Jiang, M. E. Grein, A. Haus, E. P. Ippen, and H. Yokoyama, “Timing jitter eater for optical pulse trains,” Opt. Lett. 28, 78–80 (2003). [CrossRef]   [PubMed]  

182. J. van Howe and C. Xu, “Ultrafast optical signal processing based upon space–time dualities,” J. Lightwave Technol. 24, 2649–2662 (2006). [CrossRef]  

183. B. H. Kolner, “The pinhole time camera,” J. Opt. Soc. Am. A 14, 3349–3357 (1997). [CrossRef]  

184. B. H. Kolner, “Generalization of the concepts of focal length and f-number to space and time,” J. Opt. Soc. Am. A 11, 3229–3234 (1994). [CrossRef]  

185. K. Ema, M. Kuwata-Gonokami, and F. Shimizu, “All-optical sub-Tbits/s serial-to-parallel conversion using excitonic giant nonlinearity,” Appl. Phys. Lett. 59, 2799–2801 (1991). [CrossRef]  

186. M. C. Nuss, M. Li, T. H. Chiu, A. M. Weiner, and A. Partovi, “Time-to-space mapping of femtosecond pulses,” Opt. Lett. 19, 684–686 (1994). [CrossRef]  

187. Y. T. Mazurenko, S. E. Putilin, A. G. Spiro, A. G. Beliaev, V. E. Yashin, and S. A. Chizhov, “Ultrafast time-to-space conversion of phase by the method of spectral nonlinear optics,” Opt. Lett. 21, 1753–1755 (1996). [CrossRef]   [PubMed]  

188. Y. T. Mazurenko, A. G. Spiro, S. E. Putilin, A. G. Beliaev, and E. B. Verkhovskij, “Time-to-space conversion of fast signals by the method of spectral nonlinear optics,” Opt. Commun. 118, 594–600 (1996). [CrossRef]  

189. P. C. Sun, Y. T. Mazurenko, and Y. Fainman, “Femtosecond pulse imaging: ultrafast optical oscilloscope,” J. Opt. Soc. Am. A 14, 1159–1170 (1997). [CrossRef]  

190. J. Azaña and M. A. Muriel, “Temporal self-imaging effects: theory and application for multiplying pulse repetition rates,” IEEE J. Sel. Top. Quantum Electron. 7, 728–744 (2001). [CrossRef]  

191. S. Atkins and B. Fischer, “All-optical pulse rate multiplication using fractional Talbot effect and field-to-intensity conversion with cross-gain modulation,” IEEE Photon. Technol. Lett. 15, 132–134 (2003). [CrossRef]  

192. W. J. Lai, P. Shum, and L. N. Binh, “Stability and transient analyses of temporal Talbot-effect-based repetition-rate multiplication mode-locked laser systems,” IEEE Photon. Technol. Lett. 16, 439 (2004). [CrossRef]  

193. C. Dorrer, “Temporal van Cittert–Zernike theorem and its application to the measurement of chromatic dispersion,” J. Opt. Soc. Am. B 21, 1417–1423 (2004). [CrossRef]  

194. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

195. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE, 1987).

196. S. Webb, From the Watching of Shadows: the Origins of Radiological Tomography (Institute of Physics, 1990).

197. U. Leonhardt, Measuring the Quantum State of Light (Cambridge University Press, 1997).

198. M. Beck, M. G. Raymer, I. A. Walmsley, and V. Wong, “Chronocyclic tomography for measuring the amplitude and phase structure of optical pulses,” Opt. Lett. 18, 2041–2043 (1993). [CrossRef]   [PubMed]  

199. C. Dorrer and I. Kang, “Complete temporal characterization of short optical pulses by simplified chronocyclic tomography,” Opt. Lett. 28, 1481–1483 (2003). [CrossRef]   [PubMed]  

200. N. L. Markaryan and L. Kh. Mouradian, “Determination of the temporal profiles of ultrashort pulses by a fibre-optic compression technique,” Quantum Electron. 25, 668–670 (1995). [CrossRef]  

201. J. A. Valdmanis, “Real time picosecond optical oscilloscope,” in Ultrafast Phenomena V, G. R. Fleming and A. E. Siegman, eds. (Springer-Verlag, 1986), pp. 82–85.

202. A. Galvanauskas, J. A. Tellefsen, A. Krotkus, M. Öberg, and B. Broberg, “Real-time picosecond electro-optic oscilloscope technique using a tunable semiconductor laser,” Appl. Phys. Lett. 60, 145–147 (1992). [CrossRef]  

203. Z. Jiang and X. C. Zhang, “Electro-optic measurement of THz field pulses with a chirped optical beam,” Appl. Phys. Lett. 72, 1945–1947 (1998). [CrossRef]  

204. T. Jannson, “Real-time Fourier transformation in dispersive optical fibers,” Opt. Lett. 8, 232–234 (1983). [CrossRef]   [PubMed]  

205. J. Azaña and M. A. Muriel, “Real-time optical spectrum analysis based on the time-space duality in chirped fiber gratings,” IEEE J. Quantum Electron. 36, 517–526 (2000). [CrossRef]  

206. N. K. Berger, B. Levit, S. Atkins, and B. Fischer, “Time-lens-based spectral analysis of optical pulses by electrooptic phase modulation,” Electron. Lett. 36, 1644–1645 (2000). [CrossRef]  

207. T. Alieva, M. J. Bastiaans, and L. Stankovic, “Signal reconstruction from two close fractional Fourier power spectra,” IEEE Trans. Signal Process. 51, 112–123 (2003). [CrossRef]  

208. I. Kang and C. Dorrer, “Highly sensitive differential tomographic technique for real-time ultrashort pulse characterization,” Opt. Lett. 30, 1545–1547 (2005). [CrossRef]   [PubMed]  

209. C. Dorrer, “Characterization of nonlinear phase shifts by use of the temporal transport-of-intensity equation,” Opt. Lett. 30, 3237–3239 (2005). [CrossRef]   [PubMed]  

210. M. R. Teague, “Irradiance moments: their propagation and use for unique retrieval of phase,” J. Opt. Soc. Am. 72, 1199–1209 (1982). [CrossRef]  

211. F. Roddier, “Wavefront sensing and the irradiance transport equation,” Appl. Opt. 29, 1402–1403 (1990). [CrossRef]   [PubMed]  

212. R. Kienberger, E. Goulielmakis, M. Uiberacker, A. Baltuska, V. V. Yakovlev, F. Bammer, A. Scrinzi, Th. Westerwalbesloh, U. Kleineberg, U. Heinzmann, M. Drescher, and F. Krausz, “Atomic transient recorder,” Nature 427, 817–821 (2004). [CrossRef]   [PubMed]  

213. F. Coppinger, A. S. Bhushan, and B. Jalali, “Photonic time stretch and its application to analog-to-digital conversion,” IEEE Trans. Microwave Theory Tech. 47, 1309–1314 (1999). [CrossRef]  

214. A. S. Bhushan, P. V. Kelkar, B. Jalali, O. Boyraz, and M. Islam, “130-GSa/s photonic analog-to-digital converter with time stretch preprocessor,” IEEE Photon. Technol. Lett. 14, 684–686 (2002). [CrossRef]

215. Y. Han and B. Jalali, “Photonic time-stretched analog-to-digital converter: fundamental concepts and practical considerations,” J. Lightwave Technol. 21, 3085–3103 (2003). [CrossRef]  

216. C. Dorrer, “Single-shot measurement of the electric field of optical waveforms by use of time magnification and heterodyning,” Opt. Lett. 31, 540–542 (2006). [CrossRef]   [PubMed]  

217. E. T. J. Nibbering, M. A. Franco, B. S. Prade, G. Grillon, J.-P. Chambaret, and A. Mysyrowicz, “Spectral determination of the amplitude and the phase of intense ultrashort optical pulses,” J. Opt. Soc. Am. B 13, 317–329 (1996). [CrossRef]  

218. J. J. Ferreiro, R. de la Fuente, and E. Lopez-Lago, “Characterization of arbitrarily polarized ultrashort laser pulses by cross-phase modulation,” Opt. Lett. 26, 1025–1027 (2001). [CrossRef]  

219. M. A. Franco, H. R. Lange, J.-F. Ripoche, B. S. Prade, and A. Mysyrowicz, “Characterization of ultrashort pulses by cross-phase modulation,” Opt. Commun. 140, 331–340 (1997). [CrossRef]  

220. H. R. Lange, M. A. Franco, J.-F. Ripoche, B. S. Prade, P. Rousseau, and A. Mysyrowicz, “Reconstruction of the time profile of femtosecond laser pulses through cross-phase modulation,” IEEE J. Sel. Top. Quantum Electron. 4, 295–300 (1998). [CrossRef]  

221. M. D. Thomson, J. M. Dudley, L. P. Barry, and J. D. Harvey, “Complete pulse characterization at 1.5 μm by cross-phase modulation in optical fibers,” Opt. Lett. 23, 1582–1584 (1998). [CrossRef]  

222. T. T. Ng, F. Parmigiani, M. Ibsen, Z. Zhang, P. Petropoulos, and D. J. Richardson, “Compensation of linear distortions by using XPM with parabolic pulses as a time lens,” IEEE Photon. Technol. Lett. 20, 1097–1099 (2008). [CrossRef]  

223. F. Ö. Ilday, J. R. Buckley, W. G. Clark, and F. W. Wise, “Self-similar evolution of parabolic pulses in a laser,” Phys. Rev. Lett. 92, 213902 (2004). [CrossRef]   [PubMed]  

224. C. Finot, B. Barviau, G. Millot, A. Guryanov, A. Sysoliatin, and S. Wabnitz, “Parabolic pulse generation with active or passive dispersion decreasing optical fibers,” Opt. Express 15, 15824–15835 (2007). [CrossRef]   [PubMed]  

225. C. Finot, G. Millot, C. Billet, and J. M. Dudley, “Experimental generation of parabolic pulses via Raman amplification in optical fiber,” Opt. Express 11, 1547–1552 (2003). [CrossRef]   [PubMed]  

226. T. Hirooka and M. Nakazawa, “All-optical 40-GHz time-domain Fourier transformation using XPM with a dark parabolic pulse,” IEEE Photon. Technol. Lett. 20, 1869–1871 (2008). [CrossRef]  

227. F. Parmigiani, P. Petropoulos, M. Ibsen, and D. J. Richardson, “Pulse retiming based on XPM using parabolic pulses formed in a fiber Bragg grating,” IEEE Photon. Technol. Lett. 18, 829–831 (2006). [CrossRef]  

228. R. Salem, M. A. Foster, A. C. Turner, D. F. Geraghty, M. Lipson, and A. L. Gaeta, “Optical time lens based on four-wave mixing on a silicon chip,” Opt. Lett. 33, 1047–1049 (2008). [CrossRef]   [PubMed]  

229. M. A. Foster, R. Salem, D. F. Geraghty, A. C. Turner-Foster, M. Lipson, and A. L. Gaeta, “Silicon-chip-based ultrafast optical oscilloscope,” Nature 456, 81–85 (2008). [CrossRef]   [PubMed]  

230. M. Françon, Optical Interferometry (Academic, 1966).

231. D. Malacara, Optical Shop Testing (Wiley-Interscience, 1991).

232. C. Froehly, A. Lacourt, and J. C. Vienot, “Notions de réponse impulsionelle et de fonction de transfert temporelle des pupilles optiques, justifications expérimentales et applications,” Nouv. Rev. Opt. 4, 183–196 (1973). [CrossRef]  

233. J. Piasecki, B. Colombeau, M. Vampouille, C. Froehly, and J. A. Arnaud, “Nouvelle méthode de mesure de la réponse impulsionnelle des fibres optiques,” Appl. Opt. 19, 3749–3755 (1980). [CrossRef]   [PubMed]  

234. F. Reynaud, F. Salin, and A. Barthélémy, “Measurement of phase shifts introduced by nonlinear optical phenomena on subpicosecond pulses,” Opt. Lett. 14, 275–277 (1989). [CrossRef]   [PubMed]  

235. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72, 156–160 (1982). [CrossRef]

236. C. Dorrer, “Influence of the calibration of the detector on spectral interferometry,” J. Opt. Soc. Am. B 16, 1160–1168 (1999). [CrossRef]  

237. C. Dorrer, N. Belabas, J.-P. Likforman, and M. Joffre, “Spectral resolution and sampling issues in Fourier-transform spectral interferometry,” J. Opt. Soc. Am. B 17, 1795–1802 (2000). [CrossRef]  

238. C. Dorrer and F. Salin, “Characterization of spectral phase modulation by classical and polarization spectral interferometry,” J. Opt. Soc. Am. B 15, 2331–2337 (1998). [CrossRef]  

239. L. Lepetit, G. Cheriaux, and M. Joffre, “Two-dimensional nonlinear optics spectroscopy: simulations and experimental demonstration,” J. Nonlinear Opt. Phys. Mater. 5, 465–476 (1996). [CrossRef]  

240. J.-P. Likforman, M. Joffre, and V. Thierry-Mieg, “Measurement of photon echoes by use of femtosecond Fourier-transform spectral interferometry,” Opt. Lett. 22, 1104–1106 (1997). [CrossRef]   [PubMed]

241. M. F. Emde, W. P. de Boeij, M. S. Pshenichnikov, and D. A. Wiersma, “Spectral interferometry as an alternative to time-domain heterodyning,” Opt. Lett. 22, 1338–1340 (1997). [CrossRef]

242. D. Birkedal and J. Shah, “Femtosecond spectral interferometry of resonant secondary emission from quantum wells: resonance Rayleigh scattering in the nonergodic regime,” Phys. Rev. Lett. 81, 2372–2375 (1998). [CrossRef]  

243. S. Haacke, S. Schaer, B. Deveaud, and V. Savona, “Interferometric analysis of resonant Rayleigh scattering from two-dimensional excitons,” Phys. Rev. B 61, R5109–R5112 (2000). [CrossRef]  

244. K. Naganuma, K. Mogi, and H. Yamada, “Group delay measurements using the Fourier transform of an interferometric cross-correlation generated by white light,” Opt. Lett. 15, 393–395 (1990). [CrossRef]   [PubMed]  

245. N. Belabas, J. P. Likforman, L. Canioni, B. Bousquet, and M. Joffre, “Coherent broadband pulse shaping in the mid infrared,” Opt. Lett. 26, 743–745 (2001). [CrossRef]  

246. J. P. Geindre, P. Audebert, A. Rousse, F. Falliès, J.-C. Gauthier, A. Mysyrowicz, A. Dos Santos, G. Hamoniaux, and A. Antonetti, “Frequency-domain interferometer for measuring the phase and amplitude of a femtosecond pulse probing a laser-produced plasma,” Opt. Lett. 19, 1997–1999 (1994). [CrossRef]   [PubMed]  

247. F. Huang, W. G. Yang, and W. S. Warren, “Quadrature spectral interferometric detection and pulse shaping,” Opt. Lett. 26, 382–384 (2001). [CrossRef]  

248. F. K. Fatemi, T. F. Carruthers, and J. W. Lou, “Characterisation of telecommunications pulse trains by Fourier-transform and dual-quadrature spectral interferometry,” Electron. Lett. 39, 921–922 (2003). [CrossRef]  

249. C. Dorrer, “Complete characterization of periodic optical sources by use of sampled test-plus-reference interferometry,” Opt. Lett. 30, 2022–2024 (2005). [CrossRef]   [PubMed]  

250. V. A. Zubov and T. I. Kuznetsova, “Solution of the phase problem for time-dependent optical signals by an interference system,” Sov. J. Quantum Electron. 21, 1285–1286 (1991). [CrossRef]  

251. C. Dorrer, “Implementation of spectral phase interferometry for direct electric-field reconstruction with a simultaneously recorded reference interferogram,” Opt. Lett. 24, 1532–1534 (1999). [CrossRef]  

252. A. Müller and M. Laubscher, “Spectral phase and amplitude interferometry for direct electric-field reconstruction,” Opt. Lett. 26, 1915–1917 (2001). [CrossRef]  

253. T. M. Shuman, M. E. Anderson, J. Bromage, C. Iaconis, L. Waxer, and I. A. Walmsley, “Real-time SPIDER: ultrashort pulse characterization at 20 Hz,” Opt. Express 5, 134–143 (1999). [PubMed]

254. W. Kornelis, J. Biegert, J. W. G. Tisch, M. Nisoli, G. Sansone, C. Vozzi, S. De Silvestri, and U. Keller, “Single-shot kilohertz characterization of ultrashort pulses by spectral phase interferometry for direct electric-field reconstruction,” Opt. Lett. 28, 281–283 (2003). [CrossRef]   [PubMed]  

255. M. E. Anderson, L. E. E. de Araujo, E. M. Kosik, and I. A. Walmsley, “The effects of noise on ultrashort-optical-pulse measurement using spectral phase interferometry for direct electric-field reconstruction,” Appl. Phys. B 70, S85–S93 (2000). [CrossRef]  

256. C. Dorrer and I. A. Walmsley, “Accuracy criterion for ultrashort pulse characterization techniques: application to spectral phase interferometry for direct electric-field reconstruction,” J. Opt. Soc. Am. B 19, 1019–1029 (2001). [CrossRef]  

257. J. R. Birge, R. Ell, and F. X. Kärtner, “Two-dimensional spectral shearing interferometry for few-cycle pulse characterization,” Opt. Lett. 31, 2063–2065 (2006). [CrossRef]   [PubMed]  

258. J. R. Birge and F. X. Kärtner, “Analysis and mitigation of systematic errors in spectral shearing interferometry of pulses approaching the single-cycle limit,” J. Opt. Soc. Am. B 25, A111–A119 (2008). [CrossRef]  

259. C. Dorrer and I. A. Walmsley, “Precision and consistency criteria in spectral phase interferometry for direct electric-field reconstruction,” J. Opt. Soc. Am. B 19, 1030–1038 (2001). [CrossRef]  

260. J. E. Rothenberg and D. Grischkowsky, “Measurement of optical phase with subpicosecond resolution by time-domain interferometry,” Opt. Lett. 12, 99–101 (1987). [CrossRef]   [PubMed]  

261. K. C. Chu, J. P. Heritage, R. S. Grant, K. X. Liu, A. Dienes, W. E. White, and A. Sullivan, “Direct measurement of the spectral phase of femtosecond pulses,” Opt. Lett. 20, 904–906 (1995). [CrossRef]   [PubMed]  

262. K. C. Chu, J. P. Heritage, R. S. Grant, and W. E. White, “Temporal interferometric measurement of femtosecond spectral phase,” Opt. Lett. 21, 1842–1844 (1996). [CrossRef]   [PubMed]  

263. S. Prein, S. A. Diddams, and J.-C. Diels, “Complete characterization of femtosecond pulses using an all-electronic detector,” Opt. Commun. 123, 567–573 (1996). [CrossRef]  

264. R. M. Fortenberry, W. V. Sorin, H. Lin, and S. A. Newton, “Low-power ultrashort optical pulse characterization using linear dispersion,” in Conference on Optical Fiber Communication. OFC 97 (IEEE, 1997), pp. 290–291. [CrossRef]  

265. R. M. Fortenberry and W. V. Sorin, “Apparatus for characterizing short optical pulses,” U.S. patent 5,684,586 (Nov. 4, 1997).

266. C. Dorrer, “Chromatic dispersion characterization by direct instantaneous frequency measurement,” Opt. Lett. 29, 204–206 (2004). [CrossRef]   [PubMed]  

267. C. Dorrer and S. Ramachandran, “Self-referencing dispersion characterization of multimode structures using direct instantaneous frequency measurement,” IEEE Photon. Technol. Lett. 16, 1700–1702 (2004). [CrossRef]

268. P. Kockaert, M. Peeters, S. Coen, P. Emplit, M. Haelterman, and O. Deparis, “Simple amplitude and phase measuring technique for ultrahigh-repetition-rate lasers,” IEEE Photon. Technol. Lett. 12, 187–189 (2000). [CrossRef]  

269. P. Kockaert, M. Haelterman, P. Emplit, and C. Froehly, “Complete characterization of (ultra)short optical pulses using fast linear detectors,” IEEE J. Sel. Top. Quantum Electron. 10, 206–212 (2004). [CrossRef]  

270. P. Kockaert, J. Azaña, L. R. Chen, and S. LaRochelle, “Full characterization of uniform ultrahigh-speed trains of optical pulses using fiber Bragg gratings and linear detectors,” IEEE Photon. Technol. Lett. 16, 1540–1542 (2004). [CrossRef]  

271. V. Messager, F. Louradour, C. Froehly, and A. Barthélémy, “Coherent measurement of short laser pulses based on spectral interferometry resolved in time,” Opt. Lett. 28, 743–745 (2003). [CrossRef]   [PubMed]  

272. M. Lelek, F. Louradour, A. Barthélémy, C. Froehly, T. Mansourian, L. Kh. Mouradian, J.-P. Chambaret, G. Chériaux, and B. Mercier, “Two-dimensional spectral shearing interferometry resolved in time for ultrashort optical pulse characterization,” J. Opt. Soc. Am. B 25, 17–24 (2008). [CrossRef]  

273. M. Lelek, F. Louradour, A. Barthélémy, and C. Froehly, “Time resolved spectral interferometry for single shot femtosecond characterization,” Opt. Commun. 261, 124–129 (2006). [CrossRef]  

274. C. Iaconis and I. A. Walmsley, “Spectral phase interferometry for direct electric-field reconstruction of ultrashort optical pulses,” Opt. Lett. 23, 792–794 (1998). [CrossRef]  

275. C. Iaconis and I. A. Walmsley, “Self-referencing spectral interferometry for measuring ultrashort optical pulses,” IEEE J. Quantum Electron. 35, 501–509 (1999). [CrossRef]  

276. J. Wemans, G. Figueira, N. Lopes, and L. Cardoso, “Self-referencing spectral phase interferometry for direct electric-field reconstruction with chirped pulses,” Opt. Lett. 31, 2217–2219 (2006). [CrossRef]   [PubMed]  

277. G. Stibenz and G. Steinmeyer, “High dynamic range characterization of ultrabroadband white-light continuum pulses,” Opt. Express 12, 6319–6325 (2004). [CrossRef]   [PubMed]  

278. J. Bethge, C. Grebing, and G. Steinmeyer, “A fast Gabor wavelet transform for high-precision phase retrieval in spectral interferometry,” Opt. Express 15, 14313–14321 (2007). [CrossRef]   [PubMed]  

279. L. Gallmann, D. H. Sutter, N. Matuschek, G. Steinmeyer, and U. Keller, “Techniques for the characterization of sub-10-fs optical pulses: a comparison,” Appl. Phys. B 70, S67–S75 (2000). [CrossRef]  

280. G. Stibenz and G. Steinmeyer, “Optimizing spectral phase interferometry for direct electric-field reconstruction,” Rev. Sci. Instrum. 77, 073105-1–073105-9 (2006). [CrossRef]  

281. P. Londero, M. E. Anderson, C. Radzewicz, C. Iaconis, and I. A. Walmsley, “Measuring ultrafast pulses in the near-ultraviolet using spectral phase interferometry for direct electric field reconstruction,” J. Mod. Opt. 50, 179–184 (2003). [CrossRef]  

282. C. Dorrer, P. Londero, and I. A. Walmsley, “Homodyne detection in spectral phase interferometry for direct electric field reconstruction,” Opt. Lett. 26, 1510–1512 (2001). [CrossRef]  

283. C. Ventalon, J. M. Fraser, J.-P. Likforman, D. M. Villeneuve, P. B. Corkum, and M. Joffre, “Generation and complete characterization of intense mid-infrared ultrashort pulses,” J. Opt. Soc. Am. B 23, 332–340 (2006). [CrossRef]

284. A. Monmayrant, M. Joffre, T. Oksenhendler, R. Herzog, D. Kaplan, and P. Tournois, “Time-domain interferometry for direct electric-field reconstruction by use of an acousto-optic programmable filter and a two-photon detector,” Opt. Lett. 28, 278–280 (2003). [CrossRef]   [PubMed]  

285. J. Sung, B. Chen, and S. Lim, “Single-beam homodyne SPIDER for multiphoton microscopy,” Opt. Lett. 33, 1404–1406 (2008). [CrossRef]   [PubMed]  

286. L. Gallmann, G. Steinmeyer, D. H. Sutter, T. Rupp, C. Iaconis, I. A. Walmsley, and U. Keller, “Spatially resolved amplitude and phase characterization of femtosecond optical pulses,” Opt. Lett. 26, 96–98 (2001). [CrossRef]  

287. C. Dorrer, E. M. Kosik, and I. A. Walmsley, “Direct space–time characterization of the electric field of ultrashort light pulses,” Opt. Lett. 27, 548–550 (2002). [CrossRef]  

288. E. M. Kosik, A. Radunsky, I. A. Walmsley, and C. Dorrer, “Interferometric technique for measuring broadband ultrashort pulses at the sampling limit,” Opt. Lett. 30, 326–328 (2005). [CrossRef]   [PubMed]  

289. A. S. Wyatt, I. A. Walmsley, G. Stibenz, and G. Steinmeyer, “Sub-10 fs pulse characterization using spatially encoded arrangement for spectral phase interferometry for direct electric-field reconstruction,” Opt. Lett. 31, 1914–1916 (2006). [CrossRef]   [PubMed]

290. A. Radunsky, E. M. Kosik, I. A. Walmsley, P. Wasylczyk, W. Wasilewski, A. U’Ren, and M. E. Anderson, “Simplified SPIDER apparatus using a thick nonlinear crystal,” Opt. Lett. 31, 1–3 (2006). [CrossRef]  

291. A. Radunsky, I. A. Walmsley, S.-P. Gorza, and P. Wasylczyk, “Compact spectral shearing interferometer for ultrashort pulse characterization,” Opt. Lett. 32, 181–183 (2007). [CrossRef]

292. P. Baum, S. Lochbrunner, and E. Riedle, “Zero-additional-phase SPIDER: full characterization of visible and sub-20-fs ultraviolet pulses,” Opt. Lett. 29, 210–212 (2004). [CrossRef]   [PubMed]  

293. J. Möhring, T. Buckup, B. von Vacano, and M. Motzkus, “Parametrically amplified ultrashort pulses from a shaped photonic crystal fiber supercontinuum,” Opt. Lett. 33, 186–188 (2008). [CrossRef]   [PubMed]  

294. C. Dorrer and I. Kang, “Highly sensitive direct femtosecond pulse characterization using electro-optic spectral shearing interferometry,” Opt. Lett. 28, 477–479 (2003). [CrossRef]   [PubMed]  

295. I. Kang, C. Dorrer, and F. Quochi, “A novel implementation of spectral shearing interferometry for ultrashort pulse characterization,” Opt. Lett. 28, 2264–2266 (2003). [CrossRef]   [PubMed]  

296. J. Bromage, C. Dorrer, I. A. Begishev, N. G. Usechak, and J. D. Zuegel, “Highly sensitive, single-shot characterization for pulse widths from 0.4 to 85 ps using electro-optic shearing interferometry,” Opt. Lett. 31, 3523–3525 (2006). [CrossRef]   [PubMed]

297. Y. Ozeki, S. Takasaka, and M. Sakano, “Electrooptic spectral shearing interferometry using a Mach–Zehnder modulator with a bias voltage sweeper,” IEEE Photon. Technol. Lett. 18, 911–913 (2006). [CrossRef]  

298. M. Kwakernaak, R. Schreieck, and A. Neiger, “Spectral phase measurement of mode-locked diode laser pulses by beating sidebands generated by electrooptical mixing,” IEEE Photon. Technol. Lett. 12, 1677–1679 (2000). [CrossRef]  

299. I. Kang and C. Dorrer, “Method of optical pulse characterization using sinusoidal optical phase modulations,” Opt. Lett. 32, 2538–2540 (2007). [CrossRef]   [PubMed]  

300. H. Miao, D. E. Leaird, C. Langrock, M. M. Fejer, and A. M. Weiner, “Optical arbitrary waveform characterization via dual-quadrature spectral shearing interferometry,” Opt. Express 17, 3381–3389 (2009). [CrossRef]   [PubMed]  

301. M. Beck, C. Dorrer, and I. A. Walmsley, “Joint quantum state measurement using unbalanced array detection,” Phys. Rev. Lett. 87, 253601 (2001). [CrossRef]

302. M. Hentschel, R. Kienberger, C. Spielmann, G. A. Reider, N. Milosevic, T. Brabec, P. B. Corkum, U. Heinzmann, M. Drescher, and F. Krausz, “Attosecond metrology,” Nature 414, 509–513 (2001). [CrossRef]   [PubMed]  

303. M. Drescher, M. Hentschel, R. Kienberger, G. Tempea, C. Spielmann, G. A. Reider, P. B. Corkum, and F. Krausz, “X-ray pulses approaching the attosecond frontier,” Science 291, 1923–1927 (2001). [CrossRef]   [PubMed]  

304. P. M. Paul, E. S. Toma, P. Breger, G. Mullot, F. Augé, P. Balcou, H. G. Muller, and P. Agostini, “Observation of a train of attosecond pulses from high harmonic generation,” Science 292, 1689–1692 (2001). [CrossRef]   [PubMed]  

305. G. Sansone, E. Benedetti, F. Calegari, C. Vozzi, L. Avaldi, R. Flammini, L. Poletto, P. Villoresi, C. Altucci, R. Velotta, S. Stagira, S. De Silvestri, and M. Nisoli, “Isolated single-cycle attosecond pulses,” Science 314, 443–446 (2006). [CrossRef]   [PubMed]  

306. T. Brabec and F. Krausz, “Intense few-cycle laser fields: frontiers of nonlinear optics,” Rev. Mod. Phys. 72, 545–591 (2000). [CrossRef]  

307. I. P. Christov, M. M. Murnane, and H. C. Kapteyn, “High-harmonic generation of attosecond pulses in the “single-cycle” regime,” Phys. Rev. Lett. 78, 1251–1254 (1997). [CrossRef]  

308. P. Antoine, D. B. Milosevic, A. L’Huillier, M. B. Gaarde, P. Salières, and M. Lewenstein, “Generation of attosecond pulses in macroscopic media,” Phys. Rev. A 56, 4960–4969 (1997). [CrossRef]  

309. H. G. Muller, “Reconstruction of attosecond harmonic beating by interference of two-photon transitions,” Appl. Phys. B 74, 17–21 (2002). [CrossRef]  

310. E. M. Kosik, A. S. Wyatt, L. Corner, E. Cormier, and I. A. Walmsley, “Complete characterization of attosecond pulses,” J. Mod. Opt. 52, 361–378 (2005). [CrossRef]  

311. F. Quéré, Y. Mairesse, and J. Itatani, “Temporal characterization of attosecond XUV fields,” J. Mod. Opt. 52, 339–360 (2005). [CrossRef]  

312. J. Mauritsson, R. López-Martens, A. L’Huillier, and K. J. Schafer, “Ponderomotive shearing for spectral interferometry of extreme-ultraviolet pulses,” Opt. Lett. 28, 2393–2395 (2003). [CrossRef]   [PubMed]  

313. F. Quéré, J. Itatani, G. L. Yudin, and P. B. Corkum, “Attosecond spectral shearing interferometry,” Phys. Rev. Lett. 90, 073902 (2003). [CrossRef]   [PubMed]  

314. Y. Mairesse and F. Quéré, “Frequency-resolved optical gating for complete reconstruction of attosecond bursts,” Phys. Rev. A 71, 011401 (2005). [CrossRef]  

315. E. Goulielmakis, M. Schultze, M. Hofstetter, V. S. Yakovlev, J. Gagnon, M. Uiberacker, A. L. Aquila, E. M. Gullikson, D. T. Attwood, R. Kienberger, F. Krausz, and U. Kleineberg, “Single-cycle nonlinear optics,” Science 320, 1614–1617 (2008). [CrossRef]   [PubMed]

316. J. Itatani, F. Quéré, G. L. Yudin, M. Yu Ivanov, F. Krausz, and P. B. Corkum, “Attosecond streak camera,” Phys. Rev. Lett. 88, 173903 (2002). [CrossRef]   [PubMed]  

317. E. Cormier, L. Corner, E. M. Kosik, A. S. Wyatt, and I. A. Walmsley, “Spectral phase interferometry for complete reconstruction of attosecond pulses,” Laser Phys. 15, 909–915 (2005).

318. Y. Mairesse, O. Gobert, P. Breger, H. Merdji, P. Meynadier, P. Monchicourt, M. Perdrix, P. Salières, and B. Carré, “High harmonic XUV spectral phase interferometry for direct electric-field reconstruction,” Phys. Rev. Lett. 94, 173903 (2005). [CrossRef]   [PubMed]  

319. E. Cormier, I. A. Walmsley, E. M. Kosik, L. Corner, and L. F. DiMauro, “Self-referencing, spectrally or spatially encoded spectral interferometry for the complete characterization of attosecond electromagnetic pulses,” Phys. Rev. Lett. 94, 033905 (2005). [CrossRef]  

320. S. T. Cundiff, W. H. Knox, E. P. Ippen, and H. A. Haus, “Frequency-dependent mode size in broadband Kerr-lens mode-locking,” Opt. Lett. 21, 662–664 (1996). [CrossRef]   [PubMed]  

321. C. Fiorini, C. Sauteret, C. Rouyer, N. Blanchot, S. Seznec, and A. Migus, “Temporal aberrations due to misalignments of a stretcher-compressor system and compensation,” IEEE J. Quantum Electron. 30, 1662–1670 (1994). [CrossRef]  

322. M. M. Wefers and K. A. Nelson, “Space–time profiles of shaped ultrafast optical waveforms,” IEEE J. Quantum Electron. 32, 161–172 (1996). [CrossRef]  

323. H. Kumagai, S.-H. Cho, K. Ishikawa, K. Midorikawa, M. Fujimoto, S. Aoshima, and Y. Tsuchiya, “Observation of the complex propagation of a femtosecond laser pulse in a dispersive transparent bulk material,” J. Opt. Soc. Am. B 20, 597–602 (2003). [CrossRef]  

324. A. Matijosius, J. Trull, P. Di Trapani, A. Piskarskas, R. Dubietis, A. Varanavicius, and A. Piskarskas, “Nonlinear space–time dynamics of ultrashort wave packets in water,” Opt. Lett. 29, 1123–1125 (2004). [CrossRef]   [PubMed]  

325. M. Kempe and W. Rudolph, “Femtosecond pulses in the focal region of lenses,” Phys. Rev. A 48, 4721–4729 (1993). [CrossRef]   [PubMed]  

326. J. Néauport, N. Blanchot, C. Rouyer, and C. Sauteret, “Chromatism compensation of the PETAL multipetawatt high-energy laser,” Appl. Opt. 46, 1568–1574 (2007). [CrossRef]   [PubMed]  

327. J. Jasapara and W. Rudolph, “Characterization of sub-10-fs pulse focusing with high-numerical aperture microscope objective,” Opt. Lett. 24, 777–779 (1999). [CrossRef]  

328. M. M. Wefers, K. A. Nelson, and A. M. Weiner, “Multidimensional shaping of ultrafast optical waveforms,” Opt. Lett. 21, 746–748 (1996). [CrossRef]   [PubMed]  

329. J.-C. Chanteloup, E. Salmon, C. Sauteret, A. Migus, P. Zeitoun, A. Klisnick, A. Carillon, S. Hubert, D. Ros, P. Nickles, and M. Kalachnikov, “Pulse-front control of 15-TW pulses with a tilted compressor, and application to the subpicosecond traveling-wave pumping of a soft-x-ray laser,” J. Opt. Soc. Am. B 17, 151–157 (2000). [CrossRef]  

330. D. Oron and Y. Silberberg, “Spatiotemporal coherent control using shaped, temporally focused pulses,” Opt. Express 13, 9903–9908 (2005). [CrossRef]   [PubMed]  

331. S. Szatmári and G. Kühnle, “Pulse front and pulse duration distortion in refractive optics, and its compensation,” Opt. Commun. 69, 60–65 (1988). [CrossRef]  

332. Z. Sacks, G. Mourou, and R. Danielius, “Adjusting pulse-front tilt and pulse duration by use of a single-shot autocorrelator,” Opt. Lett. 26, 462–464 (2001). [CrossRef]  

333. Z. Bor, Z. Gogolák, and G. Szabó, “Femtosecond-resolution pulse-front distortion measurement by time-of-flight interferometry,” Opt. Lett. 14, 862–864 (1989). [CrossRef]   [PubMed]  

334. Zs. Benkö, Z. Gogolák, Zs. Bor, and G. Szabó, “Pulse front distortion measurements in prisms measured by time-of-flight interferometry,” Exp. Tech. Phys. (Berlin) 39, 447–449 (1991).

335. C. Radzewicz, M. J. la Grone, and J. S. Krasinski, “Interferometric measurement of femtosecond pulse distortion by lenses,” Opt. Commun. 126, 185–190 (1996). [CrossRef]  

336. D. Meshulach, D. Yelin, and Y. Silberberg, “Real-time spatial-spectral interference measurements of ultrashort optical pulses,” J. Opt. Soc. Am. B 14, 2095–2098 (1997). [CrossRef]  

337. W. Amir, T. A. Planchon, C. G. Durfee, J. A. Squier, P. Gabolde, R. Trebino, and M. Müller, “Simultaneous visualization of spatial and chromatic aberrations by two-dimensional Fourier transform spectral interferometry,” Opt. Lett. 31, 2927–2929 (2006). [CrossRef]   [PubMed]  

338. P. Bowlan, P. Gabolde, and R. Trebino, “Directly measuring the spatio-temporal electric field of focusing ultrashort pulses,” Opt. Express 15, 10219–10230 (2007). [CrossRef]   [PubMed]  

339. P. Bowlan, U. Fuchs, R. Trebino, and D. Uwe, “Measuring the spatiotemporal electric field of tightly focused ultrashort pulses with sub-micron spatial resolution,” Opt. Express 16, 13663–13675 (2008). [CrossRef]   [PubMed]  

340. P. Gabolde and R. Trebino, “Self-referenced measurement of the complete electric field of ultrashort pulses,” Opt. Express 12, 4423–4429 (2004). [CrossRef]   [PubMed]  

341. P. Gabolde and R. Trebino, “Single-shot measurement of the full spatio-temporal field of ultrashort pulses with multi-spectral digital holography,” Opt. Express 14, 11460–11467 (2006). [CrossRef]   [PubMed]  

342. C. Dorrer and I. A. Walmsley, “Simple linear technique for the measurement of space–time coupling in ultrashort optical pulses,” Opt. Lett. 27, 1947–1949 (2002). [CrossRef]  

343. C. Rouyer, N. Blanchot, J. Néauport, and C. Sauteret, “Delay interferometric single shot measurement of a petawatt-class laser longitudinal chromatism corrector,” Opt. Express 15, 2019–2032 (2007). [CrossRef]   [PubMed]  

344. D. L. Woolard, W. R. Loerop, and M. S. Shur, Terahertz Sensing Technology. Volume 1: Electronic Devices and Advanced Systems Technology, Vol. 30 of Selected Topics in Electronics and Systems (World Scientific, 2003).

345. D. L. Woolard, W. R. Loerop, and M. S. Shur, Terahertz Sensing Technology. Volume 2: Emerging Scientific Applications and Novel Device Concepts, Vol. 32 of Selected Topics in Electronics and Systems (World Scientific, 2004).

346. G. Mourou, C. V. Stancampiano, and D. Blumenthal, “Picosecond microwave pulse generation,” Appl. Phys. Lett. 38, 470–472 (1981). [CrossRef]  

347. D. H. Auston, K. P. Cheung, and P. R. Smith, “Picosecond photoconducting Hertzian dipoles,” Appl. Phys. Lett. 45, 284–286 (1984). [CrossRef]  

348. J. Valdmanis and G. Mourou, “Subpicosecond electrooptic sampling: principles and applications,” IEEE J. Quantum Electron. 22, 69–78 (1986). [CrossRef]

349. Q. Wu and X. C. Zhang, “Free-space electro-optic sampling of terahertz beams,” Appl. Phys. Lett. 67, 3523–3525 (1996). [CrossRef]  

350. A. Nahata, D. H. Auston, T. F. Heinz, and C. Wu, “Coherent detection of freely propagating terahertz radiation by electro-optic sampling,” Appl. Phys. Lett. 68, 150–152 (1996). [CrossRef]  

351. S.-G. Park, M. R. Melloch, and A. M. Weiner, “Analysis of terahertz waveforms measured by photoconductive and electrooptic sampling,” IEEE J. Quantum Electron. 35, 810–819 (1999). [CrossRef]  

352. Q. Wu and X. C. Zhang, “Ultrafast electro-optic field sensors,” Appl. Phys. Lett. 68, 1604–1606 (1996). [CrossRef]  

353. Y. Cai, I. Brener, J. Lopata, J. Wynn, L. Pfeiffer, J. B. Stark, Q. Wu, X. C. Zhang, and J. F. Federici, “Coherent terahertz radiation detection: direct comparison between free-space electro-optic sampling and antenna detection,” Appl. Phys. Lett. 73, 444–446 (1998). [CrossRef]  

354. J. Bromage, I. A. Walmsley, and C. R. Stroud Jr., “Dithered-edge sampling of THz pulses,” Appl. Phys. Lett. 75, 2181–2183 (1999). [CrossRef]  

355. J. Bromage, I. A. Walmsley, and C. R. Stroud Jr., “Direct measurement of a photoconductive receiver’s temporal response by dithered-edge sampling,” Opt. Lett. 24, 1771–1773 (1999). [CrossRef]  

356. E. Castro-Camus, J. Lloyd-Hughes, L. Fu, H. H. Tan, C. Jagadish, and M. B. Johnston, “An ion-implanted InP receiver for polarization resolved terahertz spectroscopy,” Opt. Express 15, 7047–7057 (2007). [CrossRef]   [PubMed]

357. Z. Jiang, F. G. Sun, and X. C. Zhang, “Terahertz pulse measurement with an optical streak camera,” Opt. Lett. 24, 1245–1247 (1999). [CrossRef]  

358. Z. Jiang and X. C. Zhang, “Measurement of spatio-temporal terahertz field distribution by using chirped pulse technology,” IEEE J. Quantum Electron. 36, 1214–1222 (2000). [CrossRef]  

359. J. Shan, A. S. Weling, E. Knoesel, L. Bartels, M. Bonn, A. Nahata, G. A. Reider, and T. F. Heinz, “Single-shot measurement of terahertz electromagnetic pulses by use of electro-optic sampling,” Opt. Lett. 25, 426–428 (2000). [CrossRef]  

360. S. P. Jamison, J. Shen, A. M. MacLeod, W. A. Gillespie, and D. A. Jaroszynski, “High-temporal-resolution, single-shot characterization of terahertz pulses,” Opt. Lett. 28, 1710–1712 (2003). [CrossRef]   [PubMed]  

361. Y. Kawada, T. Yasuda, H. Takahashi, and S. Aoshima, “Real-time measurement of temporal waveforms of a terahertz pulse using a probe pulse with a tilted pulse front,” Opt. Lett. 33, 180–182 (2008). [CrossRef]   [PubMed]  

362. K. Y. Kim, B. Yellampalle, A. J. Taylor, G. Rodriguez, and J. H. Glownia, “Single-shot terahertz pulse characterization via two-dimensional electro-optic imaging with dual echelons,” Opt. Lett. 32, 1968–1970 (2007). [CrossRef]   [PubMed]  

363. S. T. Cundiff, J. Ye, and J. L. Hall, “Optical frequency synthesis based on mode-locked lasers,” Rev. Sci. Instrum. 72, 3749–3771 (2001). [CrossRef]  

364. S. T. Cundiff and J. Ye, “Phase stabilization of mode-locked lasers,” J. Mod. Opt. 52, 201–219 (2005). [CrossRef]  

365. J. Ye and S. T. Cundiff, Femtosecond Optical Frequency Comb: Principle, Operation, and Applications (Springer, 2004).

366. L. Xu, C. Spielmann, A. Poppe, T. Brabec, F. Krausz, and T. W. Hänsch, “Route to phase control of ultrashort light pulses,” Opt. Lett. 21, 2008–2010 (1996). [CrossRef]   [PubMed]  

367. K. Osvay, M. Görbe, C. Grebing, and G. Steinmeyer, “Bandwidth-independent linear method for detection of the carrier-envelope offset phase,” Opt. Lett. 32, 3095–3097 (2007). [CrossRef]   [PubMed]  

368. P. Dietrich, F. Krausz, and P. B. Corkum, “Determining the absolute carrier phase of a few-cycle laser pulse,” Opt. Lett. 25, 16–18 (2000). [CrossRef]  

369. G. G. Paulus, F. Grasbon, H. Walther, P. Villoresi, M. Nisoli, S. Stagira, E. Priori, and S. De Silvestri, “Absolute-phase phenomena in photoionization with few-cycle laser pulses,” Nature 414, 182–184 (2001). [CrossRef]   [PubMed]  

Figures (46)

Fig. 1 Spatially encoded arrangement (SEA-) SPIDER measurements of a few-cycle Ti:sapphire oscillator. (a) SEA-SPIDER interferogram. (b) Measured spectral intensity (black) and reconstructed spectral phase (red). (c) Fourier-transform-limited (black) and reconstructed (red) temporal intensity of the pulse. The pulse durations (full width at half-maximum) are 6.6 and 7.6 fs, respectively.
Fig. 2 Characterization of the output pulse from a CPA system. (a) Spectrum of the pulse (solid curve), the spectral phase when the two-grating compressor is mismatched with the stretcher, inducing a large third-order spectral phase (long-dashed curve), and the spectral phase after optimization (short-dashed curve). (b) and (c), respectively, show the temporal intensity of the pulse with third-order spectral phase and after optimization.
Fig. 3 Output of a dual-stage plasma filament compressor under different experimental conditions. (a) and (d) show the temporal intensity, (b) and (e) are the corresponding spectral representations of the electric field, and (c) and (f) are spectrograms in the time–frequency space (courtesy of G. Steinmeyer).
Fig. 4 Intensity of a train of optical pulses generated by an optical pulse shaper. The intensity was measured by nonlinear cross-correlation with a short unshaped optical pulse (courtesy of A. M. Weiner).
Fig. 5 Characterization of a pulse train from a Mach–Zehnder modulator driven by a 20 GHz RF drive. (a) Temporal intensity and phase of a 33% return-to-zero train of pulses. (b) Temporal intensity and phase of a 67% carrier-suppressed return-to-zero train of pulses, with the expected π phase shift between successive pulses. (c) Pulse train measured when the bias of the modulator is set at an intermediate value between the values that lead to the pulse trains represented in (a) and (b). The upper plots in (a) and (b) are the corresponding experimental data from which the electric field is reconstructed.
Fig. 6 Representations of a pulse in the (a) spectral and (b) temporal domains. The temporal phase has been removed for clarity.
Fig. 7 Wigner functions of (a) a Fourier-transform limited Gaussian pulse, (b) a pulse with Gaussian spectrum and quadratic spectral phase, (c) a pair of identical Fourier-transform-limited Gaussian pulses, and (d) a pulse with Gaussian spectrum and third-order spectral phase. In each case, the temporal and spectral marginals are plotted.
Fig. 8 (a) Principle of an intensity autocorrelator where only the mixing signal between the two relatively delayed replicas of the input pulse is measured. (d) Principle of an interferometric autocorrelator where the total upconverted signal from two collinear replicas of the input pulse is measured. (b) and (e) are, respectively, the intensity and the interferometric autocorrelations of a pulse with a Gaussian spectrum and a flat spectral phase, while (c) and (f) are, respectively, the intensity and the interferometric autocorrelations of a pulse with a Gaussian spectrum and a quadratic spectral phase.
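The quantity underlying panels (b) and (c) is the background-free intensity autocorrelation AC(τ) = ∫ I(t) I(t − τ) dt. As a quick numerical illustration (not taken from the article, and assuming a 30 fs Gaussian pulse), the following Python sketch computes it and recovers the familiar factor-of-√2 broadening of a Gaussian autocorrelation:

# Minimal sketch: numerical intensity autocorrelation AC(tau) = integral I(t) I(t - tau) dt
# for an assumed 30 fs Gaussian pulse (illustration only, not the article's code).
import numpy as np

dt = 0.5e-15                                    # time step: 0.5 fs
t = np.arange(-2000, 2000) * dt                 # 2 ps window
fwhm = 30e-15                                   # assumed intensity FWHM
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
I = np.exp(-t**2 / (2 * sigma**2))              # temporal intensity

ac = np.correlate(I, I, mode="same") * dt       # intensity autocorrelation

half = ac >= ac.max() / 2                       # FWHM of the autocorrelation
ac_fwhm = t[half][-1] - t[half][0]
print(ac_fwhm / fwhm)                           # close to sqrt(2), about 1.41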
Fig. 9 General interferometer for optical pulse characterization. The test pulse encounters a sequence of linear filters, after (possibly) being split into two replicas at a beam splitter. The combined outputs of the filters are incident on a square-law photodetector, usually with a response much slower than the duration of the filter response functions, and certainly much longer than that of the input pulse.
Fig. 10 Linear filter description of type I to type VIII devices. Spectrographic devices, based on two serial amplitude filters in conjugate variables, correspond to (a) type I and (b) type II. Tomographic devices, based on a quadratic phase modulation followed by an amplitude filter in the conjugate variable, correspond to (c) type III and (d) type IV. Interferometric techniques related to Young’s double-slit experiment, with two amplitude filters in parallel followed by one amplitude filter in the conjugate variable, correspond to (e) type V and (f) type VI. Interferometric techniques related to shearing interferometry, with two linear phase modulations in conjugate domains in parallel, correspond to (g) type VII and (h) type VIII.
Fig. 11 Representations of (a) the effect of dispersive propagation and (c) propagation in a quadratic temporal phase modulator. (b) Dispersive propagation leads to a shear of the chronocyclic representation along the time axis. (d) The quadratic temporal phase modulator leads to a shear of the chronocyclic representation along the frequency axis.
Fig. 12 Approaches for the measurement of (a) a spectrogram and (b) a sonogram. The spectrogram is measured by first gating the pulse with a time-nonstationary filter and measuring the optical spectrum as a function of the optical frequency and relative delay between the pulse and the gate. The sonogram is measured by first filtering the pulse with a time-stationary filter and measuring the temporal intensity as a function of time and the position of the spectral filter.
Fig. 13 Spectrogram of a pulse with (a) second-order dispersion, i.e., a linear group delay, and (b) third-order dispersion, i.e., a quadratic group delay. The group-delay function is overlaid on the spectrogram in each case.
Fig. 14 Block diagram of the principal component generalized projection algorithm.
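The principal-component step itself is not reproduced here; the Python sketch below only illustrates the alternating-projection idea for SHG FROG, using the simplest update (integration of the signal field over delay) in place of the principal-component generalized projection, on an assumed synthetic chirped Gaussian pulse:

# Schematic sketch of iterative SHG-FROG retrieval (illustration only): alternate
# between the measured-trace constraint and the form E_sig(t, tau) = E(t) E(t - tau).
# The update used here is the simple delay integration, not the principal-component step.
import numpy as np

def shg_frog_trace(E):
    """|FT_t[E(t) E(t - tau)]|^2 on an N x N (delay, frequency) grid."""
    N = E.size
    gate = np.array([np.roll(E, s) for s in range(N)])    # row s holds E(t - tau_s)
    return np.abs(np.fft.fft(E[np.newaxis, :] * gate, axis=1))**2

def retrieve(trace, n_iter=200, seed=0):
    N = trace.shape[0]
    rng = np.random.default_rng(seed)
    E = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # random initial guess
    for _ in range(n_iter):
        gate = np.array([np.roll(E, s) for s in range(N)])
        S = np.fft.fft(E[np.newaxis, :] * gate, axis=1)
        S = np.sqrt(trace) * np.exp(1j * np.angle(S))     # impose the measured magnitude
        sig = np.fft.ifft(S, axis=1)
        E = sig.sum(axis=0)                               # simplest form-constraint update
        E /= np.abs(E).max()
    return E

N = 64
t = np.arange(N) - N / 2
E_true = np.exp(-t**2 / 50) * np.exp(0.03j * t**2)        # synthetic chirped Gaussian
E_rec = retrieve(shg_frog_trace(E_true))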
Fig. 15 (a) Measurement of a sonogram by using nonlinear optics and (b) measured sonogram of a chirped pulse (courtesy D. T. Reid). The pulse under test is split into two so that one replica is sent to the spectral filter and cross correlated with the input pulse. This setup and variations on this setup can be used for either chirp retrieval or phase retrieval. The sonogram plotted in (b) shows the familiar time-to-frequency correlation indicative of the chirp of the input pulse.
Fig. 16 (a) Implementation of sonograms with a streak camera; (b) self-referencing implementation of a sonogram in the telecommunication environment by RF phase detection.
Fig. 17 Single-shot measurements of a sonogram using (a) a thick nonlinear crystal or (b) a two-photon detector. In (a), encoding of time and frequency on the two spatial coordinates is performed with noncollinear upconversion in a thick nonlinear crystal. The pulse under test first travels through a cylindrical lens and a spherical lens to shape the beam, then into a Wollaston prism. This assembly generates two replicas of the pulse that are tightly focused in the vertical direction and spatially extended and noncollinear in the horizontal direction. After interaction in a thick nonlinear crystal, the vertical direction and horizontal position that correspond to the optical frequency of the upconverted field and the relative delay between the two interacting waves are mapped into vertical and horizontal positions with a combination of spherical and cylindrical lenses. In (b), the encoding of frequency on one spatial coordinate is performed with a diffraction grating and a cylindrical lens acting on one replica of the input pulse. The encoding of the relative delay between the different spectral slices of the pulse and the input pulse acting as a temporal gate is obtained thanks to the noncollinear interaction geometry on a two-photon array.
Fig. 18 Top, implementation of SHG-FROG with a nonlinear crystal. Two replicas of the pulse at ω_0 are mixed, and the upconverted signal at 2ω_0 is spectrally resolved. Bottom, example of an SHG-FROG trace of a Gaussian pulse with (left) second- and (right) third-order dispersion.
Fig. 19 SHG-FROG measurements of the evolution of an arbitrary input pulse into a self-similar asymptotic similariton in an optical amplifier. (a) Experimental and (b) theoretical temporal pulse intensities versus propagation distance. (c) FROG trace of the pulse after exiting the fiber amplifier. (d) Temporal amplitude and phase of the output pulse (courtesy J. Dudley).
Fig. 20 Top, implementation of PG-FROG with a third-order nonlinearity. A high-energy replica of the pulse at ω_0 rotates the polarization state of a low-energy replica of the pulse set between crossed polarizers, and the low-energy replica is spectrally resolved. Bottom, example of a PG-FROG trace of a Gaussian pulse with (left) second- and (right) third-order dispersion.
Fig. 21 Implementation of (a) SD-FROG, (b) THG-FROG, (c) and (d) TG-FROG with a third-order nonlinearity.
Fig. 22 Implementation of GRENOUILLE. The spatially extended pulse under test travels from left to right and propagates into a cylindrical lens and Fresnel biprism. After interaction in the nonlinear crystal, the vertical direction and horizontal position that correspond to the optical frequency of the upconverted field and the relative delay between the two interacting waves generated by the biprism are mapped into vertical and horizontal positions with a combination of two cylindrical lenses.
Fig. 23 (a) Measurement of a spectrogram as a function of the optical frequency and the relative delay between the modulation and the source under test. (b) and (c) are the spectral representations of a pulse from a mode-locked diode before and after pulse compression by propagation in a highly nonlinear fiber and dispersive fiber. The insets show the corresponding spectrograms.
Fig. 24 Equivalence between space and time. In (a), a spatial imaging system is implemented using the combination of diffraction and a spatial lens, i.e., quadratic phase modulations in the space x and wave vector k_x domains. In (b), a temporal imaging system is implemented by combining dispersive propagation and a time lens, i.e., quadratic phase modulations in the time t and frequency ω domains.
Fig. 25 Principle of tomographic reconstruction. Gray shading represents a lower attenuation coefficient, and black a higher one. A two-dimensional attenuation function a(x, y) is projected onto various axes. The set of projections P_θ is then used to reconstruct the attenuation.
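As a minimal illustration of the projection step (assuming NumPy and SciPy are available; this is not the article's code), the projections P_θ can be generated by rotating the attenuation map and summing along one axis:

# Minimal sketch: Radon-type projections P_theta(u) of a 2-D attenuation map,
# obtained by rotating the map and summing over one axis (illustration only).
import numpy as np
from scipy.ndimage import rotate

def projections(a, angles_deg):
    """One projection per angle: rotate a(x, y), then sum over rows."""
    return np.array([rotate(a, angle, reshape=False, order=1).sum(axis=0)
                     for angle in angles_deg])

x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x)
a = np.exp(-((X - 0.2)**2 + Y**2) / 0.05)       # assumed test object
P = projections(a, [0, 45, 90, 135])            # shape (4, 128)

The inverse step corresponds to the filtered-backprojection formula that appears in the equation list below.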
Fig. 26 Principle of the time-to-frequency converter. The frequency marginal of the pulse after quadratic temporal and spectral modulation is the time marginal of the input pulse. The temporal intensity of the test pulse can therefore be determined by a measurement of the optical spectrum of the modulated pulse.
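A numerical sketch of this principle (with assumed, purely illustrative parameters) is given below: a quadratic spectral phase φω²/2 followed by a quadratic temporal phase ψt²/2 with ψ = 1/φ maps the input temporal intensity onto the output spectrum through t ≈ φω:

# Sketch of a time-to-frequency converter (assumed parameters, illustration only):
# disperse, apply a time lens with psi = 1/phi, and read the temporal intensity
# off the output spectrum, I_out(w) proportional to I_in(t = phi * w).
import numpy as np

N, dt = 2**14, 1e-15                            # 16384 samples, 1 fs step
t = (np.arange(N) - N // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(N, dt)           # angular-frequency grid (fft order)

E0 = np.exp(-(t + 150e-15)**2 / (2 * (40e-15)**2)) \
   + 0.5 * np.exp(-(t - 150e-15)**2 / (2 * (25e-15)**2))   # structured input field

phi = 2e-26                                     # assumed dispersion (s^2)
psi = 1.0 / phi                                 # time-lens chirp rate (s^-2)

E1 = np.fft.ifft(np.fft.fft(E0) * np.exp(1j * phi * w**2 / 2))   # dispersion
E2 = E1 * np.exp(1j * psi * t**2 / 2)                            # time lens
spectrum = np.abs(np.fft.fftshift(np.fft.fft(E2)))**2            # output spectrum

w_shift = np.fft.fftshift(w)
I_mapped = np.interp(t, phi * w_shift, spectrum)   # remap frequency to time via t = phi*w
print(np.corrcoef(I_mapped, np.abs(E0)**2)[0, 1])  # close to 1 for these parameters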
Fig. 27 Example of simplified chronocyclic tomography. The Wigner function of a pulse with Gaussian spectrum and cubic spectral phase after small positive and negative quadratic temporal phase modulations is shown in (a) and (b). The shears imposed by these modulations are displayed in the insets. In (c), the difference between the two obtained spectral marginals (blue curve) is plotted with the initial spectrum (black curve).
Fig. 28 Imaging with magnification in the chronocyclic space. Imaging with magnification can be obtained by the successive action of quadratic spectral phase modulation, quadratic temporal phase modulation, and quadratic spectral phase modulation. The shears imposed by these modulations are displayed in the insets. The time marginal of the Wigner function is plotted for the input and output waveforms and shows a magnification equal to 2.
Fig. 29 Example of implementation of simplified chronocyclic tomography with a phase modulator. In (a), the quadratic temporal phase modulation is obtained via the electro-optic effect in a LiNbO3 phase modulator driven by a sine wave. Synchronization of the pulse under test with the extrema of the phase modulation with an RF phase shifter provides quadratic temporal modulation. These modulations are alternated at a frequency f so that lock-in detection of the signal measured by a Fabry–Perot etalon followed by a photodetector leads to the average spectrum of the modulated pulses (i.e., the spectrum of the input pulse) and the difference of the spectra of the modulated pulses (i.e., the finite difference from which the spectral phase is reconstructed). (b) Spectrum and phase measured on a train of pulses after nonlinear propagation at various powers in a nonlinear fiber and dispersion compensation. (c) Calculated temporal intensities, which show significant pulse compression.
Fig. 30 Experimental implementation of the time-to-frequency converter with XPM. The waveform under test propagates into a dispersive delay line and is then phase modulated by a pump pulse via XPM in a fiber. The temporal intensity of the waveform under test is a scaled version of the spectrum measured by the optical spectrum analyzer (OSA). The pump pulse is assumed to have a parabolic temporal intensity over the temporal support of the dispersed waveform under test.
Fig. 31 (a) Principle of imaging with temporal magnification using sum-frequency generation, and (b) implementation of a time-to-frequency converter using four-wave mixing in a silicon waveguide (courtesy of A. Gaeta). In (a), the waveform under test propagates through a dispersive delay line, then undergoes a wave mixing process with a chirped pump pulse, then propagates through an additional dispersive delay line. The frequency chirp in the highly chirped pump pulse leads to a quadratic temporal phase in the pulse, therefore allowing the quadratic temporal phase modulation through the wave mixing process. In (b), the waveform under test chirped by a fiber with dispersion D and the pump pulse chirped by a fiber with dispersion 2 D interact via four-wave mixing in a silicon-on-insulator waveguide. The spectrum of the generated idler is a scaled representation of the temporal intensity of the input signal. A complete time-to-frequency conversion of the electric field can be obtained if the generated idler signal further propagates in a fiber with dispersion D.
Fig. 32 Examples of waveforms measured by time-to-frequency conversion in a silicon waveguide compared with waveforms measured by nonlinear cross-correlation (courtesy of A. Gaeta). The left-hand column corresponds to the time-to-frequency converter, and the right-hand column corresponds to nonlinear cross-correlation with a short optical pulse. (a) Interference of two chirped optical waveforms, with the inset displaying the measured waveforms in a 10 ps window. In (b), the time-to-frequency converter is used in single-shot operation to measure the intensity of a pair of delayed chirped pulses.
Fig. 33 Test-plus-reference interferometers for (a) spectral and (b) temporal interferometry. The unknown (test) pulse E(t) is combined on a beam splitter with a known (reference) pulse E_R(t). The resulting interference pattern is measured by using (a) a spectrometer and a slow detector or (b) a fast detector, possibly synthesized by using a rapid shutter followed by a slow detector.
Fig. 34 Self-referencing interferometers for (a) spectral and (b) temporal shearing interferometry. The unknown pulse is divided into two replicas, each of which follows a different path through the interferometer. One replica is shifted in frequency, the other in time. The two modified replicas are recombined, and the resulting interference pattern is measured by using (a) a spectrometer and a slow detector or (b) a fast detector.
Fig. 35 (a) Principle of operation of SI. The test and the reference pulse are mixed on a beam splitter, and the resulting joint spectrum is measured. (b) Examples of spectral interferograms, corresponding to a test pulse equal to the reference pulse (left) and to a test pulse with a quadratic spectral phase (right).
Fig. 36 Diagram of the inversion algorithm for Fourier-transform SI. After an initial Fourier transform to the time domain, an ac sideband is digitally filtered to isolate the interference term. An inverse Fourier transform is made, and the amplitude (solid curve) and phase (dashed line) of the interferometric component are extracted.
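A minimal Python sketch of this inversion (assumed spectral grid, delay, and filter width; synthetic data rather than the article's implementation) is:

# Fourier-transform SI inversion: FFT the interferogram, keep the sideband near
# pseudo-time +tau, inverse FFT, take the phase, and remove the linear delay term.
import numpy as np

def ftsi_phase(D, omega, tau, width):
    """Phase difference phi - phi_R (up to a constant) from a spectral interferogram."""
    d = np.fft.ifft(D)                                        # pseudo-time domain
    pt = 2 * np.pi * np.fft.fftfreq(omega.size, omega[1] - omega[0])
    ac = np.fft.fft(d * np.exp(-(pt - tau)**2 / (2 * width**2)))   # filtered ac sideband
    return np.unwrap(np.angle(ac)) - omega * tau

omega = np.linspace(-4e13, 4e13, 2048)          # offset from the carrier (rad/s)
I0 = np.exp(-omega**2 / (2 * (1e13)**2))        # common spectral intensity
phi = 2e-27 * omega**2                          # assumed test-minus-reference phase
tau = 1.5e-12                                   # test-reference delay (s)
D = 2 * I0 * (1 + np.cos(phi + omega * tau))    # synthetic interferogram
phi_rec = ftsi_phase(D, omega, tau, width=4e-13)   # recovers phi up to a constant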
Fig. 37 Schematic of a SPIDER device. The input pulse is used to generate a chirped pulse by propagation in a dispersive delay line. Two temporally delayed replicas of the pulse under test are nonlinearly mixed with the chirped pulse, and the resulting interferogram is spectrally resolved by an optical spectrum analyzer.
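Once the interferogram has been Fourier-filtered as in Fig. 36, the SPIDER phase reconstruction reduces to concatenation at multiples of the spectral shear, φ[(n + 1)Ω] = φ(nΩ) + θ(nΩ). A minimal sketch of that step (hypothetical variable names, synthetic data) is:

# SPIDER concatenation: rebuild phi at multiples of the shear from the extracted
# phase difference theta(w) = phi(w + shear) - phi(w) (illustration only).
import numpy as np

def concatenate_phase(theta, omega, shear):
    """phi[(n+1)*shear] = phi[n*shear] + theta[n*shear], with phi(0) = 0."""
    step = int(round(shear / (omega[1] - omega[0])))   # shear expressed in grid points
    phi = np.zeros_like(theta)
    for n in range(0, theta.size - step, step):
        phi[n + step] = phi[n] + theta[n]
    return phi[::step], omega[::step]                  # phase sampled at the shear spacing

omega = np.linspace(-4e13, 4e13, 801)
phi_true = 3e-27 * omega**2                            # assumed quadratic phase
shear = omega[80] - omega[0]                           # shear = 80 grid points here
theta = np.interp(omega + shear, omega, phi_true) - phi_true
phi_rec, omega_s = concatenate_phase(theta, omega, shear)   # matches phi_true up to a constant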
Fig. 38 (a) Spatially resolved interferograms for space–time SPIDER, with (upper) spectral shear and (lower) spatial shear. The fringes are due to a delay between interfering pulses in the upper plot and to a tilt between interfering pulses in the lower plot. (b) Reconstructed spatiospectral phase map for a pulse dispersed by a prism, extracted from the interferograms in (a). (c) Space–time intensity of the pulse with this spatiospectral phase, showing pulse-front tilt.
Fig. 39 SEA-SPIDER apparatus showing the use of a single copy of the test pulse and two chirped ancillae to encode the spectral phase in a spectrally resolved spatial interferogram. The tilted upconverted replicas generated in the crystal are reimaged on the detector through an imaging spectrometer. The shape of the fringes indicates the spectral phase derivative, providing an intuitive diagnostic of the pulse structure.
Fig. 40 SEA-SPIDER measurements of the output of a hollow-core-fiber compressor system. (a) The spatial fringe maxima in the SEA-SPIDER interferogram map the gradient of the spectral phase across that section of the beam. Note the spectral cut at 950 nm due to the limited bandwidth of the chirped mirrors used for compression. (b) Chronocyclic Wigner function of the pulse, indicating the complex character of the compressor output. The pulse has a slight positive chirp and structure away from the main peak. (c) Measured spectral intensity (blue) and reconstructed spectral phase (green), taken at the center of the beam. (d) Fourier-transform-limited temporal intensity (blue) and reconstructed temporal intensity (green). The full width at half-maximum pulse durations are 5.2 and 7.5 fs, respectively.
Fig. 41 Schematic diagram illustrating the nonlinear process for generating two spectrally sheared replicas in (a) SPIDER and (b) ARAIGNEE. In the latter, the direction of propagation of the beams in the long crystal determines the wavelength of upconversion. (c) The ARAIGNEE apparatus, showing the paths for the fundamental (red) and upconverted (blue) beams that are interfered in the spectrometer.
Fig. 42 (a) Schematic of the two-dimensional spectral shearing interferometry (2DSI) apparatus, showing the generation of two phase-delay-variable chirped ancillae in a Michelson interferometer. (b) Raw interferograms from (a) a 5 fs laser pulse and (b) a pulse dispersed by 1 mm of fused silica. (c) Predicted and measured interferometric autocorrelation of a 5 fs pulse (figure courtesy J. Birge and F. Kärtner).
Fig. 43 (a) Schematic arrangement for a linear spectral shearing interferometer based on electro-optic modulation. (b) Spectrum (solid black curve) and spectral phase measured for an input average power of 2 mW, 10 μW, and 270 nW (solid red curve, blue dots, and green squares, respectively).
Fig. 44 Characterization of sub-100 as XUV pulses by using spectrograms. (a) Measured photoelectron spectrum as a function of the delay between the XUV pulse and an IR ancilla. (b) Spectrogram reconstructed by using an iterative deconvolution algorithm. (c) Temporal intensity (solid curve) and phase (dashed curve) and (d) spectral intensity (solid curve) and phase (dashed curve) of the reconstructed XUV pulse, showing some residual chirp due to the HHG process (figure courtesy E. Goulielmakis and F. Krausz).
Fig. 45 Schematic diagram of a terahertz time-domain photoconductive detector. (a) Contact geometry of the polarization-sensitive THz receiver, including (b) an electron micrograph of the gap region. A laser pulse forms a gate beam, which generates electrons in the detector material, onto which the THz radiation is focused by using an off-axis parabolic mirror. The delay between the gate and the THz pulses is varied to map out the electric field of the latter. (c), (d) Examples of the vector field of two THz pulses measured by using this device (figure courtesy E. Castro-Camus and M. B. Johnston).
Fig. 46 Electric field of two pulses with identical envelopes and different CEO phases. The phase between the carrier and the peak of the envelope is 0 in (a) and π/2 in (b). (c) Schematic of an f-to-2f interferometer. (d) Intensity spectrum of a pulse train with nonzero CEO frequency, showing the comblike structure of modes separated by the pulse repetition frequency f_rep and offset by f_CEO (red), and intensity spectrum of the upconverted pulse train, showing the modes separated by f_rep and offset by 2f_CEO (blue).
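For reference, the standard comb bookkeeping behind panel (d) (textbook relations, not specific to this article) can be written as
\[ \nu_{n} = n f_{\mathrm{rep}} + f_{\mathrm{CEO}}, \qquad 2\nu_{n} = 2 n f_{\mathrm{rep}} + 2 f_{\mathrm{CEO}}, \]
\[ 2\nu_{n} - \nu_{2n} = f_{\mathrm{CEO}}, \qquad f_{\mathrm{CEO}} = \frac{\Delta\phi_{\mathrm{CEO}}}{2\pi}\, f_{\mathrm{rep}}, \]
so that heterodyning the frequency-doubled comb against the fundamental comb in their region of spectral overlap produces a beat note directly at f_CEO, where Δφ_CEO is the pulse-to-pulse slip of the carrier-envelope phase.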

Equations (138)


\[ \varepsilon(t) = E(t) + E^{*}(t) \]
\[ E(t) = |E(t)| \exp[i\psi(t)] \exp(i\psi_{0}) \exp(-i\omega_{0} t) \]
\[ \tilde{E}(\omega) = |\tilde{E}(\omega)| \exp[i\phi(\omega)] = \int_{-T}^{T} dt\, E(t)\, e^{i\omega t} \]
\[ C(\tau) = \int dt\, C(t, t+\tau) = \int \frac{d\omega}{2\pi}\, |\tilde{E}(\omega)|^{2}\, e^{-i\omega\tau} \]
\[ \mu = \frac{\int dt\, dt'\, |C(t, t')|^{2}}{\left[\int dt\, C(t, t)\right]^{2}} \]
\[ \tilde{\tilde{C}}(\omega, \omega') = \tilde{E}(\omega)\, \tilde{E}^{*}(\omega') \]
\[ \tilde{\tilde{C}}(\Delta\omega, \omega_{C}) = \tilde{E}\left(\omega_{C} + \frac{\Delta\omega}{2}\right) \tilde{E}^{*}\left(\omega_{C} - \frac{\Delta\omega}{2}\right) \]
\[ C(t_{C}, \Delta t) = E\left(t_{C} + \frac{\Delta t}{2}\right) E^{*}\left(t_{C} - \frac{\Delta t}{2}\right) \]
\[ W(t, \omega) = \int dt'\, E\left(t + \frac{t'}{2}\right) E^{*}\left(t - \frac{t'}{2}\right) e^{i\omega t'} \]
\[ W(t, \omega) = \int \frac{d\omega'}{2\pi}\, \tilde{E}\left(\omega + \frac{\omega'}{2}\right) \tilde{E}^{*}\left(\omega - \frac{\omega'}{2}\right) e^{-i\omega' t} \]
\[ I(t) = |E(t)|^{2} = \int \frac{d\omega}{2\pi}\, W(t, \omega) \]
\[ \tilde{I}(\omega) = |\tilde{E}(\omega)|^{2} = \int dt\, W(t, \omega) \]
\[ E_{\mathrm{OUTPUT}}(t) = \int dt'\, H(t, t')\, E_{\mathrm{INPUT}}(t') \]
\[ S(t) = \int dt'\, R(t - t')\, |E(t')|^{2} \]
\[ H(t, t') = \frac{1}{\sqrt{2\pi b}} \exp\left[\frac{i}{2b}\left(a t^{2} - 2 t t' + d t'^{2}\right)\right] \]
\[ \int dt''\, H(t, t'')\, H^{*}(t', t'') = \delta(t - t') \]
Shutter (time gate): \[ N_{A}(t - \tau; \tau_{g}) = \exp\left[-\frac{(t - \tau)^{2}}{\tau_{g}^{2}}\right] \]
Linear phase modulator: \[ N_{LP}(t; \psi^{(1)}) = \exp(i\psi^{(1)} t) \]
Quadratic phase modulator: \[ N_{QP}(t; \psi^{(2)}, \tau) = \exp\left[\frac{i\psi^{(2)} (t - \tau)^{2}}{2}\right] \]
Spectrometer: \[ \tilde{S}_{A}(\omega - \Omega; \Gamma) = \exp\left[-\frac{(\omega - \Omega)^{2}}{\Gamma^{2}}\right] \]
Delay line: \[ \tilde{S}_{LP}(\omega; \phi^{(1)}) = \exp(i\phi^{(1)} \omega) \]
Dispersive delay line: \[ \tilde{S}_{QP}(\omega; \phi^{(2)}, \omega_{R}) = \exp\left[\frac{i\phi^{(2)} (\omega - \omega_{R})^{2}}{2}\right] \]
\[ S(\Omega; \Gamma) = \int dt \left| \int dt'\, S_{A}(t - t'; \Omega, \Gamma)\, E(t') \right|^{2} = \int \frac{d\omega}{2\pi}\, |\tilde{S}_{A}(\omega - \Omega)|^{2}\, \tilde{I}(\omega) \]
\[ S(\Omega; \Gamma) = \int dt \int \frac{d\omega}{2\pi}\, W(t, \omega)\, W_{S}(t, \omega; \Omega, \Gamma) \]
\[ W_{S}(t, \omega; \Omega, \Gamma) = \int dt'\, S_{A}\left(t + \frac{t'}{2}; \Omega, \Gamma\right) S_{A}^{*}\left(t - \frac{t'}{2}; \Omega, \Gamma\right) e^{i\omega t'} \]
\[ S(\tau; \tau_{g}) = \int dt\, |N_{A}(t - \tau; \tau_{g})|^{2}\, I(t) \]
\[ P^{(2)}(t) = \chi^{(2)}\, \varepsilon_{1}(t)\, \varepsilon_{2}(t) \]
\[ E_{3}(t) = E_{1}(t)\, E_{2}(t) \]
\[ \mathrm{AC}(\tau) = \int dt\, |E(t)\, E(t - \tau)|^{2} = \int dt\, I(t)\, I(t - \tau) \]
\[ \mathrm{AC}(\tau) = 4 \int I(t)\, I(t - \tau)\, dt + 2 \int I(t)^{2}\, dt \]
\[ \Delta t_{\mathrm{AC}}^{2} = \frac{\int \tau^{2}\, \mathrm{AC}(\tau)\, d\tau}{\int \mathrm{AC}(\tau)\, d\tau} = 2\, \frac{\int t^{2}\, I(t)\, dt}{\int I(t)\, dt} = 2\, \Delta t_{I}^{2} \]
\[ \mathrm{IAC}(\tau) = \int dt\, |E(t) + E(t - \tau)|^{4} \]
\[ S_{n+1}(\tau) = \int dt\, I^{n}(t - \tau)\, I(t) \]
\[ D(\{p_{i}\}) = \int dt \int \frac{d\omega}{2\pi}\, W(t, \omega)\, F(t, \omega; \{p_{i}\}) \]
\[ D(\tau) = \int dt \int \frac{d\omega}{2\pi}\, W(t, \omega)\, F(t; \tau) = \int dt\, I(t)\, F(t; \tau) \]
\[ W_{M}(t, \omega; \{\Omega, \tau\}) = \int \frac{d\omega'}{2\pi}\, |\tilde{S}_{A}(\omega' - \Omega)|^{2} \int dt'\, N_{A}\left(t + \frac{t'}{2} - \tau\right) N_{A}^{*}\left(t - \frac{t'}{2} - \tau\right) \exp[i(\omega - \omega') t'] \]
\[ D(\Omega, \tau) = \int \frac{d\omega}{2\pi} \int dt\, W(t, \omega)\, W_{M}(t - \tau, \omega - \Omega) = \left| \int dt\, E(t)\, N_{A}(t - \tau)\, \exp(i\Omega t) \right|^{2} \]
\[ D(\Omega, \tau) = \left| \int \frac{d\omega}{2\pi}\, \tilde{S}_{A}(\omega - \Omega)\, \tilde{E}(\omega)\, e^{-i\omega\tau} \right|^{2} \]
\[ W_{\mathrm{OUTPUT}}(t, \omega) = W_{\mathrm{INPUT}}(t - \phi^{(2)}\omega, \omega) \]
\[ W_{\mathrm{OUTPUT}}(t, \omega) = W_{\mathrm{INPUT}}(t, \omega + \psi^{(2)} t) \]
\[ D(\omega_{C1}, \omega_{C2}, \tau) = \int dt \left| N_{A}(t - \tau) \int \frac{d\omega}{2\pi} \left[ \tilde{S}_{A}(\omega - \omega_{C1}) + \tilde{S}_{A}(\omega - \omega_{C2}) \right] \tilde{E}(\omega)\, \exp(-i\omega t) \right|^{2} \]
\[ D\left(\omega + \frac{\Delta\omega}{2}, \omega - \frac{\Delta\omega}{2}, \tau\right) = \tilde{I}\left(\omega + \frac{\Delta\omega}{2}\right) + \tilde{I}\left(\omega - \frac{\Delta\omega}{2}\right) + 2\, |\tilde{\tilde{C}}(\Delta\omega, \omega)| \cos\left\{ \arg[\tilde{\tilde{C}}(\Delta\omega, \omega)] - \Delta\omega\, \tau \right\} \]
\[ D\left(t + \frac{\Delta t}{2}, t - \frac{\Delta t}{2}, \Omega\right) = I\left(t + \frac{\Delta t}{2}\right) + I\left(t - \frac{\Delta t}{2}\right) + 2\, |C(t, \Delta t)| \cos\left\{ \arg[C(t, \Delta t)] + \Delta t\, \Omega \right\} \]
\[ D(\psi^{(1)}, \Omega; \phi^{(1)}) = \int \frac{d\omega}{2\pi} \left| \tilde{S}_{A}(\omega - \Omega) \left[ \int \frac{d\omega'}{2\pi}\, \tilde{N}_{LP}(\omega - \omega', \psi^{(1)})\, \tilde{E}(\omega') + \tilde{S}_{LP}(\omega, \phi^{(1)})\, \tilde{E}(\omega) \right] \right|^{2} \]
\[ D\left(\Delta\omega, \omega_{C} - \frac{\Delta\omega}{2}; \phi^{(1)}\right) = \tilde{I}\left(\omega_{C} + \frac{\Delta\omega}{2}\right) + \tilde{I}\left(\omega_{C} - \frac{\Delta\omega}{2}\right) + 2\, |\tilde{\tilde{C}}(\Delta\omega, \omega_{C})| \cos\left\{ \arg[\tilde{\tilde{C}}(\Delta\omega, \omega_{C})] - \phi^{(1)}\left(\omega_{C} - \frac{\Delta\omega}{2}\right) \right\} \]
\[ D(\tau, \Delta t; \psi^{(1)}) = I\left(t_{C} + \frac{\Delta t}{2}\right) + I\left(t_{C} - \frac{\Delta t}{2}\right) + 2\, |C(t_{C}, \Delta t)| \cos\left\{ \arg[C(t_{C}, \Delta t)] - \psi^{(1)}\left(t_{C} - \frac{\Delta t}{2}\right) \right\} \]
\[ S(\tau, \Omega) = \int \frac{d\omega}{2\pi}\, |\tilde{R}(\omega - \Omega)|^{2} \left| \int dt\, E(t)\, g(t - \tau)\, \exp(i\omega t) \right|^{2} \]
\[ S(\tau, \Omega) = \left| \int dt\, E(t)\, g(t - \tau)\, \exp(i\Omega t) \right|^{2} \]
\[ S(\tau, \Omega) = \int dt\, |g(t - \tau)|^{2} \left| \int \frac{d\omega}{2\pi}\, \tilde{E}(\omega)\, \tilde{R}(\omega - \Omega)\, \exp(-i\omega t) \right|^{2} \]
\[ S(\tau, \Omega) = \left| \int \frac{d\omega}{2\pi}\, \tilde{E}(\omega)\, \tilde{R}(\omega - \Omega)\, \exp(-i\omega\tau) \right|^{2} \]
\[ S(\tau, \Omega) = \int dt \int \frac{d\omega}{2\pi}\, W_{E}(t, \omega)\, W_{g}(t - \tau, \Omega - \omega) \]
\[ \langle\Omega\rangle_{S}(\tau) = \frac{\int d\Omega\, \Omega\, S(\tau, \Omega)}{\int d\Omega\, S(\tau, \Omega)} \]
\[ \langle T\rangle_{S}(\Omega) = \frac{\int d\tau\, \tau\, S(\tau, \Omega)}{\int d\tau\, S(\tau, \Omega)} \]
\[ \langle\Omega\rangle_{S}(\tau) = \frac{\int dt\, I_{E}(t)\, I_{g}(t - \tau)\, [\Omega_{E}(t) + \Omega_{g}(t - \tau)]}{\int dt\, I_{E}(t)\, I_{g}(t - \tau)} \]
\[ \langle T\rangle_{S}(\Omega) = \frac{\int d\omega\, I_{E}(\omega)\, I_{g}(\Omega - \omega)\, [T_{E}(\omega) - T_{g}(\Omega - \omega)]}{\int d\omega\, I_{E}(\omega)\, I_{g}(\Omega - \omega)} \]
\[ \langle\Omega\rangle_{S}(\tau) = \Omega_{E}(\tau) = \frac{\partial\psi}{\partial t}(\tau) \]
\[ \langle T\rangle_{S}(\Omega) = T_{E}(\Omega) = \frac{\partial\phi}{\partial\omega}(\Omega) \]
\[ S(\tau, \Omega) = \left| \int dt\, E(t)\, g(t - \tau)\, \exp(i\Omega t) \right|^{2} = \left| \int dt\, g^{*}(t)\, E^{*}[-(t - \tau)]\, \exp(i\Omega t) \right|^{2} \]
\[ S(\tau, \Omega) = \left| \int dt\, E(t)\, E_{R}(t - \tau)\, \exp(i\Omega t) \right|^{2} \]
\[ S(\tau, \Omega) = \left| \int dt\, E(t)\, E(t - \tau)\, \exp(i\Omega t) \right|^{2} \]
\[ S(\tau, \Omega) = \left| \int dt\, E(t)\, E(t - \tau)\, \exp(i\Omega t) \right|^{2} \]
\[ S(\tau, \Omega) = \left| \int dt\, E(t)\, I(t - \tau)\, \exp(i\Omega t) \right|^{2} \]
\[ S(\tau, \Omega) = \left| \int dt\, E^{2}(t)\, E^{*}(t - \tau)\, \exp(i\Omega t) \right|^{2} \]
\[ S(\tau, \Omega) = \left| \int dt\, E(t)\, I(t - \tau)\, \exp(i\Omega t) \right|^{2} \]
\[ S(\tau, \Omega) = \left| \int dt\, E^{2}(t)\, E^{*}(t - \tau)\, \exp(i\Omega t) \right|^{2} \]
\[ S(\tau, \Omega) = \left| \int dt\, E(t)\, E(t - \tau)\, \exp(i\Omega t) \right|^{2} \]
\[ \tilde{E}_{\mathrm{OUTPUT}}(k_{x}) = \tilde{E}_{\mathrm{INPUT}}(k_{x}) \exp\left(\frac{i L}{2 k_{0}} k_{x}^{2}\right) \]
\[ \tilde{E}_{\mathrm{OUTPUT}}(\omega) = \tilde{E}_{\mathrm{INPUT}}(\omega) \exp\left(\frac{i \phi L\, \omega^{2}}{2}\right) \]
\[ E_{\mathrm{OUTPUT}}(x) = E_{\mathrm{INPUT}}(x) \exp\left(-\frac{i k_{0}}{2 f} x^{2}\right) \]
\[ E_{\mathrm{OUTPUT}}(t) = E_{\mathrm{INPUT}}(t) \exp\left(\frac{i \psi t^{2}}{2}\right) \]
\[ W_{\mathrm{OUTPUT}}(t, \omega) = W_{\mathrm{INPUT}}(t - \phi\omega, \omega) \]
\[ W_{\mathrm{OUTPUT}}(t, \omega) = W_{\mathrm{INPUT}}(t, \omega + \psi t) \]
\[ I(t) = \int \frac{d\omega}{2\pi}\, W(t, \omega) \]
\[ \tilde{I}(\omega) = \int dt\, W(t, \omega) \]
\[ P_{\theta}(u) = \int dx\, dy\, a(x, y)\, \delta\left[y - x\tan(\theta) - \frac{u}{\cos(\theta)}\right] \]
\[ P_{\theta}(u) = \int dx\, a\left[x, \frac{u}{\cos(\theta)} + x\tan(\theta)\right] \]
\[ W_{\phi,\psi}(t, \omega) = W[(1 - \phi\psi) t - \phi\omega, \omega + \psi t] \]
\[ \tilde{I}_{\phi,\psi}(\omega) = \int dt\, W_{\phi,\psi}(t, \omega) = \frac{1}{1 - \phi\psi} \int dt\, W\left(t, \frac{\omega}{1 - \phi\psi} + \frac{t}{\frac{1}{\psi} - \phi}\right) \]
\[ \tilde{I}_{\phi,\psi}(\omega) = \tilde{I}_{\theta}(\omega) = \int dt\, W\left[t, \frac{\omega}{\cos(\theta)} - \tan(\theta)\, t\right] \]
\[ \tilde{I}_{\pi/2}(\omega) = \int dt\, W[-\phi\omega, \omega + \psi t] = I(-\phi\omega) \]
\[ \tilde{a}(u, v) = \int dx\, dy\, a(x, y)\, \exp[-2\pi i (u x + v y)] \]
\[ \tilde{a}(0, v) = \int dy \left[\int dx\, a(x, y)\right] \exp(-2\pi i v y) \]
\[ \tilde{a}(0, v) = \tilde{P}_{0}(v) \]
\[ \tilde{a}[\rho\cos(\theta), \rho\sin(\theta)] = \tilde{P}_{\pi/2 - \theta}(\rho) \]
\[ a(x, y) = \int du\, dv\, \tilde{a}(u, v)\, \exp[2\pi i (u x + v y)] \]
\[ a(x, y) = \int_{0}^{\pi} d\theta \int d\rho\, \tilde{a}[\rho\cos(\theta), \rho\sin(\theta)]\, |\rho|\, \exp\{2\pi i \rho [x\cos(\theta) + y\sin(\theta)]\} \]
\[ a(x, y) = \int_{0}^{\pi} d\theta \int d\rho\, \tilde{P}_{\pi/2 - \theta}(\rho)\, |\rho|\, \exp\{2\pi i \rho [x\cos(\theta) + y\sin(\theta)]\} \]
\[ W_{2}(t, \omega) = W_{r}\left(t, \omega - \frac{t}{\phi_{1}}\right) \]
\[ \tilde{I}_{2}(\omega) = \int dt\, W_{2}(t, \omega) = \int dt\, W_{r}\left(t, \omega - \frac{t}{\phi_{1}}\right) \propto |r(\phi_{1}\omega)|^{2} \]
\[ \frac{\partial \tilde{I}_{\theta}}{\partial\theta}(\omega) = \int dt \left[ \omega \frac{\partial}{\partial\theta}\left(\frac{1}{\cos(\theta)}\right) - t \frac{\partial}{\partial\theta}(\tan(\theta)) \right] \frac{\partial W}{\partial\omega}\left[t, \frac{\omega}{\cos(\theta)} - \tan(\theta)\, t\right] \]
\[ \left.\frac{\partial \tilde{I}_{\theta}}{\partial\theta}\right|_{\theta=0}(\omega) = -\int dt\, t\, \frac{\partial W}{\partial\omega}(t, \omega) = -\frac{\partial}{\partial\omega} \int dt\, t\, W(t, \omega) \]
\[ \left.\frac{\partial \tilde{I}_{\theta}}{\partial\theta}\right|_{\theta=0}(\omega) = -\frac{\partial}{\partial\omega}\left[\tilde{I}(\omega)\, \frac{\partial\phi}{\partial\omega}\right] \]
\[ \tilde{I}_{\psi}(\omega) = \int dt\, W[t, \omega + \psi t] \]
\[ \frac{\partial \tilde{I}_{\psi}}{\partial\psi}(\omega) = \int dt\, \frac{\partial W(t, \omega + \psi t)}{\partial\psi} = \int dt\, t\, \frac{\partial W(t, \omega + \psi t)}{\partial\omega} = \frac{\partial}{\partial\omega} \int dt\, t\, W(t, \omega + \psi t) \]
\[ \left.\frac{\partial \tilde{I}_{\psi}}{\partial\psi}\right|_{\psi=0}(\omega) = \frac{\partial}{\partial\omega} \int dt\, t\, W(t, \omega) = \frac{\partial}{\partial\omega}\left[\tilde{I}(\omega)\, \frac{\partial\phi}{\partial\omega}\right] \]
\[ W_{1}(t, \omega) = W_{0}(t - \phi_{1}\omega, \omega) \]
\[ W_{2}(t, \omega) = W_{1}(t, \omega + \psi t) \]
\[ W_{3}(t, \omega) = W_{2}(t - \phi_{2}\omega, \omega) \]
\[ W_{3}(t, \omega) = W_{0}[(1 - \phi_{1}\psi) t - (\phi_{1} + \phi_{2} - \phi_{1}\phi_{2}\psi)\omega, (1 - \phi_{2}\psi)\omega + \psi t] \]
\[ \frac{1}{\phi_{1}} + \frac{1}{\phi_{2}} = \psi \]
\[ W_{3}(t, \omega) = W_{0}\left(-\frac{\phi_{1}}{\phi_{2}} t, -\frac{\phi_{2}}{\phi_{1}}\omega + \psi t\right) \]
\[ I_{3}(t) = I_{0}\left(-\frac{\phi_{1}}{\phi_{2}} t\right) \]
\[ \frac{1}{z_{1}} + \frac{1}{z_{2}} = \frac{1}{f} \]
\[ M = -\frac{z_{2}}{z_{1}} \]
\[ W_{3}(t, \omega) = W_{2}(t - \phi_{2}\omega, \omega) = W_{r}\left[t - \phi_{2}\omega, \omega - \frac{t - \phi_{2}\omega}{\phi_{1}}\right] \]
\[ I_{3}(t) = \int d\omega\, W_{3}(t, \omega) = \int d\omega\, W_{r}\left[-\phi_{2}\omega, \omega\left(1 + \frac{\phi_{2}}{\phi_{1}}\right) + \frac{t}{\phi_{2}}\right] \]
\[ I_{3}(t) \propto \left| r\left[\frac{t}{1 + \phi_{2}/\phi_{1}}\right] \right|^{2} \]
\[ S(\tau, \omega) = \left| \int dt\, E(t)\, \exp[i\alpha I_{\mathrm{PUMP}}(t - \tau)]\, \exp(i\omega t) \right|^{2} \]
\[ \psi(t) = \frac{\pi V_{0}}{V_{\pi}} \cos(\Omega t) \simeq \psi_{0} - \frac{\pi V_{0} \Omega^{2}}{2 V_{\pi}}\, t^{2} \]
\[ E'(t) = E(t) \exp\left[i\, \frac{2\pi n_{2} L\, I(t)}{\lambda}\right] \]
\[ E'(t) = E(t) \exp\left[i\, \frac{4\pi n_{2} L\, I_{\mathrm{PUMP}}(t)}{\lambda}\right] \]
\[ D(\Omega; \tau) = \left| \tilde{E}_{R}(\Omega) + \tilde{E}(\Omega)\, e^{i\Omega\tau} \right|^{2} \]
\[ D(\tau; \Omega) = \left| E_{R}(\tau) + E(\tau)\, e^{i\Omega\tau} \right|^{2} \]
\[ E_{\mathrm{OUT}}(t) = E(t)\, e^{i\Omega t} + E(t - \tau) = \mathrm{FT}\left[\tilde{E}(\omega + \Omega) + \tilde{E}(\omega)\, e^{i\omega\tau}\right] \]
\[ D(t; \Omega, \tau) = |E_{\mathrm{OUT}}(t)|^{2} = I(t) + I(t - \tau) + 2\,\mathrm{Re}\left[E(t)\, E^{*}(t - \tau)\, e^{i\Omega t}\right] \]
\[ \tilde{I}(\omega; \Omega, \tau) = |\tilde{E}_{\mathrm{OUT}}(\omega)|^{2} = \tilde{I}(\omega) + \tilde{I}(\omega + \Omega) + 2\,\mathrm{Re}\left[\tilde{E}(\omega)\, \tilde{E}^{*}(\omega + \Omega)\, e^{i\omega\tau}\right] \]
\[ \phi(\omega) - \phi(\omega + \Omega) + \omega\tau = m\pi \]
\[ \tilde{\tilde{C}}(\omega_{1}, \omega_{2}) = \tilde{E}(\omega_{1})\, \tilde{E}^{*}(\omega_{2}) \]
\[ \tilde{\tilde{C}}(\Delta\omega, \omega_{C}) = \tilde{\tilde{C}}(\omega_{1}, \omega_{2}) \]
\[ \tilde{\tilde{C}}(0, \omega_{C}) = |\tilde{E}(\omega_{C})|^{2} \]
\[ \tilde{I}(\omega_{C}; \Delta\omega, \tau) = \left| \tilde{E}\left(\omega_{C} + \frac{\Delta\omega}{2}\right) + e^{-i(\omega_{C} - \Delta\omega/2)\tau}\, \tilde{E}\left(\omega_{C} - \frac{\Delta\omega}{2}\right) \right|^{2} \]
\[ \tilde{I}(\omega_{C}; \Delta\omega, \tau) = \tilde{I}\left(\omega_{C} + \frac{\Delta\omega}{2}\right) + \tilde{I}\left(\omega_{C} - \frac{\Delta\omega}{2}\right) + 2\, |\tilde{\tilde{C}}(\Delta\omega, \omega_{C})| \cos\left\{ \arg[\tilde{\tilde{C}}(\Delta\omega, \omega_{C})] + \tau\left(\omega_{C} - \frac{\Delta\omega}{2}\right) \right\} \]
\[ D(\omega; \tau) = D^{(\mathrm{dc})}(\omega) + D^{(\mathrm{ac})}(\omega)\, e^{i\omega\tau} + \left[ D^{(\mathrm{ac})}(\omega)\, e^{i\omega\tau} \right]^{*} \]
\[ D^{(\mathrm{dc})}(\omega) = \tilde{I}(\omega) + \tilde{I}_{R}(\omega) \]
\[ D^{(\mathrm{ac})}(\omega) = |\tilde{E}(\omega)\, \tilde{E}_{R}(\omega)|\, e^{i[\phi(\omega) - \phi_{R}(\omega)]} \]
\[ \tilde{D}^{(\mathrm{filtered})}(t) = H(t - \tau)\, \tilde{D}(t) \]
\[ \phi(\omega) - \phi_{R}(\omega) + \omega\tau = \arg\left[D^{(\mathrm{ac})}(\omega) \exp(i\omega\tau)\right] = \arg\left\{\mathrm{IFT}\left[\tilde{D}^{(\mathrm{filtered})}\right](\omega)\right\} \]
\[ D^{(\mathrm{dc})}(\omega) = \tilde{I}(\omega + \Omega) + \tilde{I}(\omega) \]
\[ D^{(\mathrm{ac})}(\omega) = |\tilde{E}(\omega + \Omega)\, \tilde{E}(\omega)|\, e^{i[\phi(\omega + \Omega) - \phi(\omega)]} \]
\[ \phi(0) = 0 \]
\[ \phi[(n+1)\Omega] = \phi(n\Omega) + \theta(n\Omega) \]
\[ \Delta t_{0}^{2} = \Delta t_{\mathrm{FTL}}^{2}\left[1 + \frac{\Delta\omega^{2}}{\Delta t_{\mathrm{FTL}}^{2}}\left(\phi^{(2)}\right)^{2}\right] \]
\[ \Delta t^{2} = \Delta t_{\mathrm{FTL}}^{2}\left[1 + \frac{\Delta\omega^{2}}{\Delta t_{\mathrm{FTL}}^{2}}\left(\phi^{(2)} + \frac{\delta\tau}{\Omega}\right)^{2}\right] \]
\[ \varepsilon_{\Delta t} = \sqrt{1 + (N \varepsilon_{\tau})^{2}} - 1 \]
\[ \varepsilon_{\Delta t} = N \varepsilon_{\tau}\, \frac{\Delta t_{\mathrm{FTL}}}{\Delta t_{0}} \]
\[ \phi(\delta\omega) = 0 \]
\[ \phi[\delta\omega + (n+1)\Omega] = \phi(\delta\omega + n\Omega) + \theta(\delta\omega + n\Omega) \]
\[ \tilde{I}(\omega) = |\tilde{E}_{1}(\omega) + \tilde{E}_{2}(\omega)|^{2} = |\tilde{E}(\omega - \omega_{0} - \Omega)\, \tilde{E}_{R}(\omega_{0} + \Omega)|^{2} + |\tilde{E}(\omega - \omega_{0})\, \tilde{E}_{R}(\omega_{0})|^{2} + 2\, |\tilde{E}(\omega - \omega_{0} - \Omega)|\, |\tilde{E}(\omega - \omega_{0})|\, |\tilde{E}_{R}(\omega_{0} + \Omega)|\, |\tilde{E}_{R}(\omega_{0})| \cos[\phi(\omega - \omega_{0} - \Omega) - \phi(\omega - \omega_{0}) - \phi_{R}(\omega_{0} + \Omega) + \phi_{R}(\omega_{0}) + \omega\tau] \]