
Surgical microscope integrated MHz SS-OCT with live volumetric visualization


Abstract

Intraoperative optical coherence tomography has yet to become pervasive in routine ophthalmic surgery, despite evident clinical benefits, largely because today’s spectral-domain optical coherence tomography systems lack flexibility, acquisition speed, and imaging depth. We present, to the best of our knowledge, the most flexible swept-source optical coherence tomography (SS-OCT) engine coupled to an ophthalmic surgical microscope that operates at MHz A-scan rates. We use a MEMS tunable VCSEL to implement application-specific imaging modes, enabling diagnostic and documentary capture scans, live B-scan visualizations, and real-time 4D-OCT renderings. The technical design and implementation of the SS-OCT engine, as well as the reconstruction and rendering platform, are presented. All imaging modes are evaluated in surgical mock maneuvers using ex vivo bovine and porcine eye models. The applicability and limitations of MHz SS-OCT as a visualization tool for ophthalmic surgery are discussed.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Corrections

9 February 2023: Minor corrections were made to Figures 1, 2, and 4.

1. Introduction

Optical coherence tomography (OCT) is a non-invasive imaging technology capable of acquiring cross-sectional and volumetric images of human tissue [1]. Its clinical application as a diagnostic and intrasurgical imaging modality is widespread. Recently, swept-source OCT (SS-OCT) devices for ophthalmic diagnostics have demonstrated superior image quality compared to spectral-domain OCT (SD-OCT) [2]. Yet, all commercially available intraoperative OCT engines are still based on SD-OCT.

Although both techniques have in theory identical sensitivity, SS-OCT has advantages in practice. It shows better sensitivity, which can be explained by lower losses in the detection path: SS-OCT systems use PIN photodiodes in a balanced configuration, whereas SD-OCT systems use a spectrometer, whose additional lenses and grating introduce extra losses. Even more significant is the sensitivity penalty introduced by SD-OCT’s signal roll-off over depth [3]. SD-OCT systems often lose several dB per millimeter of imaging depth, whereas SS-OCT systems using long coherence length swept lasers can maintain the signal over several meters of optical path length difference. The higher sensitivity of SS-OCT enables a further increase of imaging speed and depth. The use of longer wavelengths to better penetrate scattering media, for example the 1060nm wavelength band for retinal and choroidal imaging, is not a unique advantage of SS-OCT [4], but such wavelengths are more accessible with SS-OCT systems because no spectrometer with an expensive InGaAs line scan camera is required. For SS-OCT setups the imaging speed depends on the sweep repetition rate of the swept source and on the speed of the data acquisition card (DAQ), whereas for SD-OCT it is determined by the line rate of the line scan camera. Off the shelf, swept lasers and DAQs are the faster option, although high-speed SD-OCT setups with MHz axial scan rates have also been demonstrated [5–7].

In SD-OCT all wavelengths are acquired simultaneously, which leads to a longer overall integration time per wavelength compared to SS-OCT and thereby potentially to motion-induced interference fringe washout [8]. However, today’s SD-OCT systems are typically fast enough that eye motion does not cause relevant artifacts; only inside large vessels can fringe washout caused by blood flow lead to signal drop-out [9]. The bigger concern with respect to SD-OCT’s interference fringe washout is laterally scanning the beam over a scattering sample by more than a spot size during the integration time of an A-scan, which also results in fringe washout [8]. Therefore, a minimum sampling density is required. SS-OCT, on the other hand, suffers from a different but related phenomenon: it mainly loses axial resolution when the lateral sampling of a B-scan gets too sparse [8]. Since the swept source sweeps through the wavelengths over time [10], not every position is illuminated with all wavelengths in case of insufficient sampling. The effective spectral bandwidth is thereby reduced, and because axial resolution is inversely proportional to bandwidth, resolution degrades [7,10]. In general, it is easier to achieve high spectral resolution, and thereby greater imaging depth, in an SS-OCT system than in SD-OCT, because swept-source lasers have extremely narrow instantaneous linewidths and state-of-the-art digitizers can sample very fast. Finally, SS-OCT provides greater flexibility than SD-OCT, which is what we exploit in this paper. The spectral sampling, and therefore the imaging depth, can be varied freely in SS-OCT in two ways: by changing the sampling rate while keeping the sweep range constant, or by adjusting the sweep range while keeping the sampling rate constant. In SD-OCT, in contrast, the spectral sampling is fixed by the number of pixels of the spectrometer’s line scan camera.
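For reference, both knobs act through the standard Fourier-domain relation for the maximum (Nyquist-limited) imaging depth,

$$ z_{\max} = \frac{\lambda_0^2}{4\,n\,\delta\lambda}, \qquad \delta\lambda = \frac{\Delta\lambda}{N}, $$

where λ0 is the central wavelength, n the group refractive index of the sample, Δλ the sweep range, and N the number of spectral samples per A-scan: a finer spectral sampling interval δλ, obtained either by sampling faster or by narrowing the sweep, increases the accessible depth.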

These advantages of SS-OCT enable faster, more flexible systems and thereby new applications. SS-OCT systems for diagnostic applications with A-scan rates of up to several MHz have been presented that allowed for increasing the field of view (FOV) of structural and angiographic OCT imaging to up to 100 degrees [11–13]. This facilitates an earlier diagnosis of retinal diseases that originate at the periphery of the retina. Another use of the faster imaging speed is live volumetric OCT imaging [14]. So-called 4D-OCT systems can visualize dynamic processes or scenes in three dimensions over time. They can for example be integrated into surgical microscopes to provide live 3D-rendered volumes in addition to the microscopic view [15]. The larger imaging depth of SS-OCT setups has been demonstrated to be beneficial for anterior segment scans or even full-eye-length scans in one single acquisition [16–18].

Such SS-OCT setups require fast tunable lasers with sufficient output power, fast sweep repetition rates, and broad tuning ranges. Fourier domain mode locking (FDML) lasers were introduced in 2006 by Huber et al. [19]. They use a tunable Fabry-Perot filter within a fiber-based ring cavity in which the sweep repetition rate of the laser is matched to the round-trip time of the light in the cavity. Thus, the speed of FDML lasers is not limited by the buildup time of spontaneous emission; instead, with all wavelengths circulating simultaneously, repetition rates of multiple MHz can be reached. However, FDML lasers have the disadvantage of high cost compared to other swept sources and can only operate at a single sweep repetition rate predefined by their cavity length.

A potentially cheaper alternative is the MEMS tunable vertical cavity surface-emitting laser (MEMS-VCSEL). It consists of a bottom semiconductor mirror, an air gap, a gain medium, and a top mirror mounted on a MEMS element which is electrostatically actuated to change the cavity length of the laser and thereby its output wavelength [20]. Two pumping mechanisms are available, electrical and optical; optical pumping is performed through the top mirror by a laser diode. Even though optically pumped VCSELs still exhibit better performance, including wider spectral tuning ranges, electrically pumped VCSELs are expected to make SS-OCT much cheaper as they require fewer components, and there are first demonstrations of electrically pumped VCSELs in ophthalmology [21]. MEMS-VCSELs have been shown to achieve long coherence length, tunable sweep repetition rates, and wide spectral bandwidths [22].

The VCSEL’s flexibility in sweep repetition rate and spectral bandwidth enables the design of application-specific imaging modes in a single instrument. One application which benefits greatly from adaptable imaging parameters is ophthalmic surgery: the same surgical microscope is often used for anterior and posterior surgery, and during different phases of the procedure the surgeon may want to use OCT as a visualization, diagnostic, documentation, or measurement tool.

We present in this paper a novel surgical microscope coupled SS-OCT engine with flexible imaging modes, capable of addressing the diverse clinical needs of ophthalmic surgery.

2. Methods

2.1 Hardware setup

The key component of our OCT engine is a swept MEMS-VCSEL module (Thorlabs Quantum Electronics, Inc., Jessup, MD, USA) that can switch between three sweep repetition rates: 100kHz, 600kHz, and 1.2MHz. The 1.2MHz sweep repetition rate results from utilizing both the up- and down-sweep of the laser sweeping at 600kHz. The central wavelength differs slightly between the three modes and ranges from 1061nm to 1066nm. For all sweep modes an optical output power higher than 30mW is maintained. The spectral bandwidth for the 100kHz and 1.2MHz modes is larger than 97nm at -15dB, and for the 600kHz mode larger than 75nm (Table 1). The reduced bandwidth of the 600kHz mode was chosen intentionally to realize greater imaging depth for imaging the anterior segment and potentially larger FOVs on the retina. The VCSEL is equipped with an optical k-clock module, which contains three different interferometers, one for each sweep mode. That way, we could maintain fairly constant k-clock frequencies across the different imaging modes, and consequently the different imaging depths, to make efficient use of our analog front end and digitizer. We utilize this k-clock module to clock a 12bit PCIe digitizer card (ATS9373, Alazar Technologies, Inc., Pointe-Claire, QC, Canada) for sampling the OCT signal equidistantly in k-space. As a detector we use a 2GHz dual balanced InGaAs receiver (Thorlabs, Inc., Newton, NJ, USA). The maximum k-clock frequency is 1.44GHz. By using both the rising and falling edges of the k-clock signal (dual-edge sampling), where positive zero-crossings correspond to rising edges and negative zero-crossings to falling edges, we clock the digitizer at up to a maximum sampling rate of 2.88GSPS. Sampling the signal at double the highest clock frequency enables the display of the full imaging depth. To switch between the different laser sweep modes, one has to disable the laser, switch the mode, and enable the laser again; the whole process takes a couple of seconds. Designing multiple imaging modes with one laser is in principle also possible with an FDML laser; however, there one could only tune the spectral bandwidth, not the sweep repetition rate.

Table 1. Technical specifications of OCT engine.

The infrared light is coupled into a Mach-Zehnder interferometer (Optowaves, Inc., San Jose, CA, USA) (Fig. 1). The interferometer is co-packaged into a custom designed integrated module with a motorized variable delay line and a polarization paddle in the reference arm as well as a second polarization paddle in the sample arm.

Fig. 1. Optics schematic of the 4D-OCT engine highlighting the allocation of components in the integrated module and the add-on module.

The OCT engine control is performed on a CompactRIO-Controller (CRIO) (cRIO-9066, National Instruments Corp., Austin, TX, USA) that features a 667MHz Dual-Core ARM Cortex-A9 processor core, a Zynq-7020 FPGA and 256MB of onboard DRAM. For interfacing with various hardware components, the CRIO is equipped with five C-Series input/output modules (NI-9215, NI-9263, NI-9403, NI-9402, NI-9853, National Instruments Corp., Austin, TX, USA).

All scan patterns are developed in MATLAB R2018b (The MathWorks, Inc., Natick, MA, USA) and are saved on the CRIO’s DRAM. The engine control can thereby run independently and interfaces with a host computer over Ethernet. This connection allows for manipulating scanning amplitudes and scanning offsets on the fly. The workstation employed to perform all required computations (4D-OCT workstation) features an AMD EPYC 7351P CPU with 16 cores and 32 threads/virtual cores running at a base clock rate of 2.4GHz, 128GB of synchronous registered DDR4 RAM (Samsung M386A8K40BM2-CTD, 4x 32GB), and two NVIDIA Titan RTX GPUs connected via NVLink to enable fast memory transfers between the two devices.

To couple the sample arm to a digital ophthalmic surgical microscope (ZEISS ARTEVO 800, Carl Zeiss Meditec AG, Jena, Germany), we developed an add-on module that can be mechanically attached to the surgical microscope (Fig. 2(a)). Since the add-on module reduces the microscope’s working distance, we compensated for it by adjusting the main objective lens’ focal length accordingly. The add-on module still allows for attaching the ZEISS RESIGHT 700 (Carl Zeiss Meditec AG, Jena, Germany) fundus viewing system (Fig. 2(b)). By inserting an additional set of lenses, including an aspheric lens of 60 or 128 diopters, the refractive power of the cornea and crystalline lens is compensated and the beam is focused on the retina. Thereby, the added OCT system can be used for anterior and posterior segment imaging. Furthermore, the module includes a collimating lens, two galvanometric scanners (6215H, Cambridge Technology, Bedford, MA, USA), relay optics to magnify the scan angle, and a motorized OCT focus control (RE10, Maxon Motors, Sachseln, Switzerland) that is operated via the CAN protocol of the surgical microscope to mechanically move the OCT objective lens.

Fig. 2. Add-on module for connecting the sample arm of the OCT engine to the surgical microscope. (a) Interface between microscope and add-on module. (b) RESIGHT 700 attached to the add-on module.

2.2 Imaging modes

All imaging modes are tailor-made for specific applications and are based on the three sweep repetition rates of the VCSEL.

To cover documentation as well as live imaging scenarios we developed four types of scan patterns, which are B-scan line, B-scan cross, raster capture and 4D spiral modes (schematically illustrated in Fig. 3).

Fig. 3. Scan patterns used in microscope integrated OCT. (a) B-scan line. (b) B-scan cross. (c) Raster capture. (d) 4D Spiral.

For B-scan line, B-scan cross and raster capture patterns, we use the laser’s 100kHz mode.

The B-scan line and B-scan cross patterns are mainly designed for live visualization of surgical procedures with enhanced depth compared to the 4D-OCT modes. The raster capture pattern is laid out for high-resolution documentation of stationary situations before, during, or after surgery. For these patterns, each B-scan consists of 1024 A-scans covering a FOV of 11.8mm, with 13312 samples in k. The imaging depth calculated from the laser specifications was 29.7mm in tissue, and the 102nm sweep provided a theoretical axial resolution of 5.9µm (FWHM) in tissue. These theoretical performance values are validated in the Results section.
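These values follow approximately from the standard Gaussian-spectrum relation for the FWHM axial resolution; note that the exact numbers also depend on the actual spectral shape and on how the bandwidth is specified (here at -15dB), so the formula below should be read as an estimate rather than an exact prediction:

$$ \delta z_{\mathrm{FWHM}} \approx \frac{2\ln 2}{\pi}\,\frac{\lambda_0^2}{n\,\Delta\lambda}, $$

with λ0 the central wavelength, Δλ the FWHM spectral bandwidth, and n the group refractive index of tissue.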

The 4D live imaging modes used the laser’s 600kHz and 600 × 2kHz sweep modes. We implemented spiral scanning patterns to reduce the strain on the galvo scanners due to high accelerations and to shorten flyback times, as presented by Carrasco-Zevallos et al. [23]. We believe that for the visualization of surgical maneuvers with 4D live imaging, the FOV is in most cases more important than axial resolution. We therefore decided to trade off axial resolution for lateral sampling rate by utilizing spectral splitting: the spectral interference signal, which is acquired as a function of wavenumber in time, is split into two equal sub-sweeps while the probe beam is scanned at twice the speed [24]. We thereby double the effective A-scan rate relative to the laser’s sweep repetition rate, from 600kHz to an effective 1.2MHz. The resulting axial resolution in tissue of the 4D-OCT mode using spectral splitting was reduced to 16.0µm (FWHM). Technically, one could even achieve 2.4MHz by applying the spectral splitting technique to laser mode 3; however, due to asymmetries in the up- and down-sweep we were not able to provide a stable and well-registered implementation for 2.4MHz. While adhering to the cardinal theorem of interpolation, we were able to realize FOVs for 4D imaging of 3.1mm to 15.7mm in diameter at volume rates ranging from 16Hz to 3Hz (Table 1). The imaging depths were 5.0mm and 3.1mm in tissue for the 600kHz and 1.2MHz modes respectively. To compare our system to other recently published intrasurgical OCT systems, we added the specifications of the prototypes of Carrasco-Zevallos et al. [23] and Theisen-Kunde et al. [25] to Table 1.
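To illustrate the principle, the following minimal C++ sketch (illustrative only, not the authors’ CUDA implementation) splits each acquired k-clocked sweep into two half-length sub-sweeps that are then reconstructed as independent A-scans, doubling the effective A-scan rate at the cost of roughly half the spectral bandwidth and therefore axial resolution.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Split every sweep of `samplesPerSweep` k-clocked samples into two
// half-length sub-sweeps. Each sub-sweep is later Fourier-transformed
// into its own A-scan, doubling the effective A-scan rate.
std::vector<std::vector<uint16_t>> splitSweeps(const std::vector<uint16_t>& raw,
                                               std::size_t samplesPerSweep)
{
    const std::size_t half      = samplesPerSweep / 2;
    const std::size_t numSweeps = raw.size() / samplesPerSweep;
    std::vector<std::vector<uint16_t>> subSweeps;
    subSweeps.reserve(2 * numSweeps);
    for (std::size_t s = 0; s < numSweeps; ++s) {
        const uint16_t* sweep = raw.data() + s * samplesPerSweep;
        // First and second halves of the spectrum become independent A-scans.
        subSweeps.emplace_back(sweep, sweep + half);
        subSweeps.emplace_back(sweep + half, sweep + samplesPerSweep);
    }
    return subSweeps;
}
```

In the actual pipeline this is done without copies, simply by striding pointers through the raw buffer (cf. Fig. 4(a)); the copy-based version above is only meant to make the bookkeeping explicit.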

2.3 OCT signal processing

We implemented a signal processing pipeline that is capable of real-time reconstruction of k-clocked SS-OCT data at MHz A-scan rates. In order to reconstruct the continuously acquired data in real-time, we implemented the OCT reconstruction in C++ and NVIDIA’s Compute Unified Device Architecture (CUDA) API (CUDA Toolkit V10.1.243). The entire project is compiled using NVIDIA’s CUDA compiler (NVCC), a proprietary cross-platform compiler based on LLVM that invokes a host C++ compiler for the CPU-side code.

As mentioned above, we designed the pipeline such that it can perform spectral splitting to increase the effective A-scan rate; the processing pipeline applies this technique whenever the imaging mode requires it.

The pipeline was designed so that different imaging modes can be executed via a collection of command-line flags and by parsing meta parameters from JSON configuration files. This means that one only has to change the parameters in the JSON configuration file and re-run the application to change the imaging mode or the imaging and reconstruction parameters. We run the engine in a triggered data acquisition mode (AlazarTech’s no pre-trigger (NPT) dual-port autoDMA mode), which continuously copies data buffers from the ring buffer system of the DAQ via autoDMA directly into host memory (RAM) while the next buffer is already being acquired.
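As a sketch of this kind of configuration parsing (field names and structure are illustrative assumptions, not the system’s actual schema; the widely used nlohmann/json library is assumed for parsing):

```cpp
#include <fstream>
#include <string>
#include <nlohmann/json.hpp>  // assumption: nlohmann/json is available

// Hypothetical description of an imaging mode; names are illustrative only.
struct ImagingMode {
    std::string scanPattern;   // e.g. "raster", "cross", "spiral"
    int  sweepRateKHz;         // 100, 600, or 1200
    bool spectralSplitting;    // double the effective A-scan rate
    int  aScansPerBScan;
    int  bScansPerVolume;
};

ImagingMode loadMode(const std::string& path)
{
    std::ifstream in(path);
    nlohmann::json j = nlohmann::json::parse(in);
    ImagingMode m;
    m.scanPattern       = j.at("scanPattern").get<std::string>();
    m.sweepRateKHz      = j.at("sweepRateKHz").get<int>();
    m.spectralSplitting = j.value("spectralSplitting", false);
    m.aScansPerBScan    = j.at("aScansPerBScan").get<int>();
    m.bScansPerVolume   = j.at("bScansPerVolume").get<int>();
    return m;
}
```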

Optimal utilization of GPU resources also includes optimizing the memory transfer operations (memcopy) between the CPU (host) and the GPU (device). After the first buffer has been processed, the second one is copied onto the GPU. Once this loop is running, operations are further parallelized via streams, allowing already processed data to be copied from the GPU and the next raw buffer to be copied onto the GPU while the current data buffer is processed by the compute kernels. Processing of the current buffer is synchronized with the buffer acquisition. This implementation leads to an offset between data transfer and processing and could be further optimized; however, it currently introduces only a marginal delay of at most one inter-buffer time interval in the pipeline. If these operations were carried out serially, the GPU compute units would often be idle, which should be avoided in order to optimize processing and data throughput.
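The copy/compute overlap described above is the standard CUDA double-buffering pattern; the sketch below illustrates it with two streams and a placeholder kernel (the names `reconstructKernel`, `hostRaw`, and `hostOut` are assumptions standing in for the actual acquisition buffers and OCT kernels, not the authors’ code).

```cpp
#include <cstdint>
#include <cuda_runtime.h>

// Placeholder for the OCT reconstruction kernels.
__global__ void reconstructKernel(const uint16_t* raw, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = static_cast<float>(raw[i]);  // dummy processing
}

// Double-buffered copy/compute overlap across two CUDA streams.
// Note: host buffers must be page-locked (cudaHostAlloc) for the
// asynchronous copies to actually overlap with kernel execution.
void processBuffers(uint16_t** hostRaw, float** hostOut, int numBuffers, int n)
{
    cudaStream_t stream[2];
    uint16_t*    dRaw[2];
    float*       dOut[2];
    for (int s = 0; s < 2; ++s) {
        cudaStreamCreate(&stream[s]);
        cudaMalloc(&dRaw[s], n * sizeof(uint16_t));
        cudaMalloc(&dOut[s], n * sizeof(float));
    }
    for (int b = 0; b < numBuffers; ++b) {
        int s = b % 2;  // alternate streams: copy of buffer b+1 overlaps compute of buffer b
        cudaMemcpyAsync(dRaw[s], hostRaw[b], n * sizeof(uint16_t),
                        cudaMemcpyHostToDevice, stream[s]);
        reconstructKernel<<<(n + 255) / 256, 256, 0, stream[s]>>>(dRaw[s], dOut[s], n);
        cudaMemcpyAsync(hostOut[b], dOut[s], n * sizeof(float),
                        cudaMemcpyDeviceToHost, stream[s]);
    }
    cudaDeviceSynchronize();
    for (int s = 0; s < 2; ++s) {
        cudaFree(dRaw[s]); cudaFree(dOut[s]); cudaStreamDestroy(stream[s]);
    }
}
```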

For the Fast-Fourier Transform (FFT), we utilized the cuFFT library (NVIDIA), provided with CUDA Toolkit.

We implemented our processing pipeline in such a way that it utilizes concurrent streams to maximize computing efficiency, reduces blocking of the CPU, and parallelizes the data transfer processes in time.

We implemented as many operations as possible in-place, meaning within the same allocated physical memory on the GPUs’ VRAM. This allows for a significant reduction of GPU idle time. Figure 4 displays a flow chart with the various (optional) steps and workflows of the processing pipeline.

Fig. 4. Schematic of the processing pipeline. (a) Raw buffers are treated according to the spectral splitting demands, via different strides of the pointer through the data buffer. Raw buffers are acquired by the DAQ and then directly copied onto the GPU for processing. (b) SS-OCT signal processing step: we perform as many operations as possible inplace, utilizing three different data buffers during an entire signal reconstruction run-through. (c) Remapping of every A-scan in a buffer onto the Cartesian volume grid. (d) Volume buffer, containing the entire reconstructed volume, ready to be processed by CAMPVis.

Scanning in spiral-shaped patterns, where every A-scan is acquired at the same distance along the spiral, cannot be directly mapped onto a densely sampled equidistant Cartesian grid: the actual physical A-scan locations simply do not coincide spatially with an isotropically sampled raster scan pattern. We therefore map every A-scan to an equidistantly spaced grid with a sampling density of half the spot size (FWHM), according to the cardinal theorem of interpolation. Even though considering several neighboring A-scans to interpolate the final pixel value would give the most accurate representation of the originally sampled signals, we implemented a nearest-neighbor mapping to minimize processing time, using the following approach:

Let R(x,y) be an isotropically sampled grid where x,y $\in$ X,Y are the respective numbers of lateral samples of the volume. Also, let Θ be the set of A-scans of one entire volume, acquired in a spiral scanning fashion (A-scans per volume). To remap all A-scans, we calculate, for every point R(x,y), i.e. every final A-scan position in the Cartesian grid, the Cartesian distance to every point in our set of A-scans Θ. We then map to every point in the remapping grid R(x,y) the closest (x,y)-tuple, i.e. the nearest-neighbor A-scan in Θ, based on this calculation. The remapping table T(n) then contains up to three possible (x,y)-pairs in R(x,y), such that every A-scan a $\in$ Θ is mapped at least once and at most three times:

For every A-scan n $\in$ Θ we create the remapped look-up tuples, stored in R(x,y), such that:

$$T(n) = \min_{\Delta(x,y)} \sqrt{\Theta_n(x,y)^2 - R(x,y)^2}\,,\;\forall n \in \Theta$$

As mentioned before, T(n) can then contain up to three (x,y)-pairs to which every nth A-scan gets mapped. The remapping files are generated during scan pattern calculation in MATLAB and cannot be changed on the fly.
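A minimal sketch of building such a nearest-neighbor remapping table is given below; it is illustrative C++ rather than the authors’ MATLAB implementation, and it assumes the plain Euclidean distance between lateral positions as the metric.

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Point { float x, y; };  // lateral position of one A-scan on the spiral

// For every Cartesian grid cell, store the index of the closest spiral A-scan.
// One A-scan may end up serving several neighboring grid cells.
std::vector<int> buildRemapTable(const std::vector<Point>& spiralPositions,
                                 int gridX, int gridY, float pitch)
{
    std::vector<int> table(static_cast<std::size_t>(gridX) * gridY, -1);
    for (int gy = 0; gy < gridY; ++gy) {
        for (int gx = 0; gx < gridX; ++gx) {
            // Grid cell center, with the grid centered on the optical axis.
            const float px = (gx - 0.5f * (gridX - 1)) * pitch;
            const float py = (gy - 0.5f * (gridY - 1)) * pitch;
            float best    = std::numeric_limits<float>::max();
            int   bestIdx = -1;
            for (std::size_t n = 0; n < spiralPositions.size(); ++n) {
                const float dx = spiralPositions[n].x - px;
                const float dy = spiralPositions[n].y - py;
                const float d2 = dx * dx + dy * dy;  // squared Euclidean distance
                if (d2 < best) { best = d2; bestIdx = static_cast<int>(n); }
            }
            table[static_cast<std::size_t>(gy) * gridX + gx] = bestIdx;
        }
    }
    return table;  // A-scan index per grid cell, precomputed offline
}
```

Because the table depends only on the scan pattern and grid geometry, it can be computed once per imaging mode (as done here in MATLAB) and applied at run time as a simple gather operation on the GPU.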

2.4 CAMPVis

To display the reconstructed data, we use CAMPVis, a versatile rendering engine, capable of displaying 3D volumes of medical data in real-time [26]. It was originally developed to display ultrasound data in real-time and was adapted to work with our 4D-OCT engine.

CAMPVis is designed around the principle of defining processing pipelines based on a data container, a data structure from which all processors read data and to which they write new data. A rendering pipeline is composed of several different processors, each of which performs a specific data processing or rendering task such as filtering, data precomputation, or volume rendering. To communicate data from the OCT reconstruction algorithms to the display software, the reconstruction portion of our software pipeline continuously writes remapped data buffers into the RCP buffer RV1. This buffer is made available to the visualization software via an interprocess communication (IPC) interface, leveraging the CUDA IPC API and DeviceToDevice memory transfers to avoid costly memory transfers through the host’s PCI bus. CAMPVis continuously reads the reconstructed data from the shared buffer and passes it sequentially through all processors within the current pipeline. CAMPVis utilizes GPU2 to run all rendering-related calculations. The shared memory buffer is guarded by an IPC mutex to prevent parallel reading and writing of the buffer by the two applications; this is crucial to guarantee data consistency and to avoid showing intermediate, half-updated buffers. CAMPVis has a multitude of display options, but for our live 4D-OCT visualization we usually display the same canvas layout (see Fig. 5). All figures displaying colored 4D rendered scenes are generated by a processing pipeline that generates a false color overlay: it roughly segments the centroid voxel of every A-scan in an entire volume and fits a hyperbola between them, which functions as a color-code threshold. For better depth perception, anterior voxels are colored on a red color scale, while more posterior voxels are colored on a blue scale. The 4D-OCT is rendered stereoscopically and displayed on a 3D screen that works with polarization glasses.
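The buffer sharing between the two processes relies on the CUDA IPC mechanism; a minimal sketch of exporting and importing a device allocation is shown below. The handle transport via a file and the function names are illustrative assumptions, and the mutexing described above is omitted.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Producer (reconstruction pipeline): allocate the volume buffer and export a handle.
void exportVolumeBuffer(void** dVolume, size_t bytes)
{
    cudaMalloc(dVolume, bytes);
    cudaIpcMemHandle_t handle;
    cudaIpcGetMemHandle(&handle, *dVolume);
    // Hand the opaque handle to the renderer; any IPC channel works, a file is used here.
    FILE* f = std::fopen("volume_handle.bin", "wb");
    std::fwrite(&handle, sizeof(handle), 1, f);
    std::fclose(f);
}

// Consumer (rendering process): open the same device allocation without a host round-trip.
void* importVolumeBuffer()
{
    cudaIpcMemHandle_t handle;
    FILE* f = std::fopen("volume_handle.bin", "rb");
    std::fread(&handle, sizeof(handle), 1, f);
    std::fclose(f);
    void* dVolume = nullptr;
    cudaIpcOpenMemHandle(&dVolume, handle, cudaIpcMemLazyEnablePeerAccess);
    return dVolume;  // valid in this process; release with cudaIpcCloseMemHandle
}
```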

Fig. 5. Layout of live 4D-OCT display rendering canvas. (a) 3D-rendered entire OCT volume. The widget is interactive and allows the user to rotate, move, zoom in and out, or change the origin in space of the rendered volume via mouse and keyboard commands. (b) cross-sectional B-scan (optical x-direction) and (c) the orthogonal direction cross-sectional B-scan (optical y-direction). (d) enface projection of the entire volume. The dynamic display ranges of the 3D-rendered volume (a), the cross-sectional B-scans ((b) and (c)) and of the enface (d) can be fine-tuned individually via the respective transfer functions to optimize the visual impression of every image. The screen capture video of this figure can be viewed in the supplementary materials (Visualization 1).

The pipeline was tested with all imaging modes and was able to visualize the entire spectrum of 4D imaging modes in real-time without perceivable latency, which gives a very fluid impression of the displayed scene. We measured the average latency for an exemplary 4D mode with a 10vol/s display rate to be 209ms (averaged over 18 measurements). Latency was evaluated frame by frame from video recordings. To demonstrate a change in the surgical FOV, a needle was mounted to a robotic arm (Meca500, Mecademic Robotics, Inc., Montreal, Quebec, Canada), which was then moved out of the FOV with maximum acceleration. The videos were recorded at 240FPS with a GoPro 8 camera (GoPro Inc., San Mateo, CA, USA). To detect a movement with this measurement technique, at least one entire volume acquisition time slot has to elapse; additionally, as a worst-case estimate, depending on where in the displayed scene the change takes place, a few more buffers up to one additional volume acquisition time may elapse. CAMPVis has been shown to take up to 25ms for all rendering-related computations for similar volume dimensions, which provides a suitable estimate for single-volume rendering times [27]. Since we always render every frame in stereo, i.e. twice, this adds 50ms of latency. Memory copy operations from the reconstructed CUDA buffer to an OpenGL buffer and the display latency of the screen introduce additional overall latency. This leads to a theoretical overall latency of 160ms-260ms, which is in good agreement with the range of latencies measured in our experiments. Because this delay is inherent to the acquisition itself, lower latencies cannot be achieved. In previous publications on 4D-OCT systems, only the accumulated processing times of reconstruction and rendering were reported as latency; these values are therefore theoretical and neglect additional sources of latency. It is not completely obvious whether our way of measuring and reporting latency is best suited for overall latency assessment compared to prior art, but we regard it as the most realistic reporting. CAMPVis updates the displayed scene dynamically, depending on the rate of change due to manipulation of the volume in space by the user. This enables a smooth perception of the volume, even if one moves or rotates it within a time frame of less than a volume acquisition. Users can manipulate the volume in the 3D canvas in real-time.
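A rough latency budget consistent with these numbers, assuming on the order of 10ms for the remaining memory copies and display latency (our estimate), reads

$$ L \approx T_{\mathrm{vol}} + T_{\mathrm{scene}} + 2\,T_{\mathrm{render}} + T_{\mathrm{copy+display}} \approx 100\,\mathrm{ms} + (0\ldots100)\,\mathrm{ms} + 50\,\mathrm{ms} + {\sim}10\,\mathrm{ms} \approx 160\ldots260\,\mathrm{ms}, $$

where T_vol is the volume acquisition time at 10vol/s, T_scene the worst-case additional wait depending on where in the scene the change occurs, and T_render the single-volume rendering time.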

We further developed a data export pipeline with which we can export HDF5-encapsulated continuous volume series. Users can toggle between exporting every volume or only every nth volume, and the number of B-scans written to disk can additionally be reduced. Data is copied into the host’s memory via a ring-buffer system and written to disk. If the buffers exceed the machine’s memory, frames are dropped until the currently buffered data has been written to disk; in that case, the number of stored volumes per unit time decreases. The storage of these volume series enables offline data analysis and the development of image processing algorithms.
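The following is a minimal sketch of such an export step, assuming the HDF5 C++ API; the dataset naming and layout (one dataset per volume) are illustrative assumptions, not the pipeline’s actual file format.

```cpp
#include <H5Cpp.h>
#include <cstdint>
#include <string>
#include <vector>

// Write one reconstructed volume (nz x ny x nx voxels) as a separate dataset.
void writeVolume(H5::H5File& file, const std::vector<uint8_t>& volume,
                 hsize_t nz, hsize_t ny, hsize_t nx, int index)
{
    const hsize_t dims[3] = {nz, ny, nx};
    H5::DataSpace space(3, dims);
    const std::string name = "volume_" + std::to_string(index);
    H5::DataSet ds = file.createDataSet(name, H5::PredType::NATIVE_UINT8, space);
    ds.write(volume.data(), H5::PredType::NATIVE_UINT8);
}

// Usage: H5::H5File file("series.h5", H5F_ACC_TRUNC); writeVolume(file, vol, nz, ny, nx, 0);
```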

3. Results

To describe our system’s performance and to compare it to the theoretical specification values, we performed several characterization measurements of our OCT engine.

For measuring the lateral resolution, we imaged a USAF resolution test target using a raster capture pattern at 100kHz. After optimizing the OCT focus settings and calculating a maximum intensity enface projection, the lateral resolution was measured to be 15.6µm in air (FWHM). The lateral resolution of an OCT setup depends only on the central wavelength and the numerical aperture of the sample arm optics. Since the difference in central wavelength between the three sweep repetition rates of the VCSEL is negligible, we can assume that the lateral resolution is similar for all laser modes.

To evaluate the axial resolution, sensitivity, and roll-off of our prototype, we placed a neutral density filter in the sample arm and a retroreflector underneath the microscope. The neutral density filter was chosen carefully in order to prevent the detector from saturating. After optimizing OCT focus and polarization settings, we captured an A-scan and measured axial resolutions at FWHM of 6.6µm, 10.8µm, and 12µm (for both up- and down-sweep) for 100kHz, 600kHz, and 1.2MHz respectively. Even though imaging mode 3 has a similar bandwidth to imaging mode 1, we end up with a lower axial resolution due to limitations in the k-clock electronics and consequently sparser sampling. In addition, the limited digital resolution could result in an incorrect pixel pitch. The two partial sweeps of mode 3 differ strongly in their sampling values and duty cycles; we therefore had to crop each partial sweep, and due to the resulting loss in bandwidth, the actual resolution is considerably worse than the theoretical one. Because the OCT add-on module is mounted to the surgical microscope, which is held by a large arm, we were unable to measure sensitivity with a static beam due to mechanical vibrations; these vibrations lead to large fluctuations of the intensity coupled back into the single-mode fiber. We therefore acquired a raster capture scan in the configuration described above and selected the A-scan with the highest SNR for determining the system’s sensitivity. This A-scan should correspond to the measurement where the lateral beam position is centrally aligned to the coupler and hence provides the highest back coupling into the single-mode fiber. The maximum sensitivity of our prototype was measured to be 106.4dB, 98.5dB, 95.3dB (down-sweep), and 95.2dB (up-sweep), each at 4mW on the sample, for 100kHz, 600kHz, and 600 × 2kHz respectively.
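In this scheme, the sensitivity follows, as is standard practice, from the measured peak signal-to-noise ratio plus the double-pass attenuation of the neutral density filter (our notation; A_ND denotes the single-pass attenuation in dB):

$$ S = 10\log_{10}\!\left(\frac{I_{\mathrm{peak}}^{2}}{\sigma_{\mathrm{noise}}^{2}}\right) + 2\,A_{\mathrm{ND}}. $$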

To estimate the imaging depth for the three sweep repetition rates, we measured the pixel pitch of each mode by moving a sample over a defined physical distance in the z-direction with a linear stage with a micrometer tuning screw. The ratio between physical movement and pixel distance in the OCT B-scans gives the pixel size in the z-direction of our system. The following imaging depths were obtained: 29.7mm, 5.1mm, and 3.1mm in tissue (n = 1.36) for the three sweep modes respectively.
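In our notation, this calibration yields the axial pixel pitch and, together with the number of reconstructed axial pixels, the imaging depth:

$$ p_z = \frac{\Delta z_{\mathrm{stage}}}{\Delta p}, \qquad z_{\max} = N_z\, p_z, $$

where Δz_stage is the physical stage translation, Δp the corresponding displacement in pixels, and N_z the number of axial pixels of the reconstruction.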

3.1 GPU processing cycles

Since the data rate and data throughput vary with the imaging mode, i.e. the current laser setting, the duty cycle of the galvos, and the buffer size, we carefully monitored and benchmarked the GPU utilization and processing times. The results of these benchmark measurements can be seen in Fig. 6, which provides a good qualitative overview of how GPU utilization varies with different imaging modes.

Fig. 6. Screenshots of the Nvidia Nsight Systems Profiler, displaying the compute times of a full OCT buffer reconstruction cycle for five selected imaging modes. All displayed timelines ((i)-(v)) were taken at arbitrary points in time during reconstruction. The green continuous bars display memcopy processes, and the blue continuous ones display compute times of the kernels. The two lines below display the individual kernel execution times and individual memcopy processes. (i) Raster scan, (ii) cross scan, (iii) spiral scan with an effective A-scan rate of 600kHz, (iv) spiral scan with an effective A-scan rate of 1.2MHz, utilizing spectral splitting on the 600kHz mode, (v) spiral scan with an effective A-scan rate of 1.2MHz. For better comparison, all time scales are displayed identically, albeit with an offset from mode to mode, even though the selected modes are not meant to be directly compared to each other.

The exact execution times of the individual kernels, timing of the memcopy operations (onto the GPU for processing), as well as the buffer sizes and shapes of every displayed scan mode, can be found in Table 2.

Table 2. GPU compute times for performance evaluation of the selected imaging modes. Note that all kernels were benchmarked using the same number of blocks and threads per block. The FFT-Kernels are internally treated as two different proprietary kernel invocations during the cuFFT and therefore both average execution times are displayed separately.

It is worth mentioning that these benchmarks were taken to emphasize how the duty cycles of the GPU processing steps of our pipeline depend heavily on the size of the data buffers of a specific scan mode. Each scan mode could be optimized individually with respect to compute time by altering the number of blocks and threads per block; this, however, requires meticulous fine-tuning and benchmarking for every combination of threads and blocks. Since compute times were not a limiting factor, we decided to compare the kernels with a common number of threads per block, chosen as 192. As can be seen in Fig. 6, the compute time of our OCT signal reconstruction pipeline is currently not the bottleneck as far as the speed of our visualization pipeline is concerned. The exact compute times are listed in Table 2. CAMPVis runs as a separately launched application, in parallel to the OCT signal processing pipeline. Both applications solely share a volume memory buffer, whilst otherwise running independently of each other, which makes it impossible to benchmark the compute times of both applications simultaneously. Additionally, since CAMPVis relies largely on OpenGL functionality and the compute requirements in graphics pipelines are highly dynamic, we could not measure compute times for CAMPVis individually with the Nsight Systems Profiler.

3.2 B-scan and volume capture imaging

For evaluating our imaging modes for surgical use, we performed various wet labs with the help of trained ophthalmic surgeons. The surgeons were asked to perform the procedures as they normally would in a clinical setting and were allowed to choose between using the microscope view through the oculars or on the 3D 4K screen of the ARTEVO 800. At the same time, surgeons were shown the OCT visualization on the ARTEVO 800’s screen. Although staying within the FOV was challenging, surgeons could even manipulate the displayed scene and perform mock operations relying solely on OCT data. All surgical maneuvers were performed on ex vivo bovine and porcine eyes with an optical power of 4.0mW. This power would also be considered safe for imaging the human eye in vivo, as long as the laser power and the motion of the scanners are carefully monitored and the laser is quickly shut off in case of an error. For posterior segment maneuvers, the anterior segment was separated from the posterior segment prior to conducting the retinal procedure. This was necessary because the anatomical dimensions of bovine eyes differ from those of human eyes and our optics and engine cannot compensate for that; in addition, the signal quality benefited.

We utilized B-scan line and raster capture scan patterns to capture static scenes before, during, and after the operation. These data were reconstructed offline afterwards with optimized reconstruction and display parameters. Figure 7 shows a B-scan of a cow’s retina which was acquired at 100kHz and covers a lateral width of 11.8mm. The image was cropped in depth to a region of interest (ROI) of 8.7mm in tissue. In the supplementary materials (Visualization 2), a video of a rendering of a raster capture scan showing the anterior segment of a porcine eye can be found.

Fig. 7. B-scan of cow's retina covering an area of 8.7mmx11.8 mm (in z-, x-direction). One can clearly see a detached membrane (indicated by blue arrows) towards the anterior side of the scan and a large retinal vessel (indicated by yellow arrows).

3.3 Biometry scans

To compare our full eye length scan to a biometry scan of a commercial device, we imaged the calibration test eye included with the IOL Master 500 (Carl Zeiss Meditec AG, Jena, Germany) and compared the resulting eye length to a biometry scan of our prototype. The B-scan of the test eye taken with our prototype is shown alongside the analyzed A-scan in Fig. 8. With the IOL Master 500, we measured an eye length of 20.81mm, which corresponds well to the depth specified by the manufacturer of the test eye of 20.82 ± 0.05mm. With our prototype, we determined an eye length of 20.96mm. This deviation of approximately 0.14mm could be caused by an imprecise estimation of the depth pixel pitch of our imaging system, inaccurate positioning of the B-scan, or the fact that we used a different group refractive index than the manufacturer of the test eye.
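The last point reflects the conversion from the measured optical path length (OPL) to the geometric eye length used in such biometry scans (our notation):

$$ L_{\mathrm{eye}} = \frac{\mathrm{OPL}}{n_g}, $$

where n_g is the assumed group refractive index of the eye; a relative error in n_g of only about 0.7% already accounts for the observed 0.14mm deviation at this eye length.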

Fig. 8. B-scan of a phantom eye mimicking the length of an eye. The blue dashed horizontal lines indicate the positions of cornea and retina and the blue solid line the A-scan that was taken for analyzing the eye length. The intensity profile of that A-scan along the depth is shown on the right. The other white horizontal lines in the B-scan are imaging artifacts, originating from the DAQ and optical system.

3.4 4D imaging

For 4D imaging we evaluated several imaging modes which differ mainly by their volume rate and FOV. Imaging modes designed for retina imaging are characterized by smaller FOVs and higher volume rates, whereas imaging modes for the anterior segment typically require larger FOVs in order to cover the area of interest sufficiently.

In Fig. 9, a series of frames from a 4D-OCT video can be seen. A surgeon is moving a 23G forceps above the retina of a cow’s eye; following the picture series from (a) to (d), the tool gets closer to the retina. This imaging mode covers a FOV of 4.5mm and runs at 10vol/s. The spectral splitting approach was used to effectively double the A-scan rate from 600kHz to 1.2MHz.

Fig. 9. 23G surgical forceps moving above the retina of a cow’s eye. Looking at the images from (a) to (d), the surgeon gets closer to the retina. The video can be viewed in the supplementary materials (Visualization 3).

A side-by-side display of the microscopic enface view and the 3D live rendered OCT volume is illustrated in Fig. 10. These videos were recorded in parallel and aligned by analyzing their individual time stamps. For the OCT acquisition, the same imaging mode was used as for Fig. 9. The image series presents a retinal membrane peeling on a cow’s retina. The surgeon grasps the membrane (a) and then starts the pulling process ((b)-(d)). In the OCT images, it can clearly be observed how the retinal layers are lifted ((c)-(d)). Both visualizations illustrate the tissue stress, which results in a deformation of the retinal vessels ((b)-(d)). A video of the scene displayed in Fig. 10 can be viewed in the supplementary materials (Visualization 1). As mentioned previously, our prototype can also be used for anterior segment imaging; a 4D video showing a FOV of 9.7mm at a 3Hz volume rate of a knife moving above the cornea of a porcine eye can be found in the supplementary material (Visualization 4).

Fig. 10. Microscopic enface view (left) aside 4D-OCT volumes (right) of membrane peeling procedure in ex vivo cow's eyes (see supplementary materials Visualization 1).

For 4D-OCT live imaging, we additionally utilized the fastest sweep repetition rate of the laser, 1.2MHz (see supplementary materials Visualization 5 and Visualization 6). Since this imaging mode is limited to an imaging depth of 3.1mm, it is mainly suitable for retinal imaging. In Fig. 11, a series of 4D-OCT visualizations at 1.2MHz can be seen. A cow’s retina was imaged at 10 volumes per second with an imaging mode that covers a FOV of 4.0mm. Instead of a depth colorization, this video is rendered in grayscale. A nucleus chopper is moved above the retina (see supplementary materials Visualization 6).

Fig. 11. 4D-OCT volume series of surgical tool moving above the retina of a cow’s eye. (d) Rendering is 180° rotated compared to (a) – (c).

4. Conclusion and discussion

We presented in this manuscript a versatile 4D-OCT engine, which we interfaced with a state-of-the-art ophthalmic surgical microscope. It was designed with a multitude of switchable imaging modes for a great variety of ophthalmic surgical procedures. It can toggle between biometry scans, intraoperative volumetric documentation and diagnostic scans, and 4D-OCT imaging for the live visualization of surgical procedures. This was realized by tuning the VCSEL’s sweep repetition rate and spectral sweep range as well as adjusting the galvo driving and data acquisition. We evaluated the imaging modes in various mock surgical maneuvers on ex vivo bovine and porcine eye models.

With our prototype, intraoperative live B-scan imaging, cross-sectional B-scan imaging, as well as entire raster capture scans are no longer limited to a shallow imaging depth but can capture up to the entire length of a human eye instantaneously.

Intraoperative biometry scans can potentially be used for intrasurgical intraocular lens power estimation, which is of particular interest in special situations, such as eyes with a history of prior myopic surgery or dense cataracts. For these eyes, the calculation based on preoperative scans lacks accuracy or is not possible at all [28,29]. Intraoperative raster capture scans can not only be used for documentation purposes, but can additionally serve as input for more sophisticated image processing algorithms, e.g., segmentation, OCT angiography, or thickness maps, and can support surgeons in deciding whether a procedure is completed. For epiretinal membrane peelings, it is for instance of interest to identify residual inner limiting membrane in order to prevent recurrence [30]. Checking for the presence of residual membrane at the end of the surgery might lower recurrence rates.

Compared to live B-scan imaging, 4D-OCT imaging stands out by providing spatially resolved, densely sampled volumetric data in real-time. It holds large potential as a visualization tool for anterior and posterior segment surgery. 4D imaging modes for the two segments of the eye differ mainly by their FOV and volume rate: retinal procedures usually target very small regions and can therefore be performed with fairly small FOVs, whereas anterior segment surgeries often require larger FOVs that ideally cover the entire pupil aperture or even the full anterior chamber for better orientation.

Procedures that will most likely benefit from 4D-OCT visualization are especially depth-critical surgical interventions, such as corneal transplantations, subretinal injections or retinal membrane peelings.

One inherent limitation of this technology is that one always faces a trade-off between volume rate, FOV, imaging depth, and axial resolution. The achievable FOV and volume rate of our prototype are currently limited by the k-clock electronics and by the PCIe interface for data transfer of the digitizer. Therefore, solely increasing the sweep repetition rate of our VCSEL would not allow for gaining FOV or volume rate. Future generations of PCIe interfaces and DAQs will offer faster data transfer rates. Our benchmark tests indicate that the computational resources of the employed GPUs would still allow for considerably higher data rates, and since their computational power is still following Moore’s law, computation is not expected to become a bottleneck going forward. Depending on the surgical scenario, we chose higher volume rates for retinal procedures or larger FOVs for anterior procedures. Smaller FOVs come with the disadvantage that the displayed scene is easily misaligned with the location of surgery due to displacements of the imaging area. As mentioned above, one can trade off axial resolution for a higher lateral sampling rate, as we already did by spectrally splitting each sweep to artificially double the effective A-scan rate. In our experience, a loss of axial resolution is tolerable for 3D live rendered images, as they are typically displayed in a manner that does not convey high axial resolution content anyway. One could therefore imagine reducing the axial resolution further for 4D imaging modes. Due to the roughly Gaussian spectral shape, splitting the sweeps into more than two sub-sweeps is, however, undesirable, because it would result in significantly different energies across the sub-sweeps and therefore image artifacts. But one could design native laser sweep modes with significantly lower spectral bandwidth and higher sweep repetition rates to expand the FOV of 4D-OCT. In the videos presented in the supplementary material, motion artifacts can be identified. These artifacts appear whenever the performed movement is faster than the volume rate can capture; hence, slower volume rates increase this effect. In our experiments, trained ophthalmic surgeons did, however, not complain about these artifacts. Nevertheless, improvements in volume rate are still desirable for an even smoother appearance.

Furthermore, depending on the imaging mode, we still have a limited imaging depth, because the laser’s k-clock electronics limit our maximum sampling rate.

Since ophthalmic surgeries differ vastly in terms of the FOV and imaging depth required to provide surgeons with optimal visual information, each intervention needs customized visualizations. Even the workflow of one specific ophthalmic surgery covers a variety of different steps, each of which comes with its unique difficulties regarding optimal visualization. Subretinal injections, for instance, require the possibility of switching to a live B-scan or to the microscope’s video, depending on the current step and the position of the needle relative to the retina. It remains to be investigated which form of visualization is most advantageous for which kind, or part, of surgery.

In this work, only animal models were used for evaluating surgical procedures. Even though ex vivo porcine and bovine eye models can serve as easily accessible phantoms, they do not show the same anatomy as human eyes. In addition, many ophthalmic diseases are age-related, whereas our model eyes originate from adolescent animals; besides the different anatomical conditions, they therefore exhibit distinct haptic characteristics, such as softer lenses and more fragile capsular bags. Regarding image quality, we expect an increased retinal signal in an in vivo setting, because the retina deteriorates quickly after death, which leads to a reduced retinal signal and especially decreased contrast between retinal layers. This results in significantly reduced perceived image quality compared to in vivo human retina tomograms.

A challenge for intraoperative OCT in general is the complexity of operation. It is very difficult for the surgeon to pay attention to aligning the OCT scan and continuously optimizing its image quality while performing micro-surgery. Automation of intraoperative OCT systems is therefore going to have a huge impact on their clinical success. This includes lateral and axial positional tracking and optimization of focus and polarization. Additionally, an intuitive way of setting the camera perspective has to be implemented such that it suits the operator’s preference for the current task.

In conclusion, we presented the most flexible MHz OCT in an ophthalmic surgical microscope. By creating application-specific imaging modes that include B-scans, raster capture scans, and 4D live scans we can cover various ophthalmic imaging needs within one engine. To address all imaging scenarios, the engine configurations can be matched to the scanning requirements by tuning the MEMS-VCSEL module. We see this technology as the future visualization modality for ophthalmic surgical procedures and fully believe that surgeons will at some point no longer solely rely on the traditional surgical microscope view. However, as the generated data rates are almost incomprehensible to humans, we believe 4D-OCT is going to reach its true pinnacle when combined with image analysis algorithms and surgical robots. Yet in current clinical settings, its success is going to depend on its usability and further technological advancements to expand its FOV.

Funding

Carl Zeiss Meditec AG.

Acknowledgements

We would like to acknowledge S. Duca, A. Eslami, O. Findl, L. Hattenbach, C. Hauger, A. Hoegele, M. Kendrisic, T. Lang, J.-M. Masch, J. Nienhaus, T. Schlegl, S. Schulz, M. Sommersperger, J. Steffen, S. Pfeiffer, and A. Pollreisz for technical support, fruitful discussions, and helpful input at various stages of the project.

Disclosures

WD and RAL: Carl Zeiss Meditec (C, F)

NHD, HR, TS and BS: Carl Zeiss Meditec (E)

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. W. Drexler and J. G. Fujimoto, Optical Coherence Tomography: Technology and Applications (Springer, 2015).

2. M. Everett, S. Magazzeni, T. Schmoll, and M. Kempe, “Optical coherence tomography: from technology to applications in ophthalmology,” Translational Biophotonics 3(1), e202000012 (2020).

3. J. F. de Boer, R. A. Leitgeb, and M. Wojtkowski, “Twenty-five years of optical coherence tomography: the paradigm shift in sensitivity and speed provided by Fourier domain OCT [Invited],” Biomed. Opt. Express 8(7), 3248–3280 (2017).

4. R. K. Wang and L. An, “Multifunctional imaging of human retina and choroid with 1050-nm spectral domain optical coherence tomography at 92-kHz line scan rate,” J. Biomed. Opt. 16(5), 050503 (2011).

5. B. Potsaid, I. Gorczynska, V. J. Srinivasan, Y. Chen, J. Jiang, A. Cable, and J. G. Fujimoto, “Ultrahigh speed spectral / Fourier domain OCT ophthalmic imaging at 70,000 to 312,500 axial scans per second,” Opt. Express 16(19), 15149 (2008).

6. R. Leitgeb, F. Placzek, E. Rank, L. Krainz, R. Haindl, Q. Li, M. Liu, A. Unterhuber, T. Schmoll, and W. Drexler, “Enhanced medical diagnosis for dOCTors: a perspective of optical coherence tomography,” J. Biomed. Opt. 26(10), 100601 (2021).

7. O. Kocaoglu, T. Turner, Z. Liu, and D. Miller, “Adaptive optics optical coherence tomography at 1 MHz,” Biomed. Opt. Express 5(12), 4186 (2014).

8. S. H. Yun, G. J. Tearney, J. F. de Boer, and B. E. Bouma, “Motion artifacts in optical coherence tomography with frequency-domain ranging,” Opt. Express 12(13), 2977 (2004).

9. F. K. Chen, R. D. Viljoen, and D. M. Bukowska, “Classification of image artefacts in optical coherence tomography angiography of the choroid in macular diseases,” Clin. Exp. Ophthalmol. 44(5), 388–399 (2016).

10. S. R. Chinn, E. A. Swanson, and J. G. Fujimoto, “Optical coherence tomography using a frequency-tunable optical source,” Opt. Lett. 22(5), 340 (1997).

11. J. P. Kolb, T. Klein, C. L. Kufner, W. Wieser, A. S. Neubauer, and R. Huber, “Ultra-widefield retinal MHz-OCT imaging with up to 100 degrees viewing angle,” Biomed. Opt. Express 6(5), 1534 (2015).

12. M. Niederleithner, A. Britten, L. Ginner, M. Salas, H. Ren, M. A. Arain, R. A. Williams, W. Drexler, R. A. Leitgeb, and T. Schmoll, “Clinical Megahertz-OCT for ophthalmic applications (Conference Presentation),” in Ophthalmic Technologies XXX (2020).

13. M. Niederleithner, L. De Sisternes, H. Stino, A. Sedova, T. Schlegl, H. Bagherinia, A. Britten, P. Matten, U. Schmidt-Erfurth, A. Pollreisz, W. Drexler, R. A. Leitgeb, and T. Schmoll, “Ultra-widefield OCT angiography,” IEEE Trans. Med. Imaging (2022), doi: 10.1109/TMI.2022.3222638.

14. J. P. Kolb, W. Draxinger, J. Klee, T. Pfeiffer, M. Eibl, T. Klein, W. Wieser, and R. Huber, “Live video rate volumetric OCT imaging of the retina with multi-MHz A-scan rates,” PLoS One 14(3), e0213144 (2019).

15. O. M. Carrasco-Zevallos, B. Keller, C. Viehland, L. Shen, G. Waterman, B. Todorich, C. Shieh, P. Hahn, S. Farsiu, A. N. Kuo, S. A. Toth, and J. A. Izatt, “Live volumetric (4D) visualization and guidance of in vivo human ophthalmic surgery with intraoperative optical coherence tomography,” Sci. Rep. 6(1), 31689 (2016).

16. I. Grulkowski, J. J. Liu, B. Potsaid, V. Jayaraman, C. D. Lu, J. Jiang, A. E. Cable, J. S. Duker, and J. G. Fujimoto, “Retinal, anterior segment and full eye imaging using ultrahigh speed swept source OCT with vertical-cavity surface emitting lasers,” Biomed. Opt. Express 3(11), 2733 (2012).

17. B. Potsaid, V. Jayaraman, J. Fujimoto, J. Jiang, P. J. Heim, and A. Cable, “MEMS tunable VCSEL light source for ultrahigh speed 60kHz - 1 MHz axial scan rate and long range centimeter class OCT imaging,” in SPIE BiOS, Vol. 8213 (SPIE, 2012).

18. V. Jayaraman, B. Potsaid, J. Jiang, G. Cole, M. Robertson, C. Burgner, D. John, I. Grulkowski, W. Choi, T. H. Tsai, J. Liu, B. Stein, S. Sanders, J. Fujimoto, and A. Cable, “High-speed ultra-broad tuning MEMS-VCSELs for imaging and spectroscopy,” in SPIE Microtechnologies, Vol. 8763 (SPIE, 2013).

19. R. Huber, M. Wojtkowski, and J. G. Fujimoto, “Fourier Domain Mode Locking (FDML): A new laser operating regime and applications for optical coherence tomography,” Opt. Express 14(8), 3225 (2006).

20. V. Jayaraman, G. D. Cole, M. Robertson, C. Burgner, D. John, A. Uddin, and A. Cable, “Rapidly swept, ultra-widely-tunable 1060 nm MEMS-VCSELs,” Electron. Lett. 48(21), 1331–1333 (2012).

21. D. D. John, C. B. Burgner, B. Potsaid, M. E. Robertson, B. K. Lee, E. J. Choi, A. E. Cable, J. G. Fujimoto, and V. Jayaraman, “Wideband Electrically-Pumped 1050 nm MEMS-Tunable VCSEL for Ophthalmic Imaging,” J. Lightwave Technol. 33(16), 3461–3468 (2015).

22. P. Qiao, K. T. Cook, K. Li, and C. J. Chang-Hasnain, “Wavelength-Swept VCSELs,” IEEE J. Sel. Top. Quantum Electron. 23(6), 1–16 (2017).

23. O. M. Carrasco-Zevallos, C. Viehland, B. Keller, R. P. McNabb, A. N. Kuo, and J. A. Izatt, “Constant linear velocity spiral scanning for near video rate 4D OCT ophthalmic and surgical imaging with isotropic transverse sampling,” Biomed. Opt. Express 9(10), 5052–5070 (2018).

24. L. Ginner, C. Blatter, D. Fechtig, T. Schmoll, M. Groeschl, and R. A. Leitgeb, “Wide-Field OCT Angiography at 400 KHz Utilizing Spectral Splitting,” Photonics 1(4), 369–379 (2014).

25. D. Theisen-Kunde, W. Draxinger, M. M. Bonsanto, P. Strenge, N. Detrez, R. Huber, and R. Brinkmann, “1.6 MHz FDML OCT for Intraoperative Imaging in Neurosurgery,” in European Conferences on Biomedical Optics 2021 (ECBO) (Optica Publishing Group, 2021).

26. J. Weiss, U. Eck, M. A. Nasseri, M. Maier, A. Eslami, and N. Navab, “Layer-Aware iOCT Volume Rendering for Retinal Surgery” (2019).

27. J. Weiss, M. Sommersperger, A. Nasseri, A. Eslami, U. Eck, and N. Navab, “Processing-Aware Real-Time Rendering for Optimized Tissue Visualization in Intraoperative 4D OCT,” Med. Image Comput. Comput. Assist. Interv. 12265, 267–276 (2020).

28. A. Akman, L. Asena, and S. G. Gungor, “Evaluation and comparison of the new swept source OCT-based IOLMaster 700 with the IOLMaster 500,” Br. J. Ophthalmol. 100(9), 1201–1205 (2016).

29. T. Ianchulev, K. J. Hoffer, S. H. Yoo, D. F. Chang, M. Breen, T. Padrick, and D. B. Tran, “Intraoperative refractive biometry for predicting intraocular lens power calculation after prior myopic refractive surgery,” Ophthalmology 121(1), 56–60 (2014).

30. S. A. Schechet, E. DeVience, and J. T. Thompson, “The effect of internal limiting membrane peeling on idiopathic epiretinal membrane surgery, with a review of the literature,” Retina 37(5), 873–880 (2017).

Supplementary Material (6)

Visualization 1: Side-by-side recording of surgical microscope (left) and 4D OCT rendering canvas (right) of mock surgical maneuvers on bovine retina.
Visualization 2: Recording of an OCT volume of the anterior segment of a porcine eye with an implanted intraocular lens (IOL).
Visualization 3: Recording of 4D OCT canvas of mock surgical maneuvers on bovine retina, acquired at 600kHz.
Visualization 4: Recording of 4D OCT canvas of mock maneuvers on cornea of porcine eye, acquired at 3 volumes per second.
Visualization 5: Recording of 4D OCT canvas (color) of mock maneuvers including tweezers on artificial retina, acquired at 1200kHz.
Visualization 6: Recording of 4D OCT canvas (black and white) of mock maneuvers including tweezers on artificial retina, acquired at 1200kHz.
