
Review of fluorescence guided surgery visualization and overlay techniques


Abstract

In fluorescence guided surgery, data visualization represents a critical step between signal capture and the display needed for clinical decisions informed by that signal. The diversity of methods for displaying surgical images is reviewed, with a particular focus on electronically detected and visualized signals, as required for near-infrared or low-concentration tracers. Factors driving these choices, such as human perception, the need for rapid decision making in a surgical environment, and biases induced by display choices, are outlined. Five practical suggestions are provided for optimal display orientation, color map, transparency/alpha function, dynamic range compression, and color perception checks.

© 2015 Optical Society of America

1. Introduction

The goal of this review is to propose a set of best practices for presenting fluorescence-derived information (emission intensity, concentration, ratiometric values, or receptor concentration) to the surgeon in an efficient manner that provides the full potential of fluorescence guided surgery (FGS) while minimizing interpretive error. Despite being a critical step in almost every FGS workflow, forming the link between the detection and characterization of biologically-relevant fluorescence markers and clinical action, data visualization in this context has not received adequate attention. Following a relevant description of the anatomical and neurophysiological basis for human vision and perceptual processing, four major topics are discussed: 1) color map representation of scalar measurements, 2) transparency functions for fusion of measurement data with white-light imaging, 3) real-time image compression for high dynamic range (12–22 bit) camera data, and 4) heads-up display visualization techniques. These form the basis for the conclusions which are highlighted thereafter. Finally, we introduce an open-source MATLAB-based graphical user interface (GUI) which can facilitate further investigation and implementation of these best practices. A rigorous and intentional focus on optimizing visualization methods will improve the interpretive value of FGS as a clinical procedure as it becomes more widely adopted.

Modern intraoperative technology allows a tremendous amount of information to be provided to the surgeon during a procedure—e.g., pre-operative MRI [1], CT [2], ultrasound [3] and PET [4] scans that are repeatedly viewed during the procedure and updated, intraoperative real-time tracking of surgical tools, and haptic feedback from robotic manipulators [5]—for planning and navigation. Nevertheless, the actual resection of malignant tissue depends almost exclusively on the surgeon’s visual perception of the surgical field. Given the preeminence of vision in performing this task, there is reluctance to introduce any additional visual information that might obfuscate the colorimetric, textural, or contextual information that is otherwise observed in the surgical field. Therefore, the majority of information that has entered the surgeon’s visual field as augmented reality has been targeted to the planning or post-resection verification stages. The advent of fluorescence guided surgery brings a stronger impetus for real-time intraoperative visualization, since this information is at the very least intrinsically coregistered and, in many cases, can be directly visualized through an operating microscope with the use of a specialized illuminant.

In specific applications, the introduction of this fluorescence-based exogenous visual information is becoming standard of care, exemplified by the adoption of 5-aminolevulinic acid-induced protoporphyrin IX (ALA-induced PpIX) [7] for glioma resection. Unlike some of the non-specific vascular optical tracers, ALA-induced PpIX has a high specificity for cancer cells [8], and has a demonstrated positive effect on extent of tumor resection. In addition, it emits light in the visible (red) part of the spectrum, which means it can be viewed naturally and turned on and off with the push of a button. The field of fluorescence guided surgery is relatively nascent compared with other image-guided technology, but is clearly in a boom stage of development (Fig. 1(A)). A myriad of new smart-targeted fluorophores are in various stages of pre-clinical investigation, including labeled monoclonal antibodies [9–11], activatable caspases [12], Affibody molecules [13, 14], nanobodies [15, 16], aptamers [17] and other proteins. It is likely that at least some of these agents will be in tomorrow’s operating room. Many of these probes will be labeled with fluorophores that re-emit light outside the visible spectrum. In an effort to reduce development costs or reduce saturation in vivo, some binding probes will be introduced to the patient in very small doses—termed “microdoses” [18, 19]—a strategy that also reduces the need for multispecies preclinical toxicology studies for federal approval, but necessitates the use of a highly sensitive camera for visualization. In other words, unlike the case of fluorescein sodium or ALA-induced PpIX, most new agents will not be directly observable in the operating microscope, because they emit in the NIR or are used at lower concentrations. For these agents to impact patient outcome, they must be accurately visualized and properly integrated into the surgeon’s visual field.

Fig. 1 (A) The number of publications on “fluorescence-guided surgery” or “fluorescence-guided resection” in the past 25 years, showing the exponential growth of the field. (B) The Novadaq SPY Elite fluorescence imaging system, which has been at the forefront of the effort to expand fluorescence guided surgery capabilities, leading the commercial market. (C) Laparoscopic image acquired under white light and (D) by exciting indocyanine green, which has been pseudocolored blue and overlaid onto (C), showing a novel use of this approach for perfusion imaging of tissue. (Source: Luigi Boni, MD [6])

This type of image integration, often called an “image overlay” or “image fusion”, can be optimized through a deeper understanding of the process of human visual perception and the manner in which scalar information (e.g., fluorescence intensity) can be mapped to color space and integrated with the real or imaged field through transparency [20] or heads-up display [21]. The features affecting the success of this overlay and display process are reviewed here, with the goal of outlining procedures which have maximal benefit for surgical imaging in real time.

2. Data visualization and visual perception in the surgical environment

The goal of data visualization is to present numerical or categorical information in a format that enables efficient and accurate communication. In medical imaging, this is commonly done by mapping spatial distributions of scalar values to a color map representation or look-up table (LUT). This enables visual interpretation of the scalars by the observer, since they can be converted back to their original scalar values on the basis of their color. In fluorescence-guided surgery, the scalar function is most often emission intensity [22], fluorophore concentration [23], or more recently, receptor concentration [24, 25], and in the case of emission outside the visible spectrum, it is desirable to map this information onto a white-light RGB or grayscale image through transparency blending. The exact manner in which values should be mapped to color and blended with the underlying image is a multidimensional problem. Elements of this problem space include a desire to maximize the efficiency of information transfer while minimizing interpretive errors caused by perceptual artifacts. The framework should also minimize the loss of information due to the limited dynamic range of the human visual system, and concerns regarding the need for a clean, uncluttered aesthetic are tangible. To properly address these issues, it is helpful to review the anatomy of vision, including deficiencies such as deuteranopia and protanopia, and to understand the relative perceptual features that make human vision, for example, more sensitive to luminance than hue.
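As a concrete illustration, the following minimal sketch (MATLAB, assuming the Image Processing Toolbox; the input file name and the choice of the built-in parula map are hypothetical stand-ins) maps a scalar fluorescence image through an N-by-3 LUT to produce a pseudocolor image:

% Minimal sketch: map a scalar fluorescence image onto a color LUT.
% 'fluorescence.tif' is a hypothetical single-channel 16-bit input.
scalars = double(imread('fluorescence.tif'));
scalars = (scalars - min(scalars(:))) ./ ...
          (max(scalars(:)) - min(scalars(:)));    % min-max normalize to [0,1]
lut    = parula(256);                             % any N-by-3 color map
idx    = 1 + round(scalars * (size(lut,1) - 1));  % scalar -> 1-based LUT row
rgbTop = ind2rgb(idx, lut);                       % M-by-N-by-3 pseudocolor image
imshow(rgbTop)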

Processing and interpreting graphical data involves multiple levels of the human visual and perceptual system. The first step in the visual process involves the detection of photons by a collection of photoreceptor cells in the retina called cone and rod cells [26]. Upon interaction with an incident photon, a photochemical reaction occurs in a group of transmembrane receptors called opsins, which undergo conformational change and communicate the detection of photons by neuronal firing to the visual cortex and other processing areas [27]. The human retina contains cells with four different opsins: rhodopsin, which is expressed in rod cells and used in night vision, and three different photopsins, known as short-, medium- and long-wavelength sensitive, with peak sensitivities at 420, 534, and 564 nm, respectively (Fig. 2(A)). Rod cells are 1000x more sensitive to light than cone cells, contribute much more strongly to peripheral vision, and are about 20x more plentiful. Cone cells have much greater visual acuity, are sensitive to motion, and populate the central region of the fovea centralis. Fundamentally, these cells limit the perception of light to within the visible spectral range (400-700 nm). The spectral emission from an object is determined by the illuminant and the absorption and scattering by that object. Because the color of an object is influenced by the illuminant, the FDA introduced guidelines in 1998 encouraging adherence of surgical task lighting to standards defining total irradiance, central illuminance, correlated color temperature, and general color rendering. Indeed, surgical lighting has come a long way since the 19th century, when operating rooms in the northern hemisphere were built with southeast-facing windows to maximize the natural lighting required for procedures to take place. The range of colors produced by absorption by tissue represents only a small gamut of the overall color space. Figure 2(C) shows the gamut determined from white-light cortical images acquired in 10 patients during craniotomy. Across subjects, the gamut is well conserved and occupies only 5% of the total perceivable color space: a limited range within which healthy tissue must be visually separated from malignant tissue. The impetus for fluorescence-based contrast is to provide an additional level of discrimination.

Fig. 2 (A) The sensitivity of human photoreceptors for different wavelengths of light is shown (NB: the abscissa is defined according to a logarithmic scale). (B) The emission spectra of four common FGS fluorophores: fluorescein sodium (FS), protoporphyrin IX (PpIX), IRDye® 800CW, and indocyanine green (ICG). (C) The CIE 1931 x,y chromaticity map showing the sRGB gamut used by most LED and LCD monitors, the trajectory of the exemplary color map (koufonisi), and the gamut representing brain tissue. The koufonisi color map is perceptually balanced and has mid-high colors which circumscribe the brain tissue gamut, giving a uniform chromatic contrast. The average brain tissue gamut was characterized from intracranial images acquired from 10 patients, of which (D) is an example.

The spectral sensitivity of the opsins convolved with the emission of light from the visual field is not sufficient to explain how color is perceived. Instead, it is more helpful to understand color in CIELAB: a perceptually-uniform, device-independent basis for modeling human color perception. In this color space, all perceivable colors are defined according to lightness (L*), red/green (a*) and yellow/blue (b*). The human visual system is most sensitive to changes in lightness, which is therefore considered the primary means by which to convey scalar information through color. Furthermore, discontinuities or inflection points in the L* dimension of a color map, revealed for example by the pyramid test [28], have been identified as a major source of interpretive error when representing continuous data by color [29, 30].
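One practical consequence is that a candidate color map can be screened before use by converting it to CIELAB and inspecting the L* channel directly. The sketch below is illustrative only; it assumes the Image Processing Toolbox function rgb2lab and uses a built-in map as a stand-in for the map under evaluation:

% Sketch: check a color map for L* discontinuities or inflection points.
lut = parula(256);             % substitute the color map under evaluation
lab = rgb2lab(lut);            % N-by-3 [L*, a*, b*]; sRGB input assumed
L   = lab(:, 1);
plot(L), xlabel('color map index'), ylabel('L*')
if ~all(diff(L) >= 0)
    warning('L* is not monotonic: expect luminance-induced artifacts.');
end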

When considering data visualization in a complex visual environment, it is important to understand the process of “attentive capture”: the directing of attention towards a subset of the larger visual environment so that it may be perceived in greater detail. The processes of visual attention are often divided into goal-directed and stimulus-driven, based on whether attention is deliberately deployed or captured by something which occurs unexpectedly within the subject’s visual field. Goal-directed capture, also known as “top-down”, is the principal mechanism in the intraoperative context, and it is modulated by experience and familiarity. Regions of tissue which have properties consistent with the appearance of malignant tissue, based on previous experience, will be given the highest salience. The addition of fluorescence information introduces a perceptual dimension which will compete for attention, and the surgeon will direct her or his attention towards the perceptual dimension that is most important for achieving the current goal [31]. In fluorescence mode, areas of high intensity, as observed by visual fluorescence or through a graphical interface, are likely to be given priority over low- or non-fluorescing regions. Therefore, resection of a series of cancer foci might occur in order of highest to lowest overall signal intensity. Another determinant is how much a given object differs from neighboring objects within some given perceptual dimension [32]. For example, a color map representation of fluorescence which shows an area mapped as red, surrounded by an area mapped as green, may be given more attention than if it were surrounded by an area mapped as orange. A number of other factors will influence the attentive capture of objects, including their distance from each other and any dynamic changes in intensity or spatial movement. Of particular importance to this review, work by Rock and Gutman demonstrates that it is possible to selectively attend to one of two objects which occupy the same spatial location, based on a unique perceptual dimension [33].

Given the complexity of the human visual system, there is unlikely to be a single “one-size-fits-all” approach to parametric data visualization. Nevertheless, by applying the biological and neurophysiological concepts discussed above, it is possible to make deliberate choices when displaying data in the intraoperative environment that will enhance the clinical procedure. The remainder of the review is a practical discussion of the decision points typically encountered in fluorescence visualization.

3. Color map visualization choices

Given the complexity of human color perception, the selection of a color map for the representation of scalar values is a challenging task. Poor color map selection can lead to misrepresentation of reality; conversely, good color map selection can highlight clinically salient regions. Broadly speaking, there are three main types of color maps: sequential, diverging, and segmenting (also called qualitative or categorizing). These are depicted in Fig. 3. Sequential and diverging color maps are well-suited to displaying continuous scalar values: not surprisingly, these are the most common scalar representation systems encountered in medical imaging. Segmenting or categorizing color maps are used to depict discrete types of information such as numerical ranges and categories (e.g., WHO tumor grade index). When a natural order to the categories is present, such as an image segmented into its statistical quintiles, discretized versions of sequential or diverging color maps can be used. However, when no order is obvious, unordered nominal color maps are recommended [29]. There is a subcategory of sequential color maps which would correctly be described as a “uniform color” or “constant-value” color map, whereby no scalar information is encoded by the chromaticity or luminosity; rather, the information is entirely conveyed by the transparency with respect to an underlying image. In other words, the alpha channel (α) is varied while R, G, and B are held constant, as in the sketch below. The manner in which transparency is used to blend scalar maps with the underlying image is dealt with separately in Section 4.
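To make the constant-value case concrete, the sketch below blends a hypothetical white-light image with a single green hue whose per-pixel opacity is driven entirely by the normalized scalar map; the file names and the particular hue are illustrative assumptions:

% Sketch of a "uniform color" overlay: R, G, B held constant, alpha varies.
bottom  = double(imread('whitelight.png')) / 255;  % hypothetical 8-bit RGB image
scalars = double(imread('fluorescence.tif'));      % hypothetical scalar map
alphaMap = (scalars - min(scalars(:))) ./ ...
           (max(scalars(:)) - min(scalars(:)));    % normalized alpha channel
hue = reshape([0 0.96 0.25], 1, 1, 3);             % constant green, RGB in [0,1]
a3  = repmat(alphaMap, [1 1 3]);                   % expand alpha to 3 channels
top = repmat(hue, size(alphaMap, 1), size(alphaMap, 2));  % constant-color top
blend = bottom .* (1 - a3) + top .* a3;            % per-pixel alpha compositing
imshow(blend)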

Fig. 3 (A) A representative sample of color maps used in medical imaging overlays (available in the OiM Overlay GUI), which include sequential, diverging, and categorical palettes. Examples of their use from the literature include: (B) Doppler ultrasound image of blood flow in placenta previa using an opaquely-overlaid diverging hot/cold color map centered about a luminance nadir [34], (C) axial fused PET/MR image, with FDG SUV values encoded by a hot color map and blended by a uniform transparency function [35], (D) Zeiss OPMI Infrared 800 blood flow module showing a pseudocolor representation of ICG wash-in delay during arteriovenous malformation surgery [36], and (E) the resection of a sentinel lymph node in laparoscopic surgery detected using ICG [37].

It should be noted that the uniform color map is probably the most commonly used in the FGS literature [10, 38–48]. It is likely that the dominance of uniform color maps is largely attributable to their similarity to visible fluorescence emission actually observed, for example, when using ALA (Fig. 4(A), 4(F)), as well as the relative ease of overlaying single-channel 8-bit images in image processing software such as ImageJ (NIH, Bethesda, MD). For uniform color maps, green is by far the most common color used for these overlays, likely because it is complementary to the red hues present in the surgical cavity. Despite the popularity of the uniform green overlay scheme for visualization, there are several issues which make it suboptimal for intraoperative applications. While transparency enables the integration of information from the scalar map with the underlying white-light image, using the alpha channel as the only means of encoding scalar data in the image space is problematic when the underlying image varies in brightness.

Fig. 4 (A) The visual fluorescence emission from PpIX under blue-light excitation on the Zeiss Pentero OPMI 800 surgical microscope during glioma resection, and (B) the RGB image acquired with white-light (~5500 K) illumination. (C) The PpIX concentration map recovered using hyperspectral imaging. (D) The [PpIX] visualized using the multivariate koufonisi color map and overlaid on the RGB image using the logistic function in (H) [max = 0.78, midpoint = 11.6 μg/ml, k = 11.8]. (E) The same information as in Panel C, but visualized using the myCarta cube1 color map (F) and as a single-value [RGB (7, 246, 64)] color map blended into the RGB image with the same transparency function (Panel H). (G) The 1931 CIE x,y chromaticity plot showing the trajectories of the three color maps and the gamut from the RGB image.

In a recent review, Nguyen and Tsien state that among the major factors limiting the sensitivity of FGS are shadows, which decrease excitation power, and pooled blood, which absorbs emitted photons [49]. Near-infrared fluorophores may offer improvements in both these areas, as light in this spectral band penetrates tissue more deeply and is absorbed less by blood [50]; however, these gains may be diminished if the information is subsequently obscured by the low-brightness regions of the bottom image. Another problem with this visualization scheme is that it offers poor perceptual resolution—the ability to visually determine the scalar values represented by the overlay—compared with sequential and diverging color maps [29]. While some applications will undoubtedly desire to remove “anything that glows”, other applications may require the surgeon to integrate a threshold value of fluorescence against other visual cues such as texture, shape and color on the white-light image. In either case, since no imaging agent provides “infinite” contrast with background, categorizing scalar information into either high probability or low probability of malignancy is a necessary step for successful application of FGS. Finally, the popularity of green uniform maps is of concern to the approximately 5% of the population who suffer from color vision deficiencies. In particular, a reduced ability to distinguish between red and green may result in interpretive errors for those with deuteranopic and protanopic vision (Fig. 5).

Fig. 5 (A) Color map image overlay of quantitative fluorescence (qFI) during ALA-induced PpIX human glioma resection [51]. (B) Intensity image of folate conjugated to fluorescein isothiocyanate (FITC), pseudocolored and overlaid onto an RGB image during ovarian cancer resection [52]. (C) Pseudocolored ICG overlay during breast cancer lymph node resection [53]. (D) Sentinel lymph node mapping of non-small cell lung cancer metastasis, pseudocolored and overlaid onto an RGB image intraoperatively [46].

The major requirements when choosing a color map for FGS are very similar to those in other applications of scientific visualization [29] and can be summarized as follows:

  • 1. Maximal perceptual resolution.
  • 2. Minimal interference when blended with underlying shadows and 3D surfaces.
  • 3. Perception of the color map matches the underlying scalars.
  • 4. Maximal contrast with the color gamut represented in the bottom image.

A perceptually balanced, evenly contrasting, and chromaticity-varying color map such as koufonisi, presented in Fig. 2(C), appears to fit these needs well. Regarding luminance, a monotonically increasing color map is more intuitive than a diverging one, which has a maximum luminance in the center. Many such color maps have been described and published [30, 54], and some examples are presented in Fig. 3. Diverging color maps have the added benefit of doubling the perceptual resolution, since the full range of luminance is available on either side of the midpoint to encode scalar values, and together with the transparency function, can be modulated to highlight the clinical threshold if such a value has been defined. Within this range of options, the choice will be largely a personal one on the part of the surgeon—this should not introduce significant variability as long as balanced, contrasting color maps are selected, and objective transparency functions are used, which is the topic of the next section.

4. Transparency function or look-up table choices

Appropriate blending of top and bottom images through transparency should augment the surgeon’s visual field with fluorescence information, and avoid introducing interpretive errors or obscuring the native information. The most basic information that the bottom image provides is context, so that the surgeon can safely navigate around anatomical features and approach suspicious areas with precision. Therefore, areas of low fluorescence information should be made transparent so that anatomical information is conveyed with the highest fidelity possible. Areas where fluorescence is extremely high—and associated with a correspondingly high positive predictive value—should be presented to the surgeon with high opacity, so that clinical decisions are weighted heavily towards this information. On these two aspects of transparency, there is wide agreement. How the bulk of the information in between these limits is blended is a matter of active debate.

Three major functions are used to control the alpha channel of the top image (Fig. 6): the uniform distribution, the linear function, and the power function. The uniform distribution is commonly used in multimodal imaging such as PET/CT or PET/MRI [55]. A benefit of this approach is that it is straightforward and unbiased. However, with respect to FGS, areas of uniform background fluorescence will cause a reduction in the dynamic range of the bottom image, resulting in loss of information. A linear function involves selecting two control points—a low point, below which a uniform minimum opacity is applied, and a high point, above which a uniform maximum opacity is applied. In this regard, it mimics a standard window and leveling procedure. It is a common choice due to its simplicity of implementation. The third function has some similarities to the linear procedure, but in addition to high and low thresholds, it has an additional control point allowing the line between these points to be a curve defined by a power function. Also known as the gamma function approach (due to its similarity to the “gamma correction” procedure used to compensate for the input-output characteristic of a cathode ray tube display), it is a popular choice in molecular imaging and is supported by most pre-clinical imaging systems for a simple reason: it is aesthetically pleasing. The additional control point provides a great deal of control over how the top image appears, and parameters can be adjusted arbitrarily to provide either a large, diffuse region of fluorescence information, or a small concentrated area. If the goal is to localize a bioluminescence signal from a xenograft tumor with a reporter transfect, it is an ideal choice. However, in the context of FGS, it is dangerous: it introduces a large source of bias and obfuscates the surgical margin.
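For reference, the sketch below plots all three alpha look-up tables over a normalized intensity axis; the control points and the gamma value are illustrative, not recommendations:

% Sketch of the three common alpha look-up tables on normalized intensity.
x     = linspace(0, 1, 256);
aUnif = 0.5 * ones(size(x));                   % uniform: constant blending
lo = 0.2; hi = 0.8;                            % illustrative control points
aLin  = min(max((x - lo) ./ (hi - lo), 0), 1); % linear window/level ramp
gam   = 2.2;                                   % >1 de-emphasizes low signal
aPow  = aLin .^ gam;                           % power ("gamma") function
plot(x, aUnif, x, aLin, x, aPow)
legend('uniform', 'linear', 'power', 'Location', 'northwest')
xlabel('normalized intensity'), ylabel('opacity \alpha')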

Fig. 6 The four transparency functions or look-up tables widely used in creating overlays.

Given the current practices in the field, there is significant room for improvement. We seek well-defined control points that enable the surgeon to identify and act on the clinical thresholds bringing about the greatest effect on patient outcome. To this end, we suggest a fourth function—the logistic function—which combines much of the functionality of the linear and power function approaches, but provides control points with statistical underpinnings. Let f(x) be the function which describes opacity as a function of x, the scalar values representing the min-max normalized top image:

f(x) = L / (1 + e^(-k(x - x0)))

Free parameters include L, the amplitude of the upper asymptote; k, which defines the maximum slope of the transition (the derivative taken at x0); and x0, the midpoint of the transition between 0 and L.
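A direct implementation is a one-liner; the amplitude and slope below mirror the values quoted in Fig. 4, while the normalized midpoint is an illustrative assumption:

% Sketch: the logistic transparency function f(x) = L / (1 + e^(-k(x - x0))).
L  = 0.78;          % upper asymptote: maximum opacity
k  = 11.8;          % maximum slope, reached at the midpoint
x0 = 0.5;           % midpoint, e.g. a normalized clinical threshold
f  = @(x) L ./ (1 + exp(-k .* (x - x0)));
x  = linspace(0, 1, 256);
plot(x, f(x)), xlabel('normalized intensity'), ylabel('opacity \alpha')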

The benefit of using this formulation in comparison to the power function is that for applications involving FGS, the tumor probability function—the function mapping a continuous variable such as fluorophore concentration to the binary classifier (tumor vs. normal tissue)—also approximates a logistic function (Fig. 7) [56]. A probability function can be determined through clinical trials [57, 58], and then applied through the transparency function for future resections within a population representative of the sample group. The result is that the midpoint of the function is made equal to the clinical threshold defining the boundary between positive and negative assignment, greatly facilitating the surgeon’s ability to perform the sort of visual ROC procedure which is implicitly required in FGS. Additionally, for an area of high positive predictive value, fluorescence information is automatically emphasized, and conversely, for an area of high negative predictive value, fluorescence information is deemphasized, promoting prioritization of attentive capture based on other white-light image derived features, all while avoiding the introduction of bias normally associated with fusion and thresholding (Fig. 8). As a final point, such a function has the desirable result of being compatible with both sequential and diverging color maps, which achieve different goals in this context. Divergent color maps applied with this transparency function will help highlight the clinical threshold, and sequential color maps will exhibit a smoother blending of information from the bottom image so that in regions of low fluorescence, tumor probability is determined by other visual cues.
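As a sketch of how such a probability function might be estimated and reused, the code below fits a logistic regression to hypothetical labeled concentration measurements (glmfit, from the Statistics and Machine Learning Toolbox) and uses the fitted curve directly as the opacity function; the data points are invented for illustration:

% Sketch: fit a tumor-probability logistic to labeled measurements, then
% reuse the fitted curve as the transparency function. Data are invented.
conc    = [0.5 1 2 4 6 9 12 15 20 30]';  % e.g. measured [PpIX] in ug/ml
isTumor = [0   0 0 0 1 0  1  1  1  1]';  % binary pathology labels
b = glmfit(conc, isTumor, 'binomial', 'link', 'logit');
pTumor   = @(c) 1 ./ (1 + exp(-(b(1) + b(2) .* c)));  % fitted P(tumor | c)
alphaLUT = pTumor;                        % opacity tracks tumor probability
fplot(pTumor, [0 30]), xlabel('concentration (\mug/ml)'), ylabel('P(tumor)')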

Fig. 7 The relationship between normal distributions of measured parameter values in normal and tumor regions, and the true positive rate (TPR) if that value was selected as the clinical threshold for diagnosis. The resulting TPR logistic functions for (A) scarcely overlapping and (B) greatly overlapping distributions are shown.

Fig. 8 The same fluorescence map overlaid using a uniform color map but with different transparency functions: (A) logistic function with x0 = 5 μg/ml, (B) logistic function with x0 = 10 μg/ml, (C) logistic function with x0 = 15 μg/ml, and (D) linear function intersecting the point (x = 14 μg/ml, y = 50%). How the transparency function is defined will have large effects on the perceived margin of malignant tissue, highlighting the need for standardization.

5. Image compression choices

A significant challenge for fluorescence-guided surgery is how to display a signal with high dynamic range (or bit-depth) in real time. The traditional radiologic approach of window and level adjustment to maximize the display contrast may not be feasible quickly, so automated methods for display optimization are necessary. Fluorescence imaging display quality is a mixture of several features, and the detected intensity at the camera can vary by orders of magnitude within a scene. To compensate for undesirable background signals from tissue, non-linear components such as camera filters, and noise, many systems have shifted to high dynamic range cameras, so that simple removal or thresholding can be applied to remove the background [59]. As systems adopt 16-bit or higher bit-depth cameras while mainstream displays continue to use 8 bits per channel, the problem of fluorescence intensity image representation and display is further compounded, since images with large dynamic ranges of luminescence are now available and image compression gains importance. This is obviously critical in areas where the real-time video stream is guiding the resection of tissue, such as with ALA-induced PpIX fluorescence imaging [7, 22, 60, 61] or with newer classes of molecular probes [44, 49, 62–64].

Dynamic range reduction of high-dynamic range (HDR) images [65], display optimization, and HDR display systems [66] are very active areas of research that have gained prominence within the computer vision and graphics communities in the last decade. Dynamic range reduction of HDR images using automated mapping techniques is being explored [67] to provide automatic, fast, high-quality tone-mapping methods that improve image display on devices with limited dynamic range. However, adaptation of such methods to the display of fluorescence images has been limited, reflecting the nascence of high bit-depth cameras in fluorescence imaging for surgical guidance, as opposed to pre-clinical imaging where off-line equalization methods are permissible. Mapping techniques that are free from user bias, similar to the logarithmic compression used in ultrasound displays [68], could potentially improve the availability of information to a surgeon, especially in real-time imaging situations where features within a scene can span several orders of magnitude in intensity (Fig. 9). As with the transparency function, the goal is to create a bias-free, consistent and reproducible framework for visualization that will not introduce interpretive errors such as the over-interpretation of low signal. We anticipate that dynamic range compression will receive more attention as the technological ability to capture bit-depth continues to gain distance on the human visual system’s ability to perceive it.
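A minimal sketch of such ultrasound-style logarithmic compression, mapping a high bit-depth frame to the 8-bit display range, is given below; the file name and the background offset are assumptions:

% Sketch: logarithmic compression of a high bit-depth frame to 8 bits.
raw    = double(imread('frame16.tif'));     % hypothetical 16-bit camera frame
raw    = max(raw - 100, 0);                 % subtract an assumed dark offset
comp   = log1p(raw) ./ log1p(max(raw(:)));  % log-compress, normalize to [0,1]
frame8 = uint8(255 * comp);                 % ready for an 8-bit display pipeline
imshow(frame8)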

Fig. 9 Lymphatic uptake of fluorophore in a mouse is shown as the green overlay on grayscale white-light images. The leftmost column shows the original image and each subsequent column shows a processed image. Window-level adjustment with contrast-limited adaptive histogram equalization (CLAHE) was applied to the original images for comparison with log-compressed images. Well plates with various concentrations of fluorophore, ranging over 3 orders of magnitude in concentration and fluorescence intensity, are shown, along with histograms corresponding to the lymphatic uptake images. White arrows indicate a lymph vessel that is hard to detect at early time points without log compression. 5 mm scale bars are shown.

6. Display hardware choices

There is an active debate in the field of FGS as to which display hardware is best suited for visualizing fluorescence information: stand-alone monitor [39, 69] or microscope integrated heads-up display (HUD) [70]. More than simply user preference, studies have demonstrated that surgical task performance is influenced by location of image display devices [71] and the physical way in which the images are represented [72]. The answer may depend largely on the surgical application and the current workflow used for those procedures. For example, in sinus surgery, almost all procedures are performed using image-guided endoscopy, often through observation on a stand-alone monitor. It can therefore be expected that the addition of FGS data into this architecture would be of minimal burden. On the other hand, neurosurgical resection is principally done using operating microscopes. It would be highly desirable in this context to integrate FGS visualization into the actual ocular focal plane used by the surgeon, either by HUD or video feed. A hybrid option may well be the most viable solution: HUD visualization by contour (Fig. 10(B)) and stand-alone monitor visualization of a multivariate color map overlay (Fig. 4(D)).

Fig. 10 (A) An early prototype of a HUD unit that was integrated into a clinical operating microscope, allowing display of the augmented information in the surgeon’s microscope view. (B) An example contour representation of the data in Fig. 8 at threshold = 70%, showing the outline of the region for the surgeon. (C) A density point-cloud representation of the same data.

Heads-up display (HUD) is a visualization framework which displays information semi-transparently so that the user is not required to divert attention from their present viewpoint. The term is borrowed from aviation, where it refers to a display that can be viewed while the pilot has their head positioned “up” and looking forward. In the context of an operating microscope, it enables the surgeon to view the information while keeping their head “down”, rather than having their head up to observe information on an auxiliary display monitor. These exist in two forms: integrated into the operating microscope eyepieces, or stand-alone in the form of eyewear [59, 73].

In the case of an integrated configuration in the operating microscope (e.g., the Zeiss MultiVision™), a binary map is used to outline (Fig. 10(B)) or shade a region through hatching or dotting (Fig. 10(C)). In much the same way as the transparency function was informed by the tumor probability function in Section 4, the contours projected into the display could be determined by the clinical threshold, and additional contours, such as lines indicating values ± 10% from the clinical threshold, could be visualized by patterned lines or additional colors, a feature supported by many models. Traditionally, these displays have been used mainly to incorporate anatomical imaging into the planning and execution of a surgical procedure. For example, a surgeon might plan the location of an electrode implant using MRI, and then co-localize the locations of each electrode using the HUD. However, recent work has demonstrated their utility in visualizing ICG fluorescence [21].
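A threshold-driven contour overlay of this kind can be sketched in a few lines; the file names, the normalization, and the 70% threshold are illustrative assumptions:

% Sketch of a contour-style HUD overlay: iso-lines at the clinical
% threshold and at +/-10% of it, drawn over the white-light view.
bottom  = imread('whitelight.png');                % hypothetical RGB frame
scalars = double(imread('fluorescence.tif'));      % hypothetical scalar map
scalars = scalars ./ max(scalars(:));              % normalize to [0,1]
thr = 0.70;                                        % assumed clinical threshold
imshow(bottom), hold on
contour(scalars, [thr thr], 'y', 'LineWidth', 2);  % main threshold contour
contour(scalars, thr .* [0.9 1.1], '--y');         % +/-10% guide bands
hold off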

Ongoing development in high dynamic-range and wide color gamut monitors, with accompanying software and operating system support, could have large implications for both data compression and color map visualization. Visual spectrum fluorophores such as PpIX emit fully saturated light, occupying the outermost region of the 1931 CIE chromaticity color space. When viewed under blue-light illumination the fuchsia-colored emission provides strong contrast with the background tissue that cannot be fully appreciated in the RGB image in Fig. 4(A). The ability to display a wider color gamut could extend this high-contrast imagery to color map representations of NIR fluorophores.

7. Guidelines for effective surgical fluorescence visualization

The following five conclusions have been determined based on the review of the literature highlighted in the preceding sections. Note that these are general guidelines applicable to many surgical applications; however, specific aspects of a procedure may require that modifications be made.

  • I. Display inset images of top and bottom images alone

    - When viewed on an auxiliary monitor, the white-light-only and fluorescence-only images should ideally be displayed independently.

    - When fed into the operating microscope, or used in a heads-up display framework, there should be a simple way to toggle between these three modes (WL, FLI, and composite) in real time.

  • II. Select a univariate or multivariate perceptually-balanced color map

    - Hue(s) should be selected based on their contrast to the surgical field and their compatibility with color vision deficiencies (Fig. 2(C)).

    - For multivariate color maps, scalar values should be encoded primarily by luminance. A second hue trajectory can be used to provide additional dynamic range when used in conjunction with a divergent luminance scheme.

    - For neurosurgery applications, we recommend the koufonisi color map, which is available for download as part of the OiM Overlay GUI.

  • III. Map alpha to a known or estimated logistic function

    - The logistic function should be determined based on an estimated or measured threshold of tumor positivity.

  • IV. Consider compression when the actual dynamic range of the camera is greater than 8 bits.

    - Logarithmic or other non-linear data compression should provide automated window and level adjustments for 14-bit and 16-bit information, extending the range of information that can be simultaneously perceived.

  • V. Perform color vision deficiency, perception and display checks

    - To ensure maximum compatibility with the surgical team, a test for compatibility with deuteranopic and protanopic vision should be performed.

    - As an optional step, the Retinex model might be employed to test how image luminance is perceived in the presence of other features in the visual field.

    - The ability of the operating surgeon to easily test and make fine adjustments to the heads-up display during surgery is essential.

8. OiM Overlay GUI: an open-source MATLAB overlay generator

To facilitate the investigation and establishment of data visualization best practices, we have created an easy-to-use open-source MATLAB graphical user interface (GUI) (Fig. 11) that enables the user to overlay a scalar map over top of a white-light RGB or grayscale image. The GUI accepts a number of different imaging formats and MATLAB data structures—8-bit RGB images, DICOM, 14- and 16-bit TIFFs acquired with scientific cameras, *.mat data files, 22-bit LI-COR Pearl image sets—and enables the user to create an overlay through the selection of color map, transparency and other options. Additional file formats can be added by interested users.

Fig. 11 (A) Screenshot of the main window of the Overlay GUI. A number of different sliders, radio buttons and drop-down menus enable the user to quickly make fully customized color overlays, selecting from 18 different color maps and the four different transparency functions discussed in Section 4. (B) The normalized scalar magnitude vs. red, green and blue values as well as the lightness (L*) are shown for the present color map (koufonisi), along with the Pyramid Test for lightness uniformity [28]. (C) The 1931 CIE x,y chromaticity plot showing the gamut for the current bottom image (contour plot overlay) and the trajectory of the current color map through the color space.

At the time of publication, the options for pseudocolor visualization of the top intensity image include multivariate color maps (koufonisi, cube1, cubeYF, and patriotic, a divergent palette of red, white, and blue). Univariate (i.e., transparency-encoded) color maps can be selected by means of a color picker user interface, or by directly inputting the desired RGB values. The contour plot enables the user to overlay isobars of the intensity distribution with a solid color selected by means of the same color picker. Window and leveling can be adjusted through the use of sliders, and intensity data can be visualized with log compression as well.

Transparency LUTs are controlled by sliders and radio buttons: the type of function can be selected from logistic (default), power, linear or uniform. Free parameters modifying the shapes of these functions are subsequently selected using the sliders. The relationship between the opacity and the normalized intensity is plotted in real-time to inform the LUT selection.

Analytical functions are provided as an additional feature of the application. These enable the user to gain a deeper understanding of how the various choices will impact the perception of the visualized data. Drop-down menu options include a comparison between the actual and perceived visually-encoded information using the Retinex model [74, 75], and diagrams of the available perceptually-balanced color maps such as koufonisi showing the color map trajectory and the white-light image color distributions plotted on a 1931 CIE luminosity diagram. Also included is a colorblindness simulator enabling the user to view the overlay as it would be perceived with deuteranopic and protanopic vision. The source code, as well as the most current stable release, can be found at http://dartmouth.edu/~overlay.

9. Conclusion

Adoption of fluorescence guided surgery is facilitated by employing visualization strategies that maximize the information provided by fluorescence while minimizing disruption to the visual field or changes to the clinical workflow. When considering the importance of non-visible fluorophores (either spectrally near-infrared, or because they are administered in low doses), image overlays—via both LED monitor display units and operating microscope heads-up display—will provide the primary means of conveying fluorescence parameters. Effective image overlays depend mainly on the selection of color map and transparency functions, which should be intentional and unbiased by customized thresholding or light balance. To facilitate the exploration and adherence to standards of best practice, we provide an open-source MATLAB based tool that can be downloaded free of charge.

Acknowledgements

JTE is supported by a Canadian Institutes of Health Research fellowship. This work was funded by NIH grants R01CA167413 and R01CA109558. The authors wish to thank other members of the Optics in Medicine and Neurosurgery Research teams at Dartmouth for feedback and for beta-testing pre-release versions of the Overlay GUI.

References and links

1. M. I. Miga, K. D. Paulsen, J. M. Lemery, S. D. Eisner, A. Hartov, F. E. Kennedy, and D. W. Roberts, “Model-updated image guidance: initial clinical experiences with gravity-induced brain deformation,” IEEE Trans. Med. Imaging 18(10), 866–874 (1999). [CrossRef]   [PubMed]  

2. D. W. Roberts, J. W. Strohbehn, J. F. Hatch, W. Murray, and H. Kettenberger, “A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope,” J. Neurosurg. 65(4), 545–549 (1986). [CrossRef]   [PubMed]  

3. R. M. Comeau, A. F. Sadikot, A. Fenster, and T. M. Peters, “Intraoperative ultrasound for guidance and tissue shift correction in image-guided neurosurgery,” Med. Phys. 27(4), 787–800 (2000). [CrossRef]   [PubMed]  

4. S. B. Sobottka, J. Bredow, B. Beuthien-Baumann, G. Reiss, G. Schackert, and R. Steinmeier, “Comparison of functional brain PET images and intraoperative brain-mapping data using image-guided surgery,” Comput. Aided Surg. 7(6), 317–325 (2002). [CrossRef]   [PubMed]  

5. B. T. Bethea, A. M. Okamura, M. Kitagawa, T. P. Fitton, S. M. Cattaneo, V. L. Gott, W. A. Baumgartner, and D. D. Yuh, “Application of haptic feedback to robotic surgery,” J. Laparoendosc. Adv. Surg. Tech. A 14(3), 191–195 (2004). [CrossRef]   [PubMed]  

6. L. Boni, G. David, A. Mangano, G. Dionigi, S. Rausei, S. Spampatti, E. Cassinotti, and A. Fingerhut, “Clinical applications of indocyanine green (ICG) enhanced fluorescence in laparoscopic surgery,” Surg. Endosc. 29(7), 2046–2055 (2015). [CrossRef]   [PubMed]  

7. W. Stummer, H. J. Reulen, A. Novotny, H. Stepp, and J. C. Tonn, “Fluorescence-guided resections of malignant gliomas--an overview,” Acta Neurochir. Suppl. (Wien) 88, 9–12 (2003). [PubMed]  

8. W. Stummer, U. Pichlmeier, T. Meinel, O. D. Wiestler, F. Zanella, H. J. Reulen, and ALA-Glioma Study Group, “Fluorescence-guided surgery with 5-aminolevulinic acid for resection of malignant glioma: a randomised controlled multicentre phase III trial,” Lancet Oncol. 7(5), 392–401 (2006). [CrossRef]   [PubMed]  

9. K. S. Samkoe, K. Sexton, K. M. Tichauer, S. K. Hextrum, O. Pardesi, S. C. Davis, J. A. O’Hara, P. J. Hoopes, T. Hasan, and B. W. Pogue, “High vascular delivery of EGF, but low receptor binding rate is observed in AsPC-1 tumors as compared to normal pancreas,” Mol. Imaging Biol. 14(4), 472–479 (2012). [CrossRef]   [PubMed]  

10. C. H. Heath, N. L. Deep, L. Sweeny, K. R. Zinn, and E. L. Rosenthal, “Use of panitumumab-IRDye800 to image microscopic head and neck cancer in an orthotopic surgical model,” Ann. Surg. Oncol. 19(12), 3879–3887 (2012). [CrossRef]   [PubMed]  

11. K. R. Zinn, M. Korb, S. Samuel, J. M. Warram, D. Dion, C. Killingsworth, J. Fan, T. Schoeb, T. V. Strong, and E. L. Rosenthal, “IND-directed safety and biodistribution study of intravenously injected cetuximab-IRDye800 in cynomolgus macaques,” Mol. Imaging Biol. 17(1), 49–57 (2015). [CrossRef]   [PubMed]  

12. J. S. Mieog, A. L. Vahrmeijer, M. Hutteman, J. R. van der Vorst, M. Drijfhout van Hooff, J. Dijkstra, P. J. Kuppen, R. Keijzer, E. L. Kaijzel, I. Que, C. J. van de Velde, and C. W. Löwik, “Novel intraoperative near-infrared fluorescence camera system for optical image-guided cancer surgery,” Mol. Imaging 9(4), 223–231 (2010). [PubMed]  

13. R. W. Holt, J. L. Demers, K. J. Sexton, J. R. Gunn, S. C. Davis, K. S. Samkoe, and B. W. Pogue, “Tomography of epidermal growth factor receptor binding to fluorescent Affibody in vivo studied with magnetic resonance guided fluorescence recovery in varying orthotopic glioma sizes,” J. Biomed. Opt. 20(2), 026001 (2015). [CrossRef]   [PubMed]  

14. K. Sexton, K. Tichauer, K. S. Samkoe, J. Gunn, P. J. Hoopes, and B. W. Pogue, “Fluorescent affibody peptide penetration in glioma margin is superior to full antibody,” PLoS One 8(4), e60390 (2013). [CrossRef]   [PubMed]  

15. S. Oliveira, R. Heukers, J. Sornkom, R. J. Kok, and P. M. van Bergen En Henegouwen, “Targeting tumors with nanobodies for cancer imaging and therapy,” J. Control. Release 172(3), 607–617 (2013). [CrossRef]   [PubMed]  

16. M. Kijanka, F. J. Warnders, M. El Khattabi, M. Lub-de Hooge, G. M. van Dam, V. Ntziachristos, L. de Vries, S. Oliveira, and P. M. van Bergen En Henegouwen, “Rapid optical imaging of human breast tumour xenografts using anti-HER2 VHHs site-directly conjugated to IRDye 800CW for image-guided surgery,” Eur. J. Nucl. Med. Mol. Imaging 40(11), 1718–1729 (2013). [CrossRef]   [PubMed]  

17. H. Shi, W. Cui, X. He, Q. Guo, K. Wang, X. Ye, and J. Tang, “Whole cell-SELEX aptamers for highly specific fluorescence molecular imaging of carcinomas in vivo,” PLoS One 8(8), e70476 (2013). [CrossRef]   [PubMed]  

18. E. M. Sevick-Muraca, R. Sharma, J. C. Rasmussen, M. V. Marshall, J. A. Wendt, H. Q. Pham, E. Bonefas, J. P. Houston, L. Sampath, K. E. Adams, D. K. Blanchard, R. E. Fisher, S. B. Chiang, R. Elledge, and M. E. Mawad, “Imaging of lymph flow in breast cancer patients after microdose administration of a near-infrared fluorophore: feasibility study,” Radiology 246(3), 734–741 (2008). [CrossRef]   [PubMed]  

19. U.S. Department of Health and Human Services, Center for Drug Evaluation and Research, “Guidance for industry, investigators, and reviewers: exploratory IND studies” (Rockville, MD, 2006).

20. J. Glatz, P. Symvoulidis, P. B. Garcia-Allende, and V. Ntziachristos, “Robust overlay schemes for the fusion of fluorescence and color channels in biological imaging,” J. Biomed. Opt. 19(4), 040501 (2014). [CrossRef]   [PubMed]  

21. N. L. Martirosyan, J. Skoch, J. R. Watson, G. M. Lemole Jr, M. Romanowski, and R. Anton, “Integration of indocyanine green videoangiography with operative microscope: augmented reality for interactive assessment of vascular structures and blood flow,” Neurosurgery 11(Suppl 2), 252–258 (2015). [CrossRef]   [PubMed]  

22. D. W. Roberts, P. A. Valdés, B. T. Harris, K. M. Fontaine, A. Hartov, X. Fan, S. Ji, S. S. Lollis, B. W. Pogue, F. Leblond, T. D. Tosteson, B. C. Wilson, and K. D. Paulsen, “Coregistered fluorescence-enhanced tumor resection of malignant glioma: relationships between δ-aminolevulinic acid-induced protoporphyrin IX fluorescence, magnetic resonance imaging enhancement, and neuropathological parameters. Clinical article,” J. Neurosurg. 114(3), 595–603 (2011). [CrossRef]   [PubMed]  

23. P. A. Valdés, A. Kim, F. Leblond, O. M. Conde, B. T. Harris, K. D. Paulsen, B. C. Wilson, and D. W. Roberts, “Combined fluorescence and reflectance spectroscopy for in vivo quantification of cancer biomarkers in low- and high-grade glioma surgery,” J. Biomed. Opt. 16(11), 116007 (2011). [CrossRef]   [PubMed]  

24. K. M. Tichauer, K. S. Samkoe, J. R. Gunn, S. C. Kanick, P. J. Hoopes, R. J. Barth, P. A. Kaufman, T. Hasan, and B. W. Pogue, “Microscopic lymph node tumor burden quantified by macroscopic dual-tracer molecular imaging,” Nat. Med. 20(11), 1348–1353 (2014). [CrossRef]   [PubMed]  

25. K. S. Samkoe, K. M. Tichauer, J. R. Gunn, W. A. Wells, T. Hasan, and B. W. Pogue, “Quantitative In Vivo Immunohistochemistry of Epidermal Growth Factor Receptor Using a Receptor Concentration Imaging Approach,” Cancer Res. 74(24), 7465–7474 (2014). [CrossRef]   [PubMed]  

26. E. R. Kandel, J. H. Schwartz, and T. M. Jessell, Principles of Neural Science (McGraw-Hill Companies, Inc., New York, N.Y., 2000).

27. G. Wald, “The molecular basis of visual excitation,” Nature 219(5156), 800–807 (1968). [CrossRef]   [PubMed]  

28. M. Niccoli, “Geophysics tutorial: how to evaluate and compare color maps,” Leading Edge (Tulsa Okla.) 33, 910–912 (2014). [CrossRef]  

29. K. Moreland, “Diverging color maps for scientific visualization,” in Advances in Visual Computing, G. Bebis, R. Boyle, B. Parvin, D. Koracin, Y. Kuno, J. Wang, R. Pajarola, P. Lindsrom, A. Hinkenjann, M. L. Encarnacao, C. T. Silva, and D. Coming, eds. (Springer-Verlag Berlin Heidelberg, 2009).

30. B. E. Rogowitz and A. D. Kalvin, “The ‘Which Blair Project’: a quick visual method for evaluating perceptual color maps,” in Proceedings of IEEE Visualization 2001, 183–190 (2001).

31. J. M. Wolfe, “Guided Search 2.0 A revised model of visual search,” Psychon. Bull. Rev. 1(2), 202–238 (1994). [CrossRef]   [PubMed]  

32. H. E. Egeth and S. Yantis, “Visual attention: control, representation, and time course,” Annu. Rev. Psychol. 48(1), 269–297 (1997). [CrossRef]   [PubMed]  

33. I. Rock and D. Gutman, “The effect of inattention on form perception,” J. Exp. Psychol. Hum. Percept. Perform. 7(2), 275–285 (1981). [CrossRef]   [PubMed]  

34. M. M. Chou, E. S. Ho, and Y. H. Lee, “Prenatal diagnosis of placenta previa accreta by transabdominal color Doppler ultrasound,” Ultrasound Obstet. Gynecol. 15(1), 28–35 (2000). [CrossRef]   [PubMed]  

35. K. Kitajima, Y. Suenaga, Y. Ueno, T. Kanda, T. Maeda, N. Makihara, Y. Ebina, H. Yamada, S. Takahashi, and K. Sugimura, “Value of fusion of PET and MRI in the detection of intra-pelvic recurrence of gynecological tumor: comparison with 18F-FDG contrast-enhanced PET/CT and pelvic MRI,” Ann. Nucl. Med. 28(1), 25–32 (2014). [CrossRef]   [PubMed]  

36. K. Fukuda, H. Kataoka, N. Nakajima, J. Masuoka, T. Satow, and K. Iihara, “Efficacy of FLOW 800 with indocyanine green videoangiography for the quantitative assessment of flow dynamics in cerebral arteriovenous malformation surgery,” World. Neurosurg. 83(2), 203–210 (2015). [CrossRef]   [PubMed]  

37. E. L. Jewell, J. J. Huang, N. R. Abu-Rustum, G. J. Gardner, C. L. Brown, Y. Sonoda, R. R. Barakat, D. A. Levine, and M. M. Leitao Jr., “Detection of sentinel lymph nodes in minimally invasive surgery using indocyanine green and near-infrared fluorescence imaging for uterine and cervical malignancies,” Gynecol. Oncol. 133(2), 274–277 (2014). [CrossRef]   [PubMed]  

38. N. Tagaya, H. Aoyagi, A. Nakagawa, A. Abe, Y. Iwasaki, M. Tachibana, and K. Kubota, “A novel approach for sentinel lymph node identification using fluorescence imaging and image overlay navigation surgery in patients with breast cancer,” World J. Surg. 35(1), 154–158 (2011). [CrossRef]   [PubMed]  

39. S. L. Troyan, V. Kianzad, S. L. Gibbs-Strauss, S. Gioux, A. Matsui, R. Oketokoun, L. Ngo, A. Khamene, F. Azar, and J. V. Frangioni, “The FLARE intraoperative near-infrared fluorescence imaging system: a first-in-human clinical trial in breast cancer sentinel lymph node mapping,” Ann. Surg. Oncol. 16(10), 2943–2952 (2009). [CrossRef]   [PubMed]  

40. B. E. Schaafsma, J. S. Mieog, M. Hutteman, J. R. van der Vorst, P. J. Kuppen, C. W. Löwik, J. V. Frangioni, C. J. van de Velde, and A. L. Vahrmeijer, “The clinical use of indocyanine green as a near-infrared fluorescent contrast agent for image-guided oncologic surgery,” J. Surg. Oncol. 104(3), 323–332 (2011). [CrossRef]   [PubMed]  

41. Q. T. Nguyen, E. S. Olson, T. A. Aguilera, T. Jiang, M. Scadeng, L. G. Ellies, and R. Y. Tsien, “Surgery with molecular fluorescence imaging using activatable cell-penetrating peptides decreases residual cancer and improves survival,” Proc. Natl. Acad. Sci. U.S.A. 107(9), 4317–4322 (2010). [CrossRef]   [PubMed]  

42. E. S. Olson, T. Jiang, T. A. Aguilera, Q. T. Nguyen, L. G. Ellies, M. Scadeng, and R. Y. Tsien, “Activatable cell penetrating peptides linked to nanoparticles as dual probes for in vivo fluorescence and MR imaging of proteases,” Proc. Natl. Acad. Sci. U.S.A. 107(9), 4311–4316 (2010). [CrossRef]   [PubMed]  

43. J. V. Frangioni, “In vivo near-infrared fluorescence imaging,” Curr. Opin. Chem. Biol. 7(5), 626–634 (2003). [CrossRef]   [PubMed]  

44. S. L. Gibbs-Strauss, K. A. Nasr, K. M. Fish, O. Khullar, Y. Ashitate, T. M. Siclovan, B. F. Johnson, N. E. Barnhardt, C. A. Tan Hehir, and J. V. Frangioni, “Nerve-highlighting fluorescent contrast agents for image-guided surgery,” Mol. Imaging 10(2), 91–101 (2011). [PubMed]  

45. S. Kim, Y. T. Lim, E. G. Soltesz, A. M. De Grand, J. Lee, A. Nakayama, J. A. Parker, T. Mihaljevic, R. G. Laurence, D. M. Dor, L. H. Cohn, M. G. Bawendi, and J. V. Frangioni, “Near-infrared fluorescent type II quantum dots for sentinel lymph node mapping,” Nat. Biotechnol. 22(1), 93–97 (2004). [CrossRef]   [PubMed]  

46. E. G. Soltesz, S. Kim, R. G. Laurence, A. M. DeGrand, C. P. Parungo, D. M. Dor, L. H. Cohn, M. G. Bawendi, J. V. Frangioni, and T. Mihaljevic, “Intraoperative sentinel lymph node mapping of the lung using near-infrared fluorescent quantum dots,” Ann. Thorac. Surg. 79(1), 269–277 (2005). [CrossRef]   [PubMed]  

47. P. S. Adusumilli, D. P. Eisenberg, Y. S. Chun, K. W. Ryu, L. Ben-Porat, K. J. Hendershott, M. K. Chan, R. Huq, C. C. Riedl, and Y. Fong, “Virally directed fluorescent imaging improves diagnostic sensitivity in the detection of minimal residual disease after potentially curative cytoreductive surgery,” J. Gastrointest. Surg. 9, 1138–1146 (2005).

48. M. Hutteman, J. S. Mieog, J. R. van der Vorst, G. J. Liefers, H. Putter, C. W. Löwik, J. V. Frangioni, C. J. van de Velde, and A. L. Vahrmeijer, “Randomized, double-blind comparison of indocyanine green with or without albumin premixing for near-infrared fluorescence imaging of sentinel lymph nodes in breast cancer patients,” Breast Cancer Res. Treat. 127(1), 163–170 (2011). [CrossRef]   [PubMed]  

49. Q. T. Nguyen and R. Y. Tsien, “Fluorescence-guided surgery with live molecular navigation--a new cutting edge,” Nat. Rev. Cancer 13(9), 653–662 (2013). [CrossRef]   [PubMed]  

50. M. Jermyn, K. Kolste, J. Pichette, G. Sheehy, L. Angulo-Rodríguez, K. D. Paulsen, D. W. Roberts, B. C. Wilson, K. Petrecca, and F. Leblond, “Macroscopic-imaging technique for subsurface quantification of near-infrared markers during surgery,” J. Biomed. Opt. 20(3), 036014 (2015). [CrossRef]   [PubMed]  

51. P. A. Valdés, F. Leblond, V. L. Jacobs, B. C. Wilson, K. D. Paulsen, and D. W. Roberts, “Quantitative, spectrally-resolved intraoperative fluorescence imaging,” Sci. Rep. 2, 798 (2012). [CrossRef]   [PubMed]  

52. G. M. van Dam, G. Themelis, L. M. Crane, N. J. Harlaar, R. G. Pleijhuis, W. Kelder, A. Sarantopoulos, J. S. de Jong, H. J. Arts, A. G. van der Zee, J. Bart, P. S. Low, and V. Ntziachristos, “Intraoperative tumor-specific fluorescence imaging in ovarian cancer by folate receptor-α targeting: first in-human results,” Nat. Med. 17(10), 1315–1319 (2011). [CrossRef]   [PubMed]  

53. C. Chi, J. Ye, H. Ding, D. He, W. Huang, G. J. Zhang, and J. Tian, “Use of indocyanine green for detecting the sentinel lymph node in breast cancer patients: from preclinical evaluation to clinical validation,” PLoS One 8(12), e83927 (2013). [CrossRef]   [PubMed]  

54. B. Rogowitz, A. D. Kalvin, A. Pelah, and A. Cohen, “Which trajectories through which perceptually uniform color spaces produce appropriate color scales for interval data?” in The Seventh Color Imaging Conference: Color Science, Systems, and Applications (Society for Imaging Science and Technology, 1999).

55. S. H. Keller, S. Holm, A. E. Hansen, B. Sattler, F. Andersen, T. L. Klausen, L. Højgaard, A. Kjær, and T. Beyer, “Image artifacts from MR-based attenuation correction in clinical, whole-body PET/MRI,” MAGMA 26(1), 173–181 (2013). [CrossRef]   [PubMed]  

56. D. W. Hosmer and S. Lemeshow, Applied Logistic Regression (Wiley-Interscience, New York, 2000).

57. A. A. Boxwala, J. Kim, J. M. Grillo, and L. Ohno-Machado, “Using statistical and machine learning to help institutions detect suspicious access to electronic health records,” J. Am. Med. Inform. Assoc. 18(4), 498–505 (2011). [CrossRef]   [PubMed]  

58. M. Carpelan-Holmström, J. Louhimo, U. H. Stenman, H. Alfthan, H. Järvinen, and C. Haglund, “Estimating the probability of cancer with several tumor markers in patients with colorectal disease,” Oncology 66(4), 296–302 (2004). [CrossRef]   [PubMed]  

59. S. Gao, S. B. Mondal, N. Zhu, R. Liang, S. Achilefu, and V. Gruev, “Image overlay solution based on threshold detection for a compact near infrared fluorescence goggle system,” J. Biomed. Opt. 20(1), 016018 (2015). [CrossRef]   [PubMed]  

60. P. A. Valdes, V. L. Jacobs, B. C. Wilson, F. Leblond, D. W. Roberts, and K. D. Paulsen, “System and methods for wide-field quantitative fluorescence imaging during neurosurgery,” Opt. Lett. 38(15), 2786–2788 (2013). [CrossRef]   [PubMed]  

61. B. W. Pogue, S. Gibbs-Strauss, P. A. Valdés, K. Samkoe, D. W. Roberts, and K. D. Paulsen, “Review of Neurosurgical Fluorescence Imaging Methodologies,” IEEE J. Sel. Top. Quantum Electron. 16(3), 493–505 (2010). [CrossRef]   [PubMed]  

62. T. Nakajima, M. Mitsunaga, N. H. Bander, W. D. Heston, P. L. Choyke, and H. Kobayashi, “Targeted, activatable, in vivo fluorescence imaging of prostate-specific membrane antigen (PSMA) positive tumors using the quenched humanized J591 antibody-indocyanine green (ICG) conjugate,” Bioconjug. Chem. 22(8), 1700–1705 (2011). [CrossRef]   [PubMed]  

63. N. Thekkek, T. Muldoon, A. D. Polydorides, D. M. Maru, N. Harpaz, M. T. Harris, W. Hofstetter, S. P. Hiotis, S. A. Kim, A. J. Ky, S. Anandasabapathy, and R. Richards-Kortum, “Vital-dye enhanced fluorescence imaging of GI mucosa: metaplasia, neoplasia, inflammation,” Gastrointest. Endosc. 75(4), 877–887 (2012). [CrossRef]   [PubMed]  

64. K. J. Rosbach, M. D. Williams, A. M. Gillenwater, and R. R. Richards-Kortum, “Optical molecular imaging of multiple biomarkers of epithelial neoplasia: epidermal growth factor receptor expression and metabolic activity in oral mucosa,” Transl. Oncol. 5(3), 160–171 (2012). [CrossRef]   [PubMed]  

65. E. Reinhard, W. Heidrich, P. Debevec, S. Pattanaik, G. Ward, and K. Myszkowski, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting (Morgan Kaufmann, 2010).

66. H. Seetzen, W. Heidrich, W. Stuerzlinger, G. Ward, L. Whitehead, M. Trentacoste, A. Ghosh, and A. Vorozcovs, “High dynamic range display systems,” ACM Trans. Graph. 23(3), 760–768 (2004). [CrossRef]  

67. F. Drago, K. Myszkowski, T. Annen, and N. Chiba, “Adaptive Logarithmic Mapping For Displaying High Contrast Scenes,” Comput. Graph. Forum 22(3), 419–426 (2003). [CrossRef]  

68. V. Dutt and J. F. Greenleaf, “Adaptive speckle reduction filter for log-compressed B-scan images,” IEEE Trans. Med. Imaging 15(6), 802–813 (1996). [CrossRef]   [PubMed]  

69. S. Keereweer, J. D. Kerrebijn, P. B. van Driel, B. Xie, E. L. Kaijzel, T. J. Snoeks, I. Que, M. Hutteman, J. R. van der Vorst, J. S. Mieog, A. L. Vahrmeijer, C. J. van de Velde, R. J. Baatenburg de Jong, and C. W. Löwik, “Optical image-guided surgery--where do we stand?” Mol. Imaging Biol. 13(2), 199–207 (2011). [CrossRef]   [PubMed]  

70. A. Raabe, P. Nakaji, J. Beck, L. J. Kim, F. P. Hsu, J. D. Kamerman, V. Seifert, and R. F. Spetzler, “Prospective evaluation of surgical microscope-integrated intraoperative near-infrared indocyanine green videoangiography during aneurysm surgery,” J. Neurosurg. 103(6), 982–989 (2005). [CrossRef]   [PubMed]  

71. G. B. Hanna, S. M. Shimi, and A. Cuschieri, “Task performance in endoscopic surgery is influenced by location of the image display,” Ann. Surg. 227(4), 481–484 (1998). [CrossRef]   [PubMed]  

72. G. B. Hanna, S. M. Shimi, and A. Cuschieri, “Randomised study of influence of two-dimensional versus three-dimensional imaging on performance of laparoscopic cholecystectomy,” Lancet 351(9098), 248–251 (1998). [CrossRef]   [PubMed]  

73. Y. Liu, Y. M. Zhao, W. Akers, Z. Y. Tang, J. Fan, H. C. Sun, Q. H. Ye, L. Wang, and S. Achilefu, “First in-human intraoperative imaging of HCC using the fluorescence goggle system and transarterial delivery of near-infrared fluorescent imaging agent: a pilot study,” Transl. Res. 162(5), 324–331 (2013). [CrossRef]   [PubMed]  

74. B. Funt, F. Ciurea, and J. McCann, “Retinex in Matlab,” in Eighth Color Imaging Conference: Color Science and Engineering Systems, Technologies, Applications (Society for Imaging Science and Technology, 2000), pp. 112–121.

75. E. H. Land and J. J. McCann, “Lightness and retinex theory,” J. Opt. Soc. Am. 61(1), 1–11 (1971). [CrossRef]   [PubMed]  

Figures (11)

Fig. 1 (A) The number of publications on “fluorescence-guided surgery” or “fluorescence-guided resection” in the past 25 years, showing the exponential growth of the field. (B) The Novadaq SPY Elite fluorescence imaging system, which has been at the forefront of the effort to expand fluorescence guided surgery capabilities, leading the commercial market. (C) Laparoscopic image acquired under white light and (D) by exciting indocyanine green, which has been pseudocolored blue and overlaid onto (C), showing a novel use of this approach for tissue perfusion imaging. (Source: Luigi Boni, MD [6])
Fig. 2 (A) The sensitivity of human photoreceptors to different wavelengths of light (NB: the abscissa is defined on a logarithmic scale). (B) The emission spectra of four common FGS fluorophores: fluorescein sodium (FS), protoporphyrin IX (PpIX), IRDye® 800CW, and indocyanine green (ICG). (C) The CIE 1931 x,y chromaticity map showing the sRGB gamut used by most LED and LCD monitors, the trajectory of the exemplary color map (koufonisi), and the gamut representing brain tissue. The koufonisi color map is perceptually balanced and has mid-to-high colors that circumscribe the brain tissue gamut, giving uniform chromatic contrast. The average brain tissue gamut was characterized from intracranial images acquired from 10 patients, of which (D) is an example.
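The color map trajectory plotted in Fig. 2(C) can be computed directly from the map's sRGB entries. The following is a minimal MATLAB sketch (requiring the Image Processing Toolbox); since the koufonisi map itself is not reproduced here, the built-in parula map stands in as a placeholder.

% Sketch: compute and plot a color map's trajectory in the CIE 1931
% x,y chromaticity plane (cf. Fig. 2C). parula stands in for koufonisi.
cmap = parula(256);                % N-by-3 sRGB triplets in [0,1]
XYZ  = rgb2xyz(cmap);              % sRGB -> CIE XYZ (D65 white point)
den  = sum(XYZ, 2);                % X + Y + Z for each map entry
x = XYZ(:,1) ./ den;               % chromaticity x = X/(X+Y+Z)
y = XYZ(:,2) ./ den;               % chromaticity y = Y/(X+Y+Z)
plot(x, y, 'k-', 'LineWidth', 1.5);
xlabel('CIE x'); ylabel('CIE y');
title('Color map trajectory in CIE 1931 chromaticity space');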
Fig. 3 (A) A representative sample of color maps used in medical imaging overlays (available in the OiM Overlay GUI), which include sequential, diverging, and categorical palettes. Examples of their use from the literature include: (B) Doppler ultrasound image of blood flow in placenta previa using an opaquely-overlaid diverging hot/cold color map centered about a luminance nadir [34], (C) axial fused PET/MR image, with FDG SUV values encoded by a hot color map and blended with a uniform transparency function [35], (D) Zeiss OPMI Infrared 800 blood flow module showing a pseudocolor representation of ICG wash-in delay during arteriovenous malformation surgery [36], and (E) the resection of a sentinel lymph node detected using ICG in laparoscopic surgery [37].
Fig. 4 (A) The visual fluorescence emission from PpIX under blue-light excitation on the Zeiss Pentero OPMI 800 surgical microscope during glioma resection, and (B) the RGB image acquired with white-light (~5500 K) illumination. (C) The PpIX concentration map recovered using hyperspectral imaging. (D) The [PpIX] visualized using the multivariate koufonisi color map and overlaid on the RGB image using the logistic function in (H) [max = 0.78, midpoint = 11.6 μg/ml, k = 11.8]. (E) The same information as in panel (C) but visualized using the myCarta cube1 color map, and (F) as a single-value [RGB (7, 246, 64)] color map blended into the RGB image with the same transparency function (panel H). (G) The 1931 CIE x,y chromaticity plot showing the trajectories of the three color maps and the gamut of the RGB image.
Fig. 5 (A) Color map image overlay of quantitative fluorescence (qFI) during ALA-induced PpIX human glioma resection [51]. (B) Intensity image of folate conjugated to fluorescein isothiocyanate (FITC), pseudocolored and overlaid onto an RGB image during ovarian cancer resection [52]. (C) Pseudocolored ICG overlay during breast cancer lymph node resection [53]. (D) Sentinel lymph node mapping of non-small cell lung cancer metastasis, pseudocolored and overlaid onto an RGB image intraoperatively [46].
Fig. 6 The four transparency functions, or look-up tables, widely used in creating overlays.
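For illustration, the four shapes referenced across Figs. 3 and 8 and in Ref. [59] (uniform, threshold, linear, and logistic) can be generated as alpha look-up tables in a few lines of MATLAB; the parameter values below are illustrative rather than those used in the figure.

% Sketch: four candidate alpha look-up tables for overlay blending
% (cf. Fig. 6). Parameter choices here are purely illustrative.
x = linspace(0, 1, 256);                        % normalized measurement value
alphaUniform  = 0.5 * ones(size(x));            % constant 50% blending
alphaThresh   = double(x > 0.4);                % binary threshold at 0.4
alphaLinear   = x;                              % linear ramp
alphaLogistic = 1 ./ (1 + exp(-12*(x - 0.5)));  % logistic, k = 12, x0 = 0.5
plot(x, [alphaUniform; alphaThresh; alphaLinear; alphaLogistic]);
legend('uniform', 'threshold', 'linear', 'logistic', 'Location', 'northwest');
xlabel('Normalized value'); ylabel('\alpha (opacity)');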
Fig. 7 The relationship between normal distributions of measured parameter values in normal and tumor regions, and the true positive rate (TPR) if that value were selected as the clinical threshold for diagnosis. The resulting TPR logistic functions are shown for (A) scarcely overlapping and (B) greatly overlapping distributions.
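The TPR curves in Fig. 7 can be reproduced from first principles: for a candidate threshold t, the TPR is the fraction of the tumor distribution lying above t. A minimal MATLAB sketch with illustrative (not figure-matched) distribution parameters:

% Sketch: TPR as a function of diagnostic threshold for a normally
% distributed tumor parameter (cf. Fig. 7). Values are illustrative.
muT = 15; sigmaT = 3;                              % tumor distribution (ug/ml)
t = linspace(0, 30, 300);                          % candidate thresholds
TPR = 0.5 * erfc((t - muT) / (sigmaT * sqrt(2)));  % P(tumor value > t)
plot(t, TPR);
xlabel('Threshold (\mug/ml)'); ylabel('True positive rate');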
Fig. 8 The same fluorescence map overlaid using a uniform color map but with different transparency functions: (A) logistic function with x0 = 5 μg/ml, (B) logistic function with x0 = 10 μg/ml, (C) logistic function with x0 = 15 μg/ml, and (D) linear function intersecting the point (x = 14 μg/ml, y = 50%). How the transparency function is defined has a large effect on the perceived margin of malignant tissue, highlighting the need for standardization.
Fig. 9 Lymphatic uptake of fluorophore in a mouse, shown as the green overlay on grayscale white-light images. The leftmost column shows the original image and each subsequent column shows a processed image. Window-level adjustment with contrast-limited adaptive histogram equalization (CLAHE) was applied to the original images for comparison with log-compressed images. Also shown are well plates containing fluorophore concentrations spanning three orders of magnitude in concentration and fluorescence intensity, along with histograms corresponding to the lymphatic uptake images. White arrows indicate a lymph vessel that is difficult to detect at early time points without log compression. Scale bars: 5 mm.
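A minimal sketch of the comparison in Fig. 9, assuming a raw high dynamic range fluorescence frame loaded from a hypothetical file; logarithmic compression and CLAHE (MATLAB's adapthisteq, Image Processing Toolbox) are shown side by side.

% Sketch: log compression vs. CLAHE for a high dynamic range
% fluorescence image (cf. Fig. 9). 'fluorescence.tif' is hypothetical.
F = double(imread('fluorescence.tif'));              % raw counts, e.g., 12-22 bit
Flog = mat2gray(log10(1 + F));                       % log-compress, rescale to [0,1]
Fwl  = adapthisteq(mat2gray(F), 'ClipLimit', 0.01);  % CLAHE on linearly scaled data
imshowpair(Flog, Fwl, 'montage');                    % display the two side by side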
Fig. 10 (A) An early prototype of a heads-up display (HUD) unit integrated into a clinical operating microscope, allowing the augmented information to be displayed within the surgeon's microscope view. (B) An example contour representation of the data in Fig. 8 at a threshold of 70%, outlining the region for the surgeon. (C) A density point-cloud representation of the same data.
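A contour outline such as that in Fig. 10(B) can be generated with standard tools; a minimal MATLAB sketch, assuming a white-light RGB image base and a coregistered scalar map C (both hypothetical inputs):

% Sketch: contour outline of a scalar overlay at a 70% threshold
% (cf. Fig. 10B). `base` and `C` are assumed, coregistered inputs.
imshow(base); hold on;
contour(C ./ max(C(:)), [0.7 0.7], 'g', 'LineWidth', 2);  % single 70% level
hold off;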
Fig. 11 (A) Screenshot of the main window of the Overlay GUI. A number of sliders, radio buttons, and drop-down menus enable the user to quickly make fully customized color overlays, selecting from 18 different color maps and the four transparency functions discussed in Section 4. (B) The normalized scalar magnitude vs. red, green, and blue values, as well as the lightness (L*), for the present color map (koufonisi), along with the Pyramid Test for lightness uniformity [28]. (C) The 1931 CIE x,y chromaticity plot showing the gamut of the current bottom image (contour plot overlay) and the trajectory of the current color map through the color space.
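The lightness display in Fig. 11(B) amounts to checking that L* varies smoothly and monotonically along the map. A minimal MATLAB check (Image Processing Toolbox), again with parula standing in for koufonisi:

% Sketch: lightness (L*) profile of a color map (cf. Fig. 11B).
cmap  = parula(256);               % stand-in for the koufonisi map
Lab   = rgb2lab(cmap);             % sRGB -> CIE L*a*b*
Lstar = Lab(:,1);                  % lightness channel
plot(linspace(0, 1, 256), Lstar);
xlabel('Normalized scalar value'); ylabel('Lightness L*');
fprintf('L* monotonically increasing: %d\n', all(diff(Lstar) > 0));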

Equations (1)

$$ f(x) = \frac{L}{1 + e^{-k(x - x_0)}} $$
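Applied per pixel, this logistic function yields the opacity map used for the blended overlays of Figs. 4 and 8. A minimal MATLAB sketch, assuming a coregistered white-light RGB image base and a PpIX concentration map C in μg/ml (both hypothetical inputs; parula stands in for the actual color map):

% Sketch: logistic-transparency overlay (cf. Fig. 4D and 4H). Inputs
% `base` (RGB white-light image) and `C` (scalar concentration map,
% ug/ml) are assumed to exist and be coregistered.
Lmax = 0.78; x0 = 11.6; k = 11.8;                   % parameters quoted in Fig. 4
alpha = Lmax ./ (1 + exp(-k * (C - x0)));           % per-pixel opacity in [0, Lmax]
cmap  = parula(256);                                % stand-in color map
rgbO  = ind2rgb(gray2ind(mat2gray(C), 256), cmap);  % scalar -> pseudocolor overlay
fused = alpha .* rgbO + (1 - alpha) .* im2double(base);  % alpha blend (R2016b+)
imshow(fused);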