Optica Publishing Group

Adaptive automated sinogram normalization for ring artifacts suppression in CT

Open Access

Abstract

Ring artifacts pose a major barrier to obtaining precise reconstructions in computed tomography (CT). Their presence complicates the automatic processing of CT reconstruction results, such as segmentation, correction of geometric shapes, and alignment of reconstructed volumes. Although numerous efficient methods for suppressing ring artifacts exist, many of them are manual. At the same time, a large proportion of the automatic methods handle the target task unsatisfactorily while demanding considerable computational capacity. The current work introduces a projection data preprocessing method for suppressing ring artifacts that constitutes a compromise among the outlined aspects: automaticity, high efficiency and computational speed. Derived as an automation of the classical sinogram normalization method, the proposed method's specific advantages are its adaptability to the filtered sinograms and its edge-preservation property, demonstrated in experiments on both synthetic and real CT data. On challenging open-access data, the method has achieved quality comparable to that of advanced methods, demonstrating a 70.4% ring artifacts suppression percentage (RASP) quality metric. Applied to our real laboratory CT data, the proposed method yielded a significant refinement of reconstruction quality that was not surpassed by a range of compared manual ring artifacts suppression methods.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In computed tomography (CT), the procedure of collecting projection data of the irradiated object under study is impossible without a position-sensitive detector [1-4]. At the same time, in practice the detector is far from an ideal registration device: it regularly becomes miscalibrated [5,6]. Moreover, the technical malfunction of individual pixels of the detector, which also arises in the course of operation, cannot be ruled out [6,7].

Detector’s defects cause the appearance of so-called ring artifacts in the reconstructed image. Ring artifacts are distortions in the form of circular arcs (most frequently, rings or semi-rings) of increased or decreased constant or varying intensity throughout the object’s reconstruction (Fig. 1). They have a common center coinciding with the point of intersection of the reconstructed object’s slice with the object’s rotation axis. Ring artifacts usually take the form of closed rings if the object is full-circle rotated, i.e. its rotation angle ranges from 0$^{\circ }$ up to 360$^{\circ }$, and half-rings when the object is rotated at angles in the half-circle range 0$^{\circ }$-180$^{\circ }$ [8]. In the general case, ring artifacts manifest as the arcs of circles whose angular measures correspond to the object’s rotation angle range.


Fig. 1. Reconstruction of the central slice of a real bell pepper data package: a) reconstruction with ring artifacts; b) reconstruction with suppressed ring artifacts.


Each row of a sinogram image, by definition, contains the values registered by a fixed detector row while the object is probed at one of the rotation angles. In the sinogram domain, ring artifacts transform into stripes of locally extremal intensity known in the literature as stripe, band or orthotropic artifacts [8,9] (Fig. 2). The characteristic stripes of ring artifacts are often barely visible to the naked eye. For the latter reason, it is difficult to detect the presence of stripe artifacts and to determine their precise position in the sinogram image.


Fig. 2. Example of a sinogram with stripe artifacts. Rectangular frames mark the enlarged part of the sinogram before and after sinogram correction (ring artifacts suppression in sinogram domain). a) Original sinogram with stripe artifacts, b) Sinogram without stripe artifacts, c) Absolute pixel-wise difference of sinograms before and after correction.


In addition to the technical imperfections of the registration device, the appearance of ring artifacts in the reconstructed image might be brought about by inaccuracies in measurements at different angular positions of the source-detector system, the presence of impurities in the scintillation crystal, dust on the scintillation screen, or fluctuations in the operating mode of the X-ray tube [1,2,9,10]. The detector's dead pixels generate strong isolated rings in the reconstruction, whereas miscalibrated detector cells (this explains another name of ring artifacts: non-uniformity artifacts) produce weaker, wider rings or groups of rings [11-14]. Impurities in the scintillation crystal or dust on the scintillation screens might initiate the appearance of strong isolated (usually narrow) rings throughout the reconstructed image [11,13,14].

Ring artifacts significantly degrade the quality of the reconstruction image, making it unsuitable for further analysis by the researcher. In this regard, to obtain a high-quality digital image of the reconstruction of the object’s internal structure, the utilization of ring artifacts suppression means is crucial. Due to the widespread application of the CT method, for instance, in industry [15-23] and medicine [24-29], automatic software methods for suppressing ring artifacts are considered to be especially imperative as they facilitate the tomograph setup’s utilization by specialists of various qualification levels [30].

2. Related ring artifacts suppression methods

Most existing ring artifacts suppression methods can be roughly divided into three major groups:

  • 1. Hardware methods;
  • 2. Projection data preprocessing methods;
  • 3. Reconstruction postprocessing methods.

Hardware ring artifacts suppression methods involve deliberately changing the parameters of the tomographic measuring circuit while scanning the sample under study. Within this group, it has been proposed, for example, to vary the angular velocities of the rotating X-ray source and detector during the collection of projection data [31]. Rotating the X-ray source at a constant angular velocity while simultaneously modulating the detector block’s angular velocity according to a harmonic law (such an evolution of the detector block’s velocity is called sinusoidal modulation) may reduce ring artifacts in the reconstruction.

Some hardware methods are based on shifting the detector or object in certain horizontal steps whilst collecting projection data [32]. With such organization of the tomographic measurement procedure, the object is projected onto different areas of the detector window after shifting its rotation axis or the detector itself. Thereby, an individual detector pixel occurs to be covered by projections of different parts of the object and the influence of non-uniform detector pixels’ responses is equilibrated.

Hardware methods can effectively suppress ring artifacts, but they require complex preparation of the measuring circuit, which may not always be possible. A significant drawback of the hardware methods’ group lies in the substantial reduction of the reconstruction’s spatial resolution unless the displacements of the detector or the object are accurately estimated. Approximately specified shift values might contribute to the appearance of severe rotation axis artifacts in the reconstruction [33].

The most common methods of ring artifacts suppression are methods of preprocessing the measured projection data and postprocessing the reconstructed image (groups 2 and 3 in the list above). The group of projection image preprocessing methods (group 2) includes, firstly, the so-called flat field correction methods, which are normally performed by acquiring bright field and dark field images: these are defined as images of the detector response with the X-ray source turned on or off, respectively, while the object is removed. Within the classical flat field correction procedure, a dark field image is subtracted from each projection image and the projection is normalized, that is, the projection image is divided pixel-wise by the pixel-wise difference between the bright field and dark field images. Provided several bright and dark field images are available, a piecewise linear transformation of the projection images’ pixel intensity values might be employed instead of a linear normalization transformation [34]. As a generalization of the described approach, non-linear splines are of potential use; the splines’ parameters are to be determined from the obtained bright and dark field images. Ring artifacts preprocessing methods based on the characterization of scintillator variations are akin to the flat field correction methods in terms of the applied theoretical apparatus [35]. Some other distinctive methods appealing to the notion of bright and dark field images have been proposed in the works [36,37]. Notwithstanding the flat field correction methods’ efficiency and simplicity, the availability of bright and dark field images turns out to be their essential requirement, which limits the correction methods’ applicability. Registration of bright and dark field images takes additional time for the researcher and may be constrictive or not always possible.
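As a rough illustration of the classical flat field correction just described, the following NumPy sketch (the function name and the `eps` guard are our own illustrative choices, not from the paper) normalizes a projection by the bright/dark field difference and applies the logarithmic transform:

```python
import numpy as np

def flat_field_correct(projection, bright, dark, eps=1e-6):
    """Classical flat field correction with logarithmic transform.

    projection, bright, dark: 2D arrays of identical shape; `eps` is an
    illustrative guard against division by zero and the log of zero.
    """
    # Subtract the dark field and divide pixel-wise by the
    # bright/dark field difference (flat field normalization)
    norm = (projection - dark) / np.maximum(bright - dark, eps)
    # Logarithmic transform (Beer-Lambert attenuation)
    return -np.log(np.clip(norm, eps, None))
```

A piecewise linear or spline-based generalization would replace the single linear normalization above with a transform fitted to several bright and dark field images.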

Furthermore, projection data preprocessing methods include approaches that filter the projection data with low-pass filters prior to the reconstruction procedure. On this path, modifications of the classic Ramachandran-Lakshminarayan (briefly, Ram-Lak or ramp) filter [38] are put into use within the convolution stage of integral reconstruction algorithms. The capacity of the Hann filter [39] and its modifications [38,40,41] to suppress weakly pronounced rings has been repeatedly studied. When applying integral reconstruction algorithms, sinogram convolution with a Butterworth filter in the frequency domain can also be harnessed to suppress ring artifacts [42]. Despite its greater computational complexity, the Butterworth filter may be preferable to a sharp cutoff filter due to the latter’s tendency to introduce noise caused by the slow decay of the function $\text {sinc} \:x=\frac {\sin x}{x}=O(\frac {1}{x})$ as $x \longrightarrow \infty$ in the spatial domain.
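A minimal sketch of a Hann-apodized ramp filter applied row-wise to a sinogram in the frequency domain might look as follows (function names are illustrative; the exact apodization used in the cited works may differ):

```python
import numpy as np

def hann_ramp_filter(n):
    """Ramp (Ram-Lak) filter apodized with a Hann window over the
    normalized frequency range [-0.5, 0.5)."""
    freqs = np.fft.fftfreq(n)                          # normalized frequencies
    ramp = np.abs(freqs)                               # Ram-Lak response |f|
    hann = 0.5 * (1.0 + np.cos(2.0 * np.pi * freqs))   # vanishes at f = +-0.5
    return ramp * hann

def filter_sinogram_rows(sinogram, filt):
    """Apply a 1-D frequency-domain filter to every sinogram row."""
    spectrum = np.fft.fft(sinogram, axis=1)
    return np.real(np.fft.ifft(spectrum * filt[np.newaxis, :], axis=1))
```

The Hann window suppresses the highest frequencies, which softens both noise and weakly pronounced stripe components compared to the bare ramp.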

The sinogram normalization method [12,43-46] represents another widespread projection data preprocessing method for ring artifacts suppression. Its pivotal construct is the sinogram averaged along its angular dimension or, in other words, the sinogram mean row, i.e. the array whose elements are calculated as the arithmetic mean of the attenuated intensities registered by the observed detector row’s pixels over the range of angles the object was rotated at. Within the sinogram normalization method, the sinogram stripe artifacts are modeled as stripes of constant intensity. The method rests on the assumption that the ideal sinogram mean row, free of stripe artifacts, may be obtained by filtering the actually measured sinogram mean row with some known filter, for example, a Gaussian filter. Thus, to correct the sinogram and eliminate its stripe artifacts, the pixel intensity fluctuation error associated with the presence of ring artifacts along the detector row is evaluated as the pixel-wise difference (or quotient, within the multiplicative model [46]) of the unfiltered and filtered sinogram mean rows. Further, the estimated additive (alternatively, multiplicative) ring artifacts error might be subtracted from each row of the sinogram being processed (or each sinogram row is divided pixel-wise by the multiplicative error [46]) (Fig. 3). To smooth the sinogram mean row, various approaches have been adopted: arithmetic averaging over a certain window, median filtering, Gaussian filtering, morphological filtering, and filtering in the frequency domain using filters such as the Ram-Lak and Hann filters. The weak point of the sinogram normalization method, which is rather uncomplicated to implement, is the incomplete suppression of ring artifacts in the reconstruction. This occurs because the method filters strongly and weakly expressed stripe artifacts in the sinogram images equally. In addition, the coarse Gaussian filtering usually incorporated into the sinogram normalization method blurs the edges of the reconstructed image. Filtering every sinogram of the object with the same filter (with the same parameter values) also degrades the reconstruction quality, since a filter of constant shape ignores the object’s features in the sinogram domain.
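The classical additive sinogram normalization scheme can be sketched as follows (a minimal NumPy implementation; the Gaussian width `sigma` is the manually chosen parameter whose fixed value constitutes the method's weak point discussed above):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel with radius 3*sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def normalize_sinogram(sinogram, sigma=3.0):
    """Additive sinogram normalization (cf. Fig. 3): estimate the stripe
    error from the mean row and subtract it from every sinogram row."""
    mean_row = sinogram.mean(axis=0)                    # average over angles
    kernel = gaussian_kernel(sigma)
    radius = (len(kernel) - 1) // 2
    padded = np.pad(mean_row, radius, mode='edge')      # avoid border dips
    smooth = np.convolve(padded, kernel, mode='valid')  # 'ideal' mean row
    error = mean_row - smooth                           # stripe component e
    return sinogram - error[np.newaxis, :]              # corrected sinogram
```

A constant-intensity stripe (a spike in the mean row) is largely removed, but a single `sigma` for all sinograms cannot adapt to the object morphology, which is precisely what the proposed method later automates.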


Fig. 3. Sinogram normalization approach: the initial sinogram $\widetilde {P}$, arithmetically averaged along the axis $\theta$, forms an array $\overline {\widetilde {P}}$ whose value distribution is presented in the figure. The result of filtering the array $\overline {\widetilde {P}}$, designated as $\overline {P}$, is then pixel-wise subtracted from the sinogram mean row $\overline {\widetilde {P}}$. The obtained difference, denoted $E$, is subtracted in the following sinogram normalization step from each row of the initial sinogram, which results in the stripe artifacts suppressed sinogram $P$.


Towards amending the sinogram normalization method, one may filter not the entire sinogram mean row, but only the part in which stripe artifacts were detected [13,14,47]. Thus, the procedure for correcting stripe artifacts in a sinogram domain consists of several steps: detection of stripe artifacts and subsequent filtering of the sinogram mean row in the areas of detected artifacts [14,48,49] or detection of stripe artifacts position, their classification into several groups (for example, artifacts induced by dead pixels, uncalibrated pixels, artifacts from variable intensity of the X-ray source radiation) and subsequent filtering of the sinogram mean row areas of detected grouped stripe artifacts so that separate filters are applied to different separable groups of artifacts [11]. Classifying detected stripe artifacts into groups according to the degree of their expression in the sinogram images proves to be a non-trivial task. For highly effective suppression of stripe artifacts, threshold classification based on the values of the sinogram mean row when identifying groups of stripe artifacts often appears to be insufficient and more sophisticated techniques are of demand. Incidental classification inaccuracies of stripe artifacts groups result in the formation of side-effect ring artifacts.

Also, the methods for preprocessing projection data include some other fairly efficient methods based on the identification of stripe artifacts in the sinogram and their subsequent removal from the sinogram image. For this purpose, the wavelet transform, in particular, turns out to be beneficial [8,50]: a wavelet transform is performed over the sinogram, and the transformation components corresponding to wavelet basis functions whose support elongates along the vertical direction in the image are selected (this selected, vertically oriented part of the wavelet transform components will be called the vertical component). Next, the vertical component, which is commonly associated with the present sinogram stripe artifacts, is either nulled or filtered out in the frequency domain, after which an inverse wavelet transform is performed, yielding a sinogram with considerably less pronounced stripe artifacts. It should be noted that methods for correcting sinogram stripe artifacts by optimizing various functions of the sinogram related to its statistical characteristics are also highly efficient in combating reconstruction ring artifacts [51,52]. Some other preprocessing methods accumulate information on stripe artifacts by combining the bright and dark field images’ structure with peculiarities of the mean projection image [53].

A significant disadvantage of most previously created, relatively simple filtering-based preprocessing ring artifacts suppression methods is that they use filters that either require manual user settings or are constant for all of the object’s sinograms and thus do not take the morphological features of the object into account. The latter leads to the removal of the object’s structural details from the reconstruction image, blurring of the object’s boundaries and the appearance of additional undesired ring artifacts.

Reconstruction postprocessing methods (group 3) aim to suppress ring artifacts after the reconstruction is completed; there are also methods that suppress artifacts directly during the reconstruction procedure. For example, methods have been proposed that suppress rings in reconstructions by minimizing the isotropic or anisotropic total variation norm within iterative reconstruction algorithms [54]. Regularization in the form of anisotropic total variation might preserve the sharpness of images with high-frequency elements and reduce the severity of reconstruction ring artifacts.

In addition, reconstruction postprocessing methods often deal with the reconstruction image stripe artifacts in a polar coordinate system [55] (the center of the concentric rings may be determined by means of the circular Hough transform [56], whose acknowledged importance in image reconstruction, namely CT, is regularly underscored [57-59]). In such a coordinate system, the rings are transformed into parallel lines. Parallel lines derived from arc ring artifacts can be removed by utilizing the previously discussed filters noted within the sinogram preprocessing methods. Following such an approach, the wavelet transform, in particular, finds application in the polar coordinate system [60]: the vertical or horizontal component of the wavelet transform is either zeroed or filtered in the frequency domain (for instance, with a Gaussian filter), after which the inverse wavelet transform is to be performed. The transition back to the initial coordinate system yields an improvement in reconstruction quality. An indispensable requirement of postprocessing methods of such type is an accurate preliminary determination of the position of the center of the rings in the reconstruction image. Postprocessing methods, even though they do not require the immediate intervention of the researcher in the process of collecting the object’s projection data, can be applied only after the reconstruction procedure is completed, which often takes an unsatisfactorily long time. Furthermore, many postprocessing approaches inadvertently tend to reduce the resolution of the reconstructed image.
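The polar resampling step underlying such postprocessing methods can be sketched as below (nearest-neighbour interpolation for brevity; real implementations typically interpolate bilinearly and rely on the accurately estimated ring center mentioned above):

```python
import numpy as np

def to_polar(img, center, n_r=None, n_theta=360):
    """Resample a reconstruction slice into polar coordinates (r, theta),
    where concentric rings around `center` become straight lines."""
    cy, cx = center
    if n_r is None:
        n_r = min(img.shape) // 2
    r = np.arange(n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing='ij')
    # Nearest-neighbour lookup, clipped to the image bounds
    y = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    x = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    return img[y, x]
```

Once the rings are straightened, any of the stripe filters discussed for the sinogram domain can be applied before mapping back to Cartesian coordinates.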

Finally, machine and deep learning approaches today comprise promising solutions to the ring artifacts suppression problem [61-65]. Nowadays, the well-known transfer learning technique is being intensively availed of at every stage of CT data processing. For instance, to suppress ring artifacts, convolutional neural networks may be trained [61] whose input is both the original sinogram and the sinogram preprocessed with some classical ring artifacts filter, similar to the filters described above based on the wavelet transform [8,60]. Such neural network approaches demand reliable datasets of CT data with and without ring artifacts for training and validation, require large data storage, and frequently appear to be computationally expensive and hardly interpretable as well. Either way, neural network methods for suppressing ring artifacts are still under extensive development, and other novel neural network approaches with increasingly efficient reconstruction artifacts elimination are likely forthcoming. The current paper discusses exclusively classical, non-neural-network ring artifacts suppression methods, from which the method we propose has been derived.

Summing up the conducted overview, quite a large number of methods for suppressing reconstruction ring artifacts in CT have been elaborated. Note that a considerable proportion of the existing methods are not automated: they require either the direct intervention of the researcher into the procedure of collecting object measurements and its intentional change, or a scrupulous manual search for and selection of method parameters for the best reconstruction correction. Manual methods may be rather time-consuming. What is more, manual methods can be confidently used only by a specialist, while for other, unsophisticated users, optimal usage of a particular method might prove to be a daunting task. Some automatic methods are not universal with respect to the degree of expression of the ring artifacts, and their use may, conversely, trigger the appearance of new ring artifacts throughout the reconstruction volume. All this highlights the pressing necessity to develop easy-to-use automatic ring artifacts suppression tools.

3. Developed method’s description

We propose a method for automatically suppressing ring artifacts of an object’s reconstructed image using exclusively the object’s projection data (sinograms). Our method may be regarded as an automatic modification of an amalgam of the sinogram normalization method and the method based on the application of the guided filter [66]. In particular, the method appeals to the additive model of stripe artifacts arisen within the sinogram normalization method [12] (for the multiplicative model see [46]). More accurately, the actual measured real sinogram $s$ is considered to be equal to the pixel-wise sum of the ideal target sinogram $\widetilde {s}$ and the component of constant intensity stripes $e$ associated with ring artifacts, and it is assumed that the stripe component $e$ might be computed as the pixel-wise difference of the initial sinogram $s$ (or its version $\overline {s}$ averaged along the object rotation angle variable direction) with the result of its filtering with some smoothing filter $\mathcal {F}[s]$ (Fig. 3):

$$s = \widetilde{s} + e, \: e = s - \mathcal{F}[s],$$
or, in terms of the averaged sinogram row $\overline {s}$,
$$s = \widetilde{s} + \overline{e}, \: \overline{e} = \overline{s} - \mathcal{F}[\overline{s}],$$
where in the latter equation the addition sign implies pixel-wise summation of the array $\overline {e} \equiv e$ with each individual row of the sinogram $\widetilde {s}$.

Also, the elaborated method resembles the automated sinogram normalization method presented in the study [46]: both methods belong to the classical sinogram normalization paradigm [12]; in particular, they employ the mean sinogram row and follow the additive or multiplicative model described above (Eq. (1), (2)) and the flowchart in Fig. 3. However, the developed method is fully automated, while the resemblant one requires a single parameter to be manually adjusted. Moreover, our method favors bilateral filtering, which treats object edges gently, whereas the sinogram normalization based method [46] utilizes the mean average filter [67] for filtering the sinogram mean row, which produces edge-destructive blurring of the whole reconstruction when the filter width is not delicately chosen. In the flowchart of our method, the bilateral filtering parameter values are computed automatically and are strictly individual for each object sinogram, which allows the method to take the object morphology into account to a greater extent than seems possible when employing averaging filtering with a constant-width window [46]. The latter remark on the tendency toward reconstruction blurring remains valid for the sinogram normalization method [68] implemented with Gaussian filtering.

The method embodies a general concept that can be used to automate and expedite sinogram normalization based methods. Furthermore, the method presents a fast and efficient automatic preprocessing tool and does not require additional configuration by the user. The proposed method takes into consideration the specifics of the object’s morphological structure by processing each sinogram of the object with an individual filter (with individually and automatically determined filter parameter values), whereby an enhancement of the reconstruction quality may be achieved. The utilization of the proposed tool might precede either the stage of obtaining target measurements with the hardware complex (as part of the stage of equipment calibration using phantoms) or directly the step of reconstructing the internal structure of the object using the available projection data. Our method combats both single and grouped, narrow and wide reconstruction ring artifacts of constant and variable intensity.

The main steps of the proposed method are presented in the form of a flowchart below (Fig. 4). The following subsections are devoted to a detailed description of each of the method’s steps. Besides, the intermediate results of the method’s steps are visualized by the example of a projection data package of real bell pepper that has been measured with a laboratory tomograph.


Fig. 4. The flowchart encompassing the main steps of the developed method


3.1 Calculation of sinograms from object’s projection data

The method takes as input a projection images array of the object under study $\{p_k\}_{k=0}^{\Phi - 1}$, where $\Phi$ stands for the number of projection images in the array, $\Phi \geq 1$. Each projection image $p_k$ is a two-dimensional array (matrix), or equivalently a single-channel image. Here, the intensity value of the pixel with indices $(i,j)$ of the projection image $p_k$ will be denoted by $p_k(i,j)$, $i$ indicates the row number of the projection image, $j$ is the image column number, $0 \leq i < N, \: 0 \leq j < M, \: N \geq 1, \: M \geq 1$. The dimensions $(N,M)$ of all projection images are normally supposed to be the same and are determined by the resolution of the flat panel detector matrix. The number of projection images $\Phi$ exactly equals the number of angles at which the object was probed.

Projection images of an object can be raw, that is, unpreprocessed projection images registered by the detector, or normalized and logarithmically transformed with the use of bright and dark field images. Other preprocessing of the array of the object’s projection images is not excluded. Within the framework of the described method for automatic suppression of ring artifacts, both raw and preprocessed (in particular, normalized and logarithmically transformed) projection data can be utilized. Optional preprocessing of the projection data, which precedes the application of the ring artifacts suppression method, does not affect the outline of the method’s steps; therefore, in the following description we refer to the object’s projection images without specifying whether preprocessing has been performed. An example of a normalized and logarithmically transformed projection image taken from the bell pepper projection data is shown in the Fig. 5.


Fig. 5. An example of a normalized and logarithmically transformed projection image


A sinogram is generally understood as a two-dimensional array, or a two-dimensional single-channel image, the row values of which are the original (in the case of having measured raw projection data) or preprocessed (in the case of utilizing, for example, normalized and logarithmically transformed projection data) registered values of the same row of the detector matrix at different angles of object’s rotation. The sinogram array of the object $\{s_k\}_{k=0}^{N-1}$ is calculated from the projection data array of the object $\{p_k\}_{k=0}^{\Phi - 1}$ as follows: the value $s_k (i,j)$ of the pixel intensity with indices $(i,j)$ of the sinogram $s_k$ is determined by the formula

$$s_k (i,j)=p_i (k,j),$$
where $k$ is the sinogram number, $i$ indexes the object’s rotation angle, $j$ denotes the detector column number, $0 \leq i < \Phi, \: 0 \leq j < M$. The number of sinograms equals the number of rows of the detector matrix $N$; the sizes of all sinograms are the same and equal to $(\Phi,M)$.

The step Calculation of sinograms from object’s projection data of the method being described consists of converting an array of projection images (raw or preprocessed) $\{p_k\}_{k=0}^{\Phi - 1}$ into an array of sinograms $\{s_k\}_{k=0}^{N-1}$ according to the Eq. (3). An example of a sinogram is shown in Fig. 6. In the sinogram image, ring artifacts emerge in the form of bands of prominent intensity.
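In array terms, Eq. (3) amounts to a single axis swap of the projection stack. A minimal sketch, assuming the stack is stored as a $(\Phi, N, M)$ NumPy array:

```python
import numpy as np

def projections_to_sinograms(projections):
    """Eq. (3): s_k(i, j) = p_i(k, j).

    projections: array of shape (Phi, N, M) -- Phi projection images
    of N rows and M columns. Returns N sinograms of shape (Phi, M).
    """
    return np.ascontiguousarray(np.swapaxes(projections, 0, 1))
```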


Fig. 6. Example sinogram: pepper fruit sinogram


Sinograms can be computed both from preprocessed projection images, in particular from normalized and logarithmically transformed projection images, and from raw images. In the case of using raw projection images, the rows of the sinogram are precisely pixel-by-pixel equal to a certain row of the detector matrix when the object is probed at different angles. Further in the description of the method, when appealing to the concept of a sinogram, the fact of utilizing preprocessed or raw projection data for its calculation is not specified due to the immutability of the method in both cases. At the end of the current step, a calculated array of sinograms of the object $\{s_k\}_{k=0}^{N-1}$ is obtained.

Follow-up steps of the method are designed in such a manner that each sinogram is to be processed separately. Algorithmically, individual processing of sinograms corresponds to a loop through the entire array of sinograms (see Fig. 4). The loop runs a number of iterations equal to the number of sinograms $N$: at the $k$-th iteration, the $k$-th sinogram is to be individually processed. Next, the body of the loop is described, that is, in other words, the steps of the algorithm for processing one separate sinogram $s \equiv s_k$, produced within one $k$-th iteration of the loop, are delineated.

3.2 Step 1: calculation of the sinogram mean row (pixel-wise arithmetic mean of pixel intensities of the projection image rows)

The first step of the loop body involves arithmetic averaging of the sinogram $s$ values over its angular variable, which is equivalent to pixel-by-pixel arithmetic averaging of an individual row values of the detector matrix over different angles of the object having been probed.

A sinogram mean row (sinogram row values averaged over an angular variable that is the object rotation angle) is a one-dimensional array $\overline {s}$ of length $M$, the elements of which are determined by the formula

$$\overline{s}(j) = \frac{1}{\Phi} \sum_{\varphi = 0}^{\Phi-1}s(\varphi, j),$$
where $\Phi$ denotes the number of object’s rotation angles.

The Fig. 7 illustrates the distribution of the mean values $\overline {s}(j)$ of the sinogram presented in the Fig. 6.


Fig. 7. Example of the distribution of the sinogram mean values.


The sinogram mean row accumulates information about individual pixels of the detector matrix. Thereby, particularly, the points of local peaks of $\overline {s}$ as a 1D signal might be related to miscalibrated or broken pixels of the corresponding row of the detector matrix.
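A sketch of Eq. (4) together with a simple flagging of locally extremal mean-row pixels (the thresholding rule is purely illustrative and not part of the proposed method):

```python
import numpy as np

def sinogram_mean_row(sinogram):
    """Eq. (4): pixel-wise arithmetic mean over the angular axis."""
    return sinogram.mean(axis=0)

def flag_suspect_pixels(mean_row, z_thresh=3.0):
    """Flag mean-row pixels deviating strongly from their neighbours;
    such local peaks may stem from miscalibrated or dead detector pixels.
    The z-score threshold here is an illustrative choice."""
    padded = np.pad(mean_row, 1, mode='edge')
    local_median = np.median(
        np.stack([padded[:-2], padded[1:-1], padded[2:]]), axis=0)
    residual = mean_row - local_median
    scale = residual.std() + 1e-12          # guard against a flat row
    return np.abs(residual) > z_thresh * scale
```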

At the end of step 1 of the loop iteration, we acquire the averaged sinogram $\overline {s}$ (4), a one-dimensional array of length $M$. During the subsequent iteration steps, a filtered sinogram row $(Bilateral_{wing, \sigma _x, \sigma _i} \circ Median_{wing = 1})[\overline {s}]$ is to be computed. It has the property that its pixel-by-pixel difference with the original sinogram mean row estimates the additive error in the values registered by individual pixels of the detector matrix, caused in turn by miscalibration of the detector matrix pixels or by the presence of dead pixels in the detector matrix.

3.3 Step 2: filtering the sinogram mean row with a median filter

At the second step of the loop iteration, the sinogram mean row $\overline {s}$ is filtered with a median filter $Median_{wing=1}[\cdot ] \equiv Median[\cdot ]$ with a window size of 3 pixels (or, equivalently, a window wing size parameter equal to 1 pixel). The result of this filtering will be denoted by $Median[\overline {s}]$:

$$Median_{wing=1}[\overline{s}](i) \equiv Median[\overline{s}](i) = \begin{cases} median\{\overline{s}(i-1), \overline{s}(i), \overline{s}(i+1) \}, \: 0<i<M-1, \\ \frac{\overline{s}(0)+\overline{s}(1)}{2}, \: i =0 \: (M \geq 2), \\ \frac{\overline{s}(M-2)+\overline{s}(M-1)}{2}, \: i = M-1 \: (M \geq 2), \end{cases}$$
where notation $median\{ \mathcal {M} \}$ defines the median value of the finite numerical set $\mathcal {M}$, $\mathcal {M} \subset \mathbb {R}$.

The median filtering procedure $Median[\cdot ]$ (with a window size of 3 pixels) removes small isolated speckle noise (high-frequency noise) and salt-and-pepper noise (bipolar impulse noise) from the initial sinogram mean row $\overline {s}$; moreover, the median filter is not prone to blurring object edges.
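A direct NumPy transcription of Eq. (5), including its boundary convention, might look as follows (the function name is ours):

```python
import numpy as np

def median_filter_wing1(row: np.ndarray) -> np.ndarray:
    """Eq. (5): 3-pixel median filter of the sinogram mean row.

    Interior pixels take the median of their 3-neighbourhood; the two
    boundary pixels are replaced by the mean of the two outermost
    values (the array length M is assumed to be at least 2).
    """
    out = np.empty(len(row), dtype=float)
    out[0] = 0.5 * (row[0] + row[1])
    out[-1] = 0.5 * (row[-2] + row[-1])
    for i in range(1, len(row) - 1):
        out[i] = np.median(row[i - 1:i + 2])
    return out
```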

Thus, the result of step 2 is the median filtered sinogram mean row $Median[\overline {s}]$. Then, the computed array $Median[\overline {s}]$ is to be filtered with an adaptive one-dimensional bilateral filter in order to remove residual noise and errors in the values registered by miscalibrated or broken pixels of the detector matrix.

The resulting median filtered sinogram mean row array is shown in Fig. 8 (using the example sinogram in Fig. 6).


Fig. 8. The result of median filtering of the sinogram mean row. The enlarged part of the array is enframed with a rectangular area.


3.4 Step 3: automatic determination of bilateral filter parameters

At the third step of the loop iteration, we proceed to calculate the following parameters of the one-dimensional bilateral filter $Bilateral_{wing,\sigma _x,\sigma _i}[\cdot ]$ which is to be applied to the result of filtering by the median filter of the sinogram mean row $Median[\overline {s}]$:

  1. filter window wing size $wing$,
  2. spatial standard deviation of the filter $\sigma _x$,
  3. brightness (intensity) standard deviation of the filter $\sigma _i$.

The filter parameters are not specified by default and are proposed to be calculated individually for each sinogram, which allows one to efficiently take into account the structure of the target object while suppressing ring-like reconstruction artifacts. The algorithm for calculating the specified filter parameters is presented in the form of a flowchart in Fig. 9.


Fig. 9. Algorithm for calculating the parameters of a one-dimensional bilateral filter.


3.4.1 Calculating the wing size of a bilateral filter window

The wing size of the bilateral filter window $wing$ sets up the size of the filter window equal to $2 \cdot wing + 1$ pixels.

To determine the filter window wing size, we use the concept of the effective area of the sinogram: the minimum-width rectangular part of the sinogram image outside of which all sinogram pixel values are zero, with the height of the rectangle coinciding with the height of the sinogram image. In other words, the effective area of the sinogram is the rectangular area of minimum width that contains all pixels of non-zero intensity and is equal in height to the original sinogram image (see Fig. 10). The effective sinogram area is uniquely defined: otherwise, the intersection of two different effective sinogram areas would produce an effective sinogram area of smaller width.


Fig. 10. Effective area and effective width of the sinogram (on the example of the pepper sinogram).


The width of the effective area of the sinogram is called the effective width of the sinogram (see Fig. 10). We estimate the effective width $EffectiveWidth(s)$ of the sinogram $s$ using the formula:

$$EffectiveWidth(s) \approx {\#} \{i: \: \overline{s}(i)>threshold, \: 0\leq i<M\},$$
where the threshold value $threshold$ is defined by the expression
$$threshold=0.015 \cdot (\max(\overline{s})-\min(\overline{s}))+\min(\overline{s}),$$
$\max (\overline {s})$ and $\min (\overline {s})$ stand for the maximum and minimum elements of the sinogram mean row array $\overline {s}$ respectively, and ${\# } \mathcal {A}$ denotes the cardinality of the set $\mathcal {A}$. The minimum and maximum elements of the sinogram mean row array can be found in a single element-by-element pass over the array. Estimation of the effective width of the sinogram according to the proposed formula is computationally efficient, since it is performed not over the original sinogram $s$ but over its mean row $\overline {s}$.

Within the framework of the proposed algorithm for automatically determining the parameters of a one-dimensional bilateral filter utilized to filter the sinogram mean row filtered by a median filter $Median[\overline {s}]$, the one-dimensional bilateral filter wing is assumed to be equal to 0.55% of the effective width of the sinogram $EffectiveWidth(s)$:

$$wing=[0.0055 \cdot EffectiveWidth(s)]$$
with $[x]$ denoting the integer part of the real number $x$. The factors used in formulae (7), (8) have been empirically selected in the course of numerous experiments. The small multiplier 0.015 in formula (7) allows us to avoid taking projection data noise into account when calculating the effective sinogram width. The relatively small percentage 0.55% of the effective width $EffectiveWidth(s)$ in the formula for $wing$ (8), in turn, keeps the filtering windows reasonably small, which is most commonly efficient in practice.

Also in practice, it is natural to bound the set of valid values of the wing parameter from above, that is, to take $wing$ to be the minimum of $[0.0055 \cdot EffectiveWidth(s)]$ and some constant value $wing \_ max$ determined by the experimental conditions.
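A minimal sketch of formulae (6)-(8), with the upper bound $wing\_max$ passed as a parameter (its default value here is an assumption; in practice it is dictated by the experimental conditions):

```python
import numpy as np

def effective_width(s_bar: np.ndarray) -> int:
    """Eqs. (6)-(7): count the mean-row pixels above a threshold set
    1.5% of the dynamic range above the minimum value."""
    threshold = 0.015 * (s_bar.max() - s_bar.min()) + s_bar.min()
    return int(np.count_nonzero(s_bar > threshold))

def bilateral_wing(s_bar: np.ndarray, wing_max: int = 50) -> int:
    """Eq. (8): the wing is 0.55% of the effective width (integer part),
    clamped from above by wing_max."""
    return min(int(0.0055 * effective_width(s_bar)), wing_max)
```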

Setting up a filter window size proportional to the effective width of the sinogram allows one to take into account the extent of manifestation and the nature of sinogram noise during filtering.

3.4.2 Calculating the spatial standard deviation value of a bilateral filter

The value of the spatial standard deviation parameter $\sigma _x$ of a one-dimensional bilateral filter is proposed to be considered equal to one sixth of the filter window size, that is, in terms determined in the previous paragraph,

$$\sigma_x = \frac{2 \cdot wing + 1}{6}.$$

Following the three-sigma rule, with this choice of the spatial standard deviation $\sigma _x$, only a negligibly small probability mass of the distribution whose probability function defines the kernel of the one-dimensional bilateral filter falls outside the filter window. Indeed, the interval of length $6\cdot \sigma _x$ pixels, that is, $2 \cdot wing + 1$ pixels according to formula (9), centered at the filter window center, covers the whole filter window and, by the three-sigma rule, 99.7% of the probability mass. In other words, one may assume with high accuracy that the distribution whose probability function is given by the kernel of the one-dimensional bilateral filter with the suggested value (9) of the parameter $\sigma _x$ is completely concentrated in the spatial region of the filter window. The latter makes it possible to operate more accurately with the information of the neighbourhood of the considered pixel of the sinogram mean row during further filtering with the bilateral filter.

3.4.3 Calculating the brightness standard deviation value of a bilateral filter

As part of the subsequent filtering step – filtering with a one-dimensional bilateral filter $Bilateral_{wing, \sigma _x, \sigma _i}[\cdot ]$ of the $Median[\overline {s}]$ array, – the value of the brightness standard deviation parameter $\sigma _i$ is set equal to the following expression:

$$\sigma_i=0.95 \cdot \text{absmax}((Bilateral_{2 \cdot wing,2 \cdot \sigma_x- \frac{1}{6}, q} \circ Median) [\overline{s}]-Median[\overline{s}]).$$

Here, $q$ is set equal to the 0.9-quantile of the sample composed of the values of the sinogram mean row array $\overline {s}$ or of the pixel intensity values of the original sinogram $s$. Actually, the usage of the sinogram mean row $\overline {s}$ enables fast computation of the brightness standard deviation value and, besides, the replacement of $s$ by $\overline {s}$ in formula (10) does not degrade the proposed method’s quality, as our experiments have outlined. The difference of the arrays $(Bilateral_{2 \cdot wing,2 \cdot \sigma _x-\frac {1}{6}, q} \circ Median)[\overline {s}]$ and $Median[\overline {s}]$ should be regarded as the element-wise difference of the corresponding arrays, $\text {absmax}(m)$ indicates the absolute value of the element of the array $m$ having the maximal modulus ($\text {absmax}(m) \geq 0$), and the values of the wing parameter $wing$ and of $\sigma _x$ have been introduced in paragraphs 3.4.1 and 3.4.2. The exact factors provided in formula (10) have been empirically adjusted during the testing procedure of the proposed method.

To evaluate the value of $q$ using either $\overline {s}$ or $s$, the histograms of the arrays might be constructed from the values of not every pixel of the sinogram rows but, for example, of every second or third one, depending on the specific dimensions of the sinogram.

Note that as the value of the parameter $\sigma _i$ tends to positive infinity, the result of bilateral filtering $Bilateral_{wing,\sigma _x,\sigma _i}[\cdot ]$ tends element-by-element to the result of filtering with a Gaussian filter with parameters $wing$, $\sigma _x$. Therefore, if $q$ is sufficiently large, exceeding the range of the values of the arrays $\overline {s}$ or $s$ (depending on the method of calculating $q$), the result of filtering with the filter $Bilateral_{2 \cdot wing, 2 \cdot \sigma _x- \frac {1}{6}, q}[\cdot ]$ is close to the result of filtering with a Gaussian filter with wing size $2 \cdot wing$ and spatial standard deviation $2 \cdot \sigma _x - \frac {1}{6}$. Thus, the array difference $(Bilateral_{2 \cdot wing, 2 \cdot \sigma _x - \frac {1}{6}, q} \circ Median)[\overline {s}]-Median[\overline {s}]$ contains localized errors in the values registered by the detector, concentrated on several consecutive pixels of sinogram rows, caused by detector defects and not cleared out by the median filter $Median[\cdot ]$ (the median filter removes small “sparse” noise and errors of the detector response observed throughout the sinogram mean row). The last statement means that the value of $\sigma _i$, calculated according to the formula above, may be interpreted as the order of magnitude of the error caused by several consecutive uncalibrated or broken pixels of a detector matrix row.

Using the specified value of $\sigma _i$ makes it possible to fairly accurately estimate the order of magnitude of the error of the values recorded by localized groups of detector pixels and to effectively suppress ring artifacts, additionally taking into account an individual sinogram’s features and without generating extra ring-like reconstruction artifacts. The step of automatic calculation of the parameters of the subsequently applied bilateral filter is now completed.
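As noted above, for a sufficiently large brightness parameter the bilateral filter degenerates into a Gaussian one, so the inner filter $Bilateral_{2 \cdot wing, 2 \sigma _x - \frac {1}{6}, q}$ of formula (10) may be sketched with SciPy's `gaussian_filter1d` as a proxy (this substitution, together with the function name, is an assumption of the sketch; the exact method applies the bilateral filter itself):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def sigma_i_estimate(median_row: np.ndarray, wing: int, sigma_x: float) -> float:
    """Sketch of Eq. (10): sigma_i is 95% of the largest absolute
    deviation between the (near-Gaussian) smoothed mean row and the
    median-filtered mean row `median_row`."""
    sigma = 2.0 * sigma_x - 1.0 / 6.0
    # truncate is chosen so that the Gaussian window radius is 2 * wing,
    # matching the doubled wing of the inner filter in Eq. (10)
    smoothed = gaussian_filter1d(median_row, sigma=sigma,
                                 truncate=2.0 * wing / sigma)
    return 0.95 * float(np.max(np.abs(smoothed - median_row)))
```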

3.5 Step 4: filtering the sinogram mean row with a one-dimensional bilateral filter with automatically predetermined parameters

After median filtering of the sinogram mean row $\overline {s}$, which results in the array designated as $Median[\overline {s}]$, and automatically calculating the triple of the bilateral filter parameter values $wing$, $\sigma _x$, $\sigma _i$, the array $Median[\overline {s}]$ is filtered with the bilateral filter $Bilateral_{wing,\sigma _x, \sigma _i}[\cdot ]$ with the precalculated parameter values, individual for the sinogram $s$ processed at the current loop iteration. The result of the bilateral filtering is denoted by $(Bilateral_{wing, \sigma _x,\sigma _i} \circ Median)[\overline {s}]$. More specifically, the element values of the array $(Bilateral_{wing,\sigma _x,\sigma _i} \circ Median)[\overline {s}]$ are calculated as follows:

$$Bilateral_{wing, \sigma_x, \sigma_i}[m](i) \equiv (Bilateral_{wing, \sigma_x, \sigma_i} \circ Median)[\overline{s}](i) = $$
$$\frac{\sum_{w \in window} m(w) \cdot e^{-\frac{(w-i)^2}{2 \sigma_x^2}-\frac{(m(w)-m(i))^2}{2\sigma_i^2}}}{\sum_{w \in window} e^{-\frac{(w-i)^2}{2 \sigma_x^2}-\frac{(m(w)-m(i))^2}{2\sigma_i^2}}} $$
where $m$ for the sake of brevity denotes the array $Median[\overline {s}]$, $window = \{k: \: \max (i-wing, 0) \leq k \leq \min (i+wing, M-1) \}$ is the set of elements’ indices of the filtered array $m$ lying in the bilateral filter window, $0 \leq i \leq M-1$, $M$ equals the detector width, or the sinogram mean row width.

Bilateral filtering $Bilateral_{wing,\sigma _x, \sigma _i}[\cdot ]$ repairs the errors in the sinogram mean row values registered by the detector matrix pixels, which are caused by technical defects of the detector and left after median filtering $Median[\cdot ]$. What is more, bilateral filtering preserves information about the object’s edges, does not blur details of the object’s reconstruction image and does not lead to the appearance of extra rings in subsequent reconstruction. The utilization of automatically calculated values of the bilateral filter parameters brings a possibility to take into consideration the information about the features of the individual sinogram and retain the object’s morphology information from the sinogram mean row during bilateral filtering.
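Formula (11) admits a direct transcription; the sketch below (function name ours) truncates the window at the array boundaries exactly as the index set $window$ prescribes:

```python
import numpy as np

def bilateral_1d(m: np.ndarray, wing: int, sigma_x: float, sigma_i: float) -> np.ndarray:
    """Eq. (11): 1D bilateral filter combining a Gaussian spatial weight
    with a Gaussian brightness (range) weight."""
    M = len(m)
    out = np.empty(M, dtype=float)
    for i in range(M):
        # indices lying in the filter window, clipped to the array
        w = np.arange(max(i - wing, 0), min(i + wing, M - 1) + 1)
        weights = np.exp(-((w - i) ** 2) / (2.0 * sigma_x ** 2)
                         - ((m[w] - m[i]) ** 2) / (2.0 * sigma_i ** 2))
        out[i] = np.sum(weights * m[w]) / np.sum(weights)
    return out
```

On a constant array the filter acts as the identity, and for a small $\sigma _i$ it leaves sharp steps intact, mirroring the edge-preservation property discussed above.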

At the end of step 4 (Sec. 3.5) of the loop iteration processing a single sinogram image, one has the sinogram mean row filtered by the median and bilateral filters, $(Bilateral_{wing,\sigma _x,\sigma _i} \circ Median)[\overline {s}]$. An example obtained at the current step is presented in Fig. 11 (demonstrated on the example sinogram presented in Fig. 6).


Fig. 11. The result of filtering the sinogram mean row with a one-dimensional bilateral filter (after median filtering). The enlarged part of the array is framed with a rectangle.


3.6 Step 5: calculation of an array of error values registered by defective and miscalibrated pixels of the detector matrix

At step 5, an error array $e$ of length $M$ is calculated: the element-wise difference between the original sinogram mean row $\overline {s}$ and the sinogram mean row filtered by the median and bilateral filters $(Bilateral_{wing, \sigma _x, \sigma _i} \circ Median)[\overline {s}]$:

$$e(i) \equiv \overline{e} (i)=\overline{s}(i)-(Bilateral_{wing, \sigma_x,\sigma_i} \circ Median)[\overline{s}](i),$$
where $0 \leq i \leq M-1$ is the index number of the error array $e$ element.

The array $e$ contains errors in the registered values of the detector matrix pixels, which are caused by the presence of uncalibrated or broken detector pixels and hence trigger the appearance of ring artifacts of the target reconstruction.

The result of the current step is the calculated array of detector matrix pixel errors $e$ (an example is shown in Fig. 12 in the form of the error values distribution along the detector’s width: the height of a column corresponds to the error value registered by the corresponding pixel of the detector matrix, the direction of the column is associated with the sign of the error value; the sinogram from Fig. 6 was used).


Fig. 12. An example of an error values distribution along the detector width.


3.7 Step 6: pixel-wise subtraction of the errors array from each row of the original sinogram

In order to remove from the registered object data the error values initiated by broken and uncalibrated detector matrix pixels, the array of errors $e$ is pixel-wise subtracted from each row of the original sinogram $s$. We adhere to the notation $\hat {s}$ for the corrected sinogram (with detector matrix pixel errors removed):

$$\hat{s}(i,j) = s(i,j)-e(j),$$
where $i$ stands for the row number of the corrected sinogram, $j$ signifies the column number of the corrected sinogram, or the index number of the error array element, the number of projection angles $\Phi$ and the width of the detector matrix $M$ define the size of the corrected sinogram, $0 \leq i \leq \Phi -1$, $0 \leq j \leq M-1$. The dimensions of the original $s$ and corrected $\hat {s}$ sinograms are the same.

At the end of step 6, a sinogram $\hat {s}$ is obtained that does not contain erroneous values of defective detector matrix pixels: incorrect pixel values have been corrected. The sinogram $\hat {s}$ corresponds to measurements of the object as if an ideal detector had been used. Next, contrast correction of the evaluated corrected sinogram $\hat {s}$ is optionally performed.
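Steps 5 and 6 together amount to two vectorized lines; a sketch assuming the 2D array layout used earlier (angles along the first axis) and an already computed filtered mean row:

```python
import numpy as np

def correct_sinogram(s: np.ndarray, filtered_mean_row: np.ndarray) -> np.ndarray:
    """Eqs. (12)-(13): subtract the detector-pixel error array from
    every row of the sinogram.

    `s` has shape (Phi, M); `filtered_mean_row` is the length-M result
    of the median + bilateral filtering of the mean row."""
    e = s.mean(axis=0) - filtered_mean_row  # Eq. (12): error array
    return s - e[np.newaxis, :]             # Eq. (13): row-wise subtraction
```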

An example of the corrected sinogram $\hat {s}$ is presented in Fig. 13.


Fig. 13. An example of corrected sinogram (with stripe artifacts suppressed): corrected pepper sinogram.


3.8 Step 7: sinogram’s contrast correction

The contrast correction of the corrected sinogram $\hat {s}$ is performed by pixel-wise adding to it an image that is pixel-by-pixel proportional to the original sinogram $s$. Denoting by $\hat {\hat {s}}$ the contrast-corrected sinogram resulting from contrasting the sinogram $\hat {s}$, one can write:

$$\hat{\hat{s}}(i,j) = \hat{s}(i,j)+ \frac{mean(e)}{M} \cdot s(i, j),$$
where $i$ is the row number of the contrast-corrected sinogram $\hat {\hat {s}}$, $j$ signifies the column number of the contrast-corrected sinogram $\hat {\hat {s}}$, the multiplier $mean(e)= \frac {1}{M} \sum _{i=0}^{M-1}e(i)$ means the arithmetic mean value of the error array $e$, the number of projection angles $\Phi$ and the width of the detector matrix $M$ characterize the size of the contrast-corrected sinogram $\hat {\hat {s}}$, $0 \leq i \leq \Phi - 1$, $0\leq j \leq M-1$. The dimensions of the corrected sinogram $\hat {\hat {s}}$ and the original sinogram $s$ are equal.
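Formula (14) is likewise a one-liner; a sketch reusing the error array $e$ of step 5 (the function name is ours):

```python
import numpy as np

def contrast_correct(s: np.ndarray, s_hat: np.ndarray, e: np.ndarray) -> np.ndarray:
    """Eq. (14): add back a component proportional to the original
    sinogram `s`, scaled by mean(e) / M with M the detector width."""
    M = s.shape[1]
    return s_hat + (e.mean() / M) * s
```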

An example of a contrast-corrected sinogram $\hat {\hat {s}}$ is presented in Fig. 14.


Fig. 14. Example of a contrast-corrected sinogram: a contrast-corrected pepper sinogram.


The loop iteration of the automatic ring artifacts suppression method is now completed (see the flowchart in Fig. 4): the $k$-th iteration of the method’s loop ends with the calculation of the contrast-corrected sinogram $\hat {\hat {s}} \equiv \hat {\hat {s}}_k$ from the original sinogram $s \equiv s_k$.

A set of contrast-corrected sinograms $\{ \hat {\hat {s}}_k \}_{k=0}^{N-1}$, computed from the set of original sinograms $\{ s_k \}_{k=0}^{N-1}$ after executing all $N$ iterations of the loop of the ring artifacts suppression method, is used for high-quality reconstruction devoid of unwanted ring artifacts. Contrast-corrected sinograms might be converted, if necessary, into contrast-corrected projection images according to formula (3).

Figure 15 demonstrates a comparison of the initial sinogram and the final contrast-corrected sinogram obtained after performing an iteration of the loop of the developed method for automatic suppression of ring reconstruction artifacts. It can be seen that the contrast-corrected sinogram no longer contains the stripes of prominent intensity present in the original sinogram, which generate ring artifacts in the reconstruction domain. The component of the original sinogram containing stripe artifacts of prominent intensity, which is extracted and removed from the sinogram during the iteration of the algorithm loop, is shown in Fig. 15 c). Recall that sinogram stripe artifacts are often not visually distinguishable, which makes it somewhat challenging to manually correct sinograms to suppress ring-like reconstruction artifacts.


Fig. 15. Comparison of the initial and final contrasted sinograms: a) initial sinogram, b) corrected sinogram (result of the method loop iteration), c) pixel-wise difference of the initial and final corrected sinogram images. The pixel-by-pixel difference is presented in a cyan palette: the positive and negative components of the normalized sinogram difference are highlighted, the positive component is placed in the image channel corresponding to the red color, and the inverted negative component fills the image channels corresponding to the green and blue colors.


3.9 Reconstruction of the internal structure of an object or additional corrections of projection data

After completing all iterations of the method’s loop, a set of contrast-corrected sinograms $\{\hat {\hat {s}}_k \}_{k=0}^{N-1}$, containing proper values in place of those initially registered by defective pixels of the detector matrix, is obtained. As part of the current step, the contrast-corrected sinograms $\{\hat {\hat {s}}_k\}_{k=0}^{N-1}$ optionally undergo other corrections of projection data (in particular, correction of sinograms using bright-field and dark-field images). They are then utilized by the tomographic reconstruction algorithm to obtain an image of the internal structure of the object under study, targeted for the analysis of the tomographic reconstruction. The usage of contrast-corrected sinograms $\{\hat {\hat {s}}_k \}_{k=0}^{N-1}$ allows us to achieve a reconstruction of significantly higher quality than when merely utilizing the original sinograms $\{s_k\}_{k=0}^{N-1}$.

The algorithm for automatic suppression of ring reconstruction artifacts has been completed. Visualization of the method’s steps results is interspersed throughout the section body.

3.10 Computational complexity of the proposed method

To maintain the integrity of the method’s description, we address the asymptotic computational complexity of the presented automatic method for suppressing ring artifacts (that is, we estimate the number of arithmetic and comparison operations performed).

Within the previously introduced notation, computing a sinogram mean row $\overline {s}$ (Step 1) requires $O(\Phi \cdot M)$ operations, where $O(f(\Phi, N, M))$ standardly means that the number of operations can be uniformly upper bounded by the value $C \cdot f(\Phi, N, M)$ with some positive constant $C$ or, in other words, the number of operations is of order $f(\Phi, N, M)$ as arguments $\Phi, N, M$ are large enough.

Further, in the proposed method, median filtering (step 2) is performed with a window of constant size equal to 3 pixels, which gives $O(M)$ additional computational complexity. When determining for the 1D bilateral filtering such parameters as $threshold$, $wing$, and $\sigma _x$, it suffices to perform $O(M)$ operations by utilizing, in the most suboptimal case, an element-by-element comparison of the elements of the array $\overline {s}$ to calculate the maximum and minimum array values. And to produce bilateral filtering with wing size $wing$ of an array of size $M$, $O(wing \cdot M)$ operations are required. This gives $O(M)$ complexity due to the common practical restriction on the wing parameter $wing \leq wing \_ max$ with some constant bound value $wing \_ max$. Consequently, the calculation of the 0.9-quantile $q$ and of the filter brightness standard deviation $\sigma _i$ is characterized by complexity $O(M)$, and in total $O(M)$ operations are to be performed to implement step 3.

Similarly, step 4 needs $O(wing \cdot M)=O(M)$ arithmetic operations ($wing \leq wing \_ max$). Next, $O(M)$ operations are used for element-wise subtraction of the filtered and original sinogram mean rows (step 5). Compensation of the stripe artifacts (step 6) and optional sinogram contrasting (step 7) also take overall $O(M)$ arithmetic operations. As a result, the computational complexity of one method loop executed for one sinogram is equal to $O(\Phi \cdot M+M)=O(M\cdot (\Phi +1))$, or, equivalently, $O(M\cdot \Phi )$.

Suppressing stripe artifacts over the set of all $N$ object sinograms would therefore take $O(N \cdot M \cdot (\Phi +1)) = O(N \cdot M \cdot \Phi )$ asymptotic computational complexity. The resulting asymptotic computational complexity $O(N \cdot M \cdot \Phi )$, depending on the shape of the projection data array, exactly matches the complexity of the classic non-automatic sinogram normalization method for suppressing stripe artifacts. The latter suggests that it is possible to automate the sinogram normalization method (i.e., the selection of filtering parameters) while preserving its computational complexity; our method is an example of such automation.

4. Experiments and discussion

The described method has been evaluated on both synthetic and real data. Inputting this synthetic and real data into the reconstruction algorithms results in object reconstructions with superimposed ring artifacts if no data preprocessing or postprocessing is performed. To ascertain the place of the described method among other existing ring artifacts suppression methods, the authors have gauged its performance and compared it with that of other established, widespread preprocessing ring artifacts suppression methods, namely

  • guided filter based method [66],
  • wavelet FFT based method (put into the table below under the name “Remove stripe based wavelet FFT method”) [8],
  • FFT based method (“Remove stripe based FFT”) [42],
  • normalization based method (“Remove stripe based normalization”) [68],
  • regularization based method (“Remove stripe based regularization”) [69],
  • superior method for eliminating all types of stripe artifacts (“Remove all stripe”) [14].

For all compared methods, except for the proposed method and the guided filter based one, we used the implementations available in Algotom [70], a Python package containing tomographic data processing tools which are immensely valuable for method verification and further development. The annotations set up for the compared ring artifacts suppression methods inherit the names of the corresponding Algotom functions: for example, “Remove stripe based wavelet FFT method” refers to the part of the Algotom function name “remove_stripe_based_wavelet_fft”. For the guided filter based method implementation, we used the Python script annexed to the original study [66]. Concerning the proposed method, its implementations (in Python and C++) were performed using our closed libraries for fast image processing to achieve a realistic execution time together with an adequate comparison to the listed methods. For this reason, unfortunately, the implementation cannot be provided in open access, although we hope that our detailed flowcharts accompanied by plenty of step-by-step descriptions and discussions are enough to reproduce the results.

Almost all compared non-automated methods have parameters of the involved filtering, such as the type of the filter, the window size and, for example, the standard deviation value for the Gaussian smoothing filter. The FFT based method additionally uses parameters of the cutoff frequency, the filter order and the number of rows to which the filter is applied. Regularization parameters should be taken into consideration when tuning the regularization based and guided filter based methods. The wavelet-FFT based suppression method requires values of the wavelet decomposition level, filter size, wavelet type and high-pass filter type to be chosen. The list of parameters of the superior stripe artifacts suppression method includes the SNR ratio for stripe detection, the sizes of the median filters employed to filter out small and small-to-medium stripes, and the ratio of pixels to be dropped, which is used to reduce the possibility of false detection of stripes. The large number of parameters of the superior stripe artifacts elimination method makes it difficult to use quickly in research and industrial production scanning cycles. In the experiments below, the authors devoted a large amount of time to careful and accurate selection of the listed parameters of the methods.

The reconstructions of the synthetic and real projection data have been acquired via the Smart Tomo Engine v.2.0.0 software [71]. To be precise, for all addressed projection data packages, we adopted either the reconstruction algorithm based on the convolution and backprojection method, known as Filtered Back-Projection [72] (FBP, in parallel-beam model of X-ray beam propagation), or the Feldkamp-Davis-Kress algorithm [73] (FDK, within cone-beam model of X-ray beam propagation).

Apart from the visual control, to assess the quality of reconstruction with suppressed ring artifacts after employing the automatic ring artifacts suppression method, the following metrics of the reconstruction accuracy have been calculated:

  • the normalized root mean squared error (NRMSE) between the actual resultant reconstructed image and the reference ideal image, whose value is calculated according to the formula
    $$\text{NRMSE} = \frac{\sqrt{\sum_{i=0,j=0}^{n, m}(r_{ij}-\hat{r}_{ij})^2}}{\sqrt{\sum_{i=0,j=0}^{n, m}\hat{r}_{ij}^2}} \equiv \frac{\|r-\hat{r}\|_2}{\|\hat{r}\|_2},$$
    where the real reconstructed 2D image (or slice) is represented by the vector $r=(r_{ij})_{i=0,j=0}^{n,m}$, and $\hat {r}=(\hat {r}_{ij})_{i=0,j=0}^{n,m}$ refers to the ideal 2D image of the object’s inner structure (ideal image, or markup) of height and width equal to $n$ and $m$ pixels respectively; the notation $\| \cdot \|_2$ stands for the Euclidean vector norm. To measure the NRMSE value, the whole image region of interest (ROI) has been handled. A decrease (indicated by the corresponding arrow $\downarrow$ in the table below) of the NRMSE value signals approaching the desired ideal reconstruction and may indicate a reduction of reconstruction artifacts;
  • the peak signal-to-noise ratio [53,74,75] (PSNR) between two images, measured in decibels (dB). The PSNR metric deals with the ratio between the maximum possible signal power and the power of the distorting noise affecting the quality of its representation. The higher ($\uparrow$) the value of PSNR, the better the quality of the output reconstructed image;
  • the signal-to-noise ratio (SNR, dB) to compare the level of power of a meaningful signal attenuated by the object to the level of noise power [53]. A higher SNR value ($\uparrow$) is commonly associated with a clearer reconstructed image signal that is easier to interpret and analyze. In order to monitor the evolution of the SNR and PSNR values among the considered ring artifacts suppression methods, a certain ROI has been utilized: the ROI is depicted with the rectangular frame in the reconstructed images below;
  • the ring artifact suppression percentage (RASP) metric proposed in the study [36]. The RASP metric, evaluated for the pair of the uncorrected baseline reconstruction and the one with suppressed ring artifacts, quantifies the improvement of the standard deviation of pixel intensities along the ring patterns of the reconstructed image throughout a particular ROI with rather homogeneous background (the ROI is marked with the rectangular frame in the reconstruction images below and coincides with that picked for the PSNR and SNR estimation). To be precise, the RASP, as the percentage reduction in the standard deviation of the azimuthally averaged radial profile from the object’s rotation center in the homogeneous region, is evaluated according to the formula [36]
    $$\text{RASP} = \left(1 - \frac{\sigma_c}{\sigma_u} \right) \times 100{\%} = \left( 1 - \frac{\sqrt{\frac{1}{N} \sum_{i=1}^N(x_{c,i}-\mu_{c,i})^2}}{\sqrt{\frac{1}{N} \sum_{i=1}^N(x_{u,i}-\mu_{u,i})^2}} \right) \times 100{\%}.$$
    Indexes $c$ and $u$ refer to the corrected and uncorrected images, respectively. $N$ is the total number of radial bins (it depends on the chosen ROI), $x_i$ is the azimuthally averaged value within each bin, and $\mu _i$ is the mean value in each bin. Thus, $\sigma _c$ and $\sigma _u$ denote the standard deviations of the radial bin values for the corrected and the initial uncorrected reconstructed images. An increasing RASP value ($\uparrow$) corresponds to reconstruction refinement.
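The RASP formula above can be sketched in code as follows; the binning scheme (1-pixel-wide annuli around the rotation center) and all names are illustrative assumptions, not the reference implementation of [36]:

```python
import numpy as np

def rasp(uncorrected, corrected, center, radii):
    """Ring Artifact Suppression Percentage for a pair of reconstructions.

    Azimuthally averages pixel intensities into radial bins around the
    rotation center, then compares the standard deviations of the radial
    profiles of the corrected and uncorrected images.
    """
    cy, cx = center
    ys, xs = np.indices(uncorrected.shape)
    r = np.hypot(ys - cy, xs - cx)

    def radial_profile(img):
        # azimuthally-averaged value x_i within each radial bin (a, b)
        return np.array([img[(r >= a) & (r < b)].mean() for a, b in radii])

    sigma_u = radial_profile(uncorrected).std()
    sigma_c = radial_profile(corrected).std()
    return (1.0 - sigma_c / sigma_u) * 100.0
```

A perfectly corrected homogeneous region yields RASP = 100%, an unchanged image yields 0%, and a correction that worsens the ring pattern yields a negative value, as observed for some compared methods in Section 4.1.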

To measure these metrics, the Python scikit-image library functionality has been involved [76].
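For instance, the NRMSE and PSNR values can be obtained directly from the scikit-image metrics module; the toy images below are illustrative stand-ins, and the default `'euclidean'` normalization of `normalized_root_mse` matches the NRMSE formula above:

```python
import numpy as np
from skimage.metrics import normalized_root_mse, peak_signal_noise_ratio

# Illustrative stand-ins for the ideal markup and a noisy reconstruction
ideal = np.zeros((64, 64))
ideal[16:48, 16:48] = 1.0
rng = np.random.default_rng(0)
recon = ideal + 0.05 * rng.standard_normal(ideal.shape)

nrmse = normalized_root_mse(ideal, recon)                     # lower is better
psnr = peak_signal_noise_ratio(ideal, recon, data_range=1.0)  # higher is better (dB)
```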

Experimental measurements of the methods’ execution time were conducted on an Arch Linux system with x86$\_$64 architecture, driven by an AMD Ryzen 9 7950X 16-Core processor clocked at 4.5 GHz, featuring a 64 MB L3 cache, and 128 GB of RAM. All measurements were executed in a single thread, and each measurement was repeated either 100 or 50 times (the relevant information is given in the captions of the tables with experimental results). The recorded time values were averaged, and the standard deviation was assessed. The recorded time corresponds to the computation duration of the compared ring artifacts suppression methods.

From here, the next two subsections are devoted to a description of the experiments conducted with synthetic and real data.

4.1 Experiments with synthetic data

To simulate synthetic projection data packages with stripe artifacts, we selected a porous phantom imitating structures of considerable interest in industrial applications. For the simulation, we harnessed a method guided by the idea of taking into account the discreteness of the detector response (for the simulation of strong isolated ring artifacts, see [77]). Within it, one supposes that a detector count covers a certain limited set of photons.
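A minimal sketch of how detector miscalibration produces such artifacts is given below; it uses a simple multiplicative gain error rather than the photon-discreteness model of [77], and all names and parameter values are illustrative:

```python
import numpy as np

def add_stripe_artifacts(sinogram, n_bad, gain_spread=0.05, seed=0):
    """Simulate detector miscalibration: a multiplicative gain error that is
    constant over all rotation angles produces full vertical stripes in the
    sinogram and hence full-circular rings in the reconstruction.
    """
    rng = np.random.default_rng(seed)
    n_angles, n_det = sinogram.shape
    bad = rng.choice(n_det, size=n_bad, replace=False)  # miscalibrated pixels
    gains = np.ones(n_det)
    gains[bad] = 1.0 + gain_spread * rng.standard_normal(n_bad)
    return sinogram * gains[np.newaxis, :]
```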

Based on the porous phantom markup, 7 projection data packages with varying degrees of ring artifact severity have been simulated (from slight to strong full stripes, see Fig. 16). All simulated ring artifacts are full-circular and characterized predominantly by a prominent constant intensity in the reconstruction domain. They imitate ring artifacts similar to those that usually appear when the detector is calibrated incorrectly. The types of superimposed reconstruction ring artifacts (isolated, groups of rings, etc.) have been designed to vary. The reconstructions of data packages 1, 2 and 4 contain mostly isolated sparse rings, both wide and narrow, while the reconstructions of the remaining packages are affected rather by dense groups of rings.


Fig. 16. Synthetic data (porous phantom) reconstructions with no ring artifacts suppression applied. Blue and violet rectangles indicate zoomed reconstruction images areas which are positioned to the right of the full size reconstructions.


In the notation below (for example, Fig. 17, Table 1), the sequence number of a projection data package reflects the degree of manifestation of ring artifacts when no additional ring artifacts suppression is used: the higher the data package sequence number, the more pronounced the simulated artifacts are intended to be, and hence the more destructive they appear in the resultant reconstruction image.


Fig. 17. Synthetic data (porous phantom) reconstructions obtained when employing different preprocessing ring artifacts suppression methods. Blue and violet rectangles indicate the reconstruction image areas which are zoomed and positioned to the right of the full size reconstructions, while the green one bounds the relatively homogeneous area over which the SNR, PSNR and RASP metrics values have been calculated. The rectangular area for calculating metrics is such that it contains an inscribed circle near the center of the image with a radius of 20 pixels.



Table 1. Experiments with synthetic data distorted by ring artifacts. Green color specifies the best metric values registered among the compared methods, red color – the worst ones. Blue color distinguishes metrics values referred to the reconstructions with no ring artifacts suppression produced. The severity of ring artifacts and the variety of ring artifacts types imposed into the projection data packages increases with increasing serial data package number.

The projection data has been obtained within a parallel-beam X-ray propagation model, i.e. all X-rays are assumed to propagate in the medium parallel to each other, coming from an extended or an infinitely distant point source of radiation. Each projection represents a 1D array of length 512 pixels, corresponding to one of 720 object rotation angles uniformly distributed from $0^{\circ }$ to $360^{\circ }$ in increments of $0.5^{\circ }$, so the result of the reconstruction procedure is a 2D image of size 512 pix $\times$ 512 pix. The detector linear pixel size is set to $1.0 \: \mu \text {m}$.

The reconstruction algorithm employed is FBP [72]. The ideal phantom, as well as the results of reconstruction without and with preprocessing-type ring artifacts suppression methods, are displayed in Fig. 17. There, the rectangular frames indicate the zoomed ROI and the ROI for computing metrics values. The results of the metrics calculations have been tabulated (Table 1). The authors have calculated the PSNR, SNR and RASP metrics values over a green-framed ROI containing a circle of radius 20 pixels around the center of the reconstruction. Recall that these metrics values are found for the pair of the uncorrected reconstruction (Without correction) and the one corrected by the suppression method under consideration, while the NRMSE metric requires the pair of the ideal image and the one obtained after artifacts suppression. In addition, the parameters of the compared manual methods were scrupulously tuned by hand, which took a considerable portion of our research effort. As part of the normalization-based method, we utilized solely Gaussian filtering in order to expose blurring and other accompanying effects of the method.
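Under the stated parallel-beam geometry, the FBP reconstruction step can be sketched with scikit-image; a smaller 128-pixel disk phantom is used here for brevity (the paper's data uses 512-pixel projections over 720 angles), and the ramp filter applied by `iradon` by default corresponds to plain FBP:

```python
import numpy as np
from skimage.transform import radon, iradon

# Illustrative disk phantom (the paper uses a 512x512 porous phantom)
phantom = np.zeros((128, 128))
yy, xx = np.indices(phantom.shape)
phantom[(yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2] = 1.0

theta = np.arange(0.0, 360.0, 1.0)      # uniformly spaced rotation angles
sinogram = radon(phantom, theta=theta)  # parallel-beam projection data
recon = iradon(sinogram, theta=theta)   # FBP with the default ramp filter
```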

Analyzing the acquired reconstruction images, it may be noted that the proposed method demonstrates a significant improvement in visually assessed reconstruction quality, although it does not completely suppress artifacts in rather complicated cases. Residual rings may be visible, for instance, throughout the reconstruction of package 7 (right column, row 2, Fig. 17) around the reconstruction image center. Besides, the results of the developed method are not inferior to such well-known methods as the superior method for eliminating ring artifacts (“Remove all stripe”) [14] and the wavelet-FFT-based method (“Remove stripe based wavelet FFT method”) [8]. The normalization-based and regularization-based methods prove unable to delete all rings. What is more, the FFT-based and guided filter based methods give rise to extra isolated rings outside the initial area of the object localization and cause perceptible streak artifacts and fluctuations in intensity near the center of the object. The guided filter based method introduces tangential distortions near the inner spherical cavities of the object.

In the reconstructions obtained by our method, the edges of the object cavities are sharply preserved (see the central profiles of the reconstructed images after the compared methods were applied in Fig. 18), which is possibly due to the edge-preservation property of the utilized bilateral 1D filter. At the same time, when employing the normalization and regularization based methods, some blurring of the boundary circle of the object is observed.


Fig. 18. Central line profiles of the synthetic data package 1 reconstructions with compared ring artifacts suppression methods applied. Profiles correspond to the red lines depicted in the reconstruction domain. Red-colored rectangle bounds the zoomed area around one of the object edges.


Let us add that, being in some sense a modification of the fast normalization-based method along the path of automation, the proposed method is expeditious (this is guaranteed by extracting the bilateral filter parameters from the mean sinogram row, rather than from the full height sinogram).
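The mechanics of such a normalization-style correction with an edge-preserving 1D bilateral filter can be sketched as below; the parameter-derivation rule shown (adapting the range sigma to the roughness of the mean row) is an illustrative assumption, not the authors' exact procedure:

```python
import numpy as np

def bilateral_1d(signal, sigma_s, sigma_r, radius=None):
    """Edge-preserving 1D bilateral filter (brute-force sketch)."""
    if radius is None:
        radius = int(3 * sigma_s)
    out = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        spatial = np.exp(-0.5 * ((np.arange(lo, hi) - i) / sigma_s) ** 2)
        range_w = np.exp(-0.5 * ((window - signal[i]) / sigma_r) ** 2)
        w = spatial * range_w
        out[i] = np.sum(w * window) / np.sum(w)
    return out

def suppress_stripes(sinogram):
    """Normalization-style correction: smooth the mean sinogram row with a
    bilateral filter whose range parameter is derived from the row itself
    (an illustrative rule), then subtract the estimated stripe profile."""
    mean_row = sinogram.mean(axis=0)
    sigma_r = 3.0 * np.median(np.abs(np.diff(mean_row)))  # adapt to row roughness
    smoothed = bilateral_1d(mean_row, sigma_s=5.0, sigma_r=max(sigma_r, 1e-6))
    stripe_profile = mean_row - smoothed
    return sinogram - stripe_profile[np.newaxis, :]
```

Because the parameters are computed once from the mean row rather than per sinogram row, the adaptive step adds little overhead, which is consistent with the speed observed in Tables 2 and 3.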

The numerical values (Table 1) of the observed metrics are consistent with the conclusions drawn from the visual inspection. Monitoring of the selected metrics shows that, for projection data packages with a low degree of ring artifacts manifestation, the proposed method demonstrates results that are no worse than those obtained using the other methods. This confirms the relevance of the developed method given an adequate level of stripe artifacts presence in the sinogram data. Under a noticeably drastic manifestation of ring artifacts, the proposed method still allows one to achieve fast, automatic and acceptable quality of suppression (according to the NRMSE, PSNR and SNR metrics), although it does not always provide as large an improvement in RASP as the superior ring artifacts elimination method.

As the severity of the ring artifacts within the data package increases, the competition among the compared methods becomes tighter and more apparent. In particular, for some of the last projection data packages, the superior method for eliminating all types of ring artifacts and the wavelet-FFT-based, FFT-based and normalization-based approaches provide gains in individual metrics.

The worst metrics values for most packages are produced by the guided filter based method and the regularization-based one. Although the regularization method loses on certain packages in terms of the NRMSE, PSNR and SNR metrics, it yields a considerable gain in the RASP metric value. The latter might be explained by extra smoothing of the reconstruction near its center, which, in turn, may be connected with poorly matched method parameters.

Concerning data packages 1 and 3, negative RASP metric values were obtained after applying some of the compared methods. This means that the use of these methods (the guided filter based, FFT based and wavelet-FFT based methods) not only failed to reduce the manifestation of ring artifacts but, on the contrary, seemed to increase their severity. One may explain this observation by the weak manifestation of the synthetic ring artifacts in data packages 1 and 3. For a number of ring artifacts suppression methods, a prerequisite for their acceptable operation appears to be the actual presence of ring-like artifacts. Otherwise, when the target artifacts are absent or practically negligible, some suppression methods act overzealously, producing additional artifacts similar to ring ones. We also note that on data packages 1 and 3 the utilization of our proposed method did not lead to a deterioration in the reconstruction, which may indicate the adaptability of the method to the degree of manifestation of ring artifacts in the projection data.

In order to study the performance speed of the proposed method, the authors measured the operating time of the compared methods on synthetic data packages. To measure time, each method (implemented in the Python programming language) was run 100 times on each data package. The obtained time values were averaged, the standard deviation was estimated, and the values are given in the Table 2.
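The timing protocol can be sketched as follows (the helper name is illustrative); each method is run repeatedly in a single thread and the wall-clock samples are averaged:

```python
import time
import numpy as np

def time_method(fn, data, n_runs=100):
    """Single-threaded wall-clock timing repeated n_runs times,
    reported as mean and standard deviation in milliseconds."""
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn(data)
        samples.append((time.perf_counter() - t0) * 1e3)
    samples = np.asarray(samples)
    return samples.mean(), samples.std()
```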


Table 2. Time measurements (in milliseconds) of the operation of compared ring artifacts suppression methods (Python implementations) applied to synthetic data distorted by ring artifacts. For calculating mean time series of 100 measurements have been averaged for each data package. Green color specifies the shortest method operating time measured among the compared methods, red color – the longest operating time periods. The severity of ring artifacts and the variety of ring artifacts types imposed into the projection data packages increases with increasing serial data package number, i.e. data package 1 possesses slight stripe-like artifacts, whereas data package 7 is characterized by the severe stripe artifacts.

The Python implementation of the proposed automated normalization method showed the shortest operating time among the compared ring artifacts suppression methods. On average, its execution on synthetic data packages turned out to be 10 times faster than that of the superior method for eliminating ring artifacts (“Remove all stripe”). Our method also showed greater performance speed compared to the other methods, namely to the classical sinogram normalization method (“Remove stripe based normalization”) and guided filter based method (“Guided filter based method”).

The implementation of the proposed method (within Smart Tomo Engine tomographic software [71]) in the C++ programming language made it possible to gain an even more significant acceleration (Table 3): on synthetic data, the proposed method performed more than 38 times faster compared to the superior ring artifacts elimination method. Such performance speed may open up a promising opportunity towards fast and efficient automatic suppression of ring artifacts, for example, for the range of industrial purposes.


Table 3. Execution time measurements (in milliseconds) for the proposed method implemented in C++. For calculating mean time and its standard deviation series of 100 measurements have been used for each data package.

4.2 Experiments with real data

Aiming at an objective comparison of the developed method's efficiency with that of one of the state-of-the-art ring artifacts suppression methods [14] (“Remove all stripe”), the authors have addressed the challenging open-access synchrotron-collected X-ray micro-tomographic dataset for testing, demonstrating, and developing methods of removing ring artifacts [78]. The data was collected at the I12-JEEP beamline, Diamond Light Source, using the settings of 53 keV X-rays, 1800 projections, 3.2 $\mu \text {m}$ pixel size, and 1000 mm sample-detector distance [14,79].

We employed Smart Tomo Engine tomographic software [71] to produce a reconstruction of the sample data [78]. Preceding the direct ring artifacts auto-suppression and data reconstruction, a linear flat-field correction was applied to the projection data. An automatic rotation axis alignment [33,71] was also performed, which was possible thanks to the broad functionality of the Smart Tomo Engine software: a rotation axis misalignment of 6.25 pixels was found and compensated. Further, ring artifacts were suppressed and a targeted reconstruction utilizing the FBP algorithm was obtained. The outcome is shown in Fig. 19 (“Proposed method”).
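The linear flat-field correction step takes the standard form below (a sketch; the exact variant implemented in Smart Tomo Engine is not public, and the clipping constants are illustrative safeguards against division by zero and non-positive transmission):

```python
import numpy as np

def flat_field_correct(projection, flat, dark):
    """Linear flat-field correction: normalize each projection by the
    dark-corrected flat field, then take the negative logarithm to
    obtain attenuation sinogram values."""
    transmission = (projection - dark) / np.clip(flat - dark, 1e-9, None)
    return -np.log(np.clip(transmission, 1e-9, None))
```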


Fig. 19. Zenodo dataset reconstructions [78] (slice 1468) after the compared ring artifacts suppression methods have been applied. The blue ROI enframes the magnified region. The area involved in the PSNR, SNR and RASP metrics evaluation is contoured with the rectangle colored in green. The regions of pixel-by-pixel difference images of the reconstructions before and after ring artifacts suppression are presented in a cyan palette: the positive component of the difference is placed in the image channel corresponding to the red color, while the inverted negative component fills the image channels corresponding to the green and blue colors.


The reconstructions obtained after applying the wavelet-FFT-based method and the composite superior method [14] were contained in the sample data archive [78] and were duplicated by us from there. For the guided filter based suppression method, we tried to fit the guided filter parameters thoroughly so as to maximize the proportion of the stripe artifacts eliminated in the sinogram domain. Then, we proceeded with the flat-field correction, rotation axis alignment and FBP reconstruction procedure via the Smart Tomo Engine software, whose result is also placed in Fig. 19 (“Guided filter based method”).

To begin with, the elaborated method noticeably improved the reconstruction quality (compare with the reconstruction “Without correction”, where no preliminary suppression was applied). The resultant reconstruction with our ring artifacts suppression method is visually preferable to the reconstruction with the guided filter based suppression applied (as in the latter some ring-like artifacts persisted). Furthermore, it seems to be practically indistinguishable from the reconstruction utilizing the superior ring artifacts removal technique [14], which is one of the most advanced methods for suppressing ring artifacts.

In our reconstruction, in particular near the edges, some banding is visible: the bands are probably initiated by the artifacts of the rotation axis misalignment, and we were not able to completely get rid of them. In general, the same stripes can be seen in all the other reconstruction images.

The intensity range of our reconstruction images was clipped to that of the provided reconstruction images with the wavelet-FFT-based and superior ring artifacts suppression methods employed. Then, we evaluated the RASP metric value for the pair of the unmodified reconstruction with no rings suppressed and the one with the compared suppression methods utilized. The results of the evaluations may be found in Table 4. They align with the previously conveyed visual assessment.
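The clipping itself is a one-liner; the arrays below are illustrative stand-ins for our reconstruction and the provided reference reconstruction:

```python
import numpy as np

rng = np.random.default_rng(1)
reference = rng.uniform(0.0, 1.0, size=(64, 64))  # provided reconstruction
ours = rng.uniform(-0.2, 1.3, size=(64, 64))      # our reconstruction

# Clip to the reference intensity range so that metric values are comparable
clipped = np.clip(ours, reference.min(), reference.max())
```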


Table 4. RASP ($\uparrow$) metric measured for the pair of open-access projection dataset [78] reconstructions without and with ring artifacts suppression applied. The names of the compared methods entitle the table columns. Blue color refers to the RASP metric value measured for the reconstruction with the proposed ring artifacts method employed, whereas red and green colors indicate the worst and the best obtained metric values respectively when executing compared methods.

In fact, the superior ring artifacts suppression method output [14] is characterized by the best RASP metric value. The wavelet-FFT-based method reaches a slightly lower metric value. Although the proposed method's RASP metric value is approximately 16% less than that of the superior ring artifacts suppression method [14], such a percentage difference might be rather minor, as it appears to be virtually undetectable during visual assessment. We note that the proposed method managed to surpass the RASP metric mark set by the guided filter based ring artifacts suppression method.

In addition to the above, an analysis of the difference images of the reconstructions before and after ring artifacts suppression reveals that the proposed method tends to remove predominantly thin or medium-sized rings of uniform radial intensity. In parallel, the superior ring artifacts elimination method can be applied to a larger number of ring artifact types, especially semi-circular rings of variable intensity.
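The cyan-palette difference images of Fig. 19 can be reproduced roughly as below; the channel assignment follows the figure caption, while the normalization and the interpretation of "inverted" are our assumptions:

```python
import numpy as np

def signed_difference_rgb(before, after):
    """Visualize before-minus-after in a cyan palette: the positive part of
    the difference goes to the red channel and the inverted negative part
    fills the green and blue channels (sketch)."""
    diff = before - after
    scale = np.abs(diff).max()
    if scale == 0:
        scale = 1.0
    pos = np.clip(diff, 0, None) / scale
    neg = np.clip(-diff, 0, None) / scale
    rgb = np.empty(before.shape + (3,))
    rgb[..., 0] = pos        # removed intensity (e.g. rings) shows up red
    rgb[..., 1] = 1.0 - neg  # inverted negative component
    rgb[..., 2] = 1.0 - neg
    return rgb
```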

Besides, the proposed ring artifacts suppression method has proven to be a helpful tool in our own laboratory experiments [80]. Employing this method, we succeeded in automatically improving the quality of the reconstructions of the projection data collected with our laboratory tomographic scanner (compare the “Without correction” and “Proposed method” columns in Fig. 20). To demonstrate the reconstruction accuracy augmentation, we compared the reconstructions of several objects: a battery sample, a bell pepper and a head of garlic. The objects’ projection data has been measured within the cone-beam model of X-ray propagation, i.e. assuming that all X-ray beams emanate from a point-like X-ray source.


Fig. 20. The reconstruction images of the pepper, battery and garlic data collected under laboratory conditions. The figure demonstrates the resultant reconstructions without and with suppressed ring artifacts when one of the compared suppression methods was applied (for the method name see the column title). Blue ROIs enframe the magnified fragments of the reconstructions. Green rectangles mark the relatively homogeneous areas used to evaluate the reconstructions' RASP metric. In the bell pepper, battery sample and garlic head reconstruction images, the green ROIs bound circles of radii equal to 150 pix, 100 pix and 20 pix respectively, which demonstrate the metric's robustness to perturbations of the green ROI contour.


The alkaline AA battery’s projection data package contains 200 projections; each projection is of size 3000 pix $\times$ 3000 pix and corresponds to one of the battery’s rotation angles in the full-circle range of $0^\circ$ to $360^\circ$, with a rotation angle increment of $1.8 ^\circ$. In the experiment with the battery sample, the source-to-detector distance is 425 mm and the source-to-object distance is 76 mm; the cone magnification factor, defined as the ratio of the source-to-detector and the source-to-object distances, equals 5.592; the flat-panel detector pixels have a linear size of 139 $\mu \text {m}$; the source voltage was set to 130 kV and we used a source current of 150 $\mu \text {A}$.

Concerning the other two objects, namely the bell pepper sample and the garlic head, their projection data packages contain 400 projection images corresponding to object rotation angles distributed with the constant step of $0.9^\circ$ in the range $0^\circ$ to $360^\circ$. In the experiments with these objects, the setup configuration was such that the source-to-detector and source-to-object distances equal 425 mm and 169 mm, respectively; the cone magnification factor equals 2.515; the detector pixel linear size is 139 $\mu \text {m}$. Herewith, the X-ray source voltage was equal to 70 kV and 115 kV for the bell pepper and the garlic head samples respectively, and we utilized a source current of 150 $\mu \text {A}$. In all conducted laboratory experiments, the relative motion trajectory of the system composed of the target object and the X-ray source was circular, i.e. the object under study rotated around a fixed axis while being irradiated by X-rays released from a stationary source. To obtain the reconstruction images (Fig. 20), we used the FDK reconstruction algorithm [73] implemented within the Smart Tomo Engine software [71]. Before the reconstruction procedure, we applied different ring artifacts suppression methods to our data to compare their efficiency. The resultant reconstruction images are of size 3000 pix $\times$ 3000 pix; they may be observed in Fig. 20.
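The cone magnification factors quoted above follow directly from the stated distances; the effective pixel size in the object plane, computed below for illustration, is our derived quantity and is not stated in the text:

```python
# Cone-beam geometry bookkeeping for the laboratory setups described above
SDD_BATTERY, SOD_BATTERY = 425.0, 76.0  # source-detector / source-object, mm
SDD_VEG, SOD_VEG = 425.0, 169.0         # bell pepper and garlic setup, mm
DET_PIXEL_UM = 139.0                    # detector pixel linear size, micrometers

mag_battery = SDD_BATTERY / SOD_BATTERY  # magnification, approx. 5.592
mag_veg = SDD_VEG / SOD_VEG              # magnification, approx. 2.515

# Effective pixel size in the object plane (derived, not stated in the paper)
pixel_battery_um = DET_PIXEL_UM / mag_battery  # approx. 24.9 micrometers
pixel_veg_um = DET_PIXEL_UM / mag_veg          # approx. 55.3 micrometers
```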

Our experiments with laboratory data suggest an improvement in the visually assessed reconstruction quality when employing the developed automatic ring artifacts suppression method. In automatic mode, with no extra manual operations, the proposed method allowed us to produce a reconstruction close to the benchmark reconstruction obtained with the superior suppression technique [14]. Conversely, the guided filter based and wavelet-FFT-based suppression methods did not manage to efficiently cope with a reasonable proportion of the full-circle ring artifacts in some data packages, despite the painstaking selection of these suppression methods' parameters. The latter is especially noticeable in the enlarged areas near the image center within the reconstructions of the battery and garlic head samples.

According to the RASP metric measurements (Table 5), the reconstruction preceded by the developed method appears rather similar to that produced by the superior method [14].


Table 5. RASP ($\uparrow$) metric measured for the pair of the reconstruction images, which were obtained from our laboratory projection data, without and with ring artifacts suppression applied. The names of the compared methods entitle the table columns. Blue color refers to the RASP metric value measured for the reconstruction with the proposed ring artifacts method employed, whereas red and green colors indicate the worst and the best obtained metric values respectively when executing compared methods.

Moreover, the RASP metric measurements for the other two methods indicate worse reconstruction accuracy dynamics. This is confirmed by the visual analysis, which reveals residual rings disrupting the central part of the reconstruction images when these methods were employed.

Using the battery sample data (Fig. 21), the authors were also able to illustrate the advantage of such a step of the developed method as the sinogram-adaptive selection of the filtering parameters. In some cases, a smart selection of the parameters of the 1D bilateral filter in our method provides a basis for more efficient suppression of ring artifacts, which is probably due to a more accurate account of the object morphology reflected in the sinogram domain.


Fig. 21. The reconstruction of the battery projection data when utilizing bilateral filtering with constant adjusted parameters, which are the same for all sinogram images (on the left), and the battery reconstruction when putting into use the proposed sinogram-adaptive suppression method (on the right) with filtering parameters calculated individually for each sinogram. The blue-framed ROI marks the zoomed area. The red arrows and the dashed circle highlight the ring artifacts' position in the reconstruction image.


Finally, the time measurement results of the compared methods applied to real laboratory data (Table 6) tally with those obtained on synthetic data. On real projection data, along with acceptable suppression quality, our method demonstrates a fairly high speed. Concerning performance speed, the proposed method is inferior only to the guided filter based and wavelet-FFT based methods. The loss of speed of our method is probably due to an insufficiently efficient implementation in the Python programming language.


Table 6. Time measurements (in seconds) of the operation of compared ring artifacts suppression methods (Python implementations) applied to our laboratory data distorted by ring artifacts. For calculating mean time series of 50 measurements have been averaged for each data package. Green color indicates the shortest method operating time measured among the compared methods, red color – the longest operating time.

However, the loss of speed is more than made up for by the implementation in the C++ programming language (Table 7) within the Smart Tomo Engine software [71]. Even compared to the classical sinogram normalization, wavelet-FFT based and guided filter based ring artifacts suppression methods, our method runs more than 3 times faster (bell pepper and head of garlic data packages), reaching an execution time of around 16 seconds on our typical laboratory datasets.


Table 7. Execution time measurements (in seconds) for the proposed method implemented in C++. For calculating mean time and its standard deviation series of 50 measurements have been used for each data package.

5. Conclusions

In this work we have introduced a fast, uncomplicated, automatic preprocessing method for suppressing reconstruction ring artifacts, which allows one to efficiently exploit the morphological information of the object under study. The method was devised by automating the classical sinogram normalization method. As a step of our method, 1D bilateral filtering is performed, partially identical in its properties to the guided filtering from the study [66]. However, the key difference between our method and the two mentioned ones lies in its adaptability, which manifests itself in the fact that the parameters of bilateral filtering are automatically and individually selected for each sinogram.

The developed method treats reconstruction edges tolerantly and does not lead to excessive blurring that erases image features. Along with this, the sinogram-adaptability of the filter facilitates a step towards considering the structure of the object reflected in the sinogram domain, thereby enabling a more complete elimination of the reconstruction ring artifacts. All of this ultimately results in a substantial improvement of the reconstruction accuracy.

The method was validated on both synthetic and real data. In the experiments performed, it was possible to confirm the method's edge-preservation property and emphasize the advantage of employing bilateral filtering in the sinogram-adaptive mode. On a challenging open-access projection dataset [78], utilizing the proposed method, we were able to produce a reconstruction that appeared to be visually indistinguishable from the benchmark specified by the superior ring artifacts suppression method [14]. A numerical assessment of the reconstruction metrics confirms the similarity of our result to the benchmark.

Besides, the study contains the results of testing the proposed method on the authors’ laboratory data. With no extra manual tuning of the method parameters, in the automatic mode, the developed method helped to drastically improve the laboratory data reconstruction quality. The automaticity of the proposed method has made it extremely relevant and in demand in our everyday laboratory experiments. For the authors, the developed method serves as an easy-to-use tool which automatically combats major ring artifacts hampering further detailed analysis of the reconstruction. What is more, it neither requires additional costs for selecting filtering parameters, nor relies on the researcher’s experience with the means of processing tomographic data.

It is worth mentioning that, within our experiments, the proposed method demonstrates a speed consistent with that of the fast sinogram normalization method. This is achieved largely through the quick evaluation of the sinogram-individual filtering parameters from a sinogram mean row, or from a mean projection image. An exact comparison of the execution speed of the related suppression methods is of particular scientific interest and might be pursued in further work. Moreover, the described approach to automating the sinogram normalization method seems to be a versatile technique for automating ring artifacts suppression methods that harness bilateral filtering. When utilizing two-parameter filtering similar to the bilateral one, the approach holds intriguing potential for quickly automating advanced ring artifacts suppression methods. Employing such an approach to automation in tandem with the advanced non-automated ring artifact reduction techniques would offer a promising avenue for refining the reconstruction accuracy in the automatic mode.

Funding

Russian Science Foundation (project no. 23-21-00524).

Disclosures

The authors declare no conflicts of interest.

Data availability

Our synthetic and laboratory tomographic data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Challenging data used to assess the performance of the developed method which also underlie the results presented in this paper are available in Ref. [78].

References

1. M. Overdick, Detectors for X-ray Imaging and Computed Tomography (Springer Netherlands, 2006), pp. 49–64.

2. H. Qiu, “The principle and state-of-art applications for ct detector,” J. Phys.: Conf. Ser. 2386(1), 012060 (2022). [CrossRef]  

3. E. Shefer, A. Altman, R. Behling, et al., “State of the art of ct detectors and sources: a literature review,” Curr. Radiol. Rep. 1(1), 76–91 (2013). [CrossRef]  

4. V. L. Arlazarov, D. P. Nikolaev, V. V. Arlazarov, et al., “X-ray tomography: the way from layer-by-layer radiography to computed tomography,” Comput. Opt. 45(6), 897–906 (2021). [CrossRef]  

5. M. Wedekind, S. Castillo, and M. Magnor, “Feature-sensitive ring artifact correction for computed tomography,” J. Nondestruct. Eval. 42(1), 5 (2023). [CrossRef]  

6. A. K. Jha, N. C. Purandare, S. Shah, et al., “Identification of a unique cause of ring artifact seen in computed tomography trans-axial images,” Indian J. Nucl. Med. IJNM: Off. J. Soc. Nucl. Med. India 28(4), 232–233 (2013). [CrossRef]  

7. S. Artul, “Ring artefact in multidetector ct,” BMJ Case Rep. 2013, bcr-2013-201379 (2013). [CrossRef]  

8. B. Münch, P. Trtik, F. Marone, et al., “Stripe and ring artifact removal with combined wavelet—fourier filtering,” Opt. Express 17(10), 8567–8591 (2009). [CrossRef]  

9. F. E. Boas and D. Fleischmann, “Ct artifacts: causes and reduction techniques,” Imaging Med. 4(2), 229–240 (2012). [CrossRef]  

10. R. Cranage and P. Tofts, “Ring artefacts in x-ray computed tomography,” The Br. J. Radiol. 61(726), 529 (1988). [CrossRef]  

11. S. Rashid, S. Y. Lee, and M. K. Hasan, “An improved method for the removal of ring artifacts in high resolution ct imaging,” EURASIP J. on Adv. Signal Process. 2012(1), 93 (2012). [CrossRef]  

12. R. A. Ketcham, “New algorithms for ring artifact removal,” in Developments in X-ray tomography V, vol. 6318 (SPIE, 2006), pp. 216–222.

13. E. M. A. Anas, S. Y. Lee, and M. K. Hasan, “Classification of ring artifacts for their effective removal using type adaptive correction schemes,” Comput. Biol. Med. 41(6), 390–401 (2011). [CrossRef]  

14. N. T. Vo, R. C. Atwood, and M. Drakopoulos, “Superior techniques for eliminating ring artifacts in x-ray micro-tomography,” Opt. Express 26(22), 28396–28412 (2018). [CrossRef]  

15. A. S. Ingacheva, A. V. Sheshkus, T. S. Chernov, et al., “X-ray computed tomography scanner - a new tool in recognition,” Trudy ISA RAN (Proc. ISA RAS) 68, 90–99 (2018). [CrossRef]  

16. L. De Chiffre, S. Carmignato, J.-P. Kruth, et al., “Industrial applications of computed tomography,” CIRP Ann. 63(2), 655–677 (2014). [CrossRef]  

17. J. Kastner, C. Heinzl, B. Plank, et al., “New x-ray computed tomography methods for research and industry,” in 7th Conference on Industrial Computed Tomography (iCT 2017), Leuven, Belgium, http://www.ndt.net (2017).

18. H. Martz, S. Azevedo, J. Brase, et al., “Computed tomography systems and their industrial applications,” Int. J. Radiat. Appl. Instrumentation. Part A. Appl. Radiat. Isot. 41(10-11), 943–961 (1990). [CrossRef]  

19. S. Carmignato, W. Dewulf, and R. Leach, Industrial X-ray computed tomography, vol. 10 (Springer, 2018).

20. J. Silomon, J. Gluch, A. Clausner, et al., “Crack identification and evaluation in beol stacks of two different samples utilizing acoustic emission testing and nano x-ray computed tomography,” Microelectron. Reliab. 121, 114137 (2021). [CrossRef]  

21. N. Potrahov, A. Anoshkin, V. Y. Zuiko, et al., “Numerical and experimental study of composite bulkhead partition strength with in-situ x-ray monitoring,” PNRPU Mechanics Bulletin pp. 118–133 (2017).

22. M. V. Chukalina, A. V. Khafizov, V. V. Kokhan, et al., “Algorithm for post-processing of tomography images to calculate the dimension-geometric features of porous structures,” Comput. Opt. 45(1), 110–121 (2021). [CrossRef]  

23. D. V. Polevoy, P. A. Kulagin, A. S. Ingacheva, et al., “From tomographic reconstruction to automatic text recognition - the next frontier task for the artificial intelligence,” in ICMV 2022, vol. 12701, W. Osten, D. Nikolaev, J. Zhou, eds. (SPIE, Bellingham, Washington, 2023), pp. 127010P1–127010P11. [CrossRef]  

24. C. Liguori, G. Frauenfelder, C. Massaroni, et al., “Emerging clinical applications of computed tomography,” Med. Devices: Evidence Res. 8, 265–278 (2015). [CrossRef]  

25. K. B. Bulatov, A. S. Ingacheva, M. I. Gilmanov, et al., “Reducing radiation dose for nn-based covid-19 detection in helical chest ct using real-time monitored reconstruction,” Expert Syst. with Appl. 229, 120425 (2023). [CrossRef]  

26. D. Kravchenko, P. Karakostas, D. Kuetting, et al., “The role of dual energy computed tomography in the differentiation of acute gout flares and acute calcium pyrophosphate crystal arthritis,” Clin. Rheumatol. 41(1), 223–233 (2022). [CrossRef]  

27. P. Sibolt, L. M. Andersson, L. Calmels, et al., “Clinical implementation of artificial intelligence-driven cone-beam computed tomography-guided online adaptive radiotherapy in the pelvic region,” Phys. Imaging Radiat. Oncol. 17, 1–7 (2021). [CrossRef]  

28. B. A. Kobrinskij, “Umnaya bol’nica kak instrument cifrovoj mediciny” [Smart hospital as a tool of digital medicine], Informacionnye tekhnologii i vychislitel’nye sistemy, 3–14 (2018).

29. I. Bukreeva, A. Ingacheva, M. Fratini, et al., “Artifacts suppression in biomedical images using a guided filter,” in ICMV 2020, vol. 11605 (SPIE, Bellingham, Washington, 2021), pp. 116050S1–116050S8. [CrossRef]  

30. A. S. Ingacheva, M. I. Gilmanov, A. V. Yamaev, et al., “Kompyuternaya tomografiya kak iskusstvennyy intellekt: obzor nauchnoy shkoly v.l. arlazarova” [Computed tomography as artificial intelligence: a review of the V. L. Arlazarov scientific school], Pattern Recognit. Image Anal. (2024).

31. K. Hong, O. Nalcioglu, and Z.-H. Cho, “Velocity modulated scanning technique for the elimination of ring artifacts in x-ray computed tomography,” in Physics and Engineering of Computerized Multidimensional Imaging and Processing, vol. 671 (SPIE, 1986), pp. 67–71.

32. G. Davis and J. Elliott, “X-ray microtomography scanner using time-delay integration for elimination of ring artefacts in the reconstructed image,” Nucl. Instrum. Methods Phys. Res., Sect. A 394(1-2), 157–162 (1997). [CrossRef]  

33. D. Kazimirov, A. Ingacheva, A. Buzmakov, et al., “Mean projection image application to the automatic rotation axis alignment in cone-beam ct,” in Fifteenth International Conference on Machine Vision (ICMV 2022), vol. 12701 (SPIE, 2023), pp. 437–446.

34. J. Lifton and T. Liu, “Ring artefact reduction via multi-point piecewise linear flat field correction for x-ray computed tomography,” Opt. Express 27(3), 3217–3228 (2019). [CrossRef]  

35. W. Vågberg, J. C. Larsson, and H. M. Hertz, “Removal of ring artifacts in microtomography by characterization of scintillator variations,” Opt. Express 25(19), 23191–23198 (2017). [CrossRef]  

36. L. C. Croton, G. Ruben, K. S. Morgan, et al., “Ring artifact suppression in x-ray computed tomography using a simple, pixel-wise response correction,” Opt. Express 27(10), 14231–14245 (2019). [CrossRef]  

37. K. O. Bangsgaard, G. Burca, E. Ametova, et al., “Low-rank flat-field correction for artifact reduction in spectral computed tomography,” Appl. Math. Sci. Eng. 31(1), 2176000 (2023). [CrossRef]  

38. N. H. Bui, T. H. Bui, T. D. Tran, et al., “Evaluation of the effect of filters on reconstructed image quality from cone beam ct system,” Nucl. Sci. Technol. 11(1), 35–47 (2021). [CrossRef]  

39. D. Chesler and S. Riederer, “Ripple suppression during reconstruction in transverse tomography,” Phys. Med. & Biol. 20(4), 632–636 (1975). [CrossRef]  

40. R. Kruger and R. Morris, “Simulated neutron tomography for nondestructive assays,” in Imaging Applications for Automated Industrial Inspection and Assembly, vol. 182 (SPIE, 1979), pp. 158–163.

41. B. Axelsson, A. Israelsson, and S. Larsson, “Non-uniformity induced artifacts in single-photon emission computed tomography,” Acta Radiol. Oncol. 22(3), 215–224 (1983). [CrossRef]  

42. C. Raven, “Numerical removal of ring artifacts in microtomography,” Rev. Sci. Instrum. 69(8), 2978–2980 (1998). [CrossRef]  

43. G. Kowalski, “Suppression of ring artefacts in ct fan-beam scanners,” IEEE Trans. Nucl. Sci. 25(5), 1111–1116 (1978). [CrossRef]  

44. M. K. Hasan, F. Sadi, and S. Y. Lee, “Removal of ring artifacts in micro-ct imaging using iterative morphological filters,” Signal, Image Video Process. 6(1), 41–53 (2012). [CrossRef]  

45. M. E. Eldib, M. Hegazy, Y. J. Mun, et al., “A ring artifact correction method: Validation by micro-ct imaging with flat-panel detectors and a 2d photon-counting detector,” Sensors 17(2), 269 (2017). [CrossRef]  

46. M. Boin and A. Haibel, “Compensation of ring artefacts in synchrotron tomographic images,” Opt. Express 14(25), 12071–12075 (2006). [CrossRef]  

47. J. Šalplachta, T. Zikmund, M. Zemek, et al., “Complete ring artifacts reduction procedure for lab-based x-ray nano ct systems,” Sensors 21(1), 238 (2021). [CrossRef]  

48. M. Yousuf and M. Asaduzzaman, “An efficient ring artifact reduction method based on projection data for micro-ct images,” J. Sci. Res. 2(1), 37–45 (2009). [CrossRef]  

49. E. M. A. Anas, S. Y. Lee, and M. K. Hasan, “Removal of ring artifacts in ct imaging through detection and correction of stripes in the sinogram,” Phys. Med. Biol. 55(22), 6911–6930 (2010). [CrossRef]  

50. X. Yang, Y. Meng, H. Gong, et al., “Abnormal pixel detection using sum-of-projections symmetry in cone beam computed tomography,” Opt. Express 20(10), 11014–11030 (2012). [CrossRef]  

51. V. Titarenko, “1-d filter for ring artifact suppression,” IEEE Signal Process. Lett. 23(6), 800–804 (2016). [CrossRef]  

52. E. X. Miqueles, J. Rinkel, F. O’Dowd, et al., “Generalized titarenko’s algorithm for ring artefacts reduction,” J. Synchrotron Radiat. 21(6), 1333–1346 (2014). [CrossRef]  

53. K. An, J. Wang, R. Zhou, et al., “Ring-artifacts removal for photon-counting ct,” Opt. Express 28(17), 25180–25193 (2020). [CrossRef]  

54. D.-J. Ji, G.-R. Qu, C.-H. Hu, et al., “Anisotropic total variation minimization approach in in-line phase-contrast tomography and its application to correction of ring artifacts,” Chin. Phys. B 26(6), 060701 (2017). [CrossRef]  

55. Y. Yang, D. Zhang, F. Yang, et al., “Post-processing method for the removal of mixed ring artifacts in ct images,” Opt. Express 28(21), 30362–30378 (2020). [CrossRef]  

56. D. Jha, H. Sørensen, S. Dobberschütz, et al., “Adaptive center determination for effective suppression of ring artifacts in tomography images,” Appl. Phys. Lett. 105(14), 143107 (2014). [CrossRef]  

57. D. Nikolaev, E. Ershov, A. Kroshnin, et al., “On a fast hough/radon transform as a compact summation scheme over digital straight line segments,” Mathematics 11(15), 3336 (2023). [CrossRef]  

58. D. Polevoy, M. Gilmanov, D. Kazimirov, et al., “Tomographic reconstruction: General approach to fast back-projection algorithms,” Mathematics 11(23), 4759 (2023). [CrossRef]  

59. L. Chandrasekar and G. Durga, “Implementation of hough transform for image processing applications,” in 2014 International Conference on Communication and Signal Processing, (IEEE, 2014), pp. 843–847.

60. Z. Wei, S. Wiebe, and D. Chapman, “Ring artifacts removal from synchrotron ct image slices,” J. Instrum. 8(06), C06006 (2013). [CrossRef]  

61. S. Chang, X. Chen, J. Duan, et al., “A cnn-based hybrid ring artifact reduction algorithm for ct images,” IEEE Trans. Radiat. Plasma Med. Sci. 5(2), 253–260 (2021). [CrossRef]  

62. D. Kazantsev, L. Beveridge, V. Shanmugasundar, et al., “Conditional generative adversarial networks for stripe artefact removal in high-resolution x-ray tomography,” Tomogr. Mater. Struct. 4, 100019 (2024). [CrossRef]  

63. W. Fang and L. Li, “Comparison of ring artifacts removal by using neural network in different domains,” in 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), (IEEE, 2019), pp. 1–3.

64. L. Yuan, Q. Xu, B. Liu, et al., “A deep learning-based ring artifact correction method for x-ray ct,” Radiat. Detect. Technol. Methods 5(1), 1–7 (2021). [CrossRef]  

65. T. Fu, Y. Wang, K. Zhang, et al., “Deep-learning-based ring artifact correction for tomographic reconstruction,” J. Synchrotron Radiat. 30(3), 620–626 (2023). [CrossRef]  

66. E. E. Berlovskaya, A. V. Buzmakov, A. S. Ingacheva, et al., “Suppression algorithm for the orthotropic artifacts in images registered in x-ray and terahertz band,” Informatsionnye protsessy 19, 200–207 (2019).

67. S. W. Smith, “The scientist and engineer’s guide to digital signal processing,” (1997).

68. M. Rivers, “Tutorial introduction to x-ray computed microtomography data processing,” University of Chicago (1998).

69. S. Titarenko, P. J. Withers, and A. Yagola, “An analytical formula for ring artefact suppression in x-ray tomography,” Appl. Math. Lett. 23(12), 1489–1495 (2010). [CrossRef]  

70. N. T. Vo, R. C. Atwood, M. Drakopoulos, et al., “Data processing methods and data acquisition for samples larger than the field of view in parallel-beam tomography,” Opt. Express 29(12), 17849–17874 (2021). [CrossRef]  

71. “Ct software smart tomo engine,” https://smartengines.com/ocr-engines/tomo-engine/ (accessed on January 20, 2024).

72. R. M. Lewitt, “Reconstruction algorithms: transform methods,” Proc. IEEE 71(3), 390–408 (1983). [CrossRef]  

73. L. A. Feldkamp, L. C. Davis, and J. W. Kress, “Practical cone-beam algorithm,” J. Opt. Soc. Am. A 1(6), 612–619 (1984). [CrossRef]  

74. M. Nadipally, “Optimization of methods for image-texture segmentation using ant colony optimization,” in Intelligent data analysis for biomedical applications, (Elsevier, 2019), pp. 21–47.

75. U. Sara, M. Akter, and M. S. Uddin, “Image quality assessment through fsim, ssim, mse and psnr—a comparative study,” J. Comput. Commun. 07(03), 8–18 (2019). [CrossRef]  

76. S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, et al., “scikit-image: image processing in python,” PeerJ 2, e453 (2014). [CrossRef]  

77. M. S. Shutov, M. I. Gilmanov, A. S. Ignacheva, et al., Metody modelirovaniya koltsevykh artefaktov v kompyuternoy tomografii [Methods for modeling ring artifacts in computed tomography] (RIO MFTI, Dolgoprudny, Moscow region, 2022).

78. N. T. Vo, R. C. Atwood, and M. Drakopoulos, “Tomographic data for testing, demonstrating, and developing methods of removing ring artifacts,” Zenodo (2018), https://doi.org/10.5281/zenodo.1443568.

79. M. Drakopoulos, T. Connolley, C. Reinhard, et al., “I12: the joint engineering, environment and processing (jeep) beamline at diamond light source,” J. Synchrotron Radiat. 22(3), 828–838 (2015). [CrossRef]  

80. “Eltech-med company site,” https://eltech-med.com/home_en (accessed on January 20, 2024).




Figures (21)

Fig. 1. Reconstruction of the central slice of a real bell pepper data package: a) reconstruction with ring artifacts; b) reconstruction with suppressed ring artifacts.

Fig. 2. Example of a sinogram with stripe artifacts. Rectangular frames mark the enlarged part of the sinogram before and after sinogram correction (ring artifacts suppression in the sinogram domain). a) Original sinogram with stripe artifacts, b) Sinogram without stripe artifacts, c) Absolute pixel-wise difference of sinograms before and after correction.

Fig. 3. Sinogram normalization approach: the initial sinogram $\widetilde {P}$, having been arithmetically averaged along the axis $\theta$, forms an array $\overline {\widetilde {P}}$, the distribution of whose values is presented in the figure. The result of filtering the array $\overline {\widetilde {P}}$, designated as $\overline {P}$, is then subtracted pixel-wise from the sinogram mean row $\overline {\widetilde {P}}$. The obtained difference, addressed as $E$, is subtracted in the following sinogram normalization step from each row of the initial sinogram, which results in the stripe-artifacts-suppressed sinogram $P$.

Fig. 4. The flowchart encompassing the main steps of the developed method.

Fig. 5. An example of a normalized and logarithmically transformed projection image.

Fig. 6. Example sinogram: pepper fruit sinogram.

Fig. 7. Example of the distribution of the sinogram mean values.

Fig. 8. The result of median filtering of the sinogram mean row. The enlarged part of the array is framed with a rectangular area.

Fig. 9. Algorithm for calculating the parameters of a one-dimensional bilateral filter.

Fig. 10. Effective area and effective width of the sinogram (on the example of the pepper sinogram).

Fig. 11. The result of filtering the sinogram mean row with a one-dimensional bilateral filter (after median filtering). The enlarged part of the array is framed with a rectangle.

Fig. 12. An example of an error values distribution along the detector width.

Fig. 13. An example of a corrected sinogram (with stripe artifacts suppressed): corrected pepper sinogram.

Fig. 14. Example of a contrast-corrected sinogram: a contrast-corrected pepper sinogram.

Fig. 15. Comparison of the initial and final contrasted sinograms: a) initial sinogram, b) corrected sinogram (result of the method loop iteration), c) pixel-wise difference of the initial and final corrected sinogram images. The pixel-by-pixel difference is presented in a cyan palette: the positive and negative components of the normalized sinogram difference are highlighted, the positive component is placed in the image channel corresponding to the red color, and the inverted negative component fills the image channels corresponding to the green and blue colors.

Fig. 16. Synthetic data (porous phantom) reconstructions with no ring artifacts suppression applied. Blue and violet rectangles indicate zoomed reconstruction image areas, which are positioned to the right of the full-size reconstructions.

Fig. 17. Synthetic data (porous phantom) reconstructions obtained when employing different preprocessing ring artifacts suppression methods. Blue and violet rectangles indicate the reconstruction image areas which are zoomed and positioned to the right of the full-size reconstructions, while the green one bounds the relatively homogeneous area over which SNR, PSNR and RASP metric values have been calculated. The rectangular area for calculating metrics is such that it contains an inscribed circle near the center of the image with a radius of 20 pixels.

Fig. 18. Central line profiles of the synthetic data package 1 reconstructions with the compared ring artifacts suppression methods applied. Profiles correspond to the red lines depicted in the reconstruction domain. The red-colored rectangle bounds the zoomed area around one of the object edges.

Fig. 19. Zenodo dataset reconstructions [78] (slice 1468) after the compared ring artifacts suppression methods have been applied. The blue ROI enframes the magnified region. The area involved in the PSNR, SNR and RASP metrics evaluation is contoured with the rectangle colored in green. The regions of pixel-by-pixel difference images of reconstructions before and after ring artifacts suppression are presented in a cyan palette: the positive and negative components of the difference are highlighted, the positive component is placed in the image channel corresponding to the red color, and the inverted negative component fills the image channels corresponding to the green and blue colors.

Fig. 20. The reconstruction images of the pepper, battery and garlic data collected under the laboratory conditions. The figure demonstrates the resultant reconstructions without and with suppressed ring artifacts when one of the compared suppression methods was applied (for the method name see the column title). Blue ROIs enframe the magnified fragments of the reconstructions. Green rectangles mark the relatively homogeneous areas which were brought into consideration to evaluate the reconstructions’ RASP metric. In the bell pepper, battery sample and garlic head reconstruction images, green ROIs bound circles of radii equal to 150 pix, 100 pix and 20 pix, which show metric robustness under perturbation of the green ROI contour.

Fig. 21. The reconstruction of the battery projection data when utilizing bilateral filtering with constant adjusted parameters, which are the same for all sinogram images (on the left), and the battery reconstruction when putting into use the proposed sinogram-adaptive suppression method (on the right) with filtering parameters calculated individually for each sinogram. The ROI with the blue frame marks the zoomed area. The red arrows and the dashed circle highlight the ring artifacts position in the reconstruction image.

Tables (7)

Table 1. Experiments with synthetic data distorted by ring artifacts. Green color specifies the best metric values registered among the compared methods, red color – the worst ones. Blue color distinguishes metric values referring to the reconstructions with no ring artifacts suppression applied. The severity of ring artifacts and the variety of ring artifacts types imposed into the projection data packages increase with increasing serial data package number.

Table 2. Time measurements (in milliseconds) of the operation of the compared ring artifacts suppression methods (Python implementations) applied to synthetic data distorted by ring artifacts. To calculate the mean time, series of 100 measurements were averaged for each data package. Green color specifies the shortest method operating time measured among the compared methods, red color – the longest operating time periods. The severity of ring artifacts and the variety of ring artifacts types imposed into the projection data packages increase with increasing serial data package number, i.e. data package 1 possesses slight stripe-like artifacts, whereas data package 7 is characterized by severe stripe artifacts.

Table 3. Execution time measurements (in milliseconds) for the proposed method implemented in C++. To calculate the mean time and its standard deviation, series of 100 measurements were used for each data package.

Table 4. RASP (%) metric measured for the pair of open-access projection dataset [78] reconstructions without and with ring artifacts suppression applied. The names of the compared methods entitle the table columns. Blue color refers to the RASP metric value measured for the reconstruction with the proposed ring artifacts suppression method employed, whereas red and green colors indicate the worst and the best obtained metric values respectively when executing the compared methods.

Table 5. RASP (%) metric measured for the pair of the reconstruction images, which were obtained from our laboratory projection data, without and with ring artifacts suppression applied. The names of the compared methods entitle the table columns. Blue color refers to the RASP metric value measured for the reconstruction with the proposed ring artifacts suppression method employed, whereas red and green colors indicate the worst and the best obtained metric values respectively when executing the compared methods.

Table 6. Time measurements (in seconds) of the operation of the compared ring artifacts suppression methods (Python implementations) applied to our laboratory data distorted by ring artifacts. To calculate the mean time, series of 50 measurements were averaged for each data package. Green color indicates the shortest method operating time measured among the compared methods, red color – the longest operating time.

Table 7. Execution time measurements (in seconds) for the proposed method implemented in C++. To calculate the mean time and its standard deviation, series of 50 measurements were used for each data package.

Equations (17)


$$s = \tilde{s} + e, \qquad e = s - F[s],$$
$$s = \tilde{s} + \bar{e}, \qquad \bar{e} = \bar{s} - F[\bar{s}],$$
$$s_k(i, j) = p_i(k, j),$$
$$\bar{s}(j) = \frac{1}{\Phi} \sum_{\varphi=0}^{\Phi-1} s(\varphi, j),$$
$$\mathrm{Median}_{wing=1}[\bar{s}](i) \equiv \mathrm{Median}[\bar{s}](i) = \begin{cases} \mathrm{median}\{\bar{s}(i-1),\ \bar{s}(i),\ \bar{s}(i+1)\}, & 0 < i < M-1, \\ \dfrac{\bar{s}(0) + \bar{s}(1)}{2}, & i = 0 \ (M \ge 2), \\ \dfrac{\bar{s}(M-2) + \bar{s}(M-1)}{2}, & i = M-1 \ (M \ge 2), \end{cases}$$
$$\mathrm{EffectiveWidth}(s) \equiv \#\{\, i : \bar{s}(i) > threshold,\ 0 \le i < M \,\},$$
$$threshold = 0.015 \, (\max(\bar{s}) - \min(\bar{s})) + \min(\bar{s}),$$
$$wing = \left[\, 0.0055 \cdot \mathrm{EffectiveWidth}(s) \,\right],$$
$$\sigma_x = \frac{2\,wing + 1}{6}.$$
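The adaptive parameter evaluation above (threshold, effective width, window half-width, and spatial sigma, all derived from the sinogram mean row) can be illustrated with a minimal Python sketch; the function and variable names here are our own, not the authors’ implementation, and the axes convention (rows index projection angles) is an assumption:

```python
import numpy as np

def adaptive_params(sinogram, frac=0.015, wing_frac=0.0055):
    """Sketch: derive sinogram-individual filter parameters from the mean row."""
    mean_row = sinogram.mean(axis=0)            # average over projection angles
    lo, hi = mean_row.min(), mean_row.max()
    threshold = frac * (hi - lo) + lo           # 1.5% above the minimum value
    effective_width = int(np.sum(mean_row > threshold))
    wing = int(wing_frac * effective_width)     # half-width of the filter window
    sigma_x = (2 * wing + 1) / 6                # spatial sigma from window size
    return threshold, effective_width, wing, sigma_x
```

For a sinogram whose object occupies 800 of 1000 detector columns, this yields wing = 4 and sigma_x = 1.5, i.e. the window grows with the object’s effective width.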
$$\sigma_i = 0.95 \cdot \operatorname{absmax}\!\left( \left( \mathrm{Bilateral}_{2wing,\ 2\sigma_x - \frac{1}{6},\ q} \circ \mathrm{Median} \right)[\bar{s}] - \mathrm{Median}[\bar{s}] \right).$$
$$\mathrm{Bilateral}_{wing,\,\sigma_x,\,\sigma_i}[m](i) \equiv \left( \mathrm{Bilateral}_{wing,\,\sigma_x,\,\sigma_i} \circ \mathrm{Median} \right)[\bar{s}](i) = \frac{\displaystyle\sum_{w \in window} m(w)\, e^{-\frac{(w-i)^2}{2\sigma_x^2} - \frac{(m(w)-m(i))^2}{2\sigma_i^2}}}{\displaystyle\sum_{w \in window} e^{-\frac{(w-i)^2}{2\sigma_x^2} - \frac{(m(w)-m(i))^2}{2\sigma_i^2}}},$$
$$e(i) \equiv \bar{e}(i) = \bar{s}(i) - \left( \mathrm{Bilateral}_{wing,\,\sigma_x,\,\sigma_i} \circ \mathrm{Median} \right)[\bar{s}](i),$$
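A direct, unoptimized sketch of the wing=1 median prefilter (with averaged edge samples) and the one-dimensional bilateral filter described above may look as follows; this is an illustrative reimplementation, not the authors’ code:

```python
import numpy as np

def median3(a):
    """wing=1 median filter; edge samples are replaced by two-point averages."""
    out = a.astype(float).copy()
    out[1:-1] = np.median(np.stack([a[:-2], a[1:-1], a[2:]]), axis=0)
    out[0] = (a[0] + a[1]) / 2
    out[-1] = (a[-2] + a[-1]) / 2
    return out

def bilateral_1d(m, wing, sigma_x, sigma_i):
    """1-D bilateral filter: Gaussian weights in position and in intensity."""
    n = len(m)
    out = np.empty(n)
    for i in range(n):
        w = np.arange(max(0, i - wing), min(n, i + wing + 1))
        weights = np.exp(-((w - i) ** 2) / (2 * sigma_x ** 2)
                         - ((m[w] - m[i]) ** 2) / (2 * sigma_i ** 2))
        out[i] = np.sum(weights * m[w]) / np.sum(weights)
    return out
```

The error profile would then be estimated as `mean_row - bilateral_1d(median3(mean_row), wing, sigma_x, sigma_i)` and subtracted from every sinogram row.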
$$\hat{s}(i, j) = s(i, j) - e(j),$$
$$\hat{\hat{s}}(i, j) = \hat{s}(i, j) + \frac{\mathrm{mean}(e)}{M}\, s(i, j),$$
$$\mathrm{NRMSE} = \sqrt{ \frac{ \sum_{i=0,\,j=0}^{n,\,m} (r_{ij} - \hat{r}_{ij})^2 }{ \sum_{i=0,\,j=0}^{n,\,m} \hat{r}_{ij}^2 } } \equiv \frac{\lVert r - \hat{r} \rVert_2}{\lVert \hat{r} \rVert_2},$$
$$\mathrm{RASP} = \left( 1 - \frac{\sigma_c}{\sigma_u} \right) \times 100\% = \left( 1 - \frac{ \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (x_{c,i} - \mu_{c,i})^2 } }{ \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (x_{u,i} - \mu_{u,i})^2 } } \right) \times 100\%.$$
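The RASP metric above can be sketched in Python over a single homogeneous ROI; here we assume the per-element means reduce to the ROI statistics, a simplification of the indexed means in the formula:

```python
import numpy as np

def rasp(uncorrected, corrected):
    """Ring artifacts suppression percentage over a homogeneous ROI:
    1 - (std of corrected)/(std of uncorrected), expressed in percent."""
    return (1 - corrected.std() / uncorrected.std()) * 100.0
```

A perfectly flattened ROI gives 100%, while an unchanged ROI gives 0%.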