
Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis

Open Access

Abstract

In this paper, a new methodology for diagnosing skin cancer on images of dermatologic spots using image processing is presented. Currently, skin cancer is one of the most frequent diseases in humans. This methodology is based on Fourier spectral analysis using filters such as the classic, inverse and k-law nonlinear filters. The sample images were obtained by a medical specialist, and a new spectral technique was developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally, a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%.

© 2015 Optical Society of America

1. Introduction

Skin cancer is one of the diseases that affect humans. It is caused by the development of cancerous cells in any of the layers of the skin and occurs when cells in a part of the body begin to grow out of control and spread to other organs and tissues. The United States is one of the countries with the highest incidence of this disease, where it represents an important health problem. There are three major types of skin cancer: basal cell carcinoma, squamous cell carcinoma and melanoma [1, 2]. Basal cell carcinoma is the most common skin cancer; however, it is the least dangerous if it is detected early. This cancer appears in cells in the deeper layers of the skin, usually in parts of the body that are exposed to the sun such as the face, head, neck, ears, shoulders and back, although it occurs most often on the face; this type rarely causes metastasis [3–6]. Squamous cell carcinoma is the second most common type of skin cancer. It appears in the cells that make up the upper skin layers and is more likely to spread to areas under the skin. It is usually found on parts of the body that are exposed to UV light, so it can also appear on the legs or feet; it can be locally invasive and may reach large sizes if it is not detected early. It is able to cause metastasis [6], and it classically presents as a nodule, papule or tumor [5]. Melanoma is the least common of the three types, but it is the most dangerous because it can cause death if it is not detected early. It can take place anywhere on the skin, but it is more likely to develop in certain locations. Like basal cell and squamous cell cancers, melanoma is almost always curable in its early stages [6, 7].

Skin cancer is very common in Europe, Australia and the USA [6] and is almost always curable if recognized and treated early. The major risk factors are skin color, sun exposure, climate, advanced age, and genetic and family history. The best way to detect melanoma is to recognize a new spot on the skin or a spot that is changing in size, shape or color. Early detection of skin cancer can avoid death [8].

Currently this disease represents a serious health problem, and the search for an accurate clinical diagnosis has been a constant concern for dermatologists. Several methodologies in the area of image processing have been developed using algorithms or systems for detection and classification by means of computational techniques and methods, which have been applied to solving medical problems. These methodologies can be a very effective tool, especially where there are no specialists; on the other hand, they are also noninvasive tools for the patient [7–9]. Over the last decades image processing has been applied in different areas, allowing the information in an image to be improved for its interpretation, representation, description, and processing.

In recent years, several studies related to images of pigmented skin lesions for diagnosing and classifying skin lesions such as skin cancer have been developed by means of digital image analysis. Their main objective has been to provide an accurate diagnosis, and most studies are related to the diagnosis of malignant melanoma. Gola et al. [10] developed an automated dermatological tool to identify melanoma. Their algorithms are based on identifying three categories: reticular, globular and homogeneous blue pigmentation. An important aspect of their work is extracting the shape of the skin lesion and then extracting features of interest. Since each algorithm alone cannot make a final decision, they plan to develop a system correlating all algorithms in order to perform a correct diagnosis. Rahman and Bhattacharya [9] proposed a similar method to recognize malignant melanoma. They presented a decision system using different classifiers such as support vector machines (SVMs), k-nearest neighbors (K-NNs) and Gaussian maximum likelihood (G-ML). The morphology of the lesion is detected by applying a thresholding-based segmentation method, from which a lesion mask containing the area of the lesion in gray levels is obtained. Color features are extracted from the lesion mask to train the respective classifiers, and a comparison of the single classifiers is then performed; the highest precision obtained with one of the classifiers was 72.45%. In their method melanoma, benign lesions and dysplastic nevi were classified. Cavalcanti and Scharcanski [11] proposed a method for classifying pigmented skin lesions as benign or malignant using two classifiers: the k-nearest neighbors (KNN) and the KNN followed by a decision tree (KNN-DT). First, a preprocessing step is applied to the image in which shading effects are attenuated; for this, the original image in the RGB color space is converted to the HSV color space. Then a segmentation method is developed considering texture and color patterns of the image, and in this step some operations are applied to eliminate imperfections caused by noise. From the segmented image a set of features is extracted according to the ABCD rule (i.e., asymmetry, border irregularity, color variation and diameter). Both classifiers were trained with these features. The results showed an accuracy of 94.54% in predicting a lesion as benign or malignant. The methodology proposed by Jaleel et al. [12] is based on image processing techniques using an artificial neural network. Their main objective was to classify melanoma against other skin diseases. The image was preprocessed in order to remove the noise present in it and then smoothed with a median filter. Afterwards the image is segmented and binarized using threshold segmentation. A 2D wavelet transform is applied to the segmented image to extract features such as the mean, standard deviation, absolute mean, L1 norm and L2 norm, and the network was trained with these features. Its accuracy rate is good; however, it can still be improved. Given a dermoscopic image, Sadeghi et al. [13] classified the absence or presence of a pigmented network. First, the image was preprocessed by applying a high-pass filter to remove the low-frequency noise. This step was performed on different color transformations (NTSC, L*a*b*, red, green and blue). A Laplacian of Gaussian (LoG) filter was applied in order to find meshes or cyclic structures, which represent the presence of a pigmented network region.
The set of subgraphs obtained is converted to a graph using an eight-connected-components analysis, after which noise or unwanted structures are removed. Considering the distance between nodes corresponding to a hole found in each subgraph, a high-level graph is created. By means of this graph the density ratio is obtained, which compares the number of edges in the graph with its vertices and the entire lesion area; this density is used to detect a pigmented network. They reported a classification rate of 94.3%. Barata et al. [14] proposed a system for detecting a pigmented network. This method uses directional filters. The image is converted to grayscale to remove hair and reflections caused by the dermoscopy gel. The intensity property is used to enhance the pigment network by applying directional filters, and the geometric or spatial organization is used to generate a binary net-mask; the spatial organization is carried out with the connection of all pixels. Then a label is assigned to each binary image, that is, with or without a pigmented network. Features are extracted and used to train a classifier using the boosting algorithm. The algorithm was tested on a data set of 200 dermoscopy images (88 with a pigment network and 112 without). They reported a classification result of 82.1%. Betta et al. [15] described a method for detecting an atypical pigmented network. This methodology is based on the combination of structural and spectral techniques. The structural technique identifies the texture defined by local discontinuities such as lines and/or points; this is obtained by comparing the monochromatic image with a median-filtered version of the image. The spectral technique is based on the Fourier analysis of the image to obtain the spatial period of the texture, and in this way a “regions with network” mask is created. This mask is then joined with the segmented mask to obtain a “network image”, where the lesion area and pigment network are highlighted. Finally, to quantify the nature of the network, two indices related to the spatial and chromatic variability were obtained. To evaluate the performance of this method 30 images were evaluated.

The majority of the studies published in the literature, such as [16–18], include stages similar to those in the works mentioned above. These methods generally consist of four stages: (i) acquisition of the image set, (ii) segmentation (which includes several methods, i.e., techniques based on edges, thresholding, histogram segmentation, region growing, etc.), (iii) feature extraction, and (iv) classification or detection of a lesion for diagnosis.

It is important to evaluate patients with skin spots quickly and efficiently through a noninvasive technique that is easy to implement. The importance of obtaining an accurate diagnostic method has led to the development of techniques based on image processing.

The aim of this work has been to develop a new methodology for diagnosing skin cancer based on image processing, applying different Fourier spectral filtering techniques such as the classic, inverse and k-law nonlinear filters [19].

Fourier spectral analysis techniques have demonstrated the capability to analyze important features of images in several fields, being one of the most powerful tools available.

2. Materials and methods

2.1 Image acquisition and construction of image bank

The image bank was created with images of skin cancer provided by the Dermatology Department of Centenario Hospital Hidalgo and the Instituto Nacional de Cancerología (INcan) in Mexico City. The images were classified as basal cell carcinoma, squamous cell carcinoma and melanoma by certified dermatologists after histopathological examination. Some images are shown in Fig. 1.

Fig. 1 Skin cancer images.

2.2 Multispectral analysis

Appendix A shows the mathematical analysis for skin cancer images. It can be observed that, after applying the methodology, the results do not present a significant difference; thus any color channel or grayscale can be used.

2.3 Skin lesion segmentation

Image segmentation is one of the most important stages in image processing. Its objective is to partition an input image into regions or categories. It is still one of the most difficult tasks; however, several segmentation techniques have been developed, thus allowing its improvement. Biological images, because of their complexity, have been more difficult to segment well [20]. In this case our objective is to remove the healthy skin from the image to obtain the region that contains the skin lesion.

The thresholding method is used in this paper. First the original image is converted to grayscale, then the threshold method is applied, and finally the output is a binary image which we call a binary mask. Figure 2 shows the segmentation process.

Fig. 2 Segmentation: (a) original image, (b) grayscale image, (c) binary mask.

The binary mask function, $I_{Mask}(x,y)$, used to obtain the binary image can be expressed as

$$I_{Mask}(x,y)=\begin{cases}1 & \text{if } f_{w}^{\lambda_{Gray}}(x,y)>T\\ 0 & \text{if } f_{w}^{\lambda_{Gray}}(x,y)\le T\end{cases}\qquad(1)$$

where $T$ is a threshold value and $x$, $y$ are the coordinates of the point being thresholded. All gray level values greater than $T$ are classified as white, taking the value 1, and all values less than or equal to $T$ are black, taking the value 0. Finally, in order to build the binary mask the values are interchanged; in this way the area of interest is filled with ones.

To avoid problems when determining the threshold value, the Otsu method was used, which provides the optimal threshold value under the criterion of maximum variance between the background and the object. This can be expressed by

$$T=\max\left(\sigma^{2}\right).\qquad(2)$$

The function $A_{Mask}$ contains the total area of the binary mask over the region of interest to analyze, given by

$$A_{Mask}=\sum_{x,y} I_{Mask}(x,y),\quad \text{for } I_{Mask}>0.\qquad(3)$$

The intensity matrix data of the selected channel are multiplied element by element by the binary mask function $I_{Mask}(x,y)$, obtaining the function $I_{w}^{\lambda_c}(x,y)$, which contains the information of interest of the spot that will be analyzed; thus $I_{w}^{\lambda_c}(x,y)=f_{w}^{\lambda_c}(x,y)\cdot I_{Mask}(x,y)$, where the symbol $\cdot$ represents the element-wise multiplication.
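To make this segmentation stage concrete, the following Python sketch (not part of the original paper) computes an Otsu threshold with NumPy, builds the binary mask of Eq. (1) with the values interchanged as described above, accumulates its area as in Eq. (3), and applies the pixel-wise product; the function names and the simple channel-averaging grayscale conversion are assumptions made here.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu threshold: maximizes the between-class variance (Eq. (2))."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                      # cumulative class probability
    mu = np.cumsum(p * np.arange(256))        # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))

def segment_lesion(rgb):
    """Return the binary mask (Eq. (1)), its area (Eq. (3)) and the masked image."""
    gray = rgb.mean(axis=2)                   # simple grayscale conversion (assumption)
    T = otsu_threshold(gray)
    mask = (gray > T).astype(np.uint8)
    mask = 1 - mask                           # interchange values: lesion area becomes 1
    area = mask.sum()                         # A_Mask
    masked = rgb * mask[..., None]            # element-wise product f_w * I_Mask
    return mask, area, masked
```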

2.4 Sub-images

The binary image was divided into twenty-five sub-images $I_{Mask_j}(x,y)$, where each sub-image has its respective area $A_{Mask_j}$, with $j=1,\dots,25$. The sub-images are shown in Fig. 3.

Fig. 3 Sub-images $I_{Mask_j}(x,y)$ of the binary mask.

Moreover, the image of the function $I_{w}^{\lambda_c}(x,y)$ is divided into twenty-five sub-images, as shown in Fig. 4; each sub-image is represented by the corresponding function $I_{w_j}^{\lambda_c}(x,y)$.

Fig. 4 Sub-images $I_{w_j}^{\lambda_c}(x,y)$, $j=1,\dots,25$.

A sub-image $I_{w_j}^{\lambda_c}(x,y)$ is selected under the condition that $A_{Mask_j}$ is greater than or equal to one third of the sub-image area, in order to select only sub-images that contain information in at least one third of their total area. Figure 5 shows the sub-images that meet this condition for this case.

Fig. 5 Sub-images $I_{w_j}^{\lambda_c}(x,y)$ with information greater than or equal to one third of the area.
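A minimal sketch of this sub-image stage, under the assumptions of a 5 x 5 tiling and the one-third selection rule described above; the helper names are hypothetical and any remainder pixels at the borders are simply discarded.

```python
import numpy as np

def split_into_subimages(arr, grid=5):
    """Split an array into grid x grid sub-images (here 5 x 5 = 25)."""
    h, w = arr.shape[:2]
    hs, ws = h // grid, w // grid
    return [arr[i*hs:(i+1)*hs, j*ws:(j+1)*ws]
            for i in range(grid) for j in range(grid)]

def select_subimages(masked_image, mask, grid=5, fraction=1/3):
    """Keep only sub-images whose mask covers at least `fraction` of their area."""
    sub_imgs = split_into_subimages(masked_image, grid)
    sub_masks = split_into_subimages(mask, grid)
    selected = []
    for img_j, mask_j in zip(sub_imgs, sub_masks):
        area_j = mask_j.sum()                    # A_Mask_j
        if area_j >= fraction * mask_j.size:     # at least one third of the sub-image area
            selected.append((img_j, mask_j, area_j))
    return selected
```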

2.5 Digital filters (spatial filters)

Appendix B gives us a complete description of the three spatial filters used in this work: classic filter [21], inverse filter [22, 23] and the k-law nonlinear filter [24].

2.6 Spectral index

The spectral density $SSF$ may be defined as a function that contains the spectral properties of the function $f_{w_j}^{\lambda_c}(u,v)$; the spectral index is therefore described as a quantitative measure of complex patterns [25].

The function $f_{w_j}^{\lambda_c}(u,v)$ is analyzed with the conditions

$$SSF_{1}\!\left(f_{w_j}^{\lambda_c}(u,v)\right)=\begin{cases}1, & \text{if } \operatorname{Re}\!\left(f_{w_j}^{\lambda_c}(u,v)\right)\ge 0\\ 0, & \text{otherwise,}\end{cases}$$
$$SSF_{2}\!\left(f_{w_j}^{\lambda_c}(u,v)\right)=\begin{cases}1, & \text{if } \operatorname{Re}\!\left(f_{w_j}^{\lambda_c}(u,v)\right)< 0\\ 0, & \text{otherwise,}\end{cases}$$
$$SSF_{3}\!\left(f_{w_j}^{\lambda_c}(u,v)\right)=\begin{cases}1, & \text{if } \operatorname{Im}\!\left(f_{w_j}^{\lambda_c}(u,v)\right)\ge 0\\ 0, & \text{otherwise,}\end{cases}$$
$$SSF_{4}\!\left(f_{w_j}^{\lambda_c}(u,v)\right)=\begin{cases}1, & \text{if } \operatorname{Im}\!\left(f_{w_j}^{\lambda_c}(u,v)\right)< 0\\ 0, & \text{otherwise,}\end{cases}\qquad(4)$$

where $\operatorname{Re}\!\left(f_{w_j}^{\lambda_c}(u,v)\right)$ and $\operatorname{Im}\!\left(f_{w_j}^{\lambda_c}(u,v)\right)$ are the real and imaginary parts, respectively. These conditions are applied to determine which provides better results.

The spectral index for each sub-image is obtained as the ratio between the sum of the unit values given by Eq. (4) and the sum of the unit values of the binary mask given by Eq. (3), defined by

$$i_{j_{ss}}=\left\{\frac{\sum_{u,v} SSF\!\left(f_{w_j}^{\lambda_c}(u,v)\right)}{A_{Mask_j}}\right\}.\qquad(5)$$

Finally the spectral index of the image analyzed is obtained by averaging the indices computed as

$$\bar{i}_{ss}=\frac{1}{j}\sum_{j} i_{ss_j}.\qquad(6)$$
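A brief sketch of how Eqs. (4)–(6) could be evaluated, assuming the filtered Fourier spectra of the selected sub-images and their mask areas are already available; the mapping of the four SSF conditions to the letters A–D of Table 1 is an assumption made here for illustration.

```python
import numpy as np

def ssf_condition(spectrum, condition="A"):
    """Binarize the filtered spectrum with one of the four SSF conditions (Eq. (4))."""
    if condition == "A":       # SSF1: Re >= 0
        return (spectrum.real >= 0).astype(np.uint8)
    if condition == "B":       # SSF2: Re < 0
        return (spectrum.real < 0).astype(np.uint8)
    if condition == "C":       # SSF3: Im >= 0
        return (spectrum.imag >= 0).astype(np.uint8)
    if condition == "D":       # SSF4: Im < 0
        return (spectrum.imag < 0).astype(np.uint8)
    raise ValueError("condition must be one of 'A', 'B', 'C', 'D'")

def spectral_index(filtered_spectra, mask_areas, condition="A"):
    """Per-sub-image indices (Eq. (5)) averaged into a single index (Eq. (6))."""
    indices = [ssf_condition(spec, condition).sum() / area
               for spec, area in zip(filtered_spectra, mask_areas)]
    return float(np.mean(indices))
```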

3. Results

The methodology was applied to a set of 332 dermatologic images of different sizes provided by medical specialists. This image set contains images of skin cancer (260) and benign lesions (72). The classic, inverse and k-law nonlinear filters, using different color transformations, were applied to each image.

Figure 6 shows the boxplots of the spectral indices using the mean of the values with two standard errors ($\pm 2SE$), obtained with the k-law filter. Box plots in red refer to images of skin cancer and box plots in green indicate the conditions for benign lesions. We observe that there is no overlap of the whiskers in any of the conditions for images of skin cancer and benign lesions. The different filters showed very similar results in all cases. Letters A, B, C and D indicate the conditions used for skin cancer, and the same letters indicate the conditions used for benign lesions (Table 1). In this figure all three categories of skin cancer defined above are considered.
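The whiskers in Figs. 6–9 are the mean with a ±2SE band; the small sketch below shows that interval and the non-overlap check described above, assuming the spectral indices of each group are available as arrays (the function names are illustrative only).

```python
import numpy as np

def mean_2se_interval(values):
    """Mean with a +/- 2 standard-error band (the whiskers used in Figs. 6-9)."""
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    se = values.std(ddof=1) / np.sqrt(values.size)
    return mean - 2 * se, mean, mean + 2 * se

def intervals_separated(cancer_indices, benign_indices):
    """True when the +/- 2SE bands of the two groups do not overlap."""
    lo_c, _, hi_c = mean_2se_interval(cancer_indices)
    lo_b, _, hi_b = mean_2se_interval(benign_indices)
    return hi_c < lo_b or hi_b < lo_c
```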

Fig. 6 Malignant-benign classification. Box plot of the spectral indices of the k-law filter using RGB channels and grayscale: (a) red channel, (b) green channel, (c) blue channel and (d) grayscale images.


Table 1. Conditions given for real and imaginary part

Table 2 presents the intervals of spectral indices obtained by statistical analysis applying the k-law filter. In each channel it can be seen that there is a clear difference between the intervals obtained for skin cancer and benign lesion images, so both groups can be located in well-defined fringes. The classic and inverse filters were also analyzed, obtaining the same values; this is due to the conditions applied over the spectral densities in each sub-image. This methodology has a confidence level of 95.4%.
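As an illustration of how such a fringe could be used for diagnosis, the hypothetical helper below labels a new image from its spectral index and a pair of interval bounds such as those reported in Table 2; the numerical bounds themselves are not reproduced here.

```python
def classify_by_fringe(index, cancer_interval, benign_interval):
    """Label a lesion by the spectral-index fringe its value falls in.

    `cancer_interval` and `benign_interval` are (low, high) tuples taken from
    the statistically derived ranges (e.g. Table 2); they are assumed inputs.
    """
    lo_c, hi_c = cancer_interval
    lo_b, hi_b = benign_interval
    if lo_c <= index <= hi_c:
        return "skin cancer"
    if lo_b <= index <= hi_b:
        return "benign lesion"
    return "undetermined"
```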

Considering now the different skin lesions separately (basal cell carcinoma, squamous cell carcinoma and melanoma), Figs. 7, 8, and 9 show the boxplots of the spectral indices using the mean of the values with two standard errors, obtained with the k-law filter. For basal cell carcinoma 32 images, for squamous cell carcinoma 30 images and for melanoma 198 images were used. Again, we observe that there is no overlap of the whiskers in any of the conditions for images of skin cancer and benign lesions. The different filters showed very similar results in all the cases. The range of values for the first two cancer types is slightly lower than the range of values for melanoma. However, these well-defined fringes can be useful to determine the kind of skin cancer. The biological variability (hence the error bars) is much larger for benign than for malignant lesions. The main difference between Figs. 7 and 8 is condition C (Table 1). The results in the three channels and in grayscale are very similar; the numerical differences can be observed in Tables 3, 4, and 5.

Fig. 7 Basal cell carcinoma. Box plot of the spectral indices of the k-law filter using RGB channels and grayscale: (a) red channel, (b) green channel, (c) blue channel and (d) grayscale images.

Fig. 8 Squamous cell carcinoma. Box plot of the spectral indices of the k-law filter using RGB channels and grayscale: (a) red channel, (b) green channel, (c) blue channel and (d) grayscale images.

Fig. 9 Melanoma. Box plot of the spectral indices of the k-law filter using RGB channels and grayscale: (a) red channel, (b) green channel, (c) blue channel and (d) grayscale images.

Table 6 shows a comparison of the confidence level with that of other works.


Table 6. Comparison with other works

4. Conclusions

This paper presents a new methodology for the diagnosis of skin cancer on images with spots through a spectral analysis using different types of filters, such as the classic, inverse and k-law nonlinear filters, performing a frequency analysis on the RGB channels and in grayscale. The confidence level is 95.4%.

After testing the filters, the results obtained show no significant difference; this is due to applying the conditions of the spectral density to obtain the binarized image, and therefore any filter on any channel can be used. Images of skin cancer and benign lesions were analyzed. The results show a clear difference between the intervals for the diagnosis of skin cancer and benign lesions. The results also show that skin cancer presents a well-defined fringe of spectral indices, so the values obtained inside this fringe will be diagnosed as skin cancer.

This methodology may be helpful for medical doctors who do not have enough experience in dermatology. Finally, this system will provide a fast diagnosis for the medical field and will be a non-invasive tool for the patient; therefore it can help prevent deaths from skin cancer.

Appendix A

The image is an intensity data set in the spatial domain, represented by the multispectral function $f_{w}^{\lambda_c}(x,y)$, where $x$ and $y$ are the pixel coordinates and $\lambda_c=\{\lambda_R,\lambda_G,\lambda_B\}$, where red (R), green (G) and blue (B) are the channels of the RGB color model produced by a digital image with a range of [0, 255].

The image bank is made up of the multispectral functions $f_{1}^{\lambda},f_{2}^{\lambda},f_{3}^{\lambda},\dots,f_{w}^{\lambda}$, where $w$ indexes the image, taken with a size of $N\times P$ pixels, with $x=1,\dots,N$ and $y=1,\dots,P$.

Each image $f_{w}^{\lambda_c}(x,y)$ can be separated into its respective RGB channels, thus obtaining the data for three intensity matrices $f_{w}^{\lambda_R}(x,y)$, $f_{w}^{\lambda_G}(x,y)$ and $f_{w}^{\lambda_B}(x,y)$.

Every intensity matrix channel of a skin cancer image is analyzed by taking an intensity profile vector set $\{\xi_{q}^{\lambda}\}_w$, where $q=1,\dots,Q$ indexes the vectors and $\{\xi_{q}^{\lambda}\}_w \subset f_{w}^{\lambda_c}(x,y)$; thus $\{\xi_{q}^{\lambda}\}_w$ can be defined by

$$\{\xi_{q}^{\lambda}\}_w=f_{w}^{\lambda_c}\left(x,\zeta_{Vt}\right),\quad \text{for } Vt=-\left[\tfrac{Q}{2}\right],\dots,\left[\tfrac{Q}{2}\right],\qquad(7)$$

where $\zeta_{Vt}=\frac{N}{2}+Vt$ and $x=1,\dots,P$.

Let $\Psi^{\lambda}$ be a vector of mean values of the intensity profile vector set on each channel. Using Eq. (7), these mean values can be calculated as

$$\hat{\Psi}_{w}^{\lambda}=\frac{\sum_{q}\{\xi_{q}^{\lambda}\}_w}{Q}.\qquad(8)$$

$Q=30$ was the value taken in this study for all images. Figure 10 shows the intensity profile vector set $\{\xi_{q}^{\lambda}\}_w$ with the graphics calculated from each skin cancer channel using Eq. (7); afterwards, using Eq. (8), the pattern measurement can be obtained for every skin cancer channel, $\Psi^{\lambda_R}$, $\Psi^{\lambda_G}$, $\Psi^{\lambda_B}$ and $\Psi^{Gray}$, respectively. The x axis represents the distance along the profile.
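A short sketch of how the profile set of Eq. (7) and its mean, Eq. (8), could be computed for one channel; taking exactly Q rows centred on the middle row is an assumption made here.

```python
import numpy as np

def intensity_profiles(channel, Q=30):
    """Take Q horizontal intensity profiles around the middle row of a channel (Eq. (7))."""
    N = channel.shape[0]               # number of rows in the channel
    offsets = np.arange(Q) - Q // 2    # Q offsets Vt centred on the middle row (assumption)
    rows = N // 2 + offsets            # zeta_Vt = N/2 + Vt
    return channel[rows, :]            # each row is one profile vector xi_q

def mean_profile(profiles):
    """Average the Q profiles into a single pattern vector (Eq. (8))."""
    return profiles.mean(axis=0)
```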

Fig. 10 Intensity profiles: (a) intensity vector set, (b) mean values of red channel, (c) mean values of green channel, (d) mean values of blue channel, and (e) mean values of grayscale.

Figure 11 shows all the intensity profiles, where a small difference between the intensity profiles can be observed; however, after applying the methodology, the results do not present a significant difference, thus any color channel or grayscale can be used.

Fig. 11 Intensity profiles of all channels and grayscale.

Appendix B

Filters allow us to extract the information of interest from an image. The methodology presented in this work is based on the classic, inverse and nonlinear k-law filters.

a) A classic filter [21] is denoted as

$$CF=\left|I_{w_j}^{\lambda_c}(u,v)\right|\exp\left[j\varphi_{w_j}(u,v)\right],\qquad(9)$$

where $I_{w_j}^{\lambda_c}(u,v)$ is the Fourier transform of the function $I_{w_j}^{\lambda_c}(x,y)$ and $\varphi_{w_j}(u,v)$ is the phase.

b) Inverse filter [22, 23] is defined as

$$IF=\frac{\exp\left[j\varphi_{w_j}(u,v)\right]}{\left|I_{w_j}^{\lambda_c}(u,v)\right|}.\qquad(10)$$

c) A k-law nonlinear filter [24] is expressed as follows

$$klawF=\left|I_{w_j}^{\lambda_c}(u,v)\right|^{k}\exp\left[j\varphi_{w_j}(u,v)\right],\qquad(11)$$

where $\left|I_{w_j}^{\lambda_c}(u,v)\right|$ is the modulus of the Fourier transform, $u$ and $v$ are the variables of the frequency domain, $k$ is the level of nonlinearity taking values $0<k<1$, and $\varphi_{w_j}(u,v)$ is the phase of the Fourier transform. The nonlinearity factor $k=0.1$ was used.

Each filter was then applied over the function $I_{w_j}^{\lambda_c}(x,y)$ to obtain the corresponding filtered function.
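A compact sketch of the three frequency-domain filters of Eqs. (9)–(11), assuming a real-valued sub-image array as input; the small constant added in the inverse filter to avoid division by zero is an implementation detail assumed here.

```python
import numpy as np

def apply_filter(sub_image, kind="klaw", k=0.1):
    """Apply the classic, inverse or k-law nonlinear filter in the Fourier domain."""
    spectrum = np.fft.fft2(sub_image)        # I_wj(u, v)
    modulus = np.abs(spectrum)
    phase = np.exp(1j * np.angle(spectrum))  # exp[j * phi_wj(u, v)]
    eps = 1e-12                              # guard against division by zero (assumption)
    if kind == "classic":
        return modulus * phase               # Eq. (9)
    if kind == "inverse":
        return phase / (modulus + eps)       # Eq. (10)
    if kind == "klaw":
        return modulus ** k * phase          # Eq. (11), 0 < k < 1 (k = 0.1 in this work)
    raise ValueError("kind must be 'classic', 'inverse' or 'klaw'")
```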

Acknowledgments

This document is based on work partially supported by CONACYT under grant No. 169274 and partially supported by the Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE). Esperanza Guerra-Rosas is a student in the PhD program of the Departamento de Investigación en Física at the Universidad de Sonora and is supported by a CONACyT scholarship.

References and links

1. M. J. Eide, M. M. Asgari, S. W. Fletcher, A. C. Geller, A. C. Halpern, W. R. Shaikh, L. Li, G. L. Alexander, A. Altschuler, S. W. Dusza, A. A. Marghoob, E. A. Quigley, M. A. Weinstock, and Informed (Internet course for Melanoma Early Detection) Group, “Effects on skills and practice from a web-based skin cancer course for primary care providers,” J. Am. Board Fam. Med. 26(6), 648–657 (2013).

2. American Cancer Society, Cancer Facts and Figures (2014).

3. N. R. Telfer, G. B. Colver, C. A. Morton, and British Association of Dermatologists, “Guidelines for the management of basal cell carcinoma,” Br. J. Dermatol. 159(1), 35–48 (2008).

4. A. Kricker, B. Armstrong, V. Hansen, A. Watson, G. Singh-Khaira, C. Lecathelinais, C. Goumas, and A. Girgis, “Basal cell carcinoma and squamous cell carcinoma growth rates and determinants of size in community patients,” J. Am. Acad. Dermatol. 70(3), 456–464 (2014).

5. E. A. Gordon Spratt and J. A. Carucci, “Skin cancer in immunosuppressed patients,” Facial Plast. Surg. 29(5), 402–410 (2013).

6. S. Ogden and N. R. Telfer, “Skin cancer,” Medicine (Baltimore) 37(6), 305–308 (2009).

7. K. Korotkov and R. Garcia, “Computerized analysis of pigmented skin lesions: a review,” Artif. Intell. Med. 56(2), 69–90 (2012).

8. A. O. Berg, D. Best, and US Preventive Services Task Force, “Screening for skin cancer: recommendations and rationale,” Am. J. Prev. Med. 20(3 Suppl), 44–46 (2001).

9. M. M. Rahman and P. Bhattacharya, “An integrated and interactive decision support system for automated melanoma recognition of dermoscopic images,” Comput. Med. Imaging Graph. 34(6), 479–486 (2010).

10. A. G. Isasi, B. G. Zapirain, and A. M. Zorrilla, “Melanomas non-invasive diagnosis application based on the ABCD rule and pattern recognition image processing algorithms,” Comput. Biol. Med. 41(9), 742–755 (2011).

11. P. G. Cavalcanti and J. Scharcanski, “Automated prescreening of pigmented skin lesions using standard cameras,” Comput. Med. Imaging Graph. 35(6), 481–491 (2011).

12. J. A. Jaleel, S. Salim, and R. B. Aswin, “Artificial neural network based detection of skin cancer,” IJAREEIE 1, 200–205 (2012).

13. M. Sadeghi, M. Razmara, T. K. Lee, and M. S. Atkins, “A novel method for detection of pigment network in dermoscopic images using graphs,” Comput. Med. Imaging Graph. 35(2), 137–143 (2011).

14. C. Barata, J. S. Marques, and J. Rozeira, “A system for the detection of pigment network in dermoscopy images using directional filters,” IEEE Trans. Biomed. Eng. 59(10), 2744–2754 (2012).

15. G. Betta, G. Di Leo, G. Fabbrocini, A. Paolillo, and P. Sommella, “Dermoscopic image-analysis system: estimation of atypical pigment network and atypical vascular pattern,” presented at the International Workshop on Medical Measurement and Applications, Benevento, Italy, 20–21 April 2006.

16. M. E. Celebi, H. A. Kingravi, B. Uddin, H. Iyatomi, Y. A. Aslandogan, W. V. Stoecker, and R. H. Moss, “A methodological approach to the classification of dermoscopy images,” Comput. Med. Imaging Graph. 31(6), 362–373 (2007).

17. N. Smaoui and S. Bessassi, “A developed system for melanoma diagnosis,” Int. J. Comput. Vis. 3(1), 10–17 (2013).

18. J. Premaladha and K. S. Ravichandran, “Asymmetry analysis of malignant melanoma using image processing: a survey,” Science Alert 7(2), 45–53 (2014).

19. E. Guerra-Rosas, J. Álvarez-Borrego, and Á. Coronel-Beltrán, “Diagnosis of skin cancer ussing image processing,” AIP Conf. Proc. 1618, 155 (2014).

20. S. Uchida, “Image processing and recognition for biological images,” Dev. Growth Differ. 55(4), 523–549 (2013).

21. A. B. Vander Lugt, “Signal detection by complex spatial filtering,” IEEE Trans. Inf. Theory 10(2), 139–145 (1964).

22. A. A. S. Awwal, M. A. Karim, and S. R. Jahan, “Improved correlation discrimination using an amplitude-modulated phase-only filter,” Appl. Opt. 29(2), 233–236 (1990).

23. B. V. K. V. Kumar and L. Hassebrook, “Performance measures for correlation filters,” Appl. Opt. 29(20), 2997–3006 (1990).

24. R. E. Guerrero and J. Álvarez Borrego, “Nonlinear composite filter performance,” Opt. Eng. 48(6), 067021 (2009).

25. M. A. Bueno-Ibarra, M. C. Chávez-Sánchez, and J. Álvarez-Borrego, “Nonlinear law spectral technique to analyze white spot syndrome virus infection,” IJALS 2(3,4), 125–132 (2010).
