Optica Publishing Group

Identifying modulation formats through 2D Stokes planes with deep neural networks

Open Access

Abstract

A lightweight convolutional neural network (CNN) based modulation format identification (MFI) scheme operating in 2D Stokes planes is proposed and demonstrated for polarization domain multiplexing (PDM) fiber communication systems. The influence of the CNN learning rate is discussed. Experimental verifications are performed for a PDM system at a symbol rate of 28 GBaud. Six modulation formats are identified by a trained CNN from images of the received signals: PDM-BPSK, PDM-QPSK, PDM-8PSK, PDM-16QAM, PDM-32QAM, and PDM-64QAM. By taking advantage of computer vision techniques, the proposed scheme significantly improves identification performance over existing techniques.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With the increasing demand for various data services such as cloud computing, 5G, and video-on-demand (VoD), the intelligent optical network was proposed [1]. Since it can adapt transceiver configurations such as modulation format and mixed data rates [2] according to the instantaneous link margin without interrupting network traffic, research on it has recently drawn considerable interest. The key to implementing the intelligent optical network is to design a hitless flexible transceiver [3,4], which can reconfigure the digital signal processing (DSP) flow at the receiver end (Rx) when the modulation format (MF) of the received signals changes. This makes the optical network “software-programmable”. In this scenario, the receiver is expected to be able to identify the MF of the received signals to ensure proper demodulation. That is, modulation format identification (MFI) is required when designing a hitless flexible transceiver.

To meet this requirement, various MFI schemes based on different characteristics of modulation formats have been proposed, including coefficients of the discrete Fourier transform (DFT) [5], and the number of clusters or higher-order statistics [4,6–9] of the power distributions in Stokes space or the constellation planes [10,11] of the received signals. Blind [12] and pilot-aided [13] MFI schemes have also been proposed. Among these methods, Stokes-space-based ones have attracted great attention owing to their insensitivity to carrier phase noise, frequency offset, and polarization mixing [11].

The successes of machine learning in computer vision and artificial intelligence have inspired a wide range of applications in various areas. Noticing that the MFI problem is in fact a recognition problem over the distribution of constellations of the received signals, it is natural to design an MFI scheme based on convolutional neural networks (CNNs). In [14], a CNN-based intelligent MFI method was proposed. By training a constellation diagram analyzer, the method can identify six modulation formats (QPSK, 8PSK, 8QAM, 16QAM, 32QAM, 64QAM) over a wide OSNR range (20–30 dB for 64QAM and 15–30 dB for the others). The complexity of the algorithm is very low (O(1)). However, in polarization domain multiplexing (PDM) communication systems, the MFI scheme in [14] faces a great challenge.

In this paper, we propose a new CNN-based MFI method that identifies modulation formats in PDM optical fiber communication systems from images of constellations in 2D Stokes planes. Because the received signals are mapped into the 3D Stokes space, the proposed MFI scheme is insensitive to carrier phase noise, frequency offset, and polarization mixing. By using MobileNet V2 [15], a lightweight CNN, the complexity of the proposed scheme is comparable to that of the scheme in [14] (O(1)). Section 2 introduces the MFI scheme, Section 3 describes the simulation setup, Section 4 discusses the performance, and Section 5 concludes the paper.

2. Modulation format identification scheme

2.1 Rotation of state of polarization

The rotation of the state of polarization (SOP) is one of the impairments in PDM optical fiber communication systems. In general, the rotation of SOP can be modeled by a 2 × 2 unitary matrix Γ that rotates the horizontal and vertical polarization components at the transmitter to a pair of arbitrary but orthogonal polarization states. Formally,

\Gamma = \begin{bmatrix} \cos\theta & e^{j\Delta}\sin\theta \\ -e^{-j\Delta}\sin\theta & \cos\theta \end{bmatrix}, \qquad \theta \in [0^\circ, 90^\circ],\ \Delta \in [0^\circ, 360^\circ]
where θ is the azimuth angle and Δ is the phase angle. Figure 1 shows the constellations of PDM-16QAM and PDM-64QAM signals at azimuth angles θ = 0, 15, 30, and 45 degrees with Δ = 0 degrees. The rotation of SOP makes it difficult to identify modulation formats from the constellations of the received signals, even when the OSNR is high. Figure 2 illustrates two images of constellations of different modulation formats at the same OSNR (25 dB) but different azimuth angles (θ = 15° and θ = 10°, respectively, with Δ = 0°). Without prior information about the modulation format, it is very difficult or even impossible to distinguish them from each other using the method introduced in [14].
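As an illustrative sketch of this impairment model, the following NumPy snippet builds the unitary SOP rotation of Eq. (1) (written in the standard form with conjugate off-diagonal terms) and applies it to a block of PDM symbols; the 16QAM-like symbol values are placeholders, not data from the paper:

```python
import numpy as np

def sop_rotation(theta_deg, delta_deg):
    """2 x 2 unitary Jones matrix for SOP rotation (azimuth theta, phase delta)."""
    th = np.deg2rad(theta_deg)
    d = np.deg2rad(delta_deg)
    return np.array([[np.cos(th), np.exp(1j * d) * np.sin(th)],
                     [-np.exp(-1j * d) * np.sin(th), np.cos(th)]])

# Rotate a block of PDM symbols e = (ex, ey), shape (2, N)
rng = np.random.default_rng(0)
levels = np.array([-3, -1, 1, 3])
e = rng.choice(levels, size=(2, 1000)) + 1j * rng.choice(levels, size=(2, 1000))
gamma = sop_rotation(15, 0)   # theta = 15 deg, delta = 0 deg, as in Fig. 2
e_rot = gamma @ e

# Unitarity: Gamma† Gamma = I, so total power is preserved
assert np.allclose(gamma.conj().T @ gamma, np.eye(2))
assert np.isclose(np.sum(np.abs(e_rot)**2), np.sum(np.abs(e)**2))
```

Because Γ is unitary, the rotation mixes the two polarizations without changing the total signal power, which is exactly why the Jones constellations become unrecognizable while Stokes-space quantities remain usable.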


Fig. 1 Images of constellations generated by using 20000 received signals in PDM optical fiber communication system at 25dB OSNR. From the left to the right, the value of azimuth angle becomes larger and larger. The laser sources are working at a frequency of 193.3THz with linewidth of 100 kHz.


Fig. 2 Images of constellations generated by using 20000 received signals for different modulation formats. Without prior information about the modulation format of the received signals, it is very challenging to distinguish them from each other, even when the OSNR is relatively high (25dB).


2.2 The Stokes mapping and image generation

At the coherent receiver, the received PDM complex signals first pass through modulation format independent chromatic dispersion (CD) compensation and Pol-deMUX. Then, the PDM signals are mapped into Stokes space with Eq. (2).

S = \begin{pmatrix} s_0 \\ s_1 \\ s_2 \\ s_3 \end{pmatrix}
  = \begin{pmatrix} e_x e_x^* + e_y e_y^* \\ e_x e_x^* - e_y e_y^* \\ e_y e_x^* + e_x e_y^* \\ j e_y e_x^* - j e_x e_y^* \end{pmatrix}
  = \begin{pmatrix} a_x^2 + a_y^2 \\ a_x^2 - a_y^2 \\ 2 a_x a_y \cos\delta \\ 2 a_x a_y \sin\delta \end{pmatrix}
where (ex, ey) are the PDM complex signals after the preceding algorithms and (ax, ay) are their amplitudes. δ is the phase difference between ex and ey. Note that after being mapped into Stokes space, the received signals retain their amplitudes and relative phase, so Stokes-space representations are independent of frequency offset and phase noise.

A 3D Stokes-space constellation can be obtained from the last three components (s1, s2, s3) of the Stokes vector S, and different modulation formats (MFs) exhibit different characteristics in these constellations. Likewise, in the 2D Stokes planes (s1, s2), (s2, s3), and (s1, s3), different MFs show different patterns. After projecting the 3D Stokes-space constellation onto the 2D Stokes planes [11], “images” can be obtained from the collection of constellation points in each plane. To generate an image, we divide the area containing the constellation points into a grid of sub-areas, e.g., 224 rows and 224 columns, and count the number of constellation points falling in each sub-area. We then normalize each count by dividing by the maximum count over all sub-areas. Finally, we take the normalized value of each sub-area as the intensity of the corresponding pixel in an image of size 224 × 224. The generated images thus encode the distribution of constellation points. Figure 3 shows images of 20000 symbols of six different PDM signals in the 2D Stokes planes, together with their corresponding 3D Stokes-space and Jones constellations. In the proposed scheme, the generated images are used as the input of a three-channel MobileNet V2 for training. Each input of the MobileNet consists of three images from the three 2D Stokes planes obtained from the same 3D Stokes space; this triple is called an image combination.
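The Stokes mapping of Eq. (2) and the histogram-based image generation described above can be sketched as follows. This is an illustrative NumPy reimplementation, not the authors' code; the QPSK-like test symbols and noise level are made up for the example:

```python
import numpy as np

def stokes(ex, ey):
    """Map PDM Jones samples (ex, ey) to Stokes components (Eq. (2))."""
    s1 = np.abs(ex)**2 - np.abs(ey)**2
    s2 = 2 * np.real(ex * np.conj(ey))   # = e_y e_x* + e_x e_y*
    s3 = 2 * np.imag(ex * np.conj(ey))   # = 2 a_x a_y sin(delta)
    return s1, s2, s3

def plane_image(u, v, size=224):
    """Grayscale image of a 2D Stokes-plane constellation:
    a size x size 2D histogram normalized by its maximum bin count."""
    hist, _, _ = np.histogram2d(u, v, bins=size)
    return hist / hist.max()

# Example: 20000 noisy QPSK-like PDM symbols (hypothetical test data)
rng = np.random.default_rng(1)
sym = (rng.choice([-1, 1], (2, 20000)) + 1j * rng.choice([-1, 1], (2, 20000))) / np.sqrt(2)
sym = sym + 0.05 * (rng.standard_normal((2, 20000)) + 1j * rng.standard_normal((2, 20000)))

s1, s2, s3 = stokes(sym[0], sym[1])
img = plane_image(s1, s2)            # one of the three plane images
assert img.shape == (224, 224) and np.isclose(img.max(), 1.0)
```

Note that the common phase of (ex, ey) cancels in every product with a conjugate, which is the formal reason the generated images are immune to laser phase noise and frequency offset.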


Fig. 3 Images of 20000 symbols of PDM-BPSK, PDM-QPSK, PDM-8PSK, PDM-16QAM, PDM-32QAM, and PDM-64QAM signals in 2D Stokes planes with their corresponding 3D Stokes space constellations. Images in the first column are Jones constellations of the corresponding signals.


2.3 The structure of MobileNet V2

MobileNet V1 [16] is a lightweight, efficient deep neural network architecture that employs the depthwise separable convolution [17]. The depthwise separable convolution runs much faster than a traditional convolution, so MobileNet V1 has a low computational burden at the cost of only a slight accuracy decline. MobileNet V2 improves over MobileNet V1 in two respects: 1) it replaces the plain depthwise separable convolution with an inverted residual structure [18]; 2) it inserts linear bottleneck layers into the convolutional blocks to prevent nonlinearities from destroying too much information. The inverted residual bottleneck layer (bottleneck) boosts training accuracy while reducing computational cost.

In this paper, we adopt the MobileNet V2 architecture [15]. It contains 21 layers: 17 residual bottleneck layers, 3 convolutional layers, and 1 average pooling layer. Table 1 gives the details of the architecture for an input image size of 224 × 224. Each row of Table 1 describes a sequence of one or more identical (modulo stride) layers repeated n times; t is the expansion factor applied to the input size, c is the number of output channels of each layer, and s is the stride. ReLU6 is used as the activation function and the kernel size is 3 × 3.
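To see why the depthwise separable convolution is cheap, the following back-of-the-envelope sketch compares multiply-accumulate (MAC) counts for a standard convolution and its depthwise separable factorization; the layer dimensions are hypothetical and not taken from Table 1:

```python
def conv_macs(h, w, k, c_in, c_out):
    """MACs of a standard k x k convolution on an h x w feature map."""
    return h * w * k * k * c_in * c_out

def dwsep_macs(h, w, k, c_in, c_out):
    """MACs of a depthwise k x k conv followed by a 1 x 1 pointwise conv."""
    return h * w * k * k * c_in + h * w * c_in * c_out

# A MobileNet-like early layer: 112 x 112 feature map, 3 x 3 kernel
std = conv_macs(112, 112, 3, 32, 64)
sep = dwsep_macs(112, 112, 3, 32, 64)
print(f"reduction: {std / sep:.1f}x")   # prints: reduction: 7.9x
```

The reduction factor is 1 / (1/c_out + 1/k²), i.e., roughly k² = 9 for wide layers with a 3 × 3 kernel, which is the source of MobileNet's speed advantage.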


Table 1. Parameters of the MobileNet V2 network

2.4 The proposed scheme

The proposed CNN-MFI scheme is designed for PDM m-PSK and PDM m-QAM coherent systems, as shown in Fig. 4. It is placed immediately after the standard modulation-format-independent chromatic dispersion (CD) compensation and Pol-deMUX. Once the modulation format is identified, the remaining modulation-format-dependent equalization algorithms can be carried out.


Fig. 4 The simulation platform setup.


The proposed CNN-MFI scheme comprises three main steps (see Fig. 5): 1) mapping the received signals into Stokes space and generating images by projecting the 3D Stokes constellation onto three 2D Stokes planes; 2) training the deep neural network with the generated images; and 3) identifying the modulation format with the trained network in real time.


Fig. 5 Identifying the modulation format for a sequence of modulated signals by converting the sequence to an image combination of 2-D images and classifying the image combination with MobileNet V2.


The received signals are mapped into the 3D Stokes space with Eq. (2). The 3D Stokes-space constellation is then projected onto the three 2D Stokes planes to generate three images, which are fed into the trained three-channel MobileNet V2 model to identify the modulation format.
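A minimal sketch of assembling one three-channel input (an "image combination") from the three 2D Stokes planes might look like the following; the random Stokes samples stand in for real mapped signals:

```python
import numpy as np

def image_combination(s1, s2, s3, size=224):
    """Stack the (s1,s2), (s2,s3), (s1,s3) plane images into one
    3-channel CNN input of shape (size, size, 3)."""
    def plane(u, v):
        hist, _, _ = np.histogram2d(u, v, bins=size)
        return hist / hist.max()
    return np.stack([plane(s1, s2), plane(s2, s3), plane(s1, s3)], axis=-1)

# Hypothetical Stokes samples in place of a real mapped signal block
rng = np.random.default_rng(2)
s1, s2, s3 = rng.standard_normal((3, 20000))
x = image_combination(s1, s2, s3)
assert x.shape == (224, 224, 3)
```

Each channel is normalized independently, so the classifier sees the shape of each plane's point distribution rather than absolute point counts.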

3. Numerical simulation platform

The numerical simulation platform is implemented with VPIphotonics, and its structure is shown in Fig. 4. At the transmitter end (Tx), six PDM modulation formats are generated at 28 GBaud: PDM-BPSK, PDM-QPSK, PDM-8PSK, PDM-16QAM, PDM-32QAM, and PDM-64QAM. The transmitter laser works at a frequency of 193.3 THz with a linewidth of 100 kHz, and the frequency offset is set to 1 GHz. The modulated optical signal is launched into a simulated fiber re-circulating loop consisting of 8 spans of 80 km SSMF. An OSNR setting module and an SOP setting module are used to emulate realistic optical signals. The OSNR is swept from 9 to 35 dB in steps of 1 dB. After coherent detection (the LO linewidth is 100 kHz) and two-fold oversampling, the signals are sent to the digital signal processing (DSP) module. Inside the DSP module, CD compensation and Pol-deMUX [20] are performed first; the signals are then mapped into Stokes space and images are generated for the three 2D Stokes planes.

The modulation format is identified by feeding the generated images to the proposed MFI scheme, after which modulation-format-dependent equalization can be carried out.

We did not adopt data augmentation methods such as image flipping and translation, since the generated images are not images of natural color scenes. To train the network, we use 64800 image combinations (10800 per modulation format). The input images are 224 × 224 pixels, each generated from 20000 symbols, and training runs for 100 epochs to make the network converge. We choose 10−5 as the initial learning rate, and the Adam algorithm adaptively adjusts the learning rate during training.
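For reference, a single Adam update, the mechanism by which the step size is adapted per parameter starting from the initial rate of 10−5, can be sketched as below. This is a textbook NumPy implementation under the standard default hyperparameters, not the training code used in the paper:

```python
import numpy as np

def adam_step(p, g, m, v, t, lr=1e-5, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moment estimates, bias correction,
    then a step scaled by the inverse root of the second moment."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    m_hat = m / (1 - b1**t)          # bias-corrected first moment
    v_hat = v / (1 - b2**t)          # bias-corrected second moment
    p = p - lr * m_hat / (np.sqrt(v_hat) + eps)
    return p, m, v

p, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
g = np.array([1.0, -2.0, 0.5])       # hypothetical gradient
p, m, v = adam_step(p, g, m, v, t=1)
# The first step has magnitude ~lr regardless of the gradient scale
assert np.allclose(np.abs(p), 1e-5, rtol=1e-3)
```

This scale invariance of the step size is what makes a very small initial rate such as 10−5 workable across layers with very different gradient magnitudes.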

In the test stage, we again use VPIphotonics to generate modulated test signals. 16200 image combinations covering the six modulation formats (OSNR from 9 dB to 35 dB) are used to test the accuracy of the trained model.

4. Results and discussion

The identification results are given in Fig. 6. The system recognizes all six modulation formats precisely even at relatively low OSNRs. We consider the performance satisfactory only when the identification accuracy for each modulation format exceeds 95%. Figure 6 shows that the minimum OSNR at which this criterion is met is 15 dB, so Table 2 reports the identification results for OSNRs from 15 dB to 35 dB. The last column gives the identification accuracy; 2100 test image combinations were used for each of the six modulation formats.


Fig. 6 Identification accuracy at different OSNRs (9dB to 35dB) with image size 224 × 224 pixels.


Table 2. Number of test image combinations and accuracy matrix for the proposed CNN-MFI scheme with image size 224 × 224. Each modulation format contains 2100 test image combinations (OSNR from 15dB to 35dB). Five image combinations of PDM-16QAM are misclassified as PDM-64QAM (at OSNR 15dB). The total accuracy is 99.96%.

It can be observed in Table 2 that five image combinations of PDM-16QAM are misclassified as PDM-64QAM. The reason is that the visual appearance of the two formats is very similar when the OSNR is low, e.g., below 16 dB (see Fig. 7). However, once the OSNR exceeds 15 dB, the CNN-MFI scheme identifies all six modulation formats precisely, as shown in Fig. 6.


Fig. 7 Images of PDM-16QAM and PDM-64QAM in three different 2D Stokes planes at OSNR 15dB.


Unlike other methods, such as the Stokes-space clustering algorithms in [11] and [19], which still require OSNR values around the theoretical OSNR at the 7% FEC threshold for precise identification, the proposed CNN-MFI scheme greatly reduces the required OSNR, especially for high-order QAM signals. The theoretical OSNR values at the 7% FEC threshold for 28 GBaud 16QAM, 32QAM, and 64QAM are around 18 dB, 21 dB, and 24 dB, respectively; the proposed scheme reduces the required OSNR to 16 dB for all three formats. For PSK signals, all three formats are identified precisely at an OSNR of 9 dB, which is below the 7% FEC threshold of 28 GBaud BPSK (9.73 dB).

The proposed CNN-MFI scheme runs fast and requires only a small amount of memory. The average time to identify the modulation format of one image combination is about 5 ms on our hardware platform (an Intel Xeon E5 v4 with 8 cores at 1.7 GHz, and a GTX TITAN Xp GPU with 12 GB of memory). Only one generated image of size 224 × 224 needs to be held in memory at a time, and the CNN can be trained off-line.

To investigate the best performance the proposed CNN-MFI scheme can achieve, we also test the accuracy for different image sizes: 56 × 56, 112 × 112, and 224 × 224 pixels, as shown in Fig. 8. Higher OSNR values are required for smaller image sizes to achieve the same accuracy. We also find that if the input resolution is too small (56 × 56 pixels), the accuracy for QAM signals may fluctuate even at high OSNR values. This is because, at low resolution, many constellation points coincide in a single pixel, which makes it difficult for the CNN to learn the distinguishing characteristics.


Fig. 8 Identification accuracy for input images with resolutions of 56 × 56, 112 × 112, and 224 × 224 pixels; each resolution covers the six modulation formats.


The identification accuracy for image combinations generated from different numbers of symbols at different image sizes is also tested; the results are shown in Fig. 9. When the resolution is small (56 × 56 pixels), the identification accuracy is much lower than at higher resolutions, even with 25000 symbols. For higher resolutions (112 × 112 and 224 × 224 pixels), the identification accuracy stops increasing once the number of symbols reaches 20000, and the 224 × 224 images achieve better identification accuracy than the 112 × 112 ones.


Fig. 9 Identification accuracy for input images generated from different numbers of symbols (5000 to 20000 in steps of 5000) at different resolutions.


We also test the convergence rate for the three input resolutions above, using 20000 symbols. As Fig. 10 shows, each resolution converges to around its highest accuracy by epoch 50, and the higher resolutions achieve higher total accuracy at every epoch. If higher resolution is always better, why not use an input resolution of 448 × 448 pixels or even higher? The reason is training time: a CNN with a larger input takes longer to train, and training with 224 × 224 inputs already takes almost 4 times longer than with 112 × 112 inputs. The same holds for the time our online system needs to identify an image combination. We therefore choose 224 × 224 pixels as a reasonable resolution and 20000 as the number of symbols for the input images used to train the CNN and run the online system.


Fig. 10 The total accuracy at different epochs for input images with different resolutions of 56 × 56 pixels, 112 × 112 pixels, 224 × 224 pixels.


The influence of impairments such as residual chromatic dispersion (CD) and fiber nonlinearity on the proposed MFI system is discussed below.

CD has a static influence on the transmitted signals and can be compensated independently of the modulation format [21]. In practice, however, CD may not be compensated precisely, so residual CD usually remains in the system after compensation. We therefore evaluate the identification accuracy under residual CD over a range of −300 to 300 ps/nm for a 16QAM signal at 19 dB OSNR and a 64QAM signal at 25 dB OSNR.

From Fig. 11, the MFI system tolerates a wide range of residual CD: −130 to 140 ps/nm for PDM-16QAM at 19 dB OSNR and −250 to 260 ps/nm for PDM-64QAM at 25 dB OSNR. PDM-64QAM shows better tolerance than PDM-16QAM because lower-order modulation formats tend to be misclassified as higher-order ones when signal degradation makes the formats look alike in the 2D Stokes planes; this does not affect PDM-64QAM, which is the highest-order format in our analysis.


Fig. 11 Identification accuracy under residual CD values from −300 to 300 ps/nm for PDM-16QAM (19dB) signal and PDM-64QAM (25dB) signal.


In our numerical simulation system, the launched signal power is around 0 dBm. As shown in [22], the impact of fiber nonlinearities is acceptable when the launch power is 0 dBm and the fiber length is less than 670 km. Moreover, the OSNR degradation caused by fiber nonlinearities can be described by the following equation [23]:

\mathrm{OSNR} = \frac{P_{\mathrm{ch}}}{P_{\mathrm{ASE}} + P_{\mathrm{NID}}},

where Pch is the channel power, PASE is the ASE noise power, and PNID is the nonlinearity-induced distortion (NID) power.
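As an illustration of Eq. (3), the effective OSNR can be evaluated in the dB domain by summing the noise terms in linear units; the power values below are hypothetical, not measured:

```python
import math

def osnr_db(p_ch_dbm, p_ase_dbm, p_nid_dbm):
    """Effective OSNR per Eq. (3), with all powers given in dBm."""
    p_ch = 10 ** (p_ch_dbm / 10)
    p_noise = 10 ** (p_ase_dbm / 10) + 10 ** (p_nid_dbm / 10)
    return 10 * math.log10(p_ch / p_noise)

# Hypothetical example: without NID the OSNR would be 25 dB;
# NID at the same power as the ASE noise costs about 3 dB.
assert round(osnr_db(0, -25, -1000), 1) == 25.0
assert round(osnr_db(0, -25, -25), 1) == 22.0
```

In other words, nonlinearity-induced distortion acts like extra noise that lowers the effective OSNR, which is why a scheme that works at low OSNR is inherently tolerant of it.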

Our results show that the proposed system achieves excellent performance at very low OSNR, especially for QAM signals. Consequently, the system has a strong tolerance against fiber nonlinearities.

5. Conclusion

This paper proposed a lightweight CNN-based MFI scheme that converts modulation format identification into an image classification problem. Received signals are used to generate images in the 2D Stokes planes for each modulation format, and the generated images are fed into the CNN for training. Six modulation formats were addressed: PDM-BPSK, PDM-QPSK, PDM-8PSK, PDM-16QAM, PDM-32QAM, and PDM-64QAM. Benefiting from the nature of image classification, the proposed scheme can identify an arbitrary number of modulation formats without changing the CNN architecture. The proposed MFI scheme greatly improves the accuracy of modulation format identification over traditional MFI schemes. In the future, we hope to include more kinds of optical impairments to test the tolerance of our scheme, and to extend the method to other optical performance monitoring tasks such as OSNR, CD, and PMD estimation.

Funding

National Natural Science Foundation of China (NSFC) (61571057, 61575082, 61527820).

References and links

1. Q. Zhuge, M. Morsy-Osman, X. Xu, M. Chagnon, M. Qiu, and D. V. Plant, “Spectral Efficiency-Adaptive Optical Transmission Using Time Domain Hybrid QAM for Agile Optical Networks,” J. Lightwave Technol. 31(15), 2621–2628 (2013). [CrossRef]

2. A. Nag, M. Tornatore, and B. Mukherjee, “Optical Network Design With Mixed Line Rates and Multiple Modulation Formats,” J. Lightwave Technol. 28(4), 466–475 (2010). [CrossRef]  

3. Z. Zhang and C. Li, “Hitless Multi-rate Coherent Transceiver,” in Advanced Photonics 2015, OSA Technical Digest (online) (Optical Society of America, 2015), SpS3D.2.

4. A. K. Nandi and E. E. Azzouz, “Automatic analogue modulation recognition,” Signal Process. 46(2), 211–222 (1995). [CrossRef]  

5. M. Xiang, Q. Zhuge, M. Qiu, X. Zhou, F. Zhang, M. Tang, D. Liu, S. Fu, and D. V. Plant, “Modulation format identification aided hitless flexible coherent transceiver,” Opt. Express 24(14), 15642–15655 (2016). [CrossRef]   [PubMed]  

6. R. Boada, R. Borkowski, and I. T. Monroy, “Clustering algorithms for Stokes space modulation format recognition,” Opt. Express 23(12), 15521–15531 (2015). [CrossRef]   [PubMed]  

7. L. Cheng, L. Xi, D. Zhao, X. Tang, W. Zhang, and X. Zhang, “Improved modulation format identification based on Stokes parameters using combination of fuzzy c-means and hierarchical clustering in coherent optical communication system,” Chin. Opt. Lett. 13(10), 100604 (2015). [CrossRef]  

8. P. Isautier, A. Stark, K. Mehta, R. de Salvo, and S. E. Ralph, “Autonomous Software-Defined Coherent Optical Receivers,” in Optical Fiber Communication Conference/National Fiber Optic Engineers Conference 2013, OSA Technical Digest (online) (Optical Society of America, 2013), paper OTh3B.4. [CrossRef]  

9. P. Isautier, J. Pan, R. DeSalvo, and S. E. Ralph, “Stokes Space-Based Modulation Format Recognition for Autonomous Optical Receivers,” J. Lightwave Technol. 33(24), 5157–5163 (2015). [CrossRef]  

10. J. Liu, Z. Dong, K. P. Zhong, A. P. T. Lau, C. Lu, and Y. Lu, “Modulation Format Identification Based on Received Signal Power Distributions for Digital Coherent Receivers,” in Optical Fiber Communication Conference, OSA Technical Digest (online) (Optical Society of America, 2014), paper Th4D.3. [CrossRef]  

11. L. Jiang, L. Yan, A. Yi, Y. Pan, T. Bo, M. Hao, W. Pan, and B. Luo, “Blind density-peak-based modulation format identification for elastic optical networks,” J. Lightwave Technol. 36, 2850–2858 (2018).

12. S. M. Bilal, G. Bosco, Z. Dong, A. P. T. Lau, and C. Lu, “Blind modulation format identification for digital coherent receivers,” Opt. Express 23(20), 26769–26778 (2015). [CrossRef]   [PubMed]  

13. M. Xiang, Q. Zhuge, M. Qiu, X. Zhou, M. Tang, D. Liu, S. Fu, and D. V. Plant, “RF-pilot aided modulation format identification for hitless coherent transceiver,” Opt. Express 25(1), 463–471 (2017). [CrossRef]   [PubMed]  

14. D. Wang, M. Zhang, J. Li, Z. Li, J. Li, C. Song, and X. Chen, “Intelligent constellation diagram analyzer using convolutional neural network-based deep learning,” Opt. Express 25(15), 17150–17166 (2017). [CrossRef]   [PubMed]  

15. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation,” arXiv preprint arXiv:1801.04381 (2018).

16. A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861 (2017).

17. L. Sifre and S. Mallat, “Rigid-motion scattering for image classification,” Ph.D. thesis (2014).

18. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778. [CrossRef]

19. X. Mai, J. Liu, X. Wu, Q. Zhang, C. Guo, Y. Yang, and Z. Li, “Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers,” Opt. Express 25(3), 2038–2050 (2017). [CrossRef]   [PubMed]  

20. N. J. Muga and A. N. Pinto, “Adaptive 3-D Stokes Space-Based Polarization Demultiplexing Algorithm,” J. Lightwave Technol. 32(19), 3290–3298 (2014). [CrossRef]  

21. S. Savory, “Compensation of fibre impairments in digital coherent systems,” in 2008 34th European Conference on Optical Communication (2008), pp. 1–1.

22. B. Chomycz, Planning fiber optics networks (McGraw-Hill Education Group, 2009), Chap. 7.

23. H. G. Choi, J. H. Chang, H. Kim, and Y. C. Chung, “Nonlinearity-Tolerant OSNR Estimation Technique for Coherent Optical Systems,” in Optical Fiber Communication Conference, OSA Technical Digest (online) (Optical Society of America, 2015), W4D.2. [CrossRef]  
