Optica Publishing Group

Automated segmentation of knee menisci from magnetic resonance images by using ATTU-Net: a pilot study on small datasets

Open Access

Abstract

We proposed a neural network model trained with a small amount of meniscus data (only 144 MR images) to improve the segmentation performance of CNNs, such as U-Net, by overcoming the challenges caused by surrounding tissues. We trained and tested the proposed model on 204 T2-weighted MR images of the knee from 181 patients. The trained model provided excellent segmentation performance for lateral menisci, with a mean Dice similarity coefficient of 0.864 (range, 0.743-0.990; SD, ±0.077). The results were superior to those of recently published CNN-based meniscus segmentation methods.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The knee menisci are hydrated fibrocartilaginous soft tissues that are wedge-shaped in cross section, enable mobility, and absorb excess loads on the knee [1]. The menisci are an essential part of the human knee: in the last century, removal of the knee menisci was found to lead to articular cartilage degeneration and to have detrimental effects on knee joint biomechanics [2]. Deterioration and weakening of the meniscal structure can trigger osteoarthritis, one of the most common causes of disability, especially among young athletes and elderly people. Therefore, detecting meniscal lesions or injuries is extremely important for early diagnosis.

To this end, magnetic resonance imaging (MRI) is one of the techniques commonly used in clinical applications. Although arthroscopic examination is the gold standard for detecting meniscal lesions or injuries [3], magnetic resonance (MR) images provide more detailed information about the anatomical position of the menisci and surrounding tissues. In addition, MR scanning is a noninvasive and harmless procedure that causes minimal discomfort to patients. Compared with arthroscopic and clinical examination, the detection of meniscal lesions via MRI has a precision of 85% [4]. Currently, meniscus region segmentation is often performed manually. Manual segmentation represents the gold standard and is employed as the ground truth for evaluating automatic segmentation methods. However, it is a time-consuming and challenging task because the surrounding tissues have overlapping contrast behavior in MR images [5]. Thus, distinguishing and segmenting the menisci from MR images depends mainly on the expertise of the orthopedist. Furthermore, manual segmentation suffers from intra- and inter-observer variability in human MRI [6,7], which makes it difficult for orthopedists to reach a consensus.

To segment the menisci, researchers have proposed fully automatic methods since 2010. In accordance with a recent systematic review paper [8], these fully automatic methods are divided into two kinds: statistical shape model (SSM)- and convolutional neural network (CNN)-based segmentation. Nevertheless, few studies have been conducted using SSM methods. Paproki et al. introduced their automated segmentation method by combining SSMs and active shape models. The accuracy of segmentation was 78.3% for medial menisci and 83.9% for lateral menisci [9]. In a continuation of this work, the same authors proposed their improved segmentation method for 2D and 3D turbo spin echo (TSE) MR images [10]. However, the segmentation accuracy was improved by only 1.2%-6% (84.3% for medial and 85.1% for lateral) for 3D sequences. Importantly, the segmentation accuracy became lower (76.4%) than that in their previous study when employing the standard 2D TSE MR images.

In recent years, the use of deep learning methods, particularly CNN-based meniscus segmentation algorithms, has increased tremendously. Tack et al. proposed a fully automated method in which menisci were segmented using a combination of CNN and SSM [11]. The neural network consisted of two U-Net architectures (2D and 3D U-Net), heavily inspired by the original U-Net architecture [12]. Their evaluation of segmentation accuracy on 88 MRI datasets from the Osteoarthritis Initiative (OAI) led to a mean Dice similarity coefficient (DSC) of 83.14% for medial menisci and 88.25% for lateral menisci. The segmentation time per pair of menisci was reported as 1.5 min. Norman et al. presented a U-Net-based meniscus segmentation approach using two kinds of MRI sequences [13]. The MRI volumes of 628 patients from the OAI dataset were employed. Each volume consisted of 160 slices with a thickness of 0.7 mm. Therefore, a large amount of image data, at least 53.3% of the entire dataset, was needed to train the neural network. Similarly, Byra et al. proposed a fully automated method for knee meniscus segmentation using an attention U-Net architecture [14]. Compared with study [13], fewer MR images (61 patients) were employed, but the authors produced 2748 images through image rotation and horizontal flipping prior to the training process. The segmentation accuracy varied from 83.1% to 87.2% for medial menisci and from 80.5% to 84.7% for lateral menisci. Nonetheless, the segmentation time was not given in this study. Gaj et al. used a conditional generative adversarial network (CGAN) in combination with U-Net to improve the performance of cartilage and meniscus segmentation [15]. Although the number of patients and the MRI sequence were the same as those in [9] and [11], CGAN still showed higher segmentation accuracy: 89.5% for lateral menisci and 87.38% for medial menisci.
The segmentation time of approximately 1 min was highly competitive. Ölmez et al. proposed a regions-with-CNN-features (R-CNN) approach for meniscus region detection and segmentation using two different MRI sequences [16]. The aim of using two different image sequences was to separate the meniscus tissue from the surrounding tissues: the surrounding tissues have different intensity values in the two sequences, whereas the pixels of the meniscus tissue show similar intensity values. The automatic segmentation accuracy on 10 individuals' MRI showed a high mean DSC of 88.9% for overall menisci. However, the segmentation results presented in that study were unsatisfactory because some parts of the meniscus tissue were unintentionally deleted during the filtration process. Although the deleted meniscus parts were restored through morphological reconstruction, major "holes" remained in the final meniscus region. Furthermore, the paper did not show the ground truth masks segmented by an expert, making the segmentation results unconvincing.

The meniscus segmentation methods based on SSM and CNN are summarized in Table 1 in descending order of publication year. Most researchers have focused on the OAI, an open-access public-domain research database in which the MRI scans were acquired by a high-field (3-Tesla) scanner in the water-excited double-echo steady-state (weDESS) format. We attribute this to two factors. First, the OAI is not only the largest database of MR images and biomarkers for osteoarthritis research but also provides already-segmented data for use as ground truth. Second, compared with other MR image formats, the weDESS sequence produces a higher contrast difference between the menisci and the surrounding tissues [17], allowing better segmentation results. Thus, in study [13], the segmentation quality of weDESS was higher than that of the SPGR T1ρ sequence. However, the weDESS sequence is only available on Siemens scanners, which limits its usage on other scanners. That is, because these studies were performed using only the weDESS sequence, their success on different excitation sequences is unknown [18], and further validation is required to assess the applicability of these methods [9]. Furthermore, the weDESS sequence is not commonly used in practical clinical applications, considering that 2D MRI can meet the needs of common diagnoses.

Table 1. Comparison of meniscus segmentation methods based on SSM and CNN proposed since 2010.

In this paper, we proposed a customized network model to improve the meniscus segmentation accuracy of the original U-Net by using clinical MR images. The main contributions of our study are as follows:

  • 1. This was a pioneering study using the smallest amount of training data (only 144 MR slices) among contemporarily published journal articles on knee menisci.
  • 2. The T2-weighted MR sequence, one of the clinically common sequences, was used for the first time in a meniscus segmentation study; it was rarely used in previous meniscus studies because of its low tissue contrast.
  • 3. To reduce false-positive predictions in U-Net, we utilized modified attention gates, embedded in the standard U-Net architecture to improve the prediction accuracy for the menisci.

The rest of this paper is organized as follows:

In Section 2, we describe the details of the proposed ATTU-Net. The experiments and results are presented in Section 3, followed by the discussion and conclusions in Sections 4 and 5, respectively.

2. Methodology

2.1 Architecture of ATTU-Net

U-Net was initially proposed to segment neuronal structures by training an end-to-end fully convolutional network on electron microscopy images [12]. The proposed ATTU-Net for meniscus segmentation was based on the standard U-Net, and the entire architecture used in our study is shown in Fig. 1. For each block, the number of filters is depicted on top of the block. A 2D transposed convolutional block with a kernel size of 2 × 2 and a stride of 2 × 2 was used for upsampling. Except for the first and last blocks, the convolutional blocks used the rectified linear unit as the activation function and 3 × 3 convolutional filters. The first block used a 1 × 1 convolutional filter with no activation function. The last block used the sigmoid activation function, which is suitable for binary classification.
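As a minimal PyTorch sketch of the blocks just described (the channel counts are illustrative assumptions, not the exact values in Fig. 1):

```python
import torch
import torch.nn as nn

# Building blocks of ATTU-Net, with assumed channel counts.
double_conv = nn.Sequential(  # 3x3 convolutions with ReLU, as in standard U-Net
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
)
upsample = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)  # 2x2 transposed conv, stride 2
first_block = nn.Conv2d(1, 64, kernel_size=1)  # 1x1 convolution, no activation
last_block = nn.Sequential(nn.Conv2d(64, 1, kernel_size=1), nn.Sigmoid())  # sigmoid output
```

The transposed convolution with kernel 2 × 2 and stride 2 doubles the spatial resolution at each decoder stage, while the sigmoid head maps each pixel to a meniscus probability for binary segmentation.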

Fig. 1. Entire architecture of ATTU-Net for meniscus segmentation.

To reduce false-positive predictions in U-Net, attention gates [19] were incorporated into the standard U-Net architecture to process the feature maps propagated via skip connections, as shown in Fig. 1. The schematic of the attention gates is shown in Fig. 2. The attention gates let the network focus on particular regions in the feature maps instead of analyzing the entire image representation. In addition, we utilized batch normalization [20] in this structure to accelerate gradient descent.

Fig. 2. Illustration of attention gates.
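As a hedged sketch, an attention gate of this kind can be written in PyTorch as follows. It follows the additive attention formulation of Oktay et al. [19] with the batch normalization mentioned above; the channel counts and the assumption that the gating signal and skip features share a spatial size are ours, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the spirit of [19], with batch normalization.

    Assumes the gating signal g and skip-connection features x have been
    brought to the same spatial size; channel counts are illustrative.
    """

    def __init__(self, gate_ch: int, skip_ch: int, inter_ch: int):
        super().__init__()
        # 1x1 convolutions project both inputs into a common intermediate space
        self.w_g = nn.Sequential(nn.Conv2d(gate_ch, inter_ch, 1), nn.BatchNorm2d(inter_ch))
        self.w_x = nn.Sequential(nn.Conv2d(skip_ch, inter_ch, 1), nn.BatchNorm2d(inter_ch))
        # psi collapses the sum to a single-channel attention map in [0, 1]
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.BatchNorm2d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        alpha = self.psi(self.relu(self.w_g(g) + self.w_x(x)))  # attention coefficients
        return x * alpha  # reweight the skip features before concatenation
```

The gated output replaces the raw skip-connection features that the standard U-Net would concatenate in the decoder, which is how the gates suppress responses from irrelevant surrounding tissue.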

2.2 Database creation

We retrospectively collected 204 T2-weighted MR slices/frames from 181 patients who underwent knee MRI examinations between 2020 and 2021 at our orthopedic rehabilitation centers in China. The dataset comprised images of healthy individuals and/or patients with knee joint problems. Whole-knee imaging was performed using T2-weighted sequences on a GE 3.0-Tesla scanner. On the basis of our clinical research needs, only the lateral menisci were scanned, with a sagittal interval of 4 mm. The MR images and reports used for the database were anonymized by removal of personal information.

Given that orthopedists need considerable time to label the meniscus region on MR images, obtaining large high-quality datasets manually may be difficult. Therefore, the main difference between our study and other automated meniscus segmentation studies is the pilot application of a small amount of training data. For each MR sequence, we chose only one sagittal image (one slice) from each of 180 patients, and one additional MR sequence containing 24 images was used for 3D reconstruction of the menisci. Thus, the database contained 204 images from 181 knees in total. The image size was 512 × 512 pixels. The images of the 181 knees were randomly split in a 70:30 ratio for training and testing. In the training set, the meniscus region was manually segmented by a professional orthopedist with more than 10 years of experience and was considered the ground truth. The test set comprised raw MR images without any annotations and was used for evaluating our models. The datasets used in this work are summarized in Table 2.
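A patient-level random split along the lines described above might look like this (the seed value is a hypothetical choice for reproducibility, not from the paper):

```python
import random

def split_patients(patient_ids, train_frac=0.7, seed=42):
    """Shuffle patient IDs and split them into training and test sets.

    train_frac=0.7 mirrors the 70:30 ratio used in this study; the seed
    is an assumed value chosen only for reproducibility.
    """
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle of a copy
    n_train = round(train_frac * len(ids))
    return ids[:n_train], ids[n_train:]
```

Splitting at the patient level (rather than the slice level) avoids leaking slices of the same knee into both sets.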

Table 2. Summary of the used MRI datasets.

2.3 Implementation settings

The implementation of ATTU-Net was based on the PyTorch framework. The training and testing experiments were performed on an ASUS TUF RTX 3090 GPU with 24 GB of memory. The RMSProp algorithm was used to optimize the entire training process [21]. The initial learning rate was set to 2 × 10−6, the weight decay to 1 × 10−8, and the momentum to 0.9. ATTU-Net and U-Net were trained with a batch size of eight.
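The reported training configuration maps directly onto PyTorch's RMSprop optimizer; the model below is a stand-in placeholder, not the actual ATTU-Net:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # placeholder for ATTU-Net

# Hyperparameters as reported in Section 2.3
optimizer = torch.optim.RMSprop(
    model.parameters(),
    lr=2e-6,           # initial learning rate
    weight_decay=1e-8,
    momentum=0.9,
)
```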

2.4 Evaluation metrics

To evaluate the overlap between the ground truth and the predicted segmentation masks, we adopted the DSC [22], a widely used evaluation metric in the field of medical image segmentation, to measure the meniscus segmentation accuracy throughout the article. Mathematically, the formula can be expressed as

$$DSC = \frac{{2|{GT \cap PS} |}}{{|{GT} |+ |{PS} |}},$$
where region GT refers to the ground truth, and region PS refers to the predicted segmentation results. This coefficient ranges from 0, meaning no overlap, to 1, indicating a complete overlap between the two regions.
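As a small sketch, the DSC of Eq. (1) can be computed for two binary masks as follows (the empty-mask convention is our assumption; the paper does not state one):

```python
import numpy as np

def dice_coefficient(gt, ps):
    """Dice similarity coefficient between two binary masks (Eq. 1)."""
    gt = np.asarray(gt, dtype=bool)
    ps = np.asarray(ps, dtype=bool)
    denom = gt.sum() + ps.sum()
    if denom == 0:
        return 1.0  # assumed convention: two empty masks count as perfect overlap
    return 2.0 * np.logical_and(gt, ps).sum() / denom
```

For example, masks of sizes 2 and 1 pixels overlapping in 1 pixel give 2·1/(2+1) = 2/3.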

In the training process of meniscus segmentation, we employed binary cross entropy as the loss function. It can be defined as

$$L_{BCE} = -\frac{1}{N}\sum_{i = 1}^{N} \left[ y_i\log ( \dot{y}_i ) + ( 1-y_i )\log ( 1-\dot{y}_i ) \right],$$
where $y_i \in Y$, $\dot{y}_i \in \dot{Y}$; $Y$ stands for the ground truth, $\dot{Y}$ stands for the prediction, and $N$ indicates the number of image pixels.
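A NumPy sketch of Eq. (2); the small epsilon is our addition, a standard numerical safeguard rather than part of the formula:

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy loss of Eq. (2), averaged over all pixels.

    eps clips predictions away from 0 and 1 so log() stays finite;
    it is an assumed safeguard, not part of the paper's formula.
    """
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.clip(np.asarray(y_pred, dtype=float).ravel(), eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```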

3. Results and discussion

3.1 Comparison with U-Net segmentation

Meniscus segmentation on T2-weighted MR images is a challenging task, mainly for two reasons: (1) the surrounding tissues show a signal similar to that of the menisci on a single MRI sequence, such that even professional orthopedists sometimes can hardly distinguish them; and (2) T2-weighted MR images have low tissue contrast. Although U-Net has already achieved satisfactory results in medical image segmentation, higher segmentation accuracy is still needed in real clinical applications. Accordingly, we compared ATTU-Net with U-Net on meniscus segmentation to verify whether ATTU-Net is effective in dealing with these challenges.

After training, the best-performing U-Net and ATTU-Net models were evaluated in terms of DSC on the test set, which contained 36 MR slices from 36 knees. The bottom of Table 3 shows the meniscus segmentation results of U-Net and the proposed ATTU-Net. ATTU-Net achieved an approximately 1.2% higher mean DSC, reaching 86.45%; by contrast, the mean DSC of U-Net was 85.24%. A paired-sample t-test showed a significant difference (p = 0.0247) between the segmentation results of U-Net and ATTU-Net, as depicted in Fig. 3.

Fig. 3. Distribution of test-set Dice similarity coefficients. + indicates the mean of the data set; * indicates a statistically significant difference (p < 0.05).

Table 3. Comparison of meniscus segmentation results in other recent studies. The results are given as mean ± standard deviation where the standard deviation is available. N/A indicates that the value is not reported in the literature. The best result in each column is highlighted in bold font.

Table 3 also lists meniscus segmentation results obtained by other methods over the past 10 years. ATTU-Net obtained the best maximum DSC while using the smallest amount of training data, and its mean DSC was close to the best segmentation results. Note that the training-set sizes summarized in Table 3 were calculated in terms of 2D frames from the MRI sequences; some related studies use the word "image" to mean one volume of an MRI scan, which includes 160 frames. Based on the mean and standard deviation, Fig. 4 shows the DSC comparison between ATTU-Net and other methods from the past 10 years.

Fig. 4. Comparison of the average Dice similarity coefficients of different segmentation methods. * indicates a statistically significant difference (p < 0.05), *** indicates a statistically significant difference (p < 0.001), and ns indicates no significant difference.

3.2 Meniscus segmentation of random MR slices

In this subsection, we analyze the visualization results on the test set. Figure 5 shows a comparison of meniscus segmentation results. ATTU-Net handled the details better: its segmented shape was as sharp as the ground truth and sharper than that of U-Net. In addition, a false positive was detected in slice 2# by U-Net, as indicated by the white arrow.

Fig. 5. Visualization of meniscus segmentation results for three slices from the test set: (a) original image, (b) ground truth, (c) segmentation result of U-Net, and (d) segmentation result of our method. Considering that the meniscus is small, we show a magnified local view for clear comparison. The image size is 256 × 256 pixels.

Fig. 6. Visualization of the meniscus segmentation results of U-Net in a knee MRI sequence. The image size here is 256 × 144 pixels for clear observation. Each red cross marks a nearby false positive.

3.3 Meniscus segmentation of a knee MRI sequence

An orthopedist generally considers more than one slice when making a preclinical assessment on MRI. On this basis, we used the trained model to segment the meniscus region in a full MRI sequence. The additional sequence contained 24 frames with an original size of 512 × 512 pixels. Figures 6 and 7 show the predicted results of U-Net and ATTU-Net, respectively. ATTU-Net still outperformed U-Net on details. Several false positives, each a small spurious region, were extracted by U-Net, as indicated by the red crosses in Fig. 6 (slices 5#, 6#, 20#, and 23#). By contrast, ATTU-Net did not detect any wrong region. Furthermore, in slice 3# of Fig. 7, ATTU-Net predicted an additional true-positive region that U-Net missed.

Fig. 7. Visualization of the meniscus segmentation results of ATTU-Net in a knee MRI sequence. The image size here is 256 × 144 pixels for clear observation. The red circle marks a nearby true positive.

Figure 8 shows the 3D point distribution of the menisci of one individual, based on the segmented results of the MRI sequence. The sagittal positions were assigned with an interval of 4 mm; that is, the Z value of the first frame was set to 0, that of the second frame to 4 mm, and so on. As depicted in Fig. 8(a), the whole menisci were segmented well. Orthopedists can evaluate the meniscus thickness from the side view shown in Fig. 8(b).
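The 3D point construction described above can be sketched as follows; the 4 mm sagittal interval matches the paper, while the in-plane pixel spacing of 1 mm is an assumed placeholder:

```python
import numpy as np

def masks_to_points(masks, slice_spacing_mm=4.0, pixel_spacing_mm=1.0):
    """Stack per-slice binary masks into an (N, 3) array of (x, y, z) points.

    Slice k is placed at z = k * slice_spacing_mm, so the first frame sits
    at z = 0 and the second at 4 mm, matching the construction in the text.
    """
    points = []
    for k, mask in enumerate(masks):
        rows, cols = np.nonzero(mask)                  # pixel coordinates of the meniscus
        z = np.full(rows.shape, k * slice_spacing_mm)  # constant depth per slice
        points.append(np.column_stack([cols * pixel_spacing_mm,
                                       rows * pixel_spacing_mm, z]))
    return np.vstack(points) if points else np.empty((0, 3))
```

The resulting point cloud can be plotted in isometric and side views as in Fig. 8 to inspect meniscus shape and thickness.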

Fig. 8. 3D point distribution of the menisci from an individual: (a) the isometric view and (b) the side view of the menisci.

4. Discussion

We have presented an efficient small-dataset-based approach to knee meniscus segmentation in T2-weighted MR images. The experimental results showed that the proposed approach outperformed U-Net and segmented menisci with accuracy comparable to the best results obtained in recent studies.

Only attention gates were incorporated into the standard U-Net architecture, distinctly reducing false-positive predictions. No backbone, such as VGGNet [24], ResNet [25], or MobileNet [26], exists in our network; thus, the number of trainable parameters and the computational complexity are lower. In addition, ATTU-Net does not require training multiple models [27] or a large number of extra model parameters, as in the use of transfer learning [16]. Accuracy improvements over U-Net were experimentally observed even with a small amount of training data. The database creation in our study was also simple and fast: only one frame of each T2-weighted MR scan was selected. By contrast, study [14] had to break down the 3D MR volume into 2D images and then feed those images into the network for training, which was time-consuming. Furthermore, radiologists were asked to select the best possible images for annotation.

An evident drawback of our approach is the limited number of images in the test set; a larger test set would better verify the robustness of our approach. In future work, (1) we will extend the proposed model to other MR images of menisci, such as coronal and transverse MRI; (2) we will adopt more clinical images or open-access data sources to verify the effectiveness of ATTU-Net; and (3) a 3D meniscus model should be constructed and provided for orthopedists to calculate the volume or area of the medial and lateral menisci.

5. Conclusions

In this paper, we have proposed an application-oriented segmentation model for meniscus MR images, ATTU-Net, with enhanced segmentation accuracy. To the best of our knowledge, this is the first meniscus segmentation method based on T2-weighted MRI and the smallest training dataset. In comparison with the original U-Net and other U-Net-based architectures, the ATTU-Net model has few parameters and high efficiency because no backbone is used for feature-map extraction. We have evaluated ATTU-Net on clinical images and compared it with the original U-Net. The segmentation results demonstrated that ATTU-Net can effectively avoid false positives and improve the localization accuracy of the region of interest. In addition, the proposed segmentation model trained on only 144 MR slices achieved a Dice score similar to that of models trained on large annotated datasets. The proposed model can serve as a key step toward exploiting small datasets for clinical applications, and the segmentation approach can be used for computer-aided diagnosis by orthopedists, especially to help inexperienced clinicians improve clinical diagnoses.

Funding

Beijing Natural Science Foundation (3204039); National Natural Science Foundation of China (52005046, 52175452); Beijing Municipal Commission of Education (KM201911232021).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. C. Fithian, M. A. Kelly, and V. C. Mow, “Material properties and structure-function relationships in the menisci,” Clinical Orthopaedics and Related Research 252, 19–31 (1990). [CrossRef]  

2. T. J. Fairbank, “Knee joint changes after menisectomy,” The Journal of Bone and Joint Surgery. British volume 30-B(4), 664–670 (1948). [CrossRef]  

3. L. Nicolas, N. J. Francois, H. Serge, G. Antoine, K. Shahnaz, and B. Yoann, “A Current Review of the Meniscus Imaging: Proposition of a Useful Tool for Its Radiologic Analysis,” Radiology Research and Practice 2016, 1–25 (2016). [CrossRef]  

4. A. Jah, S. Keyhani, R. Zarei, and A. K. Moghaddam, “Accuracy of MRI in comparison with clinical and arthroscopic findings in ligamentous and meniscal injuries of the knee,” Acta Orthopaedica Belgica 71, 189 (2005).

5. K. Zhang, W. Lu, and P. Marziliano, “The unified extreme learning machines and discriminative random fields for automatic knee cartilage and meniscus segmentation from multi-contrast MR images,” Machine Vision and Applications 24(7), 1459–1472 (2013). [CrossRef]  

6. M. S. Swanson, J. W. Prescott, T. M. Best, K. Powell, R. D. Jackson, F. Haq, and M. N. Gurcan, “Semi-automated segmentation to assess the lateral meniscus in normal and osteoarthritic knees,” Osteoarthritis and Cartilage 18(3), 344–353 (2010). [CrossRef]  

7. P. A. Yushkevich, J. Piven, H. C. Hazlett, R. G. Smith, S. Ho, J. C. Gee, and G. Gerig, “User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability,” NeuroImage 31(3), 1116–1128 (2006). [CrossRef]  

8. M. M. Rahman, L. Dürselen, and A. M. Seitz, “Automatic Segmentation of Knee Menisci - A Systematic Review,” Artificial Intelligence in Medicine 105, 101849 (2020). [CrossRef]  

9. A. Paproki, C. Engstrom, S. S. Chandra, A. Neubert, J. Fripp, and S. Crozier, “Automated segmentation and analysis of normal and osteoarthritic knee menisci from magnetic resonance images – data from the Osteoarthritis Initiative,” Osteoarthritis and Cartilage 22(9), 1259–1270 (2014). [CrossRef]  

10. A. Paproki, C. Engstrom, M. Strudwick, K. J. Wilson, R. K. Surowiec, C. Ho, S. Crozier, and J. Fripp, “Automated T2-mapping of the Menisci From Magnetic Resonance Images in Patients with Acute Knee Injury,” Academic Radiology 24(10), 1295–1304 (2017). [CrossRef]  

11. A. Tack, A. Mukhopadhyay, and S. Zachow, “Knee Menisci Segmentation using Convolutional Neural Networks: Data from the Osteoarthritis Initiative,” Osteoarthritis and Cartilage 26(5), 680–688 (2018). [CrossRef]  

12. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” International Conference on Medical Image Computing and Computer-Assisted Intervention 9351, 234–241 (2015).

13. B. Norman, V. Pedoia, and S. Majumdar, “Use of 2D U-Net Convolutional Neural Networks for Automated Cartilage and Meniscus Segmentation of Knee MR Imaging Data to Determine Relaxometry and Morphometry,” Radiology 288(1), 177–185 (2018). [CrossRef]  

14. M. Byra, M. Wu, X. Zhang, H. Jang, Y.-J. Ma, E. Y. Chang, S. Shah, and J. Du, “Knee menisci segmentation and relaxometry of 3D ultrashort echo time cones MR imaging using attention U-Net with transfer learning,” Magn. Reson. Med. 83(3), 1109–1122 (2020). [CrossRef]  

15. S. Gaj, M. Yang, K. Nakamura, and X. Li, “Automated cartilage and meniscus segmentation of knee MRI with conditional generative adversarial networks,” Magn. Reson. Med. 84(1), 437–449 (2020). [CrossRef]  

16. E. Ölmez, V. Akdoğan, M. Korkmaz, and E. R. Orhan, “Automatic Segmentation of Meniscus in Multispectral MRI Using Regions with Convolutional Neural Network (R-CNN),” J Digit Imaging 33(4), 916–929 (2020). [CrossRef]  

17. J. Fripp, P. Bourgeat, C. Engstrom, S. Ourselin, S. Crozier, and O. Salvado, “Automated segmentation of the menisci from MR images,” 2009 IEEE international symposium on biomedical imaging: from nano to macro, 510–513 (2009).

18. A. Saygili and S. Varlı Albayrak, “Knee Meniscus Segmentation and Tear Detection from MRI: A Review,” Curr. Med. Imaging Rev. 16(1), 2–15 (2020). [CrossRef]  

19. O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz, B. Glocker, and D. Rueckert, “Attention U-Net: Learning Where to Look for the Pancreas,” arXiv:1804.03999, (2018).

20. S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” JMLR.org (2015).

21. S. Ruder, “An overview of gradient descent optimization algorithms,” CoRR arxiv.org/abs/1609.04747, (2016).

22. L. R. Dice, “Measures of the Amount of Ecologic Association Between Species,” Ecology 26(3), 297–302 (1945). [CrossRef]  

23. A. Saygılı and S. Albayrak, “A new computer-based approach for fully automated segmentation of knee meniscus from magnetic resonance images,” Biocybernetics and Biomedical Engineering 37(3), 432–442 (2017). [CrossRef]  

24. K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv e-prints arXiv:1409.1556 (2014).

25. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

26. A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” arXiv:1704.04861, (2017).

27. H. Ma, Y. Zou, and P. X. Liu, “MHSU-Net: A more versatile neural network for medical image segmentation,” Computer Methods and Programs in Biomedicine 208, 106230 (2021). [CrossRef]  
