Retinal Fluid Segmentation Using Ensembled 2-Dimensionally and 2.5-Dimensionally Deep Learning Networks
Title | Retinal Fluid Segmentation Using Ensembled 2-Dimensionally and 2.5-Dimensionally Deep Learning Networks |
Type de publication | Journal Article |
Year of Publication | 2020 |
Authors | Alsaih K, Yusoff MZuki, Faye I, Tang TBoon, Meriaudeau F |
Journal | IEEE ACCESS |
Volume | 8 |
Pagination | 152452-152464 |
Type of Article | Article |
ISSN | 2169-3536 |
Keywords | 2.5D networks, Deep learning, ensembled networks, Retinal diseases, SD-OCT segmentation |
Abstract | Morphological changes in the retina related to different diseases are currently extensively researched. Manual segmentation of retinal fluids is time-consuming and subject to variability, underscoring the demand for robust automatic segmentation methods. Optical coherence tomography (OCT) is currently the standard modality for assessing the presence and volume of retinal fluids. In this study, deep semantic segmentation networks were examined in 2.5D and ensembled with 2D networks. This analysis aims to show how these networks perform when given depth context rather than a single B-scan alone, and the effect of 2.5D patches when fed to the deep networks. All experiments were evaluated using public data from the RETOUCH challenge as well as the OPTIMA challenge and Duke datasets. The networks trained in 2.5D performed slightly better than 2D networks on all datasets. The best network achieved an average Dice similarity coefficient (DSC) of 0.867 on the RETOUCH dataset. On the Duke dataset, DeepLabv3+ outperformed the other networks in this study with a DSC of 0.80. Experiments showed more robust performance when networks were ensembled. Intraretinal fluid (IRF) was recognized better than the other fluids, with a DSC of 0.924. The DeepLabv3+ model outperformed all other networks on the RETOUCH challenge dataset with an average p-value of 0.03. The methods used in this study to distinguish retinal disorders outperform human performance and show results competitive with the teams that participated in both challenges. Stacking three consecutive B-scans into a single image incorporates partial depth information during training and yields more robust networks than 2D information alone. |
DOI | 10.1109/ACCESS.2020.3017449 |
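The 2.5D input construction described in the abstract (three consecutive B-scans stacked as a single image) can be sketched as follows. This is a minimal illustration assuming a NumPy SD-OCT volume of shape (num_bscans, H, W); the function name and edge-clamping behavior are illustrative assumptions, not taken from the paper's released code.

import numpy as np

def stack_2_5d(volume: np.ndarray, index: int) -> np.ndarray:
    """Return an (H, W, 3) image built from B-scans index-1, index, index+1.

    Hypothetical helper: stacks the neighboring slices along the channel
    axis so the network sees partial depth context around the central
    B-scan. Edge slices are clamped to reuse their nearest neighbor.
    """
    num_bscans = volume.shape[0]
    prev_idx = max(index - 1, 0)
    next_idx = min(index + 1, num_bscans - 1)
    return np.stack([volume[prev_idx], volume[index], volume[next_idx]], axis=-1)

# Example: a toy volume of 64 B-scans, each 496 x 512 pixels.
volume = np.random.rand(64, 496, 512).astype(np.float32)
sample = stack_2_5d(volume, index=0)  # edge slice duplicates its neighbor
print(sample.shape)                   # (496, 512, 3)

Because the stacked image has three channels, it can be fed to standard 2D segmentation backbones without architectural changes, which is what makes the 2.5D approach a drop-in extension of the 2D pipeline.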
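The abstract also reports more robust results when 2D and 2.5D networks are ensembled. The paper's exact fusion rule is not stated here; a common scheme, shown below purely as an assumed sketch, is to average the per-pixel class probabilities of the two networks before taking the argmax. The class layout (background plus the three RETOUCH fluids IRF, SRF, PED) is taken from the challenge setup.

import numpy as np

def ensemble_segmentation(probs_2d: np.ndarray, probs_25d: np.ndarray) -> np.ndarray:
    """Average two (H, W, num_classes) probability maps and return labels.

    Hypothetical fusion rule (simple probability averaging); the paper may
    use a different ensembling strategy.
    """
    averaged = (probs_2d + probs_25d) / 2.0
    return np.argmax(averaged, axis=-1)

# Toy example with 4 classes: background, IRF, SRF, PED.
h, w, c = 496, 512, 4
probs_2d = np.random.dirichlet(np.ones(c), size=(h, w))   # shape (h, w, c)
probs_25d = np.random.dirichlet(np.ones(c), size=(h, w))  # shape (h, w, c)
labels = ensemble_segmentation(probs_2d, probs_25d)
print(labels.shape)  # (496, 512)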