A deep learning method for real-time intraoperative US image segmentation in prostate brachytherapy
Title | A deep learning method for real-time intraoperative US image segmentation in prostate brachytherapy |
Type de publication | Journal Article |
Year of Publication | 2020 |
Authors | Girum KB, Lalande A, Hussain R, Crehange G |
Journal | INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY |
Volume | 15 |
Pagination | 1467-1476 |
Date Published | SEP |
Type of Article | Article |
ISSN | 1861-6410 |
Keywords | Convolutional neural networks, Image segmentation, Image-guided brachytherapy, Intraoperative, Shape models, Transrectal ultrasound |
Abstract | Purpose: This paper addresses the detection of the clinical target volume (CTV) in intraoperative transrectal ultrasound (TRUS) images for permanent prostate brachytherapy. Developing a robust, automatic method to detect the CTV on intraoperative TRUS images is clinically important for faster and more reproducible interventions, benefiting both the clinical workflow and patient health. Methods: We present a multi-task deep learning method for automatic prostate CTV boundary detection in intraoperative TRUS images that leverages both low-level and high-level (prior shape) information. Our method includes a channel-wise feature calibration strategy for low-level feature extraction and learning-based prior knowledge modeling for prostate CTV shape reconstruction. It reconstructs the CTV shape from automatically sampled boundary surface coordinates (pseudo-landmarks) to detect low-contrast and noisy regions along the prostate boundary, while remaining less biased by shadowing, inherent speckle, and artifact signals from the needle and implanted radioactive seeds. Results: The proposed method was evaluated on a clinical database of 145 patients who underwent permanent prostate brachytherapy under TRUS guidance. Our method achieved a mean accuracy of 0.96 +/- 0.01 and a mean surface distance error of 0.10 +/- 0.06 mm. Extensive ablation and comparison studies show that our method outperformed previous deep learning-based methods by more than 7% in the Dice similarity coefficient and reduced the 3D Hausdorff distance error by 6.9 mm. Conclusion: Our study demonstrates the potential of shape model-based deep learning methods for efficient and accurate CTV segmentation in ultrasound-guided interventions. Moreover, learning both low-level features and prior shape knowledge with channel-wise feature calibration can significantly improve the performance of deep learning methods in medical image segmentation. |
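The abstract reports segmentation quality with the Dice similarity coefficient and the Hausdorff distance. As background for these metrics (not the authors' implementation; the toy masks and function names below are illustrative assumptions), a minimal NumPy sketch:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between point sets of shape (N, D) and (M, D)."""
    # Pairwise Euclidean distances via broadcasting
    diff = points_a[:, None, :] - points_b[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    # Max over each directed nearest-neighbor distance
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy 2D masks standing in for a predicted segmentation and its ground truth
pred = np.zeros((8, 8), dtype=np.uint8)
target = np.zeros((8, 8), dtype=np.uint8)
pred[2:6, 2:6] = 1     # 4 x 4 square
target[2:6, 2:7] = 1   # 4 x 5 square, one column wider

dsc = dice_coefficient(pred, target)                               # 2*16 / (16+20) ~ 0.889
hd = hausdorff_distance(np.argwhere(pred), np.argwhere(target))    # 1.0 (one pixel apart)
```

In practice a clinical evaluation would run these on 3D volumes and report distances in millimetres by scaling pixel offsets with the voxel spacing.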
DOI | 10.1007/s11548-020-02231-x |
Early Access Date | JUL 2020 |