
6.4 Future Work

6.4.3 Training of the Deep Neural Networks

In this thesis, the data material was split into three data sets: training, validation, and test. K-fold cross-validation is an alternative method that could give a more robust and less biased estimate of model performance [33]. Transfer learning is another technique that might be worth exploring: a DNN pre-trained on a large data set could be further trained (fine-tuned) on the LGE-CMR images.
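The K-fold idea can be sketched as follows. This is a minimal illustration, not the thesis implementation: the fold count of 5 and the helper name `kfold_splits` are hypothetical, and the sketch assumes splitting at the patient level so that all images from one patient stay in the same fold, avoiding leakage between training and validation.

```python
# Hypothetical sketch of patient-level K-fold cross-validation.
import random

def kfold_splits(ids, k, seed=0):
    """Shuffle ids and partition them into k folds.

    Yields (train_ids, val_ids) pairs, one per fold. Splitting at the
    patient level keeps all images of one patient in the same fold.
    """
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [pid for j in range(k) if j != i for pid in folds[j]]
        yield train, val

patient_ids = range(214)  # e.g. the 214 training patients
splits = list(kfold_splits(patient_ids, k=5))

assert len(splits) == 5
# every patient appears in exactly one validation fold
assert sorted(pid for _, val in splits for pid in val) == list(range(214))
```

Each fold would then train and validate one model instance; the K validation scores are averaged to estimate performance.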

Chapter 7

Conclusion

The objective of this thesis was to propose a method for automatic myocardial segmentation in LGE-CMR images. The developed method used an FCNN architecture trained end-to-end on a training set of 2006 images and masks from 214 patients affected by MI.

Experiments with two different approaches were performed. In the first, the DNN was trained with masks of the whole myocardium; in the second, it was trained with masks distinguishing healthy myocardium from myocardial scar tissue.

The best model was obtained with binary segmentation. It achieved a mean Dice score of 0.705 (standard deviation 0.15) and a mean Jaccard index of 0.560 (standard deviation 0.16), evaluated on 244 images from 30 patients affected by myocardial infarction.
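The two reported overlap metrics follow the standard definitions, Dice = 2|A∩B|/(|A|+|B|) and Jaccard = |A∩B|/|A∪B|. A minimal sketch, with binary masks represented as sets of pixel coordinates chosen purely for illustration:

```python
# Standard overlap metrics on binary masks given as sets of pixel coordinates.
def dice(a, b):
    """Dice score: 2|A ∩ B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard index: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

pred = {(0, 0), (0, 1), (1, 0)}   # toy predicted myocardium pixels
truth = {(0, 0), (0, 1), (1, 1)}  # toy ground-truth pixels

assert dice(pred, truth) == 2 * 2 / 6   # 2 shared pixels, 3 + 3 total
assert jaccard(pred, truth) == 2 / 4    # 2 shared pixels, 4 in the union
```

Both metrics range from 0 (no overlap) to 1 (perfect agreement), with Jaccard always at or below Dice for the same pair of masks.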

When evaluating the obtained results in this thesis, it is considered that the use of DNN for myocardial segmentation is a promising method worth exploring further.

Bibliography

[1] World Health Organization. Cardiovascular diseases (CVDs). 2017. URL: https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds) (accessed April 30, 2019).

[2] Kristian Thygesen et al. “Fourth universal definition of myocardial infarction (2018)”. In: European Heart Journal 40.3 (Aug. 2018), pp. 237–269. ISSN: 0195-668X. DOI: 10.1093/eurheartj/ehy462. URL: https://doi.org/10.1093/eurheartj/ehy462.

[3] Blausen Medical Communications, Inc. Myocardial Infarction or Heart Attack. URL: https://commons.wikimedia.org/wiki/File:Blausen_0463_HeartAttack.png (accessed May 9, 2019).

[4] Qian Yue et al. “Cardiac Segmentation from LGE MRI Using Deep Neural Network Incorporating Shape and Spatial Priors”. In: arXiv preprint arXiv:1906.07347 (2019).

[5] O. Ronneberger, P. Fischer, and T. Brox. “U-Net: Convolutional Networks for Biomedical Image Segmentation”. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI). Vol. 9351. LNCS. (Available on arXiv:1505.04597 [cs.CV]). Springer, 2015, pp. 234–241. URL: http://lmb.informatik.uni-freiburg.de/Publications/2015/RFB15a.

[6] Sara Moccia et al. “Automated Scar Segmentation From Cardiac Magnetic Resonance-Late Gadolinium Enhancement Images Using a Deep-Learning Approach”. In: Computing in Cardiology (CinC). Dec. 2018. DOI: 10.22489/CinC.2018.278.

[7] Kjersti Engan et al. “Segmentation of LG Enhanced Cardiac MRI”. In: Jan. 2015, pp. 47–55. DOI: 10.5220/0005169200470055.

[8] K. Engan et al. “Automatic segmentation of the epicardium in late gadolinium enhanced cardiac MR images”. In: Computing in Cardiology 2013. Sept. 2013, pp. 631–634.

[9] Fernand Meyer. “Topographic distance and watershed lines”. In: Signal Processing 38 (July 1994), pp. 113–125. DOI: 10.1016/0165-1684(94)90060-4.

[10] Pierre Soille. Morphological Image Analysis: Principles and Applications. English. 2nd ed., corrected. Berlin: Springer, 2004. ISBN: 3540429883.

[11] Brindles Lee Macon, Winnie Yu, and Lauren Reed-Guy. Acute Myocardial Infarction. URL: https://www.healthline.com/health/acute-myocardial-infarction (accessed June 18, 2019).

[12] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. http://www.deeplearningbook.org. MIT Press, 2016, pp. 12–20, 255–264.

[13] Olga Russakovsky et al. “ImageNet Large Scale Visual Recognition Challenge”. In: International Journal of Computer Vision (IJCV) 115.3 (2015), pp. 211–252. DOI: 10.1007/s11263-015-0816-y.

[14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. “ImageNet Classification with Deep Convolutional Neural Networks”. In: Neural Information Processing Systems 25 (Jan. 2012). DOI: 10.1145/3065386.

[15] Max Pixel. Dendrites Soma Axon Brain Nerve Neuron Cell. URL: https://www.maxpixel.net/Dendrites-Soma-Axon-Brain-Nerve-Neuron-Cell-1294021.

[16] Maximilian Riesenhuber and Tomaso Poggio. “Hierarchical models of object recognition in cortex”. In: Nature Neuroscience 2 (1999), pp. 1019–1025.

[17] Adam Paszke et al. “Automatic differentiation in PyTorch”. In: (2017).

[18] Carole H. Sudre et al. “Generalised Dice Overlap as a Deep Learning Loss Function for Highly Unbalanced Segmentations”. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Ed. by M. Jorge Cardoso et al. Cham: Springer International Publishing, 2017, pp. 240–248. ISBN: 978-3-319-67558-9.

[19] H. Leung and S. Haykin. “The complex backpropagation algorithm”. In: IEEE Transactions on Signal Processing 39.9 (Sept. 1991), pp. 2101–2104. ISSN: 1053-587X. DOI: 10.1109/78.134446.

[20] Diederik Kingma and Jimmy Ba. “Adam: A Method for Stochastic Optimization”. In: International Conference on Learning Representations (Dec. 2014).

[21] Sebastian Ruder. “An overview of gradient descent optimization algorithms”. In: CoRR abs/1609.04747 (2016). arXiv: 1609.04747. URL: http://arxiv.org/abs/1609.04747.

[22] Nitish Srivastava et al. “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”. In: Journal of Machine Learning Research 15 (2014), pp. 1929–1958. URL: http://jmlr.org/papers/v15/srivastava14a.html.

[23] Johan Bjorck, Carla P. Gomes, and Bart Selman. “Understanding Batch Normalization”. In: CoRR abs/1806.02375 (2018). arXiv: 1806.02375. URL: http://arxiv.org/abs/1806.02375.

[24] Sergey Ioffe and Christian Szegedy. “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”. In: CoRR abs/1502.03167 (2015). arXiv: 1502.03167. URL: http://arxiv.org/abs/1502.03167.

[25] Xiang Li et al. “Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift”. In: CoRR abs/1801.05134 (2018). arXiv: 1801.05134. URL: http://arxiv.org/abs/1801.05134.

[26] James Bergstra and Yoshua Bengio. “Random Search for Hyper-parameter Optimization”. In: J. Mach. Learn. Res. 13.1 (Feb. 2012), pp. 281–305. ISSN: 1532-4435. URL: http://dl.acm.org/citation.cfm?id=2503308.2188395.

[27] Peter I. Frazier. “A tutorial on Bayesian optimization”. In: arXiv preprint arXiv:1807.02811 (2018).

[28] Luis Perez and Jason Wang. “The Effectiveness of Data Augmentation in Image Classification using Deep Learning”. In: CoRR abs/1712.04621 (2017). arXiv: 1712.04621. URL: http://arxiv.org/abs/1712.04621.

[29] Sebastien C. Wong et al. “Understanding data augmentation for classification: when to warp?” In: CoRR abs/1609.08764 (2016). arXiv: 1609.08764. URL: http://arxiv.org/abs/1609.08764.

[30] MATLAB. Version 9.6.0.1062519 (R2019a). Natick, Massachusetts: The MathWorks Inc., 2019.

[31] Guido Van Rossum and Fred L. Drake Jr. Python Tutorial. Centrum voor Wiskunde en Informatica, Amsterdam, The Netherlands, 1995.

[32] Yoshua Bengio. “Practical recommendations for gradient-based training of deep architectures”. In: CoRR abs/1206.5533 (2012). arXiv: 1206.5533. URL: http://arxiv.org/abs/1206.5533.

[33] Sudhir Varma and Richard Simon. “Bias in Error Estimation When Using Cross-Validation for Model Selection”. In: BMC Bioinformatics 7 (Feb. 2006), p. 91. DOI: 10.1186/1471-2105-7-91.

Appendices

Appendix A