Malaria parasite segmentation using U-Net: Comparative study of loss functions


Julisa Bana Abraham

Abstract

Convolutional neural networks are commonly used for classification. However, they can also be used for semantic segmentation through the fully convolutional network approach. U-Net is one example of a fully convolutional architecture capable of producing accurate segmentations of biomedical images. This paper proposes using U-Net for Plasmodium segmentation on thin blood smear images. The evaluation shows that U-Net can accurately segment Plasmodium in thin blood smear images. In addition, this study compares three loss functions: mean squared error, binary cross-entropy, and Huber loss. The results show that Huber loss yields the best testing metrics: an F1 score of 0.9297, positive predictive value (PPV) of 0.9715, sensitivity (SE) of 0.8957, and relative segmentation accuracy (RSA) of 0.9096.
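To make the comparison concrete, the sketch below shows how the three candidate loss functions could be set up in Keras and how pixel-wise F1, PPV, and SE might be computed from binary parasite masks. This is a minimal illustration, not the paper's actual code: the Huber delta of 1.0, the 0.5 binarization threshold, and the helper names are assumptions, and the U-Net construction and RSA computation are omitted.

```python
import numpy as np
import tensorflow as tf

# Candidate loss functions compared in the study, using standard Keras classes.
# The Huber delta of 1.0 is an assumption; the paper's exact setting may differ.
losses = {
    "mse": tf.keras.losses.MeanSquaredError(),
    "bce": tf.keras.losses.BinaryCrossentropy(),
    "huber": tf.keras.losses.Huber(delta=1.0),
}

def segmentation_metrics(y_true, y_pred, threshold=0.5):
    """Pixel-wise F1, PPV, and sensitivity for a binary Plasmodium mask.

    y_true and y_pred are arrays of per-pixel probabilities or labels;
    both are binarized at `threshold` before counting.
    """
    y_true = (np.asarray(y_true) > threshold).astype(np.float32)
    y_pred = (np.asarray(y_pred) > threshold).astype(np.float32)
    tp = np.sum(y_true * y_pred)            # true positive pixels
    fp = np.sum((1.0 - y_true) * y_pred)    # false positive pixels
    fn = np.sum(y_true * (1.0 - y_pred))    # false negative pixels
    eps = 1e-7                              # guard against empty masks
    ppv = tp / (tp + fp + eps)              # positive predictive value (precision)
    se = tp / (tp + fn + eps)               # sensitivity (recall)
    f1 = 2.0 * ppv * se / (ppv + se + eps)
    return f1, ppv, se

# Illustrative usage: one U-Net instance would be trained per loss function
# (model construction omitted) and evaluated on held-out thin smear patches:
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss=losses["huber"])
```

The loop over `losses` keeps the architecture and training schedule fixed, so any difference in the reported test metrics can be attributed to the choice of loss function alone.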


Article Details

How to Cite
Abraham, J. B. (2019). Malaria parasite segmentation using U-Net: Comparative study of loss functions. Communications in Science and Technology, 4(2), 57-62. https://doi.org/10.21924/cst.4.2.2019.128
